Standard

Dissimilarity-based ensembles for multiple instance learning. / Cheplygina, VV; Tax, DMJ; Loog, M.

In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 27, No. 6, 2016, p. 1379-1391.

Research output: Contribution to journal › Article › Scientific › peer-review

Harvard

Cheplygina, VV, Tax, DMJ & Loog, M 2016, 'Dissimilarity-based ensembles for multiple instance learning', IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 6, pp. 1379-1391. https://doi.org/10.1109/TNNLS.2015.2424254

APA

Cheplygina, V. V., Tax, D. M. J., & Loog, M. (2016). Dissimilarity-based ensembles for multiple instance learning. IEEE Transactions on Neural Networks and Learning Systems, 27(6), 1379-1391. https://doi.org/10.1109/TNNLS.2015.2424254

Vancouver

Cheplygina VV, Tax DMJ, Loog M. Dissimilarity-based ensembles for multiple instance learning. IEEE Transactions on Neural Networks and Learning Systems. 2016;27(6):1379-1391. https://doi.org/10.1109/TNNLS.2015.2424254

Author

Cheplygina, VV ; Tax, DMJ ; Loog, M. / Dissimilarity-based ensembles for multiple instance learning. In: IEEE Transactions on Neural Networks and Learning Systems. 2016 ; Vol. 27, No. 6. pp. 1379-1391.

BibTeX

@article{1629ec1047ad4ba782420527c26c46fe,
title = "Dissimilarity-based ensembles for multiple instance learning",
abstract = "In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors. In this paper, we address the problem of how these bags can best be represented. Two standard approaches are to use (dis)similarities between bags and prototype bags, or between bags and prototype instances. The first approach results in a relatively low-dimensional representation, determined by the number of training bags, whereas the second approach results in a relatively high-dimensional representation, determined by the total number of instances in the training set. However, an advantage of the latter representation is that the informativeness of the prototype instances can be inferred. In this paper, a third, intermediate approach is proposed, which links the two approaches and combines their strengths. Our classifier is inspired by a random subspace ensemble, and considers subspaces of the dissimilarity space, defined by subsets of instances, as prototypes. We provide insight into the structure of some popular multiple instance problems and show state-of-the-art performances on these data sets.",
keywords = "Combining classifiers, dissimilarity representation, multiple instance learning (MIL), random subspace method (RSM)",
author = "VV Cheplygina and DMJ Tax and M Loog",
note = "harvest",
year = "2016",
doi = "10.1109/TNNLS.2015.2424254",
language = "English",
volume = "27",
pages = "1379--1391",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "IEEE Computational Intelligence Society",
number = "6",

}

RIS

TY - JOUR

T1 - Dissimilarity-based ensembles for multiple instance learning

AU - Cheplygina, VV

AU - Tax, DMJ

AU - Loog, M

N1 - harvest

PY - 2016

Y1 - 2016

N2 - In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors. In this paper, we address the problem of how these bags can best be represented. Two standard approaches are to use (dis)similarities between bags and prototype bags, or between bags and prototype instances. The first approach results in a relatively low-dimensional representation, determined by the number of training bags, whereas the second approach results in a relatively high-dimensional representation, determined by the total number of instances in the training set. However, an advantage of the latter representation is that the informativeness of the prototype instances can be inferred. In this paper, a third, intermediate approach is proposed, which links the two approaches and combines their strengths. Our classifier is inspired by a random subspace ensemble, and considers subspaces of the dissimilarity space, defined by subsets of instances, as prototypes. We provide insight into the structure of some popular multiple instance problems and show state-of-the-art performances on these data sets.

AB - In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors. In this paper, we address the problem of how these bags can best be represented. Two standard approaches are to use (dis)similarities between bags and prototype bags, or between bags and prototype instances. The first approach results in a relatively low-dimensional representation, determined by the number of training bags, whereas the second approach results in a relatively high-dimensional representation, determined by the total number of instances in the training set. However, an advantage of the latter representation is that the informativeness of the prototype instances can be inferred. In this paper, a third, intermediate approach is proposed, which links the two approaches and combines their strengths. Our classifier is inspired by a random subspace ensemble, and considers subspaces of the dissimilarity space, defined by subsets of instances, as prototypes. We provide insight into the structure of some popular multiple instance problems and show state-of-the-art performances on these data sets.

KW - Combining classifiers

KW - dissimilarity representation

KW - multiple instance learning (MIL)

KW - random subspace method (RSM)

U2 - 10.1109/TNNLS.2015.2424254

DO - 10.1109/TNNLS.2015.2424254

M3 - Article

VL - 27

SP - 1379

EP - 1391

JO - IEEE Transactions on Neural Networks and Learning Systems

T2 - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 6

ER -
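
Code sketch

The abstract describes representing each bag by its dissimilarities to prototype instances and then training a random subspace ensemble over subsets of those dissimilarity dimensions. Below is a minimal, hypothetical Python sketch of that general idea, not the authors' reference implementation: the min-distance bag-to-instance dissimilarity, the use of all training instances as prototype candidates, and the logistic-regression base classifier are all assumptions made for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression


def bag_dissimilarities(bags, prototypes):
    """Dissimilarity representation: one row per bag, one column per prototype instance."""
    D = np.empty((len(bags), len(prototypes)))
    for i, bag in enumerate(bags):
        # Assumed dissimilarity: minimum Euclidean distance from any
        # instance in the bag to each prototype instance.
        dists = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
        D[i] = dists.min(axis=0)
    return D


class DissimilaritySubspaceEnsemble:
    """Random subspace ensemble over an instance-based dissimilarity space."""

    def __init__(self, n_subspaces=50, subspace_size=20, seed=0):
        self.n_subspaces = n_subspaces
        self.subspace_size = subspace_size
        self.rng = np.random.default_rng(seed)

    def fit(self, bags, labels):
        # Use all training instances as prototype candidates (assumption).
        self.prototypes_ = np.vstack(bags)
        D = bag_dissimilarities(bags, self.prototypes_)
        size = min(self.subspace_size, D.shape[1])
        self.members_ = []
        for _ in range(self.n_subspaces):
            # Each ensemble member sees a random subset of prototype columns.
            idx = self.rng.choice(D.shape[1], size=size, replace=False)
            clf = LogisticRegression(max_iter=1000).fit(D[:, idx], labels)
            self.members_.append((idx, clf))
        return self

    def predict_proba(self, bags):
        D = bag_dissimilarities(bags, self.prototypes_)
        # Average the posteriors of the subspace classifiers.
        return np.mean([clf.predict_proba(D[:, idx]) for idx, clf in self.members_], axis=0)


# Toy usage: 40 random bags of 5-dimensional instances, half per class.
# bags = [np.random.randn(np.random.randint(3, 8), 5) for _ in range(40)]
# labels = np.array([0] * 20 + [1] * 20)
# posteriors = DissimilaritySubspaceEnsemble().fit(bags, labels).predict_proba(bags)

Averaging posteriors over many small random subspaces keeps each base classifier in a low-dimensional space while still drawing on the full, instance-determined dissimilarity representation, which is the trade-off the abstract highlights between the bag-level and instance-level representations.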

ID: 1282467