
Learning performance can show non-monotonic behavior: more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where non-monotone behavior occurs. Our proposed algorithm MTHT makes fewer than 1% non-monotone decisions on MNIST while remaining competitive in error rate with several baselines. Our code is available at https://github.com/tomviering/monotone.
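The abstract describes wrapper algorithms that make a learner's performance curve more monotone. As a hedged illustration only (not the paper's MTHT algorithm, which uses a hypothesis test), the sketch below shows the general idea on a toy problem: a wrapper that accepts a newly trained model only if it is at least as good as the current model on a held-out validation set, so the validation error of the deployed model never increases. All names here (`train_threshold`, `monotone_wrapper`) are hypothetical.

```python
import random

def train_threshold(data):
    """Toy learner: fit a 1-D threshold classifier (predict 1 iff x >= t)
    by picking the threshold with minimum training error."""
    xs = sorted({x for x, _ in data})
    candidates = xs + [xs[-1] + 1.0]  # last candidate predicts all-zeros
    best_t, best_err = xs[0], float("inf")
    for t in candidates:
        err = sum((x >= t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def error(t, data):
    """Classification error of threshold t on a dataset."""
    return sum((x >= t) != y for x, y in data) / len(data)

def monotone_wrapper(stream, val):
    """Yield one model per incoming batch. A newly trained candidate
    replaces the current model only if its validation error is no worse,
    so the yielded models' validation errors are non-increasing."""
    current, seen = None, []
    for batch in stream:
        seen.extend(batch)
        candidate = train_threshold(seen)
        if current is None or error(candidate, val) <= error(current, val):
            current = candidate
        yield current

# Synthetic 1-D data with 20% label noise (hypothetical setup).
random.seed(0)
def sample(n):
    pts = []
    for _ in range(n):
        x = random.uniform(0.0, 1.0)
        y = int(x >= 0.5)
        if random.random() < 0.2:
            y = 1 - y
        pts.append((x, y))
    return pts

val = sample(200)
stream = [sample(10) for _ in range(20)]
val_errors = [error(m, val) for m in monotone_wrapper(stream, val)]
```

By construction the sequence `val_errors` is non-increasing, which is the monotonicity guarantee on the validation set; the paper's algorithms additionally address monotonicity of the true (test) error, which this simple acceptance rule does not guarantee.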

Original language: English
Title of host publication: Advances in Intelligent Data Analysis XVIII - 18th International Symposium on Intelligent Data Analysis, IDA 2020, Proceedings
Editors: Michael R. Berthold, Ad Feelders, Georg Krempl
Place of publication: Cham
Publisher: Springer Open
Pages: 535-547
Number of pages: 13
ISBN (Electronic): 978-3-030-44584-3
ISBN (Print): 978-3-030-44583-6
Publication status: Published - 2020
Event: 18th International Conference on Intelligent Data Analysis, IDA 2020 - Konstanz, Germany
Duration: 27 Apr 2020 - 29 Apr 2020
Conference number: 18

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12080
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 18th International Conference on Intelligent Data Analysis, IDA 2020
Abbreviated title: IDA 2020
Country: Germany
City: Konstanz
Period: 27/04/20 - 29/04/20
Other: Virtual/online event due to COVID-19

Research areas

• Learning curve, Learning theory, Model selection

ID: 72979658