Standard

‘Computer says no’ is not enough: Using prototypical examples to diagnose artificial neural networks for discrete choice analysis. / Alwosheel, Ahmad; van Cranenburgh, Sander; Chorus, Caspar G.

In: Journal of Choice Modelling, Vol. 33, 100186, 01.12.2019.

Research output: Contribution to journal › Article › Scientific › peer-review

BibTeX

@article{031f7df0a2424d09b6591355eeae7e05,
title = "‘Computer says no’ is not enough: Using prototypical examples to diagnose artificial neural networks for discrete choice analysis",
abstract = "Artificial Neural Networks (ANNs) are increasingly used for discrete choice analysis, being appreciated in particular for their strong predictive power. However, many choice modellers are critical – and rightfully so – about using ANNs, for the reason that they are hard to diagnose. That is, for analysts it is hard to see whether a trained (estimated) ANN has learned intuitively reasonable relationships, as opposed to spurious, inexplicable or otherwise undesirable ones. As a result, choice modellers often find it difficult to trust an ANN, even if its predictive performance is strong. Inspired by research from the field of computer vision, this paper pioneers a low-cost and easy-to-implement methodology to diagnose ANNs in the context of choice behaviour analysis. The method involves synthesising prototypical examples after having trained the ANN. These prototypical examples expose the fundamental relationships that the ANN has learned. These, in turn, can be evaluated by the analyst to see whether they make sense and are desirable, or not. In this paper we show how to use such prototypical examples in the context of choice data and we discuss practical considerations for successfully diagnosing ANNs. Furthermore, we cross-validate our findings using techniques from traditional discrete choice analysis. Our results suggest that the proposed method helps build trust in well-functioning ANNs, and is able to flag poorly trained ANNs. As such, it helps choice modellers use ANNs for choice behaviour analysis in a more reliable and effective way.",
author = "Alwosheel, Ahmad and {van Cranenburgh}, Sander and Chorus, {Caspar G.}",
year = "2019",
month = dec,
day = "1",
doi = "10.1016/j.jocm.2019.100186",
language = "English",
volume = "33",
journal = "Journal of Choice Modelling",
issn = "1755-5345",
publisher = "Elsevier",
}
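The abstract's core idea — synthesising a prototypical example after training by adjusting a candidate input so that the network's output for a chosen alternative is maximised, then inspecting whether the result is behaviourally plausible — can be illustrated with a toy stand-in model. The hand-specified binary logit utilities, taste parameters, and gradient-ascent settings below are illustrative assumptions, not the paper's implementation:

```python
import math

# Hypothetical "trained" model standing in for an ANN: a binary logit
# with linear utilities in (cost, time) and negative taste parameters.
BETA_COST, BETA_TIME = -2.0, -1.0

def choice_prob(x):
    """P(alternative 1) for x = [cost1, time1, cost2, time2]."""
    v1 = BETA_COST * x[0] + BETA_TIME * x[1]
    v2 = BETA_COST * x[2] + BETA_TIME * x[3]
    return 1.0 / (1.0 + math.exp(v2 - v1))

def grad(x, eps=1e-5):
    """Numerical gradient of P(alt 1) w.r.t. the input attributes."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((choice_prob(xp) - choice_prob(xm)) / (2 * eps))
    return g

def prototypical_example(x0, steps=200, lr=0.1):
    """Gradient-ascend the input attributes to maximise P(alt 1)."""
    x = list(x0)
    for _ in range(steps):
        x = [xi + lr * gi for xi, gi in zip(x, grad(x))]
    return x

x0 = [1.0, 1.0, 1.0, 1.0]      # symmetric starting alternatives
proto = prototypical_example(x0)
```

Here the synthesised example makes alternative 1 cheaper and faster while alternative 2 becomes dearer and slower — the kind of intuitively reasonable relationship the diagnosis described in the abstract looks for; a prototypical example that moved attributes in implausible directions would flag a poorly trained network.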

RIS

TY - JOUR

T1 - ‘Computer says no’ is not enough: Using prototypical examples to diagnose artificial neural networks for discrete choice analysis

T2 - Journal of Choice Modelling

AU - Alwosheel, Ahmad

AU - van Cranenburgh, Sander

AU - Chorus, Caspar G.

PY - 2019/12/1

Y1 - 2019/12/1

AB - Artificial Neural Networks (ANNs) are increasingly used for discrete choice analysis, being appreciated in particular for their strong predictive power. However, many choice modellers are critical – and rightfully so – about using ANNs, for the reason that they are hard to diagnose. That is, for analysts it is hard to see whether a trained (estimated) ANN has learned intuitively reasonable relationships, as opposed to spurious, inexplicable or otherwise undesirable ones. As a result, choice modellers often find it difficult to trust an ANN, even if its predictive performance is strong. Inspired by research from the field of computer vision, this paper pioneers a low-cost and easy-to-implement methodology to diagnose ANNs in the context of choice behaviour analysis. The method involves synthesising prototypical examples after having trained the ANN. These prototypical examples expose the fundamental relationships that the ANN has learned. These, in turn, can be evaluated by the analyst to see whether they make sense and are desirable, or not. In this paper we show how to use such prototypical examples in the context of choice data and we discuss practical considerations for successfully diagnosing ANNs. Furthermore, we cross-validate our findings using techniques from traditional discrete choice analysis. Our results suggest that the proposed method helps build trust in well-functioning ANNs, and is able to flag poorly trained ANNs. As such, it helps choice modellers use ANNs for choice behaviour analysis in a more reliable and effective way.

UR - http://www.scopus.com/inward/record.url?scp=85073168340&partnerID=8YFLogxK

DO - 10.1016/j.jocm.2019.100186

M3 - Article

VL - 33

JO - Journal of Choice Modelling

JF - Journal of Choice Modelling

SN - 1755-5345

M1 - 100186

ER -
