Standard

Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent. / Berry, Pauline M.; Donneau-Golencer, Thierry; Duong, Khang; Gervasio, Melinda; Peintner, Bart; Yorke-Smith, Neil.

In: Knowledge and Information Systems, Vol. 52, No. 2, 08.2017, p. 379-409.

Research output: Scientific - peer-review › Article

Harvard

Berry, PM, Donneau-Golencer, T, Duong, K, Gervasio, M, Peintner, B & Yorke-Smith, N 2017, 'Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent', Knowledge and Information Systems, vol. 52, no. 2, pp. 379-409. DOI: 10.1007/s10115-016-1011-3

APA

Berry, P. M., Donneau-Golencer, T., Duong, K., Gervasio, M., Peintner, B., & Yorke-Smith, N. (2017). Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent. Knowledge and Information Systems, 52(2), 379-409. DOI: 10.1007/s10115-016-1011-3

Vancouver

Berry PM, Donneau-Golencer T, Duong K, Gervasio M, Peintner B, Yorke-Smith N. Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent. Knowledge and Information Systems. 2017 Aug;52(2):379-409. DOI: 10.1007/s10115-016-1011-3

Author

Berry, Pauline M.; Donneau-Golencer, Thierry; Duong, Khang; Gervasio, Melinda; Peintner, Bart; Yorke-Smith, Neil / Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent.

In: Knowledge and Information Systems, Vol. 52, No. 2, 08.2017, p. 379-409.

Research output: Scientific - peer-review › Article

BibTeX

@article{671828e02ff245ff892d290cead36f4c,
title = "Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent",
keywords = "Technology evaluation, Knowledge-based systems, Personal assistant agent, User-adaptive, Time management, CALO project",
author = "Berry, {Pauline M.} and Thierry Donneau-Golencer and Khang Duong and Melinda Gervasio and Bart Peintner and Neil Yorke-Smith",
year = "2017",
month = "8",
doi = "10.1007/s10115-016-1011-3",
volume = "52",
pages = "379--409",
journal = "Knowledge and Information Systems",
issn = "0219-1377",
publisher = "Springer-Verlag London",
number = "2",
}

RIS

TY - JOUR

T1 - Evaluating Intelligent Knowledge Systems: Experiences with a user-adaptive assistant agent

T2 - Knowledge and Information Systems

AU - Berry, Pauline M.

AU - Donneau-Golencer, Thierry

AU - Duong, Khang

AU - Gervasio, Melinda

AU - Peintner, Bart

AU - Yorke-Smith, Neil

PY - 2017/8

Y1 - 2017/8

N2 - This article examines experiences in evaluating a user-adaptive personal assistant agent designed to assist a busy knowledge worker in time management. We examine the managerial and technical challenges of designing adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. The CALO project was a seminal multi-institution effort to develop a personalized cognitive assistant. It included a significant attempt to rigorously quantify learning capability, which this article discusses for the first time, and ultimately the project led to multiple spin-outs including Siri. Retrospection on negative and positive experiences over the 6 years of the project underscores best practice in evaluating user-adaptive systems. Lessons for knowledge system evaluation include: the interests of multiple stakeholders, early consideration of evaluation and deployment, layered evaluation at system and component levels, characteristics of technology and domains that determine the appropriateness of controlled evaluations, implications of ‘in-the-wild’ versus variations of ‘in-the-lab’ evaluation, and the effect of technology-enabled functionality and its impact upon existing tools and work practices. In the conclusion, we discuss—through the lessons illustrated from this case study of intelligent knowledge system evaluation—how development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.

AB - This article examines experiences in evaluating a user-adaptive personal assistant agent designed to assist a busy knowledge worker in time management. We examine the managerial and technical challenges of designing adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. The CALO project was a seminal multi-institution effort to develop a personalized cognitive assistant. It included a significant attempt to rigorously quantify learning capability, which this article discusses for the first time, and ultimately the project led to multiple spin-outs including Siri. Retrospection on negative and positive experiences over the 6 years of the project underscores best practice in evaluating user-adaptive systems. Lessons for knowledge system evaluation include: the interests of multiple stakeholders, early consideration of evaluation and deployment, layered evaluation at system and component levels, characteristics of technology and domains that determine the appropriateness of controlled evaluations, implications of ‘in-the-wild’ versus variations of ‘in-the-lab’ evaluation, and the effect of technology-enabled functionality and its impact upon existing tools and work practices. In the conclusion, we discuss—through the lessons illustrated from this case study of intelligent knowledge system evaluation—how development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.

KW - Technology evaluation

KW - Knowledge-based systems

KW - Personal assistant agent

KW - User-adaptive

KW - Time management

KW - CALO project

U2 - 10.1007/s10115-016-1011-3

DO - 10.1007/s10115-016-1011-3

M3 - Article

VL - 52

SP - 379

EP - 409

JO - Knowledge and Information Systems

JF - Knowledge and Information Systems

SN - 0219-1377

IS - 2

ER -
