Standard

Evaluating Intelligent Knowledge Systems: (Article Abstract). / Yorke-Smith, Neil.

BNAIC 2017 pre-proceedings: 29th Benelux Conference on Artificial Intelligence. ed. / Bart Verheij; Marco Wiering. 2017. p. 344-345.

Research output: Scientific - peer-review › Conference contribution

Harvard

Yorke-Smith, N 2017, Evaluating Intelligent Knowledge Systems: (Article Abstract). in B Verheij & M Wiering (eds), BNAIC 2017 pre-proceedings: 29th Benelux Conference on Artificial Intelligence. pp. 344-345, BNAIC 2017, Groningen, Netherlands, 8/11/17.

APA

Yorke-Smith, N. (2017). Evaluating Intelligent Knowledge Systems: (Article Abstract). In B. Verheij & M. Wiering (Eds.), BNAIC 2017 pre-proceedings: 29th Benelux Conference on Artificial Intelligence (pp. 344-345).

Vancouver

Yorke-Smith N. Evaluating Intelligent Knowledge Systems: (Article Abstract). In Verheij B, Wiering M, editors, BNAIC 2017 pre-proceedings: 29th Benelux Conference on Artificial Intelligence. 2017. p. 344-345.

Author

Yorke-Smith, Neil. / Evaluating Intelligent Knowledge Systems: (Article Abstract). BNAIC 2017 pre-proceedings: 29th Benelux Conference on Artificial Intelligence. editor / Bart Verheij ; Marco Wiering. 2017. pp. 344-345

BibTeX

@inproceedings{855579529fd7447ba852f6cb7340ef57,
title = "Evaluating Intelligent Knowledge Systems: (Article Abstract)",
abstract = "The article published in Knowledge and Information Systems examines the evaluation of a user-adaptive personal assistant agent designed to assist a busy knowledge worker in time management. The article examines the managerial and technical challenges of designing adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. The PTIME agent was part of the CALO project, a seminal multi-institution effort to develop a personalized cognitive assistant. The project included a significant attempt to rigorously quantify learning capability, which the article discusses for the first time, and ultimately the project led to multiple spin-outs including Siri. Retrospection on negative and positive experiences over the six years of the project underscores best practice in evaluating user-adaptive systems. Through the lessons illustrated from the case study of intelligent knowledge system evaluation, the article highlights how development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.",
author = "Neil Yorke-Smith",
year = "2017",
month = "11",
pages = "344--345",
editor = "Bart Verheij and Marco Wiering",
booktitle = "BNAIC 2017 pre-proceedings",

}

RIS

TY - CHAP

T1 - Evaluating Intelligent Knowledge Systems

T2 - (Article Abstract)

AU - Yorke-Smith, Neil

PY - 2017/11

Y1 - 2017/11

N2 - The article published in Knowledge and Information Systems examines the evaluation of a user-adaptive personal assistant agent designed to assist a busy knowledge worker in time management. The article examines the managerial and technical challenges of designing adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. The PTIME agent was part of the CALO project, a seminal multi-institution effort to develop a personalized cognitive assistant. The project included a significant attempt to rigorously quantify learning capability, which the article discusses for the first time, and ultimately the project led to multiple spin-outs including Siri. Retrospection on negative and positive experiences over the six years of the project underscores best practice in evaluating user-adaptive systems. Through the lessons illustrated from the case study of intelligent knowledge system evaluation, the article highlights how development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.

AB - The article published in Knowledge and Information Systems examines the evaluation of a user-adaptive personal assistant agent designed to assist a busy knowledge worker in time management. The article examines the managerial and technical challenges of designing adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. The PTIME agent was part of the CALO project, a seminal multi-institution effort to develop a personalized cognitive assistant. The project included a significant attempt to rigorously quantify learning capability, which the article discusses for the first time, and ultimately the project led to multiple spin-outs including Siri. Retrospection on negative and positive experiences over the six years of the project underscores best practice in evaluating user-adaptive systems. Through the lessons illustrated from the case study of intelligent knowledge system evaluation, the article highlights how development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.

UR - http://resolver.tudelft.nl/uuid:85557952-9fd7-447b-a852-f6cb7340ef57

M3 - Conference contribution

SP - 344

EP - 345

BT - BNAIC 2017 pre-proceedings

ER -
