Standard

Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants. / Roy, S.; Hermans, F.; Aivaloglou, E.; Winter, J.; van Deursen, Arie.

2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER). ed. / A. Jiu. Vol. 2. Los Alamitos, CA: IEEE Society, 2016. p. 135-145.

Research output: Scientific - peer-review › Conference contribution

Harvard

Roy, S, Hermans, F, Aivaloglou, E, Winter, J & van Deursen, A 2016, Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants. in A Jiu (ed.), 2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER). vol. 2, IEEE Society, Los Alamitos, CA, pp. 135-145, SANER 2016, Osaka, Japan, 14-18 March. DOI: 10.1109/SANER.2016.98

APA

Roy, S., Hermans, F., Aivaloglou, E., Winter, J., & van Deursen, A. (2016). Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants. In A. Jiu (Ed.), 2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER). (Vol. 2, pp. 135-145). Los Alamitos, CA: IEEE Society. DOI: 10.1109/SANER.2016.98

Vancouver

Roy S, Hermans F, Aivaloglou E, Winter J, van Deursen A. Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants. In Jiu A, editor, 2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER). Vol. 2. Los Alamitos, CA: IEEE Society. 2016. p. 135-145. Available from: https://doi.org/10.1109/SANER.2016.98

Author

Roy, S.; Hermans, F.; Aivaloglou, E.; Winter, J.; van Deursen, Arie / Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants.

2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER). ed. / A. Jiu. Vol. 2. Los Alamitos, CA: IEEE Society, 2016. p. 135-145.

Research output: Scientific - peer-review › Conference contribution

BibTeX

@inproceedings{34045a3402db44418846b9f094680d7d,
title = "Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants",
keywords = "computer aided instruction, meta data, personal computing, spreadsheet programs, text analysis, MOOC participants, automatic spreadsheet metadata extraction, automatic textual documentation generation, data structure, end-user computing applications, metadata user perception, spreadsheet patterns, Computers, Conferences, Data mining, Documentation, Metadata, Reliability, Software, Empirical evaluation, MOOC, Meta-data extraction, Spreadsheet, User-study",
author = "S. Roy and F. Hermans and E. Aivaloglou and J. Winter and {van Deursen}, Arie",
year = "2016",
doi = "10.1109/SANER.2016.98",
isbn = "978-1-5090-1855-0",
volume = "2",
pages = "135--145",
editor = "A. Jiu",
booktitle = "2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER)",
publisher = "IEEE Society",
}

RIS

TY - CONF

T1 - Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants

AU - Roy, S.

AU - Hermans, F.

AU - Aivaloglou, E.

AU - Winter, J.

AU - van Deursen, Arie

PY - 2016

Y1 - 2016

N2 - Spreadsheets are popular end-user computing applications, and one reason behind their popularity is the large degree of freedom they offer users in structuring their data. However, this flexibility also makes spreadsheets difficult to understand. Textual documentation can address this issue, but automatic generation of such documentation requires first extracting the metadata contained in spreadsheets. Distinguishing data from metadata is challenging, however, because spreadsheets lack universally accepted structural patterns. Two existing approaches for automatic extraction of spreadsheet metadata have not been evaluated on large datasets of user inputs. In this paper, we therefore describe the collection of a large number of user responses on the identification of spreadsheet metadata from participants of a MOOC. We use this large dataset to understand how users identify metadata in spreadsheets and to evaluate the two existing approaches for automatic metadata extraction from spreadsheets. The insights gained into user perception of metadata provide directions for improving metadata extraction approaches. We also determine on which types of spreadsheet patterns the existing approaches perform well and on which they perform poorly, and thus which problem areas to focus on for improvement.

AB - Spreadsheets are popular end-user computing applications, and one reason behind their popularity is the large degree of freedom they offer users in structuring their data. However, this flexibility also makes spreadsheets difficult to understand. Textual documentation can address this issue, but automatic generation of such documentation requires first extracting the metadata contained in spreadsheets. Distinguishing data from metadata is challenging, however, because spreadsheets lack universally accepted structural patterns. Two existing approaches for automatic extraction of spreadsheet metadata have not been evaluated on large datasets of user inputs. In this paper, we therefore describe the collection of a large number of user responses on the identification of spreadsheet metadata from participants of a MOOC. We use this large dataset to understand how users identify metadata in spreadsheets and to evaluate the two existing approaches for automatic metadata extraction from spreadsheets. The insights gained into user perception of metadata provide directions for improving metadata extraction approaches. We also determine on which types of spreadsheet patterns the existing approaches perform well and on which they perform poorly, and thus which problem areas to focus on for improvement.

KW - computer aided instruction

KW - meta data

KW - personal computing

KW - spreadsheet programs

KW - text analysis

KW - MOOC participants

KW - automatic spreadsheet metadata extraction

KW - automatic textual documentation generation

KW - data structure

KW - end-user computing applications

KW - metadata user perception

KW - spreadsheet patterns

KW - Computers

KW - Conferences

KW - Data mining

KW - Documentation

KW - Metadata

KW - Reliability

KW - Software

KW - Empirical evaluation

KW - MOOC

KW - Meta-data extraction

KW - Spreadsheet

KW - User-study

U2 - 10.1109/SANER.2016.98

DO - 10.1109/SANER.2016.98

M3 - Conference contribution

SN - 978-1-5090-1855-0

VL - 2

SP - 135

EP - 145

BT - 2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER)

PB - IEEE Society

ER -

ID: 7607036