Standard

Describing Data Processing Pipelines in Scientific Publications for Big Data Injection. / Mesbah, Sepideh; Bozzon, Alessandro; Lofi, Christoph; Houben, Geert-Jan.

SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining. Cambridge, UK : Association for Computing Machinery (ACM), 2017. p. 1-8.

Research output: Scientific - peer-review › Conference contribution

Harvard

Mesbah, S, Bozzon, A, Lofi, C & Houben, G-J 2017, Describing Data Processing Pipelines in Scientific Publications for Big Data Injection. in SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining. Association for Computing Machinery (ACM), Cambridge, UK, pp. 1-8, Workshop on Scholarly Web Mining, Cambridge, United Kingdom, 10/02/17. DOI: 10.1145/3057148.3057149

APA

Mesbah, S., Bozzon, A., Lofi, C., & Houben, G-J. (2017). Describing Data Processing Pipelines in Scientific Publications for Big Data Injection. In SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining (pp. 1-8). Cambridge, UK: Association for Computing Machinery (ACM). DOI: 10.1145/3057148.3057149

Vancouver

Mesbah S, Bozzon A, Lofi C, Houben G-J. Describing Data Processing Pipelines in Scientific Publications for Big Data Injection. In SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining. Cambridge, UK: Association for Computing Machinery (ACM). 2017. p. 1-8. Available from: https://doi.org/10.1145/3057148.3057149

Author

Mesbah, Sepideh ; Bozzon, Alessandro ; Lofi, Christoph ; Houben, Geert-Jan. / Describing Data Processing Pipelines in Scientific Publications for Big Data Injection. SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining. Cambridge, UK : Association for Computing Machinery (ACM), 2017. pp. 1-8

BibTeX

@inproceedings{71c4c6f4a5b54b28a4b5fe59b8adb674,
title = "Describing Data Processing Pipelines in Scientific Publications for Big Data Injection",
abstract = "The rise of Big Data analytics has been a disruptive game changer for many application domains, allowing the integration into domain-specific applications and systems of insights and knowledge extracted from external big data sets. The effective ``injection'' of external Big Data demands an understanding of the properties of available data sets, and expertise on the available and most suitable methods for data collection, enrichment and analysis. A prominent knowledge source is scientific literature, where data processing pipelines are described, discussed, and evaluated. Such knowledge is however not readily accessible, due to its distributed and unstructured nature. In this paper, we propose a novel ontology aimed at modeling properties of data processing pipelines, and their related artifacts, as described in scientific publications. The ontology is the result of a requirement analysis that involved experts from both academia and industry. We showcase the effectiveness of our ontology by manually applying it to a collection of publications describing data processing methods.",
keywords = "Ontology, Document structure, Digital Libraries and archives",
author = "Sepideh Mesbah and Alessandro Bozzon and Christoph Lofi and Geert-Jan Houben",
year = "2017",
doi = "10.1145/3057148.3057149",
pages = "1--8",
booktitle = "SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining",
publisher = "Association for Computing Machinery (ACM)",
address = "United States"
}

RIS

TY - CONF

T1 - Describing Data Processing Pipelines in Scientific Publications for Big Data Injection

AU - Mesbah, Sepideh

AU - Bozzon, Alessandro

AU - Lofi, Christoph

AU - Houben, Geert-Jan

PY - 2017

Y1 - 2017

N2 - The rise of Big Data analytics has been a disruptive game changer for many application domains, allowing the integration into domain-specific applications and systems of insights and knowledge extracted from external big data sets. The effective "injection" of external Big Data demands an understanding of the properties of available data sets, and expertise on the available and most suitable methods for data collection, enrichment and analysis. A prominent knowledge source is scientific literature, where data processing pipelines are described, discussed, and evaluated. Such knowledge is however not readily accessible, due to its distributed and unstructured nature. In this paper, we propose a novel ontology aimed at modeling properties of data processing pipelines, and their related artifacts, as described in scientific publications. The ontology is the result of a requirement analysis that involved experts from both academia and industry. We showcase the effectiveness of our ontology by manually applying it to a collection of publications describing data processing methods.

AB - The rise of Big Data analytics has been a disruptive game changer for many application domains, allowing the integration into domain-specific applications and systems of insights and knowledge extracted from external big data sets. The effective "injection" of external Big Data demands an understanding of the properties of available data sets, and expertise on the available and most suitable methods for data collection, enrichment and analysis. A prominent knowledge source is scientific literature, where data processing pipelines are described, discussed, and evaluated. Such knowledge is however not readily accessible, due to its distributed and unstructured nature. In this paper, we propose a novel ontology aimed at modeling properties of data processing pipelines, and their related artifacts, as described in scientific publications. The ontology is the result of a requirement analysis that involved experts from both academia and industry. We showcase the effectiveness of our ontology by manually applying it to a collection of publications describing data processing methods.

KW - Ontology

KW - Document structure

KW - Digital Libraries and archives

UR - https://ornlcda.github.io/SWM2017/

U2 - 10.1145/3057148.3057149

DO - 10.1145/3057148.3057149

M3 - Conference contribution

SP - 1

EP - 8

BT - SWM'17 Proceedings of the 1st Workshop on Scholarly Web Mining

PB - Association for Computing Machinery (ACM)

ER -

ID: 18491789