Summary of the paper

Title: DBpedia Abstracts: A Large-Scale, Open, Multilingual NLP Training Corpus
Authors: Martin Brümmer, Milan Dojchinovski and Sebastian Hellmann
Abstract: The ever-increasing importance of machine learning in Natural Language Processing is accompanied by an equally increasing need for large-scale training and evaluation corpora. Due to its size, openness, and relative quality, Wikipedia has already served as a source of such data, but only on a limited scale. This paper introduces the DBpedia Abstract Corpus, a large-scale, open corpus of annotated Wikipedia texts in six languages, featuring over 11 million texts and over 97 million entity links. The paper describes the properties of the Wikipedia texts, the corpus creation process, the corpus format, and interesting use cases such as Named Entity Linking training and evaluation.
Topics: Corpus (Creation, Annotation, etc.), Named Entity Recognition, Semantic Web
Full paper: DBpedia Abstracts: A Large-Scale, Open, Multilingual NLP Training Corpus
Bibtex:
@InProceedings{BRMMER16.895,
  author = {Martin Brümmer and Milan Dojchinovski and Sebastian Hellmann},
  title = {DBpedia Abstracts: A Large-Scale, Open, Multilingual NLP Training Corpus},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
  year = {2016},
  month = {may},
  date = {23-28},
  location = {Portorož, Slovenia},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Sara Goggi and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Helene Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {978-2-9517408-9-1},
  language = {english}
 }
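
The abstract highlights the corpus's entity-link annotations and Named Entity Linking as a use case. Below is a minimal sketch of how such annotations could be read, assuming the corpus is distributed as RDF in the NLP Interchange Format (NIF) with Turtle serialization, that entity links use the itsrdf:taIdentRef property, and that surface forms are given by nif:anchorOf; the file name and these property choices are assumptions for illustration, not confirmed by this summary.

# Sketch: read entity-link annotations from a (hypothetical) NIF/Turtle sample file.
from rdflib import Graph, Namespace

NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")

g = Graph()
g.parse("dbpedia-abstracts-sample.ttl", format="turtle")  # hypothetical file name

# Each (phrase, resource) pair is one entity link: an annotated text span
# pointing to a DBpedia resource via itsrdf:taIdentRef (assumed modelling).
entity_links = list(g.subject_objects(ITSRDF.taIdentRef))
print(len(entity_links), "entity links found")

# Print the surface form and the linked resource for the first few annotations.
for phrase, resource in entity_links[:10]:
    surface = g.value(phrase, NIF.anchorOf)
    print(surface, "->", resource)

Such (surface form, resource) pairs are the kind of supervision a Named Entity Linking system could be trained or evaluated on, as suggested by the abstract.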