Call for participation in the community tasks - MultiLing 2019

Community Task Descriptions:
The following tasks will run in the MultiLing community (both before and beyond MultiLing 2019):

* Headline generation
The objective of the Headline Generation (HG) task is to explore some of the challenges highlighted by current state-of-the-art approaches to creating informative headlines for news articles: non-descriptive headlines, out-of-domain training data, and generating headlines from long documents that are not well represented by the head heuristic. We propose to make available a large training set for headline generation and to create evaluation conditions which emphasize those challenges. We will also rerun the task under DUC 2004 conditions in order to produce comparable results.
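To make the "head heuristic" concrete, a minimal sketch of that baseline is shown below: it simply returns the opening words of the article as the headline. The function name, the example article, and the word limit are all illustrative assumptions, not part of the task definition.

```python
import re

def lead_baseline(article_text, max_words=10):
    """Illustrative 'head heuristic' baseline (hypothetical helper):
    use the opening words of the article as its headline."""
    # Take the first sentence, then truncate it to max_words tokens.
    first_sentence = re.split(r"(?<=[.!?])\s+", article_text.strip())[0]
    return " ".join(first_sentence.split()[:max_words])

print(lead_baseline(
    "The central bank raised interest rates on Tuesday. Markets reacted calmly."
))
# → "The central bank raised interest rates on Tuesday."
```

This baseline works well when the headline-worthy content sits at the top of the document, which is exactly the assumption that long documents in this task are meant to break.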

* Summary evaluation
This task aims to examine how well automated systems can evaluate summaries in different languages. It takes as input the summaries generated by automatic systems and humans in the Summarization Tasks of MultiLing 2015, as well as in the Single-document summarization tasks of 2015 and 2017 (once the latter is completed). The output should be a grading of the summaries. Ideally, the automatic evaluation should correlate as strongly as possible with human judgment, so the evaluation will be based on the correlation between estimated grades and human grades.
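As a sketch of how such a correlation-based evaluation might be computed, the snippet below calculates a Pearson correlation between system-estimated grades and human grades. The grade values are invented for illustration; the task itself does not prescribe Pearson over other correlation measures.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length grade lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical grades: each position corresponds to one summary.
human_grades     = [4.5, 3.0, 2.0, 5.0]
estimated_grades = [4.0, 3.2, 1.8, 4.9]
print(round(pearson(estimated_grades, human_grades), 3))  # close to 1.0
```

A participating system would be ranked by how high this correlation is across the evaluated summaries, i.e. how closely its grades track the human ones.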

* Financial narrative summarization
This task aims to demonstrate the value and challenges of applying summarization to financial text, usually referred to as financial narrative disclosures. Participants will be asked to provide structured summaries based on real-world, publicly available financial annual reports, extracting information from their key sections. Generated summaries should reflect the analysis and assessment of the business's financial trends over the past year, as reported in the annual reports.
Manual evaluation will be conducted by experts. Automatic evaluation will be carried out using established summary evaluation measures, with human-written executive summaries serving as gold-standard models.
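One widely used family of such automatic measures is ROUGE; the sketch below computes ROUGE-1 recall (the fraction of the gold summary's unigrams that a system summary recovers, with clipped counts). The texts and function name are illustrative assumptions, and real evaluations typically apply stemming and other preprocessing omitted here.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: share of the reference's unigrams that the
    candidate summary recovers (clipped counts, case-folded)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(count, cand[token]) for token, count in ref.items())
    return overlap / sum(ref.values())

# Hypothetical gold standard: a human-written executive summary.
gold = "revenue grew strongly while operating costs fell"
system = "revenue grew while costs fell sharply"
print(round(rouge1_recall(system, gold), 3))  # → 0.714
```

Higher recall means the generated summary covers more of the content that the human executive summary considered essential.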

For more information, visit the MultiLing community website.

We encourage authors to participate in the above tasks.

Information about the MultiLing 2019 workshop and its organization can be found on the workshop website.