Task description

Following the task of 2015, the multi-lingual single-document summarization task will be to generate a single-document summary for each of the given Wikipedia featured articles in one of the approximately 38 languages provided. The provided training data will be the Single-Document Summarization Task data from MultiLing 2015. A new set of test data will be generated from additional Wikipedia featured articles. The summaries will be evaluated via automatic methods, and participants will be required to perform some limited summarization evaluations. The manual evaluation will consist of pairwise comparisons of machine-generated summaries. Each evaluator will be presented with the human-generated summary and two machine summaries. The evaluation task is to read the human summary and judge whether one machine summary is significantly closer to the human summary in information content (e.g., system A > system B) or whether the two machine summaries contain a comparable quantity of information relative to the human summary.

Papers on multi-lingual summarization based on the 2015 training and test data may be submitted for consideration as part of the workshop proceedings. Note that the 2017 test data will be released the day AFTER the papers are due. There will be an opportunity for poster sessions presenting results on the 2017 data for all who submit summaries for at least 5 languages in the 2017 multi-lingual summarization task.

Data

For 2017 the training data will be the 2015 test data, which may be downloaded from the 2015 site or simply by clicking this link, as well as the 2015 training data.

The submitted summaries for 2015 and their automatic evaluation scores can be downloaded by clicking here. ROUGE needs to be modified to run on multilingual data; you may download the modifications, along with the scripts used for 2015, here.

The 2017 Test Data is available here. These data consist of 30 featured Wikipedia articles for each of 41 languages. The data are formatted as both XML and raw text. The documents are in UTF-8 without markup or images. For MultiLing 2015 and 2017, the character length of the human summary for each document is provided, called the target length. Each machine summary should be as close to the provided target length as possible. For the purpose of evaluation, all machine summaries longer than the target length will be truncated to the target length. The summaries will be evaluated via automatic methods, and participants may be asked to perform some limited summarization evaluations.
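The truncation rule above can be sketched in a few lines (a minimal illustration only, not the official evaluation code; the target length is measured in characters):

```python
def truncate_to_target(summary: str, target_length: int) -> str:
    """Truncate a machine summary to the target character length.
    Summaries at or under the target length are returned unchanged."""
    return summary[:target_length]

# A summary 150 characters long, scored against a 100-character target:
summary = "word " * 30
print(len(truncate_to_target(summary, 100)))  # 100
```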

Each team will be allowed up to 5 submissions for each language. The submissions should be placed in an archive (.zip or .tar file) with the name of the top-level directory being the team name. At the next level there should be a subdirectory named Priority1 and optional subdirectories Priority2, Priority3, Priority4, and Priority5, corresponding to the up to 5 submissions for the team. Within each of these subdirectories, there should be a directory for each language, using the 2-character language codes provided in the testing data. Each of these directories should contain 30 files, each named with the hash value provided for the corresponding article. You may optionally add extensions using one or more periods ("."). For example, if the featured article's name is feedc24a067e279b75b5c9fbfea1dfd5.txt, then examples of valid file names for a summary of this file are:

      feedc24a067e279b75b5c9fbfea1dfd5.txt
      feedc24a067e279b75b5c9fbfea1dfd5.txt.ABC.NMF
      feedc24a067e279b75b5c9fbfea1dfd5.txt.CFD
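The required archive layout can be sketched with a short helper script. This is a hedged example: the helper itself and the team name "TeamX" in the usage note are hypothetical, but the directory structure follows the description above.

```python
import os
import zipfile


def build_submission(root, team, summaries):
    """Create the submission layout described above:
    <team>/Priority<N>/<2-character language code>/<hash file name>
    and archive it as <team>.zip. `summaries` maps
    (priority, language_code, file_name) -> summary text."""
    for (priority, lang, fname), text in summaries.items():
        d = os.path.join(root, team, "Priority%d" % priority, lang)
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, fname), "w", encoding="utf-8") as f:
            f.write(text)
    # Archive the whole team directory as <team>.zip.
    archive = os.path.join(root, team + ".zip")
    with zipfile.ZipFile(archive, "w") as zf:
        for dirpath, _, files in os.walk(os.path.join(root, team)):
            for name in files:
                full = os.path.join(dirpath, name)
                zf.write(full, os.path.relpath(full, root))
    return archive
```

For instance, a hypothetical team "TeamX" submitting one English summary would call `build_submission(workdir, "TeamX", {(1, "en", "feedc24a067e279b75b5c9fbfea1dfd5.txt"): summary_text})` and email the resulting TeamX.zip.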

Please then email the archive to rankel@math.umd.edu and conroyjohnm@gmail.com with the subject line "MultiLing17 Single Doc Submission".


Results

To be announced

Dates

Training data available Dec 18, 2016

Paper submission deadline extended to Monday, Jan 30, 2017 (end of day GMT-12, i.e. end of day wherever you are in the world).

Test data available Jan 31, 2017: Now available!

Submissions Due Feb 15, 2017

Manual Evaluation Begins: February 20, 2017

Preliminary Results Released: March 15, 2017.  Now available!

Workshop:  April 3, 2017