LiLa: Linking Latin

Building a Knowledge Base of Linguistic Resources for Latin

Third Workshop on Language Technologies for Historical and Ancient LAnguages (#LT4HALA2024)


Description

This one-day workshop seeks to bring together scholars who are developing and/or using Language Technologies (LTs) for historically attested languages, so as to foster cross-fertilization between the Computational Linguistics community and the areas of the Humanities dealing with historical linguistic data, e.g. historians, philologists, linguists, archaeologists and literary scholars. Despite the current availability of large collections of digitized texts written in historical languages, such interdisciplinary collaboration is still hampered by the limited availability of annotated linguistic resources for most historical languages. Creating such resources is both a challenge and an obligation for LTs: to support historical linguistic research with the most up-to-date technologies, and to preserve the precious linguistic data that have survived from past times.

Relevant topics for the workshop include, but are not limited to:

  • handling spelling variation,
  • detection and correction of OCR errors,
  • creation and annotation of linguistic resources,
  • deciphering,
  • morphological/syntactic/semantic analysis of textual data,
  • adaptation of tools to address diachronic/diatopic/diastratic variation in texts,
  • teaching ancient languages with LTs,
  • NLP-driven theoretical studies in historical linguistics,
  • NLP-driven analysis of literary ancient texts,
  • evaluation of LTs designed for historical and ancient languages,
  • Large Language Models for the automatic analysis of ancient texts.

The workshop will also be the venue of the:

  • third edition of EvaLatin, an evaluation campaign entirely devoted to the evaluation of NLP tools for Latin. The third edition of EvaLatin will focus on two tasks: dependency parsing and emotion polarity detection. Dependency parsing will be based on the Universal Dependencies (UD) framework. No specific training data will be released; participants will be free to use any (kind of) resource they consider useful for the task, including the Latin treebanks already available in the UD collection. In this regard, one of the challenges of this task will be to understand which treebank (or combination of treebanks) is most suitable for dealing with the new test data, which will consist of both prose and poetic texts from different time periods. For the emotion polarity detection task, no training data will be released either, but the organizers will provide an annotation sample, a manually created polarity lexicon and annotation guidelines. In this task too, participants will be free to pursue whatever approach they prefer, including unsupervised and/or cross-language ones (which promise to be the most effective, given the lack of Latin training data for this task). Test data will be poetic texts from different time periods.
  • third edition of EvaHan, the evaluation campaign for the evaluation of NLP tools for Ancient Chinese. EvaHan 2024 will focus on two tasks: Ancient Chinese sentence segmentation and sentence punctuation.

Submissions

Three forms of papers will be considered for submission:

  • Regular long papers: up to eight (8) pages maximum*, presenting substantial, original, completed, and unpublished work.
  • Short papers: up to four (4) pages*, describing a small focused contribution, negative results, system demonstrations, etc.
  • Position papers: up to eight (8) pages*, discussing key hot topics, challenges and open issues, as well as cross-fertilization between computational linguistics and other disciplines.

* Excluding any number of additional pages for references, ethical considerations, conflict-of-interest statements, and data and code availability statements.

We encourage the authors of papers reporting experimental results to make their results reproducible, and the entire process of analysis replicable, by making available the data and tools they used. Presentations may be either oral or poster; in the proceedings there is no distinction between the accepted papers. Submissions are NOT anonymous. The LREC-COLING 2024 official format is required. Each paper will be reviewed by three independent reviewers.

As for EvaLatin and EvaHan, participants will be required to submit a technical report for each task (with all the related sub-tasks) they took part in. Technical reports will be included in the proceedings as short papers: the maximum length is 4 pages (excluding references) and they should follow the LREC-COLING 2024 official format. Reports will receive a light review (we will check the correctness of the format, the exactness of results and rankings, and the overall exposition). All participants will have the opportunity to present their results at the workshop.

Important Dates

Workshop

  • 26 February 2024: submission due
  • 18 March 2024: reviews due
  • 22 March 2024: notifications to authors
  • 5 April 2024: camera-ready (PDF) due

EvaLatin

  • 22 December 2023: guidelines available
  • Evaluation Window I – Task: Dependency Parsing
    • 1 February 2024: test data available
    • 8 February 2024: system results due to organizers
  • Evaluation Window II – Task: Emotion Polarity Detection
    • 12 February 2024: test data available
    • 19 February 2024: system results due to organizers
  • 11 March 2024: reports due to organizers
  • 22 March 2024: short report review deadline
  • 5 April 2024: camera-ready version of reports due to organizers

EvaHan

  • 22 December 2023: training data available
  • Evaluation Window
    • 12 February 2024: test data available
    • 19 February 2024: system results due to organizers
  • 11 March 2024: reports due to organizers
  • 22 March 2024: short report review deadline
  • 5 April 2024: camera-ready version of reports due to organizers

Identify, Describe and Share your LRs!

When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. Moreover, ELRA encourages all LREC-COLING authors to share the described LRs (data, tools, services, etc.) to enable their reuse and the replicability of experiments (including evaluation ones).

LT4HALA Organizers

EvaLatin Organizers

  • Rachele Sprugnoli, Università Cattolica del Sacro Cuore, Milan, Italy
  • Federica Iurescia, Università Cattolica del Sacro Cuore, Milan, Italy
  • Marco Passarotti, Università Cattolica del Sacro Cuore, Milan, Italy

EvaHan Organizers

  • Bin Li, School of Chinese Language and Literature, Nanjing Normal University, P.R. China
  • Bolin Chang, Nanjing Normal University, P.R. China
  • Minxuan Feng, Nanjing Normal University, P.R. China
  • Chao Xu, Nanjing Normal University, P.R. China
  • Dongbo Wang, Nanjing Agricultural University, P.R. China

Programme Committee

  • Adam Anderson, FactGrid Cuneiform Project, USA
  • Yannis Assael, Google DeepMind
  • Monica Berti, University of Leipzig, Germany
  • Luca Brigada Villa, Università di Bergamo, Italy
  • Flavio Massimiliano Cecchini, Università Cattolica del Sacro Cuore di Milano, Italy
  • Margherita Fantoli, University of Leuven, Belgium
  • Federica Gamba, Charles University, Czech Republic
  • Shai Gordin, Ariel University, Israel
  • Federica Iurescia, Università Cattolica del Sacro Cuore di Milano, Italy
  • Bin Li, School of Chinese Language and Literature at Nanjing Normal University, P.R. China
  • Eleonora Litta, Università Cattolica del Sacro Cuore di Milano, Italy
  • Yudong Liu, Western Washington University, USA
  • Barbara McGillivray, The Alan Turing Institute, UK
  • Beáta Megyesi, Uppsala University, Sweden
  • Chiara Palladino, Furman University, USA
  • John Pavlopoulos, Athens University of Economics and Business, Greece
  • Eva Pettersson, Uppsala University, Sweden
  • Sophie Prévost, Laboratoire Lattice, France
  • Thea Sommerschield, Ca’ Foscari University of Venice, Italy
  • James Tauber, Eldarion, USA
  • Toon Van Hal, Katholieke Universiteit Leuven, Belgium
  • Tariq Yousef, University of Southern Denmark, Denmark


Contact

For more information on the workshop and EvaLatin, please write to rachele.sprugnoli[AT]unipr.it with “LT4HALA” or “EvaLatin” in the subject line of your email.

For more information on EvaHan, please write to libin.njnu[AT]gmail.com with “EvaHan” in the subject line of your email.

Follow @ERC_LiLa and the hashtag #LT4HALA on Twitter for updates.