Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/22686
Title: Introducing shared task evaluation to NLG: the TUNA shared task evaluation challenges
Other Titles: Empirical methods in natural language generation: data-oriented methods and empirical evaluation
Authors: Gatt, Albert; Belz, Anja
Keywords: Natural language processing (Computer science); Corpora (Linguistics); Linguistic analysis (Linguistics); Reference (Linguistics); Word (Linguistics)
Issue Date: 2010
Publisher: Springer-Verlag Berlin Heidelberg
Citation: Gatt, A., & Belz, A. (2010). Introducing shared tasks to NLG: the TUNA shared task evaluation challenges. In E. Krahmer & M. Theune (Eds.), Empirical methods in natural language generation: data-oriented methods and empirical evaluation (pp. 264-293). Heidelberg: Springer-Verlag Berlin Heidelberg.
Abstract: Shared Task Evaluation Challenges (STECs) have only recently begun in the field of NLG. The TUNA STECs, which focused on Referring Expression Generation (REG), have been part of this development since its inception. This chapter looks back on the experience of organising the three TUNA Challenges, which came to an end in 2009. While we discuss the role of the STECs in yielding a substantial body of research on the REG problem, which has opened new avenues for future research, our main focus is on the role of different evaluation methods in assessing the output quality of REG algorithms, and on the relationship between such methods.
URI: https://www.um.edu.mt/library/oar//handle/123456789/22686
ISBN: 9783642155727
Appears in Collections: Scholarly Works - InsLin
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| tunareg-bkchapter.pdf | | 491.41 kB | Adobe PDF | View/Open |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.