Data source: CROSBI

BLEU Evaluation of Machine-Translated English-Croatian Legislation (CROSBI ID 587076)

Conference paper in proceedings | original scientific paper | international peer review

Seljan, Sanja ; Vičić, Tomislav ; Brkić, Marija. BLEU Evaluation of Machine-Translated English-Croatian Legislation // Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12) / Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis (eds.). Istanbul: European Language Resources Association (ELRA), 2012

Authorship data

Seljan, Sanja ; Vičić, Tomislav ; Brkić, Marija

English

BLEU Evaluation of Machine-Translated English-Croatian Legislation

This paper presents work on the evaluation of a freely available online machine translation (MT) service, Google Translate, for the English-Croatian language pair in the domain of legislation. A total set of 200 sentences, for which three reference translations are provided, is divided into short and long sentences. Human evaluation is performed by native speakers using the criteria of adequacy and fluency. Fleiss' kappa is used to measure the reliability of agreement among raters. Human evaluation is enriched by error analysis in order to examine the influence of error types on fluency and adequacy, and to inform further research. Translation errors are divided into several categories: non-translated words, word omissions, unnecessarily translated words, morphological errors, lexical errors, syntactic errors, and incorrect punctuation. The automatic evaluation metric BLEU is calculated with regard to single and multiple reference translations. System-level Pearson's correlation between BLEU scores based on single and multiple reference translations is given, as well as the correlation between BLEU scores for short and long sentences, and the correlation between the criteria of fluency and adequacy and each error category.
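The BLEU scoring against multiple reference translations described above can be sketched as follows. This is a minimal illustrative implementation of sentence-level BLEU, not the authors' actual evaluation tooling (which is not specified in the record); the `bleu` function, its whitespace tokenization, and the default 4-gram order are assumptions for demonstration only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU of a candidate against one or more references.

    Modified n-gram precisions are clipped against the maximum count of
    each n-gram over all references; the brevity penalty uses the
    reference length closest to the candidate length.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref in references:
            for gram, cnt in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], cnt)
        clipped = sum(min(cnt, max_ref_counts[gram])
                      for gram, cnt in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: compare against the closest reference length
    # (ties broken toward the shorter reference).
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Example: a candidate identical to one of three references scores 1.0,
# mirroring the paper's setup of scoring against multiple references.
cand = "the directive shall enter into force on the day of publication".split()
refs = [cand,
        "the directive enters into force on the publication day".split(),
        "this directive shall take effect upon publication".split()]
print(bleu(cand, refs))
```

Having three references, as in the paper, raises scores relative to a single reference because a candidate n-gram is credited if it matches any reference, which is exactly the single-versus-multiple-reference contrast the correlation analysis examines.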

BLEU metric; English-Croatian legislation; human evaluation


Publication data

2012

published

Parent publication data

Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis

Istanbul: European Language Resources Association (ELRA)

978-2-9517408-7-7

Conference data

Language Resources and Evaluation (LREC'12)

poster

23.05.2012 - 25.05.2012

Istanbul, Turkey

Related fields

Information and Communication Sciences

Links