Bibliographic record no. 582714

BLEU Evaluation of Machine-Translated English-Croatian Legislation


Seljan, Sanja; Vičić, Tomislav; Brkić, Marija
BLEU Evaluation of Machine-Translated English-Croatian Legislation // Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12) / Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis (eds.).
Istanbul, Turkey: European Language Resources Association (ELRA), 2012. (poster, international peer review, full paper (in extenso), scientific)


Title
BLEU Evaluation of Machine-Translated English-Croatian Legislation

Authors
Seljan, Sanja; Vičić, Tomislav; Brkić, Marija

Type, subtype and category of work
Conference proceedings papers, full paper (in extenso), scientific

Source
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12) / Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis - Istanbul, Turkey : European Language Resources Association (ELRA), 2012

ISBN
978-2-9517408-7-7

Conference
Language Resources and Evaluation (LREC'12)

Place and date
Istanbul, Turkey, 23-25 May 2012

Type of participation
Poster

Type of review
International peer review

Keywords
BLEU metric; English-Croatian legislation; human evaluation

Abstract
This paper presents work on the evaluation of a freely available online machine translation (MT) service, Google Translate, for the English-Croatian language pair in the domain of legislation. The total set of 200 sentences, for which three reference translations are provided, is divided into short and long sentences. Human evaluation is performed by native speakers, using the criteria of adequacy and fluency. Fleiss' kappa is used to measure the reliability of agreement among raters. Human evaluation is enriched by error analysis, in order to examine the influence of error types on fluency and adequacy, and to use it in further research. Translation errors are divided into several categories: non-translated words, word omissions, unnecessarily translated words, morphological errors, lexical errors, syntactic errors and incorrect punctuation. The automatic evaluation metric BLEU is calculated against both a single reference translation and multiple reference translations. System-level Pearson's correlation between BLEU scores based on a single and on multiple reference translations is given, as well as the correlation between BLEU scores for short and long sentences, and the correlation between the criteria of fluency and adequacy and each error category.
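The abstract computes BLEU against both a single reference translation and multiple reference translations. As an illustration only (not the authors' implementation), a minimal sentence-level BLEU with clipped n-gram precision, support for multiple references, and the standard brevity penalty can be sketched in Python; the add-one smoothing is an assumption, since the paper does not state its smoothing scheme:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, references, max_n=4):
    """Sentence-level BLEU for one hypothesis against one or more references.

    hypothesis: list of tokens; references: list of token lists.
    Uses clipped n-gram precision up to max_n and a brevity penalty
    against the reference closest in length to the hypothesis.
    """
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hypothesis, n)
        # Clip each n-gram count by its maximum count over all references.
        max_ref = Counter()
        for ref in references:
            for ng, c in ngrams(ref, n).items():
                max_ref[ng] = max(max_ref[ng], c)
        clipped = sum(min(c, max_ref[ng]) for ng, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Add-one smoothing avoids log(0) on short sentences (an assumption;
        # the paper does not specify how zero precisions are handled).
        prec = (clipped + 1) / (total + 1)
        log_prec_sum += math.log(prec) / max_n
    # Brevity penalty against the reference closest in length.
    hyp_len = len(hypothesis)
    ref_len = min((abs(len(r) - hyp_len), len(r)) for r in references)[1]
    bp = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec_sum)
```

Scoring against multiple references simply means passing a longer list, mirroring the paper's setup of three reference translations per sentence; with more references, more hypothesis n-grams can find a match, which is why single- and multi-reference BLEU scores differ and their correlation is worth reporting.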

Original language
English

Scientific fields
Information and communication sciences



WORK AFFILIATIONS


Project / theme
130-1300646-0909 - Information Technology in the Translation of Croatian and in Language e-Learning (Sanja Seljan)
318-0361935-0852 - Speech Technologies (Ivo Ipšić)

Institutions
Faculty of Humanities and Social Sciences, Zagreb
University of Rijeka - Department of Informatics