View of bibliographic record number: 903550
Paraphrase-focused learning to rank for domain-specific frequently asked questions retrieval // Expert Systems with Applications, 91 (2018), 418-433 doi:10.1016/j.eswa.2017.09.031 (international peer review, article, scientific)
CROSBI ID: 903550
Title
Paraphrase-focused learning to rank for domain-specific frequently asked questions retrieval
Authors
Karan, Mladen ; Šnajder, Jan
Source
Expert Systems with Applications (0957-4174) 91 (2018); 418-433
Type, subtype and category of work
Journal papers, article, scientific
Keywords
question answering ; FAQ retrieval ; learning to rank ; ListNET ; LambdaMART ; convolutional neural network
Abstract
A frequently asked questions (FAQ) retrieval system improves access to information by allowing users to pose natural language queries over an FAQ collection. From an information retrieval perspective, FAQ retrieval is a challenging task, mainly because of the lexical gap that exists between a query and an FAQ pair, both of which are typically very short. In this work, we explore the use of supervised learning to rank to improve the performance of domain-specific FAQ retrieval. While supervised learning-to-rank models have been shown to yield effective retrieval performance, they require costly human-labeled training data in the form of document relevance judgments or question paraphrases. We investigate how this labeling effort can be reduced using a labeling strategy geared toward the manual creation of query paraphrases rather than the more time-consuming relevance judgments. In particular, we investigate two such strategies, and test them by applying supervised ranking models to two domain-specific FAQ retrieval data sets, showcasing typical FAQ retrieval scenarios. Our experiments show that supervised ranking models can yield significant improvements in the precision-at-rank-5 measure compared to unsupervised baselines. Furthermore, we show that a supervised model trained using data labeled via a low-effort paraphrase-focused strategy has the same performance as that of the same model trained using fully labeled data, indicating that the strategy is effective at reducing the labeling effort while retaining the performance gains of the supervised approach. To encourage further research on FAQ retrieval we make our FAQ retrieval data set publicly available.
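The precision-at-rank-5 measure reported in the abstract can be sketched as follows. This is an illustrative computation only, not code from the paper; the function and the example FAQ identifiers are hypothetical.

```python
def precision_at_k(ranked_faq_ids, relevant_faq_ids, k=5):
    """Fraction of the top-k retrieved FAQ pairs that are relevant.

    ranked_faq_ids: list of FAQ ids in the order the ranker returned them.
    relevant_faq_ids: set of ids judged relevant for the query.
    """
    top_k = ranked_faq_ids[:k]
    hits = sum(1 for faq_id in top_k if faq_id in relevant_faq_ids)
    return hits / k

# Hypothetical example: the ranker returns FAQs 3, 7, 1, 9, 4 for a query,
# and the gold labels mark FAQs 7 and 4 as relevant (2 hits out of 5).
print(precision_at_k([3, 7, 1, 9, 4], {7, 4}))  # -> 0.4
```

In evaluation, this score is averaged over all test queries, which is how "improvements in the precision-at-rank-5 measure" across a data set are typically reported.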
Original language
English
Scientific areas
Computer Science
ASSOCIATIONS OF THE WORK
Institutions:
Fakultet elektrotehnike i računarstva, Zagreb
Journal is indexed in:
- Current Contents Connect (CCC)
- Web of Science Core Collection (WoSCC)
- Science Citation Index Expanded (SCI-EXP)
- SCI-EXP, SSCI and/or A&HCI
- Scopus