
Bibliographic record no. 1110029

Staying True to Your Word: (How) Can Attention Become Explanation?


Tutek, Martin; Šnajder, Jan
Staying True to Your Word: (How) Can Attention Become Explanation? // Proceedings of the 5th Workshop on Representation Learning for NLP
online, 2020, pp. 131-142, doi:10.18653/v1/2020.repl4nlp-1.17 (poster, international peer review, full paper (in extenso), scientific)


CROSBI ID: 1110029

Title
Staying True to Your Word: (How) Can Attention Become Explanation?

Authors
Tutek, Martin; Šnajder, Jan

Type, subtype and category of work
Conference proceedings paper, full paper (in extenso), scientific

Source
Proceedings of the 5th Workshop on Representation Learning for NLP, 2020, pp. 131-142

Conference
Association for Computational Linguistics

Place and date
Online, 5-10 July 2020

Type of participation
Poster

Type of peer review
International peer review

Keywords
Natural Language Processing; Interpretability; Explainable AI; Recurrent neural networks

Abstract
The attention mechanism has quickly become ubiquitous in NLP. Beyond improving model performance, attention has been widely used as a glimpse into the inner workings of NLP models. This latter aspect has become a common topic of discussion in recent years, most notably in the work of Jain and Wallace and of Wiegreffe and Pinter. With the shortcomings of using attention weights as a tool of transparency revealed, the attention mechanism has been stuck in a limbo, without concrete proof of when and whether it can be used as an explanation. In this paper, we explain why attention has drawn rightful critique when used with recurrent networks on sequence classification tasks. We propose a remedy to these issues in the form of a word-level objective, and our findings lend credibility to attention as a means of providing faithful interpretations of recurrent models.
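The abstract only names a "word-level objective" without detailing it, so the following is a minimal PyTorch sketch of one plausible reading of that idea: a recurrent attention classifier whose hidden states are additionally asked to reconstruct the identity of their own input word, so that the states attention points at remain tied to the words they nominally represent. All names, the reconstruction head, and the aux_weight value are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionClassifier(nn.Module):
    """BiLSTM classifier with additive attention and a per-token auxiliary head.

    The word-level head (hypothetical here) predicts each position's input word
    from its hidden state, discouraging hidden states from drifting away from
    the tokens they correspond to.
    """
    def __init__(self, vocab_size, emb_dim=100, hid_dim=150, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hid_dim, 1)                 # additive attention scorer
        self.classifier = nn.Linear(2 * hid_dim, num_classes)
        self.word_head = nn.Linear(2 * hid_dim, vocab_size)   # word-level auxiliary head

    def forward(self, tokens, mask):
        emb = self.embedding(tokens)                           # (B, T, E)
        states, _ = self.encoder(emb)                          # (B, T, 2H)
        scores = self.attn(states).squeeze(-1)                 # (B, T)
        scores = scores.masked_fill(~mask, float("-inf"))      # ignore padding positions
        alpha = torch.softmax(scores, dim=-1)                  # attention weights
        context = torch.bmm(alpha.unsqueeze(1), states).squeeze(1)  # (B, 2H)
        logits = self.classifier(context)                      # sequence-level prediction
        word_logits = self.word_head(states)                   # per-token word predictions
        return logits, word_logits, alpha

def loss_fn(logits, word_logits, tokens, labels, mask, aux_weight=0.1):
    """Classification loss plus a word-level reconstruction term (aux_weight is illustrative)."""
    cls_loss = F.cross_entropy(logits, labels)
    tok_loss = F.cross_entropy(
        word_logits.view(-1, word_logits.size(-1)), tokens.view(-1), reduction="none"
    )
    tok_loss = (tok_loss * mask.view(-1).float()).sum() / mask.float().sum()
    return cls_loss + aux_weight * tok_loss

During training the word-level term is simply added to the classification loss; at test time only the attention weights alpha are inspected as candidate explanations.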

Original language
English

Scientific fields
Information and communication sciences



ASSOCIATED WITH THIS WORK


Institutions:
Fakultet elektrotehnike i računarstva, Zagreb

Profiles:

Jan Šnajder (author)

Martin Tutek (author)

Cite this publication:

Tutek, Martin; Šnajder, Jan
Staying True to Your Word: (How) Can Attention Become Explanation? // Proceedings of the 5th Workshop on Representation Learning for NLP
online, 2020, pp. 131-142, doi:10.18653/v1/2020.repl4nlp-1.17 (poster, international peer review, full paper (in extenso), scientific)
Tutek, M. & Šnajder, J. (2020) Staying True to Your Word: (How) Can Attention Become Explanation? In: Proceedings of the 5th Workshop on Representation Learning for NLP. doi:10.18653/v1/2020.repl4nlp-1.17.
@inproceedings{tutek-snajder-2020-staying, author = {Tutek, Martin and \v{S}najder, Jan}, title = {Staying True to Your Word: (How) Can Attention Become Explanation?}, booktitle = {Proceedings of the 5th Workshop on Representation Learning for NLP}, year = {2020}, pages = {131--142}, doi = {10.18653/v1/2020.repl4nlp-1.17}, keywords = {Natural Language Processing, Interpretability, Explainable AI, Recurrent neural networks}, address = {online} }
