Overview of bibliographic record no. 1135086
Multimodal Emotion Analysis Based on Acoustic and Linguistic Features of the Voice // Social Computing and Social Media: Experience Design and Social Network Analysis. HCII 2021. Lecture Notes in Computer Science, vol 12774 / Meiselwitz, Gabriele (ed.).
Online: Springer, 2021, pp. 301-311. doi:10.1007/978-3-030-77626-8_20 (lecture, international peer review, full paper (in extenso), scientific)
CROSBI ID: 1135086
Title
Multimodal Emotion Analysis Based on Acoustic and Linguistic Features of the Voice
Authors
Koren, Leon ; Stipančić, Tomislav
Type, subtype and category of work
Conference proceedings paper, full paper (in extenso), scientific
Source
Social Computing and Social Media: Experience Design and Social Network Analysis. HCII 2021. Lecture Notes in Computer Science, vol 12774
/ Meiselwitz, Gabriele (ed.). Online: Springer, 2021, pp. 301-311
ISBN
978-3-030-77625-1
Conference
13th International Conference Social Computing and Social Media (SCSM 2021)
Place and date
Online, 24-29 July 2021
Type of participation
Lecture
Type of peer review
International peer review
Keywords
Emotion recognition ; Affective robotics ; Multimodal information fusion ; Voice analysis ; Speech recognition ; Learning ; Reasoning
Abstract
Artificial speech analysis can be used to detect non-verbal communication cues and reveal the current emotional state of a person. An inability to recognize emotions appropriately inevitably lessens the quality of social interaction. A better understanding of speech can be achieved by analyzing additional characteristics such as tone, pitch, rate, intensity, and meaning. In a multimodal approach, sensing modalities can be used to alter the behavior of the system and to adapt to inconsistencies of the real world: a change detected by a single modality can generate different system behavior at the global level. In this paper, we present a method for emotion recognition based on acoustic and linguistic features of speech. The presented voice modality is part of a larger multimodal computational architecture implemented on a real affective robot as a control mechanism for reasoning about the emotional state of the person in the interaction. While the acoustic sub-modality processes the audio signal, the linguistic sub-modality operates on transcribed text using a dedicated NLP model. Both methods are based on neural networks trained on available open-source databases. The two sub-modalities are then merged into a single voice modality through an algorithm for multimodal information fusion. The overall system is tested on recordings available through Internet services.
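
Below is a minimal illustrative sketch of the late-fusion step described in the abstract, assuming Python with NumPy. The emotion label set, the toy acoustic features, the toy lexicon, and the weighted-average fusion rule are all hypothetical stand-ins for the paper's trained neural networks and fusion algorithm, which are not specified in this record.

import numpy as np

EMOTIONS = ["anger", "happiness", "neutral", "sadness"]  # hypothetical label set


def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a logit vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()


def acoustic_submodality(audio: np.ndarray) -> np.ndarray:
    """Stand-in for the acoustic neural network: raw audio -> emotion probabilities."""
    # Toy features: log-energy and zero-crossing rate (illustrative only).
    energy = np.log(np.mean(audio ** 2) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(audio)))) / 2.0
    logits = np.array([energy + 3.0 * zcr, -energy, 0.1, -zcr])  # toy mapping
    return softmax(logits)


def linguistic_submodality(transcript: str) -> np.ndarray:
    """Stand-in for the NLP model: transcribed text -> emotion probabilities."""
    lexicon = {"angry": 0, "furious": 0, "happy": 1, "glad": 1, "sad": 3}  # toy lexicon
    logits = np.zeros(len(EMOTIONS))
    for word in transcript.lower().split():
        if word in lexicon:
            logits[lexicon[word]] += 1.0
    return softmax(logits)


def fuse(p_acoustic: np.ndarray, p_linguistic: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted late fusion of the two sub-modality distributions (one common choice)."""
    p = w * p_acoustic + (1.0 - w) * p_linguistic
    return p / p.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.normal(size=16000)  # one second of synthetic "audio" at 16 kHz
    transcript = "I am so happy to see you"
    p = fuse(acoustic_submodality(audio), linguistic_submodality(transcript))
    print(dict(zip(EMOTIONS, np.round(p, 3))))

In practice, each stand-in would be replaced by the trained acoustic and NLP networks the paper describes, and the fusion weight w would be tuned on validation data.
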
Original language
English
Scientific fields
Interdisciplinary natural sciences, Computer science, Mechanical engineering, Information and communication sciences, Cognitive science (natural sciences, technical sciences, biomedicine and health, social sciences and humanities)
WORK ASSOCIATIONS
Projects:
HRZZ-UIP-2020-02-7184 - Affective Multimodal Interaction Based on Constructed Robot Cognition (AMICORC) (Stipančić, Tomislav, HRZZ - 2020-02) (CroRIS)
Institutions:
Fakultet strojarstva i brodogradnje, Zagreb
Indexed in:
- Current Contents Connect (CCC)
- Web of Science Core Collection (WoSCC)
- Science Citation Index Expanded (SCI-EXP)
- SCI-EXP, SSCI and/or A&HCI
- Conference Proceedings Citation Index - Science (CPCI-S)
- Scopus