A Minimal Setup for Spontaneous Smile Quantification Applicable for Valence Detection

Nascimben, Mauro (first author)
Contributor role: Methodology
2020-01-01

Abstract

Tracking emotional responses as they unfold has been one of the hallmarks of applied neuroscience and related disciplines, but recent studies suggest that automatic tracking of facial expressions has low validity. In this study, we focused on the direct measurement of the facial muscles involved in expressions such as smiling. We used single-channel surface electromyography (sEMG) to evaluate muscular activity from the Zygomaticus Major facial muscle while participants watched music videos. Participants then rated each video on several dimensions, including its emotional tone ("Valence"), their personal preference ("Liking"), and whether the video conveyed strength and impressiveness ("Dominance"). Using a minimal recording setup, we employed three measures to characterize the muscular activity associated with spontaneous smiles: the total time spent smiling (ZygoNum), the average duration of smiles (ZygoLen), and instances of high valence (ZygoTrace). Our results demonstrate that Valence was the emotional dimension most closely related to Zygomaticus activity, and that ZygoNum had higher discriminatory power than ZygoLen for Valence quantification. An additional investigation of the fractal properties of the sEMG time series confirmed previous Facial Action Coding System (FACS) studies documenting a smoother contraction of the facial muscles during enjoyment smiles. Further analysis of the ZygoTrace responses to the video events over time discriminated "high valence" stimuli with 76% accuracy. This approach was further validated against previous findings on valence detection using features derived from a single-channel EEG setup. We discuss these results in light of both the recent replication problems of facial expression measures and the need for methods that reliably assess emotional responses in more challenging settings, such as Virtual Reality, where the recording equipment often covers facial expressions.
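The abstract names three summary measures (ZygoNum as the total time spent smiling, ZygoLen as the average smile duration) and a fractal analysis of the sEMG trace, but does not spell out the computational pipeline. The sketch below illustrates one plausible way to derive such measures from a single-channel recording; the filter cutoffs, the mean-plus-k-SD activation threshold, and all function names are assumptions for illustration, and the Higuchi fractal dimension is used here as a common stand-in for the paper's unspecified fractal measure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def semg_envelope(x, fs, lowcut=20.0, highcut=450.0, smooth_hz=4.0):
    """Band-pass, rectify, and low-pass a raw sEMG trace into an amplitude
    envelope. The cutoff frequencies are illustrative defaults, not the
    paper's values."""
    b, a = butter(4, [lowcut / (fs / 2), highcut / (fs / 2)], btype="band")
    rectified = np.abs(filtfilt(b, a, x))
    b, a = butter(4, smooth_hz / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

def smile_metrics(envelope, fs, k=2.0):
    """Binarize the envelope at mean + k*SD and summarize supra-threshold
    bursts. Returns the fraction of time above threshold (a ZygoNum-like
    measure) and the mean burst duration in seconds (a ZygoLen-like
    measure). The threshold rule is an assumption."""
    active = envelope > envelope.mean() + k * envelope.std()
    time_smiling = active.mean()
    # Burst boundaries from the rising/falling edges of the boolean trace
    edges = np.diff(active.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    stops = np.flatnonzero(edges == -1) + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        stops = np.r_[stops, active.size]
    durations = (stops - starts) / fs
    mean_smile_len = durations.mean() if durations.size else 0.0
    return time_smiling, mean_smile_len

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series; lower values indicate a
    smoother trace, in line with the smoother contractions the abstract
    reports for enjoyment smiles."""
    n = x.size
    curve_lengths = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((idx.size - 1) * k * k)
            lengths.append(lm)
        curve_lengths.append(np.mean(lengths))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(curve_lengths), 1)
    return slope

# Minimal usage on synthetic data (stand-in for a real recording):
fs = 1000.0
raw = np.random.randn(int(60 * fs))
env = semg_envelope(raw, fs)
zygo_num_like, zygo_len_like = smile_metrics(env, fs)
fd = higuchi_fd(env)
```

In the study itself, such per-video measures would presumably be compared against the self-reported Valence ratings; the synthetic data above is only there to make the sketch runnable.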
Files in this record:
fpsyg-11-566354-1.pdf (open access; license: Creative Commons; size: 1.62 MB; format: Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11579/180745
Citations
  • PMC 0
  • Scopus 4
  • Web of Science (ISI) 3