PLoS One. 2012;7(7):e41792.
doi: 10.1371/journal.pone.0041792. Epub 2012 Jul 25.

Diagnostic features of emotional expressions are processed preferentially


Elisa Scheller et al. PLoS One. 2012.

Abstract

Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while eye movements were measured. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements merely reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region of the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth for happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from the stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them.
This mechanism might crucially depend on amygdala functioning, and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
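The saliency analyses referred to above compare the mean model-derived saliency in the eye region with that in the mouth region (the "saliency ratio" used in the figures). A minimal sketch of that ratio computation is shown below; the region-of-interest coordinates and the toy saliency map are hypothetical stand-ins, since the study derived its maps from an established bottom-up attention model rather than random data.

```python
import numpy as np

def saliency_ratio(saliency_map, eye_roi, mouth_roi):
    """Mean saliency in the eye region divided by mean saliency in the mouth region.

    saliency_map: 2-D array of per-pixel saliency values.
    eye_roi, mouth_roi: (row_start, row_stop, col_start, col_stop) rectangles.
    """
    r0, r1, c0, c1 = eye_roi
    eye_mean = saliency_map[r0:r1, c0:c1].mean()
    r0, r1, c0, c1 = mouth_roi
    mouth_mean = saliency_map[r0:r1, c0:c1].mean()
    return eye_mean / mouth_mean

# Hypothetical example: a toy 100x80 saliency map with a boosted eye region.
rng = np.random.default_rng(0)
smap = rng.random((100, 80))
smap[25:40, 15:65] += 1.0  # assumed eye region made more salient
ratio = saliency_ratio(smap, eye_roi=(25, 40, 15, 65), mouth_roi=(70, 85, 25, 55))
print(ratio > 1.0)  # True: eyes more salient than mouth in this toy map
```

A ratio above 1 indicates that the model assigns more bottom-up saliency to the eyes than to the mouth; the study's point is that observed gaze preferences did not track this ratio.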


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. Illustration of the trial structure (Experiment 1).
Figure 2
Figure 2. Frequency histograms showing the distribution of saliency ratios (average saliency in the eye region divided by the average saliency in the mouth region) for fearful, happy and neutral facial expressions.
Figure 3
Figure 3. Proportions of fixation changes towards the other major facial feature as a function of task, presentation time, facial expression and initial fixation (Experiment 1).
Error bars indicate standard errors of the mean.
Figure 4
Figure 4. Illustration of the modulatory effect of facial expression on fixation changes (A,C) and fixation durations (B,D) across all experimental tasks and presentation times (Experiment 1).
The upper panels (A,B) show the values for the whole stimulus set whereas the lower panels (C,D) only depict the respective values for a subset of faces with a comparable saliency ratio in the eye as compared to the mouth region. Error bars indicate standard errors of the mean.
Figure 5
Figure 5. Heat maps illustrating the normalized fixation time on different face regions for the long presentation time of Experiment 1 (A) and the normalized distribution of saliency (B) as derived from a computational model of bottom-up visual attention, for fearful, happy and neutral facial expressions.
Figure 6
Figure 6. Proportion of time spent fixating either the eye or the mouth region in relation to the time subjects spent fixating the overall face in the long presentation time condition (Experiment 1).
Mean proportions for fixations on the eye or mouth region are shown as a function of task, initial fixation and facial expression with error bars indicating standard errors of the mean. The regions of interest that were used to define fixations in the eye or mouth region, respectively, are shown on the right side.
Figure 7
Figure 7. Proportions of fixation changes towards the eye and the mouth region as a function of the position in the visual field and the facial expression (Experiment 2).
Results are depicted separately for a stimulus set with a similar saliency ratio of the eye as compared to the mouth region across facial expressions (stimulus set 1) and for a stimulus set with a saliency ratio of approximately 1 (stimulus set 2). The regions of interest that were used to define whether saccades targeted the eye or mouth region, respectively, are shown on the right side. Error bars indicate standard errors of the mean.
Figure 8
Figure 8. Proportion of time spent fixating either the eye or the mouth region in relation to the time subjects spent fixating the overall face (Experiment 2).
Mean proportions for fixations on the eye or mouth region are shown as a function of the position in the visual field and the facial expression. Results are depicted separately for a stimulus set with a similar saliency ratio of the eye as compared to the mouth region across facial expressions (stimulus set 1) and for a stimulus set with a saliency ratio of approximately 1 (stimulus set 2). The regions of interest that were used to define fixations in the eye or mouth region, respectively, are shown on the right side. Error bars indicate standard errors of the mean.
