Brain network specialization may support growth of vocal emotion recognition skills in adolescence

Adult and teen talking

As youth mature, they become better able to identify other people’s emotional states based on their facial and vocal expressions (“emotion recognition”, or ER). How does this happen? This is still an open question! Alongside other changes in cognition and social interactions, changes to brain functioning during childhood and adolescence could play a role in the growth of emotion recognition skills. Although previous research has investigated how brain activation to facial expressions of emotion changes with age and across time, this study examined change in neural response to vocal emotional expressions.

We recruited children and teenagers aged 8 to 19 to participate in the study. While they were in an MRI scanner, we asked them to complete a vocal emotion recognition task: they were presented with recordings of other people saying things in different tones of voice, and had to select what emotion the speaker was expressing. Participants then came back one year later to do the same task again. We looked at how participants’ brains activated when they were hearing the emotional voices, compared to the neutral voices that we used as a baseline.

We asked three main questions:

  1. Did participants’ brain response to the emotional voices depend on their age?

  2. Did brain response change across 1 year’s time (i.e., from visit 1 to visit 2)?

  3. And, did the rate of change in brain response across visits depend on participants’ age?

We found that certain areas of the brain did respond differently in younger vs. older adolescents (Question 1, above): notably, there was an area of the prefrontal cortex that responded more strongly to emotional voices (than to neutral voices) in the older participants than the younger ones. This was a bit of a surprising finding, because previous research that asked similar questions about how the brain responds to emotional faces found that activation in this area of the brain decreased with age! We’re not sure why this result is different, but this may be because voices elicit different patterns of neural response than do faces.

We also found that regions like the dorsal striatum and inferior frontal gyrus—which have previously been implicated in coding the value of certain outcomes, and in ‘mentalizing’ tasks where people are asked to infer others’ emotions or mental states—showed less activation at the second visit than at the first (Question 2). Although we have to speculate about what this pattern means, it could reflect increased efficiency (i.e., needing less response in these areas) with repeated practice. Lastly, and perhaps most interestingly, we found that activation patterns in the right temporo-parietal junction (TPJ)—an area thought to be crucial to understanding others’ emotions and intentions—varied as a function of both age and time (Question 3). For younger teens, we saw increases in TPJ response between visits. But, for older teens, we found the opposite pattern. This inverted U-shaped pattern is consistent with the predictions of a prominent theory of neurodevelopment called the “interactive specialization” model (Johnson et al., 2000). According to this theory, early development is characterized by broad and diffuse patterns of activation (meaning that a given task would require lots of activation in many parts of the brain); with maturation, however, less activation is required, and fewer parts of the brain need to participate.

Images of brain on analysis program

What do these patterns of activation mean in the real world? Some of these patterns were linked to performance on the task. Specifically, people who showed decreases in dorsal striatum and TPJ activation across timepoints did best on the vocal emotion recognition task. Therefore, our findings suggest that some of these changes in neural activation to emotional voices may be part of the neural mechanisms supporting increases in vocal ER skills across childhood and adolescence. Our findings help shed some light on how changes to our brain’s function may be helping us better navigate our social world during adolescence.

To learn more, visit: https://doi.org/10.1093/scan/nsac021

Brain regions involved in ‘mentalizing’ process vocal emotions differently in youth with epilepsy

Previous research has found that youth with epilepsy are at risk for poorer social and relational outcomes. Although this is not true of everyone with epilepsy, many children and adolescents with this neurodevelopmental disorder report having a hard time making and maintaining friendships. Perhaps related to this, youth with epilepsy often also struggle to interpret others’ emotions: they tend to be less accurate than youth without epilepsy on emotion recognition tasks, where people are asked to identify the intended emotion in facial or vocal expressions. Why might this be? Some researchers have suggested that the brains of youth with epilepsy may respond differently to emotional faces, compared to youth without epilepsy. Could something similar be happening with emotional voices?

To answer this question, the current study recruited youth who had been diagnosed with intractable epilepsy (meaning they continued to experience seizures despite taking medication to prevent them), as well as youth who had not. Participants were asked to listen to recordings of emotional voices (e.g., angry voices, fearful voices, etc.) while they were in an MRI scanner. After each recording, participants were asked to indicate what emotion they thought was being expressed. We examined how accurate they were at determining the intended emotion in each recording, and how their brains responded to the different types of voices.

We found that youth with epilepsy were less accurate than youth without epilepsy on this vocal emotion recognition task—especially at younger ages. In addition, we found six regions of the brain that responded differently to the emotional voices in youth with vs. without epilepsy. Activation patterns in these areas (including regions like the right temporo-parietal junction, the right hippocampus, and the right medial prefrontal cortex) could actually predict whether any given participant had been diagnosed with epilepsy or not. Interestingly, many of these six regions are often found to be involved in ‘mentalizing tasks’, where participants are asked to make judgments about others’ emotions, thoughts, and beliefs. Our findings suggest that these brain areas might be responding differently when trying to interpret others’ emotions (based on their tone of voice) in youth with epilepsy. We don’t yet know whether these different patterns of activation are actually related to emotion recognition accuracy, or to social difficulties; they could simply reflect an alternative “strategy” when processing vocal emotional cues. Although more research is needed to determine this, our findings contribute to our understanding of how neurodiverse brains process social and emotional information.

Higher emotional intensity doesn't always make vocal cues easier to recognize: it depends on the emotion type

Man speaking angrily into phone

We know that someone’s tone of voice can be a good indicator of what they are feeling in that moment. Someone who is speaking quickly, loudly, and with a high-pitched voice is likely to be feeling angry—and many of us would be able to infer this from their vocal cues. But what if these cues were more subtle? Would we still be able to understand that they were upset based on their tone of voice?

The current study examined whether listeners’ ability to identify a speaker’s emotional state depended on the level of ‘emotional intensity’ with which an emotion was conveyed. Intuitively, it makes sense that high-intensity expressions would be better recognized than low-intensity expressions. But, less is known about how fine-grained variations in vocal cues influence listeners’ accuracy of interpretation.

To answer our questions, we created a series of recordings that varied from neutral expressions to full-intensity expressions of different emotions, in 10% intervals. We used actors’ full-intensity and neutral expressions from a previous study as end points, and merged them together to create recordings that were 10% neutral and 90% angry, 20% neutral and 80% angry, 30% neutral and 70% angry, etc. We presented these recordings to listeners in increasing order of intensity, and asked them to tell us what emotion was being conveyed in each. From this, we obtained an estimate of the slope of listeners’ accuracy across increasing levels of emotional intensity, for each emotion type.
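As an illustration of that last step, here is a minimal sketch of how such a slope could be estimated, assuming hypothetical trial-level data (the column names, the simulated responses, and the logistic-regression approach are ours for illustration, not the study’s actual analysis code):

```python
# Minimal sketch (illustrative only): estimating the slope of recognition
# accuracy across emotional-intensity levels, separately for each emotion.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per recording judged by a listener.
# 'intensity' is the proportion of the full-intensity expression in the blend
# (0.1 to 1.0); 'correct' codes whether the intended emotion was chosen.
rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "emotion": np.repeat(["anger", "sadness", "happiness"], 200),
    "intensity": np.tile(np.linspace(0.1, 1.0, 10), 60),
})
trials["correct"] = rng.binomial(1, 0.2 + 0.7 * trials["intensity"])

# Fit a logistic regression of accuracy on intensity for each emotion;
# the intensity coefficient serves as that emotion's accuracy "slope".
for emotion, sub in trials.groupby("emotion"):
    fit = smf.logit("correct ~ intensity", data=sub).fit(disp=0)
    print(f"{emotion}: slope = {fit.params['intensity']:.2f}")
```

In this framing, a steeper slope means that listeners’ accuracy climbs quickly once a recording contains even a modest amount of the intended emotion.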

Varying curves of increasing recognition accuracy (on the y-axis) across increasing levels of emotional intensity (on the x-axis)

We found that listeners’ ability to identify each emotion did not increase linearly with the recordings’ intensity level. Even though the acoustic characteristics of the recordings changed linearly with intensity, listeners’ accuracy did not track those changes in a straightforward way. Further, the exact pattern of change in accuracy across intensity levels varied for different emotions. For example, anger was easy for listeners to recognize, even at low intensity levels (see blue arrow in the figure above). In contrast, listeners didn’t start to be able to identify sadness until it was expressed with more than 50% emotional intensity (see green arrow).

These results tell us that some emotions may be much harder than others to identify at low intensities. This is important, because those low-intensity versions are likely more common than full-intensity expressions in the real world. The findings of the current study point us to emotion types that may be particularly challenging to identify at low intensities, such as happiness. They also make us think twice about assuming that listeners’ accuracy is a direct reflection of linear changes in the vocal cues associated with different emotions.

To read more, visit: https://psycnet.apa.org/record/2021-86253-001

Youth with and without epilepsy differ in 'social brain' connectivity during a social cognitive task, but not at rest

Deficits in social cognition are common in people with epilepsy. This means that individuals with epilepsy may struggle to understand others’ intentions in social situations, may find it harder to interpret others’ facial expressions or tone of voice in social interactions, or may have trouble forming social connections with others. We know that epilepsy is associated with atypical functioning in regions of the brain that are thought to be involved in social cognition, but most existing research has examined patterns of brain connectivity at rest (i.e., when participants are not engaged in any particular task). The current study investigated whether youth with epilepsy showed different brain connectivity patterns in these ‘social brain’ areas when participants were completing a social cognition task. To answer this question, we compared brain connectivity within the “mentalizing network” (involved in theory of mind and other social cognitive functions) and within a network centered around the amygdala (involved in processing salient social information) in youth with and without epilepsy, while they were either completing a facial emotion recognition task or were at rest.

Compared to typically-developing youth, youth with epilepsy showed weaker connectivity between the left posterior superior temporal sulcus and the medial prefrontal cortex of the brain when seeing facial expressions in the emotion recognition task. These regions are thought to work together during social cognitive tasks, so decreased connectivity between these areas may indicate that these network nodes aren't communicating as efficiently or as well as they could be in youth with epilepsy. On the flip side, we found that youth with epilepsy had greater connectivity within the temporal lobe (between the left temporo-parietal junction and the anterior temporal cortex, to be precise) compared to typically-developing adolescents. This pattern was associated with poorer accuracy on the facial emotion recognition task. It is possible that youth with epilepsy are using a different 'strategy' in the task that results in different brain connectivity patterns in the temporal lobe, but we would need to test this possibility explicitly in future studies. In contrast to these findings, youth with and without epilepsy did not differ in their connectivity within either social brain network during resting-state scans (i.e., when they weren't doing a task).


Overall, our findings highlight that there may be important differences in how regions associated with social cognition are connected to one another during social cognitive tasks in youth with and without epilepsy. Although this is only a first step in understanding this phenomenon, our results indicate that looking at neural connectivity patterns during relevant tasks may be important to understanding the association between epilepsy and social cognitive deficits.

Find out more and read the paper here: https://www.sciencedirect.com/science/article/abs/pii/S0028393221001330

Ongoing maturation of neural responses to voices—but not faces—in adolescence

With age, we become better able to understand the meaning behind others’ nonverbal cues. In other words, we become more skilled at identifying others’ emotional states or attitudes based on their facial expressions, postures or gestures, or tone of voice. Previous research has found that the ability to recognize emotions in vocal cues (i.e., the way in which someone says something, beyond their verbal content) follows a more protracted developmental trajectory throughout adolescence than the same ability with facial expressions. Do the neural representations of these two types of nonverbal cues show a similar difference in their maturational trajectories?

The current study examined age-related changes in a) facial and vocal emotion recognition skills, and b) neural activation to both types of stimuli in adolescence. A group of 8- to 19-year-old participants were asked to complete both a facial and a vocal emotion recognition task—in which they were asked to identify the intended emotion in other teenagers’ facial expressions and voices—while undergoing functional magnetic resonance imaging (fMRI). We found that accuracy on the emotion recognition tasks began to plateau around age 14 for faces, but continued to increase linearly throughout adolescence for voices. At a neural level, a variety of subcortical regions, visual-motor association areas, prefrontal regions, and the right superior temporal gyrus responded to both faces and voices. While there were no age-related changes in activation within these areas when responding to faces, prefrontal regions (specifically, the inferior frontal cortex and dorsomedial prefrontal cortex) were more engaged when hearing voices in older adolescents. These findings suggest that vocal emotion recognition skills, and the associated neural responses in frontal regions of the brain, continue to develop throughout adolescence, following a more protracted trajectory than other social cognitive skills like the ability to interpret facial emotions. This may make it harder for teenagers to navigate social situations in which they must rely on vocal cues—for instance, when others are wearing masks.


Social information processing in pediatric anxiety and depression

Anxiety and depression increase in prevalence during the teenage years. Adolescence is considered a sensitive period for the development of internalizing disorders, due in part to the dramatic changes in body, brain, and behaviour that occur at this time. Shifting interactions between limbic and executive function networks during adolescence may underlie maladaptive processing of positive and negative stimuli. For instance, typically-developing youth tend to show enhanced amygdala and reduced prefrontal response to emotional stimuli. In contrast, youth with anxiety and depression show heightened activation to negative or threatening stimuli in both regions, and reduced amygdala response to positive stimuli. These neural patterns translate to heightened processing of threat cues, but reduced response to reward—both of which are hallmarks of anxiety and depression. Alterations in social information processing in the brain may have downstream effects on behaviour and psychosocial well-being. Early intervention to ameliorate deficits in social information processing may be effective in preventing the long-term consequences of pediatric affective disorders.

Read this chapter of the Oxford Handbook of Developmental Cognitive Neuroscience here.


Emotional faces elicit less activation in face-processing regions of the brain in youth with epilepsy


Youth with epilepsy sometimes report having a hard time forming and maintaining relationships with others. This may be due to a variety of factors, but some research has suggested that deficits in “emotion recognition”—or, the ability to interpret emotion in others’ facial expressions or tone of voice—may make it more challenging for youth with epilepsy to navigate social interactions. Difficulties in emotion recognition tend to be more pronounced in adults with childhood-onset epilepsy, suggesting that recurrent seizures may be disrupting the integrity of brain circuits involved in this social-cognitive skill in youth. However, although previous studies had investigated the neural representation of emotional faces in adults with epilepsy, none had examined the neural correlates of emotional face processing in youth with epilepsy. The current study examined whether emotional faces elicited different neural responses in the brains of youth with and without epilepsy—and whether such differences were related to deficits in emotion recognition. Participants completed a facial emotion recognition task, in which they were asked to identify the emotion in other teenagers’ facial expressions, while undergoing functional magnetic resonance imaging (fMRI). We found that, compared to typically-developing youth, youth with epilepsy were less accurate in the facial emotion recognition task. In addition, youth with epilepsy showed blunted activation in the fusiform gyrus and right posterior superior temporal sulcus—two regions that play an important role in the processing of faces and social information. Reduced activation in these regions was correlated with poorer accuracy in the facial emotion recognition task. Together, our results suggest that reduced engagement of brain regions involved in processing socio-emotional signals may contribute to difficulties in social cognition experienced by youth with epilepsy.

Read more about the study here.

Brain characteristics associated with symptoms of anxiety/depression in youth with epilepsy


Compared to the general population or to other groups of people with chronic health conditions, individuals with epilepsy are more likely to also experience internalizing disorders (i.e., depression and/or anxiety) during their lifetime. In adults, these comorbid conditions are thought to be indexed by specific neural biomarkers, including irregularities in the structure and function of frontal and temporal regions of the brain. However, less work has investigated whether similar patterns may be noted in children and adolescents with epilepsy, who are at risk of developing depression and/or anxiety. The current study capitalized on the fact that youth with epilepsy often undergo MRI (magnetic resonance imaging) scans, PET (positron emission tomography) scans, and psychological assessments as part of their clinical evaluations. We examined whether youth with epilepsy who experienced clinically-significant levels of internalizing problems had different patterns of brain structure and/or function than youth who scored in the normal range for such symptoms. We found that 42% of youth in our sample scored in the clinical range for internalizing symptoms on a parent-report measure of psychological well-being (Child Behavior Checklist; Achenbach, 2001)—suggesting that anxiety and depression may be a common concern for many young patients. Symptoms were not predicted by characteristics of the illness (like age of seizure onset or location of seizure focus) or of the patient (like age or gender). However, youth in the clinical range showed reduced cortical volume overall, as well as cortical thinning and decreased function (measured via glucose uptake) in bilateral parietal/occipital lobes and left temporal regions, compared to youth in the normal range. A follow-up classifier analysis demonstrated that these brain characteristics were predictive of internalizing problems at an individual level. Taken together, our findings suggest that children and adolescents with epilepsy who show widespread reductions in cortical thickness and neural function in clinical evaluations may benefit from intensified psychological evaluation and support for possible mood and anxiety symptoms.
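For readers curious about what a classifier analysis of this kind can look like in practice, here is a minimal sketch using hypothetical features and labels and an off-the-shelf cross-validated logistic regression (the variable names, simulated data, and specific model are illustrative assumptions, not the pipeline used in the study):

```python
# Minimal sketch (illustrative only): an individual-level classifier analysis
# asking whether brain measures predict clinically-significant internalizing
# symptoms better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_patients = 60

# Hypothetical features per patient: e.g., total cortical volume, regional
# cortical thickness, and regional glucose uptake from PET.
X = rng.normal(size=(n_patients, 4))
# Hypothetical labels: 1 = scored in the clinical range for internalizing symptoms.
y = rng.binomial(1, 0.42, size=n_patients)

# Cross-validated performance estimates how well the brain measures classify
# individual patients; values reliably above 0.5 AUC indicate predictive signal.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```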

Read more at: https://www.ncbi.nlm.nih.gov/pubmed/31882324

Loneliness in adolescents is associated with the recognition of vocal fear and friendliness


During the teenage years, adolescents typically begin forming complex social networks and spending more time with friends than with their parents. However, not all teenagers experience the same level of social connection at this age. Feelings of loneliness can be hard to manage, and may impact the way in which teenagers interpret social information. Previous research has shown that lonely individuals are highly attuned to social information, including both cues of social threat and signals of affiliation. Relatedly, loneliness has been linked to better recognition of negative emotions conveyed by others’ facial expressions. However, little is known about whether loneliness has similar associations with the interpretation of non-facial information, such as others’ tone of voice. To answer this question, we asked 11- to 18-year-old adolescents to report on their feelings of loneliness and to complete a vocal emotion recognition task, in which they were asked to select the emotion they thought was being conveyed in recordings of emotional voices. Contrary to our expectations, we found that loneliness was linked to poorer recognition of fear (a negative emotion), but better recognition of friendliness (an affiliative expression), in others’ voices. We speculated that differences from previous findings may stem from the differential timecourse over which vocal emotion unfolds: though negative cues may initially grab listeners’ attention, lonely individuals’ tendency to avoid threat may interfere with their accurate interpretation of this type of social cue. This work provides some evidence that youth’s cognitive response to social information is likely relevant to their social experiences, but highlights the importance of extending our assessment of social information processing to non-facial modalities.

More details about this work can be found here: https://tandfonline.com/doi/full/10.1080/02699931.2019.1682971

Morningstar, M., Nowland, R., Dirks, M.A., & Qualter, P. (2019). Links between feelings of loneliness and the recognition of vocal socio-emotional expressions in adolescents. Cognition & Emotion. doi: 10.1080/02699931.2019.1682971

Age-related changes in adolescents’ neural connectivity and activation when hearing vocal prosody


The ability to understand others’ emotional state based on their tone of voice (vocal emotional prosody) develops throughout adolescence. Does neural activation to vocal prosody also change with age during the teenage years? We asked 8- to 19-year-old youth to complete a vocal emotion recognition task, in which they had to identify speakers’ intended emotion based on their prosody, while in the MRI scanner. Age was associated with greater functional activation in regions of the frontal lobe often linked to language processing and emotional categorization. Further, age was linked to greater structural and functional connectivity between these frontal regions and the temporo-parietal junction, an area crucial for social cognition. These maturational changes were associated with greater accuracy in identifying the intended emotion in others’ voices, suggesting that these neurodevelopmental processes may be supporting the growth of vocal emotion recognition skills during adolescence.