Emotion Perception: CNN Insights into Human Brain

In a paper published in the journal PLoS Computational Biology, researchers investigated the origin of affect-specific visual representations in the human brain. They combined convolutional neural network (CNN) models of the ventral visual cortex with datasets of affective images to shed light on the debate over whether such representations are intrinsic to the visual cortex or arise from feedback from emotion-processing structures.

Study: Emotion Perception: CNN Insights into Human Brain. Image credit: Triff/Shutterstock

The findings revealed that artificial neurons in all layers of the CNN models responded selectively to neutral, pleasant, or unpleasant images, and that manipulating these neurons affected emotion recognition performance, indicating an intrinsic ability of the visual system to represent the affective significance of visual input. The work also positions CNNs as a promising platform for testing neuroscientific theories.

Related Work

Previous research on human emotion has relied mainly on cognitive experiments and recordings of brain activity, yet how the brain translates visual stimuli into subjective emotional judgments remains a puzzle. With the emergence of artificial neural networks (ANNs), particularly CNNs, there is an opportunity to tackle this puzzle with modeling approaches. CNNs, which resemble the hierarchical organization of the visual system, have shown impressive performance in object recognition.

Importantly, CNNs trained on tasks such as ImageNet classification exhibit selectivity for stimuli beyond their training data, suggesting capabilities similar to those of the human visual system. Meanwhile, the debate over the role of the visual cortex in processing emotional content persists: some argue for an intrinsic ability of the visual cortex to represent emotions, while others suggest these representations arise from feedback from emotion-modulating structures such as the amygdala.

Emotion Analysis Study

This study utilized two sets of widely used affective images: the International Affective Picture System (IAPS) library, which comprises 1,182 images across various emotion subclasses, and the Nencki Affective Picture System (NAPS) library, containing 1,356 images. Each image was rated for valence, indicating whether it expressed unpleasant, neutral, or pleasant emotions.

Researchers categorized the images into three main groups based on their valence scores, applying soft thresholds for ambiguous cases. After categorization, they converted the images to grayscale to remove color as a confounding factor.
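The preprocessing described above can be illustrated with a short sketch. The following Python snippet is not the authors' code: the valence threshold values and the PIL-based grayscale conversion are assumptions chosen for demonstration, and the paper's soft-threshold handling of ambiguous images is only approximated by a simple neutral band.

```python
from PIL import Image

def valence_category(valence, low=3.5, high=6.5):
    """Map a 1-9 valence rating to an emotion class (threshold values are hypothetical)."""
    if valence <= low:
        return "unpleasant"
    if valence >= high:
        return "pleasant"
    return "neutral"

def load_grayscale(path):
    """Convert an image to grayscale to remove color as a confounding factor,
    replicating the single channel so it matches VGG-16's three-channel input."""
    return Image.open(path).convert("L").convert("RGB")
```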

The CNN model used in this study was VGG-16, known for its object recognition capabilities. VGG-16 consists of 13 convolutional layers followed by three fully connected layers, the last of which contains 1,000 units for recognizing different visual objects. Neurons in each layer, characterized by rectified linear unit (ReLU) activation functions, were analyzed for their responses to affective images. Researchers used the model in two ways: first, to assess emotion selectivity in neurons of the network pre-trained for object recognition on ImageNet, and second, to test the functionality of emotion-selective neurons by training a separate emotion recognition layer.
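As a rough illustration of how layer-wise neuron responses can be collected from a pre-trained VGG-16, the sketch below uses PyTorch and torchvision forward hooks. The layer naming and the random tensor used as a stand-in for a preprocessed affective image are assumptions for demonstration, not details taken from the study.

```python
import torch
import torchvision.models as models

# Load VGG-16 with ImageNet-pretrained weights and switch to evaluation mode.
vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # post-ReLU responses of this layer
    return hook

# Attach a hook to every ReLU in the convolutional stack to record its output.
for idx, layer in enumerate(vgg16.features):
    if isinstance(layer, torch.nn.ReLU):
        layer.register_forward_hook(make_hook(f"features_relu_{idx}"))

with torch.no_grad():
    _ = vgg16(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed affective image
```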

Two measures were employed to evaluate emotion selectivity: a tuning value and a selectivity index. The tuning value captured a neuron's normalized response strength to a particular emotion, while the selectivity index measured the difference between its responses to different emotions. A neuron was considered selective for an emotion if its response to images of that emotion was the highest across all emotion categories. To guard against spurious identification of emotion-selective neurons, researchers applied two additional analyses: they rank-ordered neurons by selectivity index, and they examined the overlap between the sets of neurons identified from the IAPS and NAPS datasets.
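The two measures can be sketched as follows. The exact formulas used in the paper may differ; in this illustrative version the tuning value is a neuron's mean response to one emotion class normalized by its summed mean responses, and the selectivity index contrasts the preferred emotion with the average of the remaining ones.

```python
import numpy as np

def tuning_values(responses):
    """responses: dict mapping emotion label -> array of a neuron's responses
    to images of that class. Returns a normalized response strength per emotion."""
    means = {emo: float(np.mean(r)) for emo, r in responses.items()}
    total = sum(means.values()) + 1e-12
    return {emo: m / total for emo, m in means.items()}

def selectivity_index(responses):
    """Difference between the preferred emotion's mean response and the mean of
    the other emotions, normalized by their sum."""
    means = np.array([np.mean(r) for r in responses.values()])
    preferred = means.max()
    others = np.delete(means, means.argmax()).mean()
    return (preferred - others) / (preferred + others + 1e-12)

# Toy usage with made-up responses for a single neuron:
neuron = {"unpleasant": np.array([0.2, 0.3]),
          "neutral": np.array([0.1, 0.2]),
          "pleasant": np.array([0.9, 1.1])}
print(tuning_values(neuron), selectivity_index(neuron))
```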

Testing the functionality of emotion-selective neurons involved two approaches: lesioning and attention enhancement. For lesioning, emotion-selective neurons and, as a control, randomly selected neurons were silenced by setting their outputs to zero, and the resulting changes in emotion recognition performance were observed. This approach assessed the functional importance of emotion-selective neurons for recognizing emotional content in images.
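A minimal sketch of both perturbations, assuming PyTorch and torchvision rather than the authors' code: a forward hook on a chosen convolutional layer either zeroes the selected units (lesioning) or multiplies their outputs by a gain factor greater than one (attention enhancement). The layer index, channel numbers, and gain value below are hypothetical.

```python
import torch
import torchvision.models as models

vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def perturb_hook(channel_indices, gain):
    """gain=0.0 lesions the targeted units; gain>1.0 enhances them."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, channel_indices] *= gain
        return output  # returned value replaces the layer's output
    return hook

# Hypothetical emotion-selective channels in the final convolutional ReLU
# (index 29 in torchvision's VGG-16 feature stack).
selective_units = [12, 87, 203]

lesion = vgg16.features[29].register_forward_hook(perturb_hook(selective_units, gain=0.0))
# ... re-run the emotion-recognition evaluation to measure the performance drop ...
lesion.remove()

enhance = vgg16.features[29].register_forward_hook(perturb_hook(selective_units, gain=2.0))
# ... re-run the evaluation to measure the performance gain ...
enhance.remove()
```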

CNN Emotion Study

The study investigated whether emotion selectivity could emerge naturally in a CNN model trained for visual object recognition. The researchers employed the Visual Geometry Group (VGG)-16 model pre-trained on ImageNet data and analyzed the responses of neurons in its different layers to images from the IAPS and NAPS affective picture datasets.

Using tuning curves and selectivity index (SI) calculations, they assessed the emotion selectivity of neurons across the layers of the CNN model. The results revealed increasing emotion selectivity from earlier to deeper layers, a trend that was particularly pronounced for responses to images from the IAPS dataset.

Additionally, researchers observed that emotion selectivity generalized across both IAPS and NAPS datasets, indicating a consistent pattern beyond chance. This generalizability was further supported by comparing pre-trained VGG-16 with randomly initialized models, emphasizing the role of training for object recognition in developing emotion selectivity.

Researchers conducted lesion and attention enhancement analyses to evaluate the functional significance of emotion-selective neurons. Lesioning emotion-selective neurons led to significant performance declines in emotion recognition tasks, particularly in deeper layers of the CNN model, highlighting their importance.

Conversely, enhancing the gain of emotion-selective neurons improved emotion recognition performance, especially in the middle and deeper layers, further underlining their functional relevance in the CNN model's processing of emotional content within images.

Conclusion

To sum up, this study demonstrated the emergence of emotion selectivity in a CNN model trained for visual object recognition using the VGG-16 architecture pre-trained on ImageNet data. Analysis of neuron responses across different layers revealed increasing emotion selectivity from earlier to deeper layers, with generalizability observed across multiple datasets. The functional significance of emotion-selective neurons was confirmed through lesion and attention enhancement analyses, highlighting their crucial role in emotion recognition tasks within the CNN model.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

