Do Facial Expression and Gender of a Person Affect Memory Recollection?

By Jedsada Thavornfung

University of Texas at Austin

PSY 458 Experimental Psychology

February 2022 - May 2022

Advisor: Dr. David Gilden


Programs:

  • Excel - data organization

  • PsychoPy - data logging and gathering

  • JASP - data analysis and visualization

Abstract

Humans are a social species: we meet new people every day, making friends in class or at the workplace. Remembering faces matters for both short- and long-term relationships, because people prefer to associate with someone who can recognize them. Remembering or recognizing faces also shows how closely a person pays attention to others, which can improve social interaction skills. This research investigated whether the facial expression and gender of a stranger influence false memory and how well people memorize unfamiliar faces. Participants completed the experiment on a computer via PsychoPy. First, six images of people with either a happy or a sad facial expression were shown for participants to memorize. Second, the same people were shown with both facial expressions (twelve images). Participants were then asked to identify which images showed the same person with the same expression as in the first step. Both steps were repeated six times (72 test images in total). The results suggest that both male and female participants showed a same-sex bias when the images had a happy facial expression, and an opposite-sex bias only when the images had a sad facial expression. Thus, facial expression influences false memory and facial recognition, depending on the sex or gender of the stranger.

Introduction

In today's society, social interaction and communication skills are increasingly significant for survival and development, not only for humans but for all species. All species can survive and thrive through interaction with both their own and different species (Hebblewhite & Merrill, 2008). Family, for instance, is a crucial foundation that provides individuals with essential skills before they face the real world. Families play a significant role in teaching language for communication and guiding individuals on how to interact with others. Social skills become vital once a person steps outside of their family and into broader social settings such as school, work, relationships, and friendships. Therefore, remembering someone met for the first time can be a crucial step in blending into society. This raises the question: do people exhibit biases when meeting strangers for the first time? For example, are men better at remembering male faces than female faces? Are women better at remembering female faces than male faces? Does a person’s facial expression affect how well they are remembered? Could this relate to false memory formation? Accurately remembering faces is crucial for humans, as our species relies on cooperation for survival, success, and development.

Facial expressions significantly influence how people recognize strangers' faces, especially upon first meeting (Calder & Young, 2005). Specific facial expressions, such as sadness, crying, happiness, or smiling, can increase the likelihood of a person being remembered by those around them. Faces that appear more trustworthy, particularly those with happy expressions, are more memorable than faces with less trustworthy expressions like anger or disgust (Sutherland et al., 2017). Cortes et al. (2017) observed an own-sex bias, meaning participants were more likely to remember individuals of the same sex regardless of external factors such as voice, facial expression, or accessories; their findings suggest that own-sex bias is driven more by recollection than by familiarity. False memories are more common for negative scenes or facial expressions, indicating a higher chance of false memory for faces with sad expressions (Zhang et al., 2021). While a person's current mood or emotional state does not typically affect their ability to remember strangers' faces, Storbeck and Clore (2011) found that sad moods or expressions can reduce false memories.

The purpose of this research is to determine if facial expressions and gender influence false memory or the accuracy of recognizing strangers’ faces. The hypothesis is that participants will have fewer false alarms for stranger faces with sad facial expressions. Participants are also expected to demonstrate own-sex bias, remembering strangers of the same sex better than those of the opposite sex. Additionally, sad facial expressions are predicted to be more memorable, as they are less commonly seen in strangers compared to happy expressions.

Method

Participants

There were 20 participants in total (n = 20), ranging from 20 to 37 years old; 10 were male (n = 10) and 10 were female (n = 10). All participants were friends or classmates of the researcher studying at the University of Texas at Austin who volunteered for the experiment. The sample was therefore not perfectly random, because all participants had similar ages and educational backgrounds. No additional demographic data were collected, and no compensation was offered for participation.

Stimuli and Design

The experiment was constructed using PsychoPy, a platform for creating and distributing research programs. The experiment used a within-subject design in which all participants experienced the same condition. There were 72 images in total: 36 images of men and 36 of women, and likewise 36 images with a happy facial expression and 36 with a sad one. Within each gender, 18 of the 36 images showed a happy expression and 18 showed a sad expression. Thus, there were 18 images of men with a happy facial expression, 18 of men with a sad facial expression, 18 of women with a happy facial expression, and 18 of women with a sad facial expression. The 72 images showed 36 different people (18 men and 18 women), each photographed once with a happy and once with a sad expression.

Table 1. Sample images for each of the four categories: men with a happy facial expression, men with a sad facial expression, women with a happy facial expression, and women with a sad facial expression (18 images per category, 72 images in total).
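
As a concrete illustration, the Python sketch below shows one way a stimulus set with this structure could be organized and divided into six trials. The file names, shuffling scheme, and variable names are hypothetical; the actual experiment was assembled directly in PsychoPy.

    # Hypothetical sketch of the 72-image set: 36 identities (18 men,
    # 18 women), each with one happy and one sad photo.
    import random

    identities = [f"man_{i:02d}" for i in range(1, 19)] + \
                 [f"woman_{i:02d}" for i in range(1, 19)]
    random.shuffle(identities)

    trials = []
    for t in range(6):                          # six trials, six people each
        people = identities[t * 6:(t + 1) * 6]
        # Phase 1: each person appears with one randomly chosen expression.
        study = [f"{p}_{random.choice(['happy', 'sad'])}.png" for p in people]
        # Phase 2: both expressions of every studied person (12 images).
        test = [f"{p}_{e}.png" for p in people for e in ("happy", "sad")]
        random.shuffle(test)
        trials.append((study, test))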

Procedure

Participants took the experiment on site on the researcher's laptop, a MacBook Air (2019). In PsychoPy, each of the six trials consisted of two phases. In the first phase, six images of different people were shown in random order, using PsychoPy's built-in randomization option, and participants were asked to memorize them. In the second phase, twelve images were shown: the six images from the first phase plus six images of the same people with the other facial expression. Participants were asked to identify which images in the second phase were the same as in the first phase, meaning the same person with the same facial expression. If participants decided that an image had already been shown in the first phase, they pressed "o" (old image); if they decided it had not, they pressed "n" (new image). Participants could not skip images. In both phases, each image was displayed for only 1.5 seconds. Across the six trials, participants therefore saw 36 images in the first phase (6 per trial) and all 72 images in the second phase (12 per trial).
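
To make the procedure concrete, the sketch below expresses one trial in PsychoPy's Python (Coder) interface. This is a minimal, hypothetical rendering: the window settings, function names, and image paths are placeholders, and the real experiment was built with PsychoPy's own components.

    # Minimal PsychoPy (Coder-style) sketch of one trial.
    from psychopy import visual, core, event

    win = visual.Window(fullscr=True, color="grey")

    def show_image(path, duration=1.5):
        """Display one face image for 1.5 seconds, as in the experiment."""
        stim = visual.ImageStim(win, image=path)
        stim.draw()
        win.flip()
        core.wait(duration)

    def run_trial(study_images, test_images):
        """Phase 1: memorize six faces; phase 2: judge twelve as old or new."""
        responses = []
        for path in study_images:               # phase 1: memorization
            show_image(path)
        for path in test_images:                # phase 2: recognition test
            show_image(path)
            key = event.waitKeys(keyList=["o", "n"])[0]  # "o" old, "n" new
            responses.append((path, key))
        return responses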

Results

All data were recorded via PsychoPy. Instead of computing the percentage of correct answers for each participant, the researcher used a score modeled on the d-prime measure from signal detection theory. Taking the false-alarm rate into account guards against data from participants who pressed keys at random without knowing the answers. The score was computed as:

d-prime = (hit rate) − (false-alarm rate)

A "hit" is when a participant pressed "o" and the image was old (shown in the first phase): a correct answer. A "false alarm" is when a participant pressed "o" but the image was new (never shown in the first phase): an incorrect answer. "n" responses were disregarded, since they contribute to neither the hit rate nor the false-alarm rate. Each participant received their own d-prime value, where 1 is the highest possible score (all hits, no false alarms) and -1 is the lowest (no hits, all false alarms).
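
The scoring rule can be sketched in a few lines of Python; the response format (a list of (was_old, key) pairs) and the function name are illustrative, not the actual analysis script:

    # Score = hit rate minus false-alarm rate, computed from "o" presses
    # only, so it runs from -1 (no hits, all false alarms) to 1 (all hits,
    # no false alarms).
    def dprime_score(responses):
        """responses: list of (was_old, key) pairs for one condition."""
        old = [key for was_old, key in responses if was_old]
        new = [key for was_old, key in responses if not was_old]
        hit_rate = sum(k == "o" for k in old) / len(old)
        fa_rate = sum(k == "o" for k in new) / len(new)
        return hit_rate - fa_rate

    # Example: 5 of 6 old images called old, 2 of 6 new images called old.
    demo = ([(True, "o")] * 5 + [(True, "n")]
            + [(False, "o")] * 2 + [(False, "n")] * 4)
    print(dprime_score(demo))                   # 5/6 - 2/6 = 0.5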


As shown in Figure 1, the descriptive plot shows a higher d-prime value when the images had a sad facial expression rather than a happy one. Images of a woman with a sad facial expression produced a higher hit rate than images of a man with a sad facial expression. The images with happy facial expressions, however, yielded approximately the same d-prime for images of men and of women. Figure 1 includes all participants, with no outliers removed.


Figure 1. The d-prime value (x-axis) for all participants, where "Gender" indicates the gender of the person in the image and "Expression" indicates that person's facial expression. The graph pools all participants regardless of sex and includes outliers.


In Figure 2, the graphs are split by participant sex. Like Figure 1, Figure 2 includes all participants, with no outliers removed. As the figure shows, male and female participants had different d-prime patterns. Female participants (left graph) recognized and recalled images with sad facial expressions better than those with happy expressions, with only small differences between images of men and of women. They did, however, perform better for a woman with a happy facial expression than for a man with a happy facial expression, and tended to have higher d-prime values for images of women than for images of men. Male participants showed the opposite trend across happy and sad expressions: they performed best when the image showed a man with a happy facial expression or a woman with a sad facial expression.


Figure 2. Comparison of d-prime values (x-axis) between male and female participants, where "Gender" indicates the gender of the person in the image and "Expression" indicates that person's facial expression. The graph includes outliers.


In summary, male and female participants showed different biases when memorizing faces. Female participants tended to recognize sad facial expressions more accurately than happy ones, and memorized women's faces better than men's faces regardless of expression. Male participants tended to recognize women with sad facial expressions better than women with happy ones, while recognizing men with happy facial expressions better than men with sad ones. Nevertheless, Figures 1 and 2 both show large error bars, because they include participants who performed substantially better or worse than the rest (outliers). In the further analysis below, the outliers are taken into consideration and removed, in the hope of decreasing the error bars.

Discussion

Information about false memories can be applied to real-world problems by identifying and explaining the connections between factors. This research investigated how well participants could remember and recognize strangers' faces as a function of the strangers' gender and facial expression. The findings could help improve first impressions in situations where you want someone to remember your face, such as a job interview, a date, or making friends. According to Figure 2, the outcomes differ by participant sex. Female participants showed the expected results: they remembered women's faces better than men's (an own-sex bias) and sad facial expressions better than happy ones. Male participants, by contrast, showed an own-sex bias only for happy facial expressions. Because the error bars overlap, however, the trends may not be clearly distinguishable from one another; for instance, male participants might produce false alarms at the same rate for women's faces regardless of facial expression.


Although some patterns can be observed in Figures 1 and 2, both figures contain large error bars that overlap one another, so the results may be somewhat unreliable. In Figure 3, the outliers were removed from the data: individuals with a d-prime equal to 1 or with a negative d-prime were treated as outliers and excluded from the overall analysis. The results without outliers differ slightly from those with outliers. Female participants now showed less false memory for sad facial expressions on both men's and women's faces, not just women's. Although the error bars for happy versus sad expressions still overlap slightly for images of women, the graph shows a clear separation for images of men: female participants had a significantly higher d-prime for men with sad facial expressions than for any other category. Male participants showed approximately the same result as before, recognizing women with sad facial expressions better than women with happy ones, and men with happy facial expressions better than men with sad ones. Without the outliers, however, male participants had approximately the same d-prime for images of men regardless of expression; the clear distinction between expressions appeared only for images of women.


Figure 3. Comparison of d-prime values (x-axis) between male and female participants, where "Gender" indicates the gender of the person in the image and "Expression" indicates that person's facial expression. Outliers are excluded.
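
The exclusion rule above amounts to keeping only scores in the interval [0, 1). A small pandas sketch with made-up numbers (the real values came from the PsychoPy logs):

    import pandas as pd

    # Hypothetical per-participant scores, for illustration only.
    df = pd.DataFrame({"participant": [1, 2, 3, 4, 5],
                       "dprime": [0.67, 1.00, -0.17, 0.33, 0.50]})

    # Exclude outliers: scores equal to 1 or below 0, per the rule above.
    kept = df[(df["dprime"] >= 0) & (df["dprime"] < 1)]
    print(kept)                                 # participants 1, 4, 5 remain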


In conclusion, the hypothesis that participants would have fewer false alarms for stranger faces with sad facial expressions is not supported once the outliers are removed. As Figure 3 shows, both male and female participants exhibited a same-sex bias only when the images had a happy facial expression, contradicting the hypothesis. The same-sex bias for happy expressions is more noticeable in male participants. The researcher hypothesizes that the same-sex bias occurs only with happy facial expressions, especially for male participants, because participants may associate a happy expression with good memories; it may be easier to connect a happy face to one's own good memories when the face shares one's biological sex. Good or happy memories tend to last longer than traumatic memories (Cromheeke & Mueller, 2016). Thus, participants might imagine themselves being happy when looking at a person with a happy expression, which could help them remember and recognize those faces better. However, both male and female participants showed an opposite-sex bias, and fewer false alarms, when the images had sad facial expressions. Here the researcher hypothesizes that participants memorized sad expressions on opposite-sex strangers better because sad expressions are less commonly seen on the opposite sex, making those faces more memorable. An evolutionary perspective may also be relevant: humans tend to extend empathy toward the opposite sex, whether for friendship or for romantic or sexual relationships (Ciarrochi et al., 2017). Empathy toward the opposite sex might therefore help participants remember opposite-sex strangers better when they display a sad expression, which signals a vulnerable state. This result does not entirely agree with any previous study, because this research considered the facial expression and gender of the images jointly, which is also why the results differ between male and female participants.


This study has several limitations. First, the image set could be more diverse: the 72 selected images contained only Black and white faces. The race of the participants and of the people in the images might influence false alarms; for instance, people might remember faces of their own race or ethnicity better. Second, the research could recruit more participants to increase the variety of the sample, particularly in age: participants in this study were all undergraduates at the University of Texas at Austin, and the results could change with participants' age or educational background. Last, a future experiment could increase the total number of images, for instance from 72 to 120. In the early stages of this experiment the researcher used 120 images, but the task became too challenging for participants, which is why only 72 images were used in the end. A future experiment should find a way to return to the original 120 images without overwhelming participants.

References

Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6(8), 641–651. https://doi.org/10.1038/nrn1724

Ciarrochi, J., et al. (2017). When empathy matters: The role of sex and empathy in close friendships. Journal of Personality, 85(4), 494–504.

Cortes, D. S., Laukka, P., Lindahl, C., & Fischer, H. (2017). Memory for faces and voices varies as a function of sex and expressed emotion. PLOS ONE, 12(6), e0178423. https://doi.org/10.1371/journal.pone.0178423

Cromheeke, S., & Mueller, S. C. (2016). The power of a smile: Stronger working memory effects for happy faces in adolescents compared to adults. Cognition and Emotion, 30(2), 288–301.

Hebblewhite, M., & Merrill, E. (2008). Modelling wildlife–human relationships for social species with mixed-effects resource selection models. Journal of Applied Ecology, 45(3), 834–844. https://doi.org/10.1111/j.1365-2664.2008.01466.x

Storbeck, J., & Clore, G. L. (2011). Affect influences false memories at encoding: Evidence from recognition data. Emotion, 11(4), 981–989.

Sutherland, C. A. M., Young, A. W., & Rhodes, G. (2017). Facial first impressions from another angle: How social judgements are influenced by changeable and invariant facial properties. British Journal of Psychology, 108(2), 397–415.

Zhang, W., et al. (2021). Emotional content of the event but not mood influences false memory. Applied Cognitive Psychology, 35(6), 1418–1426.


Data and Additional Files

Let’s connect!

jedsada.thavornfung@gmail.com
