
Cognitive Foundations

Emotions can partially, but only partially, be understood from the biological perspective. To better understand emotions the biological perspective needs to be complemented by both the cognitive and social perspectives. In this second major section of this book, I will focus on the cognitive perspective on emotions.

In general, the cognitive perspective is concerned with describing the mind and how it works. In relation to emotions, the cognitive perspective looks at how the individual’s mind both influences, and is influenced by, emotions. One interesting aspect that the cognitive perspective examines is how emotional information is processed at both an unconscious and a conscious level. Surprisingly, we are consciously aware of only a small portion of the information that our brain/mind initially processes about the world. For example, at the very beginning of visual perception, approximately 130 million rod and cone cells in the retina of our eyes register visual information. However, only 1-2 million axons leave the eyes via the optic nerve. Thus, information gets condensed and selected at the very beginning of perceptual processing. Similarly, we can only consciously think of so many things at any given time; our conscious awareness has a limited capacity. As a result, our mind sometimes works on a problem that is bothering us at an unconscious level, particularly when our conscious thoughts have been pulled to some other matter. (Have you ever suddenly thought of an answer, without at that moment consciously trying, to a question that you gave up trying to answer several hours ago?) So, our mind processes some perceptual and cognitive information at an unconscious (automatic) level. However, while I will draw a distinction between unconscious and conscious processing, keep in mind that the exact boundary between unconscious and conscious information can be hazy. For example, the threshold between unconscious and conscious detection of stimuli (visual, auditory, or otherwise) can be altered by factors such as motivation or fatigue.

The cognitive perspective concerning emotions tries to answer a host of other questions. For example, what exactly are we processing? How do we convey our emotions? How can we use our thinking to control our emotions? Do our emotional experiences affect our memory of events? Do the particular emotions we experience influence the decisions we make in our lives? This section on cognitive foundations will attempt to answer, at least partially, each of these questions. However, our introduction to the cognitive perspective starts with the process of attention. After all, our mind cannot react to something that is going on in front of us if we are not paying attention!

Attention and Emotion

As part of the process of visual perception, we have unconscious biases that cause us to focus on particular aspects of the world, sometimes to the exclusion of other aspects of visual information. One of these biases is a tendency to focus on emotional information. Simply put, emotional stimuli are attention grabbing (Ni et al., 2011). These emotional stimuli can be scenes or objects that evoke emotions (by the stimuli being associated with emotions in our minds), or these stimuli can be other people’s emotional facial expressions (Biggs, Kreager, Gibson, Villano, & Crowell, 2012) or body posture (Bannerman, Milders, & Sahraie, 2010). We sometimes process these things even when they are outside of our present focus of attention. For example, we appear able to process facial expressions of anger versus happiness even when most of our attentional resources are engaged in completing another task (Shaw, Lien, Ruthruff, & Allen, 2011). When pictures of facial displays of emotions are presented it appears that attentional resources are automatically allocated to processing the emotional faces (Roesch, Sander, Mumenthaler, Kerzel, & Scherer, 2010).

Once we attend to an emotional stimulus, it appears that one path the information takes is directly from the occipital lobe to the neural centres that correspond to the appropriate emotion (Hofelich & Preston, 2012). Such early unconscious processing may lead to later enhanced conscious (cortical) processing (Holmes, Vuilleumier, & Eimer, 2003). At the same time, we often focus extra attentional resources on emotional stimuli (Roesch et al., 2010) to further process their possible relevance to us. These shifts of selective attention may be seen (using evoked-potential recording methodology) at 100ms after a stimulus has been presented. Sustained selective attention to emotional stimuli can be seen at 300ms after the stimulus has been presented (Holmes et al., 2003); this is when conscious processing is clearly present (Williams et al., 2004), and it can of course last as long as the stimulus remains important to the individual. The neural system involved in the unconscious processing of emotional stimuli (e.g., fearful faces; Williams et al., 2004) appears to be, in part, separate from the system involved in the conscious processing of emotional stimuli. So, there is a path that stimuli take in perception that goes from very quick unconscious processing to, sometimes, conscious processing. If a stimulus is initially assumed by the mind/brain to be unimportant, then presumably it is not processed (much or at all) further. However, if the stimulus is deemed to be important, it is conceivable that further unconscious processing could proceed in parallel with conscious (deliberate) consideration.


When do we Start Unconsciously Attending to Faces?

One of our attentional biases is to focus on emotional faces. How quickly do we start to process emotional facial expressions? It turns out that this happens very quickly indeed. It only takes 39ms for most individuals to judge subtle facial expressions as displaying a threat. In one study (Bar, Neta, & Linz, 2006), these judgments agreed (r = .55) with those of others who got to see the same faces for a longer (1700ms) exposure time. In fact, for the most highly threatening faces, the correlation between the judgments of the 39ms and 1700ms exposure groups was r = .94. To perhaps give you a better feeling for how fast this is, 100ms is one-tenth of a second. Thus, people make this judgment of facial threat in less than one-tenth of a second.

One study (Kirouac & Dore, 1984) investigated recognition accuracy for people’s facial expressions of basic emotions (pictured on slides) as a function of exposure time (10 - 50ms). It was found that people could recognize a happy face at better than chance levels after an exposure of only 20ms. People could recognize facial expressions of surprise, disgust, anger, sadness, and fear at better than chance levels after only 30ms of exposure. Asymptotic levels of accuracy were reached for happiness at 30ms; for surprise at 40ms; and for disgust, anger, sadness, and fear at 50ms (Kirouac & Dore, 1984). Similarly, another study (Williams et al., 2004) found that most of their participants could detect, with better than chance accuracy, a facial expression of fear at 20ms, with accuracy increasing to 95% at 50ms for this early detection system.
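What "better than chance" means here is performance reliably above the guessing rate (1/6 when choosing among six basic emotions). As a minimal sketch of how such a claim can be checked, the binomial test below computes how likely a given score would be under pure guessing; the participant's trial count and score are assumed for illustration and are not taken from Kirouac and Dore's data.

```python
from math import comb

def binomial_p_value(successes, trials, chance=1/6):
    """One-tailed probability of getting at least `successes` correct
    out of `trials` responses if the respondent were purely guessing."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# A hypothetical participant correctly labels 12 of 30 briefly flashed
# faces when choosing among six emotions (guessing would average 5).
print(binomial_p_value(12, 30) < 0.05)  # → True: better than chance
```

Researchers typically apply a test of this kind to each exposure duration separately, which is how a statement like "recognition was above chance at 30ms" is justified.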

At both a conscious and unconscious level, people pay particular attention to threatening, fearful, or anxiety-provoking things in the world (Sutton & Altarriba, 2011). For example, we are more likely to pay attention to angry faces than to faces with a neutral expression. We are also more likely to follow the fleeting gaze of an angry face than that of a happy or neutral face (Becker, 2010). This makes sense from an evolutionary perspective, as avoiding dangerous things in the world will increase the chance of our survival (Sutton & Altarriba, 2011). So, the perceptual system has a built-in bias to allocate a disproportionate amount of its limited attentional resources to processing negative things in the environment, and this occurs very early in the process of visual perception. Nevertheless, we are best at recognizing happy faces, and at doing so very quickly. From an evolutionary perspective this also makes sense, as we are more likely to choose happy people as mates, and we are more likely to have sex with someone when he or she is happy.

Not surprisingly, there are considerable individual differences in detection sensitivity for facial expressions of emotions. One study (Pessoa, Japee, & Ungerleider, 2005) found that 2 of their 11 participants could actually detect other people’s facial fear expressions that were presented for only 17ms and then immediately masked by a neutral facial expression. (Masking inhibits further processing of the original stimulus.)

Other aspects of people’s faces related to emotions are also processed very quickly, although perhaps not quite as quickly as the facial expressions of basic emotions. It only takes 40ms of viewing a person’s face before being able to make an initial judgment of that person’s social dominance (Rule, Adams, Ambady, & Freeman, 2012). When individuals were presented with faces for 100ms, they were able to judge the faces on aggression, competence, likeability, trustworthiness, and attractiveness (Willis & Todorov, 2006). We should note that it is possible that these judgments might be made even more quickly (i.e., with less visual information), as no exposure of less than 100ms was tested in this latter study.

We appear to be predisposed to attend to physically attractive people early in our attentional sequence. In speed dating we assess a potential date’s physical attractiveness, weight relative to height, age, and race, and decide whether we want to date that individual, all within three seconds (Kurzban & Weeden, 2005). Thus, some of our initial attraction to someone may be based on our very quick assessment of that individual’s attractiveness and personality, and how these match with our own.

When we are in romantic relationships we frequently need to consciously inhibit some automatic attentional biases. Specifically, we tend to inhibit our subjective attention to very attractive others who, potentially, could be an alternative partner (Maner, Rouby, & Gonzaga, 2008). Simply put, this helps to maintain our current relationship, and individuals may question their partner’s intentions when their partner does not inhibit his or her attention to attractive potentially alternative partners (Guys, are you listening to this?!!).

Another thing that we automatically process at a very early perceptual stage, because our visual system is biased to do so, is the gross aspects of body language that signal another person’s emotional state. We need only 33ms to make a decision about someone’s emotional state based on strong body language signals (Stienen, Tanaka, & de Gelder, 2011). The Ginsburg et al. (1977) finding (discussed earlier on pp. 6-7), that a child’s nonverbal behaviour predicts the end of a fight without observers being aware of this, demonstrates that our processing of the body language of others can guide our own behaviour without our conscious awareness that this is so.

We sometimes also unconsciously process and then mimic the body posture, gestures, tones of speech, or facial expression of others (Uleman, Saribay, & Gonzalez, 2008). We do this when we socialize with others. The feedback from our facial muscles (remember the facial feedback hypothesis?) allows us to feel something of what others are feeling. It helps us to connect with others, to empathize.

What happens after our mind detects and identifies emotional stimuli? When stimuli are highly pleasant for us we quickly shift our attention to them and focus additional attentional resources on them (Ni et al., 2011). The processing of an emotional stimulus can sometimes enhance the processing of a subsequent stimulus because of the continued focusing of attentional resources. Emotional stimuli can enhance the processing of coarser information and can also automatically facilitate motor responses initiated within 100ms (Bocanegra & Zeelenberg, 2012). Paradoxically, the processing of an emotional stimulus can also inhibit the processing of subsequent stimuli (Most & Wang, 2011), especially the processing of the details of the subsequent stimulus (Bocanegra & Zeelenberg, 2012). This inhibited processing can result from either (a) having reached a processing capacity limitation (Most & Wang, 2011) or (b) perceptual defense, when the “self” actively censors emotionally threatening stimuli. So, some emotional situations can help lead to an insight about a problem, while other emotional situations may leave us not functioning well at all.

Our attention to stimuli is also affected by (a) our immediate emotional state and (b) our personality. Our expectations can shift our attentional resources to search for particular things (Becker, 2010). For example, when we are happy we are more likely to pay attention broadly to a scene, focus our attention more on positive things, and more fully process the second of two stimuli (Vermeulen, 2010). When we are sad we are more likely to have narrowed (focused) attention (Srinivasan & Hanif, 2010; Zeelenberg & Bocanegra, 2010), focus our attention on negative things, be less likely to accurately “see” (identify) the second of two stimuli (Vermeulen, 2010), and have more difficulty disengaging our attention from negative things (Biggs et al., 2012).

Our personality biases our emotional state. Individuals who are high in neuroticism are generally anxious people who look for the negative things in the world. People who are highly anxious narrow their attention and focus it on threats, and then have more difficulty disengaging their attention from any threat. Fox (1996) conducted an interesting laboratory study of the attention processes of highly anxious people. She had people simply classify a number as odd or even while a distractor word (neutral or threatening) was also presented. Sometimes the words were presented subliminally (14ms and masked). Fox found that highly anxious participants (but not people low in anxiety) took longer to classify a number when a threatening word was subliminally presented near the number. Thus, the highly anxious people appeared to be unconsciously searching for threats. Interestingly, this bias to unconsciously look for threats only occurred if participants already consciously knew that some words were threatening. One of the things this study nicely illustrated is that most mental events are a combination of both conscious (controlled) and unconscious (automatic) processes. It would be a mistake to ignore the importance of the conscious regulation of our attention. When we consciously move our attention away from frightening movie scenes, we are less scared by the movie. Indeed, we need to regulate our inner attention away from negative thoughts in order to keep ourselves from becoming anxious or depressed (Johnson, 2009).

In this section on attention I’ve narrowly focused on visual attention. However, we automatically attend to several senses at once, such as the visual and auditory senses. Typically, more attention is devoted to the processing of emotional visual information than emotional auditory information (Collignon et al., 2008). However, if the visual information is ambiguous, our mind then automatically pays more attention to the auditory information (Collignon et al., 2008), such as the pitch and speed of speech.


Reading Facial Expressions

One of the things our mind pays particular attention to is the facial expressions of others. Surprisingly, although the facial expressions of basic emotions are universal (Ekman & Friesen, 1971; Ekman, 1994), and although we are biased to attend to facial expressions of emotions, many of us often do not accurately read someone’s true feelings from his or her facial expressions. This is partly due to the fact that people are often trying to mask their true feelings. For example, do you sometimes hide what you are feeling when you are dating someone, not wanting to give away your feelings until you know the other person better? Or perhaps you feel forced to mask your feelings when your boss has criticized your work? Many people try to mask their emotions in these and other situations, and some people try to mask their emotions in most situations.

Reading facial expressions can sometimes be challenging. Facial expressions can be subtle and fleeting. Some facial expressions do not communicate a specific emotion but just the presence or absence of emotion (Motley, 1993). Adding to the difficulty in accurately identifying the facial expression of basic emotions is that faces of a particular demographic group (e.g., different ages or races) can look different until one has more experience with people of that group. Also, a given facial expression on a given person looks different when the face is seen from different angles. To deal with these difficulties, most of us rely on the context, the situation, to interpret what someone else is feeling (Motley, 1993). However, some of us (and not just autistic individuals) are just not very good at reading facial expressions and this can cause a variety of social problems. Fortunately, we can improve our ability to read other people’s facial expressions. To become better at reading faces we need to learn what specific facial features to attend to, what features go with each emotion, and we need to practice this skill.

Ekman and Friesen (1978) provided a comprehensive description of the facial muscle movements involved in emotions. The major visible facial change for each basic emotion is as follows:

Surprise involves the raising of the eyebrows (both the inner and outer part of each brow), raising of the upper eyelids, and the jaw dropping.

Happiness involves raising the corners of the mouth and raising the cheeks with muscles active at the corner of the eyes. This is obviously a smile, but note that it is a true enjoyment smile as opposed to a nonenjoyment smile. Let me explain these two different smiles.

Enjoyment smiles (called Duchenne smiles) involve the activation of the zygomatic major muscle (that pulls up the corner of the mouth) and the orbicularis oculi muscle (around the eyes). Nonenjoyment smiles only involve raising the corners of the mouth. A nonenjoyment smile is a false smile that we make when we do not want others to know that we did not really enjoy something. For instance, if a friend told a joke that was not really funny, maybe you gave a nonenjoyment smile after the punchline, just to be polite, to be social, and not offend the other person.

There are some additional, more subtle, differences between Duchenne and nonenjoyment smiles. The zygomatic major muscle shows a smoother action in a Duchenne smile than in a nonenjoyment smile (Frank, Ekman, & Friesen, 1993). In addition, the zygomatic major muscle in Duchenne smiles is usually active between 0.5 and 4 seconds, but is frequently active for a longer time in nonenjoyment smiles (Frank et al., 1993). So, sometimes you can detect a false smile because it has been on the face for “too” long.


Anger involves a number of facial features. In anger the eyebrows pull down. The upper eyelid raises while the lower eyelid tenses. The lips narrow and press together hard.

In fear the inner and outer eyebrows raise and the brows pull together. This pulling together can sometimes be seen as a wrinkling of the skin between the brows while the emotion is being experienced. The upper eyelid raises and the lower eyelid tenses (note that these eyelid actions are the same for both anger and fear). Finally, the mouth stretches horizontally.

Sadness involves raising the inner eyebrows (but not the outer eyebrows). In addition, in sadness the corners of the mouth pull down, the chin raises, and the cheeks may pull up towards the lower lip.

Disgust is expressed through raising the upper lip and wrinkling the nose.

Contempt is seen when the corner of one side of the mouth is pulled back, often in conjunction with the head tilting back.

There are a number of websites that are designed to help someone identify what emotions people are experiencing from their facial displays. Try linking to the following sites on YouTube:

Facial Expression Tutorial by Khappucino
The Art of Deciphering Facial Expressions
Basic Emotions (LIE TO ME)

Table 4 (from Gosselin, Perron, & Beaupre, 2010)
Facial Action Units With Their Appearance Changes and Associated Emotions
FACS Name | Appearance Change | Associated Emotion
Inner brow raiser | Raises only the inner part of the eyebrow | Fear, sadness, surprise
Outer brow raiser | Raises only the outer part of the eyebrow | Fear, surprise
Brow lowerer | Lowers the eyebrows and pulls them together | Anger, fear, sadness
Upper lid raiser | Raises the upper lid, exposing more of the upper portion of the eyeball | Anger, fear, surprise
Cheek raiser | Raises the cheek, causing crow’s feet and wrinkles below the eye | Happiness, sadness
Lid tightener | Raises and tightens the lower eyelid | Anger
Nose wrinkler | Wrinkles and pulls the skin upward along the sides of the nose | Disgust
Upper lip raiser | Raises the upper lip and causes a bend in its shape | Anger, disgust
Nasolabial furrow deepener | Deepens the middle portion of the nasolabial furrow | Sadness
Lip corner puller | Pulls the lip corner up diagonally toward the cheekbone | Happiness
Lip corner depressor | Pulls the lip corner down | Sadness
Lower lip depressor | Pulls the lower lip down, flattening the chin boss | Disgust
Chin raiser | Pushes the chin boss and lower lip upward | Anger, disgust
Lip stretcher | Stretches the lips horizontally | Fear
Lip funneler | Funnels the lips outward, pulling in medially on the lip corners and turning out at least one lip | Anger
Lip tightener | Tightens the lips, making them appear more narrow | Anger
Lip pressor | Presses the lips together, tightening and narrowing them | Anger
Lips part | Parts the lips to a limited extent | All six emotions
Jaw drop | Parts the lips so that space between the teeth can be seen | All six emotions
Mouth stretch | Stretches the mouth open quite far | Fear, surprise


Reese (1993) has suggested that interest is associated with a unique facial expression: riveted eyes and a still head. Is this what happens when we see someone who interests us? So, interest basically involves a “hard stare”. If we do not want someone to know we are interested in them, we may look at them surreptitiously, although some people may do this noticeably as part of flirting.


Lying

Part of the difficulty in reading facial expressions is that people are often trying to mask (hide) what they are feeling. One of the times when people work hardest to mask their facial expressions is when they are trying to lie. (It does not go over well if you say you are “truly sorry” while you are also grinning from ear to ear!) The truth is that almost everyone lies at least some of the time, and some people lie frequently. Extroverts and high self-monitors (people who are particularly concerned with how others view them) lie more often than do other people. Now, sometimes people lie to spare the feelings of others, but most lies are told to serve one’s own motives. So, how can we tell if someone is lying to us?

A police force might use a polygraph, or lie detector test, to try to tell if someone is lying (although the result could not be submitted in court). The polygraph indirectly measures lying by measuring changes in anxiety. Specifically, the polygraph measures whether an individual’s sympathetic nervous system’s level of activation increases in response to particular yes-or-no questions, usually related to some crime. Sympathetic activation is gauged by the polygraph measuring breathing pattern, blood pressure, and the level of sweating (Meijer & Verschuere, 2010).

The accuracy of the polygraph partly depends on the tester using the right questions for the particular case and on “demonstrating” to the test taker that the polygraph is completely accurate. In reality, a trained tester will detect deception at a much better than chance level, but far from perfectly. A trained tester using a polygraph can detect lying over 74% of the time. However, innocent people are falsely classified as lying anywhere from 10-20% of the time (Meijer & Verschuere, 2010). In addition, a polygraph is not accurate with people who can control their sympathetic nervous system’s activation, or whose sympathetic activation is depressed (e.g., psychopaths and people who are somewhat depressed).
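To see what a 74% hit rate combined with a 10-20% false-alarm rate implies in practice, consider a back-of-the-envelope Bayesian calculation. This is only a sketch: the base rates of deception below are assumed for illustration, and the 15% false-alarm figure is simply the midpoint of the range cited above.

```python
def polygraph_ppv(base_rate, hit_rate=0.74, false_alarm_rate=0.15):
    """Probability that a person flagged as deceptive really is lying,
    via Bayes' rule: P(lie | flagged) =
    P(flagged | lie) * P(lie) / P(flagged)."""
    p_flagged = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_flagged

# If half of the examinees are lying, a "deceptive" reading is fairly
# trustworthy; if only 1 in 10 is lying, most of the certainty evaporates.
print(round(polygraph_ppv(0.5), 2))  # → 0.83
print(round(polygraph_ppv(0.1), 2))  # → 0.35
```

In other words, the usefulness of a polygraph verdict depends heavily on how common lying is among the people being tested, not just on the tester's skill.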

Although the use of a polygraph does not guarantee accuracy (and because few of us carry one around in our pockets!) you might say “So what! Who needs a polygraph? I can tell when someone is lying!” However, most of us are actually not very good at detecting deception. In fact, many people who need to detect deception as part of their job are not very good at telling whether or not someone is lying. For example, Vrij (1993) had confederates either lie or tell the truth during a structured police interview that was videotaped. The videos were played back to 91 police detectives who had to say whether the person was lying or not. The result was that the detectives’ accuracy was no better than chance, but the detectives were nonetheless very confident of their judgments. Although one detective did achieve 70% accuracy, 44% of the detectives scored worse than chance. One mistake the detectives made was that they associated lying with (a) less smiling and (b) more arm or hand movements. Although one study (Biland, Py, Allione, Demarchi, & Abric, 2008) of French women found that liars were more likely to show fake smiles and embarrassment smiles, smiling or not is generally unrelated to deception, and fewer (not more) arm and hand movements are actually associated with lying (Vrij, 1993).

After performing a meta-analysis of 247 studies, Bond and DePaulo (2008) came to two conclusions about our ability to detect lying. First, they suggested that we are rarely accurate in detecting deception in real-time situations (i.e., as it is actually occurring). Evidence consistent with this conclusion comes from a separate meta-analysis (Aamodt & Custer, 2006) that found that both experts and general citizens tend to be accurate less than 55% of the time (when chance performance would be 50%). Second, Bond and DePaulo suggested that there are almost no individual differences in this very poor ability to detect lying. They described individual differences in the ability to detect lying as “minuscule” (p. 486). They suggested that our ability to detect lying is unrelated to our age, sex, education, confidence, expertise, Machiavellianism, or self-monitoring tendency.

In contrast, O’Sullivan (2008) argued that there are indeed meaningful individual differences in the ability to detect lying. O’Sullivan found that some select police experts perform significantly above chance levels. Specifically, she identified 11 different groups of police officers (one group was Canadian parole officers) who performed between 60% and 88% in detecting lies/truths. (However, note that 8 other police forces scored between 48-56%.) O’Sullivan found that detection accuracy was correlated with: being left-handed, emotion recognition accuracy, training or attentional instructions, professional experience, honesty/moral values, and social or academic intelligence. Evidence for the importance of training is provided by the finding that some police officers are very accurate in detecting criminal lies but not emotional lies, while some therapists are very accurate in detecting emotional lies but not criminal lies (O’Sullivan, 2008).

In fact, even Bond (2008) later admitted that there are some “very rare” experts who consistently perform at 80%+ accuracy levels when dealing with felons telling truths and lies. He found 2 experts (out of 112 law enforcement professionals) who performed at 80%+ accuracy on each of the 4 occasions that they were tested.

So, the ability of untrained people to detect deception is not very good, while the ability of some experts to detect deception is quite good, though not perfect. How can we tell when someone is lying?

One approach is actually to avoid any specific detection strategy and simply rely on our unconscious processing. Albrechtsen, Meissner, and Susa (2009) found that their participants actually did 10-15% better when they simply used their “intuition.” In other words, they gave their unconscious processing free rein. However, this is not as good a level of detection as that shown by some police forces with specific training to detect lying.

A second approach to improve the detection of lying is to increase the cognitive load of the liar. Lying requires a high level of cognitive resources because liars are controlling their behavior and monitoring what they are doing so that they will not be caught. Further increasing their cognitive load leaves fewer resources available for controlling behavior while lying, resulting in the appearance of more verbal and nonverbal cues of lying. Vrij et al. (2008) increased the cognitive load by having subjects recall their story in reverse chronological order, and this improved the accuracy of the police in detecting lying by 12%. Specifically, in the cognitive load condition (but not in a control condition) liars supplied fewer details and less context, talked more slowly and with more hesitations, and more often blinked their eyes or moved a foot or leg. This fits with Vrij’s (2008) suggestion that focusing on verbal cues (speech errors, speech fillers, pauses, voice) and memory details (perceptual, spatial, temporal details, logical structure) increases detection accuracy.

A third approach to detect lying relies on reading facial expressions. Ekman suggested that facial leaks of automatically experienced emotions (such as guilt or fear) may be present when someone is engaged in an emotional lie. Facial leaks are when some of the muscles used in the facial expression of an experienced emotion are not completely suppressed and so are instead briefly expressed on the face. For example, someone might try to mask that they are feeling sad but their chin muscles involved in the emotional facial expression of sadness are briefly activated. Facial leaks are often difficult to detect as some emotional leaks may only be present for 1/25th of a second. Nevertheless, Bond (2008) found that it was the ability to detect nonverbal cues, such as facial leaks, that differentiated experts from non-experts in their ability to detect lies.

So, each of these three approaches might help you to detect whether someone is lying. However, note that it is impossible to “read” everyone. Some people are simply very accomplished liars. Still, it is important for us to tell someone’s true intentions, such as whether they are attracted to us, or not.


Interpersonal Attraction in Bars

We do not usually think of it this way, but our evolutionary heritage has some influence on who sexually attracts us and on our thinking processes and communication when we try to initiate relationships with those who attract us. Some of these influences are evident in public spaces such as in bars or at parties. In these situations, as we are thinking about what to say and do while gauging the intent and sincerity of the other person, we are both consciously and unconsciously reading that person’s face and body language. This leads to a set sequence of stages in the communication between people sexually attracted to each other: (a) looking at the other, (b) approaching and initiating conversation, (c) light touching, (d) the first kiss, and (e) perhaps later, in private, sex (Naworynski, 1993).

A variety of strategies are used to begin the sequence. Some women deliberately place themselves where they will be discovered by a man who interests them (Perper, 1989). For example, a woman might move into an area close to where the man is standing. Many women will signal interest by looking and smiling at a particular man, while other women will go through a repeated sequence of glancing at a particular man and then looking away (Moore, 1985). Still other women signal that they are generally looking for someone of interest with a 5-10 second sweep of the room not directed at any individual, or they might engage in a “parade,” using an exaggerated hip sway and a tightened stomach with their chest out, to attract a man’s interest (Moore, 1985).

Both men and women who flirt are more likely to attract interest from others. Flirting is used both to attract initial interest and while the couple are talking. Besides the different types of looking, the nonverbal cues in flirting that women use include: the head toss (where the face is tilted upwards for less than 5 seconds); head nods during conversation; tilting the head about 45 degrees, which presents the neck; flicking their hair; licking their lips; pouting; smiling; laughing or giggling at another’s comments; whispering in the ear of the person of interest; gesticulating during speech; primping clothing; caressing a part of their own body or face; and caressing an object (Moore, 1989). The frequency of a woman’s flirting behaviors is often a better predictor of men’s approaches than is her attractiveness, and the most flirtatious women are 8 times more likely to be approached by men (Moore, 1985).

When men flirt they often puff up their chest, show off their muscles, and brag about themselves (Naworynski, 1993). In an observational study (Renninger, Wade, & Grammer, 2004) of men’s behavior in bars, a number of nonverbal behaviors by men were associated with preferential attention from women. These nonverbal behaviors included signaling interest through brief glances at a woman of interest, and signaling the man’s own status through his: (a) taking up more space in the bar, (b) frequently changing his position in the bar, and (c) touching other men who did not touch him back. Flirting tends to be reciprocated for both men (r = .68) and women (r = .69) (Back et al., 2011).

In a speed dating study, flirting (as measured from audio recordings) was associated with more subsequent date requests from others (Back et al., 2011). Those who flirted more tended to be those who were more attractive and who had a high opinion of their value as a mate. Those who flirted with only one person were more likely to choose that person for a date. However, flirting can also be ambiguous or deceptive, used to discern someone else’s intentions while hiding one’s own (Back et al., 2011).

Once a man and woman signal their interest, one of them must approach the other and initiate conversation. The man might approach the woman, a situation that typically creates a high level of anxiety in the man (Perper, 1989). Alternately, the woman might approach the man, perhaps asking for help with something. Either way, once the approach has been made and received, the couple have to talk.

Initial conversation is often about trivial things as the two people look for areas of mutual interest (Perper, 1989). Similarity in interests, attitudes, and personality is a major factor in interpersonal attraction (Miller, 2012), and people often look for a mutual area of interest on which they can build the conversation, and perhaps some sort of relationship. These conversations frequently involve some of the flirting behaviors previously mentioned. As time goes on, the interaction may also involve body synchronization: both partners might come to move their hands or head at the same time, drink at the same time, or adopt the same posture. Many people are not aware of this mutual synchronization of body movements that occurs as intimacy builds (Perper, 1989).

If the conversation goes well, the couple, who initially stood at a “V” angle to each other, gradually begin a process of turning to face each other (Perper, 1989). The gradual turning of the position of our bodies is another aspect of the process of sexual attraction that operates unconsciously for most of us. Accompanying this turning of our bodies, the gaze of each partner becomes more focused on the face and body of the other partner (Perper, 1989). Most of us are aware of our focused attention on the other person, focused to the exclusion of much that is going on around us.

After the partners have begun to turn towards each other, the woman will often initiate the first touch as a signal of warm interest (Perper, 1989). This may be a light touch on the man’s hand, arm, or shoulder. A gentle touch when individuals face each other is considered flirtatious and signals romantic interest. These gentle touches may be on the face, shoulder, forearm, or elsewhere. Intimacy increases if the man reciprocates the touch, but decreases if he does not. If touching is mutually reciprocated and continues, it may signal a mutual sexual interest (Perper, 1989). Some women might also use the less subtle signal of leaning towards the man and brushing her body against his.

Finally, at some point one person will kiss the other, and this will happen either in a public space or in private. It need not be a long passionate kiss (although it can be). However, it is a signal of the growing sexual interest between the two people.

