Performance management
14 min read · August 29, 2024
Written by
Former Senior People Scientist, Culture Amp
I am biased. You are biased. All humans are biased. Not buying it yet? Consider the research of Daniel Kahneman, a psychologist awarded the Nobel Prize for his groundbreaking work applying psychological insights to economics. In his research, Kahneman demonstrated one simple truth: most human decisions are based on biases, beliefs, and intuition, not facts or logic.
This is part of why people tend to bring bias into the performance review process, even when going into it with the best intentions. And when it comes to performance reviews (also known as performance appraisals), biases have a huge impact.
So, what can you and your organization do to ensure that performance review processes are as bias-free as possible? Incorporate bias blockers into each step of the process. Once you know these biases exist, you can use various strategies (and a good dose of self-awareness) to minimize their effects.
In this article, we'll share the ten types of biases that affect performance reviews, examples of what they look like in action, and prevention strategies to help mitigate their effects.
What is bias in performance reviews?
Commonly referred to as "unconscious bias" or "implicit bias," a bias is an error in judgment that happens when a person allows their conscious or unconscious prejudices to affect their evaluation of another person. It usually implies an unfair judgment against or in favor of someone.
Unconscious biases can inflate or deflate employee ratings, which can have serious implications in high-stakes situations directly affected by performance assessments, such as promotion, compensation, hiring, or even firing decisions.
Given the weight of these decisions, it's critical to ensure that the performance management process is as fair and objective as possible. Otherwise, your people will lose trust in the process, become disengaged, and lose the motivation to use the performance review as an opportunity to improve and grow.
What is recency bias?
Recency bias is the tendency to focus on the most recent time period instead of the total time period.
We also call this the “What have you done for me lately?” bias. If someone recently rocked a presentation or flubbed a deal, that recent performance will loom larger in a manager’s mind. Why? Because it’s easier to remember things that happened recently.
Example of recency bias:
Imagine there is an employee named Jamie. At the beginning of the year, she landed a huge deal for the company and received much recognition. But in the last two months, her performance has slipped. Unfortunately, Jamie’s manager focused only on the recent events of the past few months during Jamie’s performance review and didn’t acknowledge Jamie’s incredibly valuable contributions from earlier in the year.
How to prevent recency bias:
To limit the impact of recency bias on your performance data, develop a habit of collecting employee feedback at different points throughout the year. Did someone just complete a 3-month project? Great, send their peers a request for feedback so you can get some data on how well they did. Did someone just complete internal training? Awesome, request feedback from the instructor about their participation. This way, when the end of the year arrives, you have data points from across the entire review period rather than just the most recent months.
What is primacy bias?
Primacy bias is the tendency to emphasize information learned early on over information encountered later.
In performance reviews, managers often fall for primacy bias when they let their first impression of an employee affect their overall assessment of that person.
Example of primacy bias:
Dr. Heidi Grant Halvorson of Columbia Business School describes the following scenario:
If I’m a jerk to you when we first meet, and I buy you a coffee the next day to make up for it, you are going to see that nice gesture as some sort of manipulative tactic and think, “This jerk thinks he can buy me off with a coffee.” However, if I make a great first impression, and buy you a coffee the next day, then you’re likely to see it as an act of goodwill and think to yourself, “Wow, that Kevin really is a nice guy.”
How to prevent primacy bias:
Preventing primacy bias is similar to preventing recency bias. By putting together a dossier of performance snapshots that include feedback from multiple points in time, you can dampen managers’ tendency to weigh their first impressions too heavily.
What is halo/horns effect bias?
The halo/horns effect bias is the tendency to allow one good or bad trait to overshadow others (e.g., letting an employee's congenial sense of humor overshadow their poor communication skills).
After all, we all have our own pet peeves and turn-ons. Sometimes those quirks and inclinations can overshadow our ability to assess people overall. For example, this bias is why attractive people are more likely to be rated as trustworthy. At the same time, when attractive people fail to live up to those higher expectations, they also suffer a penalty for falling short of others' presumptions.
Example of halo/horns effect bias:
For example, a particular manager may have a soft spot for proactive, outspoken individuals. If one of their direct reports tends to be quiet and withdrawn during meetings, the manager may give that person a lower score, even if they offer other valuable qualities and contributions.
How to prevent halo/horns effect bias:
To dampen the halo/horns effect, evaluate people on multiple dimensions of performance instead of leaving the assessment open to interpretation. Are you rating individual achievement but failing to look at the way people contribute to the success of others? Does this person have a particular set of highly sought-after technical skills but fail to finish their work on time? To get a holistic view, assess at least 2-3 different aspects of performance so that one awesome or awful trait or skill doesn't overshadow everything else.
What is centrality bias?
Centrality bias is the tendency to rate most items in the middle of a rating scale.
While moderation is great in most things, high-stakes situations like performance appraisals often require taking a stand. When everyone receives the same rating, it isn’t easy to distinguish low-performing employees from top-performing ones.
Example of centrality bias:
A manager hands in his annual performance evaluations, and almost everyone on his team has scored in the middle of the scale. On the company's 5-point rating scale, most employees receive a 3. This is a common occurrence, as many managers don't like taking extreme positions and trend toward the middle in their reviews.
How to prevent centrality bias:
Centrality bias can be overcome by taking a flexible approach to designing scales. The simplest way is to eliminate the neutral option from the rating scale, such as switching from a 5-point scale to a 4-point scale. This way, evaluators have to make a choice one way or the other.
What is leniency bias?
Leniency bias occurs when managers give favorable ratings even to employees who have notable room for improvement.
Like many types of bias, leniency bias weakens the objectivity of your data. The truth is that some employees do outperform others. Giving everyone a 4 on a 5-point scale makes it challenging to distinguish who the top performers are. On top of that, it becomes difficult to identify who deserves a promotion or raise, and it can leave your top talent feeling disgruntled.
Example of leniency bias:
Alex and Jamie are both managed by the same person. Alex consistently produces average-quality work. While it’s not bad, he rarely goes above and beyond what’s asked of him. On the other hand, Jamie is consistently one of the top-performing employees. She goes the extra mile on her projects, always raises her hand to take on more responsibilities, and delivers outstanding outcomes. Despite these differences, their manager gives them the same “above average” rating on their performance review to avoid hurting anyone’s feelings. As a result, Alex and Jamie stay on similar career trajectories – leaving Jamie to wonder if anyone notices her hard work.
How to prevent leniency bias:
Instead of making "above average" the top possible rating, try using a rating scale that reflects how people actually talk and think about their team members. To create more spread and identify your top people, build that spread into the rating labels.
For example, a default scale might top out at "above average."
However, if you want to give managers more opportunities to identify stellar performers, you could create a scale with "above average" as the middle rating and "top performer" as the top rating.
What is similar-to-me bias?
Similar-to-me bias is the inclination to give a higher rating to people with similar interests, skills, and backgrounds as the person doing the rating.
Simply put, we tend to like people that are like us. In addition to making performance reviews tricky, similar-to-me bias can make your workplace feel less inclusive and may even affect how diverse the overall makeup of the organization is.
Example of similar-to-me bias:
Imagine a manager who attended a top-ranked school that they loved. When conducting a performance appraisal for someone who went to the same top-ranked school, the manager may rate that person higher because of their inflated impression of the school and its graduates.
How to prevent similar-to-me bias:
Reduce the effect of similar-to-me bias by requiring specificity in managers' assessments. In three separate studies, Yale researchers found that when you agree on the assessment criteria before making the evaluation, you are less likely to rely on stereotypes, and your assessments are less biased.
What is idiosyncratic rater bias?
Idiosyncratic rater bias occurs when managers rate people highly on skills the managers themselves lack and rate them lower on skills the managers themselves excel at. In other words, managers skew their performance evaluations toward their own personal eccentricities.
In fact, one of the largest studies on feedback found that more than half of the variance associated with ratings had more to do with the quirks of the person giving the rating than the person being rated. Rater bias was the biggest predictor, holding more weight than actual performance, the performance dimension being rated, the rater’s perspective, and even measurement error.
Therefore, idiosyncratic rater bias presents a huge problem in performance data because the score given often tells us more about the rater than the person being rated.
Example of idiosyncratic rater bias:
Let’s say there’s a manager who excels at project management but knows very little about computer programming. As a result, she unknowingly gives higher marks to those who are good at computer programming and lower marks to those who are good at project management or other skills similar to hers.
Why? Because the manager is good at project management, she's likely to hold higher standards for that skill and to compare the employee to herself. On the other hand, since she's unfamiliar with computer programming, she has a weaker sense of what good performance looks like and is more likely to be lenient. In other words, her feedback reflects more on her own skills than her employee's.
How to prevent idiosyncratic rater bias:
It’s difficult for people to rate others on things like “lateral and strategic thinking” (whatever that means). But, as one researcher put it: “People might not be reliable raters of others, but they are reliable raters of their own intentions.” So consider rewriting some performance questions to reflect your team's decisions and intentions.
What is confirmation bias?
Confirmation bias is the tendency to search for or interpret new information in a way that confirms a person's preexisting beliefs. It's similar to primacy bias but tends to run much deeper.
Confirmation bias makes it easier to believe people who align with you on specific facts, beliefs, or stances. It’s also why you’re more likely to be skeptical of people who disagree with you. While this is a normal human tendency, it can skew the interpretation of valuable performance data.
Have you ever had a question about something and searched the internet for the answer? If you’re like most people, your search terms will probably pull up web pages that confirm your existing beliefs. For instance, if you love beans and want to prevent cancer, you might Google “beans help fight cancer.” But, on the other hand, if you can’t stand beans, you might search for “beans cause cancer.” Sure enough, you will find millions of results for both searches. Similarly, if you initially think someone might be a bad apple, you are much more likely to seek (and find) information confirming your initial suspicion.
Example of confirmation bias:
Imagine a highly productive, technically skilled employee who is a pleasure to work with. Their manager will readily believe any feedback that supports this impression. However, when the manager receives feedback that contradicts their beliefs about the employee, they may discount or ignore that valuable information.
How to prevent confirmation bias:
To overcome confirmation bias, think like a scientist. When researchers ask questions, they form hypotheses that seek to disconfirm rather than confirm their initial beliefs. Whenever you have an impression about someone, seek evidence that they are the opposite or entirely different from what you suspect. When collecting feedback from others, pay close attention to the feedback that goes against your beliefs.
What is gender bias?
When giving feedback, people tend to focus more on the personality and attitudes of women and feminine-presenting individuals. Conversely, they focus more on the behaviors and accomplishments of men and masculine-presenting individuals.
Priya Sundararajan, Culture Amp's Senior Data Scientist, reviewed 25,000 peer feedback statements across a performance cycle of nearly 1,500 employees and found these same gendered patterns in the language reviewers used.
Gender biases like these limit growth and promotion opportunities for women and widen the gender pay gap.
Example of gender bias:
Imagine two employees - Nick and Susan - are up for promotions. They're both highly qualified, have similar years of experience, and have received many positive accolades. They have also each received constructive feedback from their managers that needs to be taken into account for the promotion.
Nick's feedback is based on his skill set – gaps that can easily be closed with the right guidance and training. Susan's feedback, on the other hand, is based on her work style. It raises doubts about her personality and seems like something that can't be "fixed." As a result of this feedback, Nick gets the promotion, and Susan doesn't. This situation is all too common and contributes to the gender pay gap and the unequal growth opportunities experienced by women.
How to prevent gender bias:
Sometimes, unstructured feedback allows bias to creep in. Without some set criteria, people will likely reshape the criteria for success in their own image. As Stanford researchers have put it, the big takeaway is that open boxes on feedback forms make feedback open to bias. That’s why it helps to take a “mad libs” approach to feedback.
Help raters by giving them a format and then allowing them to fill in the blanks. Additionally, nudge managers into specifically talking about situations, behaviors, and impacts rather than personality or style.
Quick but important reminder: Gender biases can have a huge impact on the experiences and assessments of nonbinary and/or transgender folks. Although these biases may manifest themselves in a slightly different manner, it’s important to stay alert and keep your eyes peeled.
What is the law of small numbers bias?
The law of small numbers bias is the incorrect belief that a small sample closely shares the properties of the underlying population.
Example of the law of small numbers bias:
Imagine a stellar team full of top performers, with one person doing the work of four others. Naturally, you rate that person higher than the rest and the others a bit lower. However, it turns out that even the lowest performer on your team is among the best in the whole company. So, when it comes time to look at company-wide performance, your team appears about average, even though they're all exceptional.
How to prevent the law of small numbers bias:
Performance review calibrations are key to overcoming this bias. Calibration is when all reviews and ratings are looked at holistically to ensure that your definition of “above average” is similar to everyone else’s definition of “above average.” This helps guarantee that people at the organization speak the same language and use the same nomenclature when conducting performance reviews.
How to overcome your own biases
Unfortunately, we're not that good at knowing our own biases. Research has suggested that the more help you need in this area, the harder it is to recognize that you need help. People underestimate their own bias, and the most biased among us underestimate it the most.
So, one step is to check yourself through some unbiased means. One method that researchers at the University of Washington, University of Virginia, Harvard University, and Yale University have used is the Implicit Association Test, which is freely available to everyone. Fair warning, though: you might not be comfortable or agree with the results, but that’s probably just your bias talking.
Next, give yourself permission to be human and recognize the limits of your own understanding. Just being aware of your biases will not, in and of itself, enable you to overcome your biases. This doesn’t mean we should ignore or give in to our biases. Instead, we need to set up systems, processes, procedures, and even technology that enable us to make better decisions. Ask for help. Get feedback from others. Set firm criteria and be consistent. Most of all, keep an open mind.