Marine Sgt. at New Orleans, La. By Howard R. Hollem. Library of Congress collection via Flickr.

Why We Over-Estimate In Evaluations

Forrest Christian | Managing, Reviews - Articles | 3 Comments

While at the PeopleFit "Assessing Raw Talent" class this last week, I heard that it is common for people to overestimate the CIP (Elliott Jaques's idea of Complexity of Information Processing) of persons who have a lower CIP than their own, and to underestimate subordinates who have a higher CIP than they do. I figured that they were simply repeating a 2002 article by Robert E. Wood. Turned out that they've seen this in their own data.

Wood, Robert E. (2002). "Self- versus others' ratings as predictors of assessment center ratings: Validation evidence for 360-degree feedback programs." Personnel Psychology, Dec. 22, 2002: pp. ?. From the HighBeam Online Research Library.
Wood's article is pretty interesting. He tested employees' self-ratings, their managers' ratings of them, and their peers' ratings of them against an assessment center's rating of them. It turns out that the people who had the very highest opinion of themselves were the worst in the assessment. But the best performers consistently underrated themselves.

Not only that, but peers were likely to overrate underperformers within their own group, while managers were likely to underrate high performers.

…those who rated themselves in the mid-range of the scale were more likely to be high performers than those who rated themselves at the top or bottom ends of the scale. Furthermore, those who rated themselves most highly tended to be the poorest performers… [S]upervisor ratings successfully discriminated between overestimators but were not as successful at discriminating underestimators, suggesting that more modest feedback recipients might be underrated by their supervisors… Perhaps, like feedback recipients themselves, peers were conscious of the evaluative nature of 360-survey feedback and felt the need to exaggerate the performance of poor performers in order to boost their overall evaluation.

From the perspective of Elliott Jaques's Requisite Organization / Stratified Systems Theory, we can see a few more things happening. We tend to think of people as having the same level of capability as we do, unless shown otherwise or unless social markers contraindicate. If these poor performers were overemployed (only doing the work of the worklevel below the one their role was assigned to), then their peers may simply not have been able to see them properly. Managers did not ferret out the underestimators, those who performed better than they rated themselves, because those people were probably in the same or a higher stratum than the manager.

Without data, that's just speculation. And of course we don't have the data, because the researchers didn't think to assess current capability.

Anyway, yet another piece to bolster my opinion that 360-degree feedback programs don’t do what their proponents insist that they do. In this, I am in agreement with Ed Schein and oh so many others I respect. I know that some folks get a lot from their 360s, but there is just such a paucity of supporting research. They sound great but you can’t run a business on what you want to believe in. You have to use data.

Cited

Marine Sgt. at New Orleans, La. ca. 1941-1945 by Howard R. Hollem. Library of Congress collection.

Compass, © Allyson Kitts (123rf.com)

Comments (3)

  1. I'm not sure an apples-to-apples comparison can be done here. 360-degree feedback does not separate knowledge, skills, and experience from CIP, and all of these items influence how people score others.

    But the biggest problem is that it does not take into account the respondents' capacity relative to the person being assessed, regardless of their positions relative to one another (peer, manager, MoR, subordinate). I think relative CIP is the number one influence on evaluations. Unless you know that, the data is not being evaluated with an appropriate frame of reference.

    As I've said before, you could have three subordinates of the same manager (A, B, C). Subordinate A rates him well; B and C rate him poorly, but for opposite reasons: one says he's a micromanager, the other says he's too distant. This reflects a subordinate who is appropriately spaced (A), one at the same level as the manager (B), and one (C) who is at least two levels below the manager.

    To add to the confusion, if memory serves me, don't 360s average all the subordinates' scores together to come up with one score? Now that's really a mess. The only place you get any differentiation is in the comments.

    Another problem is that we all know that people whose current capability is over their role tend to do really well their first six months, and then boredom sets in and performance can be affected. So 360 results will likely be influenced by where the underemployed employee's learning curve is at the moment.

    Changing gears again: your comment about overestimating people below you. I don't believe this is the case. What has been found is that when attempting to observe CIP via interview analysis, a green observer will tend to overestimate those below him during his first few attempts. However, this is short-lived and is quickly overcome by practice and coaching.

    When you are doing your "assessment" via real-life interactions with someone rather than via interview analysis…

    In the working world, and with presidential elections, we intuitively know when someone is above us or below us, but stratifying it is beyond the general frame of reference.

    We can also intuitively rank people relative to one another.

    One other finding we have observed when using a talent-gearing process with managers making capacity judgments rather than interview analysis:
    When a manager (Joe) is one level below his role, Joe's manager (the MoR) will likely rate Joe's subordinates (SoRs) lower than they are.

    Meaning the MoR's impression of an SoR is influenced by the middle man (the manager). This reinforces Jaques's recommendation that skip-level interaction (MoR to SoR) is another requirement for Requisite Organization. My guess is that this skip-level interaction would clear up the bias, because the MoR would then have first-hand information from which to judge the SoR.

    I hope this hasn't been too scattered. I don't have a lot of time for editing.
