In “Dilbert,” that humorous testament to life in the bureaucracy, the performance review is a frequent target. Cartoonist Scott Adams usually portrays the exchange as a rather pointless game between two soulless, lifeless entities, with no purpose other than scoring points off each other. In the real world, organizational leaders often ask me what they can do to make performance evaluations work better. Our supervisors hate doing them, they explain. Our people complain bitterly about them, they say. And it’s a massive paperwork exercise.
Yet in the next breath they argue for something that will hold people accountable, provide feedback on individual performance, and reduce the organization’s exposure to possible legal action by an aggrieved employee. Meanwhile, practitioner surveys keep telling us that something like 90 percent of American organizations are dissatisfied with their performance evaluation process, and 90 percent have changed that process within the last five years. We have to ask: what’s going on here? Someone once said that the definition of insanity is repeating the same actions over and over again while expecting a different outcome. Sound familiar?
After nearly 30 years of living and working within many organizations, I have reached the conclusion that most would be better off if they dumped their performance evaluation process altogether—provided, that is, they are willing to fully re-examine their basic assumptions about planning, organizing and tracking the work essential to their mission.
The central sticking point seems to be this notion of making people accountable (whatever that means to them). Accountability begins when one promises to deliver something, and then chooses to be held to account for producing it. A performance evaluation process should give people the opportunity to make promises for action, and then help them track how they are doing on delivering what they promised. Although in service to and framed by the organization’s mission and program objectives, the performance evaluation process is driven by the individual, not the organization, and emphasizes active choosing—otherwise, all you really have is passive compliance, a much less energized state. To what degree does your system allow for and support active choosing and commitment?
Following are a few things I have found useful to keep in mind if you decide to re-examine your accountability systems for staff.
First, examine the design principles upon which your system is based. Most work systems now function on the set of assumptions Fred Emery¹ called Design Principle 1: all work is controlled at one level above where it happens. This is the organizational logic for bureaucracy—and this principle is embedded in almost all performance evaluation systems. As a result, performance evaluation too often ends up being about power, judgment, blame and failure. Its victims will typically perceive the evaluation process as another means by which management exercises arbitrary control—and most people do not react positively to being the objects of control, though they may love being in control. The object of the game for both parties is to win and stay in control, not to learn or gather valid information about what has happened in performance situations. Is this craziness, or what?
The alternative design principle described by Emery is Design Principle 2: work is controlled at the level where the work is done. This means that the team of workers, working together, must learn to control the quality of output and solve production-related problems. It also means that they must learn to pay attention to their clients and constituents, and find out how to deliver what works for them. Leaders or managers assist staff in this inquiry, and reward them for meeting and exceeding the expectations of constituents. This stance may represent a profound shift in your thinking about performance evaluation, but I recommend the temporary discomfort of adjusting over the permanent insanity of maintaining a badly conceived and limiting system.
Second, focus people on the learning and not on the breakdown. Performance reviews are frequently associated with situations requiring major corrective efforts initiated when key commitments are not met, and people are feeling stuck, frustrated and vulnerable. Get the conversation focused on producing real inquiry into what works and what doesn’t work around getting things done. Assume that people are doing the best they can given the resources they have. If they are not producing, it might be a skill problem, but it could also be a problem of the tools, work processes and roles, or the original conception of the task itself. Broaden the inquiry. Avoid the easy explanation focusing blame on the person.
Third, stop focusing on the forms. Most performance evaluation re-design efforts seem to spend hours on the form. The stiffly worded scales and multiple-choice categories foster the illusion of scientific precision and objectivity, but most of us know better: what, exactly, constitutes adequate performance? What subjective experiences, preferences and, yes, biases are you investing in each allegedly value-neutral item on the form?
The action is not in the form. The action is in the conversation that occurs between people, and how people are being in that conversation. Design a form that is no more than one page long that is useful in organizing the conversation you want people to have. Have the form track the outputs of the work (and the quality of those outputs) and the key competencies it takes to do the work well. Focus on the vital few outputs and competencies that make a difference, not everything.
Fourth, keep the focus of the system on what people deliver. Who actually has the best picture of the quality of work delivered by someone? If you were to look at most performance evaluation processes, the answer would appear to be the boss. Yet, repeatedly, we find that the boss often has the least information about the individual’s performance and is often the poorest rater. The best raters are the people who actually receive the work—clients and constituents. Shift your system toward a more “customer-focused” orientation. You will be surprised at the impact this adjustment can produce.
Fifth, don’t mix conversations. We tend to ask too much from a performance evaluation conversation. We want to give people feedback, dispense rewards or compensation, and develop career plans, learning plans, succession plans, and so on. Keep these conversations separate. As soon as money enters the conversation, the conversation is really about nothing else. You need to talk to people about compensation. Just don’t try to connect that conversation to learning.
It’s hard to shake the insanity that radiates out around performance evaluation—it’s like an addiction. Despite all evidence that it is bad for an organization’s health, we continue to be users. Many self-help groups have successfully employed 12-step programs to break drug and alcohol addictions; perhaps we should construct a 12-step program to break our addiction to performance evaluation—at least as we currently think of it.
1. Fred Emery’s many books include Democracy at Work with Einar Thorsrud (Leiden: Martinus Nijhoff, 1976), and Systems Thinking Vol. 1 (Penguin Books, 1969).
Steve Williamson is the founder of The New World Network, a network of senior practitioners devoted to developing the tools and conversations of practice for living successfully in a collaborative-democratic world. Online at: (firstname.lastname@example.org). The author invites you to keep this conversation going. E-mail him with your ideas, opinions and reactions on this subject. Are there additional steps that you have found useful in your practice? He will keep you informed of our collective effort to stop the insanity.