Performance Management is a familiar term that has many different meanings.
An operational manager might see performance management in terms of processes and the balance between inputs and outputs. An HR specialist might see it in terms of employee goals and evaluation. A Strategy Director will want to know about measures relating to medium- and long-term strategic goals. To know what Performance Management means, you need to know the context, and that context directly determines how you go about building a Performance Management solution and how an organisation will use it. Based on our many years of experience and some solid academic research findings, we think there are four performance management areas you need to know about. On this page we describe what they are, how they differ, and what these differences mean for how you work on them within an organisation.
- Managing for Strategy Implementation
- Managing for Operations
- Monitoring and Evaluation
- Offering Incentives
There are many similarities - but also key differences that have a profound effect on how you go about choosing and using performance data. This is something we understand well - one of 2GC’s first published research papers was on this very topic - and since then we’ve done much to deepen our knowledge of how performance management works best in each of the four areas.
Managing for Strategy Implementation
Effective strategy implementation happens when the people within the organisation do things to help achieve it - which means that, at some level, everyone involved in implementing the strategy will change the way they work so that they can do what is required of them.
Sounds easy doesn’t it?
But if it were that easy then multiple studies would not be showing how many organisations fail at effectively implementing strategy. In our experience, strategy implementation fails when either the people in the organisation don’t know what changes to make to their work, or the organisation stops them doing what they need to do, which is more common than you would think.
So how does your organisation not become one of the failure statistics?
Approaching the strategy project with the right method is essential - how 2GC approaches this is detailed below and in our Tools section as our ACME strategy implementation framework - but there are other considerations too.
Speed more important than accuracy
We think that if you are implementing a strategy, you need feedback today about what is happening, not a report at the end of the year that tells you everything went wrong six months ago! In performance management terms, this means you should focus on regular, frequent updates - even if it means using rough-and-ready measures rather than highly accurate numbers.
Learn from feedback - choose a better activity
If it turns out you have not achieved your aim in a period, strategic performance management best practice encourages you to learn from the information - and, if necessary, come up with another way to achieve what you are trying to do: strategy implementation is more concerned with end-results than with how you get there!
We think organisations need to address four things if they are to implement a strategy effectively. These four things, as embodied in 2GC’s ACME strategy implementation framework, are discussed in our Tools section.
Dig deeper on this topic
Managing for Operations
Performance Management systems are used when the information they contain is seen as relevant to the management challenges faced by those who use them. To be relevant, therefore, a Performance Management system has to be designed to reflect its particular purpose and the management issues and activities associated with that purpose. The number and nature of the measures that should be included, the kinds of target values that should be set, the frequency with which the information is reported, the acceptable delay between measurement and reporting, and the required level of accuracy are all factors that need to be defined. For an Operational Performance Management solution, the starting point is the process or task definition, supported by discussions with the staff who actually work on the process.
Speed of reporting more important than accuracy
Performance Management for operations shares one key trait with strategic performance management - the need for timely information is dominant. If you are running a key process for your organisation, it is no use finding out several months after the event that your process has been producing poor outputs: by then it is far too late to fix the problem at source, and you will be playing catch-up.
Learning from feedback - fix the process
Where Operational Performance Management differs from strategic performance management is in how you respond to bad results: in operations the challenge is to fix the process not to come up with a new one. In operations the means to the end are just as important as the end itself.
2GC combines an expert understanding of the factors that influence successful operational performance management with its extensive experience of working with organisations. The methods and approaches we use are as varied as the issues we need to deal with, but you can be sure that our work draws upon our world-class combination of technical knowledge and practical experience.
Dig deeper on this topic
Offering Incentives
Incentives work, and so most organisations link personal performance to some kind of earned rewards. To support this incentive-giving activity, organisations have developed complex and carefully managed systems to set goals for staff, to periodically assess their performance against these goals, and to act upon the outcomes of these assessments. But it is not always clear that the goals set by these processes are the right ones, or that the incentives associated with them are appropriate. Getting the right content into the incentive system is the critical factor determining whether the incentives offered will improve organisational performance. As our fourth category of performance management, incentives have some similarities with Monitoring and Evaluation when it comes to what kind of data to use, and some similarities with operational performance management when it comes to how the information affects decision making. In the paragraphs below we look at both in more depth.
Accuracy more important than speed
As with M+E reviews, you really don’t want to be rushed into paying someone a bonus (or firing them) based on questionable data. So it makes sense when designing a performance management system for incentives to build in an expectation that data accuracy will be a key requirement - even if this means a delay of a few months between the end of a period and the carrying out of a review. It also highlights the importance of these reviews being based on data.
Learning to try harder next time around
For most staff, organisational incentives relate to goals tied to the task or role the person fulfils within the organisation. Do some specific and defined aspect of this activity well and the organisation will reward you in some way beyond simply paying your wages. For this kind of offer to have traction with an employee, the activity focused on should be one over which they have influence. For example, a sales target for the territory an employee covers and for products they actually sell will have more effect on their behaviour than one for whole-company sales of all products. If you choose goals of this kind, the typical response to failing to achieve a goal during a period is to encourage the employee to try harder next time around - doing the same activity better rather than doing something else.
The controllable activities differ for each employee, as does the way you might measure the impact of those activities. So a key focus when designing performance management systems for incentives is to make it possible to select the right activities and outputs for each employee, without at the same time creating a huge, complicated tool that is difficult to work with and maintain. If the incentive system is too complex, it will still get used (because people want to know what their incentives are), but the quality of the content within the system will suffer, and the incentives given to employees will be poorly aligned with the goals of the organisation.
Dig deeper on this topic
Monitoring and Evaluation
You use performance data for Monitoring and Evaluation to assess how successful a project or unit has been at the end of a period or project. You may not be familiar with the name (which comes from the NGO community), but you will be familiar with the activity. Monitoring and Evaluation is important in the NGO community because much of the work done is project-based and paid for by external funding organisations: to continue to receive funds, the NGO needs to be able to demonstrate to these stakeholders that money given is used for (and hopefully achieves) the purpose for which it was intended. But even outside the NGO community, every post-project review, every annual performance review and most strategic reviews will include Monitoring and Evaluation type performance assessments. The mechanics of a Monitoring and Evaluation type use of performance management data are quite different to the strategic and operational uses discussed above. In the paragraphs below we explain the differences and the implications for how you design and use this kind of Performance Management tool.
Monitoring and Evaluation reviews are retrospective and focus on questions like “Did the project work as we hoped?”, “Which product range was most valuable to our firm last year?”, “How well did we comply with emissions regulations across the organisation?”. The answers to these questions lead to future policy choices (for example, deciding whether to repeat the project in another territory, which products to emphasise in our marketing, and whether we need to invest in new emissions equipment). But the decisions rarely relate to the management of the activity being evaluated - the project is already completed, and the performance period being considered has ended. This lack of direct managerial response has big implications for both the type of data collected and how managers use the output of a Monitoring and Evaluation review.
Accuracy more important than speed
Monitoring and Evaluation (M+E) decisions usually relate to future policy choices, not ongoing activity, and are usually done once (when the project ends, or at the end of the review period). So being able to hold an M+E review frequently or quickly following the end of the period is typically less important than whether the performance data being reviewed is both appropriate and accurate. As a result, M+E performance management systems put emphasis on getting the right data and making sure it is accurate, even if this means you have to wait six months until the review can happen.
Learning from feedback - next time do the right thing
The main purpose of an M+E review is to identify which parts of the past performance being reviewed were good - which parts achieved the aims that were set. As a result, the decisions taken usually concern whether to repeat the activity or project in the future (if it worked this time, maybe it will work again next time) and whether the methods used within the project should be changed in the light of experience. These kinds of response are very ‘strategic’ in nature, so it is not surprising that we see similarities between this kind of learning response and the one discussed in the strategy section above.
M+E type performance management is central to much of the management agenda in all organisations, so having the ability to set up and deliver this kind of review is critically important. Because these kinds of reviews are not part of ‘normal management activity’, they typically need to be supported by specific review processes, and organisation staff need to be trained to execute the process well and efficiently.