Something I touched on a while ago was how you go about exploring the concept of a “performance review”. We’re used to these kinds of “rational” exercises in our Cartesian world of cause and effect. A performance review takes a lot of forms: interviews, quality management processes, formal examinations, and so on.
Morgue recently posted on this topic; his words struck me as both true and discordant with the stated objectives of these processes:
The performance review isn’t about reviewing performance – that happens all the time, in an ongoing way, the manager is always reviewing the employee’s performance, the employee is always interpreting their own performance.
This has been true of most of my performance reviews over the years, and, talking to an acquaintance recently, I was struck by how this basic surety of outcome probably applies to most formal examinations where there’s a strong teacher/pupil relationship.
Perhaps Morgue’s conclusions about the process enabling communication between moving parts in the machinery of your job are okay as far as they go. However, I am also inclined to think that this is a somewhat glib dismissal of the importance of the activity in many instances. If you already know the answer, why are you asking the question? Why structure your thoughts in terms of “performance” and “review” if the “performance” is already adequately monitored, and the “review” is nominal?
The most recent performance review for me has been my application for CPEng (Chartered Professional Engineer). It would have been easy to get complacent and/or cynical about this process for two reasons:
1. I well and truly exceed the minimum standards for experience expected in the CPEng standard.
2. I have done enough “peer” reviews of work by supposedly competent professionals that the standard itself begins to seem a bit thin.
In real terms, however, I think that the truly professional attitude must be to undertake the self-review process with clear and open eyes, and attempt to fulfil the spirit as well as the letter of the standard. This is what worried me about doing it: I have never been too concerned that I couldn’t produce sufficient evidence to convince the panel of my expertise; my concern was convincing myself.
This meant that a significant part of this process for me was looking into jobs where I had fallen short of the standard: jobs where mistakes were made. Nobody is infallible, so if you look back without seeing errors, one of two things is happening: you’re missing something, or you’re not trying hard enough. The standard for a professional engineer is that you can apprehend problems and use rational methods to determine a design outcome in the absence of a template. Sure, we have codes, guides, and peer reviews, but at the end of the day the standard is higher than plugging numbers into the correct formula.
Finding mistakes is therefore a crucial learning exercise. A performance review that finds no mistakes, one that certifies the candidate’s impression of total competence, has failed: either it has not enabled the candidate to improve their practice, or it has merely confirmed that the candidate is being wasted in their current role.
In my case, I think that the performance review found some significant shortfalls – I identified several of the 12 “core competences” where I now know that I have inadequate training and experience. Having identified these shortcomings, I was able to do two useful things:
1. I undertook additional reading and study to shore up knowledge not imparted in my real practice
2. I was able to limit and control my practice area to exclude areas where I’m not competent
The nett result of my labours has been that reviewing my competence actually improved it. Even in the few short months since I began seriously compiling my evidence and working through the self-review, I have noticed small improvements in how I deal with clients and problems.
Proving my competence was always going to be the easy part of the CPEng challenge – the hard part was actually being competent.