James Lawley and Penny Tompkins
Evaluating is a kind of ‘behind the scenes’ process we all do regularly. We are constantly informally evaluating people's behaviour against our own internal (often subconscious) standards. What makes a formal assessment 'formal' is that the standards and process of assessing are known and, hopefully, well defined.
A different kind of evaluation happens when we make an assessment of another person's evaluation. To do so we need to take into account their means of evaluating, which may not be the same as ours. How accurately – or not – are we calibrating another person's evaluation?
We devoted the December 2011 Developing Group to Clean Evaluative Interviewing. The aim on that day was to learn how to use Clean Language as a research interview method when the topic being researched was how people evaluate an experience.
Dr. Susie Linder-Pelz and James have recently concluded an academic research project in which six coaching sessions were evaluated from three perspectives: by the coach, the client and an expert-assessor.*
At the 2nd August Developing Group James updated the group on the findings of the research, and we explored how we can individually and collectively make use of the conclusions. In particular, we experientially investigated:
- As a coach, how aware are you of how your client and an expert would evaluate a coaching session?
- Does knowing your client's and an expert's opinions affect your own evaluation?
Calibration and Evaluation
Over the years we have approached the topic of calibrating in different ways.
We have long noticed that when people on a training course are asked to evaluate a practice coaching session, they often give an answer which differs wildly from the opinion of the client and/or us as expert observers.
For example, one coach said a session was “catastrophic”, while the client said “I got some useful insights and lots to think about”. James who was observing said to the coach, “You did what the activity called for. The client got what they asked for with their desired outcome. A more direct approach might have got to the meat earlier, and even so, you and they now have a lot more of a landscape to work with and a good basis for the next session.”
When the coach was asked what their evaluation of the session was now, having heard the opinions of the client and expert, they said, “Well I’m pleased the client got something out of it and I still think it was catastrophic”. We wonder what scale the coach was using to evaluate their effectiveness, and what they would have labelled a much worse session! (See The Importance of Scale)
Our modelling of excellent facilitators (not only those who use Clean Language) showed that a key skill was the ability to calibrate the experience of the client and to notice when it changed and in what direction. (See Systemic Outcome Orientation)
There are lots of ways to calibrate, and what seems more important than the method of calibrating is that (a) the facilitator is actively calibrating moment-by-moment; (b) there is a correspondence between the facilitator’s calibration and the client’s experience; and (c) the facilitator can quickly change in response to the results of their calibration. This led us to make the “First Principle of Symbolic Modelling” (See REPROCess and Modelling Attention):
Know what kind of experience the client is having (i.e. what you are modelling).
While calibrating is a matter of efficacy, we have pointed out that it is also an ethical matter. If you do not calibrate the kind of experience the client is having, how do you know whether what you are doing is, or is not, working for the client? (See Calibrating Whether What You Are Doing is Working – Or Not)
James and Susie’s research of coaching sessions shows that even experienced coaches and experts can give widely differing ratings compared to those of the client and to each other. While this may be surprising at first, once it is appreciated that each tends to use different criteria in coming to their evaluations, the variation makes more sense.
In our opinion, a bigger issue is the difficulty there appears to be in managing multiple perspectives when they diverge. Many certification and evaluation processes use one perspective: experts decide if a coach is competent to be certified or suitable for a job, or clients decide if they are satisfied with the service. Rarely are both taken into account. Even more rarely does the coach’s ability to calibrate both the client and the expert perspective become part of the assessment.
One reason for this may be the difficulty in comparing apples, oranges and bananas. This is compounded if the aim is to find a single composite score. The result is likely to be an arbitrary weighting of the contribution of each perspective. Rather than trying to reduce the perspectives to a single rating, an alternative is to live with the complexity of three perspectives and set acceptable levels in all three.**
By bringing our own evaluations out from ‘behind the scenes’ and making them 'centre stage' we can play with our own patterns of assuming, and get a ‘reality check’ on how and what we are unconsciously calibrating.
— — — — — —
* The first part of the study was published as: Linder-Pelz, S. & Lawley, J. (2015). Using Clean Language to explore the subjectivity of coachees' experience and outcomes. International Coaching Psychology Review, 10(2):161-174. http://shop.bps.org.uk/publications/
Download a free preprint version: Linder-Pelz_Lawley-ICPR_preprint_15_Jun_2015.pdf
The second part of the study was published as: Lawley, J. & Linder-Pelz, S. (2016). Evidence of competency: exploring coach, coachee and expert evaluations of coaching. Coaching: An International Journal of Theory, Research and Practice.
Download a free preprint version: Lawley&Linder-Pelz_CIJTRP_preprint_03_May_2016.pdf
** We are grateful to Michelle Duval who helped us to get clear on this point.