APPENDIX A: Other Types of Clean Evaluative Interviewing
Nancy Doyle and Caitlin Walker of Training Attention have conducted clean interviews to evaluate the effectiveness of their work since 2002. Below, James précises his conversations with them about how they conducted their evaluations.
The kind of evaluative interviewing undertaken by Training Attention is the diligent recording of the state of a system ‘before’ and ‘after’ an intervention, and the comparison of the two (or more) states as a measure of the effectiveness of their work, e.g. Systemic Modelling: Installing Coaching for Organisational-Learning.
This is similar to, but not quite the same as, what we hope to achieve in our evaluation project. First, if there is a ‘before’ and ‘after’ comparison, it is done by the interviewee, not the interviewer. Second, we are primarily interested in the individual's idiosyncratic evaluation, rather than a composite state-of-the-system evaluation.

Nancy Doyle, a Chartered Occupational Psychologist, has been at the forefront of evaluating the impact of interventions in organisations. Clean interviews have been used to evaluate all of Training Attention’s organisational work, starting with Nancy’s master’s degree. This includes the Welfare to Work research published by the British Psychological Society in 2010 and 2011 (note that this research focuses more on quantitative results). The article that Nancy, Paul Tosey and Caitlin Walker wrote for The Association for Management Education and Development’s journal is an example of using clean interviews to explore and evaluate organisational change (see link above).
Below are Nancy’s answers to James’ questions:
- What makes your evaluative interviews different from other types of clean methods?
It is dirtier than other classically clean interviews. You don't start with "And what would you like to have happen?" Instead you begin with a clear concept that you want the interviewees to describe. Then you can compare (by asking questions such as "Currently, the company is like what?") the initial model of the organisation to subsequent models.

- What protocols have you designed for conducting your evaluative interviews?
The purpose of the interview is for the interviewer to understand what is happening, as well as the client gaining insight. The interviewer is using clean questions to remove his/her interpretation from the output, but not to remove his/her influence on what the kind of output should be.
We email prompt questions in advance. We then conduct interviews using the prompt questions and clean questions. This can be done on the phone.

- What have you learned from your evaluative interviewing?
You can improve the quality of interviewing in research by teaching interviewers Clean Language. With the Welfare to Work clients, interviewers were trained in Clean Language and given a list of approximately 10 questions I wanted to know the answer to. They were instructed to ask only the prompt questions and, when they needed to elaborate or clarify, to ask only clean questions. This meant they were less likely to go ‘off piste’ or to start directing interviewee responses.

Caitlin Walker said that a question they are frequently asked is, "What research is being done in this field?" Luckily, clean questions are themselves excellent research tools.
A key question they ask their customers is, "How do you know what you're doing is useful?" And they continually ask it of themselves. Evaluating the impact of their work is an ongoing challenge. Design and delivery are very different skills from those required for research and evaluation. This is not a problem unique to people who use clean approaches. Training Attention’s customers tell them that very little evaluation takes place, even for large-scale projects. Often projects begin without research into what people really want to have happen. Many projects end without meaningful feedback as to whether they have made any lasting difference. As a result, sometimes today's change processes become tomorrow's problems.
Because of Nancy's background, Training Attention have a number of well-thought-through, longer-term evaluations and case studies, including the effectiveness of:
- Diversity Training across 900 members of a Primary Care Trust
- Whole-system change in a secondary school
- Welfare to Work interventions
- Peer coaching through the transition from primary to secondary school.
All have detailed evaluation reports. Dr. Michael Ben Avie, a research affiliate of the Yale University Child Study Center, helped to evaluate the whole-school project.
Caitlin’s examples of evaluative interviewing generally involved three stages:
1. First, build a relationship between the person and their experience. This can be achieved with questions like:
- Currently, [context] is like what?
2. Then ask them to make an evaluation, e.g.
- What are you able to do differently?
- What's the difference that made the difference in achieving or not achieving [project outcome]?
- And is there anything else that made a difference?
- How did that impact on you achieving [project outcome]?
- How well or not well is [context] working?
3. Follow that with a request for a desired outcome, e.g.
- How do you want [the group] to be?
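As a rough illustration only, the three-stage question sequence above can be treated as a set of fill-in-the-blank templates, with [context], [project outcome] and [the group] supplied per interview. The stage names and helper function below are our own invention, not Training Attention's terminology:

```python
# A minimal sketch (names are illustrative, not Training Attention's):
# the three-stage clean question sequence as fill-in-the-blank templates.
STAGES = {
    "relationship": [
        "Currently, {context} is like what?",
    ],
    "evaluation": [
        "What are you able to do differently?",
        "What's the difference that made the difference in achieving or not achieving {outcome}?",
        "And is there anything else that made a difference?",
        "How did that impact on you achieving {outcome}?",
        "How well or not well is {context} working?",
    ],
    "desired_outcome": [
        "How do you want {group} to be?",
    ],
}

def questions_for(stage, **slots):
    """Fill in the placeholders for one interview's context."""
    return [q.format(**slots) for q in STAGES[stage]]

print(questions_for("relationship", context="the team"))
# → ['Currently, the team is like what?']
```

Keeping the templates as data rather than hard-coded prose makes it easy to see that only the slots vary between interviews; the clean questions themselves stay fixed.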
During stage 1, the interviews start with a very broad frame until patterns in people's attention emerge. Once a number of people have been facilitated, individually or in small groups, common evaluation themes emerge, e.g.
- The distance/closeness between members of a group
- The amount of feedback being given/received
- How well or not something worked.
At Liverpool John Moores University, seven themes were mentioned by 50% of the 45 people involved.
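The kind of theme count mentioned above (themes raised by at least half of the interviewees) can be tallied mechanically once each interview has been coded for themes. The data below is invented purely for illustration:

```python
from collections import Counter

# Invented example data: the themes each interviewee mentioned.
interviews = [
    {"feedback", "closeness"},
    {"feedback", "how well it worked"},
    {"closeness", "feedback"},
    {"how well it worked", "aggression"},
]

# Count how many interviewees raised each theme.
counts = Counter(theme for themes in interviews for theme in themes)

# Keep themes mentioned by at least 50% of the people involved.
threshold = len(interviews) / 2
common = {theme for theme, n in counts.items() if n >= threshold}

print(sorted(common))
# → ['closeness', 'feedback', 'how well it worked']
```

Using a set per interviewee means a theme is counted once per person, however often they return to it, which matches a "mentioned by 50% of people" criterion rather than a raw mention count.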
In stage 2, evaluation can be on a pre-defined scale e.g. 0-10 (where 10 could be ‘being able to work at your best’); or a personal scale, e.g.
- Uncomfortable - Comfortable
- Feeling scared - Safe
- Volume of arguments
- Level of aggressive behaviour.
The same external behaviour/circumstances will be evaluated by different people using different criteria. A peer's behaviour could be evaluated by one person on an uncomfortable-comfortable scale, while another might use their degree of irritation.
If a person is unable to come up with their own evaluation criteria, they can be given a number of representative examples – these must cover a wide range.
As a minimum, people evaluate against their own (unconscious) desired outcomes or expectations.
Numerical evaluations are interesting, but on their own they give little indication of how to learn and improve. For that you need reasons, metaphors and outcomes.