Wednesday, 19 September 2012

Pendleton Plus goes international

Following the presentation and workshop on Pendleton Plus at the European ATLS meeting in Berlin in April, Pendleton Plus will be used in the 9th edition of the ATLS manual from early 2013. It is already being used successfully in a number of European countries and feedback has been positive. Users report liking the flexibility it gives them to explore performance more deeply.

Friday, 29 June 2012


Pendleton Plus

Facilitating analytical feedback and reflection

A year ago I wrote about the debate surrounding Pendleton’s Rules for feedback and gave my view on why a structured approach is important.

A year down the line, I have amended Pendleton to reflect the good use I see it put to on a regular basis. My changes to the original address the concerns most often raised by critics of the framework:

Criticisms of Pendleton’s Rules:

* Pendleton's Rules often lead to narration of events: "I introduced myself, I introduced the topic, I asked a question...." (the 'what') and can fail to consider the analysis and application (the 'why' and the 'how') of the episode and the implications for future practice;
* Pendleton's Rules can leave learners unsure as to the quality of their teaching episode - was it good, poor, catastrophic? - because of the 'balanced' feedback. Candidates want (and need) to know how well they did;
* People struggle to give constructive feedback; when they do, they tend to say either what was suboptimal or how it could be improved, but not always both.

I have heard from those who have used a less structured approach to feedback that, whilst it can be done extremely well, it is not done well by up to 50% of the learner group (anecdotal, not evidence based using an RCT!).


The challenge is how to introduce analysis and evaluation in a simple, structured format that even the most reluctant facilitator of feedback can confidently tackle.

The product of extensive discussions and some pre-piloting is presented here. ‘Pendleton Plus’ retains the principles of Pendleton's Rules (the learner self-evaluates first; positives are usually discussed first) but makes a couple of slight changes:

Pendleton Plus:

  1. Insight: Coach Ask: “How do you think that went?” (to find out the level of insight of the learner)

     Headline: Coach Tell: “I thought that was excellent | very good | good | OK | slightly problematic | problematic – let’s go back through what you did and look at each part, as this can be improved”

  2. What went well: Coach Ask: “Let’s look at what you think went well.” Add “why” and “how did you do that?” questions where relevant to promote analysis

  3. What to improve: Coach Ask: “Let’s look at what you want to improve or develop.” Add “why” and “how would you do that?” questions where relevant to promote analysis

  4. Action Plan: Coach Ask: “What will you do to take this forward?”

Step 1 is to briefly ascertain the insight of the learner, followed by a simple headline evaluation from the facilitator:
So the facilitator asks the learner to sum up in one word how they felt they did. It is important here not to let them dive head first into regurgitating the narrative of their episode, or metaphorically beat themselves around the head in anguish. This is a quick stock check, enabling them to give a gut feeling, and you to add yours, before the real analysis starts.

Step 2 asks Pendleton's initial question, 'What did you think worked well?' The difference here is that where previously the facilitator would often listen to the retelling of the teaching story and then provide their own (often second) teaching story, this time the facilitator prompts the learner with 'why?' and 'how?' questions: 'Why did that question bring them to life like that?' 'How did you move them on to that point?' In this way Pendleton steps 1 and 2 (learner positives, trainer positives) are covered in one stage, with the facilitator questioning, prompting and, if necessary, making observations to develop the learner's understanding of what went well, why, and how they achieved it.

Step 3 does the same but with the areas for development. Again, analysis should be encouraged through questions and observations: 'Why do you think they fell silent at that point?' 'I noticed you looked uncomfortable then - why do you think that was?' Development for the future would extend this conversation using 'how' questions: 'How would you deal with a silence like that in future?' 'How would you avoid...?'

Step 4 ought to be a quick summary from the learner of one or two points they did well and one or two points they intend to address.


Having piloted this, I was surprised by the ease with which people picked it up. Familiarity with the original Pendleton clearly helped, as did the clear structure. I was concerned that this would take longer than the previous system, but this was not the case at all. In fact, many feedback sessions were a little briefer, as they cut out the time-consuming narration. As long as the group understood the principles involved, they seemed to have no trouble at all.

A useful paper, which I am sure you will all recognise, supports the principles of Pendleton Plus and may be worth having to hand if anyone asks for further information: Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ 2008;337:a1961. doi:10.1136/bmj.a1961

Do let me know of your experiences in using Pendleton Plus.

Sunday, 29 May 2011

Pendleton's rules

At the recent ATLS National Day and the European meeting of ATLS Educators there were discussions about other courses trying out different approaches to feedback. It seems as though Pendleton's rules for feedback (http://www.gp-training.net/training/educational_theory/feedback/pendleton.htm) are falling out of favour.

I was asked why this may be the case. My personal view is that it is not the Pendleton framework that is at fault, but the struggle many people have either to understand the point of Pendleton, or to conduct a developmental conversation that is specific, based on behavioural evidence, and sufficiently constructive.

I have discussed various forms of feedback on here previously (see DEBRIEF), but wanted to make out a case for the use of Pendleton's rules.

For me the benefit of Pendleton is that it reaches all stages of learning, from the competent to the incompetent, and from that which we are aware of to that which we are not aware of.

As I have written before, learning is a matter of developing both competence and conscious awareness. We progress from a state of not knowing what we do not know or cannot do (unconscious incompetence), through the stage of being aware of what we do not know or cannot do (conscious incompetence), to one of knowing what we know and can do (conscious competence), and finally to knowing what we know and doing what we do without always being aware of it (unconscious competence).

If this is the case, and this learning curve describes the rudimentary stages we progress through when learning a new skill or behaviour, then feedback needs to access each of these stages of learning.

Pendleton's rules map onto these stages of learning beautifully:

1. Asking "What went well with that?" accesses the conscious competence quadrant and focuses the learner's mind on practice that needs to be repeated in future. This question also allows for the teacher to assess the levels of insight the learner displays, in their self evaluation.

2. Further discussion of what went well, led by the teacher, develops the good practice and may access areas of strength of which the learner has no awareness (unconscious competence).

3. Asking the learner what they were less pleased with and what could be developed further also checks insight levels, and accesses the consciously incompetent areas of practice.

4. Finally, discussing with the learner what the teacher feels needs to be developed (with an action plan to do so) accesses the unconsciously incompetent quadrant.

Pendleton offers a framework within which we can discuss all aspects of the learning curve, including those areas of competence and incompetence known and unknown to the learner. What we need to do within each of the four questions advocated by Pendleton is to be specific about the strengths and areas for development, and not shy away from being honest in our descriptions of the behaviours we have observed. And we must always encourage further actions that develop the weaker areas.

Monday, 16 May 2011

Proof it can be done!

A great new article from a trainee who fell victim to the curriculum changes from BST to ISCP.

This article shows how hard Rafay worked and what he has achieved. He is now deservedly in ST3 training in the specialty of his choice.

http://careers.bmj.com/careers/advice/view-article.html?id=20002902

Thursday, 13 January 2011

DEBRIEF: A reflective tool for workplace based learning
Hayley Allan

Why do we sometimes act inconsistently? Our heart is pulling us in one direction and our head is insisting on another. What about that gut feeling we have? You know that niggling feeling in the middle of the night? The knowledge, deep down somewhere that what we are doing is not totally right?

Survival has always depended on gut feeling. Humans would long be extinct if they did not have the ability to know instinctively when something is wrong. How many times do we hear someone saying that they “just knew” something was wrong? Paediatricians know that if the mother is worried, they ought to be worried. The girl who was mugged outside the internal front door of her second-floor flat knew there was something not quite right. What is this sixth sense we all have, and why do we ignore it at our peril?

Call it intuition, call it experience. (Which you choose will depend on where you were schooled. If in the arts or social sciences, you may favour intuition as a term for this phenomenon. Medics and those who deal with the allegedly more concrete world would call it experience.)

Experience is a tremendous learning tool because it develops in us over time; it steadily drip-feeds our psyche while we work, building up pattern recognition. It swells the coffers of our intellect, adding to the vault of events and feelings that our mind stockpiles over the years. It is a rich resource. Many people, especially those who learned their craft through trial and error, through experimentation and a throughput of events, believe there is no other way to learn.

But what if that accumulation of experience could be fast tracked? What if, instead of laying down lots of fifty pence pieces in the bank vault (individually heavy and of low value) we put by the (admittedly less frequent) stash of ten pound notes we came across? Is there a way to take more learning from fewer learning experiences?

Learning from experience and developing metacognition

DEBRIEF is a tool that enables reflection to take place between a number of people or individually. It provides a structure for reviewing an event in an emotional, a cognitive and a practical way, thus addressing the psychosocial and practical elements of learning. It has been acknowledged that learning is facilitated or hampered by emotions (Boekaerts 1993; Goleman 1995) and that emotions drive learning and memory (Sylwester 1994). Learning is not a purely cognitive process (Le Doux 1997; Gross 2008; Love & Goodsell 1996). Much has been said about the emotional impact of learning. If we agree that learning is based on experiences, then we cannot deny that emotions will play a part in those experiences and in how we process them. The DEBRIEF model, in the constructivist tradition, helps learners to “take responsibility for their own learning, to be autonomous thinkers, to develop integrated understandings of concepts and to pose - and seek to answer - important questions” (Brooks & Brooks 1993).

Freire (1970) argued that learning and education are transformed through praxis, that is, “reflection and action upon the world in order to transform it.” Vygotsky (1978) identified a zone of proximal development: a gap in experience between two people at different levels of performance, which can be used to “scaffold” (Wood et al 1976) the learning of the less experienced of the two. Scaffolding works best when functioning in a situated context or a Community of Practice (Lave & Wenger 1991) and remains the most practically useful way for many in training to learn. DEBRIEF offers a simple structure for such scaffolding to follow, but can also develop in time into internal DEBRIEFing, or metacognition.

Where are the trainers? And how do I know what I don’t know?

In current workplaces, trainees rarely have a supervisor with them all the time; the luxury of a more expert person always being at the elbow of the novice to question and support them in their thinking and practice is but a dream. If medical training is to be ‘trainee driven’ (ISCP 2010; RCPath 2011; RCP 2011), then learners have to recognise their own learning needs and seek out an expert with whom to discuss those needs. However, many needs, or gaps in knowledge and uncertainty about practice, fall into the zone of ‘unconscious incompetence.’ How can we know what we do not know if we do not know it and do not know that we do not know it? Once we are consciously incompetent (that is, we know what we don’t know) there is not a problem, but often this conscious awareness has to be raised by either experience or a supervisor. If the supervisors aren’t there, we return to the learning-by-mistakes method, which is no longer tenable in the twenty-first century.

This is where gut feeling comes in.


Fig 1. The role of conscious awareness in the development of competence.

There is a halfway house between unconscious incompetence and conscious incompetence where gut feeling resides. It is a small space, barely perceptible to some, but it can be developed given practice and the right conditions. Gut feeling can alert the learner to an inconsistency, or a ‘perturbation’ (Piaget 1954, 1971), and it is then the role of the learner to pursue this. There are several ways to do so; usually the educational response is to seek out a supervisor or mentor for a conversation such as a Case Based Discussion, or to reflect on the perturbation independently. Both approaches can have limited effect. A CBD may yield a sophisticated level of analysis, resulting in new levels of understanding for the trainee and even for the trainer. Often, however, the conversation becomes didactic and theoretical, and the synthesis between knowledge and application can be lost. Independent reflection is a good habit to develop, but if the gut feeling is not explored purposefully and systematically there will be no real development beyond the ever-decreasing circles we can be trapped in when trying to work out ‘what went wrong.’

DEBRIEFing

Debriefing is a mixture of reflecting and teaching. Using a framework to discuss the gut feeling with a more experienced colleague can lead to unexpected revelations. Once the framework has been practised several times, it can work without another person’s input. If the learner becomes accustomed to following the steps in the model, s/he can uncover information and understanding to which they did not previously have conscious access, whether with a peer or alone.


DEBRIEF model



Describe events as factually as possible

Evaluate what went well/to change next time

Bring out emotions/values/beliefs/assumptions that cloud judgement and development

Review and analyse in light of previous experience; what a colleague would have done

Identify lessons learned

Establish follow up actions

Feedback on actions


©Hayley Allan 2009


Fig 2 DEBRIEF model
 
How does DEBRIEF work?

DEBRIEF is more than reflection. It is a series of questions asked of the learner which promote recall of the events, evaluation of his or her role in them, and a psychological review of their impact on the learner’s sense of wellbeing, before addressing the cognitive impact and reviewing the account for similarities with previous behaviour. Often perturbations (Piaget 1954, 1971) occur because we repeat behaviour which is a function of emotional or psychological triggers from past experiences. It is only when learners can look back in a safe environment, knowing that whilst they may have made mistakes they also had a positive effect on some of the events concerning them, that they are free to own those mistakes. Emotions can block cognitive development and progress, so examining the feelings, beliefs, assumptions or motives behind the learner’s actions is often the pivotal point at which the individual unblocks that repetitive behaviour or identifies the gut feeling causing the perturbation. Being able to move on to action planning, once what has been learned is identified, is a positive and valuable way for the learner to move on from the event.

How does DEBRIEF differ from regular reflection?

Reflection often follows the learning cycle of Kolb (1984) but rarely bridges the gap between the action and reflection stages, or between the reflection and theory stages. Most learners are not able to make those large transitions alone, without additional structure.



Fig 3 Kolb’s reflective practice cycle




Fig 4 Kolb’s cycle with DEBRIEF added

DEBRIEF provides a stepwise structure to enable the learner to progress through each of Kolb’s learning points, looking in turn at the behavioural, psychological and cognitive elements of the practice. By following these steps when reviewing an experience, the learner is not only processing the experience itself, but is also developing metacognition which will enable further reflection on future experiences.

Building on Pendleton

Many trainers use Pendleton’s rules in discussing performance with trainees. Pendleton’s central tenets are that the learner’s comments precede the trainer’s, and that positive features precede developmental areas. This is included in the second step of the DEBRIEF model, in order to review the actions taken and the behaviour used. Without the emotional and cognitive areas of the experience being examined too, the feedback can remain formulaic, focused on the surface actions rather than the motives or thinking underlying them.

The impact of DEBRIEF on learning

DEBRIEFing makes learners feel in control of their work; by instigating the process and learning from an honest DEBRIEF, a trainee can relate to the strengths and areas for development within their practice, can understand the psychological impact of events, and is able to access theoretical, emotional and practical developmental strategies to change that work for the better. Ownership of learning, especially in the workplace, is a fundamental driver for progress. DEBRIEFing using this structure is a versatile process. A Case Based Discussion can easily turn into a DEBRIEF session, as can a session using Pendleton’s rules for feedback. A learner can seek out a supervisor, a more experienced colleague or a peer and discuss the event using the model outlined here. Alternatively, the model can be applied to the event independently, using honest self-disclosure to explore the issues and impact.

Conclusion

In a postgraduate medical training world where the pressure on trainers to teach has never been greater, but the time for teaching has never been more limited, the DEBRIEF model equips learners to structure and guide their own learning, utilising their supervisors, senior colleagues, peers and their own reflections to make sense of their daily experiences. Learning by slow accumulation of pattern recognition is no longer tenable; a smaller number of experiences puts greater pressure on trainees to learn and develop from each one. DEBRIEF offers a comprehensive model for them to do this.


Bibliography

Boekaerts M (1993). Being concerned with well-being and with learning. Educational Psychologist 32(3): 137-151.
Brooks JG & Brooks MG (1993). In Search of Understanding: The Case for Constructivist Classrooms. Alexandria, VA: Association for Supervision & Curriculum Development.
Freire P (1970). Pedagogy of the Oppressed. New York: Continuum.
Goleman D (1995). Emotional Intelligence. New York: Bantam Books.
Gross M et al (2008). Emotions and feelings in learning process: understanding emotional learning experiences of postgraduate students. ESREA Life History & Biography Network conference, Canterbury Christ Church University, UK. http://tallinn.academia.edu/MarinGross/Papers/253692/Emotions_and_Feelings_In_Learning_Process_Understanding_Emotional_Learning_Experiences_of_Postgraduate_Students
ISCP (2010). https://www.iscp.ac.uk/home/principles_intro.aspx
Kolb DA (1984). Experiential Learning: Experience as the Source of Learning and Development. Prentice-Hall.
Lave J & Wenger E (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
Le Doux J (1997). The Emotional Brain: The Mysterious Underpinnings of Emotional Life. New York: Simon & Schuster.
Love PG & Goodsell A (1996). Enhancing Student Learning: Intellectual, Social and Emotional Integration. ASHE-ERIC Higher Education Report 95-4 (Volume 24-4). http://www.ntlf.com/html/lib/bib/95-4dig.htm
Pendleton’s Rules: http://www.gp-training.net/training/educational_theory/feedback/pendleton.htm
Piaget J (1954). The Construction of Reality in the Child. Translated by Margaret Cook. New York: Ballantine.
Piaget J (1971). Psychology and Epistemology: Towards a Theory of Knowledge. Translated by Arnold Rosen. New York: The Viking Press.
RCP (2011). http://www.jrcptb.org.uk/assessment/Pages/Workplace-Based-Assessment.aspx
RCPath (2011). http://www.rcpath.org/resources/pdf/definitions_of_assessment_tools__ar.pdf
Sylwester R (1994). How emotions affect learning. Educational Leadership 52(2): 60-65.
Vygotsky LS (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Wood DJ, Bruner JS & Ross G (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry 17(2): 89-100.

Saturday, 12 June 2010

Supporting medical trainees - a new course

BEST (Building Excellence in Specialty Training) is a new course I am running on behalf of several Deaneries around the UK, to explore much-needed ways for trainees to gain access to opportunities for learning on the job.

With the advent of the latest report on training under the European Working Time Directive (EWTD) by Professor Sir John Temple (http://www.mee.nhs.uk/PDF/14274%20Bookmark%20Web%20Version.pdf), which highlights the need for dedicated training and support for junior doctors, BEST addresses one of the shortfalls in the new WBA (Workplace Based Assessment) and competence-based training system.

The Training and Assessment in Practice (TAiP) course, which I wrote and have delivered for the last three years on behalf of the Royal College of Surgeons of England, has demystified for over a thousand consultants what the WBAs are and how they complement existing good training practice. However, there has been no such provision for trainees. Many are still using the WBAs incorrectly, as retrospective, virtual scoring forms rather than face-to-face engagement and training tools. They struggle to identify their Educational Supervisor and to gain meaningful Learning Agreements from them. Even when trainees understand the formative and developmental nature of the WBAs, they are often working with trainers who are enculturated into a summative, secretive pass/fail mentality, because that is all they have known during their own training.

BEST is available to equip trainees with the skills they need to manage their own training, to access the appropriate opportunities and to engage their trainers within a system they are mandated to use.

Training at Foundation stage and beyond in Core and Specialty training programmes is now built around the use of Portfolios to document evidence of competence and progression. Use of the new Assessment tools (Mini CEX, CBD, DOPs, PBA and a range of 360° feedback) is here to stay and the BEST course will:

• Develop understanding of the assessment tools and how best to use them in everyday practice;
• Show how to manage the Learning Agreement;
• Identify immediate learning needs;
• Consider how to get the best out of trainers;
• Identify ways to ensure trainer observation sessions;
• Use the tools to gain training rather than assessment;
• Manage the reception of feedback from your trainer;
• Agree ways to action plan further learning;
• Identify the appropriate people to ask to conduct your 360° feedback;
• Agree follow up activities to show progress after your 360° feedback;
• Support the development of simple approaches to structure reflective writing;
• Develop and improve a piece of your reflective writing;
• Identify the use and value of a portfolio;
• Discuss ways to add quality to your portfolio.

Sessions are delivered using live demonstrations for candidates to discuss, group work and paired work.

By the end of the course you will have:

• Prepared a PDP
• Drawn up an action plan
• Managed your feedback from a trainer
• Identified follow up activities from an assessment tool
• Improved a piece of reflective writing
• Received advice on your portfolio

Please email me for further details. There are a variety of ways that BEST can be accessed.

Hayleyallan@tiscali.co.uk

Thursday, 27 May 2010

Who needs Learning Outcomes?

I was working this week on a new course with another educationalist. When it came to the Learning Outcomes, she, knowing my position on them (that they are great to put in at the end of the programme but no use at all when designing learning events), said, "Let's not start with the LOs...."

Hurrah! said I, and we began to design our session using key messages and activities to achieve those messages. What a joy!

I rummaged through my archives to find a copy of this article, which speaks for itself. I wish to reiterate that LOs are great to put in a course programme so that learners see what they are going to get out of the day. But to start off with them when planning learning? Only if you are a businessman.....!

Who needs learning objectives?
Posted by Charles Jennings in Strategy, The training cycle on Tue, 28/07/2009 - 08:42
• This article looks at the case against creating learning objectives
• It explores how learners are often discouraged by box-ticking and how trainers need to implement better systems at helping learners retain information
• Charles Jennings cites examples of how effective learning can be achieved without the use of learning objectives

How many times have you embarked on some formal learning, whether in a classroom or through an elearning or blended course, and the first thing you’re presented with is a list of rather bland learning objectives? This begs the question, are lists for losers? Charles Jennings considers the evidence.

1. At the end of this course you will be able to tie your shoelaces in a double bow
2. At the end of this course you will be able to use a blender to make a tasty fish milkshake
3. At the end of this course you will be able to make gold out of base metal
4. and so on...

Apart from being some of the most de-motivating writing any of us have ever read, lists of learning objectives are the worst possible way to create an environment for learning. In fact, they are often the first barrier to real learning. Why so?

Two basic problems
I see two basic underlying problems with learning objectives. Firstly, many training and development specialists continue to apply a model of learning objectives that was developed more than half a century ago in a context that they don’t really understand. It’s a model that was ‘of its time’ and, although some of the principles still apply, certainly isn’t as relevant in the 21st century as it was in the mid-1900s, even accepting the view that formal learning still has a place in developing people.

Secondly, many training and development specialists are learning obsessed rather than performance obsessed. Their focus is on delivering content and assessing its retention by learners – on ensuring learners ‘learn’ rather than enabling people to ‘do’. Giving fish rather than fishing rods.

"There’s a strong argument that proof of achievement of learning objectives as commonly assessed at the end of the learning event doesn’t even measure learning."
Subsequently their learning objectives tend to be built around a set of post-course assessments. Even then, the way in which the ‘learning’ is assessed is often so poor that it only measures short-term memory retention rather than real learning and behaviour change.

A nod to Bloom
Back in 1956 when Benjamin Bloom and his committee members developed a taxonomy of learning objectives they were working in a very different world than we live in today. Reductionism and codification were the dominant mindsets. The standard approach to teaching at the time (and it was ‘teaching’ rather than ‘learning’) was to design courses and programmes so that students should take the same time to reach a specified level of mastery.
It was a crude approach where the hares won and the tortoises lost. Bloom was kicking against this with his taxonomy. The three learning domains of Bloom’s Taxonomy (cognitive, affective and psychomotor) were, in some way, an attempt to overlay some of the complexity of the learning process on what was seen at the time as a rather deterministic and mechanistic endeavour. Bloom was, underneath it all, a progressive. A former student once described him as "embracing the idea that education as a process was an effort to realize human potential, indeed, even more, it was an effort designed to make potential possible. Education was an exercise in optimism." (Elliot W. Eisner in the UNESCO Quarterly Review of Comparative Education, 2000).

Bloom himself saw beyond learning objectives; to him they were simply a means to an end. He was convinced that environment and experience were very powerful factors influencing human performance. It’s worth noting that his last book, published just six years before he died in 1999, was ‘The Home Environment and School Learning’. He certainly wasn’t hung up on learning objectives. Bloom’s view of learning was the need to focus on target attainment rather than the ‘race to the finish post’ as was common in the 1950s. It was, in reality, a belief in learning as an enabler. At the time, Bloom was addressing an important issue through his learning objectives; today that battle has been won.

Learning objectives and improved performance
So why, 50 years on, do we still have this slavish adherence to presenting learning objectives at the outset of courses in some mechanistic manner, and often skewed to the cognitive domain? It’s often ignorance, and sometimes simply a desire to make the life of the trainer easier, I’m afraid. And sometimes it’s just marketing. Learning objectives are really only useful for the people designing the learning. If used well they can form a helpful framework for instructional designers. However, they should be kept well away from learners or course recipients. If a course is well-designed and targeted to meet a defined performance gap, a list of learning objectives serves absolutely no purpose other than to dull the enthusiasm of those embarking on a course of study.

What any learner, and their manager, wants to know is whether on-the-job performance has been improved through some formal learning intervention. In other words, whether the experiences that the employee had during formal training have resulted in changed behaviour and performance in the workplace. Achievement of learning objectives is not evidence of this. The ability to pass a test or demonstrate a skill in a classroom setting is not the same as being able to do so in workplace conditions. I suppose the notable exception is where the classroom conditions mirror exactly, or almost exactly, the workplace – such as training pilots in a flight simulator. Still, I don’t imagine any one of us would take kindly to flying in a plane with a pilot who has only proved his or her performance in a simulator and hasn’t a modicum of experience in the air, unless there isn’t an alternative.

"Learning objectives are really only useful for the people designing the learning."
In fact there’s a strong argument that proof of achievement of learning objectives as commonly assessed – at the end of the learning event – doesn’t even measure learning. Sometimes the time lag between end-of-course testing and attempting to put the learning into action is such that the ‘learning’ is lost from short-term memory. At other times the work environment is less ‘controlled’ than the learning environment and the added variables mean performance improvement simply doesn’t occur. Most of us have seen situations where people return bright-eyed and bushy-tailed from a training course with plans to do things differently – time management, project management and people management training are good cases-in-point - only to revert to the old ways as soon as the day-to-day pressures of the working environment kick back in.

Measuring performance
If you are going to assess the impact of a course on individual participants’ performance in the workplace, you need to forget about learning objectives; they are not the tool for that job. Remember, learning objectives may be useful to help you create a logical design, but that’s all they’re useful for. When you get to measuring transfer of learning to the workplace, you need to engage with the people who are in a position to observe behaviour and performance, and those who are in a position to measure outputs. This usually means the manager and the team member who is responsible for maintaining performance metrics for the business or team – the balanced scorecard metrics or similar.

This approach requires training and development managers and instructional designers to engage with business managers and agree on strategies for measuring the impact of the learning before the learning design phase even starts. A good way to do this is to roll it into early engagement with business managers in defining the performance problem to be solved, whose performance needs improving and whether training is likely to help solve the problem (which is usually ‘no’, but sometimes ‘yes’).
In most cases performance change can’t be measured immediately after the training if it is to be meaningful. Take the case of transactional work – data entry or call centre operatives, for instance – where the proof that training has led to improved performance requires data taken over a period of time, and not just from the first day or two back in the workplace. All this requires more thought and effort than writing a few overarching learning objectives (even if in well-formed behavioural format) and then developing assessments to ‘test’ whether they’ve been achieved or not. And it requires different skills of the training and development team.

Charles Jennings was chief learning officer at Reuters and Thomson Reuters. He now works as an independent consultant on learning and performance. Details of Charles's consultancy work and his blog can be found on his website: www.duntroon.com

Wednesday, 26 May 2010

Reflective Practice - here is how to do it!

http://www.rcog.org.uk/files/rcog-corp/uploaded-files/ED-Reflective-Prac.pdf

This excellent PDF from the Royal College of Obstetricians and Gynaecologists has everything you need to know about writing up your reflections in medical training.

Saturday, 15 May 2010

Article in BMJ for medical trainees

http://careers.bmj.com/careers/advice/view-article.html?id=20001007

This article published in the BMJ this week gives a good overview of the structure of training and the role of the trainee and trainer in negotiating access to training opportunities.

With the Royal College of Surgeons of England, I have worked with over a thousand consultants and senior trainers to establish understanding of the Workplace Based Assessment tools and their place in medical and surgical postgraduate training.

In partnership with some forward-thinking Deaneries, we are now offering such courses for trainees. BEST (Building Excellence in Specialty Training) is a one-day course for all Foundation and Core (or ST1 and 2) trainees who wish to master the approaches needed to get the best out of their training and their trainers.

Thursday, 13 May 2010

Cognitive Apprenticeships

While researching for a course I am writing I found this article on the web which I thought was most interesting. Despite it being about school classrooms - and American ones at that! - it offers a great example of how we can develop active apprenticeships in our more constrained educational environments. I shall certainly be looking at how I can incorporate the key elements of this work into my new course.

http://projects.coe.uga.edu/epltt/index.php?title=Cognitive_Apprenticeship

Monday, 26 April 2010

Early Adopter, Early Cynic, Blind Complier or Blind Rebel? How do you react to training courses?

Surgeons' reactions to training, assessment and management courses are understandably at a low ebb. With the PMETB trainer requirements deadline earlier this year, many consultants found themselves mandated to attend a range of courses in Educational Supervision; Clinical Supervision; Training the Trainers; WBA tools; Assessment and Appraisal; Equality and Diversity; Trainees in Difficulty; or 'Manual Fire Bucket Assessment Handling', as one surgeon referred to the homogenised mass of courses he had to take.

Disparity in the content, delivery and quality of these courses has led to a lowest-common-denominator perception among participant groups. Think of the worst course you have ever attended, multiply it by ten, and you have the level of underwhelming expectation with which most groups greet their latest day out of clinical practice.

In little over three years, well over a thousand participants (estimated at over 1,200 at the time of writing) have been through the Royal College of Surgeons of England Training & Assessment in Practice (TAiP) course. Such numbers make it possible to analyse the differing reactions of participants to the course. Four predominant types emerged:

• Early Adopter
• Early Cynic
• Blind Complier
• Blind Rebel


TAiP was developed by an educator and a group of surgeons to support consultants in the use of the ISCP, a new training and assessment programme developed by the intercollegiate surgical body in response to MMC (Modernising Medical Careers) and the advent of the EWTD (European Working Time Directive). Although TAiP contains strategies to support consultants in using the new WBA (Workplace Based Assessment) tools, many participants see the course as an imposition.

The aim of the TAiP course is to give everyone the information they need to use the ISCP system in accordance with good training practices. This requires understanding of the system and a willingness to use it.

In analysing the different responses to the course, it became clear that two predominant factors are central to a person's response. The first concerns capability: whether the surgeon comes to the course with either an understanding of ISCP or the capability to develop one within the day. The second relates to attitude: does the surgeon have the willingness to work with the system? Some participants arrive with an attitude of open-mindedness, or compliance with suggested new approaches. Others have greater resistance to any suggestion of change and are determined not to comply with whatever is suggested. This attitude is very often predetermined by factors outside the area of responsibility of the group facilitators.

Understanding these patterns of behaviour can assist the facilitators to relate appropriately to each participant and to manage the course in slightly different ways, according to the group make up. By examining the four permutations of the aforementioned factors we can see that each response type presents its own challenges to the facilitators.

Early Adopter - high capability, high compliance
Early Cynic - high capability, low compliance
Blind Complier - low capability, high compliance
Blind Rebel - low capability, low compliance


The Early Adopters in a group have high levels of both compliance and capability. They are often keen to make sense of a new system and to find ways to implement change. They are not afraid to be seen to be different, and for this reason often occupy positions of leadership. Early Adopters do bring their own challenges to the group facilitators. Whilst their open-mindedness means they are willing to look at new perspectives, their high levels of capability require the facilitators to have a sophisticated grasp of the issues and perspectives, the knowledge frameworks and the medical settings within which the participants work. If facilitators demonstrate credibility and considerable levels of knowledge and understanding, and harness the inventiveness of the group to a common goal, even letting them think they have developed original ideas, the Early Adopters will react overwhelmingly positively to the course.

The second group are called Early Cynics because their lack of compliance means that they often announce their cynicism at the start of the course. They can initially appear very challenging to the facilitators, but by asking about their cynicism it is easy to tell whether or not it is informed. Early Cynics can be the most rewarding of all participants, due to their high levels of capability. If their cynicism is ill informed, they are not Early Cynics at all, but fall into the fourth category, Blind Rebels. There are some Early Cynics who have high capability but who are so fearful of change that their high level of non-compliance reduces them to the fourth category too. But real Early Cynics have high levels of capability; their ideas are often persuasive and they are quick to grasp new information. It is their intelligence that will triumph over their non-compliance and result in a positive response to the course at the end. For course facilitators, Early Cynics present two challenges. Their aggression at the start of the course can be disruptive and hold up progress; if they are given a limited space to air their views, this will “park” their scepticism. Then, in a similar way to the Early Adopters, the Early Cynics require facilitators who are knowledgeable and who demonstrate persuasive, informed argument. (Cynicism with poor levels of understanding is not cynicism but rebellion. For this reason the only true cynics are those who know what it is they do not believe in.)

The third group of respondents appear to be relatively easy members of the group initially. Called Blind Compliers, they are characterised by apparently high levels of compliance and low levels of understanding. They ask few questions and appear to be absorbing what is being said. However, without careful handling this group can easily leave with little more than they came with. Their compliance means they are not likely to question the frameworks being presented, but unless all the details of the course, including acronyms, roles, responsibilities and relationships, are spelled out to them, they let much of the new perspective wash over their heads. Whilst they may present little by way of attitudinal challenge to the facilitators, the faculty must check that Blind Compliers are following, understanding and integrating what they are exposed to on the course into clinical practice. If there are lots of Blind Compliers in a group, a course can become a one-way didactic session that never really gets beyond the basics of the content, so facilitators must ensure that these learners are made to think for themselves too. If the overall group has a mixture of Blind Compliers, Early Adopters and Early Cynics, it challenges the facilitators to cover the basics in a simple way but also to extend the complexity of the arguments for those with higher capability and understanding, as already discussed.

The final group is probably the most challenging, but they often have the greatest need for the course. With Blind Rebels, both compliance and capability are low. Most often they have attended the course because they have been forced to do so, either as a result of an appraisal action or because there have been threats associated with non-attendance. This group have developed a stance of non-compliance as a result of their lack of understanding or willingness to engage with new ideas, and see courses as threats to their professional standing. They will argue fiercely against any proposed change, finding a range of people to blame for the changes they see as having been externally imposed. The difference between this group and the Early Cynics is that the Blind Rebels do not have the capability to argue with any degree of information or logical reasoning. Indeed, the more vociferous they become, the more the rest of the group begin to disengage from them. This group of participants can be very disruptive, as they can raise a comment or an objection to every point made. The challenge for the faculty is to maintain composure in the face of often rather offensive behaviour, and to remember that this group need instruction as careful as that given to the Blind Compliers. It would be encouraging to think that information would assist the Blind Rebels to overcome their non-compliance, but for many in this group it is often not enough. For those who are determined to destroy the course, it may be necessary to respond to their ill-informed arguments somewhat aggressively, highlighting where they are misinformed and emphasising that both information and a change in attitude would indeed help them to overcome their grievances. Often it is only their self-instigated isolation from the rest of the group that finally reduces them to silence. Frequently they have to be allowed to be hoist by their own petard, necessitating a group attack on their disruptive behaviour.

In an unrelated field, but one which may be interesting to compare, similar findings have been discussed with regard to people's responses to new technology. Everett Rogers' Diffusion of Innovations theory identifies, for any given product category, five categories of product adopters:

o Innovators – venturesome, educated, multiple info sources;
o Early adopters – social leaders, popular, educated;
o Early majority – deliberate, many informal social contacts;
o Late majority – sceptical, traditional, lower socio-economic status;
o Laggards – neighbours and friends are main info sources, fear of debt.


However, this pattern is a linear one, describing types of people within a range of demographic factors, including social background, psychological make-up, educational history, personality and popularity, economic situation, social influences and fears. In the paradigm used with surgeons responding to courses, I look simply at the responses relating to two factors: compliance and capability.

For those of us involved in education and training, this paradigm provides an interesting perspective on the challenges facing us in any group of course participants. A surgeon cohort should not usually be perceived as a mixed-ability group in the usual definition of the term; all consultant groups must surely share a similar level of intelligence and motivation to have achieved the position of consultant. But new initiatives, coupled with the disenchantment surrounding the many changes we have seen in the last five years, mean that attitude to change, as well as engagement with it, results in a mixed-ability reaction to training courses.

For course facilitators, such groups can be very challenging for a variety of reasons. Understanding the factors behind the behaviours of surgeons attending such courses can help faculty to respond appropriately to each type of participant, ensuring maximum success and minimum disruption for each course group.

Sunday, 28 February 2010

Feedback for Learning (FfL)

We still haven’t mastered feedback to drive learning. In the medical world, feedback is seen as a commodity rather than a learning process. This needs to change.

A friend and colleague of mine says that feedback is the oxygen to the soul and that we shouldn’t make people gasp for it.

How true it is that many of us out there are wheezing for some guidance as to how we are doing. We too often work in the dark, with little idea of how well we are doing or of what we are getting away with that could be better.

“Feedback is the corner stone of effective clinical teaching.” (Cantillon & Sargeant 2008)

This kind of statement is rife in medical education and, whilst it is not untrue, it gives little away about what feedback is, how we should approach it, who is best placed to engage in it and when it should happen.

A recent article in Medical Education (Archer, Jan 2010) says that "only feedback seen along a learning continuum within a culture of feedback is likely to be effective." Whilst I agree with this statement most emphatically, I am not sure the article goes far enough in its portrayal of feedback for learning (FfL).

Over the last decade we have moved away from assessment being for the stakeholders only (summative assessment) to using formative assessment to diagnose learning needs and to instigate actions to develop and train (WBAs exemplify this). What we haven't quite achieved is the recognition that feedback is not an outcome but, if it is to facilitate reflection and development, a process. We may talk about the feedback process, but most often it is the result of that process that is seen as most important. We still refer to a "critique" of a performance, piece of work or discussion, where the feedback becomes a commodity to be recorded by the trainee. Whilst this outcome-based view of feedback persists, it will never fully work as a training and learning tool.

For effective feedback to work in terms of developing knowledge and understanding, attitudes and perspectives and behaviours and actions, it needs to be seen as a process, not an outcome.

This is not a new idea. In 1983 Schon said: “To achieve effective feedback, the health professions must nurture recipient reflection-in-action.” More than 25 years later this is still not happening.

Archer begins to address this but fails to go beyond the outcomes model. He draws a distinction between feedback that is directive and feedback that is facilitative in nature, explaining that "directive feedback informs the learner of what requires correction. Facilitative feedback involves the provision of comments and suggestions to facilitate recipients in their own revision." In most cases of facilitative feedback, the recipient will still focus on the facilitator's suggestions and comments as the outcomes that must be achieved.

In directive feedback, the educator generally tells the trainee how they have done. The trainee then has to understand what has been said, deal with the emotional response that it may evoke, and then accept or reject the comment. Acceptance and rejection often have more to do with the emotional response than with the accuracy and relevance of the comments made. In this way, what is intended by the educator can often be greatly misinterpreted by the trainee.

Similarly, in facilitative feedback the comments and suggestions, albeit perhaps more acceptably and sympathetically offered, are dealt with in the same way by the recipient, and may also be interpreted very differently.

Archer claimed that we need to "build on self monitoring informed by external feedback." I would go further and say that, as trainees progress, we need to build on self-monitoring within a process of reflective conversations, using open questions to guide and develop. Archer says that "the ability to shape capability through self monitoring with self directed assessment seeking requires an individual to accept the feedback provided." But if feedback is a process in which the recipient has to do all the thinking, and the facilitator merely asks appropriate, relevant open questions, then there is no feedback to be accepted, except the recipient's own.

Feedback for Learning (FfL)


What is required is a third form of feedback, during which the educator or facilitator does no "telling" whatsoever. Instead, the focus is entirely on the trainee, and the trainee does all of the work. Feedback is facilitated with a series of open questions. As a result, learners have to think evaluatively about their experiences. They provide the information regarding what they have done or not done, to which they may still have an emotional reaction, but which cannot be immediately rejected, as it is they who have generated it. To any emotional or difficult reaction the facilitator responds supportively and with questions concerning further actions. In this way, the facilitator of the feedback is seen as a helper and supporter of learning, not a critic. The role of the educator is then to move the learner forward to consider ways to address, amend or develop practice for the future.

Archer recommends that feedback be done by trained facilitators and discusses a useful model for scaffolding feedback:

1. Motivate the learner
2. Deconstruct the task
3. Provide direction
4. Identify gaps between actual and ideal performance
5. Reduce risk
6. Define goals

This framework is helpful but still focuses on the facilitator of the feedback as the one who drives the process.

If we return to the model of feedback using open questions only (FfL), we can add to Archer's scaffolding model some suggested open questions to elicit learner reflection and thus learning-driven feedback.


1. Motivate the learner
• What would you like to get from this review session?
• How do you see this review contributing to your practice?

2. Deconstruct the task
Tell me:
• What happened/what you did
• How you felt/what you assumed/what you believed would happen
• What you think about this experience/have you had this happen before?

3. Identify gaps between actual and ideal performance
• Describe the differences between what you did here and what you would like to be doing in -x- time.


4. Provide direction & reduce risk

• What next steps can you set yourself for this?
• How will you do that? Who might you need to help you?

5. Define goals

• Jot down the goals you have set yourself here, along with the time scales, who is involved, where and when you will achieve them and how you propose to review and monitor them.

Previously, trainers have been reluctant to use only open questions, saying that the feedback then relies on the honesty, perception and insight of the trainee to identify where they went wrong. It is indeed true that a trainee who is unconsciously incompetent will not be able to identify errors and ways to develop. However, skilful use of open questions means that the facilitator can enable the trainee to identify that unknown area through questioning and elicitation. This is a skill which does not come naturally to all trainers, so it must be developed in trainers as well as in trainees.

FfL as professional reflection in action

Feedback is the precursor to professional reflection in action. As children we are disciplined so that as adults we develop self-discipline. Junior doctors, or anyone starting out in a profession, need to develop professional reflection on and in action in order, in time, to function as independent practitioners. If feedback continues to be a commodity bestowed from on high, the dependence of trainee upon trainer will never diminish. As educators, our role is not just to train, pass on wisdom and transmit knowledge and skills, but to develop the trainee to become a critical thinker, a reflective practitioner and a measured decision-maker.


As a consultant surgeon I worked with this week said to me, “I want to teach them to think for themselves.” We can only do this if the feedback we engage in makes them think for themselves. Feedback for Learning offers a process to enable that to happen.


Archer J. State of the science in health professional education: effective feedback. Medical Education 2010: 101-108.
Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ 2008;337:a1961.
Schon D. From technical rationality to reflection-in-action. In: Schon D. The Reflective Practitioner: How Professionals Think in Action. London: Basic Books, 1983: 21-75.

Sunday, 6 September 2009

How to reflect - a guide for trainees and students

Developing reflective writing
A guide for doctors in training
©Hayley Allan 2009


Reflective practice is an essential component of the Portfolio for all trainees today. The ISCP (Intercollegiate Surgical Curriculum Programme) requires it, as do other forms of training and education at both undergraduate and postgraduate level. However, very few syllabuses or training programmes define exactly what it is; fewer still give any advice on how to do it. This paper will give trainees a greater understanding of how to approach their reflective practice, and some simple frameworks for structuring their writing.
What is it?
“Reflection is vital for learning from clinical experiences” (Driessen et al, BMJ 2008 336)
To many people "experience" means "making the same mistakes with increasing confidence over an impressive number of years" (O’Donnell, 1997). The “impressive number of years” that surgeons spent in training previously has now gone and in the era of EWTD and ISCP, trainees cannot afford to make the same mistake twice. One way of addressing this is to encourage and develop the use of reflection in all trainee doctors.
Defining reflection
When we say that trainees need to be more reflective, what we mean is that they need to let future behaviour be guided by systematic and critical analysis of past actions and beliefs and the assumptions that underlie them. (Dewey, 1933)
Why use it?
All doctors in the UK are now required to make reflection a critical foundation of their lifelong learning (GMC 2000). Research evidence from nursing, (Jarvis in Nurse Educ 1992) and teaching (Korthagen et al, 2001) suggests that reflection can help students learn from their experiences.
How do we do it?
Most trainees do know how to reflect effectively on their practice, but they may not be aware that they are doing it. They may be aware that something has not gone particularly well; for Dewey, reflection was stimulated by an event that aroused a state of doubt, perplexity and uncertainty, leading the individual to search for possible explanations or solutions (Dewey, 1933). We are less given to reflecting on practice that has gone well, although it is useful to do this from time to time, to ensure we understand why it went well and can replicate the good practice. Most of us, however, want to improve poor practice, and this is where much reflection is centred.
The benefits of regular reflective writing
Reflective writing provides an opportunity for us to think critically about what we do and why. It provides:
· a record of events and results and our reactions to them,
· data on which to base reflective discussion,
· an opportunity to challenge ourselves and what we do, and to look at doing it differently and better,
· impetus to take action that is informed and planned,
· an opportunity to view our clinical practice objectively and not see every problem as personal inadequacy,
· increased confidence through increased insight,
· basic documentation to support future entries in our portfolio, job applications and so on.
(http://www.clt.uts.edu.au/Scholarship/Reflective.journal.htm)
Where do we use it?
We need to be able to assess and analyse our actions and devise alternative actions. That is the essence of reflective practice in the workplace. To begin with, it is helpful to have guidance and some structure from more experienced colleagues. In the busy context of the clinical setting it is difficult to find time for this, but asking a trainer for five minutes to run through an event can be all that is needed to trigger the reflection you need.
1. Asking for feedback after conducting an assessment (mini-CEX, CbD, DOPS, PBA)
· Ask the trainer what they felt your strengths were in that activity. Add your own view of the strengths and ask for comments.
· Ask the trainer where they think you could develop. Add your own view of areas for development.
· Ask them to focus on one or two key areas for follow-up action and to give some suggestions for follow-up activity to enhance learning.
This approach to feedback is based on Pendleton’s Rules. It is imperative that you draw up actions for development, and not merely talk about how to improve. Adding quality to your learning, practice and portfolio requires you to show progress as a result of learning on the job.
Example:
After a mini-CEX in clinic, you and your trainer together identify strengths in professionalism and appearance, rapport with the patient and organisation of the encounter. History taking needs development. You discuss the areas of weakness and agree that you will shadow your trainer in clinic next week, recording his/her approach to history taking. You will then teach this to the F1s the following week and do a follow-up mini-CEX the week after that.
This is the most helpful way to begin developing reflection early in your training career. Make sure you document your development: in this case, the records you make of your trainer’s history taking and the evaluations you receive from the F1 teaching the following week. When you next complete your mini-CEX form you can put all of these pieces of evidence together to show what you have learned over this period.
Try this model for structuring your reflections:
ALAC (Driessen 2008):

Action - choose experiences that support and develop your learning (i.e. those from which you can learn)

Looking back - Separate performance from person (a mistake does not mean the person is a failure); be trustworthy and honest; acknowledge and make success explicit; seek feedback; obtain information and evidence from various sources and put it into your portfolio

Analysis - Focus on your own role in the success or failure; take the perspective of others; ask ‘why’ questions; ‘confronting’ questions; ‘generalising’ questions; look for inconsistencies in your analysis; generalise between experiences

Creating alternative actions - suggest options for change; formulate plans and check these are in line with analysis; focus on SMART objectives for learning

SMART: Make sure your actions are:
Specific
Measurable
Achievable
Relevant
Timely

Peer reflection
Engaging in open and collaborative discussion about work with a peer is a process that can enable us to become more reflective doctors. You can use any of the models advocated above if you and your peer are reasonably confident and experienced in challenging one another in reviewing an event. If you are new to this you may wish to reflect with a supervisor first until you become more confident.

Assumptions
A helpful way of understanding the process of reflection comes from Stephen Brookfield (1995), who describes hunting out our assumptions and critically examining them. Ask yourself what assumptions lie behind your practice, and then try to develop a contrary argument. You now have two sides of an argument to evaluate. This is engaging in personal critical reflection.

Keeping a journal
The journal is the parallel of the scientist’s field book or laboratory notes. We record not only what happened or what was observed; we can also record a tentative hypothesis or the development of new understanding, using our writing to make new sense of phenomena. Reflective writing has the potential to provide a systematic approach to our development as reflective, critical and constructive learners. Our journal can provide an opportunity to make explicit our position on a range of issues of personal significance.
Your journal could be structured:
· as a personal learning journey, tracking and documenting an evolving understanding of your clinical practice and learning
· as a critical reflection on a clinical encounter you have witnessed involving a colleague, your registrar or your consultant supervisor

Ideas for getting started on reflective writing:
1. Use a checklist
· What is the current problem or issue? Describe the context
· What additional information would be useful?
· How is it related to other issues?
· Who or what could help?
· What are my assumptions? How can I test them?
· What can I do to create a change? Be as adventurous as you can
· What are the possible outcomes of these?
· What action will I take? Why?
· List the outcomes you hope to achieve.
· Reflection on the actual outcome: what worked well?
· What could I do differently next time?

2. Focus on a critical incident that took place in your clinical practice.
· Describe the incident as objectively as possible.
· What were the assumptions that you were operating with?
· Is there another way to see this event?
· How would your patients explain this event?
· How do the two explanations compare?
· What could you do differently?

3. And from time to time...
· What has using this journal confirmed that I already know about my learning and how I affect that?
· What do I need to do to improve the quality of what I do?
· What might I do instead of what I do now?
· What innovation could I introduce?
· What professional development activities should I be seeking?

For recording your reflective writing, keep it simple.

HEADLINE: What you have learned from this event
EVENT: 1-3 sentences about the event itself, with some idea of the area you targeted for reflection
LEARNED: now you can spend longer on this area, discussing what you learned from the event and expressing it in developmental but positive terms
ACTION: This is where you identify the actions resulting from the reflection. They may be short, medium or long term and you can revisit them after you have implemented them to comment on their efficacy once used in practice.
Remember always to follow up with a review of your amended practice after you have implemented your actions. This completes the cycle of learning initiated by the original piece of reflection.

For more information on reflective practice see:
Ballantyne R, Packer J. Making Connections: Using Student Journals as a Teaching/Learning Aid. HERDSA, ACT, 1995.
Boud D, Keogh R, Walker D. Reflection: Turning Experience into Learning. London: Kogan Page, 1995.
Brookfield S. Becoming a Critically Reflective Teacher. San Francisco: Jossey-Bass, 1995.
http://www.clt.uts.edu.au/Scholarship/Reflective.journal.htm
Dewey J. How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. Boston: Heath, 1933.
Driessen E, van Tartwijk J, Dornan T. The self critical doctor: helping students become more reflective. BMJ 2008;336:827–830.
General Medical Council. Revalidating doctors: ensuring standards, securing the future. London: GMC, 2000.
Jarvis P. Reflective practice and nursing. Nurse Education Today 1992;12:174–181.
Korthagen FAJ, Kessels J, Koster B, Lagerwerf B, Wubbels T. Linking Practice and Theory: The Pedagogy of Realistic Teacher Education. Mahwah, NJ: Lawrence Erlbaum Associates, 2001.
O’Donnell M. A Sceptic’s Medical Dictionary. Oxford: Blackwell BMJ Books, 1997.
Schön D. Educating the Reflective Practitioner. San Francisco: Jossey-Bass, 1987.

Monday, 29 June 2009

Competency based education and Quantity Assurance - why nobody can think for themselves any more

Thanks to the excellent groups I worked with last week (almost 40 consultant doctors in two South London trusts), as well as some of the great literature around in Generic and Medical Education, I think I am somewhat further forward in identifying what is so wrong about our current education philosophy and practice in the UK.

It is the political control over our hospitals and schools that has led to the implementation of competence based education. This political climate has as its closet epigraph: “Let’s only value that which we can easily measure; forget trying to measure what is of value.” Added to that is the obsession with measuring for the benefit of the organisation (Government, Trust, University, Royal College etc.) while not just forgetting but turning a blind eye to the end user – the patient, the trainee doctor, course participants and students.

This realisation that organisations no longer care about the end user of their services, but only about their financial accounting and those involved in the decision making, prompted me to research the meaning of Quality Assurance.

It is a set of activities intended to ensure that products (goods and/or services) satisfy customer requirements in a systematic, reliable fashion…. It is important to realise also that quality is determined by the intended users, clients or customers, not by society in general (http://en.wikipedia.org/wiki/Quality_assurance)
This also incorporates measuring all process elements, the analysis of performance and the continual improvement of the products, services and processes that deliver them to the customer. (http://www.thecqi.org/Knowledge-Hub/What-is-quality-new/)

What I took from this is that the end user is of paramount importance – we do QA for them and for the products and services that they require. Therefore they ought to be involved in it and given a voice to shape and improve those products and services where possible.

So far so good.

A medical school lecturer shared their approach to QA with me:

Level 1 - Course satisfaction: feedback forms completed at the end of the event. This gauges the immediate reactions of participants and will highlight immediate issues.
Level 2 - Transfer to workplace: completed electronically 2-3 weeks after the event. This allows time to reflect on the learning and relate its usefulness to one's own learning needs and role.
Level 3 - Impact on processes/performance: a qualitative approach involving a structured discussion with a sample of participants.

But some institutions are cutting back on the levels of QA they use. Rather than using level 1 evaluation at the end of the course, they are moving to an online version of level 2 several weeks after the event. Whilst this may gather data on the transfer of learning to the workplace, I suspect that since course participants will not receive their certificate until they complete the e-feedback form, less qualitative data will result. My guess is that the tick-box exercise will remain.

When I looked elsewhere, I found that the barriers to doing what is best for the end user were frighteningly similar in hospital trusts. A radiologist told me about the two-week targets they face; A&E clinicians regularly face a choice between quantity and quality of care and are penalised when they choose the latter. Waiting lists have gone down in hospitals, but at what cost to patient care? Or to the very people who provide that care?

In much the same way trainee doctors are asked to gather evidence of their clinical practice, on single sheets of paper with tick boxes printed on them. The more pieces of paper, the better. Once again the quantitative triumphs over the qualitative.

And I ask the question, should we not be calling our evaluation processes “Quantity Assurance?”

To escape from the harsh realities of the overworked and undervalued doctors I work with, I turned to the literature, in the hope that I would find a new idea I could use with my trainers to mitigate this inexorable decline into the robotic delivery of “care” or “education” as a commodity, like sandwiches or exhaust pipes.

One hot sleepless night, I unearthed a new book on reflective practice from beside my bed, in the hope that it would have a soporific effect. It did quite the opposite. (Reflective Practice by Gillie Bolton, 2005, SAGE, London. ISBN 978-1-4129-0811-5)

As I flicked through it, my attention was caught by the phrase, “Teachers are assessed on the value they offer the consumer…” Bolton was attacking competency based education. Hurrah! As someone who finds “Learning Outcomes” just a bit too management oriented, I read on.
Bolton’s critique claims that knowledge and skills are treated as commodities, ignoring educational processes such as the teacher-learner relationship and the learning environment, to name but two. She says that, “Giving students set pro formas, lists of prompts, questions or areas which must be covered in reflective practice will stultify, make for passivity and lack of respect. Professionals need to ask and attempt to answer their own questions. Otherwise their practice is being moulded towards the system’s wants and needs.”

Furthermore, says Bolton, testing and checking up on students to see whether they have acquired the competencies further endorses this sense of subordination. Trainers talk about ‘getting the trainees to do’ things, which means that all our next generation are doing is joining the dots and filling in the blanks. Providing evidence of a competency does not guarantee that the trainee has learned anything or understood the case or the patient. Bolton says: “It matters not that it is solely a paper exercise as there is no continuity between course and practice, no one to see practice has been changed or developed; what matters is the product: the neatly ticked boxes look right.” And she references Prosser and Trigwell (1999) and Rust (2002) in calling this “surface learning.”

Most of the trainers I work with complain bitterly about the spoon-feeding culture and about trainees who cannot think for themselves. What we fail to realise is that the system requires that trainees cannot think for themselves – the system specifies what it wants and orders its learning outcomes and behaviourist statements of performance accordingly. Unless we intervene and do something about this, the next generation will be fit for nothing except robotically following those in power.

1984 or what?

This morning I turned to Medical Teacher, (http://www.medicalteacher.org/) hoping for some respite from this depressing situation. And I found it. Well, a chink of light.

I came across a table in the current edition of Medical Teacher (Vol 31, No 4, April 2009), in an article on assessing medical professionalism (Hawkins et al. Assessment of medical professionalism: who, what, when, where, how, and … why? Medical Teacher 2009;31:348–361). The table contrasts two frameworks for defining the elements of professionalism. One framework is the horribly familiar “behaviours oriented framework”, which specifies the behaviours that trainee doctors should be able to “evidence” and tick off in their portfolios:

Responds promptly when paged or called
Takes on extra work to help the team
Listens and responds to others respectfully
Discusses colleagues and co-workers in a respectful manner…

However, the other framework cheered me up somewhat. This is a “Principles oriented framework” in which the key headings relate to less observable competencies:

Excellence – dedication to improving quality of care; commitment to competence
Humanism – respect, compassion, empathy, honour/integrity
Altruism – putting patient interest above own
Accountability – embraces self-regulation, public service/advocacy


I was immediately struck by the stark difference between these two lists. There is the obvious difference that the first is easy to “tick off”, but what also struck me is that some of these behaviours are of such a low level. Are these really professional behaviours, or just common-or-garden courtesies all workers ought to show as a matter of course? Are they really necessary to document? Surely they should be taken as read, and anyone not consistently behaving like this referred to HR for the relevant disciplinary action? Is this what we get when HR encroaches on education and training? A list of things you have to do to avoid being disciplined?

What about true professionalism? What about aspiring to be the best? The Principles list gives us the beginning of an outline of what a real professional ought to want to be. Surely professionalism is not something we put on (like a white coat) when we get to work? For me it pervades my life and is who I am and not just what I do.
Feeling bolstered, I read further in this article… and fell back to earth with a bang. It said:

“While such principle driven frameworks are quite useful in thinking broadly about construction of an assessment programme, they are not easily applied to the assessment of measurable behaviours.” The authors bemoan the lack of consistency in assessing these principles, but do admit that the behaviours oriented framework is a “bottom up approach.”

They do go on to make some interesting and useful suggestions for assessing professionalism, but still within a context of assessing behaviours and documenting evidence.

What I have come to realise over the last few weeks, from my reading, my practice and accounts of other people’s practice, is that we inhabit an evidence based, competency focused education, training and medical system. The result of this, whether in terms of patient care or trainee development, is that the lowest common denominator becomes the benchmark of satisfaction. Quality and quantity seem to be linked like the two pans of an old-fashioned measuring scale: when one rises, the other falls. As waiting times are reduced and patients receive test results in under two weeks, the quality of care is compromised. The more evidence trainees gather about their competence, tied to observable and documented behaviours, the simpler the competencies become.

Those concerned with quality assurance need to stop looking for easy ways to gather the lowest levels of data (or else call it quantity assurance) and start looking at indicators of quality – that is, descriptive feedback from their end users. In fact, they just need to consult the end user, without whom there would be no NHS, no university or Royal College. We need to stop satisfying the masters and look to the servants – for it is they who employ us.