Saturday, 12 June 2010

Supporting medical trainees - a new course

BEST (Building Excellence in Specialty Training) is a new course I am running on behalf of several Deaneries around the UK, exploring much-needed ways for trainees to gain access to opportunities for learning on the job.

With the advent of the latest report on training under the European Working Time Directive (EWTD) by Professor Sir John Temple (http://www.mee.nhs.uk/PDF/14274%20Bookmark%20Web%20Version.pdf), which highlights the need for dedicated training and support for junior doctors, BEST addresses one of the shortfalls in the new Workplace Based Assessment (WBA) and competence-based training system.

The Training and Assessment in Practice (TAiP) course that I wrote and have delivered for the last three years on behalf of the Royal College of Surgeons of England has helped over a thousand consultants demystify the WBAs: what they are and how they complement existing good training practice. However, there has been no such provision for trainees. Many are still using the WBAs incorrectly, as retrospective, virtual scoring forms rather than as face-to-face engagement and training tools. They struggle to identify their Educational Supervisor and to gain meaningful Learning Agreements from them. Even when trainees understand the formative and developmental nature of the WBAs, they are often working with trainers who are enculturated into a summative, secretive pass/fail mentality, because that is all they have known during their own training.

BEST is available to equip trainees with the skills they need to manage their own training, to access the appropriate opportunities and to engage their trainers within a system they are mandated to use.

Training at Foundation stage and beyond, in Core and Specialty training programmes, is now built around the use of Portfolios to document evidence of competence and progression. Use of the new assessment tools (Mini-CEX, CbD, DOPS, PBA and a range of 360° feedback) is here to stay, and the BEST course will:

• Develop understanding of the assessment tools and how best to use them in everyday practice;
• Show how to manage the Learning Agreement;
• Identify immediate learning needs;
• Consider how to get the best out of trainers;
• Identify ways to ensure trainer observation sessions;
• Use the tools to gain training rather than assessment;
• Manage the reception of feedback from your trainer;
• Agree ways to action plan further learning;
• Identify the appropriate people to ask to conduct your 360° feedback;
• Agree follow up activities to show progress after your 360° feedback;
• Support the development of simple approaches to structure reflective writing;
• Develop and improve a piece of your reflective writing;
• Identify the use and value of a portfolio;
• Discuss ways to add quality to your portfolio.

Sessions are delivered using live demonstrations for candidates to discuss, together with group work and paired work.

By the end of the course you will have:

• Prepared a PDP
• Drawn up an action plan
• Managed your feedback from a trainer
• Identified follow up activities from an assessment tool
• Improved a piece of reflective writing
• Received advice on your portfolio

Please email me for further details. There are a variety of ways that BEST can be accessed.

Hayleyallan@tiscali.co.uk

Thursday, 27 May 2010

Who needs Learning Outcomes?

I was working this week on a new course with another educationalist. When it came to the Learning Outcomes she, knowing my position on them (that they are great to put in at the end of the programme but no use at all when designing learning events), said, "Let's not start with the LOs...."

Hurrah! said I, and we began to design our session using key messages and activities to achieve those messages. What a joy!

I rummaged through my archives to find a copy of this article that speaks for itself. I wish to reiterate that LOs are great to put in a course programme so that learners see what they are going to get out of the day. But to start off with them when planning learning? Only if you are a businessman.....!

Who needs learning objectives?
Posted by Charles Jennings in Strategy, The training cycle on Tue, 28/07/2009 - 08:42
• This article looks at the case against creating learning objectives
• It explores how learners are often discouraged by box-ticking and how trainers need to implement better systems at helping learners retain information
• Charles Jennings cites examples of how effective learning can be achieved without the use of learning objectives

How many times have you embarked on some formal learning, whether in a classroom or through an elearning or blended course, and the first thing you’re presented with is a list of rather bland learning objectives? This raises the question: are lists for losers? Charles Jennings considers the evidence.

1. At the end of this course you will be able to tie your shoelaces in a double bow
2. At the end of this course you will be able to use a blender to make a tasty fish milkshake
3. At the end of this course you will be able to make gold out of base metal
4. and so on...

Apart from being some of the most de-motivating writing any of us have ever read, lists of learning objectives are the worst possible way to create an environment for learning. In fact, they are often the first barrier to real learning. Why so?

Two basic problems
I see two basic underlying problems with learning objectives. Firstly, many training and development specialists continue to apply a model of learning objectives that was developed more than half a century ago in a context that they don’t really understand. It’s a model that was ‘of its time’ and, although some of the principles still apply, certainly isn’t as relevant in the 21st century as it was in the mid-1900s, even accepting the view that formal learning still has a place in developing people.

Secondly, many training and development specialists are learning obsessed rather than performance obsessed. Their focus is on delivering content and assessing its retention by learners – on ensuring learners ‘learn’ rather than enabling people to ‘do’. Giving fish rather than fishing rods.

"There’s a strong argument that proof of achievement of learning objectives as commonly assessed at the end of the learning event doesn’t even measure learning."
Consequently, their learning objectives tend to be built around a set of post-course assessments. Even then, the way in which the ‘learning’ is assessed is often so poor that it measures only short-term memory retention rather than real learning and behaviour change.

A nod to Bloom
Back in 1956, when Benjamin Bloom and his committee members developed a taxonomy of learning objectives, they were working in a very different world from the one we live in today. Reductionism and codification were the dominant mindsets. The standard approach to teaching at the time (and it was ‘teaching’ rather than ‘learning’) was to design courses and programmes so that all students took the same time to reach a specified level of mastery.
It was a crude approach where the hares won and the tortoises lost. Bloom was kicking against this with his taxonomy. The three learning domains of Bloom’s Taxonomy (cognitive, affective and psychomotor) were, in some way, an attempt to overlay some of the complexity of the learning process on what was seen at the time as a rather deterministic and mechanistic endeavour. Bloom was, underneath it all, a progressive. A former student once described him as "embracing the idea that education as a process was an effort to realize human potential, indeed, even more, it was an effort designed to make potential possible. Education was an exercise in optimism." (Elliot W. Eisner in the UNESCO Quarterly Review of Comparative Education, 2000).

Bloom himself saw beyond learning objectives; to him they were simply a means to an end. He was convinced that environment and experience were very powerful factors influencing human performance. It’s worth noting that his last book, published just six years before he died in 1999, was ‘The Home Environment and Social Learning’. He certainly wasn’t hung up on learning objectives. Bloom’s view of learning was the need to focus on target attainment rather than the ‘race to the finish post’ that was common in the 1950s. It was, in reality, a belief in learning as an enabler. At the time Bloom was addressing an important issue through his learning objectives; today that battle has been won.

Learning objectives and improved performance
So why, 50 years on, do we still have this slavish adherence to presenting learning objectives at the outset of courses in some mechanistic manner, and often skewed to the cognitive domain? It’s often ignorance, and sometimes simply a desire to make the life of the trainer easier, I’m afraid. And sometimes it’s just marketing. Learning objectives are really only useful for the people designing the learning. If used well they can form a helpful framework for instructional designers. However, they should be kept well away from learners or course recipients. If a course is well-designed and targeted to meet a defined performance gap, a list of learning objectives serves absolutely no purpose other than to dull the enthusiasm of those embarking on a course of study.

What any learner, and their manager, wants to know is whether on-the-job performance has been improved through some formal learning intervention. In other words, whether the experiences that the employee had during formal training have resulted in changed behaviour and performance in the workplace. Achievement of learning objectives is not evidence of this. The ability to pass a test or demonstrate a skill in a classroom setting is not the same as being able to do so in workplace conditions. I suppose the notable exception is where the classroom conditions mirror exactly, or almost exactly, the workplace – such as training pilots in a flight simulator. Still, I don’t imagine any one of us would take kindly to flying in a plane with a pilot who has only proved his or her performance in a simulator and hasn’t a modicum of experience in the air, unless there isn’t an alternative.

"Learning objectives are really only useful for the people designing the learning."
In fact there’s a strong argument that proof of achievement of learning objectives as commonly assessed – at the end of the learning event – doesn’t even measure learning. Sometimes the time lag between end-of-course testing and attempting to put the learning into action is such that the ‘learning’ is lost from short-term memory. At other times the work environment is less ‘controlled’ than the learning environment and the added variables mean performance improvement simply doesn’t occur. Most of us have seen situations where people return bright-eyed and bushy-tailed from a training course with plans to do things differently – time management, project management and people management training are good cases in point – only to revert to the old ways as soon as the day-to-day pressures of the working environment kick back in.

Measuring performance
If you are going to assess the impact of a course on individual participants’ performance in the workplace you need to forget about learning objectives for doing the job. Remember, learning objectives may be useful to help you create a logical design, but that’s all they’re useful for. When you get to measuring transfer of learning to the workplace you need to engage with the people who are in a position to observe behaviour and performance and those who are in a position to measure outputs. This usually means the manager and the team member who is responsible for maintaining performance metrics for the business or team – the balanced scorecard metrics or similar.

This approach requires training and development managers and instructional designers to engage with business managers and agree on strategies for measuring the impact of the learning before the learning design phase even starts. A good way to do this is to roll it into early engagement with business managers in defining the performance problem to be solved, whose performance needs improving and whether training is likely to help solve the problem (which is usually ‘no’, but sometimes ‘yes’).
In most cases performance change can’t be measured immediately following the training if it is to be meaningful. Take the case of transactional work - data entry or call centre operatives for instance – where the proof that training has led to improved performance requires data taken over a period of time, and not just on the first day or two back in the workplace. All this requires more thought and effort than writing a few overarching learning objectives (even if in well-formed behavioural format) and then developing assessments to ‘test’ whether they’ve been achieved or not. And it requires different skills of the training and development team.

Charles Jennings was chief learning officer at Reuters and Thomson Reuters. He now works as an independent consultant on learning and performance. Details of Charles's consultancy work and his blog can be found on his website, www.duntroon.com

Wednesday, 26 May 2010

Reflective Practice - here is how to do it!

http://www.rcog.org.uk/files/rcog-corp/uploaded-files/ED-Reflective-Prac.pdf

This excellent PDF from the Royal College of Obstetricians and Gynaecologists has everything you need to know about writing up your reflections in medical training.

Saturday, 15 May 2010

Article in BMJ for medical trainees

http://careers.bmj.com/careers/advice/view-article.html?id=20001007

This article published in the BMJ this week gives a good overview of the structure of training and the role of the trainee and trainer in negotiating access to training opportunities.

With the Royal College of Surgeons of England, I have worked with over a thousand consultants and senior trainers to establish understanding of the Workplace Based Assessment tools and their place in medical and surgical postgraduate training.

In partnership with some forward thinking Deaneries, we are now offering such courses for Trainees. BEST (Building Excellence in Specialty Training), is a one day course for all Foundation and Core (or ST 1 and 2) trainees who wish to master the approaches needed to get the best out of their training and their trainers.

Thursday, 13 May 2010

Cognitive Apprenticeships

While researching for a course I am writing I found this article on the web which I thought was most interesting. Despite it being about school classrooms - and American ones at that! - it offers a great example of how we can develop active apprenticeships in our more constrained educational environments. I shall certainly be looking at how I can incorporate the key elements of this work into my new course.

http://projects.coe.uga.edu/epltt/index.php?title=Cognitive_Apprenticeship

Monday, 26 April 2010

Early Adopter, Early Cynic, Blind Complier or Blind Rebel? How do you react to training courses?

Surgeons' reactions to training, assessment and management courses are understandably at a low ebb. With the PMETB trainer requirements deadline earlier this year, many consultants found themselves mandated to attend a range of courses in Educational Supervision; Clinical Supervision; Training the Trainers; WBA tools; Assessment and Appraisal; Equality and Diversity; Trainees in Difficulty; or 'Manual Fire Bucket Assessment Handling', as one surgeon referred to the homogenised mass of courses he had to take.

Disparity in content, delivery and quality of these courses has led to a lowest common denominator perception among participant groups. Think of the worst course you have ever attended, multiply it by ten and you have the level of underwhelming expectation with which most groups greet their latest day out of clinical practice.

In little over three years there have been well over a thousand participants (estimated at over 1,200 at the time of writing) through the Royal College of Surgeons of England Training & Assessment in Practice (TAiP) course. Such numbers enable perception analysis to be carried out regarding participants' differing reactions to the course. Four predominant types emerged:

• Early Adopter
• Early Cynic
• Blind Complier
• Blind Rebel


TAiP was developed by an educator and a group of surgeons to support consultants in the use of the ISCP, a new training and assessment programme developed by the Intercollegiate Surgical body in response to MMC (Modernising Medical Careers) and the advent of the EWTD (European Working Time Directive). Although TAiP contains strategies to support consultants in using the new WBA (Workplace Based Assessment) tools, many participants see the course as an imposition.

The aim of the TAiP course is to give everyone the information they need to use the ISCP system in accordance with good training practices. This requires understanding of the system and a willingness to use it.

Analysis of the different responses to the course made it clear that two predominant factors are central to a person's response. The first concerns capability: whether the surgeon comes to the course with either an understanding of ISCP or the capability to develop an understanding within the day. The second relates to attitude: does the surgeon have the willingness to work with the system? Some participants have an attitude of open-mindedness, or compliance with suggested new approaches. Others have greater resistance to any suggestion of change and are determined not to comply with whatever is suggested. This attitude is very often predetermined by factors outside the area of responsibility of the group facilitators.

Understanding these patterns of behaviour can assist the facilitators to relate appropriately to each participant and to manage the course in slightly different ways, according to the group make up. By examining the four permutations of the aforementioned factors we can see that each response type presents its own challenges to the facilitators.

Early Adopter - high capability, high compliance
Early Cynic - high capability, low compliance
Blind Complier - low capability, high compliance
Blind Rebel - low capability, low compliance


The Early Adopters in a group have both high levels of compliance and capability. They are often keen to make sense of a new system and to find ways to implement change. They are not afraid to be seen to be different and for this reason often occupy positions of leadership. Early adopters do bring their own challenges to the group facilitators. Whilst their open mindedness means they are willing to look at new perspectives, their high levels of capability require the facilitators to have a sophisticated grasp of the issues and perspectives, the knowledge frameworks and the medical settings within which the participants work. If facilitators demonstrate credibility, considerable levels of knowledge and understanding, and harness the inventiveness of the group to a common goal, even making them think they have developed original ideas, the early adopters will react overwhelmingly positively to the course.

The second group are called Early Cynics because their lack of compliance means that they often announce their cynicism at the start of the course. They can initially appear to be very challenging to the facilitators but by asking about their cynicism it is easy to tell whether it is informed or not. Early Cynics can be the most rewarding of all participants due to their high levels of capability. If their cynicism is ill informed, they are not Early Cynics at all, but fall into the fourth category, Blind Rebels. There are some Early Cynics who have high capability but who are so fearful of change that their high level of non compliance reduces them to the fourth category too. But real Early Cynics have high levels of capability; their ideas are often persuasive and they are quick to grasp new information. It is their intelligence that will triumph over their non compliance and result in a positive response to the course at the end. For course facilitators Early Cynics present two challenges. Their aggression at the start of the course can be disruptive and hold up progress. If they are given a limited space to air their views, this will “park” their scepticism. And then, in a similar way to the early adopters, the Early Cynics require facilitators who are knowledgeable and who also demonstrate persuasive, informed argument. (Cynicism with poor levels of understanding is not cynicism but rebellion. For this reason the only true cynics are those who know what it is they do not believe in.)

The third group of respondents appear to be relatively easy members of the group initially. Called Blind Compliers, they are characterised by their apparently high levels of compliance and low levels of understanding. They ask few questions and appear to be absorbing what is being said. However, without careful handling this group can easily leave with little more than they came with. Their compliance means they are not likely to question the frameworks being presented, but unless all the details of the course, including acronyms, roles, responsibilities and relationships, are spelled out to them, they let much of the new material wash over them. Whilst they may present little by way of attitudinal challenge to the facilitators, the faculty must check that Blind Compliers are following, understanding and integrating what they are exposed to on the course into clinical practice. If there are lots of Blind Compliers in a group, a course can become a one-way didactic session that never really gets beyond the basics of the course content, so facilitators must ensure that these learners are made to think for themselves too. If the overall group has a mixture of Blind Compliers, Early Adopters and Early Cynics, it challenges the facilitators to cover the basics in a simple way but also to extend the complexity of the arguments for those with higher capability and understanding, as already discussed.

The final group is probably the most challenging, but they often have the greatest need for the course. With Blind Rebels both their compliance and their capability are low. Most often they have attended the course because they have been forced to do so, either as a result of an appraisal action or because there have been threats associated with non-attendance. This group have developed a stance of non-compliance as a result of their lack of understanding or willingness to engage with new ideas, and see courses as threats to their professional standing. They will argue fiercely against any proposed change, finding a range of people to blame for the changes they see as having been externally imposed. The difference between this group and the Early Cynics is that the Blind Rebels do not have the capability to argue with any degree of information or logical reasoning. Indeed, the more vociferous they become, the more the rest of the group begin to disengage from them. This group of participants can be very disruptive, as they can raise a comment or an objection to every point made. The challenge for the faculty is to maintain composure in the face of often rather offensive behaviour and to remember that this group need as careful instruction as the Blind Compliers. It would be encouraging to think that information would assist the Blind Rebels to overcome their non-compliance, but for many in this group it is often not enough. For those who are determined to destroy the course, it may be necessary to respond to their ill-informed arguments somewhat aggressively, highlighting where they are misinformed and emphasising that both information and a change in attitude would indeed help them to overcome their grievances. Often it is only their self-instigated isolation from the rest of the group that finally reduces them to silence. Frequently they have to be allowed to hoist themselves by their own petard, necessitating a collective challenge from the group to their disruptive behaviour.

In an unrelated field, but one which may be interesting to compare, similar findings have been discussed with regard to people's responses to new technology. Everett Rogers' Diffusion of Innovations theory identifies, for any given product category, five categories of product adopters:

o Innovators – venturesome, educated, multiple info sources;
o Early adopters – social leaders, popular, educated;
o Early majority – deliberate, many informal social contacts;
o Late majority – sceptical, traditional, lower socio-economic status;
o Laggards – neighbours and friends are main info sources, fear of debt.


However, this pattern is a linear one, describing types of people within a range of demographic factors, including social background, psychological make-up, educational history, personality and popularity, economic situation, social influences and fears. In the paradigm used with surgeons responding to courses, I look simply at the responses relating to two factors: compliance and capability.

For those of us involved in education and training, this paradigm provides an interesting perspective on the challenges facing us in any group of course participants. A surgeon cohort should not usually be perceived as a mixed-ability group in the usual definition of the term. All consultant groups must surely share a similar level of intelligence and motivation to have achieved the position of consultant. But new initiatives, coupled with the disenchantment surrounding the many changes we have seen in the last five years, mean that attitude to change, as well as engagement with it, results in a mixed-ability reaction to training courses.

For course facilitators, such groups can be very challenging for a variety of reasons. Understanding the factors behind the behaviours of surgeons attending such courses can help faculty to respond appropriately to each type of participant, ensuring maximum success and minimum disruption for each course group.

Sunday, 28 February 2010

Feedback for Learning (FfL)

We still haven’t mastered feedback to drive learning. In the medical world, feedback is seen as a commodity rather than a learning process. This needs to change.

A friend and colleague of mine says that feedback is the oxygen to the soul and that we shouldn’t make people gasp for it.

How true it is that many of us out there are wheezing for some guidance as to how we are doing. We work too often in the dark, with little idea of how well we are doing or of what we are getting away with that could be better.

“Feedback is the cornerstone of effective clinical teaching.” (Cantillon & Sargeant 2008)

This kind of statement is rife in medical education and whilst it is not untrue, it gives little away about what feedback is, how we should approach it, who is best placed to engage in it and when that should happen.

A recent article in Medical Education (Archer, Jan 2010) says that “only feedback seen along a learning continuum within a culture of feedback is likely to be effective.” Whilst I agree with this statement most emphatically, I am not sure the article goes far enough in its portrayal of feedback for learning (FfL).

We have moved over the last decade away from assessment being for the stakeholders only (summative assessment) to a position of using formative assessment to diagnose learning need and to instigate actions to develop and train. (WBAs exemplify this.) What we haven’t quite achieved is the recognition that feedback is not an outcome but is, if it is to facilitate reflection and development, a process. We may talk about the feedback process but most often the result of that process is what is seen to be most important. We still refer to a “critique” of a performance, piece of work or discussion, where the feedback becomes a commodity to be recorded by the trainee. Whilst this outcome based view of feedback persists, it will never fully work as a training and learning tool.

For effective feedback to work in terms of developing knowledge and understanding, attitudes and perspectives and behaviours and actions, it needs to be seen as a process, not an outcome.

This is not a new idea. In 1983 Schon said: “To achieve effective feedback, the health professions must nurture recipient reflection-in-action.” More than 25 years later this is still not happening.

Archer begins to address this but fails to go beyond the outcomes model. He draws a distinction between feedback as directive or facilitative in nature, explaining that “directive feedback informs the learner of what requires correction. Facilitative feedback involves the provision of comments and suggestions to facilitate recipients in their own revision.” In most cases of facilitative feedback the recipient will still focus on the facilitator’s suggestions and comments as the outcomes that must be achieved.

In directive feedback the educator generally tells the trainee how they have done. The trainee then has to understand what has been said, deal with the emotional response it may evoke in him or her, and then accept or reject the comment. Acceptance and rejection often have more to do with the emotional response than with the accuracy and relevance of the comments made. In this way what is intended by the educator can often be greatly misinterpreted by the trainee.

Similarly in facilitative feedback, the comments and suggestions, albeit perhaps more acceptably and sympathetically offered, are still dealt with in the same way by the recipient, and may also be interpreted very differently.

Archer claimed that we need to “build on self monitoring informed by external feedback.” I would go further and say that as trainees progress, we need to build on self monitoring within a process of reflective conversations using open questions to guide and develop. Archer says that “the ability to shape capability through self monitoring with self directed assessment seeking requires an individual to accept the feedback provided.” But if feedback is a process in which the recipient has to do all the thinking, and the facilitator merely asks appropriate, relevant open questions, then there is no feedback to be accepted, except the recipient’s own.

Feedback for Learning (FfL)


What is required is a third form of feedback during which the educator or facilitator does no “telling” whatsoever. Instead the focus is entirely on the trainee, and the trainee does all of the work. Feedback is facilitated with a series of open questions. As a result the learners have to think evaluatively about their experiences. They provide information regarding what they have done or not done, to which they may still have an emotional reaction, but which cannot be immediately rejected as it is they who have stimulated it. To any emotional or difficult reaction the facilitator responds supportively and with questions concerning further actions. In this way the facilitator of the feedback is seen as a helper and supporter of learning, not a critic. The role of the educator is then to move the learner forward to consider ways to address, amend or develop practice for the future.

Archer recommends that feedback is done by trained facilitators and discusses a useful model for scaffolding feedback:

1. Motivate the learner
2. Deconstruct the task
3. Provide direction
4. Identify gaps between actual and ideal performance
5. Reduce risk
6. Define goals

This framework is helpful but still focuses on the facilitator of the feedback as the one who drives the process.

If we return to the model of feedback using open questions only (FfL), we can add to Archer’s scaffolding model some suggested open questions to elicit learner reflection and thus learning-driven feedback.


1. Motivate the learner
• What would you like to get from this review session?
• How do you see this review contributing to your practice?

2. Deconstruct the task
Tell me:
• What happened/what you did
• How you felt/what you assumed/what you believed would happen
• What you think about this experience/have you had this happen before?

3. Identify gaps between actual and ideal performance
• Describe the differences between what you did here and what you would like to do in -x- time?


4. Provide direction & reduce risk

• What next steps can you set yourself for this?
• How will you do that? Who might you need to help you?

5. Define goals

• Jot down the goals you have set yourself here, along with the time scales, who is involved, where and when you will achieve them and how you propose to review and monitor them.

Trainers have previously been reluctant to use only open questions, saying that the feedback then relies on the honesty, perception and insight of the trainee to identify where they went wrong. It is indeed true that a trainee who is unconsciously incompetent will not be able to identify errors and ways to develop. However, a skilled facilitator can use open questions to enable the trainee to identify that unknown area through questioning and elicitation. This skill does not come naturally to all trainers, so it must be developed in trainers as well as in trainees.

FfL as professional reflection in action

Feedback is the precursor to professional reflection in action. As children we are disciplined so that as adults we develop self-discipline. Junior doctors, or anyone starting out in a profession, need to develop professional reflection on and in action in order, in time, to function as independent practitioners. If feedback continues to be a commodity bestowed from on high, the dependence of trainee upon trainer will never diminish. As educators our role is not just to train, pass on wisdom and transmit knowledge and skills, but to develop trainees to become critical thinkers, reflective practitioners and measured decision makers.


As a consultant surgeon I worked with this week said to me, “I want to teach them to think for themselves.” We can only do this if the feedback we engage in makes them think for themselves. Feedback for Learning offers a process to enable that to happen.


Archer J. State of the science in health professional education. Effective feedback. Medical Education 2010;44:101–108.
Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ 2008;337:a1961.
Schon D. From technical rationality to reflection-in-action. In: The Reflective Practitioner: How Professionals Think in Action. London: Basic Books, 1983: 21–75.