Thursday 27 May 2010

Who needs Learning Outcomes?

I was working this week on a new course with another educationalist. When it came to the Learning Outcomes, she, knowing my position on them (that they are great to put in at the end of the programme but no use at all when designing learning events), said, "Let's not start with the LOs...."

"Hurrah!" said I, and we began to design our session around key messages and the activities that would get those messages across. What a joy!

I rummaged through my archives to find a copy of this article, which speaks for itself. I wish to reiterate that LOs are great to put in a course programme so that learners can see what they are going to get out of the day. But to start off with them when planning learning? Only if you are a businessman...!

Who needs learning objectives?
Posted by Charles Jennings in Strategy, The training cycle on Tue, 28/07/2009 - 08:42
• This article looks at the case against creating learning objectives
• It explores how learners are often discouraged by box-ticking and how trainers need to implement better systems for helping learners retain information
• Charles Jennings cites examples of how effective learning can be achieved without the use of learning objectives

How many times have you embarked on some formal learning, whether in a classroom or through an elearning or blended course, and the first thing you’re presented with is a list of rather bland learning objectives? This raises the question: are lists for losers? Charles Jennings considers the evidence.

1. At the end of this course you will be able to tie your shoelaces in a double bow
2. At the end of this course you will be able to use a blender to make a tasty fish milkshake
3. At the end of this course you will be able to make gold out of base metal
4. and so on...

Apart from being some of the most demotivating writing any of us has ever read, lists of learning objectives are the worst possible way to create an environment for learning. In fact, they are often the first barrier to real learning. Why so?

Two basic problems
I see two basic underlying problems with learning objectives. Firstly, many training and development specialists continue to apply a model of learning objectives that was developed more than half a century ago in a context that they don’t really understand. It’s a model that was ‘of its time’ and, although some of the principles still apply, certainly isn’t as relevant in the 21st century as it was in the mid-1900s, even accepting the view that formal learning still has a place in developing people.

Secondly, many training and development specialists are learning obsessed rather than performance obsessed. Their focus is on delivering content and assessing its retention by learners – on ensuring learners ‘learn’ rather than enabling people to ‘do’. Giving fish rather than fishing rods.

"There’s a strong argument that proof of achievement of learning objectives as commonly assessed at the end of the learning event doesn’t even measure learning."
Consequently, their learning objectives tend to be built around a set of post-course assessments. Even then, the way in which the ‘learning’ is assessed is often so poor that it only measures short-term memory retention rather than real learning and behaviour change.

A nod to Bloom
Back in 1956, when Benjamin Bloom and his committee members developed a taxonomy of learning objectives, they were working in a very different world from the one we live in today. Reductionism and codification were the dominant mindsets. The standard approach to teaching at the time (and it was ‘teaching’ rather than ‘learning’) was to design courses and programmes so that all students were expected to take the same time to reach a specified level of mastery.
It was a crude approach where the hares won and the tortoises lost. Bloom was kicking against this with his taxonomy. The three learning domains of Bloom’s Taxonomy (cognitive, affective and psychomotor) were, in some way, an attempt to overlay some of the complexity of the learning process on what was seen at the time as a rather deterministic and mechanistic endeavour. Bloom was, underneath it all, a progressive. A former student once described him as "embracing the idea that education as a process was an effort to realize human potential, indeed, even more, it was an effort designed to make potential possible. Education was an exercise in optimism." (Elliot W. Eisner in the UNESCO Quarterly Review of Comparative Education, 2000).

Bloom himself saw learning objectives as simply a means to an end, and he saw beyond them. He was convinced that environment and experience were very powerful factors influencing human performance. It’s worth noting that his last book, published just six years before he died in 1999, was ‘The Home Environment and Social Learning’. He certainly wasn’t hung up on learning objectives. Bloom’s view of learning was the need to focus on target attainment rather than the ‘race to the finish post’ that was common in the 1950s. It was, in reality, a belief in learning as an enabler. At the time Bloom was addressing an important issue through his learning objectives; today that battle has been won.

Learning objectives and improved performance
So why, 50 years on, do we still have this slavish adherence to presenting learning objectives at the outset of courses in some mechanistic manner, and often skewed to the cognitive domain? It’s often ignorance, and sometimes simply a desire to make the life of the trainer easier, I’m afraid. And sometimes it’s just marketing. Learning objectives are really only useful for the people designing the learning. If used well they can form a helpful framework for instructional designers. However, they should be kept well away from learners or course recipients. If a course is well-designed and targeted to meet a defined performance gap, a list of learning objectives serves absolutely no purpose other than to dull the enthusiasm of those embarking on a course of study.

What any learner, and their manager, wants to know is whether on-the-job performance has been improved through some formal learning intervention. In other words, whether the experiences that the employee had during formal training have resulted in changed behaviour and performance in the workplace. Achievement of learning objectives is not evidence of this. The ability to pass a test or demonstrate a skill in a classroom setting is not the same as being able to do so in workplace conditions. I suppose the notable exception is where the classroom conditions mirror exactly, or almost exactly, the workplace – such as training pilots in a flight simulator. Still, I don’t imagine any one of us would take kindly to flying in a plane with a pilot who has only proved his or her performance in a simulator and hasn’t a modicum of experience in the air, unless there isn’t an alternative.

"Learning objectives are really only useful for the people designing the learning."
In fact there’s a strong argument that proof of achievement of learning objectives as commonly assessed – at the end of the learning event – doesn’t even measure learning. Sometimes the time lag between end-of-course testing and attempting to put the learning into action is such that the ‘learning’ is lost from short-term memory. At other times the work environment is less ‘controlled’ than the learning environment and the added variables mean performance improvement simply doesn’t occur. Most of us have seen situations where people return bright-eyed and bushy-tailed from a training course with plans to do things differently – time management, project management and people management training are good cases-in-point - only to revert to the old ways as soon as the day-to-day pressures of the working environment kick back in.

Measuring performance
If you are going to assess the impact of a course on individual participants’ performance in the workplace, you need to forget about learning objectives; they are not the tool for that job. Remember, learning objectives may be useful to help you create a logical design, but that’s all they’re useful for. When you get to measuring transfer of learning to the workplace you need to engage with the people who are in a position to observe behaviour and performance and those who are in a position to measure outputs. This usually means the manager and the team member who is responsible for maintaining performance metrics for the business or team – the balanced scorecard metrics or similar.

This approach requires training and development managers and instructional designers to engage with business managers and agree on strategies for measuring the impact of the learning before the learning design phase even starts. A good way to do this is to roll it into early engagement with business managers: defining the performance problem to be solved, whose performance needs improving, and whether training is likely to help solve the problem (to which the answer is usually ‘no’, but sometimes ‘yes’).
In most cases performance change can’t be measured immediately after the training if it is to be meaningful. Take the case of transactional work – data entry or call centre operatives, for instance – where proof that training has led to improved performance requires data collected over a period of time, not just over the first day or two back in the workplace. All this requires more thought and effort than writing a few overarching learning objectives (even if in well-formed behavioural format) and then developing assessments to ‘test’ whether they’ve been achieved or not. And it requires different skills from the training and development team.

Charles Jennings was chief learning officer at Reuters and Thomson Reuters. He now works as an independent consultant on learning and performance. Details of Charles’s consultancy work and his blog can be found on his website, www.duntroon.com.

Wednesday 26 May 2010

Reflective Practice - here is how to do it!

http://www.rcog.org.uk/files/rcog-corp/uploaded-files/ED-Reflective-Prac.pdf

This excellent PDF from the Royal College of Obstetricians and Gynaecologists has everything you need to know about writing up your reflections in medical training.

Saturday 15 May 2010

Article in BMJ for medical trainees

http://careers.bmj.com/careers/advice/view-article.html?id=20001007

This article published in the BMJ this week gives a good overview of the structure of training and the role of the trainee and trainer in negotiating access to training opportunities.

With The Royal College of Surgeons of England, I have worked with over a thousand consultants and senior trainers to establish understanding of the Workplace Based Assessment Tools and their place in Medical and Surgical postgraduate training.

In partnership with some forward-thinking Deaneries, we are now offering such courses for Trainees. BEST (Building Excellence in Specialty Training) is a one-day course for all Foundation and Core (or ST 1 and 2) trainees who wish to master the approaches needed to get the best out of their training and their trainers.

Thursday 13 May 2010

Cognitive Apprenticeships

While researching for a course I am writing, I found this article on the web, which I thought was most interesting. Despite it being about school classrooms - and American ones at that! - it offers a great example of how we can develop active apprenticeships in our more constrained educational environments. I shall certainly be looking at how I can incorporate the key elements of this work into my new course.

http://projects.coe.uga.edu/epltt/index.php?title=Cognitive_Apprenticeship