Monday, 29 June 2009

Competency based education and Quantity Assurance - why nobody can think for themselves any more

Thanks to the excellent groups I worked with last week (almost 40 consultant doctors in two South London trusts), as well as some of the great literature around in Generic and Medical Education, I think I am somewhat further forward in identifying what is so wrong about our current education philosophy and practice in the UK.

It is the political control over our hospitals and schools that has led to the implementation of competence based education. This political climate has as its closet epigraph: “Let’s only value that which we can easily measure; forget trying to measure what is of value.” Added to that is the obsession with measuring for the benefit of the organisation (Government, Trust, University, Royal College etc.) and not just forgetting but actively turning a blind eye to the end user – the patient, the trainee doctor, the course participant and the student.

This realisation that organisations no longer care about the end user of their services, but only about their financial accounting and those involved in the decision making, prompted me to research the meaning of Quality Assurance.

It is a set of activities intended to ensure that products (goods and/or services) satisfy customer requirements in a systematic, reliable fashion…. It is important to realise also that quality is determined by the intended users, clients or customers, not by society in general (http://en.wikipedia.org/wiki/Quality_assurance)
This also incorporates measuring all process elements, the analysis of performance and the continual improvement of the products, services and processes that deliver them to the customer. (http://www.thecqi.org/Knowledge-Hub/What-is-quality-new/)

What I took from this is that the end user is of paramount importance – we do QA for them and for the products and services that they require. Therefore they ought to be involved in it and given a voice to shape and improve those products and services where possible.

So far so good.

A medical school lecturer shared their approach to QA with me:

Level 1 - Course satisfaction: feedback forms completed at the end of the event. This gauges participants’ immediate reactions and will highlight immediate issues.
Level 2 - Transfer to the workplace: completed electronically 2-3 weeks after the event. This allows time to reflect on the learning and relate its usefulness to one’s own learning needs and role.
Level 3 - Impact on processes/performance: a qualitative approach involving a structured discussion with a sample of participants.

But some institutions are cutting back on the levels of QA they use. Rather than using a level 1 evaluation at the end of the course, they are moving to an online version of level 2 several weeks after the event. Whilst this may gather data on the transfer of learning to the workplace, course participants will not receive their certificate until they complete the electronic feedback form, so I suspect less qualitative data will result. My guess is that the tick-box exercise will remain.

When I looked elsewhere, I found that the barriers to doing what is best for the end user were frighteningly similar in hospital trusts. A radiologist told me about the two-week targets they face; A&E clinicians regularly face a choice of quantity over quality of care and are penalised when they choose the latter. Waiting lists have gone down in hospitals, but at what cost to patient care? Or to the very people who provide that care?

In much the same way, trainee doctors are asked to gather evidence of their clinical practice on single sheets of paper with tick boxes printed on them. The more pieces of paper, the better. Once again the quantitative triumphs over the qualitative.

And I ask the question, should we not be calling our evaluation processes “Quantity Assurance?”

To escape from the harsh realities of the overworked and undervalued doctors I work with, I turned to the literature in the hope that I would find a new idea I could use with my trainers to mitigate this inexorable decline into the robotic delivery of “care” or “education” as a commodity, like sandwiches or exhaust pipes.

One hot, sleepless night, I unearthed a new book on Reflective Practice from beside my bed, in the hope that it would have a soporific effect! It did quite the opposite. (Reflective Practice by Gillie Bolton, 2005, SAGE, London. ISBN 978-1-4129-0811-5)

As I flicked through it my attention was caught by the phrase, “Teachers are assessed on the value they offer the consumer….” Bolton was attacking competency based education. Hurrah! As someone who finds “Learning Outcomes” just a bit too management oriented, I read on.
Bolton’s critique claims that knowledge and skills are treated as commodities, ignoring educational processes such as the teacher–learner relationship and the learning environment, to name but two. She says that, “Giving students set pro formas, lists of prompts, questions or areas which must be covered in reflective practice will stultify, make for passivity and lack of respect. Professionals need to ask and attempt to answer their own questions. Otherwise their practice is being moulded towards the system’s wants and needs.”

Furthermore, says Bolton, testing and checking up on students to see if they have acquired the competencies further endorses this subordinate sense. Trainers talk about ‘getting the trainees to do’ ... which means that all the next generation is doing is joining the dots and filling in the blanks. Providing evidence of a competency does not guarantee that the trainee has learned anything or understood the case or the patient. Bolton says: “It matters not that it is solely a paper exercise as there is no continuity between course and practice, no one to see practice has been changed or developed; what matters is the product: the neatly ticked boxes look right.” And she references Prosser and Trigwell (1999) and Rust (2002) in calling this “surface learning.”

Most of the trainers I work with complain bitterly about the spoon-feeding culture and about the trainees they have who cannot think for themselves. What we don’t realise is that the system requires that trainees cannot think for themselves – the system specifies what it wants and orders its learning outcomes and behaviourist statements of performance accordingly. Unless we intervene and do something about this, the next generation will be fit for nothing except robotically following those in power.

1984 or what?

This morning I turned to Medical Teacher (http://www.medicalteacher.org/), hoping for some respite from this depressing situation. And I found it. Well, a chink of light.

I came across a table in the current edition of Medical Teacher (Vol. 31, No. 4, April 2009) in an article on assessing medical professionalism (Assessment of medical professionalism: who, what, when, where, how and ... why? Hawkins et al. 2009;31:348–361). The table contrasts two frameworks for defining the elements of professionalism. One framework is the horribly familiar “behaviours oriented framework”, which specifies the particular behaviours that trainee doctors should be able to “evidence” and tick off in their portfolios:

Responds promptly when paged or called
Takes on extra work to help the team
Listens and responds to others respectfully
Discusses colleagues and co-workers in a respectful manner ...

However, the other framework cheered me up somewhat. This is a “Principles oriented framework” in which the key headings relate to less observable competencies:

Excellence – dedication to improving quality of care; commitment to competence
Humanism – respect, compassion, empathy, honour/integrity
Altruism – putting patient interest above own
Accountability – embraces self regulation, public service/advocacy


I was immediately struck by the stark difference between these two lists. Beyond the obvious fact that the first is easy to “tick off”, what struck me is that some of its behaviours are of such a low level. Are these really professional behaviours, or just common-or-garden courtesies all workers ought to show as a matter of course? Do these things really need documenting? Surely they should be taken as read, and anyone not consistently behaving like this ought to be referred to HR for the relevant disciplinary action? Is this what we get when HR encroaches on education and training? A list of things you have to do to avoid being disciplined?

What about true professionalism? What about aspiring to be the best? The Principles list gives us the beginning of an outline of what a real professional ought to want to be. Surely professionalism is not something we put on (like a white coat) when we get to work? For me it pervades my life and is who I am and not just what I do.
Feeling bolstered, I read further in this article ... and fell back to earth with a bang. It said:

“While such principle driven frameworks are quite useful in thinking broadly about construction of an assessment programme, they are not easily applied to the assessment of measurable behaviours.” The authors bemoan the lack of consistency in assessing these principles, but do admit that the behaviours oriented framework is a “bottom up approach.”

They do go on to make some interesting and useful suggestions for assessing professionalism, but still within a context of assessing behaviours and documenting evidence.

What I have come to realise over the last few weeks, from my reading, my practice and accounts of other people’s practice, is that we inhabit an evidence-based, competency-focused education, training and medical system. The result, whether in terms of patient care or trainee development, is that the lowest common denominator becomes the benchmark of satisfaction. Quality and quantity seem to be linked like the two pans of an old-fashioned weighing scale: when one rises, the other falls. As waiting times are reduced and patients receive test results in less than two weeks, so the quality of care is compromised. The more evidence trainees gather about their competence, tied to observable and documented behaviours, the simpler the competencies become.

Those concerned with quality assurance need to stop looking for easy ways to gather the lowest levels of data (or else call it quantity assurance) and start looking at indicators of quality – that is, descriptive feedback from their end users. In fact, they just need to consult the end user, without whom there would be no NHS, no University and no Royal College. We need to stop satisfying the masters and look to the servants – for it is they who employ us.
