My thoughts for this week's post were sparked by a couple of recent events: 1) Ben Goldacre's suggestion that we need better "evidence-based education", and 2) a friend's visit to an immunologist to get treatment for her child's allergies. Although these might seem rather unrelated events, they both help illustrate some of the problems with current approaches in medicine and education that ignore the relationship between science and artistry.
The growing polarization between science and the arts was a significant trend in late twentieth-century academia, as well as in health care and my own profession (nursing). C.P. Snow suggested in his famous 1959 Rede lecture (and later in his book) that there were diverging trends between the cultures of science and the humanities, which he called "the great divide", and that this split between the two cultures was a great hindrance in solving the world's problems (Snow, 1993). I tend to agree, and here are two examples why:
In his recent paper presented at Bethnal Green Academy in the UK, and an associated blog post, Ben Goldacre suggested that:
“Medicine has leapt forward with evidence-based practice. Teachers have the same opportunity to leap forwards and become a truly evidence-based profession. This is a huge prize, waiting to be claimed by teachers.”
This sounds very stirring, and I agree with the sentiments of evidence-based education, but I must admit to some concerns with the details of the message here, which begins to look like exactly the sort of media hype Dr. Goldacre usually (quite rightly) rails against.
Firstly, that Michael Gove (Secretary of State for Education in the UK) asked a "hip and cool" physician rather than an education leader to pontificate on the state of educational research and practice in the UK strikes me as rather a cynical move. Once again this seems to put teachers squarely in the cross-hairs for the blame on educational failures in the UK (the rhetoric suggests that grasping evidence-based practice [EBP] will empower them and set them free from educational dogma)! I can only imagine that to many teachers working in overcrowded classrooms in inner-city state schools, this must seem like the equivalent of an owner of three Bentleys and a country estate lecturing the grounds workers on their social responsibilities.
Secondly, medicine and education are two completely different domains, with vastly different power structures and knowledge bases, so the assumption that an approach proving useful in medicine can simply be applied in education seems simplistic at best. As Ben Goldacre notes, both medicine and teaching are public service professions that integrate scientific knowledge with artistic skill in its implementation (involving practice skills honed by experience). But beyond that, the similarities fall away.
Medicine is, overall, a naturalistic, science-based profession with a knowledge base drawn from the bio-physiological sciences (although modern medicine certainly also embraces scientific work from the psycho-social sciences). However, it rests on a therapeutic-intervention model of treatment (curative and preventative) for a range of very specific, identified diseases and health issues. Generally, as all humans share the same physiology, we can generalize medical treatments pretty well, and an evidence-based practice (EBP) model can work very effectively. We can get a variety of excellent evidence from comparative trials, clinical cases, epidemiological work and even lab studies. Also, because of the legal and social structures (certainly in western countries), physicians are highly educated (at a doctoral level), generally well-paid professionals who enjoy a somewhat privileged social status, with legally enshrined powers to control access to their therapeutic interventions (e.g. prescription rights).
Let's compare this to teaching (with a secondary-education focus, as that was the target of the paper). Modern pedagogy is based upon the work of educational theorists, drawing on educational psychology and social theories to explain how we learn and the best ways to teach. Pedagogy is a humanistic, science-based profession with a primary evidence and knowledge base from the psycho-social sciences. It involves the practical application of skills to enhance the personal learning and development of children across a wide range of social settings and cultures, for a wide range of knowledge, skills and attitudes, with socially defined content usually set out in prescribed curricula. Also, as professional educators, teachers are not doctorally educated (or particularly well-paid) professionals, don't enjoy a privileged social status, and have few legally enshrined powers to control access to their interventions. Although a teacher needs a state-recognized qualification to teach in a state school, the teaching techniques they employ can legally be used by anyone.
A physician can use whatever therapies they think appropriate, based on the best scientific evidence, and, we should not forget, patients have a strong motivation to get better, so are generally willing participants in the endeavour. A teacher, on the other hand, can use a variety of techniques but usually has little choice over what they teach or where, which in turn affects the motivation and interest of students. As an example, my daughter was recently taught the wonders of the cathode-ray tube in her grade 11 physics class (sixth form for you folks in Blighty). I can recall the same subject matter being taught in my own physics class over 30 years ago, and it is hardly relevant today. Yet teachers have to teach what is in the prescribed curriculum, to groups of socially and culturally diverse children (rather than patients with specific illnesses), in an educational system constrained by strong socio-political forces that are generally well outside the educator's direct control. Here, the best tools of EBP, such as randomized controlled trials (RCTs), don't really work well. Yet Ben Goldacre suggests:
“We simply take a group of children, or schools (or patients, or people); we split them into two groups at random; we give one intervention to one group, and the other intervention to the other group; then we measure how each group is doing, to see if one intervention achieved its supposed outcome any better.”
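Mechanically, the procedure Goldacre describes really is that simple; here is a minimal sketch in Python of the random-allocation-and-compare step (the student data, interventions and effect size are all invented for illustration):

```python
import random
import statistics

def run_rct(students, intervention_a, intervention_b, seed=0):
    """Randomly split students into two groups, apply a different
    intervention to each, and return the mean outcome of each group."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = list(students)
    rng.shuffle(shuffled)          # random allocation is the heart of an RCT
    half = len(shuffled) // 2
    group_a, group_b = shuffled[:half], shuffled[half:]
    mean_a = statistics.mean(intervention_a(s) for s in group_a)
    mean_b = statistics.mean(intervention_b(s) for s in group_b)
    return mean_a, mean_b

# Invented example: 100 students with varying baseline ability, where the
# "new method" adds a hypothetical 5-point boost to a test score.
students = [50 + i % 20 for i in range(100)]
mean_new, mean_usual = run_rct(
    students,
    intervention_a=lambda ability: ability + 5,  # new teaching method
    intervention_b=lambda ability: ability,      # business as usual
)
print(mean_new - mean_usual)  # roughly 5, plus noise from the random split
```

Of course, the hard part in education is everything this toy model assumes away: students come pre-grouped into streamed classes, outcomes are contested, and consent is needed from every family.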
It sounds very good in theory, and RCTs in education can work (I have done them), but high schools operate rather differently from hospitals and GP practices, and undertaking trials in them is practically very complex and difficult. Firstly, the assumption here is that better educational outcomes = better examination/test results, and I would suggest it is a bit more complicated than that. You can't simply equate educational outcomes with effective curative/preventative medical treatments for diseases, and to do so reflects rather simplified behaviourist educational principles from the last century.
Secondly, children are taught in groups that are frequently streamed by ability, so getting a random sample is not usually easy to do and involves matching techniques. Attaining comparable populations for comparing teaching techniques is also difficult, as schools span very different socio-economic and cultural areas, and what is being measured is in reality a particular dominant cultural artifact (Berliner, 2002). So how can we effectively make the meaningful generalizations from numerous repeat studies across different locations that systematic reviews require? More likely, what works well in one setting won't work so well in another (although Goldacre does acknowledge this).
Thirdly, he also suggests we generally have no idea which of two interventions in an RCT is best. I would argue that is not usually the case in educational settings: for most interventions we have a clear idea of what will work better, and the confounding variables that can apply are huge (Berliner, 2002). Controlling for them is a very complex undertaking, and often we can't really use controls effectively at all; imagine telling a group of parents that their kids are going into the control group whilst the rest of the class gets one of two new whiz-bang, high-tech teaching resources. Despite Dr. Goldacre's comparisons to medicine, such education RCTs are unlikely to be as acceptable in wider society, as you are not dealing with a single motivated patient group but with a range of children and their parents (who are not ill, so have less incentive to participate), many of whom are already disenfranchised by the education system and may see their kids as guinea pigs. Consider the huge issues of getting parental consent; my wife, who was a teacher for years, couldn't even get some parents to agree to their children doing homework! Yes, RCTs might be useful in some circumstances, but their widespread use and value in education are more limited. Other forms of educational research (including case studies and qualitative work) are likely to be more practical and to give more useful results.
Unlike in medicine, we might also want to consider that new theories of how we learn come along relatively infrequently. Although our understanding of cognitive processes is changing, theories on how best to harness them to promote learning have progressed little since the 1990s. We do, however, have a plethora of new tools that can be applied. My sense is that teachers already generally use best practices and technologies whenever they can, and they already have a good idea of what classroom techniques work best (i.e. small, socially active, student-centred classes; mixed methods including plenty of interactive and heuristic learning; and the use of multimedia and educational technologies). There is good evidence of that already, without RCTs. Sadly, though, it isn't a lack of knowledge of evidence-based practices, or a lack of will to implement them, that stops teachers using best practices in the classroom. It is the curriculum, assessment, and social and environmental constraints that (at least in state schools) are externally controlled.
Finally, unlike doctors, high-school teachers are not expected to carry out research as part of their everyday practice, and most have not been prepared for it or acquired the skills to do so. All in all, forms of educational research other than RCTs are much more likely to be effective in informing practice (and have been), and frankly the assumption that educators might not have thought of this before is a tad patronizing. There is already plenty written about this in the education literature (Hammersley, 2008; Ong-Dean et al., 2011; Rudd & Johnson, 2008; Skidmore & Thompson, 2012; Torgerson, 2009), but you have to go to ERIC rather than PubMed, Medline or CINAHL to find it.
Now don't get me wrong (I am beginning to sound like a bit of an old Trot; rise up, comrades!): I do think EBP is a great leap forward in healthcare, and evidence-based teaching is certainly a good idea. However, Ben Goldacre's paper seems to suggest that education is equivalent to medicine and that the same techniques can be applied equally well. Sadly, in this case I think it gives an example of a failure to see how science can best be used to inform educational practice. Likewise, the responses from the Education Endowment Foundation and a blog post from the Institute of Education seem similarly misguided here.
So here is my second example. On a recent trip to a well-qualified immunologist to seek diagnostic testing for allergies, a friend's child was subjected to a wide variety of allergy tests and found to have a number of allergies to specific substances. The treatments suggested included avoidance of a lot of antigenic substances, and allergen desensitization therapy by prolonged, regular injections of minute amounts of the allergen. Allergen immunotherapy is thought to work by modulating the immunological responses that cause allergic reactions. The mechanisms are currently poorly understood, but regular exposure to very small quantities appears to help individuals build up tolerance. Specific allergen immunotherapy is most commonly administered by specialists as subcutaneous (under the skin) injections, and requires a build-up period followed by a maintenance period of three to five years. It does seem to work in reducing allergic symptoms, as does reducing exposure to the allergens.
Now here lies the problem: many of the other things on the list of interventions my friend was supposed to adopt were practically useless, and the suggestions were hardly a great example of EBP. They included:
- Two programs of desensitization therapy ($600/yr each)
- Use of an in-room HEPA air filter 24/7 (costs $120)
- New mattress and quilt, and bedding (circa $1400)
- Sealing their bed mattress and duvet in plastic covers and taping the zips closed
- Avoiding all of the following foods: tomatoes, apples, carrots, celery, chilli, kiwi, peanut, papaya, pineapple, melon, banana, peach, lamb and beef
- No carpets
- No plants in aquariums
- Animals and birds to be kept out of the house
- No old furniture
- Cleaning of the bedroom daily
- Curtains, rugs and bedding all washed weekly
- Stuffed toys from synthetic fibres only, and washed and dried weekly
- Clothes stored outside of the bedroom
- Closed windows to all rooms
- No forced air heating
In effect, getting the child to live in a plastic, air-filtered bubble would seem to be the ideal choice for controlling the symptoms, yet few of these suggested interventions appear to take any account of the hapless patient's quality of life.
In EBP there are four key elements that should be considered:
- Scientific evidence of a demonstrated positive or negative effect of a specific action/intervention
- Economic viability of the intervention
- Social and personal acceptability of the intervention
- Clinical expert judgement (as to the efficacy of the intervention in this specific case, and any adaptations required)
In this case the physician had clearly ignored items 2 to 4 in the prescribed treatment regime; although it draws on the best evidence from scientific research (including RCTs), it ignores the practical realities and external constraints (one has to wonder if he had ever lived with a teenager). There is no way most kids would adhere to this regime, nor could most parents apply all of these suggested interventions, and many could not afford them. I would hope a good nurse would be able to point out to the parents more realistic interventions that might actually help.
Anyhow, my point here is that there seems a long way to go to unite science and artistry in the practice professions. Initiatives like "integrative medicine" or "holistic science" don't really help, as they just appear to invoke a lot of pseudo-scientific nonsense in an attempt to generate mass appeal. I would argue that a pragmatic approach, implementing scientific knowledge and EBP with artistry in education, together with policy change, has the best chance of improving educational outcomes.
So, in terms of using medicine as an example of the power of EBP, maybe physicians in glass houses should not throw stones just yet. Yes, the occasional progressive school like Bethnal Green Academy may break the mould in terms of improving examination performance, but that isn't because it has embraced EBP or the use of RCTs. Ben Goldacre does acknowledge some of this, and also states: "I know that outsiders often try to tell teachers what they should do, and I'm aware this often ends badly." However, he should probably heed his own advice there, as he seems at risk of turning into what he himself once criticized: a celebrity expert holding forth in the media on matters outside their own expertise. That said, his new book on bad pharma is pretty good!
Berliner, D. C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18–20.
Goldacre, B. (2013). Building evidence into education. Retrieved from http://www.education.gov.uk/inthenews/inthenews/a00222740/building-evidence-into-education
Hammersley, M. (2008). Paradigm war revived? on the diagnosis of resistance to randomized controlled trials and systematic review in education. International Journal of Research & Method in Education, 31(1), 3-10.
Ong-Dean, C., Hofstetter, C. H., & Strick, B. R. (2011). Challenges and dilemmas in implementing random assignment in educational research. American Journal of Evaluation, 32(1), 29-49.
Rudd, A., & Johnson, R. B. (2008). Lessons learned from the use of randomized and quasi-experimental field designs for the evaluation of educational programs. Studies in Educational Evaluation, 34(3), 180-188.
Skidmore, S. T., & Thompson, B. (2012). Propagation of misinformation about frequencies of RFTs/RCTs in education: A cautionary tale. Educational Researcher, 41(5), 163-170.
Snow, C. P. (1993). The two cultures (2nd ed.). Cambridge: Cambridge University Press.
UK Science Council. (2009). What is science? Retrieved August 25, 2011, from http://www.sciencecouncil.org/content/what-science
Torgerson, C. J. (2009). Randomised controlled trials in education research: A case study of an individually randomised pragmatic trial. Education 3-13, 37(4), 313-321.