E4L and the value of dissent

I find it ironic that, just after a blog post in which I reflected that blogging often feels like shouting into the void, a recent post on this blog has received a robust and ongoing response, as well as plenty of rich conversation, online and in my immediate context.

I wrote earlier this month about my ponderings and cautions around the Evidence for Learning Toolkit (based heavily on the UK's Education Endowment Foundation Toolkit) currently gaining traction in Australian education circles. I felt compelled to share my thinking as the E4L Toolkit is connected with some Australian heavy hitters. It is powered by Social Ventures Australia and the Commonwealth Bank. It is advised by prominent academics from a range of Australian universities, including John Hattie, who is both a champion of meta-analysis and the equivalent of an education household name. Its advisors also include high-level representatives from the Australian Council for Educational Research (ACER), the Australian Council for Educational Leaders (ACEL), the Australian Curriculum, Assessment and Reporting Authority (ACARA), various independent schools associations, and various State education departments. This who's who is all the more reason, I think, for those in the education community to look carefully at the Toolkit and its adoption across the Australian edu-landscape.

This week, John Bush, Associate Director of Education at Social Ventures Australia and part of the leadership team of Evidence for Learning, wrote a blog post for the E4L website (the first of a series). In it, he responds specifically to some of the comments I made in my post.

John points out that my post was part of "a flurry of public comment across Australia and the UK in the value and risks of meta-analysis and synthesis of meta-analyses in education research." Although I didn't realise it at the time, my post did come in on the crest of a wave. Published the same day (UK time) was this article in the Times Educational Supplement, in which Professors Terry Wrigley and Gert Biesta call the EEF Teaching and Learning Toolkit misleading and unhelpful. There was also this response by the EEF, addressing the concerns around and risks of its Toolkit.

A few days later, Rachel Buchanan posted about the need for a critical perspective when we talk about using an evidence base in education. I was also made aware (thank you, Twitter) of this post by David Didau from last month, which questioned the objectivity of the EEF Toolkit. He has previously cogitated on the limitations of meta-analysis and effect sizes as measures in education.

In my recent post, I linked to a couple of other thoughtful blogs on the limitations of meta-analysis in educational research, one by Jon Andrews and one by Gary Jones. It’s also well worth listening to both Dan Haesler’s comments, and Cameron Malcher’s interview with E4L’s Tanya Vaughan and Janet Clinton, on this recent episode of the Teachers’ Education Review podcast.

So, amid this flurry, I was delighted when John Bush began his own considered response within a complex field: the use of evidence to inform teaching, in order to most positively impact on student learning and achievement.

Despite John's explanation of the padlock rating system, and bearing in mind the many reservations about meta-analysis, I'm not sure I'm comfortable with E4L's five-padlock 'security' rating for interventions that are supported by "at least five robust and recent meta-analyses." I was, however, relieved to read that E4L deliberately uses careful language in the Toolkit in order to facilitate nuanced understandings of the evidence it presents.

John writes that "We do not envision the Toolkit as a resource that should dictate or direct professional decisions in schools. Instead, we hope school leaders and teachers will use it to start discussions with their peers and to help inform their professional judgement with research evidence." These are important words, but we in education remain challenged by the tension between a desire for the simple fix and the reality of the complexities of our work. Careful language and a critical approach are important, but the appeal of the misleading media headline, the league-table ranking, and the easy answer remains. As a profession we need to push back against these, despite the performative culture in which we operate.

This ongoing public conversation reminded me of a podcast from Brand Newsroom in which the hosts Nic Hayes and Sarah Mitchell interview influential marketing consultant Jay Baer about his book titled Hug Your Haters. In it, Jay discusses the science of complaints. He says that praise is overrated. Praise feels terrific, he tells us, but teaches us nothing. Negative feedback and criticism, on the other hand, teach us everything; they are where learning and opportunity come from. Now, I wouldn't classify myself as a hater, or a complainer, but I would probably be seen by those championing the E4L Toolkit as a dissenting voice.

Being publicly challenged, even gently, can be difficult, especially when we believe strongly that we are doing good work. I respect John Bush, and the folk at Evidence for Learning, for having the transparency and commitment to engage in some graceful disagreement. In doing so they are responding directly to concerns like mine. As it turns out, I wasn't blogging into a void after all; rather, I was offering my perspective to a community of educators who seek to understand one another in order to best serve our students.

While I have moments of high cynicism and outright despair, it is blogversations like the one in which John and I are currently engaging that can help model publicly the kinds of conversations educators can and should be having. I remain cautious and critical about the Evidence for Learning Toolkit, especially in terms of the ways in which it might be adopted by educators in the busyness of their work, but I am open to developing my thinking and continuing the conversation.

Evidence For Learning in Australia

In the UK, the Education Endowment Foundation (EEF) Teaching and Learning Toolkit is championed by some as a tool for helping teachers, school leaders and schools to make the best decisions for their students, based on what research and evidence show. Now in Australia, Evidence for Learning (E4L), powered by Social Ventures Australia and the Commonwealth Bank of Australia, is piggybacking on the EEF's toolkit in order to provide an Australasian equivalent. It is advised by, among others, John Hattie, and is partnering with AITSL and with State education departments to map the toolkit to State education frameworks and the AITSL Professional Standards for Teachers and Principals.

Last year I spoke with John Bush, Associate Director of the Learning Impact Fund, about the toolkit, and this week I attended a breakfast workshop run by Dr Tanya Vaughan, Associate Director for the E4L toolkit and Honorary Fellow at the Melbourne Graduate School of Education (MGSE) at the University of Melbourne. As the Research Lead at my Australian school, I was keen to hear more about how it was progressing and what it is offering Australian schools.

The aims of the E4L Toolkit

Tanya framed the toolkit as an instrument for helping great practice become common practice. E4L aspires to make accessible, and develop the rigour of, evidence of what works and why in education, in order to make a difference to learners. That is, it aims to build, share and use evidence to support better decision-making in schools, which in turn should lead to better outcomes for students.

The E4L toolkit is free and unrestricted in order to provide all schools with access to evidence of what works best in education, regardless of budget or postcode. This, Tanya explained, will help to address the barriers for teachers engaging with research:

  • Shortage of time;
  • Overload of information; and
  • Insufficient contextualised information for practice.

I would add that much educational research is behind a paywall in journals inaccessible to non-researchers, or in very expensive books that aren't affordable for many schools. Tanya was adamant that "front line professionals are the heart and soul of evidence-based education practice", and that E4L endeavoured to improve communication between professionals and researchers, teachers and 'the evidence'. This connection between educational research and practice is one to which I am especially committed.

What does the E4L Toolkit look like?

The E4L effect size league table’s Top 5 edu-practices

At first glance, the E4L toolkit shows a set of effect-size league tables of teaching practices, each showing – via symbols – the average cost of implementation, the 'evidence security' of the claim, and the average months' worth of learning impact.

Visitors to the toolkit can drill down into the site. Clicking on a single practice such as ‘feedback’ reveals summaries addressing the following questions: What is it?; How effective is it?; How secure is the evidence?; What are the costs?; and, What should I consider? Clicking further into ‘References’ reveals the studies that sit behind this practice, with abstracts. Some practices additionally have an Australasian research summary.

Tanya was clear that the toolkit presents averages. In fact, it presents averages of averages, or, more accurately, meta-meta-analyses. While Tanya advocated for mixed methods – including talking to leaders, teachers and students – most of what the toolkit presents are syntheses of meta-analyses and randomised controlled trials (often considered the 'gold standard' of educational research).

The padlock rating symbols, showing apparent 'security of evidence', are based on the number of meta-analyses beneath the meta-meta-analysis. It is the notion of evidence security, and the simplification of 'what works' into effect-size league tables, that has me feeling cautious about the toolkit and its potential use. In attempting to address education practitioners' shortage of time to engage with research, and the overload of research information out there, does E4L provide an oversimplified tool likely to be accepted uncritically by busy educators working in our schools?
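To make concrete how much nuance a single symbol can absorb, here is a deliberately crude sketch of a count-based rating. It is my own caricature, not E4L's actual criteria, which also weigh the recency, quality and consistency of the underlying reviews:

```python
def padlock_rating(num_meta_analyses: int) -> int:
    """Hypothetical, count-only mapping from the number of underlying
    meta-analyses to a 1-5 padlock rating. E4L's real methodology also
    considers the recency and robustness of the evidence; this caricature
    exists only to show how a single symbol flattens that detail."""
    return max(1, min(5, num_meta_analyses))

# Five or more meta-analyses earn the maximum five padlocks,
# regardless of what those meta-analyses actually compared or measured.
print(padlock_rating(7))  # 5
print(padlock_rating(2))  # 2
```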

What is meta-analysis?

Meta-analysis is a statistical technique for combining effect sizes across studies. An effect size is typically calculated as the experimental group mean, minus the control group mean, divided by the population standard deviation. Simpson (2017) gives us this description of what happens:

“Individual studies report quantitative measures of the outcomes of particular interventions; meta-analysts collect studies in a given area, convert outcome measures to a common metric and combine those to report an estimate which they claim represents the impact or influence of interventions in that area. Meta-meta-analysis then takes the results of meta-analyses, collected in broader fields, and combines those estimates to provide a rank ordering of those fields which make the most difference.”
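For readers who find the arithmetic easier to follow in code, here is a minimal sketch of that pipeline. The scores and effect sizes are invented, and the plain (unweighted) averaging is a simplification of my own; real syntheses weight studies, for example by sample size or inverse variance, and this is not E4L's or the EEF's actual code:

```python
import statistics

def effect_size(experimental_scores, control_scores):
    """Standardised mean difference: (experimental mean - control mean) / SD.
    The SD of all scores stands in here for the population standard deviation."""
    mean_difference = statistics.mean(experimental_scores) - statistics.mean(control_scores)
    spread = statistics.pstdev(experimental_scores + control_scores)
    return mean_difference / spread

def meta_analysis(study_effect_sizes):
    """Combine effect sizes from individual studies into one estimate
    (unweighted here; real meta-analyses weight by study size or precision)."""
    return statistics.mean(study_effect_sizes)

def meta_meta_analysis(meta_analytic_estimates):
    """Combine the estimates of several meta-analyses in a broad field into
    the single figure that ends up on a league table."""
    return statistics.mean(meta_analytic_estimates)

# One invented study: a single effect size from raw scores.
print(round(effect_size([72, 68, 75, 80], [65, 70, 62, 66]), 2))  # ~1.47

# Five invented meta-analytic estimates for 'feedback' collapse
# into one headline number.
feedback_meta_analyses = [0.79, 0.61, 0.48, 0.95, 0.70]
print(round(meta_meta_analysis(feedback_meta_analyses), 2))  # 0.71
```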

Simpson’s paper, released in January this year, challenges analogies between evidence-based practice in medicine and education. Treatments in medicine, he argues, are often standard and well-specified, with agreed outcomes which are relatively easy to measure. Education is more nuanced, complex and contextual.

Simpson invokes Eysenck's (1984) notion of comparing apples with oranges when he points out that meta-analyses often do not compare studies with the same comparisons, measures and ranges of participants. He contends that aggregated effect sizes are more likely to reflect differences in research design and its manipulation than differences in effects on learners. Bloggers such as Jon Andrews, in this post, and Gary Jones, in this one, have teased out the limitations of meta-analysis as a method in educational research. Gary insists that "if teachers and school leaders wish to use effect sizes generated by research to help prioritise interventions, then it is necessary to look at the original research", rather than relying on simplified lists. Educators need to look behind the curtain.

Snook et al. (2009) argue that when averages are sought, or large numbers of disparate studies are amalgamated, as in meta-analyses, the complexity of education and of classrooms can be overlooked. They also point out that any meta-analysis that does not exclude poor or inadequate studies is misleading or potentially damaging. Terhart (2011) points out that by focusing on quantifiable measures of student performance, meta-analyses ignore the broader goals of education.

Meta-analysis is singled out by Wiliam (2016) as an unsuitable technique for identifying the relative effectiveness of different approaches to student learning. He states that:

"Meta-analysis is simply incapable of yielding meaningful findings that leaders can use to direct the activities of the teachers they lead."

Wiliam's PowerPoint presentation from last year's ResearchED conference in Washington, titled 'Why teaching isn't—and probably never will be—a research-based profession (and why that's a good thing)', presents the problems with meta-analyses for deciding 'what works' in education. In the presentation, Wiliam reminds us that everything works somewhere and nothing works everywhere. He encourages us instead to ask: Under what conditions does this work?

Possibilities and reservations

In her E4L Toolkit presentation this week, Tanya Vaughan advocated for trusting the profession to be thoughtful and intelligent and to engage with the research literature that sits behind the seductive league tables of the E4L toolkit. Her call for mixed methods research—for qualitative and quantitative to “play together”—resonated with me. Many methods of research have something to offer the field, and all are limited.

My hunch is that the E4L toolkit has something to offer educators in Australia (as a starting point rather than an answer sheet), and I can see the significant work that has gone into producing it, as well as the good intentions behind it. Yet I have my reservations. I worry that an uncritical acceptance of the toolkit's content, alluring in its apparent simplicity, will result in an impoverished understanding of 'what research says'. We are in danger of paying education research lip service, or wading in shallow pools of evidence. The use of meta-meta-analyses as the basis for the toolkit has the potential to over-synthesise limited quantitative data to the point of distorting original findings, and to ignore the limitations, qualities and complexities of the synthesised studies.

Everyone from the profession to the media is likely to translate these effect-size league tables into seemingly authoritative soundbites of ‘what works’ without taking the time to consider what might work where, for whom, and under what conditions. If Australian organisations and schools are to embrace the E4L Toolkit as part of their pursuit of having a positive impact on learners and more systematic bases on which to make decisions, I hope they do so with a cautious step and a critical eye.

References

Eysenck, H. J. (1984). Meta-analysis: An abuse of research integration. The Journal of Special Education, 18(1), 41–59.

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 1–17.

Snook, I., O'Neill, J., Clark, J., O'Neill, A. M., & Openshaw, R. (2009). Invisible learnings? A commentary on John Hattie's book: Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New Zealand Journal of Educational Studies, 44(1), 93–106.

Terhart, E. (2011). Has John Hattie really found the holy grail of research on teaching? An extended review of Visible Learning. Journal of Curriculum Studies, 43(3), 425–438.

Wiliam, D. (2016). Leadership for teacher learning: Creating a culture where all teachers improve so that all students succeed. Moorabbin, Australia: Hawker Brownlow Education.