Evidence For Learning in Australia

In the UK, the Education Endowment Foundation (EEF) is championed by some as a tool for helping teachers, school leaders and schools to make the best decisions for their students, based on what research and evidence shows. Now in Australia, Evidence for Learning (E4L), powered by Social Ventures Australia and the Commonwealth Bank of Australia, is piggybacking on the EEF’s toolkit in order to provide an Australasian equivalent. It is advised by, among others, John Hattie, and is partnering with AITSL and with State education departments to map the toolkit to State education frameworks and the AITSL Professional Standards for Teachers and Principals.

Last year I spoke with John Bush, Associate Director of the Learning Impact Fund, about the toolkit, and this week I attended a breakfast workshop run by Dr Tanya Vaughan, Associate Director for the E4L toolkit and Honorary Fellow at the Melbourne Graduate School of Education (MGSE) at the University of Melbourne. As the Research Lead at my Australian school, I was keen to hear more about how it was progressing and what it is offering Australian schools.

The aims of the E4L Toolkit

Tanya framed the toolkit as an instrument for helping great practice become common practice. E4L aspires to make accessible, and develop the rigour of, evidence of what works and why in education, in order to make a difference to learners. That is, it aims to build, share and use evidence to support better decision-making in schools, which in turn should lead to better outcomes for students.

The E4L toolkit is free and unrestricted in order to provide all schools with access to evidence of what works best in education, regardless of budget or postcode. This, Tanya explained, will help to address the barriers for teachers engaging with research:

  • Shortage of time;
  • Overload of information; and
  • Insufficient contextualised information for practice.

I would add that much educational research sits behind a paywall in journals inaccessible to non-researchers, or in very expensive books that aren’t affordable for many schools. Tanya was adamant that “front line professionals are the heart and soul of evidence-based education practice”, and that E4L endeavoured to improve communication between professionals and researchers, and between teachers and ‘the evidence’. This connection between educational research and practice is one to which I am especially committed.

What does the E4L Toolkit look like?

The E4L effect size league table’s Top 5 edu-practices

At first glance, the E4L toolkit shows a set of effect-size league tables of teaching practices, each showing – via symbols – the average cost of implementation, the ‘evidence security’ of the claim, and the average number of months’ worth of learning impact.

Visitors to the toolkit can drill down into the site. Clicking on a single practice such as ‘feedback’ reveals summaries addressing the following questions: What is it?; How effective is it?; How secure is the evidence?; What are the costs?; and, What should I consider? Clicking further into ‘References’ reveals the studies that sit behind this practice, with abstracts. Some practices additionally have an Australasian research summary.

Tanya was clear that the toolkit presents averages. In fact, it presents averages of averages, or more accurately meta-meta-analyses. While Tanya advocated for mixed methods – including talking to leaders, teachers and students – most of what the toolkit presents is syntheses of meta-analyses and randomised controlled trials (often considered the ‘gold standard’ of educational research).

The lock rating symbols, showing apparent ‘security of evidence’, are based on the number of meta-analyses beneath the meta-meta-analysis. It is the notion of evidence security and the simplification of ‘what works’ to effect size league tables that has me feeling cautious about the toolkit and its potential use. In attempting to address education practitioners’ shortage of time to engage with research and the overload of research information out there, does E4L provide an oversimplified tool likely to be accepted uncritically by busy educators working in our schools?

What is meta-analysis?

Meta-analysis is a statistical synthesis built on a simple effect-size equation: the experimental group mean, minus the control group mean, divided by the population standard deviation. Simpson (2017) gives us this description of what happens:

“Individual studies report quantitative measures of the outcomes of particular interventions; meta-analysts collect studies in a given area, convert outcome measures to a common metric and combine those to report an estimate which they claim represents the impact or influence of interventions in that area. Meta-meta-analysis then takes the results of meta-analyses, collected in broader fields, and combines those estimates to provide a rank ordering of those fields which make the most difference.”
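To make Simpson’s description concrete, here is a minimal sketch (my own illustration in Python, using invented numbers rather than E4L’s or the EEF’s actual data or methodology) of the standardised mean difference calculated for each study, and of a simple fixed-effect pooling that collapses several studies into the single averaged number a league table reports.

```python
# A minimal, illustrative sketch of the arithmetic behind effect-size league tables.
# All numbers are invented; this does not reproduce E4L's or the EEF's actual
# methodology or data.

def effect_size(mean_experimental, mean_control, sd):
    """Standardised mean difference: (experimental mean - control mean) / SD."""
    return (mean_experimental - mean_control) / sd

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) average of study effect sizes."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical 'feedback' studies, each reduced to a single number.
effects = [
    effect_size(78.0, 72.0, 15.0),  # 0.40
    effect_size(64.0, 61.0, 12.0),  # 0.25
    effect_size(55.0, 49.0, 10.0),  # 0.60
]
variances = [0.04, 0.02, 0.09]  # invented sampling variances

print(round(pooled_effect(effects, variances), 2))  # 0.34, one averaged 'effect of feedback'
```

Reducing every study to one number like this is what makes the ranking possible; it is also what strips out the differing comparisons, measures and ranges of participants that Simpson warns about.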

Simpson’s paper, released in January this year, challenges analogies between evidence-based practice in medicine and education. Treatments in medicine, he argues, are often standard and well-specified, with agreed outcomes which are relatively easy to measure. Education is more nuanced, complex and contextual.

Simpson invokes Eysenck’s (1984) notion of comparing apples with oranges, when he points out that meta-analyses often do not compare studies with the same comparisons, measures and ranges of participants. He contends that aggregated effect sizes are more likely to show differences in research design manipulation than in effects on learners. Bloggers such as Jon Andrews, in this post, and Gary Jones, in this one, have teased out the limitations of meta-analysis as method in educational research. Gary insists that “if teachers and school leaders wish to use effect sizes generated by research to help prioritise interventions, then it is necessary to look at the original research”, rather than relying on simplified lists. Educators need to look behind the curtain.

Snook et al. (2009) argue that when averages are sought or large numbers of disparate studies amalgamated, as in meta-analyses, the complexity of education and of classrooms can be overlooked. They also point out that any meta-analysis that does not exclude poor or inadequate studies is misleading or potentially damaging. Terhart (2011) points out that by focusing on quantifiable measures of student performance, meta-analyses ignore the broader goals of education.

Meta-analysis is singled out by Wiliam (2016) as an unsuitable technique for identifying the relative effectiveness of different approaches to student learning. He states that:

“Meta-analysis is simply incapable of yielding meaningful findings that leaders can use to direct the activities of the teachers they lead.”

Wiliam’s PowerPoint presentation from last year’s ResearchED conference in Washington—titled ‘Why teaching isn’t—and probably never will be—a research-based profession (and why that’s a good thing)’—presents the problems with meta-analyses for deciding ‘what works’ in education. In the presentation, Wiliam reminds us that everything works somewhere and nothing works everywhere. He encourages us instead to ask: Under what conditions does this work?

Possibilities and reservations

In her E4L Toolkit presentation this week, Tanya Vaughan advocated for trusting the profession to be thoughtful and intelligent and to engage with the research literature that sits behind the seductive league tables of the E4L toolkit. Her call for mixed methods research—for qualitative and quantitative to “play together”—resonated with me. Many methods of research have something to offer the field, and all are limited.

My hunch is that the E4L toolkit has something to offer educators in Australia (as a starting point rather than an answer sheet), and I can see the significant work that has gone into producing it, as well as the good intentions behind it. Yet I have my reservations. I worry that an uncritical acceptance of the toolkit’s content, alluring in its apparent simplicity, will result in an impoverished understanding of ‘what research says’. We are in danger of giving education research lip service, or wading in shallow pools of evidence. The use of meta-meta-analyses as the basis for the toolkit has the potential to over-synthesise limited quantitative data to the point of distorting original findings, and ignore the limitations, qualities and complexities of the synthesised studies.

Everyone from the profession to the media is likely to translate these effect-size league tables into seemingly authoritative soundbites of ‘what works’ without taking the time to consider what might work where, for whom, and under what conditions. If Australian organisations and schools are to embrace the E4L Toolkit as part of their pursuit of having a positive impact on learners and more systematic bases on which to make decisions, I hope they do so with a cautious step and a critical eye.

References

Eysenck, H. J. (1984). Meta-analysis: An abuse of research integration. The Journal of Special Education, 18(1), 41-59.

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 1-17.

Snook, I., O’Neill, J., Clark, J., O’Neill, A. M., & Openshaw, R. (2009). Invisible learnings? A commentary on John Hattie’s book: Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New Zealand Journal of Educational Studies, 44(1), 93-106.

Terhart, E. (2011). Has John Hattie really found the holy grail of research on teaching? An extended review of Visible Learning. Journal of Curriculum Studies, 43(3), 425-438.

Wiliam, D. (2016). Leadership for teacher learning: Creating a culture where all teachers improve so that all students succeed. Moorabbin, Australia: Hawker Brownlow Education.

14 thoughts on “Evidence For Learning in Australia”

  1. Thank you. I really enjoyed reading this. Teachers need to be aware of the criticisms of meta-analysis. I’ve also always wondered why the cog.psych area is increasingly influencing education when much of the cog psych research can’t be replicated in another setting.

    • Thanks, Jennie. It’s the need for teachers (and media) to be aware of the limitations that makes me cautious. I worry that the summary tables will be taken as gospel, as they already were in the media last week when the SMH misleadingly reported on ‘the 10,000 pieces of evidence that will settle the homework wars’.
