E4L and the value of dissent

I find it ironic that, just after a blog post in which I reflected that blogging often feels like shouting into the void, a recent post on this blog has received a robust and ongoing response, as well as plenty of rich conversation, online and in my immediate context.

I wrote earlier this month about my ponderings and cautions around the Evidence for Learning Toolkit (based heavily on the UK’s Education Endowment Foundation Toolkit) currently gaining traction in Australian education circles. I felt compelled to share my thinking as the E4L Toolkit is connected with some Australian heavy hitters. It is powered by Social Ventures Australia and the Commonwealth Bank. It is advised by prominent academics from a range of Australian universities, including John Hattie, who is both a champion of meta-analysis and the equivalent of an education household name. Its advisors also include high-level representatives from the Australian Council for Educational Research (ACER), the Australian Council for Educational Leaders (ACEL), the Australian Curriculum, Assessment and Reporting Authority (ACARA), various independent schools associations, and various State education departments. This who’s-who is all the more reason, I think, for those in the education community to look carefully at the Toolkit and its adoption across the Australian edu-landscape.

This week, John Bush, Associate Director of Education at Social Ventures Australia and part of the leadership team of Evidence for Learning, wrote a blog post for the E4L website (the first of a series). In it, he responds specifically to some of the comments I made in my post.

John points out that my post was part of “a flurry of public comment across Australia and the UK in the value and risks of meta-analysis and synthesis of meta-analyses in education research.” Although I didn’t realise it at the time, my post did come in on the crest of a wave. Published the same day (UK time) was this article in the Times Educational Supplement, in which Professors Terry Wrigley and Gert Biesta call the EEF Teaching and Learning Toolkit misleading and unhelpful. There was also this response by the EEF addressing the concerns about, and risks of, its Toolkit.

A few days later, Rachel Buchanan posted about the need for a critical perspective when we talk about using an evidence base in education. I was also made aware (thank you, Twitter) of this post from David Didau from last month, which questioned the objectivity of the EEF Toolkit. He has previously cogitated on the limitations of meta-analysis and effect sizes as measures in education.

In my recent post, I linked to a couple of other thoughtful blogs on the limitations of meta-analysis in educational research, one by Jon Andrews and one by Gary Jones. It’s also well worth listening to both Dan Haesler’s comments, and Cameron Malcher’s interview with E4L’s Tanya Vaughan and Janet Clinton, on this recent episode of the Teachers’ Education Review podcast.

So, amid this flurry, I was delighted when John Bush began his own considered response within a complex field: the use of evidence to inform teaching, in order to have the most positive impact on student learning and achievement.

Despite John’s explanations of the padlock rating system, and bearing in mind the many reservations about meta-analysis, I’m not sure I’m confident in E4L’s five-padlock ‘security’ rating for interventions supported by “at least five robust and recent meta-analyses.” I was, however, relieved to read that E4L deliberately uses careful language in the Toolkit in order to facilitate nuanced understandings of the evidence it presents.

John writes that, “We do not envision the Toolkit as a resource that should dictate or direct professional decisions in schools. Instead, we hope school leaders and teachers will use it to start discussions with their peers and to help inform their professional judgement with research evidence.” These are important words, but we in education remain challenged by the tension between a desire for the simple fix and the reality of the complexities of our work. Careful language and a critical approach are important, but the appeal of the misleading media headline, the ranking on a league table, and the easy answer remains. As a profession we need to push back against these, despite the performative culture in which we operate.

This ongoing public conversation reminded me of a podcast from Brand Newsroom in which the hosts Nic Hayes and Sarah Mitchell interview influential marketing consultant Jay Baer about his book Hug Your Haters. In it, Jay discusses the science of complaints. He says that praise is overrated. Praise feels terrific, he tells us, but teaches us nothing. Negative feedback and criticism, on the other hand, teach us everything; they are where learning and opportunity come from. Now, I wouldn’t classify myself as a hater, or a complainer, but I would probably be seen by those championing the E4L Toolkit as a dissenting voice.

Being publicly challenged, even gently, can be difficult, especially when we believe strongly that we are doing good work. I respect John Bush, and the folk at Evidence for Learning, for having the transparency and commitment to engage in some graceful disagreement. In doing so, they are responding directly to the concerns of people like me. As it turns out, I wasn’t blogging into a void after all; rather, I was offering my perspective to a community of educators who seek to understand one another in order to best serve our students.

While I have moments of high cynicism and outright despair, it is blogversations like the one in which John and I are currently engaging that can help model publicly the kinds of conversations educators can and should be having. I remain cautious and critical about the Evidence for Learning Toolkit, especially in terms of the ways in which it might be adopted by educators in the busyness of their work, but I am open to developing my thinking and continuing the conversation.

5 thoughts on “E4L and the value of dissent”

  1. Pingback: Acting on Evidence – Dan Haesler

  2. Hey Deb,
    So I started writing a comment, and the comment was almost as long as your post…

    The short version of the comment reflected my thoughts of how, even when presented with facts or evidence, we sometimes still make poor decisions.

    “There are countless examples in society of where, even when presented with fairly substantial evidence, people still make “interesting” decisions – whether they be jurors in a courtroom, parents who choose to run the gauntlet with measles, smokers, or dare I say it, leaders of the free world.”

    The longer version that fleshes this out somewhat is here: https://danhaesler.com/2017/03/24/acting-on-evidence/

    Cheers! 🙂


  3. The nature of the ‘evidence’ in E4L is increasingly being recognised as fundamentally flawed. It relies on meta-analyses of research, reduced to ‘effect sizes’ (or, equivalently but very misleadingly, ‘months progress’). This measure is increasingly seen as a poor proxy for educational importance, and the league tables of more or less effective interventions are thoroughly misleading.

    Psychologists like Jan Vanhove have been saying this for a while: standardised effect size is largely a measure of how good your research design is (see http://janhove.github.io/design/2015/03/16/standardised-es-revisited). In education, the EEF Toolkit and Hattie’s Visible Learning are based on this notion of ‘effect size’. Even the big ‘evidence based education’ people like Cheung & Slavin note that effect size varies with sample size and with whether you use a standard test or one a researcher makes up (https://goo.gl/JHEGpE), and Simpson really knocks the EEF Toolkit to bits by showing that the effect size is just something which a researcher chooses (https://goo.gl/EzdcJT). For example, feedback looks like it has a big effect size because it is easy to study, not necessarily because it has a big educational impact.

    The argument that these league tables are meant only to be used as discussion starting points is clearly false: you don’t need rank orderings to start a discussion. Whether intended to or not, these ‘months progress’ numbers push teachers and schools in directions which aren’t grounded in a reasonable interpretation of the research.
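    The point about effect size tracking the outcome measure rather than educational importance can be sketched in a few lines of Python. This is a toy calculation with hypothetical numbers, not the method of any actual toolkit: the same raw gain yields a very different standardised effect size (Cohen’s d) depending on the spread of the test used to measure it.

    ```python
    def cohens_d(mean_t: float, mean_c: float, sd_t: float, sd_c: float) -> float:
        """Standardised mean difference: raw gain divided by the pooled
        standard deviation of the outcome measure."""
        pooled_sd = ((sd_t ** 2 + sd_c ** 2) / 2) ** 0.5
        return (mean_t - mean_c) / pooled_sd

    # The same 5-point raw gain, measured two ways (hypothetical numbers):
    d_broad = cohens_d(55, 50, 15, 15)   # broad standardised test, SD ~ 15 -> d ~ 0.33
    d_narrow = cohens_d(55, 50, 5, 5)    # narrow researcher-made test, SD ~ 5 -> d = 1.0
    ```

    The intervention is identical in both cases; only the measuring instrument changed, yet the second study reports an effect size three times larger.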


  4. Pingback: Feedback: It’s emotional | the édu flâneuse

  5. Pingback: Read Write Respond #015 – Read Write Collect
