Is formative assessment overvalued?

Call me late to the party, but last night I was surprised to see this tweet from Alfie Kohn stating that formative assessment is overvalued. I agree with his latter comment that data to see if students are improving, or have improved, are worthless until we’ve asked ‘improved at what?’, but I don’t understand the connection between the two parts of the tweet. My hunch is that my understanding of formative assessment in practice is different to Kohn’s. In this post I’ll explain my own take on formative assessment.

(Disclaimer – I understand that a tweet is limited in its 140-character form. I’m using my understanding of the tweet as a jumping-off point for this post.)

From the seminal 1998 paper of Paul Black and Dylan Wiliam, ‘Inside the black box’, to subsequent work by these authors, and others, formative assessment as an evidence-based, rigorous feedback process is well-established.

Feedback can be defined as information provided by an agent regarding aspects of performance or understanding (Hattie & Timperley, 2007). Wiliam (2016) notes that anyone (teacher, learner, peer, parent) can be an agent of feedback, and that the most powerful agent of feedback is likely to be the student who takes responsibility for their own learning.

The purpose of feedback, according to Hattie and Timperley (2007) is to reduce the discrepancy between current and desired understanding. Information is used by students or teachers for improvement in an interactive dialogue between teacher and learners so that learners can become more expert and more responsible in guiding and furthering their own learning (Black & Wiliam, 2010). The interactivity, and the activity, are important. Teachers use feedback to make adjustments to planning and instruction. Students become active, empowered agents of their own learning as they self-assess, receive feedback, and act on it. Formative assessment is based in a belief that every learner can improve.

Feedback can have a significant positive influence on student learning and achievement (Stiggins & DuFour, 2009; Wiliam, 2011a, 2011b, 2016), but it is linked to emotions, relationships and environment; it can be accepted, modified, or rejected; and it can have positive or negative effects on performance (see Kluger & DeNisi, 1996).

Formative assessment involves feedback that is continuous; specific to goal, standards and task; descriptive rather than numerical or via grades; occurring within a learning context; and acted on by the learner (such as through self-assessment, re-doing the task, or outlining next steps).

It is information and interpretations from assessments, not numbers or grades, that matter (Hattie & Timperley, 2007). Numerical marks and grades operate as judgements, not aids to learning, and so students ignore comments where a mark is provided (Black, 2014; Black et al., 2004). Alfie Kohn argues against grades in this 2011 paper. Ruth Butler (1987, 1988) found that grades had no effect on achievement. Written comments based on the task, on the other hand, resulted in high levels of task involvement. Comments should identify what has been done well and what still needs improvement, and give guidance on how to make that improvement (Black et al., 2004; Wiliam, 2011b).

Feedback should not involve judgement of the person, positively or negatively. Butler’s research (1987, 1988) found that written praise had no effect on achievement, and Costa and Garmston (2003) note that learning cannot occur if a person feels threatened. While receiving feedback can be emotional, it should be designed to evoke cognition over emotion.

At a grass-roots level, teachers such as Starr Sackstein (2015, 2017) and Mark Barnes (2013, 2015) have been advocating for teachers to ‘throw out grades’, focusing instead on feedback practices such as conferencing, peer assessment, and self-assessment.

This previous blog post outlines some of my own practices around summative assessments, as well as a term I spent teaching Year 10 English without any marks or grades. I have recently developed my summative assessment feedback practices to ensure that students engage with their work more deeply before it is assessed, and then again once I have written comments, but before they receive their mark. In my classroom, formative assessment practices are a constant. They involve my students and me continually engaging with their work, curriculum standards, syllabus points, rubrics, and clear criteria for success, and setting specific targets. These practices are entwined within a relational classroom environment of trust and challenge. Anecdotally, some of the best a-ha moments for my students come when they assess their own work against clear criteria and come to their own realisations about how to improve. Over time, self-assessment becomes part of expected and lived practice for students in my classroom. This is not to say that I am a formative assessment expert; building formative opportunities takes ongoing teacher reflection, deliberate planning, and careful, constant reading of the students.

Perhaps I have been embedding formative feedback practices into my teaching for so long that it seems obvious, but my thought on first seeing Kohn’s tweet was: of course we cannot look at data that might indicate improvement of learning without asking ‘improvement at what?’ Specific goals, standards, and comments on how and on what to improve, are part and parcel of the suite of practices of formative assessment.

Is formative assessment overvalued? I don’t think so. It is a fundamental way to improve learning, and also to build the capacity of the learner themselves.

References

Barnes, M. (2013). Role reversal: Achieving uncommonly excellent results in the student-centred classroom. Alexandria, VA: ASCD.

Barnes, M. (2015). Assessment 3.0: Throw Out Your Grade Book and Inspire Learning. Thousand Oaks, CA: Corwin.

Black, P. J. (2014). Assessment and the aims of the curriculum: An explorer’s journey. Prospects, 44, 487-501.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8-21.

Black, P. J., & Wiliam, D. (1998). Inside the Black Box: Raising standards through classroom assessment. Phi Delta Kappan, 80, 139-48.

Black, P. J., & Wiliam, D. (2010). A pleasant surprise. Phi Delta Kappan, 92(1), 47.

Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. Journal of Educational Psychology, 79(4), 474-482.

Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1-14.

Costa, A. L., & Garmston, R. J. (2003). Cognitive coaching in retrospect: Why it persists.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.

Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33.

Sackstein, S. (2015). Hacking assessment: 10 ways to go gradeless in a traditional grade school. Cleveland, OH: Hack Learning.

Sackstein, S. (2017). Peer Feedback in the classroom: Empowering students to be experts. Alexandria, VA: ASCD.

Stiggins, R., & DuFour, R. (2009). Maximizing the power of formative assessments. Phi Delta Kappan, 90(9), 640-644.

Wiliam, D. (2011a). Embedded Formative Assessment. Bloomington, IN: Solution Tree.

Wiliam, D. (2011b). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14.

Wiliam, D. (2016). Leadership for teacher learning: Creating a culture where all teachers improve so that all students succeed. Moorabbin, Australia: Hawker Brownlow Education.

The oasis of writing

Sometimes we need an immersion in a cooling, calming place of our choosing. That might involve turning off our devices, turning away from social media, turning towards what nourishes us. It might be sitting in silence, or playing music loud. It might be the catharsis of working with our hands, or the release of letting them rest. It might be solitude or connection, work or play, stillness or movement, mindful or mindless.

School is currently out in Western Australia, and while I am working, I have been taking time out across the break to bathe in oases of sorts. I’ve been on a brief holiday with my family, pottered around the house, seen friends and indulged in another haven of mine: academic writing.

Those of you who write for a living or are in the throes of a PhD (Oh, the unicorn-dancing-in-a-champagne-waterfall highs! Oh, the despairing bottom-of-the-dark-pit lows!) might roll your eyes or baulk at writing as an oasis. But after a term of working full-time in an exciting but challenging newly-formed role in a school, selling a house, buying a house, moving house, parenting my two lovely children, and trying to maintain relationships with family and friends, I was ready for a break from the relentlessness. From feeling like the mouse on the wheel, full of urgency and repetitive motion. Not only that, but both social media and real life have had their share of challenges lately. Academic writing has been a welcome and nurturing reprieve; simultaneously mental work and a mental break. Academic writing continues to be like my PhD, which I sometimes managed to think of as a holiday from all-the-other-things, or intellectual me-time, although without the weighty pressure or looming examination. Papers and chapters are more bite-size and more varied, and pleasingly always at different stages; just as one becomes difficult, another is coming together or being accepted.

Of course academic writing is not easy or necessarily enjoyable. With it comes challenge, struggle, sometimes brutal feedback. It helps that the acwri I’m doing at the moment is writing I want to do. I’m engaged, interested, motivated, intrigued. I’m learning, growing, pushing at the boundaries of what I know and can do. Academic writing allows me to extend myself in different ways to my school role.

Some of this writing is solo, but I’m also writing papers and chapters collaboratively, something still pretty new to me. Perhaps the collaboration is the coolest part because working with others takes me out of my usual groove, my usual ways of thinking and writing. It gets me engaging with others’ words and these spur my words on. Our words are like gifts from a science fiction world; they shapeshift and take on different lives as they are passed back and forth between authors.

This kind of writing and collaboration is somewhere for a writer to luxuriate. Nestle in. Be cocooned by the writing while at the same time deliciously confronted by it. I brace for feedback but at the same time allow myself to be vulnerable and to be shaped. To read unfamiliar theory, try alternate approaches, or to tinker with new ways of theorising, researching and writing. To have one or more other writers to generate and energise.

It’s cool. It’s fun. It’s a welcome distraction from the daily rush of work during term time and the barrage of angry educators slinging accusations at one another on Twitter (thank goodness for my arguing on EduTwitter bingo card!). This holiday break I’ve worked on a solo-authored journal paper and a collaborative chapter so far. I’ve got one more collaborative chapter to look at over the next few days. I’m looking forward to it. Like a cup of tea at the end of the day after the kids have gone to bed, for my pracademic self, straddling as I do the worlds of school and academia, academic writing can be a moment of ‘aaaaaahhhh’, of indulgence, of me-time.

E4L and the value of dissent

I find it ironic that, just after a blog post in which I reflected that blogging often feels like shouting into the void, a recent post on this blog has received a robust and ongoing response, as well as plenty of rich conversation, online and in my immediate context.

I wrote earlier this month about my ponderings and cautions around the Evidence for Learning Toolkit (based heavily on the UK’s Education Endowment Foundation Toolkit) currently gaining traction in Australian education circles. I felt compelled to share my thinking as the E4L Toolkit is connected with some Australian heavy hitters. It is powered by Social Ventures Australia and the Commonwealth Bank. It is advised by prominent academics from a range of Australian universities, including John Hattie who is both a champion of meta-analysis and the equivalent of an education household name. Its advisors also include high-level representatives from the Australian Council for Educational Research (ACER), the Australian Council of Education Leaders (ACEL), the Australian Curriculum, Assessment and Reporting Authority (ACARA), various independent schools associations, and various State education departments. This who’s-who is all the more reason, I think, for those in the education community to look carefully at the Toolkit and its adoption across the Australian edu-landscape.

This week, John Bush, Associate Director of Education at Social Ventures Australia and part of the leadership team of Evidence for Learning, wrote a blog post for the E4L website (the first of a series). In it, he responds specifically to some of the comments I made in my post.

John points out that my post was part of “a flurry of public comment across Australia and the UK in the value and risks of meta-analysis and synthesis of meta-analyses in education research.” Although I didn’t realise it at the time, my post did come in on the crest of a wave. Published the same day (UK time) was this article in the Times Educational Supplement, in which Professors Terry Wrigley and Gert Biesta call the EEF Teaching and Learning Toolkit misleading and unhelpful. There was also this response by the EEF addressing the concerns around and risks of its Toolkit.

A few days later, Rachel Buchanan posted about the need for a critical perspective when we talk about using an evidence base in education. I was also made aware (thank you, Twitter) of this post from David Didau from last month, which questioned the objectivity of the EEF Toolkit. He has previously cogitated on the limitations of meta-analysis and effect sizes as measures in education.

In my recent post, I linked to a couple of other thoughtful blogs on the limitations of meta-analysis in educational research, one by Jon Andrews and one by Gary Jones. It’s also well worth listening to both Dan Haesler’s comments, and Cameron Malcher’s interview with E4L’s Tanya Vaughan and Janet Clinton, on this recent episode of the Teachers’ Education Review podcast.

So, amid this flurry, I was delighted when John Bush began his own considered response within a complex field: the use of evidence to inform teaching, in order to most positively impact on student learning and achievement.

Despite John’s explanations of the padlock rating system, bearing in mind the many reservations about meta-analysis, I’m not sure I’m confident with E4L’s five-padlock ‘security’ of interventions that are supported by “at least five robust and recent meta-analyses.” I was, however, relieved to read that E4L deliberately uses careful language in the Toolkit in order to facilitate nuanced understandings of the evidence it presents.

John writes that, “We do not envision the Toolkit as a resource that should dictate or direct professional decisions in schools. Instead, we hope school leaders and teachers will use it to start discussions with their peers and to help inform their professional judgement with research evidence.” These are important words, but we in education remain challenged by the tension between a desire for the simple fix and the reality of the complexities of our work. Careful language and a critical approach are important, but the appeal of the misleading media headline, the ranking on a league table, and the easy answer, remain. As a profession we need to push back against these, despite the performative culture in which we operate.

This ongoing public conversation reminded me of a podcast from Brand Newsroom in which the hosts Nic Hayes and Sarah Mitchell interview influential marketing consultant Jay Baer about his book titled Hug Your Haters. In it, Jay discusses the science of complaints. He says that praise is over-rated. Praise feels terrific, he tells us, but teaches us nothing. Negative feedback and criticism, on the other hand, teach us everything; they are where learning and opportunity come from. Now, I wouldn’t classify myself as a hater, or a complainer, but I would probably be seen by those championing the E4L Toolkit as a dissenting voice.

Being publicly challenged, even gently, can be difficult, especially when we believe strongly that we are doing good work. I respect John Bush, and the folk at Evidence for Learning, for having the transparency and commitment to engage in some graceful disagreement. In doing so they are responding directly to the concerns of people like me who, as it turns out, weren’t blogging into a void after all. Rather, I was offering my perspective to a community of educators who seek to understand one another in order to best serve our students.

While I have moments of high cynicism and outright despair, it is blogversations like the one in which John and I are currently engaged that can help model publicly the kinds of conversations educators can and should be having. I remain cautious and critical about the Evidence for Learning Toolkit, especially in terms of the ways in which it might be adopted by educators in the busyness of their work, but I am open to developing my thinking and continuing the conversation.

Evidence For Learning in Australia

In the UK the Education Endowment Foundation (EEF) is championed by some as a tool for helping teachers, school leaders and schools to make the best decisions for their students, based on what research and evidence shows. Now in Australia, Evidence for Learning (E4L), powered by Social Ventures Australia and the Commonwealth Bank of Australia, is piggybacking on the EEF’s toolkit in order to provide an Australasian equivalent. It is advised by, among others, John Hattie, and is partnering with AITSL and with State education departments to map the toolkit to State education frameworks and the AITSL Professional Standards for Teachers and Principals.

Last year I spoke with John Bush, Associate Director of the Learning Impact Fund, about the toolkit, and this week I attended a breakfast workshop run by Dr Tanya Vaughan, Associate Director for the E4L toolkit and Honorary Fellow at the Melbourne Graduate School of Education (MGSE) at the University of Melbourne. As the Research Lead at my Australian school, I was keen to hear more about how it was progressing and what it is offering Australian schools.

The aims of the E4L Toolkit

Tanya framed the toolkit as an instrument for helping great practice become common practice. E4L aspires to make accessible, and develop the rigour of, evidence of what works and why in education, in order to make a difference to learners. That is, it aims to build, share and use evidence to support better decision-making in schools, in turn leading to better outcomes for students.

The E4L toolkit is free and unrestricted in order to provide all schools with access to evidence of what works best in education, regardless of budget or postcode. This, Tanya explained, will help to address the barriers for teachers engaging with research:

  • Shortage of time;
  • Overload of information; and
  • Insufficient contextualised information for practice.

I would add that much educational research is behind a pay wall in journals inaccessible to non-researchers, or in very expensive books that aren’t affordable for many schools. Tanya was adamant that “front line professionals are the heart and soul of evidence-based education practice”, and that E4L endeavoured to improve communication between professionals and researchers, teachers and ‘the evidence’. This connection between educational research and practice is one to which I am especially committed.

What does the E4L Toolkit look like?

The E4L effect size league table’s Top 5 edu-practices

At first glance, the E4L toolkit shows a set of effect-size league tables of teaching practices, each showing – via symbols – the average cost of implementation, the ‘evidence security’ of the claim, and the average months’ worth of learning impact.

Visitors to the toolkit can drill down into the site. Clicking on a single practice such as ‘feedback’ reveals summaries addressing the following questions: What is it?; How effective is it?; How secure is the evidence?; What are the costs?; and, What should I consider? Clicking further into ‘References’ reveals the studies that sit behind this practice, with abstracts. Some practices additionally have an Australasian research summary.
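To make the structure described above concrete, here is a toy model of a toolkit-style league table. The practice names, costs, padlock counts and impact figures below are invented for illustration only; they are not real E4L data, and this is not how E4L builds its site.

```python
# Toy sketch of a toolkit-style league table. All numbers are invented
# for illustration; this is NOT real E4L data.
from dataclasses import dataclass

@dataclass
class Practice:
    name: str
    avg_cost: int           # e.g. 1 (low) to 5 (high), shown as symbols
    evidence_padlocks: int  # 0-5 'security of evidence' rating
    months_impact: int      # average additional months of learning

practices = [
    Practice("feedback", avg_cost=1, evidence_padlocks=3, months_impact=8),
    Practice("homework", avg_cost=1, evidence_padlocks=4, months_impact=5),
    Practice("setting/streaming", avg_cost=1, evidence_padlocks=3, months_impact=-1),
]

# A 'league table' is simply the practices ranked by average impact.
league = sorted(practices, key=lambda p: p.months_impact, reverse=True)
print([p.name for p in league])  # → ['feedback', 'homework', 'setting/streaming']
```

The point of the sketch is how much is hidden: each row compresses whole bodies of research into three numbers, which is precisely what makes the table both appealing and risky.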

Tanya was clear that the toolkit presents averages. In fact, it presents averages of averages, or more accurately meta-meta-analyses. While Tanya advocated for mixed methods – including talking to leaders, teachers and students – most of what the toolkit presents are syntheses of meta-analyses and randomised control trials (often considered the ‘gold standard’ of educational research).

The lock rating symbols, showing apparent ‘security of evidence’ are based on the number of meta-analyses beneath the meta-meta-analysis. It is the notion of evidence security and the simplification of ‘what works’ to effect size league tables that has me feeling cautious about the toolkit and its potential use. In attempting to address education practitioners’ shortage of time to engage with research and the overload of research information out there, does E4L provide an oversimplified tool likely to be accepted uncritically by busy educators working in our schools?

What is meta-analysis?

Meta-analysis is a statistical analysis built on a standardised effect size, calculated with a simple equation: the experimental group mean, minus the control group mean, divided by the population standard deviation. Simpson (2017) gives us this description of what happens:

“Individual studies report quantitative measures of the outcomes of particular interventions; meta-analysts collect studies in a given area, convert outcome measures to a common metric and combine those to report an estimate which they claim represents the impact or influence of interventions in that area. Meta-meta-analysis then takes the results of meta-analyses, collected in broader fields, and combines those estimates to provide a rank ordering of those fields which make the most difference.”
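Simpson's description can be sketched in a few lines of code. The study figures below are invented, and inverse-variance (fixed-effect) weighting is one standard way meta-analysts combine estimates; I am not claiming it is the method behind any particular toolkit entry.

```python
# Illustrative sketch of effect-size computation and meta-analytic pooling.
# Study numbers are invented for demonstration.

def cohens_d(mean_exp, mean_ctrl, pooled_sd):
    """Standardised mean difference: (experimental mean - control mean) / SD."""
    return (mean_exp - mean_ctrl) / pooled_sd

# Two hypothetical studies of the same intervention:
# (experimental mean, control mean, pooled SD, variance of the estimate)
studies = [
    (52.0, 48.0, 10.0, 0.04),
    (60.0, 59.0, 5.0, 0.01),
]

effects = [cohens_d(me, mc, sd) for me, mc, sd, _ in studies]   # [0.4, 0.2]

# Fixed-effect pooling: weight each effect by the inverse of its variance,
# so more precise studies count for more.
weights = [1.0 / var for _, _, _, var in studies]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
print(round(pooled, 2))  # → 0.24
```

Note what the arithmetic quietly assumes: that the two studies measured comparable outcomes on comparable populations. When they did not, the pooled number is exactly the apples-and-oranges average the critics below warn about.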

Simpson’s paper, released in January this year, challenges analogies between evidence-based practice in medicine and education. Treatments in medicine, he argues, are often standard and well-specified, with agreed outcomes which are relatively easy to measure. Education is more nuanced, complex and contextual.

Simpson invokes Eysenck’s (1984) notion of comparing apples with oranges, when he points out that meta-analyses often do not compare studies with the same comparisons, measures and ranges of participants. He contends that aggregated effect sizes are more likely to show differences in research design manipulation than in effects on learners. Bloggers such as Jon Andrews, in this post, and Gary Jones, in this one, have teased out the limitations of meta-analysis as method in educational research. Gary insists that “if teachers and school leaders wish to use effect sizes generated by research to help prioritise interventions, then it is necessary to look at the original research”, rather than relying on simplified lists. Educators need to look behind the curtain.

Snook et al. (2009) argue that when averages are sought or large numbers of disparate studies are amalgamated, as in meta-analyses, the complexity of education and of classrooms can be overlooked. They also point out that any meta-analysis that does not exclude poor or inadequate studies is misleading or potentially damaging. Terhart (2011) points out that by focusing on quantifiable measures of student performance, meta-analyses ignore the broader goals of education.

Meta-analysis is singled out by Wiliam (2016) as an unsuitable technique for identifying the relative effectiveness of different approaches to student learning. He states that:

“Meta-analysis is simply incapable of yielding meaningful findings that leaders can use to direct the activities of the teachers they lead.”

Wiliam’s PowerPoint presentation from last year’s ResearchED conference in Washington—titled ‘Why teaching isn’t—and probably never will be—a research-based profession (and why that’s a good thing)’—presents the problems with meta-analyses for deciding ‘what works’ in education. In the presentation, Wiliam reminds us that everything works somewhere and nothing works everywhere. He encourages us instead to ask: Under what conditions does this work?

Possibilities and reservations

In her E4L Toolkit presentation this week, Tanya Vaughan advocated for trusting the profession to be thoughtful and intelligent and to engage with the research literature that sits behind the seductive league tables of the E4L toolkit. Her call for mixed methods research—for qualitative and quantitative to “play together”—resonated with me. Many methods of research have something to offer the field, and all are limited.

My hunch is that the E4L toolkit has something to offer educators in Australia (as a starting point rather than an answer sheet), and I can see the significant work that has gone into producing it, as well as the good intentions behind it. Yet I have my reservations. I worry that an uncritical acceptance of the toolkit’s content, alluring in its apparent simplicity, will result in an impoverished understanding of ‘what research says’. We are in danger of giving education research lip service, or wading in shallow pools of evidence. The use of meta-meta-analyses as the basis for the toolkit has the potential to over-synthesise limited quantitative data to the point of distorting original findings, and ignore the limitations, qualities and complexities of the synthesised studies.

Everyone from the profession to the media is likely to translate these effect-size league tables into seemingly authoritative soundbites of ‘what works’ without taking the time to consider what might work where, for whom, and under what conditions. If Australian organisations and schools are to embrace the E4L Toolkit as part of their pursuit of having a positive impact on learners and more systematic bases on which to make decisions, I hope they do so with a cautious step and a critical eye.

References

Eysenck, H. J. (1984). Meta-analysis: An abuse of research integration. The Journal of Special Education, 18(1), 41-59.

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 1-17.

Snook, I., O’Neill, J., Clark, J., O’Neill, A. M., & Openshaw, R. (2009). Invisible learnings? A commentary on John Hattie’s book: Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New Zealand Journal of Educational Studies, 44(1), 93-106.

Terhart, E. (2011). Has John Hattie really found the holy grail of research on teaching? An extended review of Visible Learning. Journal of Curriculum Studies, 43(3), 425-438.

Wiliam, D. (2016). Leadership for teacher learning: Creating a culture where all teachers improve so that all students succeed. Moorabbin, Australia: Hawker Brownlow Education.

The Research Lead Down Under

candle at the Emu Plains Market

Schools, school leaders and teachers are increasingly expected to be research literate, to be informed by evidence in their decisions and practices, and to be assessed against a range of high-stakes measures. In this kind of education world, schools need to be able to make sense of the measures against which they are being assessed, and have the capacity to generate counter-narratives or alternative data to measure those things that are important for them.

As I’ve alluded to, I have this year begun a new role at my school, which encompasses overseeing professional learning, staff development, innovation and pedagogy. But it also encompasses the kinds of work associated with what UK schools call a ‘Research Lead’: developing the research base and systematic methodologies of the organisation; data generation and analytics; executing evidence-based strategic initiatives; overseeing and developing research and innovation frameworks.

As Hargreaves and Fullan (in Professional Capital, 2012) point out, leading evidence-based school practices and change is a complex process. Having a person dedicated to the curation, generation and communication of research supports everyone from the classroom to the boardroom in making better decisions. A role dedicated to raising the profile and practice of research helps a school to remain agile in response to current educational research; evidence-informed and systematic in its methods; proactive in its processes and communications; and keenly focused on its strategic impacts within the wider context of the global education world.

The Research Lead role has been around in UK schools for a few years, and now there are Research Schools. See, for instance, the Wellington Learning and Research Centre and the Huntington Research School.

As the UK’s College of Teaching noted yesterday, teachers need access to evidence, strategies for understanding it, and opportunities to conduct their own research, not to mention the desire to engage with research in the first place. Access is a real issue, and while there are open access journals, the occasional free paper, and popular dissemination sites like The Conversation and the AARE blog, many teachers do not have the library privileges, money or time to access paywalled journals and expensive books. The Research Lead can be a conduit between research and staff at the school.

The role of Research Lead is explained in this Education Development Trust report, by Tom Bennett. The report positions the Research Lead as gatekeeper, consigliere, devil’s advocate, auditor and project manager. Interestingly, the report notes that schools where Research Leads had made the biggest impact were frequently schools where the role was part of the brief of a senior member of the leadership team. It lists authentic buy-in from senior leadership and a ‘place at the table’ of school life as necessary conditions of the role; the Executive needs to support the role and give it authority, autonomy, time (for the Lead to manage projects and for staff to engage with research) and commitment. The autonomy is partly important for projects and getting work done, but also because the Research Lead might have to sometimes take an unpopular position, or suggest a pause during a time of rapid change; they need to be free to do so.

Elsewhere in the world, the American School of Bombay has a Research and Development Centre. In Australia, examples such as the St Stephens Institute in Perth, the Barker Institute in Sydney, the Crowther Centre at Brighton Grammar School in Melbourne, the Centre for Learning, Research and Innovation at Geelong College, and the Centre for Research, Innovation and Future Development at St Paul’s in Brisbane, show how Australian schools are focusing on centralising and developing research. Just last year, my own role and others local to me were created, incorporating ‘research’ in the title. Some of these roles incorporate learning technologies. Others incorporate student academic achievement and staff learning and development. The research focus is based around the strategic vision and learning principles of each school. In Australia, there is often a focus on generation and innovation (finding out what might work in what context) rather than on prescribing ‘what works’. Teachers are seen by many schools as potential researchers.

So the Research Lead, or equivalent, is advisor, instigator, filter, conduit, provocateur, disseminator, critical questioner, sceptic, creator of partnerships, and builder of a professional culture in which rigorously considering evidence, research literature, and how to measure impact is an accepted part of the way things are done. The Lead works across and through the organisation, an influence on and advocate for systematic thinking. As Gary Jones’ blog often explores, evidence-based practice is nuanced and rife with challenges. The Research Lead needs to move beyond lip-service to research and hat-tips to evidence-based practice. They need to be aware of their own preferences, biases, blind spots and deficiencies, as well as the research-and-evidence temperature of the organisation, and how to evaluate and generate evidence and research.

I’m looking forward to shaping the Research Lead part of my own role. As a boundary-spanning PhD-university-adjunct-school-leader-teacher, it is something to which I am deeply committed and by which I am endlessly fascinated. My nerdery will be put to good use!

Teaching and leading schools in a #posttruth world of #altfacts

General Hux’s speech in The Force Awakens (source – reddit.com)

Post-truth: relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. ~ Oxford Dictionary

To my continued astonishment, we are living in a post-truth world. ‘Post-truth’ was the Oxford Dictionary’s 2016 word of the year. The Trump administration in its first week seemed to impersonate Star Wars’ totalitarian First Order when it claimed that it was not lying but providing the public with ‘alternative facts’. Then, gag orders were placed on a number of government agencies, including the Department of Agriculture, the Environmental Protection Agency and the Department of Health and Human Services. (Hurrah for whoever tweeted rebelliously about inauguration crowds and climate change from the National Park Service’s Badlands National Park account.)

For a Western government to blatantly deny reality is at once baffling and terrifying. Hello, propaganda. Hello, the invocation of untruths (sorry, ‘alternative facts’) to smother any unfavourable actuality.

We shouldn’t be surprised. The misuse and abuse of language and facts is something that dystopian and speculative fiction has been warning of for decades, and something that history tells us has the ability to tap into the hive mind and rally societies around a common, often chilling, cause or leader. 

In this post I’ll explore the notion of a post-truth world of alternative facts and empty emotive rhetoric, around two arenas in my own life: teaching English and Literature, and my new role at my school, which encompasses in part engagement with research across the school.

First, to teaching in a post-truth world …

With the school year beginning next week, my Year 12 English team are finalising the texts to be taught and studied this Australian academic year. We’ve been tossing up between two contemporary texts about modern issues like gender, sporting culture and bullying, but every day the news and my social media feed give me a nagging feeling, a tugging at my literary shirt sleeve, a whisper to pause, take stock, listen. And dig out a dystopian classic.

Last year we taught the 12s Fahrenheit 451, a text that portrays books as dangerous threats to government control and societal compliance. This year perhaps we should teach Orwell’s 1984. Its Ministry of Truth, which falsifies historical records, and Newspeak, a language designed to restrict freedom of thought, are more relevant than ever. In fact, Orwell’s novel has this week rocketed to number one on the Amazon bestseller list.

A more recent text also comes to mind. Lionel Shriver’s 2016 novel The Mandibles, set between 2029 and 2047, is an economic dystopia that imagines the USA’s collapse. In her novel, the bungling US government has little respect for its citizens. First-world problems like gluten intolerance disappear as violence and poverty rise. It is Mexico that builds an electrified, computerised, constantly-surveilled fence to keep desperate American illegals out.

Of course, as a teacher of English and Literature I teach versions of reality and a multiplicity of perspectives, but that plurality doesn’t stretch to bald-faced lies for the purposes of propaganda, banning scientists from speaking, or removing language like ‘climate change’ from government policy and websites. Language matters. It shapes thought. It wields power. It’s our job as teachers to elevate our students’ capacities to engage critically with their world: to be sceptical consumers of what they see, hear and read, and to be empowered to use language as an agentic tool.

Next, to school leadership in a world of alternative facts …

I am also coming to terms with how schools might respond to this post-truth world. This is especially relevant to me as I have just begun a new role at my school (new to me and new to the school). It is a senior leadership role that encompasses the use of evidence and research to make informed decisions from the classroom to the boardroom, as well as to underpin and frame pedagogy, professional learning, performance review processes and capacity building across the organisation.

In this paper published online on 18 January, Brown and Greany (2017, p.1)—thanks to Gary Jones, whose blog is a great resource in this space, for sharing it—write:

Educational evidence rarely translates into simple, linear changes in practice in the ways that what-works advocates might hope. Instead, … evidence must be combined with practitioner expertise to create new knowledge which improves decision making and enriches practice so that, ultimately, children’s learning is enhanced.

This focus on what Brown and Greany call ‘what matters’ as well as ‘what works’ resonates with me. As Jon Andrews (channelling Marilyn Cochran-Smith) reminds us, teaching is unforgivingly complex. If schools and educators are to really engage with research, then we need to honour our own contexts and value our own wisdom of practice. Teachers and schools can and should engage with research. I’m grateful that my school is able to create a role like mine in order to elevate evidence and research, execute research initiatives, and further embed scientific thinking and data analytics into the fabric of the school’s culture. I’m grateful that there are schools around the world bringing evidence, mindfulness and criticality to their decision making and pedagogy.

In a post-truth world, how do we balance a respect for truth, evidence and reason, with an honouring of plurality, multiplicity and praxis? How might we use literature or research as vehicles for respecting perspectives, while exploring challenges and possibilities?

Stitching the shadows: Writing & social media

textile detail by Isobel Moore http://www.threadnoodle.co.uk/

This blog post is part of a blogversation. It responds to two blogs, both of which came to my attention via my Twitter feed: this one on qualitative research methods by Naomi Barnes, and this one, tracing the social media interchange that followed, by Ian Guest. This is not the first time I have jumped into a blogversation unannounced and uninvited. The first time was when Helen Kara challenged Naomi Barnes to the #blimage challenge, after I had first challenged Helen to the same. The post I wrote, in response to Helen’s photograph of spider webs in her garden, echoes the themes of this post – the power and messiness of connectivity on social media. Another of Naomi’s posts had me thinking about diffraction.

The great thing about social media is that by engaging we situate ourselves within a public conversation. It’s when people jump in—to ask a question, make a comment, respectfully challenge, add their lived experience, share their perspective—that dialogue is enriched and we influence each other, across time, space and devices.

In Naomi’s recent post she articulates some ways of thinking that are close to my heart and my keyboard: blogging as inquiry and using metaphors as a method of sense-making. As many of you would know, I used Alice’s Adventures in Wonderland as a literary metaphor through which I viewed and re-constructed my PhD data. Metaphors, as any reader of this blog will recognise, are one way that I make sense of the world. Metaphors also emerged from the stories of the participants of my PhD as they worked to make sense of their selves and worlds.

As une édu flâneuse I was taken with Naomi’s notion of the ‘concept flâneur’. The flâneur, or its feminine alternative the flâneuse, is the attentive observer, the attuned wanderer, a scholar of the world and a chameleonic surveyor of the crowd. The ‘concept flâneur’ reminds me of my own use of bricolage in my PhD that I describe here as rethinking well-worn traditions and stitching them back together in new form. But flânerie is about more than stitching together. It is about rapt observation and devoted contemplation, about deep understanding and applying scholarly thinking. The theoretical flâneur is the insider-outsider, at once looking in and immersed within.

The part of Naomi’s post that challenged me the most was when she stated that qualitative research has stagnated as “the author has become central in the writing. It becomes about writing, rather than the research and the need for change.” It led me to a Twitter exchange in which I explored my own uncertainty around the self in research and the author’s place in writing.

a Twitter exchange resulting from Naomi Barnes’ blog post

In keeping with Naomi’s metaphor of the sutured-together monster body, I see these kinds of social media interactions as textile. I have written before about textiles as a metaphor for subversion and political activism. We stitch onto shared fabric, adding perspectives, colour, texture, visual elements to a work. Our hands and minds shape the work (our thinking work, our writing work, our collaborative dialogue work), as it shapes us. Needles prick and rub callouses into fingers. We cramp. We struggle with the material. We can be proud of our contribution, working together like a quilting circle on the collaborative work of seeking to understand and to theorise.

Ian, in his post responding to Naomi’s, points out the non-linear, messy ways that exchanges happen on Twitter, despite their appearance in the feed as linear threads. I’ve written before about the butterfly effects of Twitter conversations: their serendipitous, surprising and subtly influential moments; their powerful, unforeseen consequences.

Ian wonders about the silences and the blurred boundaries between people and thoughts. I agree that it is in the silences, the shadows, the fissures, the dark cracks, of exchanges and of our own thinking, that we are most in a state of becoming and therefore potential change. It’s in the dark and vulnerable spaces that we learn. Blogging can be a bit like this: an exposure, a laying bare, a stripping down.

Ian mentions in his post that he shared a blog post via email despite sitting right next to his colleague; they collaborated via technology despite being in the same room. This reflects the evolving relationship that Naomi and I share. We have begun a co-authorly relationship, via digital tools. Word to word, screen to screen, device to device. When we met in person for the first time recently, we didn’t discuss our writing projects specifically. We saved our writerly collaborations for online spaces: email, Google docs, Twitter. In our fledgling collaboration, for me the digital sphere feels simultaneously a bit sacred and a draft-notebook-type place for working out. We show our workings to each other via our thinking-out-loud digital musings.

The wonderful thing about blogging, tweeting, emailing, writing and reading as inquiry is the acceptance, and even celebration, that it is all unformed. There are moments of awkwardness, uncertainty, openness, weakness, resistance, emotion. It’s all laid bare on screen, and open to tangled-threaded multi-webbed interchanges that have us emerging from the knotted labyrinthine tangles as from a chrysalis, declaring “here I am” so that we can be challenged and changed again.