
About Dr Deborah M. Netolicky

School Principal | PhD | GAICD | Educator | Author | Podcaster (The Edu Salon) | Blogger (the édu flâneuse)

Education Gurus

It’s easy to make your own guru memes with Canva.

Knowledge and advice for schools and about education often seem to exist in a world of commodification and memeification. There is plenty of disagreement and debate in education, and plenty of competition on bookshelves and in conference programs. Educators and academics position themselves as brands via bios, photographs, and certification badges. As an educator and a researcher, I have those whose work I follow closely: academics, for instance, whose presence affects me when I meet them because their reputation and body of work precede them.

In education, we have perceived gurus. These are people who have become ubiquitous in education circles, at education conferences, and in education literature. Teachers and school leaders scramble to get tickets to their sessions and to get photographic evidence of having met them. Their words are tweeted out in soundbites ad infinitum (or is that ad nauseam?), and made into internet memes. Sometimes these individuals partner with publishers or education corporates, and so the visibility and reach of their work grows. They become the scholars or experts most cited in staff rooms, at professional learning water coolers, and in job interviews when asked how research informs practice.

Sometimes, these gurus are teachers or principals who have gained a large following on social media and subsequently a monolithic profile. Often, they are academics who have built up bodies of work over many years, becoming more and more well-known along the way, and eventually being perceived as celebrities or gurus. Yesterday I had the pleasure of learning from Dylan Wiliam, firstly at a day long seminar, and then at my school. At one point the seminar organisers apologised for running out of Wiliam’s books, acknowledging the desire of delegates to have the book signed.

Marten Koomen has traced networks of influencers in Australian education organisations. In his new paper ‘School leadership and the cult of the guru: the neo-Taylorism of Hattie’, Scott Eacott challenges the rise of the edu guru, those academics whose work is ubiquitous and influential to the point of being uncritically accepted and canonised. Eacott pushes back against the ‘what works’ mentality in education, in which educators are sold ‘what works’ and encouraged to slavishly apply it to their own contexts. Jon Andrews, too, questions the unquestioning way in which the loudest and most prominent voices become the accepted voices. Meta-analysis and meta-meta-analysis, often translated into league tables of ‘what works’ in education, have been the subject of criticism. George Lilley and Gary Jones have both questioned meta-analysis on their blogs. I’ve written about cautions surrounding the use of meta-analysis in education, especially when it drives clickbait headlines and a silver-bullet mentality of having the answers without having to ask any questions. Yesterday Wiliam made his oft-repeated points: that everything works somewhere, nothing works everywhere, and context matters. A guru cannot provide easy answers in education, as education is too complex and contextual for that.

taken at AERA last year

Much of this conversation around the rise of the edu guru has surrounded John Hattie, although he is by no means the only globally renowned education expert likely to make conference delegates weak at the knees. I was personally uncomfortable when he was beamed in via video link to last year’s ACEL conference and began to give an ‘I have a dream’ speech about education. As an English and Literature teacher I understand the power of rhetoric and analogy to persuade and inspire, but appropriating the legacy and words of Dr Martin Luther King Junior seemed a way to gospelise a personal brand of education reform.

I don’t think that education experts, no matter how influential they become, should encourage the uncritical acceptance of their ideas as dogma, or present themselves as the bringers of the One True Thing To Rule All Things of and for education. As Dylan Wiliam, channelling Ben Goldacre, repeatedly said yesterday, “I think you’ll find it’s a little more complicated than that.”

I wonder how perceived gurus feel about being guru-ised by the education masses. In part the famous and the infamous in education are so because of their actions: accepting more and more speaking gigs, performing the game of publishing and promoting their work. Most, I would guess, do this for the same reason someone like me speaks and publishes. To contribute to education narratives and change those narratives, hopefully for the better. To be of service to the profession and the field. To explore and wrestle with ideas, trying to find ways to make sense of the complexity of education in order to improve the learning of students and the lives of teachers and school leaders.

I wondered about the rise to gurudom and the moral obligation of the academic celebrity figure last year when, at AERA, I saw a panel in which four educational heavy hitters—Andy Hargreaves, Michael Fullan, Linda Darling-Hammond and Diane Ravitch—advocated for the moral imperative of educational research and practice. They spoke of lifetime journeys of work intended to make the world a better and more just place. I wondered at the time about how much an early career academic can be brave and resistant in their work, as they try to build a career amid the performative pressures of the academe. Can only the guru, free from institutional performativities and the financial pressures often associated with early career academia, say what they really want to say and do the work and writing they really want to do?

I don’t think experts in education are dangerous. We need expertise and people willing to commit their lives and work to making sense of and making better the world of education and learning. But in a world where teachers and school leaders are busy racing on the mouse wheels of their own performative pressures, we need to figure out ways to support and facilitate sceptical and critical engagement with research. Even those who are highly influential and highly admired need to have their work engaged with closely and critically. The danger comes when experts become so guru-fied that the words they use become part of an unthinking professional vernacular, used by educators who haven’t looked behind the curtain or beneath the book cover.

Is formative assessment overvalued?

Call me late to the party, but last night I was surprised to see this tweet from Alfie Kohn stating that formative assessment is overvalued. I agree with his latter comment that data to see if students are improving, or have improved, are worthless until we’ve asked ‘improved at what?’, but I don’t understand the connection between the two parts of the tweet. My hunch is that my understanding of formative assessment in practice is different to Kohn’s. In this post I’ll explain my own take on formative assessment.

(Disclaimer – I understand that a tweet is limited in its 140-character form. I’m using my understanding of the tweet as a jumping-off point for this post.)

From the seminal 1998 paper of Paul Black and Dylan Wiliam, ‘Inside the black box’, to subsequent work by these authors, and others, formative assessment as an evidence-based, rigorous feedback process is well-established.

Feedback can be defined as information provided by an agent regarding aspects of performance or understanding (Hattie & Timperley, 2007). Wiliam (2016) notes that anyone (teacher, learner, peer, parent) can be an agent of feedback, and that the most powerful agent of feedback is likely to be the student who takes responsibility for their own learning.

The purpose of feedback, according to Hattie and Timperley (2007) is to reduce the discrepancy between current and desired understanding. Information is used by students or teachers for improvement in an interactive dialogue between teacher and learners so that learners can become more expert and more responsible in guiding and furthering their own learning (Black & Wiliam, 2010). The interactivity, and the activity, are important. Teachers use feedback to make adjustments to planning and instruction. Students become active, empowered agents of their own learning as they self-assess, receive feedback, and act on it. Formative assessment is based in a belief that every learner can improve.

Feedback can have a significant positive influence on student learning and achievement (Stiggins & DuFour, 2009; Wiliam, 2011a, 2011b, 2016), but it is linked to emotions, relationships and environment; it can be accepted, modified, or rejected; and it can have positive or negative effects on performance (see Kluger & DeNisi, 1996).

Formative assessment involves feedback that is continuous; specific to goals, standards and task; descriptive rather than numerical or via grades; occurring within a learning context; and acted on by the learner (such as through self-assessment, re-doing the task, or outlining next steps).

It is information and interpretations from assessments, not numbers or grades, that matter (Hattie & Timperley, 2007). Numerical marks and grades operate as judgements, not aids to learning, and so students ignore comments where a mark is provided (Black, 2014; Black et al., 2004). Alfie Kohn argues against grades in this 2011 paper. Ruth Butler (1987, 1988) found that grades had no effect on achievement. Written comments based on the task, on the other hand, resulted in high levels of task involvement. Comments should identify what has been done well and what still needs improvement, and give guidance on how to make that improvement (Black et al., 2004; Wiliam, 2011b).

Feedback should not involve judgement of the person, positively or negatively. Butler’s research (1987, 1988) found that written praise had no effect on achievement, and Costa and Garmston (2003) note that learning cannot occur if a person feels threatened. While receiving feedback can be emotional, it should be designed to evoke cognition over emotion.

At a grass-roots level, teachers such as Starr Sackstein (2015, 2017) and Mark Barnes (2013, 2015) have been advocating for teachers to ‘throw out grades’, focusing instead on feedback practices such as conferencing, peer assessment, and self-assessment.

This previous blog post outlines some of my own practices around summative assessments, as well as a term I spent teaching Year 10 English without any marks or grades. I have recently developed my summative assessment feedback practices to ensure that students engage with their work more deeply before it is assessed, and then again once I have written comments, but before receiving their mark. In my classroom, formative assessment practices are a constant. My students and I continually engage with their work, curriculum standards, syllabus points, rubrics, and clear criteria for success, and set specific targets. These practices are entwined within a relational classroom environment of trust and challenge. Anecdotally, some of the best a-ha moments for my students come when they assess their own work against clear criteria, and come to their own realisations about how to improve. Over time, self-assessment becomes part of expected and lived practice for students in my classroom. This is not to say that I am a formative assessment expert; building formative opportunities takes ongoing teacher reflection, deliberate planning, and careful reading of the students.

Perhaps I have been embedding formative feedback practices into my teaching for so long that it seems obvious, but my thought on first seeing Kohn’s tweet was: of course we cannot look at data that might indicate improvement of learning without asking ‘improvement at what?’ Specific goals, standards, and comments on how and on what to improve, are part and parcel of the suite of practices of formative assessment.

Is formative assessment overvalued? I don’t think so. It is a fundamental way to improve learning, and to build the capacity of learners themselves.

References

Barnes, M. (2013). Role reversal: Achieving uncommonly excellent results in the student-centred classroom. Alexandria, VA: ASCD.

Barnes, M. (2015). Assessment 3.0: Throw Out Your Grade Book and Inspire Learning. Thousand Oaks, CA: Corwin.

Black, P. J. (2014). Assessment and the aims of the curriculum: An explorer’s journey. Prospects, 44, 487-501.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8-21.

Black, P. J., & Wiliam, D. (1998). Inside the Black Box: Raising standards through classroom assessment. Phi Delta Kappan, 80, 139-48.

Black, P. J., & Wiliam, D. (2010). A pleasant surprise. Phi Delta Kappan, 92(1), 47.

Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. Journal of Educational Psychology, 79(4), 474-482.

Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task‐involving and ego‐involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1-14.

Costa, A. L., & Garmston, R. J. (2003). Cognitive coaching in retrospect: Why it persists.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.

Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33.

Sackstein, S. (2015). Hacking assessment: 10 ways to go gradeless in a traditional grade school. Cleveland, OH: Hack Learning.

Sackstein, S. (2017). Peer Feedback in the classroom: Empowering students to be experts. Alexandria, VA: ASCD.

Stiggins, R., & DuFour, R. (2009). Maximizing the power of formative assessments. Phi Delta Kappan, 90(9), 640-644.

Wiliam, D. (2011a). Embedded Formative Assessment. Bloomington, IN: Solution Tree.

Wiliam, D. (2011b). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14.

Wiliam, D. (2016). Leadership for teacher learning: Creating a culture where all teachers improve so that all students succeed. Moorabbin, Australia: Hawker Brownlow Education.

Reflecting on the school leader

The bad leader is he who the people despise; the good leader is he who the people praise; the great leader is he who the people say, “We did it ourselves”. ~ Peter Senge, The Fifth Discipline

Part of my PhD literature review encompassed what makes effective school leadership, and effective leadership of change or reform in schools. If you don’t fancy savouring all 300-odd pages of my dissertation, the summary is that my literature search found that effective school leaders:

  • Develop shared vision;
  • Have high expectations and clear accountabilities;
  • Develop an environment of trust;
  • Empower others and allow them autonomy, space, and support to lead;
  • Solve complex problems;
  • Engage with the wider community;
  • Act as storyteller and sense-maker; and
  • Balance instructional and transformational leadership.

Many of these points are reflected in the Australian Professional Standards for Principals, which break school leadership down into the following components:

  • Leading teaching and learning;
  • Developing self and others;
  • Leading improvement, innovation, and change;
  • Leading the management of the school; and
  • Engaging and working with the community.

The two lists overlap, and each seems simple in its short-list nature (5 dot points! How hard can it be?), but looking closely at many of these aspects of school leadership quickly reveals the complexity of the mandate. On top of that, school leadership teams are under pressure from constant measures of their performance. Leadership itself becomes a quantified, evaluated performance. Meanwhile, on a daily basis leaders constantly code-switch as they move from the classroom, to the boardroom, to the parents’ committee, to the community event, to the performance management conversation, to the staff member or student who needs support.

My PhD study found that school leaders are constantly navigating internal, relational, and organisational identities. These complex and sometimes competing identities affect leaders’ experiences and decision making. The leaders in my study were moving, often deliberately and relentlessly, between leadership modes that were directive and empowering, hero and servant, visible and invisible.

Leading is a constant state of becoming and of identity work. Peter Gronn, in his 2003 book The new work of educational leaders: Changing leadership practice in an era of school reform, reminds us that leaders’ senses of who they are, and who they aspire to be, play a pivotal role in their engagement with their work. Having multiple leadership roles in my current school has meant that it is not only me who has had to shift my self-perceptions or identity enactments, but also my colleagues who have had to see me in new ways across my time at the school. Additionally, I have multiple, competing identities that exist simultaneously with my school identity; as parent, spouse, sibling, daughter, researcher. Boundary spanner and pracademic. Identities like plates precariously spinning atop spidery poles.

While Gronn suggests that individuals rework their perspectives in relation to their contexts, my PhD found that, while context does shape professional identity, individuals also choose their contexts to fit their identities. My leader participants stayed in schools that resonated with their senses of professional self, and left schools in which they did not feel aligned with organisational purpose and action. That is, school contexts shape leaders, and leaders both shape and choose their contexts.

Wellbeing is a real issue in school leadership, as reflected in the results of the longitudinal Riley study, which has found that Australian principals score lower than the general population on positive measures of wellbeing, quality of life, and mental health; but higher on negative measures such as stress, depression and sleeping trouble. Leaders need formal and informal support, as well as their own strategies for self-care and renewal. It might be that school leadership can be summarised in a series of dot points, but it is contextual, complex, and lonely. It is challenging and rewarding, exhausting and exhilarating.

Flashback Friday: The end of the PhD

The end of the PhD. I remember it well, or so my long line of PhD-finishing blog posts might seem to attest. These include (and this is just a selection) …

The end of a doctorate is a rollercoaster of emotion. One, it turns out, I had largely forgotten. While my blog posts act as bread crumbs back to those experiences, the feelings themselves have faded, softened and blunted over time.

Today, I was reminded.

I still connect with the ‘DocVox’ Voxer (voice-to-voice messaging app) group that helped support me through my PhD. This is a group of mostly doctoral (PhD and EdD) candidates from the USA, plus a couple of us from Australasia. I figure staying in the Voxer group despite having finished the PhD helps me to give back by continuing to support those who are still on their journey. It was via this group that I was today reminded of the visceral nature of the last bit of the PhD.

This morning a candidate from the US was Voxing about the blind panic they were feeling as they near dissertation submission. As I Voxed a response, I tried to reassure the person that their experience was normal. I recalled how in the last months of my PhD I had brutal insomnia. I clenched my jaw in my sleep despite chomping magnesium before bed to try and calm myself down and slow the mania of my obsessive mind. When I did sleep, I had nightmares, a recurring one of which was that I died and my almost-but-not-yet-finished PhD never saw the light of day, but languished, unexamined and unpublished. As I spoke, tears sprang to my eyes and my voice cracked. Some of that emotion returned in an intense flash. Wow, I thought, I didn’t think I was very affected by my experience. I was reminded as I spoke of the isolation of those moments, ones I didn’t really talk about because despite being surrounded by family, friends and colleagues, it didn’t seem something they would understand.

There are times in the PhD when everyone thinks you must be finished by now but you know you have so far to go, and times when it seems you should feel happy but instead you feel strange and empty. It’s a weird, emotional and quite a lonely time.

*                                    *                                    *

It’s almost 13 months since I was doctored. That moment was a glorious one. I awoke in Washington DC, after attending and presenting at the American Education Research Association (AERA) Conference. I had met a number of my academic heroes, as well as colleagues I knew only through Twitter, and others I met at sessions or in the epically long queue at Starbucks. I had nailed the presentation about my research and spent an hour in the corridor afterwards fielding questions and discussion. One of these discussions carried over to lunch and an ongoing professional connection. I’d had a great conference and was in edu-nerd heaven. It was the perfect moment for doctoring.

So, the day after AERA closed, I awoke in my Dupont Circle Airbnb apartment and checked my email, to find a ‘Congratulations, Doctor Netolicky’ email confirming the conferment of my PhD. I whooped, I shrieked, I clapped. I cried. I fist pumped. I felt overwhelmed and triumphant.

It was my last day in DC and I floated on rainbow-fairy-floss-cloud-nine as I swanned around the city in the magnificent sunshine. I was on my own, so I took this selfie (below) to remind myself of that elation. The iPhone snap mightn’t look like much to anyone else, but whenever I see it, it catapults me back to that moment of pure joy. Unadulterated I-am-now-Dr-Me exhilaration.

Now I have the luxury of being a pracademic, part school leader-teacher-practitioner, part early-career-scholar-researcher. During the PhD, finishing the doctorate always felt like an ending, but as I look back I can see that it was a beginning. I am now able to luxuriate more serenely in the oasis of academic writing, and to enjoy the gentle challenge of scholarly collaboration and conversation. And to apply my doctoral experience to my daily work.

The emotions fade, but it turns out they’re still there, in memory and deep in the bowels of the iPhone camera roll.

DC doctor selfie

Running the PISA race

The education culture of performativity is wrapped up in notions of measurement. How do we measure student success, teacher performance, effective school leadership, and successful education policy? How do we know which school systems are successful and why they are successful? How can we tease out and understand causes of performance, such as the influence of social and economic factors, the system, the school, and the teacher? These are perennial education questions, and ones which continue to become more and more important in a globalised world in which countries, schools, and teachers can be compared, and in which there is an ever-increasing weight of accountability.

Standardised testing is a central issue in this neoliberal education context. Individual schools have their own approaches to measurement, such as the one I describe in this blog post. In Australia we have NAPLAN and WACE. There is currently talk of a national Phonics Check in the early years, such as that used in the UK. Internationally we have TIMSS, PIRLS and PISA.

Steven Lewis and Anna Hogan have shown how oversimplified reports of international testing measures can contribute to oversimplified ‘fast policy’. As Marten Koomen points out, systems should respond to international testing measures but these systems are complex. Stewart Riddle and Bob Lingard wrote that looking at a single country’s PISA ranking is useless; rather we need to carefully disaggregate the data and consider social and economic factors, and differences between states, schools and groups.

This week I received in the mail a copy of new book The global education race: Taking the measure of PISA and international testing, by Sam Sellar, Greg Thompson and David Rutkowski. (Full disclosure: I got a complimentary copy.) It is a pocket rocket at about A5 size and 99 pages. In the Foreword, David Berliner and Pasi Sahlberg question the value often ascribed to PISA, noting that PISA tests are linked to the social conditions as well as school systems and that “when the race to the top gets tougher … curriculum narrows and children suffer” (p.ix).

The book demystifies the workings of PISA, using the extended metaphor of the ‘race’ as a way to make sense of what PISA is, how it works, how it might be used, and how it shouldn’t be used. The authors make their explanations accessible without ignoring the complexities of standardised international testing on a large scale. The book is at once matter of fact and told with a wry sense of humour (as a reader I’m a sucker for references to Monty Python and 1980s arcade games). The book rails against the commodification of simple solutions to solve complex problems. As I explored satirically in this He-Man inspired post (speaking of 80s references) there are plenty of edu salespeople hoping to profit from the pressures of accountability pressing in on schools and teachers. This book, however, is about helping those on the ground to understand the complexities, inner workings and possibilities of PISA.

What sets Sellar, Thompson and Rutkowski’s book apart from other literature is its ability to engage with complexity in an accessible way; to explain clearly without simplifying; and to avoid binaries and polarising divides while acknowledging differing perspectives. They note, for instance, that PISA has been innovative, carefully developed, and judiciously administered, but that such a big project will undoubtedly suffer from technical issues and limitations. They note that PISA is an assessment of select content areas of one sample on one day, but also argue that transparent standardised tests like PISA can be a useful tool for understanding social systems. They explain validity in understandable ways and show how countries can use PISA data responsibly.

The international examples help the book to be relevant to people in OECD countries around the world. For Australian readers like myself there are some gems, such as that PISA performance has become an end in itself, evidenced by the national target of improving Australia’s PISA ranking by 2025 (articulated in the 2013 Education Act). The authors call this move “astounding”. Their tongues are firmly in their cheeks when they state on page 76 that: “The aim of reversing the trend of declining PISA scores seems to be to improve PISA scores through intensifying those policies that have not worked so far. A bold move.” A bold move, indeed, and one that Simon Birmingham might want to reconsider.

The authors also recognise the desire of policy makers, educators and the media to understand PISA and to glean the most important messages it can tell us. They warn, however, about over-attribution of causality, when correlations become causal claims, pointing to the way the media and policy makers often use PISA to point to particular factors as being the cause of high or declining PISA performance. I’ve written about the dangers of policy moves like performance pay for teachers, and the authors have warnings to offer their readers about the negative effects of pairing standardised testing with punitive accountability regimes, and of governments desperately scrambling to ‘win’ against other countries. Run your own race, they argue.

The global education race presents an important challenge to policy makers and educators alike: to develop informed communities willing and able to engage in discussion of how educational measurement, including system-level measurement tools, can be judiciously used to inform policy and practice. The kind of shift they suggest is a challenge. It will require ministers, media and educators to take a non-divisive, sense-making and collaborative approach of seeking to understand, and of deep, thoughtful engagement with data and with one another.

Engaging the aesthetic

vignettes from home

It is perhaps when our lives are at their most problematic that we are likely to be most receptive to beautiful things. ~ Alain de Botton, The Architecture of Happiness

Aesthetics is concerned with appreciation – of nature, of art, of that which we can experience through our senses. As a philosophy it explores how we perceive and experience beauty. There are studies that have looked at how aspects of aesthetics influence people’s lives. For instance, this one on how the colour of room walls impacts students’ behaviour and learning performance in classrooms. Or this one that investigates the impact of hospital aesthetics—such as light, fresh air, nature, colour, sounds, smells and art—on patient wellbeing and recovery.

Like art, which is a culturally-embedded conversation over time, aesthetics is knitted with the fabric of society and culture. Anderson (2014) shows that while harmony and unification have often been seen as important aspects of interior design, this focus can curb individualism and lead to uniformity. She describes the Cult of Beauty of the 1870s and 1880s as “discriminating eclecticism guided by artistic sensibility” (p.345). At this time the homeowner became, according to Edmond de Goncourt, a décorateur or metteur-en-scène; a ‘scene-setter’; an artistic creator of spaces.

In the 19th century, colours and objects were linked to class, social standing and education. Partly as a reaction to mass production of objects and vividly-coloured synthetic fabrics, brightness and shininess were considered garish and distasteful, while subdued secondary or tertiary colours were seen to reveal distinguished taste. With the rare, the exotic and the expensive seen as ‘good taste’, decorating the home was saturated with inflections of societal, cultural and racial superiority.

Aesthetic discourses and disputes continue today. For example, in 2015 the owner of a mansion in Queensland was ordered to undertake an ‘aesthetic overhaul’ after it was found that the architecture was a copy of a unique neighbouring house. This year, a woman was taken to the London high court for painting her Kensington terrace in ‘garish’ candy stripes. People continue to care about the aesthetic experience.

Kyle Chayka argues that current aesthetic tropes perpetuate cultural and social divides, describing the ubiquity of reclaimed wood, Edison bulbs, and refurbished industrial lighting (what he calls ‘AirSpace’) as providing ‘familiar, comforting surroundings for a wealthy, mobile elite, who want to feel like they’re visiting somewhere ‘authentic’ while they travel, but who actually just crave more of the same.’ Aesthetics, as it becomes globalised via social media sharing, continues to promote uniformity and entangle with social inequities.

Technology has changed notions of beauty, as well as who can access it. Now, the well-worn patina of an antique rug can be achieved via polypropylene and technology. Found treasures can be upcycled or new leather carefully distressed. A throw on the end of a bed should be artfully flung not neatly folded; it should appear luxurious without seeming to try too hard. Style appears just-so as if by accident.

These days anyone with an Instagram or Pinterest account is an aesthete. The interwebs are full of endless aesthetic noise, constant bombardments of staged and judiciously curated pictures in which the everyday person has become the composer of artistic, filtered images that show snapshots of life. Aesthetics is democratised and commodified in new ways. Influential Instagrammers make money by posting products in carefully cropped snaps. Chompoo Baritone’s photo series shows how real and imperfect details are often omitted in order to create a social media image of beauty. Lindahl & Öhlund (2013) argue that using images on social media is part of identity marketing and developing a personal brand, and that this is limited and fake, as well as nuanced and expressive. They point to social media aesthetics shaping identities, especially through imitation. As life imitates art, so identity imitates Instagram. Social media allow aesthetics to be accessible across social divides, but also to be manipulated. The aim of uniqueness drowns in a sea of uniformity. There is at once aesthetic individualism and an aesthetic echo chamber of groupthink (or is that groupaesthetics?).

Yes, there are social, cultural, and technological complexities of aesthetics. But in a world in which we are often obsessed by performance, measurement, fast everything, multi-tasking, and efficiencies, it’s worth immersing ourselves in the aesthetic of the real, as opposed to the virtual, world. Walking barefoot on grass. Wrapping palms of hands around a warm mug of tea. Watching the sun rise. Listening. Smelling the (actual, paper) pages of a book, feeling them between the pads of fingertips, hearing the swooshing noise they make as they turn. Sinking a vinyl disc onto a record player. Painting. Sewing. Tinkering. Looking and actually seeing. Breathing slow and deep.

bookshelf mise-en-scène

As I’ve been nesting in my new home, I’ve been pulling tactile objects out of boxes and placing them on shelves. Cutting flowers from my garden, drinking coffee to the sunrise song of local birds, letting my eyes wander over vignettes in nooks and corners. And it’s been giving me pretty big doses of contentment, even while I’m aware of the first world nature of my collections – objets d’art from exotic travel locations, international textiles, inherited antiques and collectibles, lots of books. I know these are the accoutrements of a privileged life, and yet they tell me stories and bring me joy.

Maybe it’s my Fine Art background or my love of the weird and wonderful, but aesthetics have always been important to me. It causes my husband no end of annoyance that the first thing I like to do on moving into a new home is to hang pictures and place ornaments. (We have moved together seven times, not counting house-sitting or being ‘in between’ homes when we’ve moved interstate or overseas). In any home – whether in a tiny rented London apartment full of Argos goods, or an owned home in Australia, big or small – the aesthetic quality of my surroundings has helped to ground me. As well as providing experiences of colour, texture and light, the way we shape our surroundings encapsulates a story about ourselves and can provide a safe or stimulating place for us to burrow, create, or connect.

References

Anderson, A. (2014). Harmony in the Home: Fashioning the “Model” Artistic Home or Aesthetic House Beautiful through Color and Form. Interiors, 5(3), 341-360.

Caspari, S., Eriksson, K., & Nåden, D. (2011). The importance of aesthetic surroundings: A study interviewing experts within different aesthetic fields. Scandinavian Journal of Caring Sciences, 25(1), 134-142.

Lindahl, G., & Öhlund, M. (2013). Personal branding through imagification in social media: Identity creation and alteration through images (dissertation). Stockholm University.

Yildirim, K., Cagatay, K., & Ayalp, N. (2015). Effect of wall colour on the perception of classrooms. Indoor and Built Environment, 24(5), 607-616.

The oasis of writing

Sometimes we need an immersion in a cooling, calming place of our choosing. That might involve turning off our devices, turning away from social media, turning towards what nourishes us. It might be sitting in silence, or playing music loud. It might be the catharsis of working with our hands, or the release of letting them rest. It might be solitude or connection, work or play, stillness or movement, mindful or mindless.

School is currently out in Western Australia, and while I am working, I have been taking time out across the break to bathe in oases of sorts. I’ve been on a brief holiday with my family, pottered around the house, seen friends and indulged in another haven of mine: academic writing.

Those of you who write for a living or are in the throes of a PhD (Oh, the unicorn-dancing-in-a-champagne-waterfall highs! Oh, the despairing bottom-of-the-dark-pit lows!) might roll your eyes or baulk at writing as an oasis. But after a term of working full-time in an exciting but challenging newly-formed role in a school, selling a house, buying a house, moving house, parenting my two lovely children, and trying to maintain relationships with family and friends, I was ready for a break from the relentlessness. From feeling like the mouse on the wheel, full of urgency and repetitive motion. Not only that, but both social media and real life have had their share of challenges lately. Academic writing has been a welcome and nurturing reprieve; simultaneously mental work and a mental break. Academic writing continues to be like my PhD, which I sometimes managed to think of as a holiday from all-the-other-things, or intellectual me-time, although without the weighty pressure or looming examination. Papers and chapters are more bite-size and more varied, and pleasingly always at different stages; just as one becomes difficult, another is coming together or being accepted.

Of course academic writing is not easy or necessarily enjoyable. With it comes challenge, struggle, sometimes brutal feedback. It helps that the acwri I’m doing at the moment is writing I want to do. I’m engaged, interested, motivated, intrigued. I’m learning, growing, pushing at the boundaries of what I know and can do. Academic writing allows me to extend myself in different ways to my school role.

Some of this writing is solo, but I’m also writing papers and chapters collaboratively, something still pretty new to me. Perhaps the collaboration is the coolest part because working with others takes me out of my usual groove, my usual ways of thinking and writing. It gets me engaging with others’ words and these spur my words on. Our words are like gifts from a science fiction world; they shapeshift and take on different lives as they are passed back and forth between authors.

This kind of writing and collaboration is somewhere for a writer to luxuriate. Nestle in. Be cocooned by the writing while at the same time deliciously confronted by it. I brace for feedback but at the same time allow myself to be vulnerable and to be shaped. To read unfamiliar theory, try alternate approaches, or to tinker with new ways of theorising, researching and writing. To have one or more other writers to generate and energise.

It’s cool. It’s fun. It’s a welcome distraction from the daily rush of work during term time and the barrage of angry educators slinging accusations at one another on Twitter (thank goodness for my arguing on EduTwitter bingo card!). This holiday break I’ve worked on a solo-authored journal paper and a collaborative chapter so far. I’ve got one more collaborative chapter to look at over the next few days. I’m looking forward to it. Like a cup of tea at the end of the day after the kids have gone to bed, for my pracademic self, straddling as I do the worlds of school and academia, academic writing can be a moment of ‘aaaaaahhhh’, of indulgence, of me-time.

On cognitive load

I’ve been thinking recently about cognitive load theory (CLT), a theory founded by John Sweller in the 1980s. Bear with me. I’m not intending to use seductive sounding terms like ‘cognitive architecture’; or to suggest that I am an expert on CLT; or to delve into discussions about intrinsic, extraneous and germane cognitive load; or to articulate the problems with self-ratings of perceived mental effort. This is more of a loose layperson’s pondering around the effects of the influence of new information on working memory.

CLT posits that human working memory cannot process many new elements at any one time. A couple of weeks ago I moved house and the resulting chaos had me realising the effects of putting a heavy load of novel information onto the working memory. Despite the mundanity of the challenges of moving into a new home (whitegoods don’t fit, furniture doesn’t work spatially, boxes crowd in threateningly, kids don’t sleep well, the house makes strange noises), in the first week at our new place I left my yoga clothes at home once and left my phone at home twice. I was constantly struggling to remember where I had to look to find plates, cling film, toiletries, members of my family. I had no sense of routine or stability.

For me, the mental work of existing somewhere new, without the automaticity that comes with entrenched habit (or, as cognitive load theorists might call it, cognitive schemata in my long term memory) was immense and intense. I felt that I was living in a fog, and existing at about 40% of my usual capacity. The simplest of tasks were arduous, time consuming, and took what seemed like excessive cognitive effort. My husband asked me what was wrong with me; I knew that the relocation had taken my working memory beyond its capacity to cope. I was moving as through wet concrete. I felt displaced.

Now, learning a new house isn’t the same as learning new, complex, domain-specific skills (although I could talk long and hard about the gurgling of the fishpond interrupting sleep, the mental effort required to drive in the right direction home from work, and the impossibility of finding a sensible place for everything in a new kitchen). No doubt there were aspects of my experience that were environmental and affective as well as cognitive. Yet, the disorder and discombobulation I felt in my first week in my new house were a stark reminder of what students might feel when confronted with new content in a classroom with which they are not yet familiar, or with a skill that they might approach without the appropriate embedded prior knowledge and automation required to succeed.

As Greg Thompson has recently blogged (channelling Derrida’s student Bernard Stiegler), writing (like this blog post) can construct a mental prosthesis, a kind of corporeal residue of an experience that, left to the memory, would fade in intensity over time. Unlike Greg in his story of being concussed in Banff, I will have no physical remnants of moving house, nor any of the entertainment value of the story. No doubt soon the uneasiness will fade into that vague unnoticed feeling of being at home in instinctive motion.

In this post on the doctorate I reflect that:

Once we have learned something, we cannot always remember what it was like to not know it, making it difficult to teach or help someone. By (b)logging my writing memories as they happen, perhaps I can archive my not-so-good-at-academic-writing self. Reflecting-on-writing by writing-about-writing – in a kind of meta-writing – helps me to document my academic writing journey. … blogging helps me to have a Hansel-and-Gretel breadcrumb trail back to my less capable self.

As someone becomes more expert, they often ‘black box’ their expertise, as Pamela Hinds explains in her 1999 paper ‘The Curse of Expertise’. Experts are unable to accurately predict the time and difficulty novices need to complete a task. Intermediate learners, Hinds finds, are more helpful for novices as they still remember and understand the problems of being a beginner. This is something I wonder about in terms of academia as well as teaching. Do doctoral supervisors ‘black box’ the PhD or EdD experience? Are they able to break down the steps of the doctorate for their students, or are veteran professors too far removed from the struggle and journey of the neophyte researcher? In a classroom, do teachers expert in their subjects have the capacities to break down the content and skills into accessible enough elements for struggling learners? Can an expert coach break down the steps of coaching once they have internalised the philosophies, knowledge, and processes? Once the work of the mind is internalised and automated, much mindfulness and precision are needed if we are to teach others. Expertise may be a curse, but my house move has reminded me of the curse of the beginner. I yearn for repetitious automation.

So, as I use the daily practice of living in my new house as a way to build a long term memory schema, I am beginning to relax. Nothing yet feels automatic or fluid—and I still feel the newness and unfamiliarity of my surroundings—but I know that at some point I will forget the uneasy, cognitively prickly effort that came with moving house. I’ll happily float through the new place on auto-pilot, even in the dark of night or the first sleepy moments of the morning.

E4L and the value of dissent

I find it ironic that, just after a blog post in which I reflected that blogging often feels like shouting into the void, a recent post on this blog has received a robust and ongoing response, as well as plenty of rich conversation, online and in my immediate context.

I wrote earlier this month about my ponderings and cautions around the Evidence for Learning Toolkit (based heavily on the UK’s Education Endowment Foundation Toolkit) currently gaining traction in Australian education circles. I felt compelled to share my thinking as the E4L Toolkit is connected with some Australian heavy hitters. It is powered by Social Ventures Australia and the Commonwealth Bank. It is advised by prominent academics from a range of Australian universities, including John Hattie who is both a champion of meta-analysis and the equivalent of an education household name. Its advisors also include high-level representatives from the Australian Council for Educational Research (ACER), the Australian Council for Educational Leaders (ACEL), the Australian Curriculum, Assessment and Reporting Authority (ACARA), various independent schools associations, and various State education departments. This who’s-who is all the more reason, I think, for those in the education community to look carefully at the Toolkit and its adoption across the Australian edu-landscape.

This week, John Bush, Associate Director of Education at Social Ventures Australia and part of the leadership team of Evidence for Learning, wrote a blog post for the E4L website (the first of a series). In it, he responds specifically to some of the comments I made in my post.

John points out that my post was part of “a flurry of public comment across Australia and the UK in the value and risks of meta-analysis and synthesis of meta-analyses in education research.” Although I didn’t realise it at the time, my post did come in on the crest of a wave. Published the same day (UK time) was this article in the Times Educational Supplement, in which Professors Terry Wrigley and Gert Biesta call the EEF Teaching and Learning Toolkit misleading and unhelpful. There was also this response by the EEF addressing the concerns around, and risks of, its Toolkit.

A few days later, Rachel Buchanan posted about the need for a critical perspective when we talk about using an evidence base in education. I was also made aware (thank you, Twitter) of this post from David Didau from last month, which questioned the objectivity of the EEF Toolkit. He has previously cogitated on the limitations of meta-analysis and effect sizes as measures in education.

In my recent post, I linked to a couple of other thoughtful blogs on the limitations of meta-analysis in educational research, one by Jon Andrews and one by Gary Jones. It’s also well worth listening to both Dan Haesler’s comments, and Cameron Malcher’s interview with E4L’s Tanya Vaughan and Janet Clinton, on this recent episode of the Teachers’ Education Review podcast.

So, amid this flurry, I was delighted when John Bush began his own considered response within a complex field: the use of evidence to inform teaching, in order to most positively impact on student learning and achievement.

Despite John’s explanations of the padlock rating system, and bearing in mind the many reservations about meta-analysis, I’m not confident in E4L’s five-padlock ‘security’ for interventions that are supported by “at least five robust and recent meta-analyses.” I was, however, relieved to read that E4L deliberately uses careful language in the Toolkit in order to facilitate nuanced understandings of the evidence it presents.

John writes that, “We do not envision the Toolkit as a resource that should dictate or direct professional decisions in schools. Instead, we hope school leaders and teachers will use it to start discussions with their peers and to help inform their professional judgement with research evidence.” These are important words, but we in education remain challenged by the tension between a desire for the simple fix and the reality of the complexities of our work. Careful language and a critical approach are important, but the appeal of the misleading media headline, the ranking on a league table, and the easy answer, remain. As a profession we need to push back against these, despite the performative culture in which we operate.

This ongoing public conversation reminded me of a podcast from Brand Newsroom in which the hosts Nic Hayes and Sarah Mitchell interview influential marketing consultant Jay Baer about his book titled Hug Your Haters. In it, Jay discusses the science of complaints. He says that praise is overrated. Praise feels terrific, he tells us, but teaches us nothing. Negative feedback and criticism, on the other hand, teach us everything; they are where learning and opportunity come from. Now, I wouldn’t classify myself as a hater, or a complainer, but I would probably be seen by those championing the E4L Toolkit as a dissenting voice.

Being publicly challenged, even gently, can be difficult, especially when we believe strongly that we are doing good work. I respect John Bush, and the folk at Evidence for Learning, for having the transparency and commitment to engage in some graceful disagreement. In doing so they are responding directly to the concerns of those like me who, as it turns out, weren’t blogging into a void after all. Rather, I was offering my perspective to a community of educators who seek to understand one another in order to best serve our students.

While I have moments of high cynicism and outright despair, it is blogversations like the one in which John and I are currently engaging that can help model publicly the kinds of conversations educators can and should be having. I remain cautious and critical about the Evidence for Learning Toolkit, especially in terms of the ways in which it might be adopted by educators in the busyness of their work, but I am open to developing my thinking and continuing the conversation.

Evidence For Learning in Australia

In the UK the Education Endowment Foundation (EEF) is championed by some as a tool for helping teachers, school leaders and schools to make the best decisions for their students, based on what research and evidence shows. Now in Australia, Evidence for Learning (E4L), powered by Social Ventures Australia and the Commonwealth Bank of Australia, is piggybacking on the EEF’s toolkit in order to provide an Australasian equivalent. It is advised by, among others, John Hattie, and is partnering with AITSL and with State education departments to map the toolkit to State education frameworks and the AITSL Professional Standards for Teachers and Principals.

Last year I spoke with John Bush, Associate Director of the Learning Impact Fund, about the toolkit, and this week I attended a breakfast workshop run by Dr Tanya Vaughan, Associate Director for the E4L toolkit and Honorary Fellow at the Melbourne Graduate School of Education (MGSE) at the University of Melbourne. As the Research Lead at my Australian school, I was keen to hear more about how it was progressing and what it is offering Australian schools.

The aims of the E4L Toolkit

Tanya framed the toolkit as an instrument for helping great practice become common practice. E4L aspires to make accessible, and develop the rigour of, evidence of what works and why in education, in order to make a difference to learners. That is, it aims to build, share and use evidence to support better decision-making in schools, in turn leading to better outcomes for students.

The E4L toolkit is free and unrestricted in order to provide all schools with access to evidence of what works best in education, regardless of budget or postcode. This, Tanya explained, will help to address the barriers for teachers engaging with research:

  • Shortage of time;
  • Overload of information; and
  • Insufficient contextualised information for practice.

I would add that much educational research is behind a pay wall in journals inaccessible to non-researchers, or in very expensive books that aren’t affordable for many schools. Tanya was adamant that “front line professionals are the heart and soul of evidence-based education practice”, and that E4L endeavoured to improve communication between professionals and researchers, teachers and ‘the evidence’. This connection between educational research and practice is one to which I am especially committed.

What does the E4L Toolkit look like?

The E4L effect size league table’s Top 5 edu-practices

At first glance, the E4L toolkit shows a set of effect-size league tables of teaching practices, each showing – via symbols – the average cost of implementation, the ‘evidence security’ of the claim, and the average months’ worth of learning impact.

Visitors to the toolkit can drill down into the site. Clicking on a single practice such as ‘feedback’ reveals summaries addressing the following questions: What is it?; How effective is it?; How secure is the evidence?; What are the costs?; and, What should I consider? Clicking further into ‘References’ reveals the studies that sit behind this practice, with abstracts. Some practices additionally have an Australasian research summary.

Tanya was clear that the toolkit presents averages. In fact, it presents averages of averages, or more accurately meta-meta-analyses. While Tanya advocated for mixed methods – including talking to leaders, teachers and students – most of what the toolkit presents are syntheses of meta-analyses and randomised controlled trials (often considered the ‘gold standard’ of educational research).

The lock rating symbols, showing apparent ‘security of evidence’ are based on the number of meta-analyses beneath the meta-meta-analysis. It is the notion of evidence security and the simplification of ‘what works’ to effect size league tables that has me feeling cautious about the toolkit and its potential use. In attempting to address education practitioners’ shortage of time to engage with research and the overload of research information out there, does E4L provide an oversimplified tool likely to be accepted uncritically by busy educators working in our schools?

What is meta-analysis?

Meta-analysis is a statistical technique for combining standardised effect sizes, each typically calculated as the experimental group mean, minus the control group mean, divided by the population standard deviation. Simpson (2017) gives us this description of what happens:

“Individual studies report quantitative measures of the outcomes of particular interventions; meta-analysts collect studies in a given area, convert outcome measures to a common metric and combine those to report an estimate which they claim represents the impact or influence of interventions in that area. Meta-meta-analysis then takes the results of meta-analyses, collected in broader fields, and combines those estimates to provide a rank ordering of those fields which make the most difference.”
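For the layperson, the effect-size calculation behind all of this can be sketched in a few lines of Python. The study figures below are invented purely for illustration; real meta-analyses apply further corrections and weightings:

```python
def effect_size(experimental_mean, control_mean, standard_deviation):
    """Standardised mean difference: (experimental - control) / SD."""
    return (experimental_mean - control_mean) / standard_deviation

# Hypothetical study: intervention group averages 78 on some outcome measure,
# the control group averages 72, and the standard deviation is 10.
d = effect_size(78.0, 72.0, 10.0)
print(d)  # 0.6
```

The same number can mean quite different things depending on which standard deviation is used and how the outcome was measured, which is part of the apples-and-oranges problem discussed below.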

Simpson’s paper, released in January this year, challenges analogies between evidence-based practice in medicine and education. Treatments in medicine, he argues, are often standard and well-specified, with agreed outcomes which are relatively easy to measure. Education is more nuanced, complex and contextual.

Simpson invokes Eysenck’s (1984) notion of comparing apples with oranges, when he points out that meta-analyses often do not compare studies with the same comparisons, measures and ranges of participants. He contends that aggregated effect sizes are more likely to show differences in research design manipulation than in effects on learners. Bloggers such as Jon Andrews, in this post, and Gary Jones, in this one, have teased out the limitations of meta-analysis as method in educational research. Gary insists that “if teachers and school leaders wish to use effect sizes generated by research to help prioritise interventions, then it is necessary to look at the original research”, rather than relying on simplified lists. Educators need to look behind the curtain.

Snook et al. (2009) argue that when averages are sought or large numbers of disparate studies amalgamated, as in meta-analyses, the complexity of education and of classrooms can be overlooked. They also point out that any meta-analysis that does not exclude poor or inadequate studies is misleading or potentially damaging. Terhart (2011) argues that by focusing on quantifiable measures of student performance, meta-analyses ignore the broader goals of education.
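The averaging that Snook et al. warn about can be sketched crudely: pooling effect sizes from very different (hypothetical) studies yields one tidy league-table number while hiding the spread beneath it. A real meta-analysis would weight each study by its precision; this unweighted mean is an assumption for illustration only:

```python
# Hypothetical effect sizes from five disparate studies of the same practice
effect_sizes = [0.9, 0.1, 0.7, -0.2, 0.5]

pooled = sum(effect_sizes) / len(effect_sizes)   # the single headline number
spread = max(effect_sizes) - min(effect_sizes)   # the variation it conceals

print(round(pooled, 2))  # 0.4
print(round(spread, 2))  # 1.1
```

The pooled value suggests a moderately effective practice, yet the underlying studies range from apparently harmful to strongly positive; the average alone cannot tell an educator which conditions produced which result.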

Meta-analysis is singled out by Wiliam (2016) as an unsuitable technique for identifying the relative effectiveness of different approaches to student learning. He states that:

“Meta-analysis is simply incapable of yielding meaningful findings that leaders can use to direct the activities of the teachers they lead.”

Wiliam’s PowerPoint presentation from last year’s ResearchED conference in Washington—titled ‘Why teaching isn’t—and probably never will be—a research-based profession (and why that’s a good thing)’—presents the problems with meta-analyses for deciding ‘what works’ in education. In the presentation, Wiliam reminds us that everything works somewhere and nothing works everywhere. He encourages us instead to ask: Under what conditions does this work?

Possibilities and reservations

In her E4L Toolkit presentation this week, Tanya Vaughan advocated for trusting the profession to be thoughtful and intelligent and to engage with the research literature that sits behind the seductive league tables of the E4L toolkit. Her call for mixed methods research—for qualitative and quantitative to “play together”—resonated with me. Many methods of research have something to offer the field, and all are limited.

My hunch is that the E4L toolkit has something to offer educators in Australia (as a starting point rather than an answer sheet), and I can see the significant work that has gone into producing it, as well as the good intentions behind it. Yet I have my reservations. I worry that an uncritical acceptance of the toolkit’s content, alluring in its apparent simplicity, will result in an impoverished understanding of ‘what research says’. We are in danger of giving education research lip service, or wading in shallow pools of evidence. The use of meta-meta-analyses as the basis for the toolkit has the potential to over-synthesise limited quantitative data to the point of distorting original findings, and to ignore the limitations, qualities and complexities of the synthesised studies.

Everyone from the profession to the media is likely to translate these effect-size league tables into seemingly authoritative soundbites of ‘what works’ without taking the time to consider what might work where, for whom, and under what conditions. If Australian organisations and schools are to embrace the E4L Toolkit as part of their pursuit of having a positive impact on learners and more systematic bases on which to make decisions, I hope they do so with a cautious step and a critical eye.

References

Eysenck, H. J. (1984). Meta-analysis: An abuse of research integration. The Journal of Special Education 18(1), 41–59.

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 1-17.

Snook, I., O’Neill, J., Clark, J., O’Neill, A. M., & Openshaw, R. (2009). Invisible learnings? A commentary on John Hattie’s book: Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New Zealand Journal of Educational Studies, 44(1), 93-106.

Terhart, E. (2011). Has John Hattie really found the holy grail of research on teaching? An extended review of Visible Learning. Journal of Curriculum Studies, 43(3), 425-438.

Wiliam, D. (2016). Leadership for teacher learning: Creating a culture where all teachers improve so that all students succeed. Moorabbin, Australia: Hawker Brownlow Education.