Running the PISA race

The education culture of performativity is wrapped up in notions of measurement. How do we measure student success, teacher performance, effective school leadership, and successful education policy? How do we know which school systems are successful and why? How can we tease out and understand the causes of performance, such as the influence of social and economic factors, the system, the school, and the teacher? These are perennial education questions, and they grow increasingly important in a globalised world in which countries, schools, and teachers can be compared, and in which the weight of accountability keeps increasing.

Standardised testing is a central issue in this neoliberal education context. Individual schools have their own approaches to measurement, such as the one I describe in this blog post. In Australia we have NAPLAN and WACE. There is currently talk of a national Phonics Check in the early years, such as that used in the UK. Internationally we have TIMSS, PIRLS and PISA.

Steven Lewis and Anna Hogan have shown how oversimplified reports of international testing measures can contribute to oversimplified ‘fast policy’. As Marten Koomen points out, systems should respond to international testing measures but these systems are complex. Stewart Riddle and Bob Lingard wrote that looking at a single country’s PISA ranking is useless; rather we need to carefully disaggregate the data and consider social and economic factors, and differences between states, schools and groups.

This week I received in the mail a copy of the new book The global education race: Taking the measure of PISA and international testing, by Sam Sellar, Greg Thompson and David Rutkowski. (Full disclosure: I got a complimentary copy.) It is a pocket rocket at about A5 size and 99 pages. In the Foreword, David Berliner and Pasi Sahlberg question the value often ascribed to PISA, noting that PISA results are linked to social conditions as well as to school systems, and that “when the race to the top gets tougher … curriculum narrows and children suffer” (p.ix).

The book demystifies the workings of PISA, using the extended metaphor of the ‘race’ as a way to make sense of what PISA is, how it works, how it might be used, and how it shouldn’t be used. The authors make their explanations accessible without ignoring the complexities of standardised international testing on a large scale. The book is at once matter-of-fact and told with a wry sense of humour (as a reader I’m a sucker for references to Monty Python and 1980s arcade games). The book rails against the commodification of simple solutions to complex problems. As I explored satirically in this He-Man-inspired post (speaking of 80s references), there are plenty of edu salespeople hoping to profit from the pressures of accountability pressing in on schools and teachers. This book, however, is about helping those on the ground to understand the complexities, inner workings and possibilities of PISA.

What sets Sellar, Thompson and Rutkowski’s book apart from other literature is its ability to engage with complexity in an accessible way; to explain clearly without simplifying; and to avoid binaries and polarising divides while acknowledging differing perspectives. They note, for instance, that PISA has been innovative, carefully developed, and judiciously administered, but that such a big project will undoubtedly suffer from technical issues and limitations. They note that PISA is an assessment of select content areas of one sample on one day, but also argue that transparent standardised tests like PISA can be a useful tool for understanding social systems. They explain validity in understandable ways and show how countries can use PISA data responsibly.

The international examples make the book relevant to readers in OECD countries around the world. For Australian readers like me there are some gems, such as the observation that PISA performance has become an end in itself, evidenced by the national target of improving Australia’s PISA ranking by 2025 (articulated in the 2013 Education Act). The authors call this move “astounding”. Their tongues are firmly in their cheeks when they state on page 76 that: “The aim of reversing the trend of declining PISA scores seems to be to improve PISA scores through intensifying those policies that have not worked so far. A bold move.” A bold move, indeed, and one that Simon Birmingham might want to reconsider.

The authors also recognise the desire of policy makers, educators and the media to understand PISA and to glean the most important messages it has to offer. They warn, however, about over-attribution of causality, in which correlations become causal claims, noting how the media and policy makers often use PISA to single out particular factors as the cause of high or declining PISA performance. I’ve written about the dangers of policy moves like performance pay for teachers, and the authors have warnings to offer their readers about the negative effects of pairing standardised testing with punitive accountability regimes, and of governments desperately scrambling to ‘win’ against other countries. Run your own race, they argue.

The global education race presents an important challenge to policy makers and educators alike: to develop informed communities willing and able to engage in discussion of how educational measurement, including system-level measurement tools, can be judiciously used to inform policy and practice. The shift the authors suggest will not be easy. It will require ministers, media and educators to take a non-divisive, sense-making and collaborative approach: one of seeking to understand, and of deep, thoughtful engagement with data and with one another.