Did you know that Sweden has the second highest rate of rape in the world, behind only Botswana? According to official statistics from the UN Office on Drugs and Crime (UNODC), in 2015 Sweden recorded 57 occurrences of rape per 100,000 people, compared to 39 in the USA and just 3 in India. Sweden’s rape rate has shot up in recent years, during the same period that the country has been accepting large numbers of non-European migrants and asylum seekers.
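Figures like “57 per 100,000” are just a raw count scaled by population. A minimal sketch of that arithmetic, using hypothetical round numbers rather than the actual 2015 counts:

```python
def rate_per_100k(recorded_offences, population):
    """Convert a raw count of recorded offences into a rate per 100,000 people."""
    return recorded_offences / population * 100_000

# Hypothetical illustration: 5,700 recorded offences
# in a population of 10 million gives a rate of 57.
print(rate_per_100k(5_700, 10_000_000))  # 57.0
```

Note that the same rate can come from very different counting practices, which is exactly the trap discussed below.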
Claims like these are all wrong. Why? Because the UNODC data relates to “recorded” offences. Proportionally, Swedish police recorded many more occurrences of rape in 2015 than Indian police did. This is not the same as saying that more incidents actually occurred. Similarly, the fact that Swedish police recorded more incidents in 2015 than they did a decade earlier does not necessarily mean that Sweden has become a more dangerous place to live. In fact, this period coincides with a substantial broadening of the legal definition of rape in Sweden.
The mistake Trump, Farage and countless others have made turns on one of the most important principles in statistics: measurement. To understand what a number really says about the world, we need to know exactly what is being counted. Neglect of this principle is the source of a large fraction of the misleading statistics bandied about in the media and in politics.
The modern world runs on numbers. Vital information about almost any issue – from the environment, to health, to education, to immigration – is communicated in the language of statistics. This means that if universities want to prepare graduates to be informed citizens – for example, to participate in the immigration debate without being swayed by bogus facts like the Swedish rape figures – then we need to make sure they know which end of a percentage is which.
This is especially true for students of the social sciences. They are the next generation of politicians, journalists and thinktank employees. Yet our social science graduates remain underequipped to face an increasingly demanding statistical landscape.
After four years working on the Q-Step initiative, which seeks to change social science training in the UK, I am convinced that solving this problem requires a fundamental rethink of what undergraduate statistics training is for.
Most traditional statistics courses cover much the same ground. They aim to turn students into expert quantitative researchers, working through the deeper maths underlying statistical tests, along with supplementary techniques and diagnostics that rarely appear in published research. These courses are so focused on detail that many working social scientists would be out of their depth if asked to teach even an introductory social statistics module.
I understand the motivation behind this. We don’t want to teach our students the random hodgepodge of techniques and concepts with which many of us muddle through our careers. We want them to have a clear and complete understanding, built from the foundations up.
But we’ve been trying to do this for decades and it just hasn’t worked. Instead we have run course after course that students hate. We’ve turned out generations of graduates who can remember sitting in labs pressing buttons in statistical software packages like SPSS, but who never really learned how to connect statistics to important issues in the real world.
Undergraduate statistics training needs to be flipped on its head to reflect the fact that making the connection between statistics and the real world is the only thing that really matters. Instead of using random examples to illustrate statistical principles, we need to start with the examples – the real-world issues we actually care about – and demonstrate how numbers can help us better understand them.
If our students graduate with absolutely no idea how to run a regression in SPSS, but with the ability to look at a number in the newspaper and say whether it’s big or small, then we will be in a much better place than we are now.