Why it’s time to stop worrying about the decline of the English language


The 21st century seems to present us with an ever-lengthening list of perils: climate crisis, financial meltdown, cyber-attacks. Should we stock up on canned foods in case the ATMs snap shut? Buy a shedload of bottled water? Hoard prescription medicines? The prospect of everything that makes modern life possible being taken away from us is terrifying. We would be plunged back into the middle ages, but without the skills to cope.

Now imagine that something even more fundamental than electricity or money is at risk: a tool we have relied on since the dawn of human history, enabling the very foundations of civilisation to be laid. I’m talking about our ability to communicate – to put our thoughts into words, and to use those words to forge bonds, to deliver vital information, to learn from our mistakes and build on the work done by others.

The doomsayers admit that this apocalypse may take some time – years, or decades, even – to unfold. But the direction of travel is clear. As things stand, it is left to a few heroic individuals to raise their voices in warning about the dangers of doing nothing to stave off this threat. “There is a worrying trend of adults mimicking teen-speak. They are using slang words and ignoring grammar,” Marie Clair, of the Plain English Campaign, told the Daily Mail. “Their language is deteriorating. They are lowering the bar. Our language is flying off at all tangents, without the anchor of a solid foundation.”

The Queen’s English Society, a British organisation, has long been fighting to prevent this decline. Although it is at pains to point out that it does not believe language can be preserved unchanged, it worries that communication is at risk of becoming far less effective. “Some changes would be wholly unacceptable, as they would cause confusion and the language would lose shades of meaning,” the society says on its website.

With a reduced expressive capacity, it seems likely that research, innovation and the quality of public discourse would suffer. The columnist Douglas Rushkoff put it like this in a 2013 New York Times opinion piece: “Without grammar, we lose the agreed-upon standards about what means what. We lose the ability to communicate when respondents are not actually in the same room speaking to one another. Without grammar, we lose the precision required to be effective and purposeful in writing.”

At the same time, our laziness and imprecision are leading to unnecessary bloating of the language – “language obesity,” as the British broadcaster John Humphrys has described it. This is, he said, “the consequence of feeding on junk words. Tautology is the equivalent of having chips with rice. We talk of future plans and past history; of live survivors and safe havens. Children have temper tantrums and politicians announce ‘new initiatives’.”

It is frightening to think where all this might lead. If English is in such a bad state now, what will things be like in a generation’s time? We must surely act before it is too late.


But there is something perplexing about claims like this. By their nature, they imply that we were smarter and more precise in the past. Seventy-odd years ago, people knew their grammar and knew how to talk clearly. And, if we follow the logic, they must also have been better at organising, finding things out and making things work.

John Humphrys was born in 1943. Since then, the English-speaking world has grown more prosperous, better educated and more efficiently governed, despite an increase in population. Most democratic freedoms have been preserved and intellectual achievement intensified.

Linguistic decline is the cultural equivalent of the boy who cried wolf, except the wolf never turns up. Perhaps this is why, even though the idea that language is going to the dogs is widespread, nothing much has been done to mitigate it: it’s a powerful intuition, but the evidence of its effects has simply never materialised. That is because it is unscientific nonsense.

There is no such thing as linguistic decline, so far as the expressive capacity of the spoken or written word is concerned. We need not fear a breakdown in communication. Our language will always be as flexible and sophisticated as it has been up to now. Those who warn about the deterioration of English haven’t learned about the history of the language, and don’t understand the nature of their own complaints – which are simply statements of preference for the way of doing things they have become used to. The erosion of language to the point that “ultimately, no doubt, we shall communicate with a series of grunts” (Humphrys again) will not, cannot, happen. The clearest evidence for this is that warnings about the deterioration of English have been around for a very long time.

In 1785, a few years after the first volume of Edward Gibbon’s The History of the Decline and Fall of the Roman Empire had been published, things were so bad that the poet and philosopher James Beattie declared: “Our language (I mean the English) is degenerating very fast.” Some 70 years before that, Jonathan Swift had issued a similar warning. In a letter to Robert, Earl of Oxford, he complained: “From the Civil War to this present Time, I am apt to doubt whether the Corruptions in our Language have not at least equalled the Refinements of it … most of the Books we see now a-days, are full of those Manglings and Abbreviations. Instances of this Abuse are innumerable: What does Your Lordship think of the Words, Drudg’d, Disturb’d, Rebuk’t, Fledg’d, and a thousand others, every where to be met in Prose as well as Verse?”

Swift would presumably have thought The History of the Decline and Fall, revered as a masterpiece today, was a bit of a mess. He knew when the golden age of English was: “The Period wherein the English Tongue received most Improvement, I take to commence with the beginning of Queen Elizabeth’s Reign, and to conclude with the Great Rebellion in [Sixteen] Forty Two.”

But the problem is that writers at that time also felt they were speaking a degraded, faltering tongue. In The Arte of English Poesie, published in 1589, the critic George Puttenham fretted about the importation of new, foreign words – “strange terms of other languages … and many dark words and not usual nor well sounding, though they be daily spoken in Court.” That was halfway through Swift’s golden age. Just before it, in the reign of Elizabeth’s sister, Mary, the Cambridge professor John Cheke wrote with anxiety that “Our own tongue should be written clean and pure, unmixed and unmangled with borrowing of other tongues.”

This concern for purity – and the need to take a stand against a rising tide of corruption – goes back even further. In the 14th century, Ranulf Higden complained about the state English was in. His words, quoted in David Crystal’s The Stories of English, were translated from the Latin by a near-contemporary, John Trevisa: “By intermingling and mixing, first with Danes and afterwards with Normans, in many people the language of the land is harmed, and some use strange inarticulate utterance, chattering, snarling, and harsh teeth-gnashing.”

That’s five writers, across a span of 400 years, all moaning about the same erosion of standards. And yet the period also encompasses some of the greatest works of English literature.

It’s worth pausing here to take a closer look at Trevisa’s translation, for the sentence I’ve reproduced is a version in modern English. The original is as follows: “By commyxstion and mellyng furst wiþ danes and afterward wiþ Normans in menye þe contray longage ys apeyred, and som vseþ strange wlaffyng, chyteryng, harrying and garryng, grisbittyng.”

For those who worry about language deteriorating, proper usage is best exemplified by the speech and writing of a generation or so before their own. The logical conclusion is that the generation or two before that would be even better, the one before that even more so. As a result, we should find Trevisa’s language vastly more refined, more correct, more clear and more effective. The problem is, we can’t even read it.

Hand-wringing about standards is not restricted to English. The fate of every language in the world has been lamented by its speakers at some point or another. In the 13th century, the Arabic lexicographer Ibn Manzur described himself as a linguistic Noah – ushering words into a protective ark in order that they might survive the onslaught of laziness. Elias Muhanna, a professor of comparative literature, describes one of Ibn Manzur’s modern-day counterparts: “Fi’l Amr, a language-advocacy group [in Lebanon], has launched a campaign to raise awareness about Arabic’s critical condition by staging mock crime scenes around Beirut depicting ‘murdered’ Arabic letters, surrounded by yellow police tape that reads: ‘Don’t kill your language.’”

The linguist Rudi Keller gives similar examples from Germany. “Hardly a week goes by,” he writes, “in which some reader of the Frankfurter Allgemeine Zeitung doesn’t write a letter to the editor expressing fear for the future of the German language.” As Keller puts it: “For more than 2,000 years, complaints about the decay of respective languages have been documented in literature, but no one has yet been able to name an example of a ‘decayed language’.” He has a point.

The hard truth is that English, like all other languages, is constantly evolving. It is the speed of the change, within our own short lives, that creates the illusion of decline. Because change is often generational, older speakers recognise that the norms they grew up with are falling away, replaced with new ones they are not as comfortable using. This shift is disorienting, and the discomfort it causes is translated into criticism and complaint. We tend to find intellectual justifications for our personal preferences, whatever their motivation. If we lived for hundreds of years, we would be able to see the bigger picture. Because when you zoom out, you can appreciate that language change is not just a question of slovenliness: it happens at every level, from the superficial to the structural.

Any given language is significantly reconfigured over the centuries, to the extent that it becomes totally unrecognisable. But, as with complex systems in the natural world, there is often a kind of homeostasis: simplification in one area can lead to greater complexity in another. What stays the same is the expressive capacity of the language. You can always say what needs to be said.


Frequently, these changes are unexpected and revealing. They shed light on the workings of our minds, mouths and culture. One common driver of linguistic change is a process called reanalysis. This can happen when a language is learned for the first time, when babies begin to talk and construe what they hear slightly differently from their parents. In the abstract, it sounds complex but, in fact, it is straightforward: when a word or sentence has a structural ambiguity, what we hear could be an instance of A, but it could also be an instance of B. For years, A has held sway, but suddenly B catches on – and changes flow from that new understanding.

Take the words adder, apron and umpire. They were originally “nadder”, “napron” and “numpire”. Numpire was a borrowing from the French non per – “not even” – and described someone who decided on tie-breaks in games. Given that numpire and those other words were nouns, they often found themselves next to an indefinite article – a or an – or the first-person possessive pronoun, mine. Phrases such as “a numpire” and “mine napron” were relatively common, and at some point – perhaps at the interface between two generations – the first letter came to be seen as part of the preceding word. The prerequisite for reanalysis is that communication is not seriously impaired: the reinterpretation takes place at the level of the underlying structure. A young person would be able to say “where’s mine apron?” and be understood, but they would then go on to produce phrases such as “her apron” rather than “her napron”, which older folk presumably regarded as idiotic.
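
For the programmatically minded, the rebracketing can be captured in a few lines of toy Python. The word list comes from the examples above; the rule itself is an illustrative cartoon of what hearers do, not a model of language acquisition:

    def rebracket(phrase):
        """Re-segment an article + n-initial noun, as a new learner might."""
        article, noun = phrase.split(" ", 1)
        if not noun.startswith("n"):
            return phrase
        # The hearer parses the noun's initial n as the end of the
        # preceding word: "a numpire" is heard as "an umpire", and
        # "mine napron" as "mine apron" (spoken "mine" already ends
        # in an n sound, so the extra n is simply absorbed).
        if article == "a":
            article = "an"
        return f"{article} {noun[1:]}"

    for phrase in ["a numpire", "a nadder", "mine napron"]:
        print(phrase, "->", rebracket(phrase))
    # a numpire -> an umpire
    # a nadder -> an adder
    # mine napron -> mine apron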

Another form that linguistic change often takes is grammaticalisation: a process in which a common phrase is bleached of its independent meaning and made into a word with a solely grammatical function. One instance of this is the verb “to go”, when used for an action in the near future or an intention. There is a clue to its special status in the way we have started saying it. We all inherit an evolutionarily sensible tendency to expend only the minimum effort needed to complete a task. For that reason, once a word has become a grammatical marker, rather than something that carries a concrete meaning, you do not need it to be fully fleshed out. It becomes phonetically reduced – or, as some would have it, pronounced lazily. That is why “I’m going to” becomes “I’m gonna”, or even, in some dialects, “Imma”. But this change in pronunciation is only evident when “going to” is grammatical, not when it is a verb describing real movement. That is why you can say “I’m gonna study history” but not “I’m gonna the shops”. In the first sentence, all “I’m going to”/“I’m gonna” tells you is that the action (study history) is something you intend to do. In the second one, the same verb is not simply a marker of intention, it indicates movement. You cannot therefore swap it for another tense (“I will study history” v “I will the shops”).
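
That restriction is precise enough to write down as a rule. Here is a minimal sketch in Python; the tiny verb list stands in for the grammatical knowledge a real speaker brings to bear, and is invented purely for illustration:

    # Reduce "going to" to "gonna" only when it marks intention
    # (followed by a verb), never when it describes real movement
    # (followed by a destination).
    VERBS = {"study", "eat", "leave", "win"}

    def reduce_gonna(sentence):
        words = sentence.split()
        out, i = [], 0
        while i < len(words):
            if (words[i] == "going" and i + 2 < len(words)
                    and words[i + 1] == "to" and words[i + 2] in VERBS):
                out.append("gonna")   # grammatical marker: reduce it
                i += 2                # skip over "going to"
            else:
                out.append(words[i])  # movement sense: leave it intact
                i += 1
        return " ".join(out)

    print(reduce_gonna("I'm going to study history"))  # I'm gonna study history
    print(reduce_gonna("I'm going to the shops"))      # I'm going to the shops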

“Will”, the standard future tense in English, has its own history of grammaticalisation. It once indicated desire and intention. “I will” meant “I want”. We can still detect this original English meaning in phrases such as “If you will” (if you want/desire). Since desires are hopes for the future, this very common verb gradually came to be seen simply as a future marker. It lost its full meaning, becoming merely a grammatical particle. As a result, it also gets phonetically reduced, as in “I’ll”, “she’ll” and so on.

Human anatomy makes some changes to language more likely than others. The simple mechanics of moving from a nasal sound (m or n) to a non-nasal one can make a consonant pop up in between. Thunder used to be “thuner”, and empty “emty”. You can see the same process happening now with words such as “hamster”, which is often pronounced with an intruding “p”. Linguists call this epenthesis. It may sound like a disease, but it is definitely not pathological laziness – it’s the laws of physics at work. If you stop channelling air through the nose before opening your lips for the “s”, they will burst apart with a characteristic pop, giving us our “p”.
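
Written as a rewrite rule, the effect is trivially small, which is part of the point. A toy version in Python, using spelling as a crude stand-in for articulation:

    import re

    # When the lips close for "m" and release straight into "s" or "t",
    # a "p" can pop out in between. ("Thunder" got its "d" the same way,
    # from the earlier "thuner", with the tongue rather than the lips.)
    def epenthesise(word):
        return re.sub(r"m([st])", r"mp\1", word)

    print(epenthesise("emty"))     # empty
    print(epenthesise("hamster"))  # "hampster", as many speakers say it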

The way our brain divides up words also drives change. We split them into phonemes (building blocks of sound that have special perceptual significance) and syllables (groups of phonemes). Sometimes these jump out of place, a bit like the tightly packed lines in a Bridget Riley painting. Occasionally, such cognitive hiccups become the norm. Wasp used to be “waps”; bird used to be “brid” and horse “hros”. Remember this the next time you hear someone “aks” for their “perscription”. What’s going on there is metathesis, and it’s a very common, perfectly natural process.
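
Metathesis, too, reduces to a single operation: two adjacent sounds swap places. A toy sketch in Python, using the historical examples above (the position index is supplied by hand; a real account would have to explain why particular swaps catch on):

    def metathesise(word, i):
        """Swap the sounds at positions i and i+1."""
        letters = list(word)
        letters[i], letters[i + 1] = letters[i + 1], letters[i]
        return "".join(letters)

    print(metathesise("waps", 2))  # wasp
    print(metathesise("brid", 1))  # bird
    print(metathesise("hros", 1))  # hors(e)
    print(metathesise("aks", 1))   # ask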

Sound changes can come about as a result of social pressures: certain ways of saying things are seen as having prestige, while others are stigmatised. We gravitate towards the prestigious, and make efforts to avoid saying things in a way that is associated with undesirable qualities – often just below the level of consciousness. Some forms that become wildly popular, such as Kim Kardashian’s vocal fry, although prestigious for some, are derided by others. One study found that “young adult female voices exhibiting vocal fry are perceived as less competent, less educated, less trustworthy, less attractive and less hireable”.

All this is merely a glimpse of the richness of language change. It is universal, it is constant, and it throws up extraordinary quirks and idiosyncrasies, despite being governed by a range of more-or-less regular processes. Anyone who wants to preserve some aspect of language that appears to be changing is fighting a losing battle. Anyone who wishes people would just speak according to the norms they had drummed into them when they were growing up may as well forget about it. But what about those, such as the Queen’s English Society, who say they merely want to ensure that clear and effective communication is preserved; to encourage good change, where they find it, and discourage bad change?

The problem arises when deciding what might be good or bad. There are, despite what many people feel, no objective criteria by which to judge what is better or worse in communication. Take the loss of so-called major distinctions in meaning bemoaned by the Queen’s English Society. The word “disinterested”, which can be glossed “not influenced by considerations of personal advantage”, is a good example. Whenever I hear it nowadays, it is being used instead to mean “uninterested, lacking in interest”. That’s a shame, you could argue: disinterest is a useful concept, a way (hopefully) to talk about public servants and judges. If the distinction is being lost, won’t that harm our ability to communicate? Except that, of course, there are many other ways to say disinterested: unbiased, impartial, neutral, having no skin in the game, without an axe to grind. If this word disappeared tomorrow, we would be no less able to describe probity and even-handedness in public life. Not only that, but if most people don’t use it properly, then the word itself has become ineffective. Words cannot really be said to have an existence beyond their common use. There is no perfect dictionary in the sky with meanings that are consistent and clearly defined: real-world dictionaries are constantly trying to catch up with the “common definition” of a word.

But here’s the clincher: disinterested, as in “not interested”, has actually been around for a long time. The blogger Jonathon Owen cites the Oxford English dictionary as providing evidence that “both meanings have existed side by side from the 1600s. So there’s not so much a present confusion of the two words as a continuing, three-and-a-half-century-long confusion.”


So what is it that drives the language conservationists? Younger people tend to be the ones who innovate in all aspects of life: fashion, music, art. Language is no different. Children are often the agents of reanalysis, reinterpreting ambiguous structures as they learn the language. Young people move about more, taking innovations with them into new communities. Their social networks are larger and more dynamic. They are more likely to be early adopters of new technologies, and to become familiar with the terms used to describe them. At school, on campus or in clubs and pubs, groups develop habits, individuals move between them, and language change is the result.

What this means, crucially, is that older people experience greater linguistic disorientation. Though we are all capable of adaptation, many aspects of the way we use language, including stylistic preferences, have solidified by our 20s. If you are in your 50s, you may identify with many aspects of the way people spoke 30-45 years ago.

This is what the author Douglas Adams had to say about technology. Adapted slightly, it could apply to language, too:

– Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
– Anything that’s invented between when you’re 15 and 35 is new and exciting and revolutionary.
– Anything invented after you’re 35 is against the natural order of things.

Based on that timescale, formal, standard language is about 25 years behind the cutting edge. But if change is constant, why do we end up with a standard language at all? Well, think about the institutions that define standard language: universities, newspapers, broadcasters, the literary establishment. They are mostly controlled by middle-aged people. Their dialect is the dialect of power – and it means that everything else gets assigned a lower status. Deviations might be labelled cool, or creative, but because people generally fear or feel threatened by changes they do not understand, they are more likely to be called bad, lazy or even dangerous. This is where the “standards are slipping” narrative moves into more unpleasant territory. It’s probably OK to deviate from the norm if you are young – as long as you are also white and middle-class. If you are from a group with fewer social advantages, even the forms that your parents use are likely to be stigmatised. Your innovations will be doubly condemned.

The irony is, of course, that the pedants are the ones making the mistakes. To people who know how language works, pundits such as Douglas Rushkoff only end up sounding ignorant, having failed to really interrogate their views. What they are expressing are stylistic preferences – and that’s fine. I have my own, and can easily say “I hate the way this is written”, or even “this is badly written”. But that is shorthand: what is left off is “in my view” or “according to my stylistic preferences and prejudices, based on what I have been exposed to up to now, and particularly between the ages of five and 25”.

Mostly, pedants do not admit this. I know, because I have had plenty of arguments with them. They like to maintain that their prejudices are somehow objective – that there are clear instances of language getting “less good” in a way that can be independently verified. But, as we have seen, that is what pedants have said throughout history. George Orwell, a towering figure in politics, journalism and literature, was clearly wrong when he imagined that language would become decadent and “share in the general collapse” of civilisation unless hard work was done to repair it. Maybe it was only conscious and deliberate effort to arrest language change that was responsible for all the great poetry and rhetoric in the generation that followed him – the speeches “I have a dream” and “We choose to go to the moon”, the poetry of Seamus Heaney or Sylvia Plath, the novels of William Golding, Iris Murdoch, John Updike and Toni Morrison. More likely, Orwell was just mistaken.

The same is true of James Beattie, Jonathan Swift, George Puttenham, John Cheke and Ranulf Higden. The difference is that they didn’t have the benefit of evidence about the way language changes over time, unearthed by linguists from the 19th century onwards. Modern pedants don’t have that excuse. If they are so concerned about language, you have to wonder, why haven’t they bothered to get to know it a little better?

Adapted from Don’t Believe a Word: The Surprising Truth About Language by David Shariatmadari, published by W&N on 22 August and available at guardianbookshop.co.uk. Also available as an unabridged audio edition from Orion Audio
