In an era of Silicon Valley techno-utopianism, the age-old dream of a universal language, contemplated by thinkers from Dante to Sylvia Pankhurst, has found a new and vibrant home. Yet far from fostering international harmony, real-time speech-to-speech translation technology risks tearing society apart. Philip Seargeant explores why the dream of transcending the curse of Babel persists so doggedly, and how it might lead the modern world into a dark future.
A decade after the end of the First World War, the social rights campaigner Sylvia Pankhurst wrote a short book outlining her recipe for a brighter future for humankind. A key ingredient, she argued, was a universal language. According to Pankhurst, the existence of a single world language would
play its part in the making of the future, in which the peoples of the world shall be one people: a people cultured and kind, and civilized beyond today’s conception, speaking a common language, bound by common interests, when the wars of class and of nations shall be no more.
By finding ways to bridge the gap between different languages, by creating more precise and unambiguous ways of expressing ourselves, we would be able, so the conviction has it, to overcome the impulse for war, to improve science, to strengthen our relationships with each other, and ultimately to reach our full potential as rational beings. Yet a hundred years on, Pankhurst’s blueprint for global harmony has yet to come to pass. Globalization might have transformed societies around the world, and communications technologies shrunk our perception of that world – but universal communication, and the world peace it would supposedly bring with it, remains elusive.
But then maybe there’s a problem with the basic premise on which Pankhurst’s argument was built. There’s a strong counter-argument to be made that a universal language would not in fact be a catalyst for universal harmony, but something that would instead exacerbate inequality across the globe, privilege certain groups while marginalising others, and ultimately pose a deadly threat to the cultural diversity that’s such a strength for our species.
The dream of a universal language can be traced back to the myth of the Tower of Babel. It picked up momentum particularly in the Middle Ages – Dante, for example, in his De vulgari eloquentia, tried to discover a route back to the ‘original language’ of humankind – and subsequently became a subject of fascination for many scholars during the Renaissance and the Enlightenment. Throughout this history, one of the major motivations for establishing such a language has been the idea that it will lead to a more concordant and amicable world.
Possibly the most famous candidate for a constructed universal language was the brainchild of Ludwik Zamenhof who, back at the end of the nineteenth century, wrote a book detailing his invention, which he published under the name of Doktoro Esperanto. Zamenhof’s motivation for developing Esperanto came from his own experiences of growing up as a Polish Jew on the outskirts of the Russian Empire where violent cultural and ethnic prejudice was rife. One of the reasons for all the conflict he saw around him, he believed, was that ‘difference of speech is a cause of antipathy, nay even of hatred, between people’. The aim of his language was to create a neutral form of communication – something which didn’t ‘belong’ to any one community or culture – which would help people recognise their shared humanity rather than obsess about their differences.
While the high-profile schemes to manufacture a universal language may have waned somewhat in recent decades, the twenty-first century has seen the tech industry enthusiastically embrace the idea of universal communication. The focus today is less on manufacturing a single language which will allow for universal communication, and more on harnessing the power of artificial intelligence and mobile technologies to create a world in which immediate and frictionless translation is always and everywhere available.
The proposals and debates taking place in today’s tech sector closely parallel those of the universal language projects from previous eras. Mark Zuckerberg’s stated vision for Facebook, for example, is that it ‘was built to accomplish a social mission – to make the world more open and connected’. Through sharing our thoughts and feelings with each other via social media, he contends, we can create a culture of openness which will lead ‘to a better understanding of the lives and perspectives of others’. There’s much in the language here that’s reminiscent of Sylvia Pankhurst a century ago.
Zuckerberg’s current plans centre mostly on the metaverse – the virtual environment in which people can congregate and interact via avatars of their real selves. Part of the architecture of this space, integrating machine translation and natural language processing, will allow for ‘instantaneous speech-to-speech translation across all languages’. You speak in the language of your choice and your interlocutor hears what you say in the language of their choice. Conversation in the metaverse would be like a perfectly dubbed film. Similar approaches are already being used by videoconferencing platforms, while out in the real world, the same capabilities are being built into earbuds which mediate between the foreign language you’re listening to and the familiar language you hear.
Zuckerberg has described the ability to transcend the curse of Babel as a ‘superpower that people have always dreamed of’. It’s the vision of a world handed down to us straight from science fiction, where the practical difficulties of cross-language communication don’t interfere with the main plot. If it comes to pass as predicted, the era of the global village will have truly arrived.
We can be forgiven, however, for harbouring a certain scepticism about whether Silicon Valley solutions are likely to succeed in uniting the world where the schemes devised by Zamenhof and others failed. One of the reasons why this vision may do more to divide than unite is that unequal access to important resources invariably leads to inequality more generally. In this case, the digital divide between wealthy communities and developing communities – in terms of differential access to hardware, training and skills – will mean that Zuckerberg’s ‘superpower’ of communicating across language barriers will privilege those already living in geopolitically powerful communities. Those who will most benefit from the technology, at least in the first instance, will be those who already enjoy the benefits of economic comfort.
More fundamentally, the way in which the tech sector conceives of language here appears narrow, if not utterly naïve.
That is, language isn’t simply a means for communicating information and opinion from one person to another. It plays a far more extensive role in our lives. It’s part of our identity, our personality, the manner in which we relate to those around us. Our accent and dialect, the pitch of our voice, our lexical preferences, our writing style – all these contribute to the experience we have of interacting in the world. And it’s this heterogeneity which allows for flexibility, adaptability and creativity in communication, and is integral to the cultural diversity which underpins human civilization.
At the same time, however, this variety often gets used as fodder for acts of discrimination – for people mocking certain accents, or judging others according to arbitrary and stereotyped beliefs. And this has been used throughout history as an instrument of political control, stigmatising or suppressing whole languages as a way of marginalising or dehumanising the communities which speak them. To take just one current example, language education policies in Xinjiang, China, decree that almost everything is taught in Mandarin, the official language, producing what the UN calls ‘forced assimilation’ into mainstream culture for children from the Uyghur community.
All of which creates a potential dilemma for the application of the universal-communication technology we’ve been talking about. On the one hand, the technology could preserve the distinctiveness with which individuals speak, but in doing so risk reproducing patterns of discrimination in society. On the other, it could be used to transform an individual’s speech into a standardised ‘prestige’ variety, thus obscuring all the markers of place, race, class and educational background – markers which are essential properties of our authentic language use.
From a technological perspective both options are perfectly feasible. But from a human point of view, they’re likely to result in very different visions of the world. And what we mustn’t lose sight of in all this is the fact that ‘difference of speech’, while it can be abused as an excuse for prejudice, is also one of the reasons why language is such a central part of what it means to be human. Diversity, variety and fluidity create the wellspring for our culture. Blithely sacrificing this in favour of a more ‘efficient’ form of global communication is very unlikely to create the utopian outcome we’re being promised. That’s not to say that an AI-facilitated universal translator can’t contribute to society – but that in designing such technology we need to find ways of preserving those qualities which make human language so human.