AI is turning us into machines

The imitation game

In order to make ourselves understood by artificially intelligent machines, we must increasingly express ourselves in a reduced language, simplifying the complex range of human expression into something AI can understand, writes David Berry.

The imitation game, better known as the Turing test, was devised by Alan Turing in 1950. As one of the early pioneers of computing, he argued that if a computer could successfully imitate a human, it might thereby be thought of as intelligent. Whilst it remains a controversial test - after all, merely imitating a human does not necessarily prove that the computer is intelligent - the Turing test has certainly set a standard by which we might measure artificially intelligent computers. For example, it helps if computers can talk to us as if they were human, so that we can use them as assistants in our everyday lives. But it also implies a moment of deception: in order for us to trust that it understands what we mean, the computer is often designed to appear more human than it actually is.

The first example of a computer that could convincingly talk back to the user was the ELIZA program, developed by Joseph Weizenbaum in the 1960s and inspired by Turing's idea. The program allowed a human user to talk to a sub-program called DOCTOR, which imitated a Rogerian psychotherapist by simply reflecting the user's own statements back at them. The result was a computer system that people found remarkably easy to trust with their personal secrets, even though it was built on deception. This surprised Weizenbaum, who grew increasingly sceptical of artificial intelligence because of the ease with which it could deceive humans. In his book Computer Power and Human Reason he explained the dangers of allowing computers to become the yardstick of intelligence and of replacing humans with machines - a book that, whilst widely read at the time, did little to hold back the tide of attempts to instil human-like behaviours into machines.
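To make the trick concrete, here is a minimal sketch in Python of an ELIZA-style exchange. It is not Weizenbaum's original program; the rules and phrasings here are invented for illustration, but the mechanism - pattern matching and reflection, with no understanding at all - is the same.

import re

# Illustrative reflection rules, loosely in the spirit of DOCTOR.
# These patterns and responses are invented for this sketch, not
# taken from Weizenbaum's original script.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def doctor(utterance: str) -> str:
    # Reflect the user's words back, giving the illusion of understanding.
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(doctor("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?

The program holds no model of the person or the conversation: it is the human who supplies all the meaning.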

Today we are familiar with the plethora of computer assistants on our phones and smart speakers, such as Alexa or Siri, which, as conversational agents, are able to do a number of things for us, like checking the weather, turning on the lights or playing our favourite song. As they learn about our preferences, they are able to adapt their behaviour and conversation to better fit our needs, or at least that is the theory. In practice, many of these conversational interfaces are not much more sophisticated in their handling of human language than the early ELIZA system. They are easily caught out by relatively simple human assumptions about language, such as context remaining relatively stable in a conversation, or the way in which an object or subject might be referred to. Humans, by contrast, cope extremely well with uncertainty and are able to fill in the blanks in conversations where cultural assumptions, habits or slang are used. Humans usually have no trouble with digressions, jokes and small-talk which might be incidental to the main focus of a conversation.

For computers, however, these uses of language pose huge headaches. Being hugely literal in their understanding, they find it hard to put aside the superfluous aspects of a conversation, or to hold the context when many different things are being discussed. In addition, these systems can be remarkably bad at hearing at all, often missing the beginning of sentences or mishearing the names of people or things. Anyone who has attempted to send a message whilst driving via a voice interface will be well aware of the annoying multitude of ways in which these systems can mangle a request. As with ELIZA, most artificial intelligence developers have found that the best way to hide these shortcomings is to deceive the human into thinking that the artificial intelligence is smarter than it is. Of course, these designers do not think of this as deception - rather, they think of it as clever design or a pragmatic solution to a hard problem - but the reality is that using these AIs often requires the human to learn how to become less human and more like the machine.

These glitches in conversational interfaces reveal the special human ability of empathy, which we deploy in conversation and social life. We will repeat our request, try to find common ground, or search for another way of saying the same thing. When people talk to each other they are often generous in sharing a developing understanding, and forgiving of misunderstandings and confusions. Conversational AIs, in contrast, are exhausting in their neediness and frustrating in their limited abilities, requiring us to do the work of shoring up their confusions and mistakes.


This points to a very interesting aspect of our relationship with artificially intelligent machines: because of their limitations, they struggle to keep up with us, and therefore encourage us to change the way we interact with them. This can happen subtly, as when Siri asks you to repeat your request, or much more imperatively, as when Alexa tells you which words it will respond to. Either way, we adapt ourselves to the machine, and in doing so we express ourselves less in our full creative human capacity and instead narrow our expressiveness to the limited communications of the machine.

This communicational incommensurability is a useful analogy for the way in which artificial intelligence is able to disguise its lack of ability by making us fit ourselves to the requirements of the machine, rather than the other way around. The interface between human and computer opens only a simple channel of communication, as demonstrated by the visual, windows-based interfaces we are so familiar with today: through simple point-and-click modalities, our full expressive potential is compressed into simple commands the computer can follow and execute. Whilst this in no way represents the full potential offered by a computer, it does allow a consistent, stable, trustworthy relationship to develop between the human user and the machine. We are forgiving of the pedestrian nature of an interface because it remains enormously powerful, even when it is extremely clumsy and slow to get things done. We shape ourselves to the machine because that is the best way to get the machine to do what we want.

This instrumental relationship works both ways. The danger is that as we shape ourselves to the requirements of the machine to get things done, we also slowly undo the full creative, expressive potential of our bodies and language, and learn to work within the narrow lines laid down by the computer. In a sense, over time we begin unconsciously to perform the very narrow limits of computational communication in our everyday lives.


Artificial intelligence researchers are fully aware of this, and have used it as an approach to structuring the relationship between human and machine. In essence, programmers design what are called "grammars of action", which prescribe the structure of interaction with a system so that the action or communication can be captured. A computer prefers simple grammars of action such as "Bob posts message", "Jane likes message", "Bob sends a reply". In these examples, drawing the signal from the noise is made easier because the action required is specified and clear. This is why Siri is very good at telling you what the weather is, but not so good at answering simple questions about the people in a room. Because of the frustration of trying to get answers, we learn not to ask the complex questions - "is the Prime Minister a moral person?" - and instead shape our questions into the simple grammars of action that the device reinforces back to us. Whilst the illusion of the all-knowing AI soon wears off, we adapt ourselves to the simple mode of the machine. This can be seen on social media, where the platforms have learned to optimise and simplify the way in which we can engage with each other, often into simple actions such as like, heart, thumbs up or smile. We learn to express ourselves in a reduced language, simplifying the complex human range of expression into the grid that computers overlay onto us.
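As a rough illustration (the event names and types here are hypothetical, not drawn from any real platform's code), a grammar of action can be sketched in Python as a small, fixed vocabulary of permitted verbs:

from dataclasses import dataclass
from enum import Enum

# A hypothetical grammar of action: the system only "understands"
# these few verbs, so every human gesture must be squeezed into one.
class Action(Enum):
    POST = "posts message"
    LIKE = "likes message"
    REPLY = "sends a reply"

@dataclass
class Event:
    actor: str
    action: Action

# Whatever a user does is captured as one of the permitted actions;
# anything outside the grammar is simply invisible to the system.
log = [Event("Bob", Action.POST), Event("Jane", Action.LIKE), Event("Bob", Action.REPLY)]
for event in log:
    print(f"{event.actor} {event.action.value}")
# -> Bob posts message
# -> Jane likes message
# -> Bob sends a reply

Because every interaction must pass through this narrow vocabulary, the record of behaviour it produces is perfectly regular, which is exactly what makes it so amenable to prediction.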

In this way the simplicity of the intelligent machine serves to discipline us into being much simpler beings, working the computer not as we might wish to, but within the limited range of operations that it makes possible. This has spillover effects in our everyday lives, where our learnt behaviour around machines translates into a desiccated relationship with others, whether in terms of the simplified ideas of how friendships work that we absorb from social networking, or in terms of the range of emotions we are taught to express, which often involve simplified but intense expressions of outrage, anger or liking. But there is also a deeply economic reason for seeking to reduce human life to a spreadsheet of emotion and communication: it makes it easier for the machine to predict our behaviour, by seeing how we react to things as a simple pull-down menu of responses. With prediction there is then the possibility of guessing what our actions will be. And it is a small step from deception to manipulation; with these predictive technologies, many of the platform companies that surround our lives today have already begun to make that transition.

We live in an age where we are building automated intelligences, to be sure, but as Weizenbaum warned us decades ago, we run the risk of also living in an age of automated deception where being human is not to be human, all too human, but hardly human at all.
