In order for artificially intelligent machines to learn, we must increasingly express ourselves in a reduced language, simplifying the complex range of human expression into something AI can understand, writes David Berry.
The imitation game, better known as the Turing test, was devised by Alan Turing in 1950. As one of the early pioneers of computing, he argued that if a computer could successfully imitate a human, it might thereby be thought of as intelligent. Whilst it remains a controversial test (after all, merely imitating a human does not necessarily prove that the computer is intelligent), it has certainly set a standard by which we might measure artificially intelligent computers. For example, it helps if computers can talk to us as if they were human, so that we can use them as useful assistants in our everyday lives. But it also implies a moment of deception: in order for us to trust that it understands what we mean, the computer is often designed to appear more human than it actually is.
The first example of a computer program that could convincingly talk back to the user was ELIZA, developed by Joseph Weizenbaum in the mid-1960s and inspired by Turing's idea. The program allowed a human user to converse with a script called DOCTOR, which imitated a Rogerian psychotherapist largely by reflecting the user's own statements back as questions. The result was a computer system that people found remarkably easy to trust with their personal secrets, even though it was built on deception. This surprised Weizenbaum, who grew increasingly sceptical of artificial intelligence because of the ease with which it could deceive humans. He wrote a book, Computer Power and Human Reason (1976), setting out the dangers of allowing computers to become the yardstick of intelligence and of replacing humans with machines. Whilst widely read at the time, the book did little to hold back the tide of attempts to instil human-like behaviours into machines.
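To make the trick concrete, the following is a minimal sketch of the kind of pattern-matching and pronoun-reflection technique that DOCTOR relied on. It is not Weizenbaum's actual implementation (ELIZA was written in MAD-SLIP, with far richer decomposition and reassembly rules than this); the patterns and canned responses here are invented purely for illustration.

```python
import random
import re

# Pronoun reflections that turn the user's statement back on them,
# in the manner of a Rogerian therapist ("I am sad" -> "you are sad").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few hand-written patterns; the real DOCTOR script had many more,
# each with ranked keywords and decomposition/reassembly rules.
PATTERNS = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*) mother(.*)", re.I),
     ["Tell me more about your mother."]),
]

# Fallbacks when nothing matches: content-free prompts to keep talking.
DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    """Echo part of the input back inside a canned template, if a pattern matches."""
    for pattern, responses in PATTERNS:
        match = pattern.match(user_input.strip())
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    # e.g. "Why do you think you are worried about your job?"
    print(respond("I am worried about my job"))
```

Even a toy like this shows why the effect was so unsettling: the program understands nothing, yet the reflected phrasing is enough to sustain the user's impression of a sympathetic listener.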