Algorithms have been used to make astronomical calculations, build clocks and turn secret information into code. But whatever they did, for millennia up until around the eighties, most algorithms were pretty simple. They took an input, followed a series of well-described steps and produced an output. Input, process, output: that’s all an algorithm is, and ever has been. Throughout history some have been more complicated than others, but how they turned an input into an output has generally been transparent and understandable, at least to the people who built and used them.
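To make that "input, process, output" idea concrete, here is a minimal sketch (not from the original text) of one of those old, transparent algorithms: a Caesar-style cipher of the kind once used to turn secret information into code. Every step is visible to anyone who cares to look.

```python
def caesar_encrypt(message: str, shift: int = 3) -> str:
    """Classic Caesar cipher: shift each letter a fixed number of places.

    Input: a message and a shift. Process: a few well-described steps.
    Output: the encoded text. Nothing about it is hidden from its user.
    """
    encoded = []
    for ch in message:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            encoded.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            encoded.append(ch)  # leave spaces and punctuation untouched
    return "".join(encoded)


print(caesar_encrypt("attack at dawn"))  # -> "dwwdfn dw gdzq"
```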
But at another time, in another place, I had begun to learn that a new species of algorithm had emerged. Memory had become far cheaper, computational power far greater, and data, of course, far more plentiful. This allowed algorithms, I learned, to take on a form very different from that of their forebears.
Jure Leskovec spent time at Facebook and as chief scientist at Pinterest before moving back to academia. We were sitting in his office in Stanford, which, like anything associated with tech in ‘the valley’, seemed to be expanding rapidly. As we spoke, clouds of hot, white dust drifted up past his window from drilling below.
___
"So I can train a machine learning algorithm to answer the question,'If I release you, will you commit another crime or not?'"
___
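As a rough illustration of what "training" means here, consider the sketch below. It is not the system Leskovec described, and every feature name and number in it is invented; the point is only that the algorithm is never handed the decision rules. It is handed past cases and their outcomes, and it works out a rule for itself.

```python
# A sketch of training on past cases. The features (prior arrests, age)
# and all the data are made up for illustration, not from any real system.
from sklearn.linear_model import LogisticRegression

# Each past case: [number of prior arrests, age at release], and whether
# that person went on to be re-arrested (1) or not (0).
past_cases = [
    [0, 45], [1, 38], [5, 22], [3, 27], [0, 60], [7, 19], [2, 33], [4, 24],
]
reoffended = [0, 0, 1, 1, 0, 1, 0, 1]

# The algorithm infers a rule from the examples rather than following
# steps a person wrote down.
model = LogisticRegression().fit(past_cases, reoffended)

# Ask it about a new, unseen case.
new_case = [[2, 30]]
print(model.predict(new_case))        # its answer: 0 or 1
print(model.predict_proba(new_case))  # and how confident it is
```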