The Past and the Future of Artificial Intelligence
Kenneth Cukier
Is AI humanity's savior, or do its threats outweigh its benefits? Best-selling author and Data Editor of The Economist Kenneth Cukier shows how to prepare for the next phase of human evolution.
About the Course
Mankind may be on the verge of its greatest achievement: the creation of intelligence equal to or superior to its own. Will we be gods to our creations, or will they replace us? In this course, Kenneth Cukier, Data Editor of The Economist, traces the origins of AI to examine what it promised its early pioneers, how it works, and what its consequences may be. From the 'Terminator' scenario to Google Translate, Cukier describes how AI will change not only our odds of survival but also our understanding of life, freedom, and compassion.
In this course you will learn:
- Why the development of intelligent machines will soon lead to 'superintelligences'.
- How early pioneers used neuroscience to try to map the complexity of the human brain.
- Why we stopped trying to teach machines, and got machines to teach themselves.
- The ways learning algorithms are at work in countless aspects of our daily lives.
- The disruption these algorithms will bring to jobs, policing, and surveillance.
- Why some of the most essential aspects of our humanity may become worthless.
- How the 'King Midas problem' anticipates the harm that comes from badly specified goals.
- How the 'Terminator problem' anticipates the threat to humanity from superintelligence.
- The difficulties of programming superintelligences capable of moral choice.
- What can be done to protect our species.
About the Instructor
Kenneth Cukier
Kenneth Cukier is The Economist's Data Editor and co-author of Big Data.
Course Syllabus
Part One: Counterfeiting a Human Mind
What is artificial intelligence, how does it work, and where is it headed? Has AI become indispensable in the modern world?
Part Two: The Rise of the Machines
Does AI irreversibly transform humanity's evolution for the worse, with amoral robots, regulation by algorithm, and little need for people?