Elon Musk plans to build his Tesla Bot, Optimus, so that humans “can run away from it and most likely overpower it” should they ever need to. “Hopefully, that doesn’t ever happen, but you never know,” says Musk. But is this really enough to make an AI safe? The problem of keeping AI contained, and only doing the things we want it to, is a deceptively tricky one, writes Roman V. Yampolskiy.
With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology. A common theme in Artificial Intellgence (AI) safety research is the possibility of keeping a super-intelligent agent in a sealed hardware so as to prevent it from doing any harm to humankind.
In this essay we will review specific proposals aimed at creating restricted environments for safely interacting with artificial minds. We will evaluate feasibility of presented proposals and suggest a protocol aimed at enhancing safety and security of such methodologies. While it is unlikely that long-term and secure confinement of AI is possible, we are hopeful that the proposed protocol will give researchers a little more time to find a permanent and satisfactory solution for addressing existential risks associated with appearance of super-intelligent machines.
___
Covert channels are not anticipated by the confinement system designers and are not intended for information transfer at all, for example if the AI has control over the processor cooling fan it can use it to send hidden signals encoded as Morse code.
___
1. The Artificial Intelligence Confinement Problem
Join the conversation