Brain-computer interfaces from companies like Neuralink and Synchron promise unprecedented cognitive enhancement. But emerging research suggests that boosting specific mental functions may come with hidden costs: improving memory might impair decision-making, while increasing focus could reduce creativity. Aristotle's concept of virtue as a balance between extremes suggests we should be wary of these tradeoffs and treat optimal cognition as a system, not a superpower.
Neurotechnology is now a major industry. Companies such as Neuralink, Synchron, and BrainGate are racing to develop brain-computer interfaces (BCIs) for medical purposes as well as for everyday consumers, holding out the promise of cognitive enhancement. The potential BCI-driven enhancements are plentiful, from better memory and learning to heightened sensory perception and emotional capacities, but their use also raises challenging new philosophical questions.
For example, if a BCI turns out to be successful at dramatically improving our memory, learning capacities, or mindfulness, should we use it? Philosopher Julian Savulescu has argued that we have a moral obligation to cognitively enhance human beings. Nick Bostrom and Anders Sandberg argue, more descriptively, that we will use cognitive enhancements: “nature knows best”, they claim, and it is our human nature to try to improve ourselves. In another piece, they further assert that “Most cognitive functions… are intrinsically desirable. Having a good memory or a creative mind is normally valuable in its own right.” But what if there can be too much of a ‘good’ thing? What if some traits can be over-enhanced? What if there are unforeseen cognitive side-effects to enhancement? Cognitive functions aren’t independent variables that can be optimized in isolation; they’re part of an interconnected system, where changes to one element may have cascading effects on others.
Cognitive enhancement, the improvement or amplification of cognitive capacities, can be achieved through a range of means, including pharmaceuticals, genetic modification, digital technologies, and even social systems such as education. BCIs, as a kind of digital ‘neurotechnology’, achieve this through a direct communication pathway between the brain and a computing device. “Direct” here means unmediated by the body, language, perception (such as eye movements), action (the electrophysiological activity of the body), or any other form of signaling outside the brain.
Some BCIs are already familiar and commonplace. Electroencephalography (EEG) records the brain’s electrical signals through electrodes placed on the surface of the scalp. Functional Magnetic Resonance Imaging (fMRI) detects changes in cerebral blood flow and uses these as a measure of neuronal activity. Other BCIs don’t just read brain activity, but actually influence signals in the brain as well. Transcranial Magnetic Stimulation (TMS), for example, can be used as a tool for measuring cortical excitability, but it can also magnetically stimulate small regions of the brain.
Another relevant distinction is a device’s level of invasiveness. Noninvasive devices, like TMS, operate externally and produce only a shallow magnetic field, so they can affect only surface regions of the brain. Invasive devices operate from within the skull and therefore usually require some kind of neurosurgery. By working under the skull (or ‘intracranially’), invasive techniques such as electrocorticography (ECoG) and deep brain stimulation (DBS) can obtain a cleaner signal and can stimulate deeper regions of the brain with greater precision and accuracy.