Why AI must learn to forget

Machines with perfect memory would be dangerous

We think of our tendency to forget as a cognitive defect. When it comes to Artificial Intelligence, we could eradicate this human limitation by creating machines with infallible memories. But forgetting is in fact crucial, both for smoothing out social interactions and for the proper functioning of our memories. The same would be true of AI. Machine intelligence that never forgot would be both socially dangerous and cognitively inefficient, argues Ali Boyle.


A few months ago, I introduced myself to another philosopher only to discover that I’d met them before. We’d met at a conference, as it turned out, and had quite a long conversation over dinner. Even on being told this, I struggled to summon up the memory of our previous encounter. I remember being absolutely mortified. To make matters worse, I’ve already forgotten who this person was, so the mortification seems bound to be repeated.

All in all, I’d prefer it if I’d not forgotten and been saved the embarrassment.

Like most people, I use technology to offset my tendency to forget things. My calendar app reminds me about important birthdays. My to-do list app tells me what I need to get done each day. The timer on my phone has prevented me from burning several meals to a crisp. My apps never fail; they never forget.


The reliability of computerised memory seems like its greatest advantage over biological memory. So, when it comes to building artificial agents, we might think that the aim should be to capitalise on this advantage by making their memory systems as reliable as possible – that ideally, we should aim to build agents which never forget. It would seem odd to design a forgetful artificial intelligence. But in fact, I think that’s exactly what we should do.


It’s natural to think of forgetting as an unfortunate thing: something we should do our best to avoid, but which will inevitably happen. One way in which forgetting seems bad is in how it affects others. When we forget things, we can inconvenience people, and even cause them significant harm. And even where the practical impact is minimal, forgetting can signal something hurtful about what we value: that the thing we forgot wasn’t important or significant enough for us to remember.

But forgetting also strikes us as bad in another way: as a cognitive vice. We’re supposed to remember things; it’s what our minds were made for. Our memory systems are supposed to collect information, store it, and offer it up to us when we need it to navigate the world successfully. Forgetting is a failure to do this: a sign that our memory systems are doing what they’re supposed to do only imperfectly. So when we design new Artificial Intelligence, why not try to eradicate this human vice of forgetfulness?


First of all, building Artificial Intelligence with infallible memory systems is unlikely to be feasible, for much the same reasons that infallible memory isn’t feasible in us: there are practical limits on the amount of information that can be stored and processed in a finite system. But even if it were possible, building infallible memory into machines would mean forgoing some of the important benefits of forgetting.

I said above that forgetting is sometimes harmful in its effects on other people. But there are also times when it’s beneficial. For instance, you probably know someone who just won’t let things go. Perhaps you said something unkind to a friend in anger years ago. You’ve apologised repeatedly, and your friend says all is forgiven. But he still reminds you of your unkind words every time you have a disagreement. This kind of thing is very frustrating. It would be better for you if your friend would just forget about it. It might be better for him, too.

In many cases, the philosopher Rima Basu argues, we have a duty to forget. The example above might be one such case. If your friend really means to forgive you, then he should probably make an effort to forget what you said. As Basu says, ‘you cannot truly forgive if the wrongdoing is still at the forefront of your mind’. Other cases include those where you learn something in a way which violates another’s right to privacy. If I’m snooping around your house and read your teenage diary, then I probably ought to put what I’ve learned out of my mind. Duties to forget can also arise from the interest we have in constructing our own identities – that is, in shaping the way the world sees us. For instance, I was very shy as a child, but I wouldn’t say I am these days. There are times when I wish my family would forget that I used to be shy, since they still tend to explain things I do and say in terms of my shyness, and that isn’t how I wish to be seen.

In all these scenarios, we might wish that the people around us were more forgetful. And as frustrating as it can be when other people remember too much, it has the potential to be far more harmful when AI systems do it. There are already significant concerns about the amount of our personal data available to AI systems, given the control they exert over our lives. The situation would be made still more troublesome by systems which indefinitely stored every piece of information they encountered about us. This alone is a reason to think about building forgetful machines.


But there are other reasons to consider this – reasons which are ‘internal’ to the project of building artificial intelligence. Above, I said that forgetting strikes us as a cognitive vice: that our memory systems are supposed to retain information, and that forgetting is a sign that our memory systems carry out this function only imperfectly. But perhaps that’s not quite right: perhaps, to some extent, we’re supposed to forget.

The philosopher Kirk Michaelian has argued that this is the case: that a well-constructed memory system ought to forget certain things. The idea is that memory isn’t only a storage facility; it also centrally involves retrieval. For memory to work well, we need to be able to retrieve the information we need when we need it. This becomes harder to the extent that our memory ‘storage’ is full of information we’re never likely to need. So, Michaelian argues, a virtuous memory system will be one which prioritises the important things for remembering, preferentially forgetting ‘uninteresting’ records – as our memory systems, at least approximately, do.

So, might forgetting actually be beneficial to artificial agents too? In at least some cases, it seems the answer is ‘yes’.

A few years ago, researchers developed an AI system which achieved a new state of the art in playing Atari video games. The secret to its performance was that rather than learning directly from its ‘experiences’ as it played, the system stored ‘memories’ of its gameplay which it could replay and learn from later. Adding this memory store resulted in a huge improvement in the system’s performance.
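To make the idea concrete, here is a minimal sketch of this kind of memory store – what machine learning researchers call an experience replay buffer. The class and parameter names are my own illustrative choices under that general description, not the researchers’ actual code.

```python
import random
from collections import deque

class ReplayBuffer:
    """A finite memory store for an agent's gameplay 'experiences'."""

    def __init__(self, capacity=10_000):
        # A deque with a maximum length silently discards its oldest
        # entries once full: the oldest memories are 'forgotten'.
        self.memories = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        # Record one moment of gameplay for later replay.
        self.memories.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Replay a random batch of past experiences, letting the agent
        # learn from old moments as well as the most recent one.
        return random.sample(self.memories, min(batch_size, len(self.memories)))
```

Note that in this simple version, replay is entirely random and the only form of forgetting is crude first-in, first-out eviction – a point that matters below.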


What’s more interesting is that in a subsequent refinement of this system, researchers added a mechanism called ‘prioritised experience replay’. In prioritised replay, the system preferentially replayed gameplay events which were more surprising – that is, ones in which what happened differed from what the system had predicted. This version performed better still than its predecessor, in which the replay of memories was entirely random and only the oldest memories were ever ‘forgotten’.
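Again as an illustrative sketch rather than the published implementation: prioritised replay can be approximated by giving each memory a weight reflecting how surprising it was, and sampling in proportion to those weights. The eviction rule below, which forgets the least surprising memory rather than the oldest, is my own simplification to mirror the pattern of forgetting described here.

```python
import random

class PrioritisedReplayBuffer:
    """Replays surprising experiences more often than mundane ones."""

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.memories = []    # stored experiences
        self.surprises = []   # one 'surprise' score per experience

    def store(self, experience, surprise):
        # 'surprise' measures how far the outcome diverged from the
        # system's prediction; a small floor keeps every memory at
        # least faintly retrievable.
        if len(self.memories) >= self.capacity:
            # Forget the least surprising memory, not simply the oldest.
            i = self.surprises.index(min(self.surprises))
            del self.memories[i]
            del self.surprises[i]
        self.memories.append(experience)
        self.surprises.append(max(surprise, 1e-6))

    def sample(self, batch_size):
        # Surprising events are preferentially replayed; uninteresting
        # ones are increasingly neglected -- a soft form of forgetting.
        return random.choices(self.memories, weights=self.surprises,
                              k=min(batch_size, len(self.memories)))
```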

The ‘prioritised replay’ mechanism which improved this system’s performance so much looks quite similar to the virtuous pattern of forgetting that Michaelian describes: the system preferentially remembers events which are more informative and ‘forgets’ ones which are less likely to be useful.


So, it’s not only for social or ethical reasons that we might want to build forgetful machines. To the extent that the aim of artificial intelligence is to build machines which ‘learn and think like people’, we should build memory systems that resemble our own – and our own memory systems are essentially fallible. Their fallibility is not a vice, or a failure to optimally perform their functions – it’s an essential part of how these systems work, and work so well.

Aristotle famously suggested that virtue is to be found in the ‘golden mean’ between two vicious extremes – one characterised by excess, the other by deficiency. There’s no denying that remembering too little is a vice, but remembering too much can be a vice too – an ethical vice, and also a cognitive one. If we’re to build truly intelligent machines, and if we’re to live comfortably with them, then we should not aim to make their memories as reliable as possible. Instead, we should aim to replicate the virtuous patterns of forgetting which characterise biological memory. 
