Seeing is not believing

The coming deepfake dystopia

Visual evidence is often treated as the gold standard of truth. News and documentary footage has an uncanny ability to take us in, presenting the illusion of an eyewitness experience, even though it is always a construction rather than the event itself. The rise of deepfake technology takes this a radical step further. The consequences for our trust in any testimony are profound, writes Don Fallis.

 

In order to survive and flourish, people need to constantly acquire knowledge about the world. And since we do not have unlimited time and energy to do this, it is useful to have sources of information that we can simply trust without a lot of verifying. Direct visual perception is one such source. But we cannot always be at the right place, at the right time, to see things for ourselves. In such cases, videos are often the next best thing. We can find out what is going on at great distances from us by watching videos on the evening news, for instance.

Moreover, we make significant decisions based on the knowledge that we acquire from videos. Videos recorded by smartphones have led to politicians losing elections (see Konstantinides 2013), to police officers being fired and even prosecuted (see Almukhtar et al. 2018), and, most recently, to mass protests around the world (see Stern 2020). And we are significantly more likely to accept video evidence than other sources of information, such as testimony. Thus, videos are extremely useful when collective agreement on a topic is needed (see Rini 2019).

But the value of videos as a source of knowledge is now under threat from deepfakes—realistic videos created using new machine learning (specifically, deep learning) techniques (see Floridi 2018). Deepfakes can depict people saying and doing things that they did not actually say or do. A high-profile example is “face-swap porn,” in which the faces in pornographic videos are seamlessly replaced with the faces of celebrities (see Cole 2018). But these techniques can be used to create fake videos of almost any event that are extremely difficult to distinguish from genuine videos. Notably, the statements or actions of politicians, such as former President Obama, can be, and have been, fabricated (see Chesney and Citron 2019; Toews 2020).

Deepfake technology threatens to seriously interfere with our ability to acquire knowledge from videos.

In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not (see Rothman 2018; Schwartz 2018; Warzel 2018; Toews 2020). Philosophers such as Deborah Johnson, Luciano Floridi, Regina Rini (2019), and Michael LaBossiere (2019) have now issued similar warnings. As Floridi puts it, “do we really know what we’re watching is real? ... Is that really the President of the United States saying what he’s saying?”

Admittedly, realistic fake videos of events that did not actually occur are nothing new. For example, during World War Two, the Nazis created propaganda films depicting how well Jews were treated under Nazi rule (see Margry 1992). So, there is certainly a sense in which deepfakes do not pose a brand new threat to knowledge. Nevertheless, deepfake technology threatens to drastically increase the number of realistic fake videos in circulation. Thus, it may seriously interfere with our ability to acquire knowledge from videos. As Johnson puts it, “we’re getting to the point where we can’t distinguish what’s real—but then, we didn’t before. What is new is the fact that it’s now available to everybody, or will be... It’s destabilizing. The whole business of trust and reliability is undermined by this stuff.”

There are three ways that deepfakes pose a threat to knowledge. First, deepfakes can lead people to acquire false beliefs. That is, people might take deepfakes to be genuine videos and believe that what they depict actually occurred. And this can easily have dire practical consequences. For example, Chesney and Citron (2019) ask us to imagine “a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement.”

Second, deepfakes can prevent people from acquiring true beliefs (see Fallis 2004). When fake videos are widespread, people are less likely to believe that what is depicted in a video actually occurred. Thus, as a result of deepfakes, people may not trust genuine videos from the legitimate news media (see Chesney and Citron 2019, 152; Toews 2020). Indeed, a principal goal of media manipulation is to create uncertainty by sowing doubt about reliable sources (see Oreskes and Conway 2010; Coppins 2019). 

And, third, even if we end up with true beliefs after watching a genuine video, we might not end up with knowledge because our process of forming beliefs was not sufficiently reliable. Indeed, as deepfakes become more prevalent, it may be epistemically irresponsible to simply believe that what is depicted in a video actually occurred.

How do deepfakes lead to the epistemic (i.e., knowledge-related) harms just described? Our thesis is that, as a result of deepfakes, videos now carry less information about the events that they depict. Basically, videos have become much less reliable evidence that the events that they depict actually occurred. Deepfake technology makes it much easier to create convincing fake videos of anyone doing or saying anything. Thus, even when a video appears to be genuine, there is now a significant probability that the depicted event did not actually occur. Admittedly, we have not yet observed a lot of deepfakes, at least in the political realm (see Rini 2019). Nevertheless, the probability that any given video is fake has increased as a result of deepfake technology. Moreover, this probability will continue to increase as the technology improves and becomes more widely available. So, what appears to be a genuine video now carries less information than it once did and will likely carry even less in the future.
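To see the shape of this argument, here is a minimal Bayesian sketch in Python, with invented, purely illustrative numbers (the function name and every probability are assumptions, not figures from this article). It computes how confident we should be that a depicted event actually occurred, given a convincing-looking video, and how that confidence falls as realistic fakes become more common and more convincing.

def p_event_given_convincing_video(p_fake, p_convincing_if_genuine, p_convincing_if_fake):
    # P(depicted event occurred | the video looks convincing), via Bayes' rule.
    # p_fake is the (assumed) prior probability that a video like this is fabricated.
    p_genuine = 1.0 - p_fake
    genuine_term = p_convincing_if_genuine * p_genuine
    fake_term = p_convincing_if_fake * p_fake
    return genuine_term / (genuine_term + fake_term)

# Hypothetical scenarios: before deepfakes, fakes are rare and rarely look convincing;
# with deepfake technology, they are more common and usually look convincing.
scenarios = {
    "before deepfakes": dict(p_fake=0.01, p_convincing_if_genuine=0.9, p_convincing_if_fake=0.05),
    "with deepfakes": dict(p_fake=0.10, p_convincing_if_genuine=0.9, p_convincing_if_fake=0.90),
}

for label, params in scenarios.items():
    posterior = p_event_given_convincing_video(**params)
    print(f"{label}: P(event occurred | convincing video) = {posterior:.3f}")

On these made-up numbers, the posterior drops from roughly 0.999 to 0.900: the same convincing footage now licenses noticeably less confidence, which is just another way of saying that it carries less information about the event it depicts.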


We cannot learn as much about the world if less information is carried by videos. Even if we follow David Hume’s (1977 [1748]) advice to proportion our belief to the evidence, we are in a less hospitable epistemic environment when the amount of information carried by videos goes down. As a result, we will end up with fewer true beliefs than we would have had otherwise. But once we clearly understand the threat that deepfakes pose, there are some things that can be done.

The first strategy involves changing our information environment so that it is epistemically safer. We can increase the amount of information that videos carry if we decrease the probability of realistic fake videos being produced. Although there are fewer and fewer technical constraints on the production of realistic fake videos, it is still possible to impose normative constraints. For example, laws restricting the creation and dissemination of deepfakes have been proposed (see Brown 2019; Chesney and Citron 2019; Toews 2020). Also, informal sanctions can be applied. Indeed, some apps for creating deepfakes have been removed from the Internet due to public outcry (see Cole 2019).

The second strategy involves changing us so that we are at less epistemic risk. We can increase the amount of information that videos carry if we get better (individually and/or collectively) at identifying deepfakes. Even if laypeople cannot identify deepfakes with the naked eye, it is still possible for experts in digital forensics to identify them and to tell the rest of us about them. And it is possible for the rest of us to identify such experts as trustworthy sources (see Fallis 2018).

The third strategy involves identifying parts of our information environment that are already epistemically safe. Different videos carry different amounts of information. For example, a video shown on the evening news is much more likely to be genuine than a random video posted on the Internet. After all, the evening news is a source that has “such credit and reputation in the eyes of mankind, as to have a great deal to lose in case of their being detected in any falsehood” (Hume 1977 [1748]). In other words, even without laws against deepfakes, the evening news is subject to normative constraints. Thus, we can try to identify those videos that still carry a lot of information.
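Continuing the illustrative Bayesian sketch above (again with invented numbers and a hypothetical helper function), the third strategy amounts to using source-specific priors: a broadcaster with a reputation to lose is assigned a much lower prior probability of airing a fake than an anonymous uploader, so the same calculation treats its footage as much stronger evidence.

# Same hedged sketch, now with hypothetical source-specific priors (illustrative numbers only).
def p_event(p_fake, p_conv_genuine=0.9, p_conv_fake=0.9):
    # P(event occurred | convincing video) for a source whose videos are fake
    # with prior probability p_fake.
    return (p_conv_genuine * (1 - p_fake)) / (p_conv_genuine * (1 - p_fake) + p_conv_fake * p_fake)

print(f"evening news broadcast (P(fake) = 0.001): {p_event(0.001):.3f}")
print(f"anonymous internet post (P(fake) = 0.30): {p_event(0.30):.3f}")

On these assumptions, a convincing video from the broadcast still warrants near-certainty, while the anonymous post warrants only about 70% confidence.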

 

References:

Almukhtar, S., Benzaquen, M., Cave, D., Chinoy, S., Davis, K., Josh, K., Lai, K. K. R., Lee, J. C., Oliver, R., Park, H., & Royal, D.-C. (2018). Black lives upended by policing: the raw videos sparking outrage. New York Times. https://www.nytimes.com/interactive/2017/08/19/us/police-videos-race.html.

Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: the coming age of post-truth geopolitics. Foreign Affairs, 98, 147–155.

Cole, S. (2018). We are truly fucked: everyone is making AI-generated fake porn now. Motherboard. https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley

Cole, S. (2019). Creator of DeepNude, app that undresses photos of women, takes it offline. Motherboard. https://www.vice.com/en_us/article/qv7agw/deepnude-app-that-undresses-photos-of-women-takes-it-offline

Coppins, M. (2019). The billion-dollar disinformation campaign to reelect the president. The Atlantic. https://www.theatlantic.com/magazine/archive/2020/03/the-2020-disinformation-war/605530/

Fallis, D. (2004). On verifying the accuracy of information: Philosophical perspectives. Library Trends, 52, 463–487.

Fallis, D. (2013). Privacy and lack of knowledge. Episteme, 10, 153–166.

Fallis, D. (2018). Adversarial epistemology on the internet. In D. Coady & J. Chase (Eds.), Routledge handbook of applied epistemology (pp. 54–68). New York: Routledge.

Floridi, L. (2018). Artificial intelligence, deepfakes and a future of ectypes. Philosophy and Technology, 31, 317–321.

Hume, D. (1977 [1748]). An enquiry concerning human understanding. Indianapolis: Hackett.

Konstantinides, A. (2013). Viral videos that derailed political careers. ABC News. https://abcnews.go.com/Politics/viral-videos-derailed-political-careers/story?id=21182969

LaBossiere, M. (2019). Deep fakes. Philosophical Percolations. https://www.philpercs.com/2019/05/deep-fakes.html

Margry, K. (1992). Theresienstadt (1944–1945): the Nazi propaganda film depicting the concentration camp as paradise. Historical Journal of Film, Radio and Television, 12, 145–162.

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt. New York: Bloomsbury Press.

Rini, R. (2019). Deepfakes and the epistemic backstop. https://philpapers.org/rec/RINDAT

Rothman, J. (2018). In the age of A.I., is seeing still believing? New Yorker. https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing

Schwartz, O. (2018). You thought fake news was bad? Deep fakes are where truth goes to die. Guardian. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth

Stern, J. (2020). They used smartphone cameras to record police brutality—and change history. Wall Street Journal. https://www.wsj.com/articles/they-used-smartphone-cameras-to-record-police-brutalityand-change-history-11592020827

Toews, R. (2020). Deepfakes are going to wreak havoc on society. We are not prepared. Forbes. https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/

Warzel, C. (2018). He predicted the 2016 fake news crisis. Now he’s worried about an information apocalypse. Buzzfeed News. https://www.buzzfeednews.com/article/charliewarzel/the-terrifying-future-of-fake-news

Yetter-Chappell, H. (2018). Seeing through eyes, mirrors, shadows and pictures. Philosophical Studies, 175, 2017–2042.

Adapted by permission from Springer Nature: Fallis, D. The Epistemic Threat of Deepfakes. Philos. Technol. (2020). https://doi.org/10.1007/s13347-020-00419-2.
