
Deepfakes are here to stay, so what should we do about it?

Deepfakes are a technology that uses machine learning, a branch of artificial intelligence, to recreate an existing image, most often a face, and superimpose it onto another. The technology has been used by the media and academics for some time now, but the public has recently gained access to it thanks to open-source software and the rapid improvement of consumer-grade graphics cards. Whilst this technology can be used to create seemingly harmless memes and cinema, it has also been used to generate fake political propaganda and pornography. This poses serious legal and socio-political problems, which this blog post will explore.
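
For readers curious about the mechanics, the classic face-swap approach trains one shared encoder alongside a separate decoder for each person; the “swap” then consists of decoding one person’s face with the other person’s decoder. The PyTorch sketch below is a minimal illustration of that idea: the layer sizes and the random stand-in image are assumptions made for brevity, not a real deepfake system.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns identity-agnostic face structure."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person decoder: learns one individual's appearance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's faces
decoder_b = Decoder()  # would be trained only on person B's faces

# After training, the "swap" is simply: encode a frame of person A,
# then reconstruct it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)  # random stand-in for a cropped face
fake_b = decoder_b(encoder(frame_of_a))
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```

In a real system the encoder and decoders are trained for many hours on thousands of aligned face crops, which is why source photos, a good graphics card and time are the only real requirements.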

Image taken from the Arts Management and Technology Laboratory

Deepfakes have exploded across the internet in the past year or so, becoming normalized and ever more common. They have also become more convincing and harder to distinguish from real videos. This is because all that deepfake software needs is some source photos, a sufficiently powerful computer and some time. Recently, YouTuber Mike Boyd learnt to create deepfakes in around 100 hours, a relatively low commitment for such a powerful technology.

This is extremely worrying, as deepfakes can be used to make it appear as though people are saying or doing things they never did. Deepfakes have already been at the heart of political turmoil, such as in Gabon, where some argue that a presidential address was a deepfake used to cover up President Ali Bongo’s illness. Deepfake expert Aviv Ovadya explained that whether or not the presidential video is real, the fact that deepfake technology is prominent enough to cast such doubt is itself the problem. Inevitably, the technology’s increasing accuracy will lead to even more mistrust, misinformation and misuse unless it is regulated.

Compounding these problems, audio deepfakes are also emerging and being used to cause harm. Like their visual counterparts, audio deepfakes use extensive recordings of a person’s voice to recreate it saying something entirely new. This can be combined with video deepfakes to make misinformation even more convincing, but it also threatens the livelihoods of singers and voice actors, whose voices can be reproduced at very little cost. This has already happened: Bev Standing discovered that her voice was being used for TikTok’s text-to-speech feature, despite her never having worked for the company. Whilst this case was settled out of court, it illustrates the legal issues that deepfakes can present and the worrying ease with which we may lose control of our own voice or likeness.
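
To give a flavour of what “extensive recordings” are used for: most neural text-to-speech systems are trained to predict mel spectrograms of the target speaker’s voice, which a separate vocoder model then converts back into audio. The sketch below shows only that feature-extraction step; the sample rate and transform parameters are common but assumed values, and the waveform is a random stand-in for real recordings.

```python
import torch
import torchaudio

# Stand-in for e.g. torchaudio.load("speaker_sample.wav") on a real
# recording of the target speaker (file name is hypothetical).
sample_rate = 16_000
waveform = torch.rand(1, sample_rate * 3)  # 3 seconds of random audio

# Convert the waveform into a mel spectrogram, the intermediate
# representation most neural TTS models are trained to predict.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,
    hop_length=256,
    n_mels=80,  # 80 mel bands is a common choice in TTS work
)
mel = mel_transform(waveform)

# A voice-cloning model would learn to map text to spectrograms like
# this one in the target speaker's style; the synthesis and vocoder
# stages are omitted here.
print(mel.shape)  # (1, 80, time_frames)
```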

A further example of the misuse of deepfakes is the creation of pornographic content. As the #NotYourPorn campaigner Kate Isaacs said, “The legislative process in this country is incredibly slow and technology is incredibly fast”. She argues that it is of key importance that we act fast to regulate and restrict the usage of deepfake technology. In a recent statement, the UK government announced that the planned reform of the Online Safety Bill will criminalise the sharing of explicit, non-consensual deepfakes. This is a promising step towards protecting victims and reining in the technology. Outside of this specific intervention, however, the courts will most likely deal with the harms caused by deepfakes through the known categories of criminal, civil and administrative law. Yet deepfakes ask new questions of our existing systems. For example: should we reconsider the intellectual property rules around parody now that deepfakes can be nearly indistinguishable from reality?

To conclude, deepfakes are just one manifestation of artificial intelligence and machine learning, and their impact has been huge. Unsurprisingly, the response of the law to these new technologies has been slow and often inadequate. Moreover, whilst there is irrefutably a need to protect victims and compensate for the harm caused by deepfakes, not all deepfakes are harmful. Other tools could therefore be employed to deal with the technology’s less harmful uses: educational campaigns about how deepfakes are created and how to spot them; regulation by social media organizations to restrict the posting of deepfakes on their platforms; or the development of new AI tools to help recognize and remove problematic deepfakes (sketched below). However, education is oftentimes a slow process; companies have proven time and time again that they cannot be trusted to self-regulate; and since AI itself is the cause of these problems, is it unwise to turn to it for a solution? This highlights the urgent need for a transparent and open conversation about AI, machine learning and deepfakes, and how society should respond to them.
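
As a rough illustration of what such an AI detection tool involves, the sketch below fine-tunes a pretrained image classifier to label face crops as real or fake. This is an assumption-laden toy setup, not how production detectors work: the batch is random stand-in data, and real systems rely on carefully curated datasets of genuine and manipulated video.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a generic pretrained image classifier and replace its
# final layer with a two-class head: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# Stand-in batch: in practice these would be face crops extracted from
# videos, labelled 0 (real) or 1 (deepfake).
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One illustrative supervised training step.
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss on this batch: {loss.item():.3f}")
```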