In a pioneering effort to tackle the rise of deepfake content, now ubiquitous across the Internet, Denmark has proposed extending copyright protections to individuals’ facial features, appearance, and voice.
The proposed amendment to Denmark’s copyright law would effectively make it illegal to share deepfake content of another person without their consent, empowering individuals to get such forgeries taken down from online platforms and to seek compensation for their publication, much as copyright laws traditionally protect creative works.
Deepfakes are a form of synthetic media: believable, realistic videos, images, or audio of events that never happened. They show real people doing or saying things they never did or said.
While media has long been manipulated for nefarious purposes, artificial intelligence has made such manipulation easier and more sophisticated than ever before. The volume of deepfake content online has risen dramatically in recent years, and deepfakes have become increasingly difficult to spot.
Authorities around the world have struggled to catch up with the technology, which has been used to create pornographic content, spread misinformation, and pull off sophisticated con jobs.
Most existing laws dealing with deepfakes criminalise specific harms arising from the technology, such as deepfake pornography or the publication of manipulated media during elections. The Danish Bill, introduced last month and currently in the consultation stage, is harm-agnostic: it directly addresses the publication of deepfakes rather than the specific harms they may cause.
Danish culture minister Jakob Engel-Schmidt said the Bill gives people “the right to your own voice, your own facial features, and no one can copy that without your consent”.
The Bill introduces three new forms of protection against deepfakes:
The most notable is the protection extended to ordinary individuals. The proposed Section 73(a) makes it illegal to share realistic deepfakes that mimic a person’s appearance, voice, or characteristics, a protection that lasts until 50 years after the person’s death.
The operative word here is “realistic”. The Bill is indifferent to intent: any deepfake can be taken down as long as it looks real and creates confusion. At the same time, content that is clearly stylised is not protected.
The Bill proposes a consent-based protection: deepfake content can be shared only with the permission of the individual impersonated in it. The burden is on the person sharing the content to prove that consent was obtained, and consent can be withdrawn at any time.
The Bill also makes online platforms responsible for taking down deepfakes, and proposes heavy penalties if they fail to do so.
The Bill’s protections extend only to content in the public sphere: it does not make it illegal to generate deepfakes, only to publish them.
Certain forms of expression, such as satire or parody, remain outside the Bill’s protections, although the Bill does not grant blanket exemptions. Civil courts will decide what content to take down on a case-by-case basis, based on protections for free expression provided in the European Convention on Human Rights.
While ambitious and potentially agenda-setting, especially in light of Denmark’s presidency of the European Union, critics say implementing such a law will be challenging. The law’s mandate is restricted to Danish territory, making it impossible to prosecute wrongdoers operating elsewhere in the world.
“Denmark may be granting a new right, but if the mechanisms to enforce it are slow, burdensome or inconsistent, the real-world impact could be minimal,” Francesco Cavalli, chief operating officer of Sensity AI, a company that offers deepfake detection tools, told The New York Times.
“Regulation without enforcement is a signal, not a shield,” he said.
At the same time, the Danish example is being watched as a possible blueprint for other countries, most of which currently lack standalone legislation addressing digital impersonation.
Indian courts have thus far relied on concepts of privacy, defamation, and publicity rights when dealing with deepfakes. Notably, the Delhi High Court extended protections against the unauthorised use of their likenesses to actors Amitabh Bachchan in 2022 and Anil Kapoor in 2023.
Those rulings, however, treated these figures specifically as celebrities. The Danish Bill extends similar protections to ordinary citizens.