Denmark is proposing legislation to amend its digital copyright law in response to the growing threat of AI-generated deepfakes. The proposed changes seek to protect individuals' rights over their digital identities at a time when deepfake attacks are causing significant financial losses and spreading disinformation.
Deepfakes utilize artificial intelligence to produce realistic fake images, videos, and audio recordings. The technology has been employed in various ways, from creating humorous content to perpetrating financial fraud and spreading misleading information. The World Economic Forum’s Global Coalition for Digital Safety is working to foster public-private cooperation to combat harmful online content, including deepfakes, and to enhance digital media literacy.
The Danish government’s amendment, considered a pioneering effort in Europe, aims to safeguard individuals’ control over their identities, specifically their appearance and voice. The government aims to submit the amendment in the autumn, indicating the urgency with which it views the issue. The proposal has garnered cross-party support, suggesting a consensus on the need to address deepfake-related challenges.
Deepfakes leverage AI technology, specifically “deep learning,” to create manipulated or entirely fabricated content. This technology can alter existing content, for example by replacing one actor’s face with another in film clips, or generate new content that depicts individuals saying or doing things they never actually did. While such uses may appear harmless, they raise concerns about an individual’s right to their own image.
In 2023, US actors went on strike to advocate for control over the use of their images by AI. The strike brought film and TV productions to a standstill. The actors secured a commitment from the industry that any future AI use of their images would require their explicit consent. This event highlights the growing awareness and concern regarding the use of AI to manipulate or replicate individuals’ likenesses without permission.
A significant threat posed by deepfakes is their use in spreading fake news. Instances include deepfakes of former US President Joe Biden and Ukrainian President Volodymyr Zelenskyy. By creating the appearance that messages originate from trustworthy sources, deepfakes can lend credibility to false information, potentially influencing public opinion and political discourse.
Resemble.ai’s research indicates that financial fraud and cybercrime represent substantial growth areas for deepfake applications. While 41% of deepfake targets are public figures, including celebrities, politicians, and business leaders, 34% are private individuals, predominantly women and children. Organizations account for 18% of those targeted by deepfakes.
The UK engineering firm Arup experienced a deepfake scam that resulted in a financial loss of $25 million. Cybercriminals utilized an AI-generated clone of a senior manager to convince a finance employee to transfer the funds during a video call. This instance illustrates the potential for deepfakes to facilitate sophisticated financial crimes.
A fraud attempt targeting Ferrari involved the use of an AI-generated voice of CEO Benedetto Vigna. An employee thwarted the attempt by asking a question that only the real CEO could answer. A BBC journalist demonstrated the potential for voice cloning by bypassing her bank’s voice identification system using a synthetic version of her own voice. These examples underscore the increasing sophistication and accessibility of deepfake technology for malicious purposes.
Resemble.ai’s deepfake security report for Q2 2025 revealed a significant increase in publicly disclosed deepfake attacks. The report documented 487 such attacks, representing a 41% increase on the previous quarter and a 300% increase year-on-year. It also found that direct financial losses from deepfake scams have reached nearly $350 million, and that deepfake attacks are doubling every six months, highlighting the escalating nature of the threat.
Resemble.ai indicates that deepfake fraud is a global issue, particularly prevalent in technologically advanced regions. While the US leads in reported incidents, deepfake cases are also widespread across Asia Pacific and Europe, with a rapid increase observed in Africa. This global distribution underscores the need for international cooperation in addressing the challenges posed by deepfakes.
The US has implemented the Take It Down Act, requiring the removal of harmful deepfakes within 48 hours and imposing federal criminal penalties for their distribution. The Act also mandates that public websites and mobile apps establish reporting and takedown procedures. States including Tennessee, Louisiana, and Florida have enacted their own deepfake laws, demonstrating a multi-faceted approach to addressing the issue.
The European Union’s Digital Services Act (DSA), which came into effect in 2024, aims to prevent illegal and harmful activities online, including the spread of disinformation. The DSA has placed online service providers under increased scrutiny, and several formal investigations for non-compliance are already underway. The UK has adopted a similar approach with the Online Safety Act, whose core duties began taking effect in early 2025.
The Danish amendment under consideration would allow individuals affected by deepfake content to request its removal, and artists to demand compensation for unauthorized use of their image, with that right extending for 50 years beyond the artist’s death. Online platforms such as Meta and X could face substantial fines if the amended bill is passed as proposed. Rather than directly awarding compensation or imposing criminal charges, the bill would establish the legal foundations for seeking damages under Danish law.
With Denmark holding the Presidency of the Council of the European Union, it aims to prioritize media and culture within European democracy through initiatives like the European Democracy Shield. The amendment to domestic copyright law is expected to send strong political signals to both Brussels and the wider EU. This action reflects Denmark’s commitment to addressing the challenges posed by deepfakes and promoting a safer online environment.
The World Economic Forum’s Global Coalition for Digital Safety aims to promote cross-regional cooperation for online safety. This includes accelerating public–private collaboration to address harmful content, including deepfakes. The coalition also facilitates the exchange of best practices in online safety regulation and supports efforts to improve digital media literacy. By fostering collaboration and knowledge sharing, the coalition aims to enhance global efforts to combat deepfakes and promote a more secure digital environment.