Risks of Gen AI – Deepfakes to disinformation: eSafety and authenticity


Tell me in two minutes

The crossroads of AI and online safety presents two major societal risks:

  1. the generation of ‘deepfakes’ and abusive content
  2. the proliferation of false information.

These risks derive from generative AI (GenAI), and they are more dangerous and damaging than the long-known risks of traditional AI because GenAI can produce realistic content quickly and at scale.

These issues are significant as they affect individuals, organisations, our political institutions and society at large. Adequately addressing these risks is crucial and requires a combination of legislative action, governance and proactive measures from a range of stakeholders.

Want to know about the governance structures needed to effectively manage the use of AI? Jump to our ‘Governance’ section below! Businesses using GenAI must proactively establish and enforce robust governance structures and clear guidelines.

This article is part of KWM’s series on the risks of generative artificial intelligence (GenAI) and examines the complex interplay between GenAI and online safety. Find the other articles here.

What are the e-safety risks from GenAI?

[Infographic] Source: CNN, NewsGuard, Sumsub 2023 Identity Fraud Report

Deepfakes and abusive content

Abusive content

GenAI has an uncanny ability to create realistic and convincing images, video and audio of a person from a single photograph or a snippet of the person’s voice. This can be used to generate ‘deepfakes’: media that portrays something that never occurred in reality. GenAI can modify an individual’s voice in a video so they appear to say something different, or stitch a person into a video they were never in.

Deepfakes can be, and are being, used for harmful ends, particularly ‘deepfake porn’. In early 2024, sexually explicit deepfake images of Taylor Swift, generated with GenAI, were shared publicly on social media platforms. The harm from these deepfakes was self-evident: they were non-consensual and sexually abusive images.

This is not exclusive to celebrities. There are reports of GenAI applications allowing users to ‘create non-consensual porn of any women they like’, fuelling an alarming trend in deepfake revenge porn.

The ready availability of these GenAI tools is concerning. The eSafety Commissioner has also received complaints of children using image generators to create sexual images of their peers in order to bully them.

Scams

Scammers and fraudsters also use deepfakes in their schemes. In Hong Kong, the multinational firm Arup was defrauded of HKD200 million after an employee was misled into transferring funds by scammers who impersonated the company’s chief financial officer using deepfake technology.

Individual consumers are targeted with fake footage of celebrities and other well-known public figures. In late 2023, an advertisement circulated online used a deepfake video of Tom Hanks to promote a dental plan. Other well-known people whose likenesses have been used in deepfake scams include Andrew Forrest, Elon Musk and Sam Kerr; scammers have used their images to deceive people into investing in fake online trading platforms.

The rising prevalence of these scams prompted the ACCC’s National Anti-Scam Centre to publish a media release warning consumers about deepfake scams.

False information

Another key risk associated with GenAI is its capacity to disseminate false information, whether through ‘hallucinations’ or through users deliberately spreading ‘fake news’.

Hallucinations

GenAI can ‘hallucinate’: fabricate false information based on user inputs or the underlying datasets on which it is trained. These hallucinations can cause damage in a variety of contexts. For example, GenAI may provide incorrect or misleading information that leads to a wrong medical diagnosis, or erroneous financial data that results in an unsound investment.

Fake news

GenAI can also be used to create and spread false information deliberately, by generating vast quantities of content. This is already happening: GenAI is being used to create ‘fake news’, sparking an uptick in the dissemination of factually incorrect articles online. NewsGuard, a company that tracks online misinformation, has created an online tool that had identified 840 unreliable AI-generated news and information sites at the time of publishing this article.

Why are these issues and risks important?

The issues and risks associated with GenAI are important as they have a multitude of impacts on individuals, organisations, politics and society. Here we explore the implications of GenAI at each level.

Individual

As discussed, GenAI can produce false information that is highly convincing and highly damaging to the individuals concerned, whether through a deepfake or a false statement made about them. Victims can suffer serious psychological or reputational harm with lasting consequences. Because images and statements persist on the internet and on people’s devices, it may be difficult to have them permanently deleted or removed.

The financial harm an individual can suffer is also evident. As highlighted above, scammers are using deepfakes to extract money from victims, and their use is increasing substantially, with a 1,530% jump in deepfake cases in the APAC region from 2022 to 2023.

Organisational

Like individuals, organisations can fall victim to the effects of misinformation. Whether they spread misinformation themselves or are unwittingly caught up in its effects, GenAI poses novel threats to an organisation’s reputation and credibility.

Last year, KPMG was falsely accused of complicity in a 7-Eleven wage theft scandal after submissions to a Senate committee relied on Google’s Bard AI tool to generate case studies about misconduct that were not fact-checked. Here, GenAI not only damaged KPMG’s reputation but also undermined the integrity of a parliamentary inquiry. The incident demonstrates that the spread of misinformation can provoke adverse reactions, tarnish a company’s legitimacy, erode trust and inflict enduring harm on reputation.

Political

There are risks of GenAI being used to spread political misinformation and interfere in elections. GenAI has already been deployed in various political forms, from deepfake videos of former US Secretary of State Hillary Clinton endorsing Florida Governor Ron DeSantis to ‘robocalls’ imitating Joe Biden urging voters not to vote in a Democratic primary election.

Election interference using GenAI was seen recently in countries such as Slovakia and India, and it was so prevalent in the 2023 Argentine general election that the New York Times labelled it the ‘first AI election’. It is increasingly apparent that GenAI can be leveraged to shape voter opinion and push political agendas. Ultimately, this undermines political discourse by amplifying misinformation and manipulating public opinion.

Societal

The ways GenAI affects individuals, organisations and politics combine to create overarching societal issues. The distribution of deepfakes entrenches and normalises sexism, disrespect, harassment and inequality, perpetuating harmful societal norms and values. Law enforcement resources are also strained in combating deepfakes: as agencies become burdened by the growing volume of deepfake content, resources and attention are diverted from other victims. Combined with the dissemination of false information, the cumulative effect of GenAI is an erosion of trust in our media and institutions.

[Figure: Pyramid of risks associated with GenAI and online safety]

How do we address or mitigate the e-safety risks?

Legislation

Legal restrictions can regulate harmful uses of GenAI.

The Online Safety Act 2021 (Cth) (the Act) was introduced to bolster Australia’s online safety framework, including by expanding protections against online harm. The Act:

  • prohibits certain content, such as the non-consensual posting of intimate images (section 75)
  • extends to images that have been digitally altered or generated, which covers deepfakes (section 15(5)).

Accordingly, individuals can report abusive content to the eSafety Commissioner, who can then issue notices for its removal (section 77).

However, there are challenges in enforcing this legislation, especially where entities or actors are based overseas. An example is the ongoing litigation between the eSafety Commissioner and X Corp, which will test whether Australia’s online safety regime can reach content hosted by global and foreign corporations.

In May 2024, the Australian Government announced its intention to introduce legislation banning the creation and distribution of deepfake pornography. This is representative of a broader global movement to regulate deepfakes.

Governance

At an entity level, those who use and interact with GenAI should establish robust governance structures to oversee its use effectively. Governance frameworks can include guidelines for the responsible use of GenAI, content moderation protocols and accountability mechanisms.

The eSafety Commissioner has recommended that entities industry-wide adopt a ‘Safety by Design’ approach to the development of online products and services. Organisations that provide online platforms should seek to incorporate the following principles with online safety in mind:

  • service provider responsibility: where the burden of regulating online safety should never fall on the user
  • user empowerment and autonomy: where the provision of an online product and service should align with the user’s best interests
  • transparency and accountability: where the online platform implements assurances that it is operating towards set safety standards and goals.

Some examples of positive governance frameworks in operation include:

  • prominent news organisations such as AP News dedicate a fact-checking arm to detecting ‘fake news’, including AI-generated media. This embodies the ‘service provider responsibility’ principle by lifting from readers the burden of verifying information authenticity.
  • social media platform X has a ‘Community Notes’ function that allows users to flag misleading media, emphasising user empowerment and autonomy.

At an industry level, addressing the prevalence and risks of GenAI is recognised as a collective effort. In 2021, the Coalition for Content Provenance and Authenticity (C2PA) was formed to develop technical standards for certifying the source and history of media content. The C2PA has grown to include members such as Adobe, Google and Microsoft. Its overarching goal is to create a digital ecosystem that enables transparency and accountability in the use of GenAI technology.
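To illustrate the core idea behind content provenance, the sketch below (in Python, using the cryptography package) binds a signed claim about a piece of media’s origin to a hash of its bytes. This is a deliberately simplified illustration of the concept only; it is not the actual C2PA manifest format, which is considerably richer.

    # Simplified, hypothetical illustration of content provenance signing.
    # NOT the real C2PA manifest format; it only shows the underlying idea:
    # a signed claim bound to a hash of the content.
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def create_manifest(media: bytes, creator: str, tool: str,
                        key: Ed25519PrivateKey) -> dict:
        """Bind a creator/tool assertion to a hash of the media, then sign it."""
        claim = {
            "content_sha256": hashlib.sha256(media).hexdigest(),
            "creator": creator,
            "generator_tool": tool,  # records, e.g., that a GenAI tool was used
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        return {"claim": claim, "signature": key.sign(payload).hex()}

    def verify_manifest(media: bytes, manifest: dict,
                        public_key: Ed25519PublicKey) -> bool:
        """Check the signature and that the media was not altered after signing."""
        claim = manifest["claim"]
        if hashlib.sha256(media).hexdigest() != claim["content_sha256"]:
            return False  # media bytes changed since the claim was made
        payload = json.dumps(claim, sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(manifest["signature"]), payload)
            return True
        except InvalidSignature:
            return False

    key = Ed25519PrivateKey.generate()
    image = b"...image bytes..."
    manifest = create_manifest(image, "Example News", "example GenAI model", key)
    print(verify_manifest(image, manifest, key.public_key()))         # True
    print(verify_manifest(image + b"x", manifest, key.public_key()))  # False

Any edit to the media after signing changes its hash and breaks verification, which is what allows downstream platforms and viewers to detect tampering or undisclosed AI generation.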

What matters now with the tsunami of generative AI is that industry not only gets onto the business of measuring its safety success at the company, product and service level, but also sets tangible safety outcomes and measurements for the broader AI industry. – Australian eSafety Commissioner, Julie Inman Grant, Safety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist (aspistrategist.org.au)

Education and awareness

Where malicious GenAI content falls through the cracks of legislation and governance mechanisms, education is an invaluable tool to empower individuals to identify and respond to it. Effective education allows individuals to make informed decisions about their consumption of online media that may include AI-generated content, and to protect themselves from potential harm.

In Canada, experiential workshops have introduced young people to deepfake technologies in a controlled environment and shown examples of their use. The research demonstrated that education may help develop skills in recognising and assessing the validity of online media and build resilience to malicious deepfakes.

Similar views have been espoused in Australia, where Western Sydney University has emphasised the importance of news media literacy education for young Australians.

Technology

AI can also act as an effective tool against its own risks. AI models can be developed to detect and identify false information, deepfakes and abusive content. For example, YouTube has been deploying AI tools to assist its content moderation systems. Integrating these technologies into online platforms can provide an additional layer of protection against the risks associated with AI-generated content.
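As a simplified sketch of how a detection model might sit inside a moderation pipeline (the scoring function and thresholds below are hypothetical stand-ins, not any platform’s actual system), content can be auto-published, escalated to human review, or blocked depending on the model’s confidence:

    # Hypothetical sketch of a moderation gate. The scoring function and
    # thresholds are illustrative stand-ins, not any platform's real system.
    from dataclasses import dataclass

    @dataclass
    class ModerationResult:
        action: str   # "publish", "human_review" or "block"
        score: float  # model's estimated probability the content is harmful

    def score_content(content: bytes) -> float:
        """Stand-in for a trained detector (e.g. a deepfake classifier)."""
        return 0.72  # a real implementation would run model inference here

    def moderate(content: bytes,
                 review_threshold: float = 0.5,
                 block_threshold: float = 0.95) -> ModerationResult:
        """Three-tier gate: auto-publish, escalate to humans, or auto-block."""
        score = score_content(content)
        if score >= block_threshold:
            return ModerationResult("block", score)         # high-confidence harm
        if score >= review_threshold:
            return ModerationResult("human_review", score)  # uncertain: escalate
        return ModerationResult("publish", score)

    print(moderate(b"uploaded video bytes"))
    # ModerationResult(action='human_review', score=0.72)

The middle tier matters: because detection models are imperfect, routing uncertain cases to human reviewers reduces both wrongful takedowns and missed harms.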

In early 2024, leading technology companies including OpenAI and Microsoft signed the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’, pledging to develop technology that detects and addresses deceptive GenAI election content.

Conclusion

The risks associated with traditional AI have long been recognised. GenAI, however, raises the stakes because of its capacity to create believable content at scale and at speed. As a result, the harms inherent in GenAI are far-reaching, extending from the individual to society at large.

In a time of greater consciousness of technology’s impacts, this represents an opportunity to build trust with users and customers by emphasising safety and transparency.

For further insights into the eSafety Commissioner’s approach to regulating online content, keep an eye out for our conversation with Julie Inman Grant as part of our annual Digital Future Summit series in August.

Stay tuned for the next update in our risks in GenAI series, with a focus on copyright.  Subscribe to data and technology newsletters here.


KWM's tech reg tools

Australia’s regulatory framework is constantly evolving as lawmakers try to keep up with rapidly developing technologies that are transforming our economy. Navigate this dynamic tech landscape and stay ahead of developments with KWM’s AI regulatory map and technology regulatory tracker.

AI regulatory map

Our map of AI regulation highlights the key players who have a voice in the regulation of AI in Australia, with tiering based on our assessment of each participant’s interest in AI, the scope of their mandate, their public statements on AI and the potential impact of their initiatives.

Tech reg tracker

Our easy-to-use and frequently updated tech reg tracker helps you stay on top of important developments across key areas of tech-related regulation, including AI, with links through to KWM insights or public resources explaining the significance of each development. 

If you would like us to talk you through regulatory developments as they relate to GenAI, or data and tech more broadly, please reach out to the experts listed at the bottom of our regulatory map and tracker.

