Update! The Bill was rapidly passed into law shortly after the publication of this insight. Please see our post on the KWM Pulse blog for details.
Tell me in a minute
The Government has introduced the Online Safety Amendment (Social Media Minimum Age) Bill 2024 (Bill) into Parliament.
The Bill amends the Online Safety Act 2021 (Cth) (Online Safety Act) to require providers of certain social media platforms to take reasonable steps to prevent users under 16 from having an account. In doing so, it aims to reduce the risk of harm to children from these platforms.
The Bill also imposes obligations on social media platforms to ensure that information collected for age assurance purposes is not improperly used.
Background
The Government first expressed its intent to implement an age restriction on social media only a few weeks ago, and has moved very swiftly to introduce the Bill.
However, the age restriction has not been universally accepted as a magic bullet, and stakeholders across the community and industry have expressed opposing views on an all-out ban. While eSafety Commissioner Julie Inman Grant has cautiously welcomed the legislation, it is notable that, in a statement to the Joint Select Committee on Social Media and Australian Society earlier this year, she drew an analogy to the risk of drowning: ‘We do not fence the ocean or keep children entirely out of the water but we do create protected swimming environments that provide safeguards and teach important lessons from a young age.’
Nonetheless, with an age-related ban on social media enjoying strong support from both sides of politics, and the opposition pushing for it to become law by Christmas, we expect a form of the age restriction to be passed during the last sitting week of the year.
Minimum age for account holders
The Bill introduces a new concept of an ‘age-restricted social media platform’. Those platforms will be obliged to take reasonable steps to prevent Australian children under the age of 16 from having accounts.
An electronic service will be an ‘age-restricted social media platform’ if:
- the ‘sole’ or a ‘significant’ purpose of the service is to enable online social interactions between 2 or more end-users
- it allows end-users to link or interact with other end-users, and
- it allows end-users to post material on the service.
This is broader than the current definition of a ‘social media service’ under the Online Safety Act, which is limited to services whose ‘sole’ or ‘primary’ purpose is to enable online social interactions. The expanded definition means that messaging services with a significant social aspect could be classified as age-restricted social media platforms, addressing concerns that certain platforms might escape the ban because they do not neatly fit the definition of ‘social media’.
The Government has indicated in the Bill’s Second Reading Speech that it will create legislative rules that carve out dedicated messaging services, online games and services designed to support health and education. Somewhat contradictorily, however, the Government has also foreshadowed a potential enlargement of the definition of ‘age-restricted social media platform’ to include online games or additional social media platforms that can be viewed without an account, after further consultation with relevant stakeholders. We expect that which services fall within and outside the ban will be a key focus for online safety advocates and industry alike.
For those platforms that are within scope, the Bill places the onus for complying with age restrictions squarely on the platform operators, rather than on the end-users or their parents or carers. There are no legal consequences for users who circumvent the measures put in place by a platform, or for their parents or carers. In such a circumstance, the platform provider would not be penalised either, provided that it has implemented reasonably appropriate measures and there is no systemic failure to take action to limit such circumventions.
The primary obligation under the Bill is that a platform provider must take reasonable steps to prevent age-restricted users having accounts with the platform. While the Bill does not specify what measures a provider must implement to comply with the age-restriction obligation, the Explanatory Memorandum notes that, as a minimum, platforms are expected to implement some form of age assurance in order to identify whether a prospective or existing account holder is under the age of 16.
The Explanatory Memorandum suggests that whether an implemented age assurance methodology constitutes a ‘reasonable step’ will be determined objectively, including by reference to the efficacy, cost and privacy implications of the methodology. Platforms may choose to contract with a third party to undertake age assurance on their behalf, or enter into agreements with app distribution services or device manufacturers to allow user information to be shared for age assurance purposes. There may also be a role for digital identity in effecting age assurance.[1]
Platforms will likely also be required to monitor and respond to users who have circumvented age assurance measures.
Interaction with the Privacy Act
The Bill introduces new privacy protections to prohibit relevant platform operators from using personal information collected for age assurance purposes for other purposes, unless the individual gives consent or an exception otherwise applies under the Privacy Act 1988 (Cth) (Privacy Act). Any further use of that personal information will be an interference with privacy under the Privacy Act.
Interestingly, the Bill provides that:
- the consent must be voluntary, informed, current, specific and unambiguous, and
- individuals must be able to easily withdraw their consent.
While these principles are consistent with previous recommendations made as part of the ongoing review of the Privacy Act, they did not feature in the first tranche of privacy reforms introduced in September this year, which is still making its way through Parliament. This is a sign that these consent principles will likely feature in future tranches of privacy reform.
These principles are designed to prevent providers from obtaining consent through preselected settings or opt-outs. After the provider has used the individual’s information for age assurance purposes, or any other agreed purpose, that information must be destroyed.
Penalties
If a relevant platform operator fails to take reasonable steps to prevent age-restricted users from having accounts, the provider will be subject to civil penalties of up to 150,000 penalty units, which in current terms equates to $49.5 million. A provider may also face penalties under the Privacy Act if it fails to comply with the privacy protections set out in the Bill.
Incidentally, while not directly relevant to the age restriction, the Government has also used the Bill to increase the maximum penalties for breaches of industry codes and standards made under the Online Safety Act to the same amount of $49.5 million (a 60-fold increase on the current maximum), which we see as a clear sign that the Government is committed to ensuring its efforts to regulate social media are supported by meaningful penalties.
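For readers who want to check the arithmetic, the $49.5 million figure appears to reflect the Commonwealth penalty unit value of $330 (the rate applying from November 2024; penalty units are indexed, so the dollar equivalent will change over time):

```latex
% A rough conversion of the maximum civil penalty into dollar terms,
% assuming a Commonwealth penalty unit value of $330 (indexed; subject to change):
150{,}000 \text{ penalty units} \times \$330 \text{ per unit} = \$49{,}500{,}000 \approx \$49.5 \text{ million}
```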
The Commissioner’s powers
The Bill gives the eSafety Commissioner (Commissioner) broad powers to require a provider of an ‘electronic service’ to disclose information relevant to whether the service is an age-restricted social media platform, and whether the provider has taken the required reasonable steps to meet its obligations under the Bill. This captures a broader range of service providers than just social media platforms, and will allow the Commissioner to verify whether a service is in fact an age-restricted social media platform.
In addition, if the Commissioner is satisfied that the provider has contravened their obligations under the Bill, the Commissioner may prepare and publish a statement to that effect, effectively to ‘name and shame’ the provider and provide a further incentive to comply.
Challenges
The Explanatory Memorandum recognises a number of difficulties and challenges with these potential obligations.
- The Bill recognises that it is impossible for the Government to completely prevent young people from accessing social media. For this reason, providers will not be punished if individuals circumvent any reasonably appropriate measures put in place by the platform, as long as there are no signs of systemic failures on the platform provider’s part. Similarly, there will be no consequences for either user or platform operator if a child is able to access content on the platform in a ‘logged out’ state (ie without an account) or if a parent allows their child to use the parent’s account to access a platform.
- The Bill recognises that the age-restriction obligations are highly novel and that there are therefore inherent uncertainties as to how they will be applied. It also recognises that time is needed for the Commissioner to establish the necessary guidance and enforcement framework, and for platform operators to develop the systems required to comply. For this reason, the minimum age obligation will not take effect until 12 months after the legislation commences. However, when the obligation does take effect, it will apply to all accounts, including those created beforehand (ie the accounts of existing users).
What’s next
The Bill has been referred to the Senate Environment and Communications Legislation Committee for a lightning-fast review, with the Committee due to report back on 26 November, leaving only a couple of sitting days for any amendments to be made and for the Bill to be passed.
Meanwhile, the Government has flagged in its Second Reading Speech that it intends to introduce further legislation including a statutory Digital Duty of Care to ensure that social media providers are continuously identifying and mitigating potential risks as the technology and service offerings evolve. The Government will further monitor the effectiveness of the social media minimum age through a mandatory independent review to be conducted within 2 years of the minimum age obligation taking effect.
We will be following developments closely to keep you up to date. You can also stay across the latest via the KWM Tech Regulation Tracker here.