18 November 2022

Plenary address by Australian Information Commissioner and Privacy Commissioner Angelene Falk

Check against delivery

Good evening and welcome to the members of LAWASIA.

Thank you to Michael West for his Welcome to Country and I pay my respects to the Traditional Custodians and to Elders past, present and emerging.

Thank you to Dr Gordon Hughes, Secretary-General of LAWASIA, for the invitation to address you today on the role of information management in maintaining human rights and the rule of law, a topic that goes to the heart of my office’s core functions.

And as members of the legal profession, you of course have a role to play, both in managing your clients’ information and as advocates for the rule of law.

I’m speaking with you at a time when there is increased community awareness and concern around the handling of personal information, following Australia’s two largest data breaches.

Entities that hold personal information continue to face challenging cyber and other security risks.

There is also a commitment from government to reform privacy laws in Australia. A bill currently before Parliament contains a set of provisions, including an increase in maximum penalties from $2.2 million to $50 million, that would strengthen my office’s ability to undertake our regulatory functions in an efficient and effective manner in the best interests of the community.

We see this as a positive step towards updating Australia’s privacy laws, ahead of the completion of the Attorney-General’s Department’s broad review of the Privacy Act, to ensure we have a regulatory framework that empowers individuals, ensures entities protect personal information and best serves the Australian economy.

In our globally interconnected digital world, where personal information knows no borders, the need for high standards of data protection is more important than ever before.

The Office of the Australian Information Commissioner is the independent national regulator for privacy and freedom of information. We promote and uphold rights to access government-held information under the Freedom of Information Act, and to have personal information respected and protected in accordance with privacy laws.

Both of these functions share a deep human rights basis.

Privacy is a fundamental human right recognised in Article 12 of the Universal Declaration of Human Rights, in Article 17 of the International Covenant on Civil and Political Rights, and in many other international and regional agreements.

In Australia, the Privacy Act seeks to give effect to the fundamental right to privacy by protecting individuals from arbitrary interference with their personal information and from harm stemming from its misuse.

Put simply, data protection and privacy seek to protect our right to autonomy and human dignity.

The right to privacy also enables other human rights such as freedom of association, thought and expression, as well as freedom from discrimination. The previous Special Rapporteur on the right to privacy has described it as the right to the 'free development of personality', without the gaze of those with whom we do not choose to share.

And managing information well is a foundation for protecting this fundamental human right.

My thesis this evening is that this is a shared responsibility of governments and private sector organisations. And that it is also a partnership with individuals.

This requires a proactive approach, where entities that wish to collect and hold personal information put individuals at the centre. It requires assessing the risks upfront, and building in safeguards. Privacy by design. And it means more than asking individuals to read 6,000-word privacy policies that require a tick in a box purporting to be consent.

Ensuring individuals have access to government-held information is also crucial. The proactive disclosure of government-held information promotes open government and advances our system of representative democracy.

This is supported by our open by design principles, which recognise that:

  • information held by government and public institutions is a public resource
  • a culture of transparency within government is everyone’s responsibility
  • appropriate, prompt and proactive disclosure of government-held information informs the community, increases participation, enhances decision-making, builds trust and confidence, is required and permitted by law, and improves efficiency.

I will return to access to information later, but first to privacy.

Across the world we see different privacy and access to information laws, which reflect unique cultural foundations. While new laws continue to emerge, not all countries across Asia and the Pacific have enacted comprehensive data protection laws or established independent data protection authorities.

European countries approach privacy primarily from a human rights perspective, whereas the US primarily treats it as a consumer protection issue subject to contractual arrangements. This has led to a greater intersection between data protection, consumer protection and competition laws. Each has a role to play.

For the audience here tonight, you will have your own particular experiences, both legal and cultural, to call upon.

Globally, our respective frameworks do have more in common than differences. At its heart, privacy seeks to ensure the protection of an individual’s human dignity, and that personal information is not used in ways to create harm. As my counterpart commissioner in the UK has said, privacy laws are 'not a don’t do, but a how to'.

Data breaches

This simple phrase has great work to do when applied to data breaches, where recent experience in Australia highlights the importance of managing information well to mitigate impacts on personal privacy.

Similar to other notifiable data breach schemes around the world, if an organisation covered by the Australian Privacy Act experiences a data breach that is likely to result in serious harm, it must notify my office and affected individuals, giving them the opportunity to take steps to protect their personal information, like changing passwords or notifying their bank, as well as being on the lookout for scammers.

In the last few months, there have been prominent data breaches that have affected millions of Australians. The two most notable are those involving telecommunications firm Optus and insurer Medibank. Involving a wide range of personal information, these breaches are of great community concern.

My office received 854 data breach notifications last financial year, with the top reporting sectors continuing to be health and finance, followed by legal, accounting and management services.

Data breaches are a reminder of the risks and potential harms that can arise from the handling and misuse of personal information and the impact they can have on the human right to privacy.

In Australia, we have seen a previous breach where the former Department of Immigration and Border Protection mistakenly released the personal information of over 9,000 asylum seekers who were being held in immigration detention. Some asylum seekers claimed the breach exposed them to persecution from authorities in their home countries.

It further reinforces how a privacy breach in one nation can have far-reaching consequences, given that the flow of information knows no borders in our increasingly digitised world.

It also makes clear the need to prevent breaches upfront. If we apply the 'how to' message, it would go like this: Only collect personal information that is reasonably necessary, have reasonable steps in place to protect it and delete it when it is no longer required.

Impact of technology on the right to privacy

I would like to turn from data breaches to consider the impact of technology on the right to privacy. Indeed, most aspects of the daily lives of individuals around the world have been transformed by innovations in technology and service delivery.

The amount of data generated each year is astonishing and growing exponentially. By 2025, the total amount of data created, captured, copied and consumed in the world is expected to grow to 175 zettabytes, or 175 trillion gigabytes – 5 times what it was only a few years ago.

We’ve seen the trend for accelerated adoption of digital technology continue alongside rapid advances in artificial intelligence, facial recognition technology, machine learning algorithms and biometrics.

This has implications for both privacy and information access rights. Indeed, the way in which technology is governed and deployed now will shape our society. While use of technology can create public good, it can also create privacy risks.

Imagine a facial recognition capability paired with computer vision and integrated into wearable smart sunglasses. This could allow real-time identification of every person we pass on the street. The camera collects facial templates, compares them to a database built from images scraped from the internet, and tells you the identity of the person you have walked past. Not only would this have implications for safety, but also for our autonomy.

And the question to be asked is whether such deployments of technology are in the public interest. And if they are not, are our laws able to adequately address these risks?

So let’s bring that back to a real case. The OAIC and the UK’s Information Commissioner’s Office conducted a joint investigation into the personal information handling practices of Clearview AI Inc. It focused on the company’s use of ‘scraped’ data and biometrics of individuals.

At the time of our investigation, Clearview’s facial recognition tool included a database of over 3 billion images taken from social media platforms and other publicly accessible websites.

The tool allows users to upload a photo of an individual’s face and find other images of that person scraped from the internet. It then links to where the photos appeared for identification purposes.

There were also reports that the service was being used by law enforcement.

I found that Clearview AI Inc breached the Australian Privacy Act by collecting Australians’ sensitive biometric information without consent, by covert and therefore unfair means and by failing to take reasonable steps to notify individuals of the collection of their personal information.

The UK authorities also found that it breached their data protection laws.

Commenting on our announcement at the time, I said:

'When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes.

'The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.'

And while there may be justified uses of facial recognition in law enforcement or combatting terrorism, there must be consideration of whether the privacy impact is reasonable, necessary and proportionate.

Australia is currently reviewing its Privacy Act. This remains a landmark opportunity to ensure that Australia has a fair and flexible framework that can protect the community from privacy harms and meet the challenges of rapidly evolving global digital markets.

A related issue to facial recognition is technology that uses automated decision-making. Its legitimate uses can advance the public good, but it also has the potential to affect people’s rights.

Racial and gender biases have been cited in a number of automated decision-making processes.

My office has recommended to the review of the Privacy Act that entities engaging in automated decision-making should include a meaningful explanation of these automated decisions in their privacy policies and collection notices.

An example is a digital platform that assesses which job advertisements are relevant to show an individual based on their personal information. This seems simple, but being shown or not shown a job can impact career opportunities. What information should be relevant here? Generally we would say that gender is not relevant. But what about information such as which university the applicant attended, or what suburb they live in?

That is why we have also recommended that the handling of personal information for the purposes of automated decision-making with legal or significant effects should be one of several restricted practices set out in the Privacy Act.

The restricted practices regime would require entities proposing to engage in certain high-privacy-risk activities to undertake a Privacy Impact Assessment, to consider and mitigate the risks to individuals, and to have the practice audited by an independent third party. This provides upfront assurance to individuals that the practice is a fair and reasonable use of their data when they are asked to tick that box.

We have also recommended a prohibited practices regime to prohibit certain activities entirely. In our regulatory experience, there are some activities where the privacy risks cannot be appropriately mitigated and so we think they should be prohibited before they cause harms in the community. These activities include profiling, online personalisation and behavioural advertising using children’s personal information.

This is consistent with the approach taken in Europe with the recent Digital Services Act.

Global privacy and collective harms

So privacy laws and practices must put individuals at the centre.

Impacts on privacy may differ between members and sectors of the community – and not everyone has the same understanding of how they can take control of their privacy.

We also need to be aware that collective data handling practices can impact individuals and, in turn, the public good. How those data practices are experienced by individuals can depend on their characteristics – age, gender, language, disability – and also their personal situation.

It is becoming increasingly clear that singular privacy decisions are capable of impacting other people and the community at large.

We have seen how the cumulative effect of practices can have significant impacts, without necessarily presenting a significant harm to any one individual. Consider the activities of Cambridge Analytica in the UK, and the potential impact of individual privacy transactions on democracy more broadly.

As regulators, we must recognise the importance of being connected, and of relying on our network of regulators – both across regulatory schemes, such as consumer and competition, communications, and online safety regulation, and across jurisdictions internationally – to protect the collective public interest.

Information access

I mentioned open by design earlier in my remarks, and I’d like to return to the related function of the OAIC: access to information.

Just as the right to privacy is a human right and enables other human rights, access to information is integral to the integrity of a modern democratic framework.

Article 19 of the ICCPR states that everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds.

These commitments are also reflected in the United Nations’ Sustainable Development Goal 16, to which Australia has committed.

So, access to information is a fundamental principle, a foundation of our democratic way of life, and necessary for the exercise of other rights and freedoms.

The OAIC, which regulates both freedom of information and privacy laws, sits at the intersection of these rights.

And the shift to the digital world also encourages us to explore how it can improve access to information.

Australian Government agencies all have a responsibility to manage information as a national resource, for public benefit. To realise the value of that information, access by design must be built into information management by default.

This requires the internal and external-facing information systems of government agencies to build in access to information as a key design feature. In that way, citizens can be proactively and efficiently provided with the information they seek.

In conclusion: there are 2 sides to effective information management – aiding access to information through our freedom of information laws, and protecting and respecting personal information. These 2 pursuits can both promote human rights.

The acceleration of the digital world has opened the door to increased innovation and economic opportunity, along with exponential growth in the collection of personal information.

It has also increased practices such as data sharing, tracking and monitoring. Digital economy initiatives can challenge privacy regulation as they can involve handling personal information in new and opaque ways.

And so we look forward to the finalisation of the Attorney-General’s Department’s review of the Privacy Act to advance law reform in this important area.

This is also about building trust and ensuring that data flows with trust.

As members of the legal profession, you have a role to play in ensuring trust in the digital economy.

How the justice system interprets and applies our rights to privacy and access to information will in part inform whether the rule of law will prevail in these emerging virtual domains where data knows no borders.

Thank you for your attention tonight and I wish you well for the remainder of the conference.