Natalie Le
Director
Generative Artificial Intelligence (GenAI) is transforming how businesses operate — from automating workflows and enhancing customer experiences to improving decision-making. As GenAI systems become more integrated into workplace activities, they present businesses with both opportunities and challenges.
Knowing the risks
Publicly available or web-based GenAI tools are platforms that can be accessed via a web browser, software or application. Popular publicly available GenAI tools that are becoming increasingly familiar include ChatGPT, Grammarly, Claude, Copilot and Gemini.
In our October 2024 guidance we advised that regulated entities (organisations subject to the Privacy Act) refrain from entering personal information, particularly sensitive information, into publicly available tools. Once personal information has been input into GenAI systems, depending on the relevant privacy settings, it will be very difficult to track or control how it is used, and potentially impossible to remove.
However, as uptake of AI tools expands, so too do the options available to users at the individual, organisational and enterprise levels. Increasingly, organisations have more choice and control around their use of AI tools, such that they may be able to appropriately account for or mitigate privacy risks that arise.
For example, some approaches to AI tools, such as AI systems which run on-premises or on private cloud, may carry fewer privacy risks than others, because they won’t result in a secondary disclosure of personal information being made to another entity.
Nevertheless, if entities are using personal information with AI systems they will need to ensure they actively manage the privacy risks that arise.
Those risks extend beyond disclosure to include secondary uses, new collections, and concerns around the security and accuracy of personal information. They’ll also need to consider what changes need to be made to their privacy policies, collection notices and other communications with users and customers.
The below case study, a fictional example based on a notification made to the OAIC under the Notifiable Data Breaches scheme, provides a reminder of what can go wrong when using publicly available GenAI tools like ChatGPT in the workplace:
Human error or an organisational failure? – a practical example
CarCover, a car insurance company, permits its employees to use individual accounts on publicly available GenAI products to assist in performing routine tasks, such as summarising documents and producing simple reports.
To support this practice, CarCover has an internal policy governing the use of GenAI tools which prohibits the uploading of personal information to these platforms, and has deployed third-party technical measures to identify any incident of personal information being uploaded to a GenAI platform and notify it accordingly.
In breach of CarCover’s policy, an employee uploaded a customer’s financial hardship application, including information about their health and family circumstances, to ChatGPT to generate a summary report.
By uploading the application to ChatGPT, CarCover not only disclosed the customer’s sensitive information without consent, but the generated summary report also downplayed relevant and key aspects of the customer’s application. Relying on the summary report, CarCover refused the customer’s hardship application, resulting in significant financial and emotional distress to the customer.
As this case study highlights, when not managed appropriately, the use of publicly available GenAI products in the workplace can lead to data breaches, reputational damage, harm to individuals and possible regulatory consequences.
Use of publicly available GenAI tools in the workplace requires strong privacy governance to ensure that a business adheres to legal obligations.
No matter the controls and policies in place, businesses must also be aware that there is always a human element — and ensure that staff know what information is appropriate, or permitted, to enter into a GenAI platform.
In practice, steps businesses can take include:
- conducting a Privacy Impact Assessment to understand the impact of the use of publicly available GenAI tools to ensure that risks can be managed, minimised or eliminated;
- in cases where organisational or privacy risks are too high, prohibiting the uploading of personal information to publicly available GenAI products, or prohibiting their use entirely;
- developing policies and procedures that govern the business’ use of GenAI tools, including ensuring staff are equipped to check and account for inaccuracies in the tools’ outputs;
- ensuring privacy policies and collection notices reflect the organisation’s use of GenAI, where relevant;
- when using organisational or enterprise licences for publicly available tools, actively engaging with and managing privacy settings, including restricting the tool provider’s access to user data for the purposes of training AI models, and restricting retention of user data where possible; and
- communicating policies and educating staff on how to responsibly use publicly available GenAI products.
How can we learn from CarCover?
Following this incident, CarCover reminded staff of its internal policy on the use of GenAI tools and scheduled bi-annual staff training sessions and refreshers. To reduce the likelihood of similar incidents, CarCover also:
- deployed technical measures to prevent personal information from being uploaded outside of its client management systems; and
- revised its internal policy to limit the use of GenAI in certain business areas and in circumstances where risks to organisational decision-making were identified.
Key takeaway
To harness the opportunities publicly available GenAI tools may offer, businesses must be alert to the complex privacy risks involved and potential harms to individuals.
Once personal information has been input into GenAI systems, it is difficult to track or control how it is used, and potentially impossible to remove.
Certain technical approaches can help minimise some of the privacy risks that arise when using this technology. However, a holistic and comprehensive approach to compliance and risk management will be required to ensure that your organisation adheres to its Privacy Act obligations.