Publication date: 26 August 2019

Introduction

The Office of the Australian Information Commissioner (OAIC) welcomes the opportunity to make this submission on the Standards Australia discussion paper, Developing Standards for Artificial Intelligence: Hearing Australia’s Voice (the discussion paper). The discussion paper provides a timely opportunity to consider how standards can increase organisational accountability for entities using and designing artificial intelligence (AI) technologies in Australia.

The OAIC acknowledges that the continued development and use of a variety of AI technologies has the potential to create significant benefits and opportunities for Australian society. This submission will focus on AI tools which rely on personal information, and the potential for the use of these technologies to amplify existing privacy risks. For these types of AI tools, privacy and data protection must be a central consideration when creating standards applying to the design, development and use of these technologies.

When developing standards on AI and other emerging technologies, it is essential that Standards Australia considers how these standards fit within Australia’s technology-neutral, principles-based privacy framework, which increases organisational accountability and empowers individuals to make informed choices about how to manage their privacy. This will in turn help increase consumer trust and confidence in the use of AI technology.[1]

As personal information in the digital age flows across national borders, it is also important to ensure a global, interoperable system of regulatory oversight. Accordingly, any AI standards must draw on domestic and international privacy and related frameworks, policy and guidance to ensure that efforts to guide the use and development of AI tools are aligned.

This submission sets out the current privacy regulatory framework, highlights privacy challenges posed by AI technologies which Standards Australia may wish to consider, and discusses international developments in AI regulation.

The OAIC would welcome the opportunity to continue to engage with Standards Australia on the privacy issues involved in the development of AI standards.

Australia’s privacy framework and artificial intelligence

As privacy considerations are a central concern in regulating AI technologies, Australia’s existing privacy and data protection framework should form the foundation of AI standards. This includes ensuring that AI standards draw on the data protection principles underlying Australia’s privacy framework and are consistent with key definitions such as the meaning of personal and sensitive information.

The OAIC regulates the Privacy Act 1988 (Cth) (the Privacy Act), which outlines how organisations and agencies covered by this legislation can handle personal and sensitive information. The Privacy Act contains 13 Australian Privacy Principles (APPs) which, like other data protection laws internationally, draw on several core data protection principles.[2]

As principles-based law, the APPs are technology neutral, flexible, and can adapt to changing and emerging technologies — including AI. The Australian Information Commissioner and Privacy Commissioner also has powers to approve and register enforceable ‘APP codes’, and to prepare guidelines and other resources to provide greater particularity around the application of principles-based law. Appendix 1 of this submission sets out particular OAIC resources that may be of assistance to Standards Australia.

Standards Australia should consider how AI standards can complement and reflect Australia’s principles-based data protection framework, including the potential for AI standards to serve as an auditable system of assurance for organisations. Further information on a third-party audit or certification scheme is outlined below.

Privacy challenges posed by AI technologies

While the full extent to which AI amplifies the existing challenges to individuals’ privacy is still emerging, it will be essential for AI standards to address these issues to ensure that entities handle personal information in accordance with legal requirements and community expectations.

The section below highlights some of these emerging privacy challenges and suggests potential steps to address these issues. Standards Australia may wish to consider these issues when developing AI standards.

Organisational accountability and transparency

Transparency, accountability, choice and control are central themes in the Privacy Act and the APPs. These themes correspond with the objectives of APP 1 to ensure that regulated entities are open and transparent about their information handling practices.[3]

Businesses and federal government agencies are required under APP 1.2 to take reasonable steps to implement practices, procedures and systems that will ensure compliance with the APPs. This supports entities taking a privacy-by-design approach, implementing robust privacy governance processes, and being accountable for their handling of personal information. These obligations complement the notice requirements in the APPs, outlined below.

Standards Australia may wish to consider how standards can promote a culture of responsible, accountable and transparent deployment of AI tools which embeds privacy-by-design principles at both the developmental and utilisation stages of AI projects.

This could include measures in relation to the following areas:

  • Purpose for collection — Organisations are required to provide clear and easily understandable information about the purposes for which they collect personal information.[4] However, this may present challenges, as AI technologies may produce unexpected results that the programmer did not anticipate or initially understand. This may also mean that the purpose for collecting the information changes over time. AI standards may assist entities in addressing these challenges.

  • Use of personal information — Organisations should be transparent about their use of AI tools to process personal information. In respect of automated decision-making technologies, the organisation should be able to provide details about:

    • the personal information and other factors considered by the AI tool
    • the weighting of those factors
    • how the decision was made.

    AI standards may be able to set appropriate baselines for the information that an organisation should be able to provide to an individual, if required (for example, if an individual queries an adverse decision made about them by an AI tool).[5] An illustrative decision record capturing this kind of information is sketched at the end of this list.

  • Accountability and human intervention — Organisational accountability is increased if individuals can contest automated decisions which negatively impact them. For example, Article 22 of the EU’s General Data Protection Regulation (EU GDPR) requires there to be avenues for human intervention in relation to certain types of automated decision-making. AI standards may assist entities to create systems, procedures and mechanisms for individuals to seek appropriate explanations of AI systems and contest automated decisions where appropriate.
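
By way of illustration only, the following sketch (in Python 3.9+) shows one possible shape for a decision record that an organisation could retain to support the transparency and contestability measures above. All names and fields are hypothetical assumptions and are not drawn from any existing standard.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        subject_id: str               # identifier for the affected individual
        outcome: str                  # e.g. "application declined"
        factors: dict[str, float]     # personal information and other inputs considered
        weights: dict[str, float]     # weighting applied to each factor
        model_version: str            # which version of the AI tool made the decision
        made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        human_reviewed: bool = False  # set once a person reviews a contested decision

        def explanation(self) -> str:
            """A plain-language summary an entity could provide on request."""
            ranked = sorted(self.weights, key=self.weights.get, reverse=True)
            return (f"Outcome '{self.outcome}' (model {self.model_version}); "
                    f"most influential factors: {', '.join(ranked[:3])}.")

Retaining a record of this kind would allow an entity to respond to an individual who queries an adverse decision, and to route contested decisions to human review.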

Standards Australia may also wish to draw from the Australian Government Agencies Privacy Code,[6] which particularises the openness and transparency requirements of APP 1.2 for federal government agencies. Agencies are required to have a privacy management plan, appoint a privacy officer, take steps to enhance internal privacy capability (including providing appropriate education and training for staff) and undertake a Privacy Impact Assessment (PIA)[7] for all ‘high privacy risk’ projects.[8] The OAIC considers that federal government initiatives utilising AI technologies that rely on personal information are likely to be ‘high privacy risk’ projects. Organisations should also consider undertaking a PIA when utilising AI technologies. These accountability and transparency measures may usefully be applied within an AI standard that addresses governance issues.

Standards Australia may wish to refer to the following resources when considering these issues:

  • Part 2.1 of the OAIC’s Guide to data analytics and the APPs, which considers the open and transparent management of personal information when undertaking data analytics activities[9]
  • Chapter 1 of the OAIC’s APP guidelines on open and transparent management of personal information[10]
  • Chapter 5 of the OAIC’s APP guidelines on notification requirements under APP 5[11]
  • The UK Information Commissioner’s Office blog series on an AI Auditing Framework.[12]

Privacy self-management and consent

The provisions in the Privacy Act that require organisational accountability are essential in supporting privacy self-management, which allows individuals to exercise choice and control by understanding how their personal information is collected, handled, used and disclosed.

Mechanisms in Australia’s privacy framework aimed at fostering privacy self-management include notice requirements for regulated entities when collecting personal information from an individual, and requirements to have a clearly expressed and up-to-date privacy policy. The Privacy Act also requires regulated entities to seek voluntary, informed, current and specific consent in certain circumstances, for example when collecting sensitive information.

AI tools commonly rely on technologies which may be challenging to understand and explain. This lack of transparency about the operation of these tools may make it challenging for individuals to understand how regulated entities will use, or have used, their personal information. It may also make it difficult for individuals to provide informed and specific consent, for example to the collection of sensitive information for the purpose of processing by AI technologies.
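
As a purely illustrative sketch, a consent record built around the voluntary, informed, current and specific elements described above might take the following shape (in Python; the field names and the staleness period are assumptions, not requirements of the Privacy Act):

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class ConsentRecord:
        subject_id: str
        purpose: str            # the specific purpose consented to, e.g. "AI model training"
        notice_id: str          # the collection notice shown when consent was sought
        given_at: datetime      # timezone-aware timestamp of when consent was given
        withdrawn: bool = False

        def is_current(self, max_age_days: int = 365) -> bool:
            """Treat consent as stale after an assumed period (a policy choice)."""
            age = datetime.now(timezone.utc) - self.given_at
            return not self.withdrawn and age <= timedelta(days=max_age_days)

        def covers(self, proposed_purpose: str) -> bool:
            """Specific consent extends only to the purpose for which it was given."""
            return self.is_current() and proposed_purpose == self.purpose

Recording the notice shown alongside the consent supports the ‘informed’ element, while the exact purpose match supports specificity.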

Standards Australia may wish to consider how standards can assist entities to provide clear, comprehensive and understandable information to satisfy notice requirements and facilitate valid consent. Resources which may be of assistance in considering these issues include:

  • the OAIC guidance on the meaning of consent under the Privacy Act[13]
  • chapter 5 of the OAIC’s APP guidelines on notification requirements under APP 5[14]
  • the UK Information Commissioner’s interim report setting out practical guidance to assist organisations to explain decisions made by AI tools to individuals.[15]

Quality, accuracy and correction of personal information

An important consideration for Standards Australia is how standards can address issues of bias and discrimination caused by using AI tools. While biased and discriminatory decisions are not necessarily caused by using inaccurate personal information to train the AI technology, privacy controls to ensure the accuracy of personal information may be of assistance in addressing this issue.

A regulated entity is required to take reasonable steps to ensure that personal information it collects is accurate, up to date and complete (APP 10)[16] and must allow individuals to access and correct personal information held about them (APPs 12 & 13).[17]

Standards Australia may wish to consider how standards can help entities to implement appropriate controls to comply with these obligations. These may include standards relating to how often a human will:

  • check and review the accuracy and appropriateness of the personal information and other data used to train an AI tool
  • validate the accuracy of decisions being produced by an AI tool (a sampling check of this kind is sketched below).
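
This minimal, hypothetical sketch has a human review a random sample of an AI tool’s decisions, comparing the agreement rate against an assumed threshold; the names and the threshold are illustrative only.

    import random

    def validate_decision_sample(decisions, human_review, sample_size=50, threshold=0.95):
        """Compare a random sample of AI decisions against human review.

        decisions:    list of (input_record, ai_outcome) pairs
        human_review: callable returning the outcome a human reviewer reaches
        Returns True if agreement meets the (assumed) threshold; False signals
        that escalation is needed under the entity's governance procedures.
        """
        if not decisions:
            return True
        sample = random.sample(decisions, min(sample_size, len(decisions)))
        agreed = sum(1 for record, ai_outcome in sample
                     if human_review(record) == ai_outcome)
        return agreed / len(sample) >= threshold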

Chapters 10, 12 & 13 of the OAIC’s APP guidelines may be of assistance in this regard.[18]

Secondary use and disclosure of personal information

APP 6 outlines when an APP entity may use or disclose personal information. This APP reflects the use limitation principle, which holds that personal information should generally not be used or disclosed for purposes other than the specified purpose of collection.[19]

A regulated entity that holds personal information about an individual can only use or disclose the information for the particular purpose for which it was collected (known as the ‘primary purpose’ of collection), unless an exception applies. Where an exception applies the entity may use or disclose personal information for another purpose (known as the ‘secondary purpose’).

The availability of large datasets of personal and sensitive information may be essential to the training and refining of AI technologies. However, except where the primary purpose for the collection of this information was for processing by an AI tool, regulated entities using personal information to train AI technologies may breach APP 6 unless an exception applies. Key exceptions to APP 6 include:

  • where an individual has consented to a secondary use or disclosure
  • where an individual would reasonably expect the secondary use or disclosure, and the secondary purpose is related to the primary purpose of collection (or, in the case of sensitive information, directly related to it).

Given the issues identified above in relation to transparency and consent, regulated entities reliant on an exception to APP 6 may have to carefully consider how they can satisfactorily demonstrate consent or reasonable expectation.
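
To make the structure of this reasoning concrete, the sketch below encodes APP 6’s primary purpose rule and the two key exceptions as a simplified boolean gate. It is illustrative only: each input stands in for an assessment that in practice requires case-by-case judgement, and all names are assumptions.

    from dataclasses import dataclass

    @dataclass
    class ProposedUse:
        is_primary_purpose: bool    # the use is for the purpose of collection
        has_consent: bool           # individual consented to the secondary purpose
        reasonably_expected: bool   # individual would reasonably expect the use
        related: bool               # secondary purpose related to the primary purpose
        directly_related: bool      # directly related (the stricter test)
        is_sensitive: bool          # the information is sensitive information

    def may_use_or_disclose(use: ProposedUse) -> bool:
        """Simplified gate reflecting APP 6; not a substitute for legal assessment."""
        if use.is_primary_purpose:
            return True
        if use.has_consent:
            return True
        if use.reasonably_expected:
            return use.directly_related if use.is_sensitive else use.related
        return False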

Standards produced by Standards Australia should include governance, management or technical controls aimed at ensuring that organisations comply with APP 6 when using personal information to train AI tools. The OAIC has produced guidance on APP 6 which may be of assistance.[20]

Standardising key definitions

As the regulatory framework around AI is still evolving, it may be useful for Standards Australia to consider whether it is possible to propose a common set of key terms and definitions for this field. The OAIC acknowledges that this may be difficult given that the term AI itself has no settled definition. Nevertheless, the creation of a commonly accepted AI lexicon would help organisations and individuals to understand their rights and obligations in this emerging regulatory area.

The discussion paper appropriately refers to the definition of AI proposed in the Department of Industry, Innovation and Science and Data61’s recent publication ‘Artificial Intelligence: Australia’s Ethics Framework’. Standards Australia may also wish to consider the applicability of definitions used in international instruments and regulations including:

  • the OECD’s ‘Recommendation of the Council on Artificial Intelligence’, which proposes definitions for terms such as AI system, AI system lifecycle and AI knowledge[21]
  • relevant definitions in the EU GDPR, including the meaning of ‘profiling’.[22]

Collecting personal information by creation

The concept of ‘collects’ under the Privacy Act applies broadly, and includes gathering, acquiring or obtaining personal information from any source and by any means. This includes collection by ‘creation’ which may occur when personal information is created with reference to, or generated from, other information the entity holds. An example of collection by ‘creation’ is the capability of some AI tools to infer personal or sensitive information that has not been directly provided by an individual.

Where an entity collects personal information ‘via creation’, it must determine whether it could have solicited and collected the personal information under APP 3. For organisations, this will be where it was reasonably necessary to collect that personal information for the organisation’s functions or activities.[23] If personal information is created which the organisation is not able to lawfully collect, it will need to be de-identified or destroyed.
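
The following is a hedged sketch of how that obligation might be operationalised when an AI tool infers a new attribute about an individual. The function and parameter names are hypothetical, and reasonably_necessary stands in for the entity’s APP 3 assessment.

    from typing import Any

    def handle_inferred_attribute(record: dict[str, Any], attribute: str,
                                  value: Any, reasonably_necessary: bool) -> None:
        """Treat an AI tool's inference as a fresh collection 'by creation'.

        Retain the inferred value only if the organisation could lawfully
        have collected it under APP 3 (simplified here to a single flag).
        """
        if reasonably_necessary:
            record[attribute] = value   # lawfully collected: retain and protect (APP 11)
        else:
            # The inference could not lawfully be collected, so it is not
            # stored; a real system would destroy or de-identify it here.
            pass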

Standards Australia should consider how standards can assist entities to have appropriate policies and procedures in place to identify and appropriately manage personal information which is collected by ‘creation’. Resources which may be of assistance to Standards Australia include:

  • the Guide to data analytics and the APPs, which considers the application of APP 3 where information is collected via creation[24]
  • chapters 3 and 4 of the OAIC’s APP guidelines on the collection of solicited and unsolicited personal information.[25]

Securing personal information

Under APP 11, regulated entities are required to take a range of steps to protect and ensure the integrity of personal information throughout the information lifecycle.[26] In addition, regulated entities must also take reasonable steps to implement practices, procedures and systems that will ensure compliance with the APPs across the information lifecycle.

When collecting significant datasets of personal or sensitive information, for example for the purpose of training AI tools, entities will be required to take steps under APP 11 which are proportionate to this increased risk.

Any AI standards should reflect the considerations relevant to entities in ensuring appropriate privacy protections, which include:

  • Collection of personal information — APP entities should collect the minimum amount of personal information that is reasonably necessary for (and, for federal government agencies, directly related to) one or more of the entity’s functions or activities
  • Privacy-by-design — privacy should be incorporated into practices, procedures, planning, staff training, priorities, project objectives and design processes[27]
  • Assessing the risks — assessing the security risks to personal information, for example by using PIAs, is a key element of a ‘privacy-by-design’ approach
  • Security of personal information — this may include technical security (such as software security, encryption and network security), access controls (such as identity management and authentication) and physical security
  • Destruction and de-identification — entities should consider when and how they will destroy or de-identify information no longer needed for a purpose permitted under the APPs.[28]

The OAIC has released guidance on securing personal information.[29] This guide is particularly relevant for APP entities which are required under the Privacy Act to take reasonable steps to protect the personal information they hold from misuse, interference, loss, and from unauthorised access, modification or disclosure.
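
As a minimal sketch of two of the considerations above (technical security and access controls), the example below uses the third-party Python ‘cryptography’ package and a hypothetical role model; key management and authentication are assumed to exist elsewhere.

    from cryptography.fernet import Fernet

    AUTHORISED_ROLES = {"privacy_officer", "data_custodian"}   # assumed role model

    key = Fernet.generate_key()   # in practice, held in a managed key store
    cipher = Fernet(key)

    def store(personal_info: bytes) -> bytes:
        """Encrypt personal information before writing it to storage."""
        return cipher.encrypt(personal_info)

    def retrieve(token: bytes, role: str) -> bytes:
        """Decrypt only for authorised roles (a simple access control)."""
        if role not in AUTHORISED_ROLES:
            raise PermissionError("role not authorised to access personal information")
        return cipher.decrypt(token)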

De-identification and aggregated data

Information that has undergone an appropriate and robust de-identification process is not personal information and is therefore not subject to the Privacy Act. Whether information is effectively de-identified depends on context: de-identification is achieved where there is no reasonable likelihood of the information being re-identified.

The capability of some AI tools, however, may make it possible to re-identify information that was previously considered de-identified or aggregated. Standards Australia should consider how standards can assist entities to address these de-identification and re-identification risks when using AI tools; an illustrative risk check is sketched after the list below. The OAIC has issued guidance on de-identification which may be of assistance, including:

  • the De-identification and the Privacy Act guide, which provides general advice about de-identification, to assist APP entities to protect privacy when using or sharing information containing personal information[30]
  • the De-identification Decision-Making Framework, produced jointly with CSIRO-Data61, which provides a comprehensive framework for approaching de-identification in accordance with the Privacy Act.[31]
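
The following sketch illustrates one narrow, well-known re-identification risk check: a k-anonymity test over quasi-identifiers. It is not a substitute for the contextual assessment described in the guidance above, and the choice of quasi-identifiers and of k are assumptions.

    from collections import Counter

    def satisfies_k_anonymity(records: list[dict], quasi_identifiers: list[str],
                              k: int = 5) -> bool:
        """True if every combination of quasi-identifier values is shared by
        at least k records, so no one is unique on those attributes alone."""
        combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in combos.values())

    # Example: age and postcode are classic quasi-identifiers
    people = [{"age": 34, "postcode": "2600"}, {"age": 34, "postcode": "2600"}]
    print(satisfies_k_anonymity(people, ["age", "postcode"], k=2))   # True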

Third-party privacy certification and audits

Standards Australia may also wish to monitor emerging reforms to Australia’s privacy law, such as the potential implementation of a third-party privacy audit or certification framework.

The Australian Competition and Consumer Commission’s Final Report into the Digital Platforms Inquiry[32] considered that a third-party privacy audit or certification scheme could help increase the transparency of organisations’ data practices.[33] The OAIC supports the introduction of such a scheme which may assist entities to meet their obligations under the Privacy Act while providing consumers with evidence-based information about the privacy credentials of entities with which they may engage.[34] These schemes may also help alleviate the technical knowledge imbalance between many individuals and organisations utilising AI tools. Given the difficulties with explaining the operation of AI, certification by a trusted third-party auditor with appropriate technical qualifications and expertise may give individuals confidence in the information handling practices of entities using AI tools.

Standards Australia may wish to consider how the development of AI standards could interact with such a scheme.

International data protection developments on AI

It is important that any AI standards draw on international developments and guidance on AI to promote a global, interoperable system of regulatory oversight. In this respect, we note Standards Australia’s creation of a mirror committee to the ISO/IEC Joint Technical Committee 1, Subcommittee 42 – Artificial Intelligence. The OAIC supports Standards Australia continuing to engage with these committees to ensure that any Australian AI standards are compatible with global counterparts.

The importance of data protection regulation and governance in addressing potential risks of AI is also a current area of significant interest for Australian and international data protection regulators. The OAIC draws on the work of other data protection authorities and seeks to ensure global policy alignment where appropriate.

The discussion paper refers to work undertaken internationally by the Organisation for Economic Co-operation and Development (OECD)[35] and by the High-Level Expert Group on Artificial Intelligence set up by the European Commission.[36] Appendix 2 of this submission sets out other regulatory developments that Standards Australia may wish to have regard to as it develops standards in AI.

Appendix 1 — OAIC guidelines

The OAIC has released guidelines which may be of assistance in ensuring that AI standards accurately reflect Australia’s privacy framework:

  • the APP guidelines, which outline the mandatory requirements of the APPs, how the OAIC will interpret the APPs, and matters we may consider when exercising functions and powers under the Privacy Act[37]
  • the Data Breach Preparation and Response guidelines, which assist Australian Government agencies and private sector organisations to prepare for and respond to data breaches in line with their obligations under the Privacy Act[38]
  • the Guide to Data Analytics and the APPs, which addresses some of the privacy challenges when undertaking data analytics activities[39]
  • the Guide to Securing Personal Information, which is relevant for entities that are required under the Privacy Act to take reasonable steps to protect the personal information they hold from misuse, interference, loss, and from unauthorised access, modification or disclosure.[40]

The Guide to Securing Personal Information sets out relevant considerations for regulated entities that are considering adopting standards to assist in ensuring compliance with the security obligations under APP 11. These include:

  • that compliance with a standard does not, in and of itself, mean that an entity has fulfilled its obligations under APP 11
  • that entities should take care to apply the definitions of personal information and sensitive information from the Privacy Act, and not any similar definitions that might be imported by or used in the standard.

We suggest that these considerations will also be relevant for regulated entities looking to adopt a standard relevant to AI.

Appendix 2 — International regulatory developments

Standards Australia may wish to have regard to the following international regulatory developments to further inform the development of standards in AI:

  • The International Conference of Data Protection and Privacy Commissioners (ICDPPC) adopted a declaration on ethics and data protection in AI in October 2018 which endorsed principles including that AI and machine learning technologies should be designed, developed and used with respect for fundamental human rights, and that unlawful biases or discrimination that may result from the use of data in artificial intelligence should be reduced and mitigated.[41]

  • The Privacy Commissioner for Personal Data in Hong Kong released a report titled ‘Ethical Accountability Framework for Hong Kong, China’[42] in October 2018 which considers the ethical processing of data, including in relation to AI tools, and seeks to foster a culture of ethical data governance.

  • The Personal Data Protection Commission of Singapore released the ‘Proposed Model Artificial Intelligence Governance Framework’[43] in January 2019, aimed at providing detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions.

  • The Article 29 Working Party adopted the ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’[44] on 3 October 2017 (last revised and adopted on 6 February 2018). These guidelines, since endorsed by the European Data Protection Board, provide guidance on the regulation of automated individual decision-making and profiling under the EU’s General Data Protection Regulation.[45]

Footnotes

[1] The OAIC considers that it is important that technologies that may impact on the lives of individuals, such as AI, have sufficient community support (‘social licence’). The OAIC suggests that a social licence for AI should be built on several elements including increasing transparency around AI tools, and ensuring that the community trusts AI tools and understands how their personal information is being used.

[2] See the Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[3] Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[4] This corresponds with the purpose specification principle in the Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[5] Providing this type of information may be useful to ensure compliance with APP 6 (see the discussion below on secondary use and disclosure of personal information). A right to this information is provided under articles 13–15 of the EU’s General Data Protection Regulation, which require regulated entities to provide ‘meaningful information about the logic involved’ in relation to certain types of automated decision-making, as well as details regarding ‘the significance and the envisaged consequences’ of that processing. The interpretation and application of these provisions, however, are still developing.

[6] Privacy (Australian Government Agencies – Governance) APP Code 2017.

[7] A privacy impact assessment is a systematic assessment of a project that identifies the impact that the project might have on the privacy of individuals, and sets out recommendations for managing, minimising or eliminating that impact. For more information, see OAIC 2019, Guide to Undertaking Privacy Impact Assessments and the OAIC’s e-learning course on conducting a PIA.

[8] For more information, see OAIC, Australian Government Agencies Privacy Code.

[9] OAIC, Guide to Data Analytics and the Australian Privacy Principles.

[10] OAIC 2019, APP Guidelines, Chapter 1: APP 1 — Open and transparent management of personal information.

[11] OAIC 2019, APP Guidelines, Chapter 5: APP 5 — Notification of the collection of personal information.

[12] Information Commissioner’s Office 2019, AI Auditing Framework.

[13] OAIC 2019, APP Guidelines, Chapter B: Key Concepts.

[14] OAIC 2019, APP Guidelines, Chapter 5: APP 5 — Notification of the collection of personal information.

[15] Information Commissioner’s Office 2019, Project ExplAIn interim report.

[16] This APP reflects the Data Quality principle in the Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[17] This APP reflects the individual participation principle in the Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[18] OAIC, APP Guidelines.

[19] Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[20] OAIC 2019, APP Guidelines, Chapter 6: APP 6 — Use or disclosure of personal information.

[21] Organisation for Economic Co-operation and Development 2019, Recommendation of the Council on Artificial Intelligence.

[22] EU GDPR, article 4.

[23] APP 3 contains slightly different obligations in relation to the collection of personal information by agencies, or the collection of sensitive information.

[24] OAIC, Guide to Data Analytics and the Australian Privacy Principles.

[25] OAIC 2019, APP Guidelines — Chapters 3 & 4.

[26] This obligation reflects the security safeguards principle in the Organisation for Economic Co-operation and Development 1980, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

[27] For more information on ‘privacy by design’, see the OAIC, Guide to Securing Personal Information.

[28] See OAIC, De-identification and the Privacy Act.

[29] OAIC, Guide to Securing Personal Information.

[30] OAIC, De-identification and the Privacy Act.

[31] OAIC and Data61 2017, The De-Identification Decision-Making Framework.

[32] Australian Competition and Consumer Commission 2019, Digital Platforms Inquiry: Final Report, recommendation 17. See also Australian Council of Learned Academies 2019, The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve our Wellbeing, [6.2.3.5].

[33] Treasury is currently seeking stakeholder comment on the ACCC’s recommendations from the Final Report of the Digital Platforms Inquiry.

[34] See the OAIC 2019, Submission to the ACCC Digital Platforms Inquiry preliminary report, page 5 where the OAIC has previously recommended independent third party certification as a proactive method to increase organisational accountability.

[35] Organisation for Economic Co-operation and Development 2019, Recommendation of the Council on Artificial Intelligence.

[36] High-Level Expert Group on Artificial Intelligence 2019, Ethics Guidelines for Trustworthy Artificial Intelligence. This guideline was first released for comment on 18 December 2018 and was issued as final on 9 April 2019.

[37] OAIC, APP Guidelines.

[38] OAIC, Data breach preparation and response — A guide to managing data breaches in accordance with the Privacy Act 1988 (Cth).

[39] OAIC, Guide to Data Analytics and the Australian Privacy Principles.

[40] OAIC 2019, Guide to Securing Personal Information.

[41] International Conference of Data Protection and Privacy Commissioners, Declaration on ethics and data protection in artificial intelligence, 40th International Conference of Data Protection and Privacy Commissioners, 23 October 2018, Brussels.

[42] Office of the Privacy Commissioner for Personal Data 2018, Ethical Accountability Framework for Hong Kong, China.

[43] Personal Data Protection Commission Singapore 2019, A Proposed Model Artificial Intelligence Governance Framework.

[44] European Commission Article 29 Working Party 2018, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679.

[45] These guidelines were adopted by the European Data Protection Board in its first plenary meeting on 25 May 2018.