Our principles

Our panellists, Professor James Maclaurin and Dr Karaitiana Taiuru, articulated principles for using predictive and generative AI in Aotearoa New Zealand. Given the rate of progress in AI development, the principles should be revisited annually, or as often as relevant authorities see fit. The principles may be useful for developers, medical professionals, patients, users, and regulators. We acknowledge that some principles are in tension with one another; these tensions help to frame policy choices.


A. Implementing Te Tiriti o Waitangi and recognising tikanga Māori

Globally, collective rights for Indigenous populations are recognised and affirmed by the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP). New Zealand gave its support to the declaration in 2010, acknowledging Māori as tangata whenua and affirming a commitment to the common objectives of the declaration and Te Tiriti o Waitangi. Te Tiriti and its principles require ongoing consideration as the breadth of applications for AI in healthcare delivery continues to evolve.

Principle 1

Mana whakahaere: effective and appropriate stewardship or kaitiakitanga over AI health systems recognises that Māori data are a taonga and subject to Māori data sovereignty principles determined by Te Tiriti. This includes rights held individually and collectively by whānau, hapū, iwi, and Māori organisations.

Principle 2

All AI systems will embed Māori leadership, decision-making, and governance at all levels of the system life cycle, spanning inception, design, release, and monitoring.

Principle 3

Mana motuhake: enabling the right for Māori to be Māori (Māori self-determination); to exercise authority over their lives, and to live on Māori terms and according to Māori philosophies, values, and practices, which are framed by te ao Māori (the Māori world), enacted through tikanga Māori (Māori philosophy and customary practices), and encapsulated within mātauranga Māori (Māori knowledge).

Principle 4

Mana tangata: AI systems will support equity in health and disability outcomes for Māori (individually and collectively) across their life course and contribute to Māori wellness.

B. Safe and effective AI

AI must be safe, not exposing patients to increased levels of risk. It must be effective in achieving the goals set out in the Pae Ora | Healthy Futures Strategies 2023 to achieve health equity and improve health outcomes for all. This will require: the development of frameworks for assessment of AI in various healthcare contexts; better understanding of the limitations and risks of AI systems; and the development of rules and governance frameworks across the health system.

Principle 5

Health delivery entities must have policies regulating the use of AI. Such policies should specify an assessment process for AI tools to go through before use and an ongoing evaluation process for accuracy, efficacy and safety, addressing issues such as ease of use, bias, security, and data sovereignty.

Principle 6

Assessments of AI for use in healthcare should be made with an opportunities lens, comparing the performance of AI tools, and their capacity to reduce mental and physical harm, against the alternatives available within the Aotearoa New Zealand health system.

Manatū Hauora | Ministry of Health has the Health Information Governance Guidelines; other entities will need to adapt these or develop their own policies.

C. AI for equity

If we are to make good on Pae Ora, our deployment of AI must focus on enhancing equity in access and in outcomes. There must be ongoing audits and evaluation of potential biases and prioritisation of use cases that enhance equity. While inappropriate use can lead to inequity, early evidence suggests that AI is capable of enhancing equity by lowering barriers to knowledge, monitoring human bias, enhancing access to healthcare, and increasing the productivity of healthcare professionals. If such productivity gains prove viable, it is essential that they be harnessed to increase the equity of healthcare provision.

Principle 7

AI tools should be designed and implemented to address health inequities by prioritising the health needs of disadvantaged groups, including those identified as priority groups by Manatū Hauora and other groups as appropriate.

Principle 8

All use of AI should be subject to ongoing audit and evaluation for bias.

Principle 9

The permissibility of AI use should be judged relative to the actual healthcare that individuals are likely to receive, not to an ideal level of treatment and support.

D. Effective control of AI

Where AI is supervised by humans, it is essential that its supervision be effective. However, as confidence, capability, and trust build, we will not always want to supervise every AI system. There will be low-risk domains in which supervision is not cost-effective, and, as AI becomes increasingly powerful, we will become less competent at supervising it.

Principle 10

Where AI is supervised:

a) All AI-generated information relevant to treatment must be independently checked before it is acted on

b) Supervisors must be competent to make the decisions that we are asking AI to make, i.e., the operation of an AI must be within the scope of practice of those tasked with its supervision

c) Everyone who uses AI in a clinical setting should be trained in its use, for example, the circumstances in which a given AI tool is likely to be more and less accurate, and in relevant principles of prompt engineering

AI may be used unsupervised where:

d) The use is low-risk and its performance is subject to ongoing audit and evaluation showing that it increases accuracy, equity, or patient satisfaction or that it decreases cost without sacrificing accuracy, equity, or patient satisfaction


e) The use is medium-risk and its performance is subject to ongoing audit showing that it is demonstrably more accurate and/or unbiased than the human decision-makers it is replacing

E. Evaluated and trusted AI

The use of AI in health contexts must be both trusted and trustworthy. People should understand the role that AI plays in their care. Significant effort is being put into explaining the nature and reliability of the technology. But generative AI is, by its nature, less explainable. In some cases, its trustworthiness is best secured by effective and well-communicated audit and evaluation rather than by communicating the mechanics of its operation and the nature of the vast amount of data, sometimes sensitive, on which it is built.

Principle 11

The trustworthiness of predictive AI should continue to be secured by using relevant and representative training data, maintaining transparency, and retaining human oversight (as construed by the most up-to-date guidance for our national context, such as the Principles for the Safe and Effective Use of Data and Analytics, jointly developed by Te Mana Mātāpono Matatapu | Privacy Commissioner and Tatauranga Aotearoa | Stats NZ, and Artificial Intelligence and the Information Privacy Principles, set out by the Privacy Commissioner).

Principle 12

The trustworthiness of generative AI should be underpinned by ongoing well-communicated audit and evaluation. Such audit should address accuracy, bias, fitness for purpose, privacy, data security, and data sovereignty.

Principle 13

Aotearoa New Zealand should explore methods for mitigating bias and for securing data sovereignty, particularly Māori data sovereignty. These might include the development of generative AI in New Zealand which either stands alone or works with commercial AI based in other countries. Health data of people in New Zealand must not be collected, defined, stored, or processed in systems that are not subject to New Zealand law.

Principle 14

New Zealand should develop a strategy to communicate widely the benefits and risks of members of the public using generative AI as an alternative to consulting healthcare professionals.

F. Responsible AI

Effective use of AI requires clear rules about liability and responsibility.

Principle 15

The use of AI as a ‘practitioner co-pilot’ can be mandated in domains in which its performance is subject to ongoing audit and evaluation showing that it is more accurate and no more biased than human decision-makers.

Principle 16

Health organisations are responsible for decision-making (as per principle 5) about the purchase, provisioning, audit, evaluation, and authorisation of AI systems.

Principle 17

Practitioners supervising AI are responsible for its operation; they remain liable for decisions made using AI-generated advice and for meeting the requirements of the Health Practitioners Competence Assurance Act 2003.

Last edited on: 15th December 2023