Existing New Zealand Policies for Algorithms, Data and AI Use in Education

The use of algorithms, data and artificial intelligence (AI) systems is becoming prevalent in society. Education is no exception, and the sector has additional complexities to navigate around safeguarding our tamariki. Schools and educational institutions collect personal information about students, as well as a broad range of information relating to attendance, academic achievement, health and behaviour. This article gives an overview of existing New Zealand legislation and guidelines relevant to digital content, the collection of data and the use of algorithms. We also explore currently unregulated areas and future areas of consideration for educators and policymakers.

Collection and Use of Data

The Privacy Act 2020 requires security safeguards to prevent the loss, misuse or unauthorised disclosure of personal information. Misleading an organisation in order to access, use, alter or destroy someone else’s information is also a criminal offence.

Principles around data collection established in the Privacy Act include:

  • only collecting information that is necessary for a lawful purpose,
  • taking reasonable steps to ensure information is accurate, complete, relevant and up to date before it is used or disclosed,
  • making sure the person knows why the information is being collected, who will receive it, whether providing it is compulsory or voluntary, and what will happen if they do not provide it,
  • collecting information only in ways that are fair and not unreasonably intrusive,
  • limiting disclosure of personal information to circumstances such as where it is authorised, where the information is used anonymously, where disclosure prevents endangering a person’s health or safety, or where it is necessary to maintain the law.

Agencies are required to take particular care to ensure the collection of information from children and young people is fair and appropriate, but there are no explicit requirements that apply specifically to the collection or processing of minors’ personal data. Failing to notify a privacy breach can also result in a conviction and/or a fine of up to $10,000. The principles and guidelines of the Privacy Act may nonetheless be difficult to enforce, because both a breach and the resulting harm must be proven. This is particularly challenging for digital data, which is often processed and stored internationally.

The Data Protection and Use Policy (DPUP) from NZ Digital Government provides guidelines for the safe and respectful collection and use of data by government agencies and service providers. It is framed by five principles: He Tāngata, to improve people’s lives; Manaakitanga, to respect people’s mana and dignity; Mana Whakahaere, to give people choice and enable access; Kaitiakitanga, to act as a steward with understanding and trust; and Mahitahitanga, to work as equals to create and share knowledge. The policy’s guidelines overlap with the Privacy Act: collect only what is needed; help people understand what is happening to their information, what choices they have and why; give people access to see and correct their information; and ensure information is used to create relevant and useful insights that deliver value and improve wellbeing. The framework advises carrying out risk assessments, but it depends on self-assessment by each organisation, which likely means little monitoring or enforcement.
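As a rough illustration of what such self-assessment could look like in practice, the minimal sketch below records whether a school has worked through the overlapping DPUP and Privacy Act points for a given data collection. The field names, questions and structure are hypothetical assumptions made for illustration; they are not part of the official policy.

```python
# A minimal sketch, assuming a school keeps a simple record per data collection.
# Checklist fields are illustrative only, loosely based on the DPUP / Privacy Act
# points summarised above; they are not official policy wording.
from dataclasses import dataclass, fields


@dataclass
class DataUseAssessment:
    purpose_is_necessary: bool      # only collect what is needed for a lawful purpose
    people_informed: bool           # people understand what is happening to their information
    choices_explained: bool         # people know what choices they have and why
    access_and_correction: bool     # people can see and correct their information
    creates_useful_insight: bool    # the use delivers value and improved wellbeing


def unresolved_items(assessment: DataUseAssessment) -> list[str]:
    """Return the checklist items that still need attention before the data is used."""
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name)]


# Example: a new online learning platform where choices have not yet been explained to whānau.
review = DataUseAssessment(
    purpose_is_necessary=True,
    people_informed=True,
    choices_explained=False,
    access_and_correction=True,
    creates_useful_insight=True,
)
print(unresolved_items(review))  # -> ['choices_explained']
```

Even a simple record like this makes the self-assessment visible and reviewable, which goes some way towards the monitoring the policy itself does not mandate.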

Content

The Harmful Digital Communications Act 2015 sets out principles that digital communications should not:

  • disclose sensitive personal facts about a person,
  • be threatening, intimidating or menacing,
  • be grossly offensive,
  • be obscene or indecent,
  • be used to harass a person,
  • make a false allegation,
  • break confidences,
  • incite or encourage anyone to send a deliberately harmful message,
  • incite or encourage a person to harm themselves or commit suicide,
  • denigrate a person’s colour, race, ethnic or national origins, religion, gender, sexual orientation or disability.

With generative AI, harmful digital communications may be easier to produce. There is also no current New Zealand legislation addressing the risks of disinformation, partly because of contention around freedom of expression. In 2022, the EU passed the Digital Services Act, which aims to give “better protection to users and to fundamental rights online, [and] establish a powerful transparency and accountability framework for online platforms”. It also established new obligations for the protection of minors on any platform used in the EU and for the mitigation of risks and harms.

Copyright

The Copyright Act 1994 automatically gives authors and creators the right to choose how their work will be used, and protection lasts for the lifetime of the author or creator plus 50 years after their death. The aim is to balance the rights of authors and creators with society’s interest in allowing people to access and use their work. Internet material is protected by copyright by default, though there are open-access materials that may use Creative Commons licences. For non-commercial and educational purposes, attribution and acknowledgement may be required. Schools can also hold special licences to copy from books (up to 10% or one chapter), journals or magazines to share with students, and these extend to materials published online. ChatGPT and other large language models have been trained on vast amounts of data from websites and e-books, which likely includes copyrighted work. However, it is not clear whether this use infringes copyright in different jurisdictions. In addition, the copyright status of AI-generated content is still up for debate.

Protection

The Children’s Act 2014 requires safety checking of children’s workers and requires educational institutions to develop a child protection policy, including how to identify child abuse and neglect. With the increased availability of data and algorithms, child protection policies could become more complex, but they could also enable better protection of children, for example through the use of AI algorithms to help prevent suicide. The Intelligence and Security Act 2017 protects New Zealand as a free, open and democratic society while also setting out the appropriate functions, powers and duties of agencies (e.g. the National Assessments Bureau (NAB), the New Zealand Security Intelligence Service (NZSIS) and the Government Communications Security Bureau (GCSB)) to act as necessary to protect New Zealand and New Zealand’s interests. This includes seeking access to information from individuals or organisations on a case-by-case basis; individuals and organisations therefore do not have to share data unless there is a legal warrant to do so.

Algorithms and AI Systems

There are currently no specific regulations around the use of algorithms or AI systems in New Zealand, although there is guidance for the public sector on the use of algorithms and generative AI. The Algorithm Charter for Aotearoa New Zealand, led by Stats NZ, was signed by multiple government agencies as a commitment to transparency and accountability in the use of data, so that New Zealanders can have confidence in government agencies and how they use algorithms. The charter outlines principles of transparency, partnership under te Tiriti, awareness of data limitations and bias, privacy, ethics and human rights, and human oversight. It recommends an evaluation of risk likelihood and impact to guide when the charter should be applied; schools and educational institutions could adopt a similar approach for self-management. Guidance around the use of generative AI (e.g. ChatGPT) in the public service includes not entering personal or sensitive data into generative AI tools, checking outputs for accuracy, being accountable and transparent about decisions, and taking the necessary steps to protect privacy. Schools and educational institutions might also want to consider their procurement procedures and their management of security and privacy for digital tools.
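To make the likelihood-and-impact idea concrete, here is a minimal sketch of a rubric similar in spirit to the charter’s risk matrix, written in Python. The labels, scores, thresholds and suggested responses are illustrative assumptions, not official guidance, and a school would want to set its own.

```python
# A minimal sketch, assuming a likelihood x impact rubric loosely modelled on the
# Algorithm Charter's risk matrix. All labels, scores and thresholds are illustrative.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}      # how often the tool informs decisions about people
IMPACT = {"minor": 1, "moderate": 2, "significant": 3}    # severity of the impact on wellbeing or rights


def suggested_response(likelihood: str, impact: str) -> str:
    """Return a suggested level of oversight from a likelihood/impact rating."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high risk: apply full transparency, human oversight and regular review"
    if score >= 3:
        return "medium risk: document the tool, its data sources and known limitations"
    return "low risk: record the assessment and revisit it if the tool's use changes"


# Example: an attendance-prediction tool that regularly informs pastoral care decisions.
print(suggested_response("likely", "moderate"))  # score 6 -> "high risk: ..."
```

The point of such a rubric is not the exact numbers but the habit of rating each tool before adoption and recording why a given level of oversight was chosen.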

The EU has been one of the first jurisdictions to develop a comprehensive AI Act, which aims to ensure that AI systems are “appropriately controlled and overseen by humans” and are safe, transparent and non-discriminatory. AI systems posing unacceptable risk will be strictly prohibited, including cognitive behavioural manipulation of people or of specific vulnerable groups; social scoring (classifying people based on behaviour, socio-economic status or personal characteristics); and real-time, remote biometric identification systems such as facial recognition. In educational settings, this could mean prohibiting profiling or predictive systems, and systems that “infer emotions”, for social engagement or behaviour management, as these are considered discriminatory and intrusive. New Zealand has no AI-specific regulation, and the broad, rapidly developing nature of the technology makes creating legislation a challenge. Addressing potential risks and outcomes with requirements for accountability, safety and transparency, as well as strengthening data protection and privacy obligations, may need to be considered to minimise harms.

Māori Data Sovereignty

AI relies heavily on the data used to develop the model. Data has the potential to convey mātauranga, which is a taonga, enabling opportunities for innovation and self-determination. Te Tiriti o Waitangi serves as the foundation of the Māori Data Governance Model by Te Kāhui Raraunga. The model establishes processes to uphold and enable collective data rights for better shared and autonomous decision-making, within a trusted and safe data system that supports whānau to flourish. Its guiding values of nurturing data as a taonga to be used for good, putting iwi-Māori data in iwi-Māori hands, and ending exploitative and extractive practices pursue a vision of a people-centred, environmentally responsible future in which mokopuna are safe and empowered to meet the challenges of an uncertain future and thrive. For more detail, see section 6.1 of the AI in healthcare report.

Download the table of resources used to inform this summary here. (Last updated: 4/12/2023)