You’re talking about ChatGPT, right?
ChatGPT might be the catalyst for AI’s prominence in the zeitgeist, but the world of AI is much wider than any single tool. In fact, one challenge for lawmakers is simply defining AI. For our purposes in this blog post, we’re using AI broadly, including:
- Generative AI such as ChatGPT, which, as the name suggests, generates new text (or audio or images) in response to a “natural language” (as opposed to computer code) prompt
- Large language models (LLMs), the statistical representations of language that generative AI runs on
- Algorithms that are used in making decisions that can have important impacts on people’s lives, for example, on eligibility for public services
- Facial recognition
- Machine learning for audience segmentation and targeting
Why should I care?
The infamous letter signed by Steve Wozniak, Elon Musk, and others lists among the potential consequences of AI “loss of control of our civilization”, which is quite alarming. But some experts say focusing on hypothetical future harms ignores actual harms already occurring with AI use and detracts from immediate action that can be taken to develop AI safely and maximise its benefits.
Perhaps the greatest existing harm is that biased AI reproduces and entrenches inequalities. There is an abundance of data available to AI developers, but it is of varying quality. When the data used to build an algorithm isn’t representative, the resulting AI will have bias and discrimination baked in. This is how Amazon came to build a recruiting tool that taught itself to favour men. It also explains how tens of thousands of Dutch families were accused of benefit fraud on the basis of ethnicity and nationality – resulting in economic distress, loss of access to a range of public services, children being taken into foster care, and, arguably, the resignation of the Dutch government. And it’s why facial recognition performs differently by ethnicity, resulting in multiple Black men in the US being arrested for crimes they did not commit on the basis of false facial recognition matches. That said, AI also has the potential to challenge human bias, and there is an opportunity to influence its development to reach equity goals.
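To see how unrepresentative data bakes bias in, consider a toy sketch (all data invented): a naive “model” that learns only the frequencies in past hiring decisions will reproduce whatever skew those decisions contain, with no malicious intent required.

```python
# Toy illustration with synthetic, hypothetical data: a frequency-based
# "model" trained on skewed historical hiring decisions simply learns
# and reproduces that skew.
from collections import Counter

# Historical records heavily skewed towards one group (made up).
history = (
    [("group_a", "hired")] * 80 + [("group_a", "rejected")] * 20 +
    [("group_b", "hired")] * 20 + [("group_b", "rejected")] * 80
)

def train(records):
    """Learn P(hired | group) as simple historical frequencies."""
    outcomes = Counter(records)
    totals = Counter(group for group, _ in records)
    return {g: outcomes[(g, "hired")] / totals[g] for g in totals}

model = train(history)
# Two equally qualified candidates get very different scores,
# purely because of who was hired in the past.
print(model["group_a"])  # 0.8
print(model["group_b"])  # 0.2
```

Real systems use far more sophisticated models than counting frequencies, but the underlying failure mode is the same: the model optimises for agreement with historical decisions, biases included.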
AI’s ability to rapidly analyse large volumes of data also raises privacy concerns. In the facial recognition cases above, the harm caused by the poorly performing facial recognition technology was facilitated by a degree of surveillance. Beyond surveillance, a key privacy issue is in the collection and use of data necessary for AI. There is an asymmetry in power and information between individual consumers and the companies using their data, meaning people have little meaningful control over the way their data is processed, for example, to prevent the creation of sophisticated profiles that enable powerful targeting of advertising or services for a company’s gain. Finally, the data-processing capabilities of AI create a risk that data sets previously considered anonymous because they contained no obviously identifying information (like names or phone numbers) can be matched to individuals. Understanding how AI might affect traditional ideas of privacy enables us to design appropriate data management systems, such as by federating data.
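The re-identification risk can be sketched concretely. In this hypothetical example (all records invented), a health dataset with no names is linked back to individuals simply by joining it against a public register on “quasi-identifiers” such as postcode, birth year, and sex:

```python
# Sketch with entirely made-up records: an "anonymised" dataset can be
# re-identified by matching quasi-identifiers against another dataset.
anonymised_health = [
    {"postcode": "6011", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "1010", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "A. Citizen", "postcode": "6011", "birth_year": 1984, "sex": "F"},
    {"name": "B. Resident", "postcode": "1010", "birth_year": 1990, "sex": "M"},
]

def reidentify(anonymous, register, keys=("postcode", "birth_year", "sex")):
    """Link records whose quasi-identifiers match exactly."""
    index = {tuple(p[k] for k in keys): p["name"] for p in register}
    return [
        {"name": index[tuple(r[k] for k in keys)], **r}
        for r in anonymous
        if tuple(r[k] for k in keys) in index
    ]

for match in reidentify(anonymised_health, public_register):
    print(match["name"], "->", match["diagnosis"])
```

At scale, AI makes this kind of linkage fast and cheap across millions of records and fuzzier matches, which is why removing names alone is not enough to guarantee anonymity.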
How are other countries managing the issue?
The EU is leading the pack on AI regulation, with its AI Act expected to become law later this year. The AI Act classifies AI applications based on risk, and imposes different rules depending on the risk level – including banning applications deemed to have potentially unacceptable consequences.
In contrast, nowhere else has bespoke, overarching legislation. China has developed individual laws for specific AI applications. Canada has a bill in the early stages of its parliamentary process. In the US, the White House has released the AI Bill of Rights, which outlines principles but has no legal force, a bill on algorithmic accountability seems to have run out of political legs, and various state and city governments have passed laws affecting very specific uses of AI. Australia has started a public consultation on how AI should be regulated. We have summarised a snapshot of the current state of play on our webpage.
What laws cover AI in Aotearoa New Zealand at the moment?
There are no AI-specific laws in NZ so far. The only AI-specific policy is the Algorithm Charter, which most government agencies have signed up to. Signatories to the Algorithm Charter have agreed to apply certain principles in how they use algorithms, especially in designing access to public services, but it doesn’t address newer technologies such as LLMs.
But just because we don’t have any AI-specific laws doesn’t mean we don’t have any laws that apply to AI. AI is covered by existing legislation, including the Privacy Act, the Human Rights Act, the Fair Trading Act, and the Harmful Digital Communications Act, among others, and obligations under te Tiriti o Waitangi still apply.
But challenges remain. Under the Privacy Act, for example, there are 13 principles that are meant to govern how organisations use individuals’ data, and the Privacy Commissioner has issued guidance to organisations on managing generative AI. However, these may prove difficult to enforce. Fines for Privacy Act violations are currently capped at $10,000, in stark contrast to the European Union’s General Data Protection Regulation, under which fines can reach €20m ($35.4m) or 4% of an organisation’s global turnover, whichever is greater (the largest fine to date is €1.2b ($2.1b), issued to Meta earlier this year). In cases where an individual believes they have suffered serious harm, or the Privacy Commissioner chooses to refer a case to the Human Rights Commissioner, fines can theoretically reach $350,000, but this amount has never been awarded. The burden of proving serious harm to an individual is difficult to meet, and doesn’t cover situations where social institutions have been harmed but it would be difficult for any individual to claim they personally had been harmed – for example, the use of targeted messaging to influence an election.
There are also gaps created by new technologies. For example, lethal autonomous weapons are not addressed by any current New Zealand law.
New laws aren’t simple
Although there are important gaps in current regulations, there isn’t universal endorsement around the world for urgently creating new AI law. Some advocate a slower approach, noting the need to develop regulation within the context of a broader national AI strategy – with the possible exception of highly problematic uses that a large majority of New Zealanders would likely oppose, such as lethal autonomous weapons.
That said, addressing gaps in data protection and privacy has advantages beyond the implications for AI. Strengthening the governance of data, by making privacy and algorithmic impact assessments standard practice, can foster wide benefits. Some work is already happening in this space across government. StatsNZ, for example, has engaged experts to operationalise the principles of the Algorithm Charter, has been involved, through Te Kāhui Raraunga, in the development of a model of Māori data governance that provides a strong foundation for protecting indigenous data unique to our country, and is currently looking to set up a Centre for Data Ethics and Innovation. This transition will require the development of specialised skillsets within government agencies and the private sector. The Privacy Act may also be tested as AI strains the power of current laws everywhere to enforce existing principles.
There are some principles that can guide us when we do start legislating how AI can be used. Among the most important of these will be transparency: that people know when AI has been used to make a decision that affects their lives, and potentially even when they are exposed to AI-generated content in marketing. Also crucial will be ensuring that there is accountability built into processes that use AI, so that a human is ultimately responsible for the outcomes produced. You can read more about how these are shaping up internationally on our webpage.
AI undoubtedly has the potential to benefit New Zealand immensely. But ensuring we see these benefits without further entrenching inequalities and creating other harms will require careful and strategic action by policymakers and other stakeholders. Although there are a range of views on the optimal nature and pace of AI regulation, it is clear that strengthening general privacy and data protection obligations is a critical first step.