If you’re using AI tools in your business — and in 2026, most NZ businesses are — you need to understand how the Privacy Act 2020 applies to what you’re doing. This isn’t about being a compliance nerd. It’s about protecting your clients, your staff, and your business from real consequences.
The good news: the Privacy Act 2020 is built on common-sense principles, and they are straightforward to state. The hard part is applying them consistently when AI is in the loop, because AI changes how data flows, who processes it, and where it ends up.
What the Privacy Act 2020 Actually Covers
The Privacy Act 2020 replaced the 1993 Act and significantly updated New Zealand’s privacy framework. The key changes: mandatory notification of serious privacy breaches, compliance notices from the Privacy Commissioner, and stronger controls on sending personal information overseas. Organisations must now proactively protect personal information, not just avoid obvious misuse.
The Act is built around 13 Information Privacy Principles (IPPs). When AI is involved, these five matter most:
- IPP 1 – Purpose of collection: Collect information only for a lawful purpose, and only what’s necessary. Using AI to scrape or aggregate data “just in case” likely breaches this.
- IPP 3 – Collection from individuals: When collecting from people directly, tell them what you’re collecting, why, and what you’ll do with it. AI-powered forms and chatbots need to be transparent about this.
- IPP 5 – Storage and security: Take reasonable steps to protect information from loss, misuse, or unauthorised access. Cloud AI tools must meet your security standards.
- IPP 10 – Limits on use of information: Don’t use information for a purpose other than what it was collected for. Feeding client data into an AI tool for a different purpose may breach this.
- IPP 12 – Disclosure overseas: Before disclosing personal information to overseas recipients (including cloud AI providers), take reasonable steps to ensure it will be adequately protected.
The AI Tools Most Likely to Create Privacy Risk
Not all AI tools are equal in terms of privacy risk. Here’s a quick triage:
High Risk: Public AI Chatbots with Personal Data
Pasting client information into ChatGPT, Claude, or Copilot without checking the terms of service is the most common mistake NZ businesses make. If your AI provider uses your inputs for training, that data could end up in responses to other users. OpenAI and Anthropic both allow you to opt out of training — but you need to actively do this.
Action: Check your AI provider’s terms. Turn off “improve the product” data sharing. Use API access rather than consumer-tier tools for sensitive data.
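One way to operationalise "never paste sensitive data into consumer tools" is a simple pre-send guard that scans a prompt for obvious identifiers before it leaves your environment. A minimal sketch, assuming regex-based detection is acceptable for a first pass (the pattern names and coverage here are illustrative assumptions; a real deployment would use a dedicated PII-detection library and human review):

```python
import re

# Patterns for a few obvious NZ-relevant identifiers. Illustrative only:
# real PII detection needs a proper library, not three regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "nz_phone": re.compile(r"\b(?:\+64|0)[2-9]\d{7,9}\b"),
    "ird_number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the kinds of personal identifiers detected in a prompt."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """True only if no obvious identifiers were found."""
    return not check_prompt(text)
```

A guard like this won't catch everything (names, addresses, and free-text context slip through), so treat it as a backstop to staff training, not a substitute for it.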
Medium Risk: AI Tools That Process Client Data
This includes AI-powered CRMs, email tools, HR platforms, and document analysis tools. The issue isn’t usually the AI itself — it’s whether the provider is processing your data on overseas servers without adequate protections.
Action: Review your data processing agreements. Ensure vendors who process NZ personal information meet IPP 12 (overseas disclosure) requirements. Look for ISO 27001 certification or equivalent.
Lower Risk: AI Tools Running Locally or With Strong Data Isolation
AI tools running on your own hardware (like OpenClaw) or enterprise AI deployments with strict data isolation carry significantly lower privacy risk. Your data doesn’t leave your environment, so IPP 12 isn’t triggered.
Action: For high-sensitivity industries (legal, medical, finance, HR), consider local AI deployment. It’s no longer as expensive or complex as it sounds.
What the Privacy Commissioner Has Said About AI
New Zealand’s Privacy Commissioner has been increasingly active on AI. Key positions from recent guidance:
- Transparency matters: If you’re using AI to make decisions that affect people (hiring, credit, service delivery), you should be able to explain that — and ideally disclose it.
- Consent doesn’t override everything: Even if someone “agrees” to terms, using their data in unexpected ways may still breach the IPPs.
- Automated decisions need care: AI-generated outputs that affect someone’s rights or opportunities carry higher obligations. You need to be able to explain the decision and offer a review pathway.
- The Algorithms Charter: Government agencies must follow the NZ Algorithms Charter, which requires transparency and human oversight of automated decision-making.
Private businesses aren’t bound by the Charter, but it’s a good benchmark for responsible AI use.
Practical Privacy Checklist for AI Use in Your Business
Here’s what you should have in place in 2026:
- AI Tools Inventory — List every AI tool you use that touches personal data. Include the provider, where data is stored, and whether you’ve reviewed the terms.
- Data Minimisation Policy — Train staff to never paste more data into an AI tool than is necessary for the task. Names + job titles = often fine. Full client records = rarely appropriate.
- Provider Agreements — Ensure you have data processing agreements with AI providers that handle personal information. This is a contractual form of the IPP 12 protection.
- Staff Training — Your team needs to understand what’s appropriate. Most privacy breaches involving AI aren’t malicious — they’re just people not thinking about where the data goes.
- Breach Response Plan — Under the Privacy Act 2020, you must notify the Privacy Commissioner and affected individuals of a notifiable privacy breach (one that has caused, or is likely to cause, serious harm). Know what you’d do if an AI tool leaked client data.
- Privacy Impact Assessment (PIA) — For any significant new AI deployment, run a PIA. The Privacy Commissioner has a template. It takes a few hours and can save you enormous pain.
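The first checklist item, the AI tools inventory, works best as structured data rather than a document nobody updates. A minimal sketch, assuming a simple register format (field names and thresholds here are illustrative assumptions, not a prescribed standard):

```python
from dataclasses import dataclass

# A minimal AI-tools register matching the checklist above.
@dataclass
class AITool:
    name: str
    provider: str
    data_location: str       # where the provider stores data, e.g. "NZ", "US"
    terms_reviewed: bool     # has someone read the data-use terms?
    dpa_in_place: bool       # is a data processing agreement signed?
    touches_personal_data: bool

def needs_attention(tool: AITool) -> list[str]:
    """Flag gaps against the checklist for tools that handle personal data."""
    issues = []
    if not tool.touches_personal_data:
        return issues
    if not tool.terms_reviewed:
        issues.append("review provider terms")
    if not tool.dpa_in_place:
        issues.append("no data processing agreement")
    if tool.data_location != "NZ":
        issues.append("check IPP 12 overseas-disclosure protections")
    return issues
```

Running `needs_attention` over the register gives you a standing to-do list for the provider agreements and IPP 12 checks above, and makes the inventory something you can actually audit.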
Sector-Specific Considerations
Legal Professionals
Solicitor-client privilege and the Privacy Act both apply. Don’t use public AI tools for anything covered by privilege. Consider local AI deployment or verified enterprise tools with data isolation. The NZ Law Society has not yet issued formal guidance, but the general principles strongly suggest caution.
See: AI tools for Christchurch lawyers | AI for NZ lawyers in 2026
Healthcare
Health information is a special category under the Privacy Act and the Health Information Privacy Code 2020. It carries the highest protection obligations. AI tools processing patient records must meet very high standards — local deployment or health-certified cloud solutions only.
See: AI for NZ healthcare professionals
HR and People Teams
Using AI in hiring (CV screening, interview scoring) creates significant risk. Automated hiring decisions must be explainable and must not discriminate on grounds prohibited by the Human Rights Act 1993. If your AI tool uses biased training data, you’re potentially liable for the outputs. Disclose AI use in hiring processes to candidates.
Financial Services
The Financial Markets Authority and the Reserve Bank impose their own overlapping requirements. AI-generated financial advice carries additional obligations, and credit decisions made with AI assistance must be explainable to the consumer.
Common Questions
Does GDPR apply to NZ businesses?
If you’re processing data about people in the European Union, yes — GDPR applies regardless of where your business is based. New Zealand has an “adequacy decision” from the EU, meaning data can flow to NZ without extra steps, but if you’re targeting EU customers, you still need to comply with GDPR.
Is it okay to use ChatGPT for work tasks?
Yes — with the right practices. Use a business account with training data opt-out enabled. Never paste identifiable client or employee information. Stick to tasks that don’t require personal data: drafting, research, summarising public information, creating templates.
What if a vendor says they’re “GDPR compliant”?
That’s a good start, but doesn’t automatically satisfy NZ Privacy Act requirements. You still need to assess whether the arrangement meets your IPP obligations, particularly IPP 12 on overseas disclosure.
Do I need to tell clients I’m using AI?
It depends. If you’re using AI in ways that affect them — analysing their data, generating advice, making decisions — transparency is good practice and increasingly expected. In some contexts (particularly government and financial services) it may be required. When in doubt, disclose.
Getting Your Team AI-Ready and Privacy-Compliant
The businesses that get this right in 2026 aren’t necessarily the most cautious — they’re the most prepared. They’ve built AI practices that are genuinely useful AND responsible. That’s a competitive advantage.
At Gen AI Training, we help NZ businesses build practical AI capability that respects privacy, security, and professional obligations. Our AI Roadmap Workshop includes a privacy and compliance review tailored to your industry and the tools you’re using.
If you’re a legal, healthcare, or financial services firm with specific concerns, our professional AI training covers privacy compliance in depth.
This post provides general information only and does not constitute legal advice. For specific privacy law questions, consult a qualified NZ privacy lawyer or contact the Office of the Privacy Commissioner at privacy.org.nz.