Most NZ businesses have no formal AI policy. Their staff are using ChatGPT, Copilot, Gemini, and a dozen other AI tools — sometimes brilliantly, sometimes in ways that create real legal or reputational risk — and no one’s written anything down about what’s acceptable.
That’s starting to change. Larger NZ employers are formalising AI use as part of their employment agreements and HR policies. If you’re ahead of this, you’ll spend less time firefighting and more time benefiting from what AI can actually do.
This post walks you through exactly what an AI workplace policy needs to cover in 2026 — and gives you a template you can adapt for your organisation.
Why You Need an AI Policy Now
Without a policy, you’re relying on individual staff judgment — which varies enormously. Some of your people are using AI brilliantly and gaining hours back every week. Others are accidentally feeding client data into public AI tools, or submitting AI-generated work without checking it for accuracy.
An AI policy does three things:
- Protects your business from privacy breaches, IP issues, and reputational risk
- Enables good AI use by making it clear what’s encouraged, not just what’s banned
- Sets a baseline so you can build capability consistently across your team
What to Cover: The Six Essentials
1. Approved Tools
List the AI tools your organisation approves for work use — and at what tier. For example:
- Approved for general use: Microsoft Copilot (with business account), ChatGPT Team, Claude for Work
- Approved for non-sensitive use only: Free-tier ChatGPT, Gemini with personal account
- Not approved: Any AI tool not on this list, unless a manager has signed off first
You don’t need to be exhaustive — and you’ll need to update this as tools evolve — but having a starting list saves a lot of confusion.
2. Confidentiality and Data Rules
This is the most important section. Be specific about what data can and cannot go into AI tools. A simple framework:
- Never input into external AI tools: Client personal information, financial data, health records, legally privileged material, employee records, anything under NDA
- Use caution with: Internal strategies, pricing, unreleased product information
- Generally fine: Publicly available information, your own writing you want to improve, generic templates, research on non-sensitive topics
Link this to your Privacy Act 2020 obligations — especially IPP 10 (limits on use) and IPP 12 (overseas disclosure).
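Some teams back a framework like this up with a lightweight automated check before text is pasted into external tools. A minimal sketch in Python (the patterns, labels, and `flag_sensitive` function are illustrative examples, not a real product, and no pattern list will catch everything — treat it as a prompt for human review, not a guarantee):

```python
import re

# Patterns that often indicate personal or confidential information.
# Illustrative only; tune to your own data and add your own categories.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NZ phone number": re.compile(r"\b0[2-9]\d{7,9}\b"),
    "IRD number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return warnings for text about to be sent to an external AI tool."""
    return [
        f"possible {label} found"
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

# A non-empty result means a human should review before submitting.
warnings = flag_sensitive("Summarise this email from jane@example.co.nz")
```

A check like this catches the obvious cases (a pasted client email, a phone number in a support log) but not context-dependent ones, so the "when in doubt, anonymise" rule still does the real work.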
3. Accuracy and Verification
AI tools hallucinate: they generate plausible but false information. This isn't a bug awaiting a fix; it's inherent to how large language models work. Your policy needs to address this directly:
- Staff must verify any factual claims made by AI tools before using them in client-facing or high-stakes work
- AI-generated legal, medical, financial, or technical content must be reviewed by a qualified person before use
- Statistics and citations from AI must be verified against original sources
4. Disclosure and Transparency
Set expectations about when AI use should be disclosed — to clients, to management, in published work. This varies by context:
- Using AI to draft an internal email: probably no disclosure needed
- Using AI to draft a legal document or advice for a client: consider whether your professional obligations require disclosure
- Using AI to create marketing content: disclose if it would materially affect how clients view the content
- Using AI in hiring decisions (CV screening, etc.): disclose to candidates
Check your professional body’s guidelines — many NZ professional associations are developing their own positions on this.
5. Intellectual Property
AI-generated content raises IP questions that NZ law hasn’t fully resolved. Your policy should address:
- Ownership of AI outputs: Work produced using AI during employment is still generally owned by the employer (same as other work product), but check your employment agreements
- Copyright risk: AI tools can inadvertently reproduce copyrighted content. For high-risk uses (publishing, marketing, creative work), have a human review process
- Training data concerns: Some AI tools are trained on data that may create IP issues. This is an evolving area — flag major AI-generated creative work for legal review if in doubt
6. Accountability
The final piece: who owns AI governance in your organisation? Someone needs to:
- Maintain the approved tools list
- Handle breach reports (what happens if someone accidentally inputs confidential data)
- Review and update the policy as AI evolves (at minimum annually)
- Be the escalation point for “is this okay?” questions
In a small organisation, this might be the business owner or operations manager. In a larger one, it sits with legal, IT, or HR.
AI Policy Template for NZ Organisations
Copy and adapt this for your team:
AI Use Policy — [Organisation Name]
Effective date: [Date] | Review date: [Date + 12 months] | Owner: [Role]
Purpose
This policy guides the use of artificial intelligence tools by [Organisation] staff. It aims to enable productive, responsible use of AI while managing privacy, accuracy, and reputational risks.
Scope
This policy applies to all staff, contractors, and volunteers using AI tools for work purposes, including on personal devices.
Approved Tools
The following tools are approved for work use: [List tools]. All other AI tools require approval from [Role] before use for work purposes.
Data and Confidentiality
Staff must not input the following into external AI tools: client personal information, employee records, financial data, legally privileged material, or anything covered by NDA or confidentiality agreement.
When in doubt, anonymise or generalise before using AI assistance.
Accuracy
AI-generated content must be reviewed and verified before use in client-facing, legal, financial, or technical work. Staff are responsible for the accuracy of any work they produce, regardless of whether AI was used in its preparation.
Disclosure
Staff should disclose AI use where it would materially affect how the work is received, or where professional obligations require it. If uncertain, escalate to [Role].
Breach Reporting
If confidential information is accidentally disclosed to an AI tool, staff must report this to [Role] immediately. [Organisation] will follow its Privacy Act 2020 breach response procedure.
Policy Updates
This policy will be reviewed at least annually, or when significant changes occur in AI tools or applicable law.
What Good AI Policy Looks Like in Practice
A policy that just says “don’t misuse AI” is almost useless. The best AI policies we see are:
- Specific about tools — name the approved ones, not just “AI tools”
- Example-driven — show what’s okay and what isn’t with real scenarios from your work
- Paired with training — a policy document no one reads doesn’t change behaviour. A 30-minute session explaining it does.
- Living documents — reviewed at least annually. AI in 2026 looks very different from AI in 2024.
Getting Help
If you want help building an AI policy that actually fits your organisation — not just a generic template — our AI Roadmap Workshop covers governance as part of the full picture. We work with NZ businesses across sectors including legal, healthcare, accounting, and general team training.
We also run workshops specifically on responsible AI use for teams — practical, hands-on, and tailored to your industry. Get in touch to find out more.