Most NZ teams using AI have a vague sense it’s saving time. Few have actually measured it. That gap matters — because without measurement, you can’t improve, you can’t justify further investment, and you can’t identify where AI is actually working versus where it’s just creating busywork.
This post gives you a practical framework for measuring AI ROI in your team — simple enough to actually use, rigorous enough to trust.
Why AI ROI Is Harder to Measure Than It Looks
AI ROI is genuinely tricky for a few reasons:
- The benefit is often diffuse. AI doesn’t save 2 hours on one task — it saves 10 minutes on 12 tasks across 8 people. That’s hard to see and easy to undercount.
- Quality improvements are harder to quantify than time savings. A better proposal, a more accurate report, a more personalised client email — these matter, but how do you put a number on them?
- The counterfactual is invisible. You don’t see the mistakes that didn’t happen, the deals that didn’t take three extra days to close, the burnout that didn’t occur.
- AI use is uneven across teams. Your highest adopter might save 5 hours a week. Your lowest might save 20 minutes. The average is misleading.
None of this means measurement is impossible. It just means you need to be deliberate about what you measure and how.
The Four Categories of AI Return
Before you can measure ROI, you need to know what you’re measuring. AI returns come in four categories:
1. Time Saved
The most direct measure. Time saved on tasks that previously required human effort. This is straightforward to calculate: how long did this task take before AI? How long does it take now?
Example: Writing a project status report previously took 90 minutes. With AI assistance, it takes 25 minutes. Time saved: 65 minutes per report, multiplied by frequency.
2. Capacity Gained
Not just the hours saved, but what you do with them. A team that saves 10 hours per week can either work less (wellbeing return) or take on more work (revenue return). Both are valid — but they’re different.
Example: A consultant who saves 5 hours per week on admin can take on one additional client per month. At $2,000/client, that’s $24,000/year in additional revenue — from the same person, working the same total hours.
3. Quality Improvement
Harder to quantify, but often more significant than time savings. AI can improve the quality of first drafts, catch errors, and help produce work that would have previously required more senior input.
Proxy measures: Client satisfaction scores, revision rounds per deliverable, error rates in reports, proposal win rates.
4. Risk Reduction
AI can catch compliance issues, flag inconsistencies, and ensure nothing falls through the cracks. This is the hardest return to measure — you’re measuring things that didn’t happen — but often the most valuable for high-stakes industries.
Proxy measures: Number of compliance issues identified before submission, errors caught in review, near-misses flagged.
A Simple Measurement Framework: The Three-Month Sprint
Here’s a practical approach that most NZ teams can run without a data analyst:
Month 1: Baseline
Before you change anything, measure your current state. Pick 3-5 tasks where you plan to introduce or expand AI use. For each:
- Time to complete (track for 2 weeks)
- Quality proxy (client satisfaction, revision rounds, etc.)
- Frequency per week/month
This is your baseline. Don’t skip it — without it, you’re guessing.
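As a minimal sketch of what a baseline log can look like (the task names, timings, and structure here are illustrative, not prescriptive), a spreadsheet or a few lines of Python is enough — you record minutes per task during the two-week window, then average:

```python
from statistics import mean

# Illustrative baseline log: minutes to complete each tracked task,
# recorded over the two-week baseline window. Task names are examples only.
baseline_log = {
    "status report": [95, 88, 90],
    "client proposal": [180, 170],
    "meeting summary": [40, 35, 45, 38],
}

def baseline_summary(log):
    """Average minutes per task — the number you compare against in Month 3."""
    return {task: round(mean(times)) for task, times in log.items()}

print(baseline_summary(baseline_log))
# -> {'status report': 91, 'client proposal': 175, 'meeting summary': 40}
```

The same log, re-recorded in Month 3, gives you the before/after comparison directly — no analyst required.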
Month 2: Introduction
Introduce AI assistance for the selected tasks. Be deliberate — use it consistently, not just when you remember. Track the same metrics you tracked in Month 1.
Expect a dip in the first two weeks as your team learns the tool. Don’t judge the ROI from Week 1 results.
Month 3: Steady State
By Month 3, your team has built habits with the tool and the numbers reflect real-world use. Compare Month 3 metrics against your Month 1 baseline. Calculate:
- Time saved per week (hours)
- Annualised value at your team’s hourly cost
- Quality improvements (if measurable)
- What you did with the time saved
The ROI Calculation
Once you have Month 3 data, the calculation is:
Annual return = (hours saved per week × 52 × fully loaded hourly cost)
+ (additional revenue enabled by capacity gained)
+ (quality improvement value, if quantifiable)
Annual cost = AI tool subscriptions + training time + setup cost (amortised over 2 years)
ROI = (Annual return − Annual cost) / Annual cost × 100
Example for a 5-person NZ professional services team:
- Average fully loaded hourly cost: $85/hour
- Hours saved per person per week: 3 hours
- Team time saved per year: 3 × 5 × 52 = 780 hours
- Annual return from time saved: 780 × $85 = $66,300
- Annual AI tool costs (5 × Claude Pro + training): ~$3,500
- ROI: ($66,300 − $3,500) / $3,500 × 100 ≈ 1,794%
Even at more conservative time savings (1.5 hours per person per week), the ROI is well above 800% for most professional services teams.
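The calculation above can be sketched as a small function. It simply restates the formula — the inputs are the figures from the worked example, and the optional revenue and quality terms default to zero:

```python
def annual_roi(hours_saved_per_person_per_week, team_size, hourly_cost,
               annual_tool_cost, extra_revenue=0.0, quality_value=0.0):
    """ROI % per the formula above: (annual return - annual cost) / annual cost x 100."""
    annual_return = (hours_saved_per_person_per_week * team_size * 52 * hourly_cost
                     + extra_revenue + quality_value)
    return (annual_return - annual_tool_cost) / annual_tool_cost * 100

# Worked example: 5-person team, 3 hours saved per person per week,
# $85/hour fully loaded cost, ~$3,500/year in tool and training costs
print(round(annual_roi(3, 5, 85, 3500)))    # -> 1794
# Conservative scenario: 1.5 hours saved per person per week
print(round(annual_roi(1.5, 5, 85, 3500)))  # -> 847
```

Running both scenarios takes seconds, which makes it easy to sanity-check the numbers with your own team size and hourly cost before presenting them.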
What Goes Wrong With AI ROI Measurement
The most common failure modes:
- Measuring the wrong things. Tracking AI usage (number of prompts sent) rather than outcomes. Usage doesn’t equal value.
- Not accounting for the learning curve. Month 1 AI results are always worse than Month 3. Don’t judge on early results.
- Ignoring team variation. Aggregate numbers hide the fact that some team members are getting huge value and others almost none. Dig into individual data.
- Measuring only time, not quality. A report that takes 30 minutes but is mediocre isn’t a win. Quality must be part of the picture.
- Not closing the loop. Measurement is useless if it doesn’t change what you do. Review results monthly and adjust.
Getting Help With Your AI ROI Assessment
If you want an outside perspective on where AI is creating real value in your organisation — and where it’s not — our AI Roadmap Workshop includes a structured ROI analysis tailored to your industry and team size.
We also run team AI training programmes that include ROI measurement frameworks as part of the curriculum — so your team builds the habit of measuring from day one.