Practical, no-nonsense guidance for everyday Australians and New Zealanders — from families to small businesses.
New to AI? Start here
Keep your kids safe
Use AI without cheating
Adopt AI safely
Clubs, charities, volunteers
Get help now
Whether you're new to AI or use it every day, these seven rules protect you from the most common risks. Each one is based on a real way people get hurt.
Passwords, Medicare numbers, bank details, client data, confidential work documents — none of it belongs in an AI tool. What you type may be stored or used for training.
AI sounds confident even when it's wrong. Made-up references, outdated facts, and plausible-sounding errors are common. Check medical, legal and financial advice with a real expert.
Your conversations may be read by company staff, stored indefinitely, or used to train future AI. Treat everything you type as if it could be made public.
AI is not designed for children and carries real risks — harmful content, emotional dependency, and unsafe interactions. Kids should never use AI tools unsupervised.
Free consumer AI tools are not designed for confidential business data. Using them for client files, contracts, or HR matters can breach privacy law and your clients' trust.
Most people never read how an AI tool uses their data. "Free" usually means your conversations help train the AI. Our Privacy guide has tool-by-tool breakdowns.
AI can draft, summarise, and explain — but it doesn't understand your full situation, your legal context, or your specific needs. It cannot replace a doctor, lawyer, or financial adviser.
New to AI? This section gives you a solid foundation in plain English — no jargon, no technical background needed.
AI stands for Artificial Intelligence. In everyday use, it usually means software that can read what you type and write a response that sounds human.
An AI tool has been trained by reading enormous amounts of text from the internet, books, and other sources. When you ask it something, it predicts what a good response would look like — based on patterns in all that text.
It is not thinking. It is not feeling. It is generating text that matches what it has learned.
The most widely used AI tools in Australia and New Zealand include:
ChatGPT: Made by OpenAI (USA). One of the most widely used AI tools for writing, answering questions, and summarising. Visit ChatGPT ↗
Claude: Made by Anthropic (USA). Used for writing, research help, and analysis. Designed with safety as a focus. Visit Claude ↗
Gemini: Made by Google (USA). Integrated with Google's products. Used for writing, research, and email drafting. Visit Gemini ↗
Copilot: Made by Microsoft (USA). Built into Windows and Microsoft 365. Used for drafting documents and emails. Visit Copilot ↗
Meta AI: Made by Meta (USA). Already built into apps millions of Australians use every day — Facebook, Instagram, Messenger, and WhatsApp — as well as Ray-Ban Meta smart glasses. Many people are using Meta AI without realising it. Visit Meta AI ↗
Perplexity: Made by Perplexity AI (USA). An AI-powered search tool that cites sources and retrieves current web information. Routes queries through multiple AI models including its own Sonar model and (on paid plans) OpenAI's GPT, Anthropic's Claude, Google's Gemini, and others. Multiple companies' privacy terms may apply. Visit Perplexity ↗
View full Provider Directory →
Understanding what AI cannot do is just as important as knowing what it can do. Many problems come from misunderstanding this.
AI can describe symptoms and general health information, but it cannot examine you, diagnose you, or know your full medical history.
AI can explain general legal concepts but cannot give you legal advice about your specific situation, especially under Australian or New Zealand law.
AI can explain financial terms but is not licensed to advise you on your money, investments, or tax situation.
AI is not conscious, does not have feelings, and does not remember you between sessions (unless it explicitly says it does).
AI tools have a training cutoff — they may not know about recent events, new laws, or current prices.
AI is not like talking to a friend in confidence. What you type is typically stored by the company.
Used correctly, AI is genuinely useful. Here is where it adds real value for everyday people.
Emails, letters, social posts, reports — AI can produce a solid first draft that you then check and adjust.
AI can read a long document and give you a shorter summary — useful for reports, meeting notes, and articles.
Great for generating a list of options, ideas, questions, or approaches — then you decide which are good.
AI is reasonably good at translating between languages for everyday purposes.
Ask AI to explain something in plain English — it is often better at this than a search engine.
AI can help write basic code, Excel formulas, and spreadsheet functions — though you should test these before relying on them.
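As a quick illustration of what "test before relying" looks like in practice, here is a minimal sketch in Python — the add_gst function and test values are our own example, not output from any particular tool. The same idea applies to Excel formulas: try them on a few values you have already worked out by hand before using them on anything that matters.

```python
# Suppose an AI tool suggested this function for adding GST to a price.
def add_gst(amount: float, rate: float = 0.10) -> float:
    """Return the GST-inclusive price for a GST-exclusive amount (AU GST is 10%)."""
    return round(amount * (1 + rate), 2)

# Before relying on it, check it against values you have verified by hand.
assert add_gst(100.00) == 110.00              # $100 + 10% GST = $110
assert add_gst(50.00) == 55.00                # $50 + 10% GST = $55
assert add_gst(100.00, rate=0.15) == 115.00   # NZ GST is 15%
print("Checks passed. For real financial code, prefer the decimal module over floats.")
```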
This is the most important part of this section. AI makes mistakes — and it makes them confidently.
AI invents facts, quotes, references, court cases, statistics, and names that do not exist. This is not lying — it is a technical flaw. But it is dangerous if you act on it.
AI's knowledge has a cutoff date. Laws, prices, contacts, and policies may have changed since it was trained.
AI does not know your local area, your specific circumstances, or Australian and New Zealand law unless you tell it explicitly.
AI was trained on internet text, which contains biases. It can produce content that is biased by race, gender, culture, or geography.
Basic AI tools can make arithmetic mistakes. Do not rely on AI for financial calculations without checking.
Follow these five rules and you will avoid the most common and serious problems.
Treat every AI conversation as if it could be read by the company, their staff, or made public one day. If you would not post it on social media, do not type it into AI.
Before acting on any AI answer about health, money, law, or safety — check it against a trusted source or a real professional.
AI is a great first step. It is a poor final step. Read, edit, and check everything before you use it.
Different tools have different privacy policies and different levels of safety. Check before you use.
Most mainstream AI tools are not specifically designed for children and may not include adequate safeguards by default. Supervision is essential.
Every term you might encounter — explained in plain English.
The practical core of this site. What to do, what not to do, and how to build safe habits.
These habits keep you protected regardless of which AI tool you use.
AI sounds authoritative even when it is wrong. These steps help you catch errors before they cause problems.
Type: "What is your source for that?" If it cannot point to a specific, verifiable source, treat the answer as unverified.
Take the key facts from the AI's answer and search them on Google or a trusted website. Compare what you find.
For health: healthdirect.gov.au or health.govt.nz. For law: your state or territory's legal aid service (Australia) or Community Law (New Zealand). For financial matters: moneysmart.gov.au or sorted.org.nz.
If two different AI tools give conflicting answers, neither should be trusted without further verification.
For anything that affects your health, money, legal situation, or safety — talk to a qualified human professional.
AI is a useful tool — but it is the wrong tool for some situations.
If someone is unwell or in danger, call 000 (Australia) or 111 (New Zealand). Do not ask AI first.
For matters going through court, a tribunal, or involving police — speak to a lawyer, not an AI.
Buying property, restructuring debt, investment decisions — see a licensed financial adviser.
AI is not a counsellor. Contact Lifeline (13 11 14 in Australia) or Lifeline Aotearoa (0800 543 354 in New Zealand).
Do not use AI to try to identify a person from a photo or to find someone's personal details.
Wills, contracts, agreements — an AI draft can contain serious errors and may not meet legal requirements. Have a lawyer review any important document.
Before and after using AI — run through this quick checklist.
These are the most common error patterns — not dramatic hacking scenarios, just everyday habits that create real risk.
Full name combined with Medicare number, tax file number, passport, driver's licence, or date of birth. Any combination that could be used to identify or impersonate you should never go into an AI tool.
AI conversations are typically stored by the company. On free plans, they may be used for training or reviewed by staff. Your most sensitive thoughts and disclosures are not safe there.
AI can now clone a voice from a short audio sample and generate a realistic video of a person saying things they never said. A voice or face on a call is no longer proof of identity. See Scams & Deepfakes →
AI produces wrong answers confidently and frequently. Facts, statistics, references, quotes, and names can all be fabricated. Never act on an important AI answer without verifying it against an independent source.
AI can provide useful general information. It cannot replace a doctor, lawyer, or financial adviser who understands your specific situation and circumstances.
Photos of children, their school names, routines, or personal stories should never be entered into AI tools. You may not know how that data will be stored, used, or seen.
AI-generated images can look extremely realistic. Sharing them without clearly labelling them as AI-generated can spread misinformation or harm real people's reputations.
AI chatbots and fake customer support bots can be used to deliver malicious links. Always verify links independently by visiting the official website directly.
AI will write confidently even when it invents facts. If you submit a formal complaint, legal declaration, or official application based on AI content without checking it, you may be responsible for inaccurate or false statements.
Free AI tools are consumer products — not secure business environments. Work contracts, HR files, client details, and commercially sensitive documents should not go into free AI tools.
Many AI tools offer to connect to your email and files. Before granting this access, check exactly what the tool is allowed to read, write, and send. Access is often broader than expected.
Some AI tools can help identify suspicious messages — but scammers also use AI to make their communications more convincing. AI is not a reliable final judge of whether a message or call is genuine.
AI summaries of contracts, policies, legal notices, or important documents can miss critical details, misinterpret conditions, or omit exceptions. Always read the actual document for anything important.
Sharing family members' names, birthdates, schools, routines, and relationships creates a detailed profile that could be used in a social-engineering attack targeting you or them.
Voice and face data created for entertainment can be extracted and reused in fraud. Think carefully before using apps that record your voice, capture your likeness, or create deepfakes of others.
AI is trained on data that contains biases. It can also be influenced by the way questions are asked. Outputs may reflect cultural, political, gender, or racial bias without any obvious signal.
Public Wi-Fi — in cafes, hotels, airports — is not secure. Using AI to handle sensitive information on public networks adds an additional layer of risk to an already exposed channel.
Deleting a conversation from your AI tool's history does not guarantee immediate or complete deletion from the company's servers. Sensitive information already entered may be retained for a period regardless.
AI can feel like a supportive listener — it is always available and never judgemental. But it is not a genuine relationship, has no real understanding of your situation, and may not reflect what a trained counsellor would advise.
"I agreed to the terms without reading them" is not a protection. You are bound by those terms. The Privacy & Data section of this site summarises the most important questions to ask before using any AI tool.
Understand what happens to your information when you use AI tools — and what real incidents tell us about the risks.
When you use an AI tool, the company collects your conversations. What they do with them depends on the company and which plan you are on.
Most AI tools store your conversations on US-based servers. US law applies to your data — not Australian or New Zealand privacy law.
On free plans, many companies use your conversations to improve the AI. Your words may influence future outputs given to other users.
Some companies allow staff to read conversations for safety monitoring or quality assurance. Usually disclosed in the privacy policy — which most people never read.
Paid and business plans generally offer stronger protections — including opt-out from training and no human review. Free plans offer the least protection.
These are not hypothetical risks. Every incident below is confirmed and publicly reported. They show what can happen — and often reveal the governance failure that came before the breach.
Within weeks of Samsung lifting its ChatGPT ban, three separate employees uploaded confidential source code, meeting transcripts, and proprietary test data. Samsung restricted generative AI use across the company as a result. Disciplinary investigations were launched against all three employees.
Lesson: Without a clear policy, staff will use AI tools in ways that create serious risk. Governance must come before access. [Bloomberg]
Google confirmed it had advised employees against pasting confidential material into AI chatbots — including its own Bard product. The same data risk that applies to users applies internally.
Lesson: Even the companies building AI tools acknowledge the risk of staff misuse. [Fortune] [The Decoder]
While publishing open-source training data on GitHub, Microsoft AI researchers inadvertently exposed 38 terabytes of private data via a misconfigured Azure storage token — including passwords, private keys, and over 30,000 internal Teams messages from 359 employees. The exposure had been live for nearly three years before Wiz Research discovered it.
Lesson: AI research pipelines create new and often invisible security exposures. [Wiz Research] [TechCrunch]
A software bug exposed some users' chat titles to other users. OpenAI later confirmed that payment-related details of a small number of ChatGPT Plus subscribers may also have been visible during a nine-hour window. This same breach later contributed to Italy's €15 million fine.
[OpenAI — March 20 outage post] [The Hacker News — Italy fine]
Security researchers at Wiz discovered a DeepSeek database accessible to anyone on the internet with no password required. It contained over one million log lines including user chat histories, API keys, backend system details, and plaintext passwords. DeepSeek secured it within 30 minutes of being notified.
Lesson: This is one of the primary reasons we advise against using DeepSeek for personal or sensitive information. [Wiz Research] [TechCrunch]
In August 2024, LinkedIn automatically opted Premium subscribers into a new setting called "Data for Generative AI Improvement" — including their private InMail messages — without clear notification. LinkedIn updated its privacy policy only in September 2024, after the opt-in was already active. A class action was filed in January 2025 alleging breach of the Stored Communications Act. LinkedIn has denied the claims and says it did not ultimately use paid subscribers' private messages for training.
Lesson: Always check the current privacy settings of platforms you pay for — "premium" does not mean "private from the platform." Default-on data collection can happen without visible notification. [The Register] [ClassAction.org]
Italy's Garante temporarily banned ChatGPT over suspected privacy breaches, then lifted the ban after OpenAI made changes. Investigation continued.
After a multi-year investigation, Italy's Garante found OpenAI processed personal data without an adequate legal basis, failed transparency obligations to users, lacked adequate age verification to protect children, and failed to report the 2023 breach. OpenAI was fined €15 million and ordered to run a public awareness campaign. OpenAI is appealing.
Following Italy's action, Spain's AEPD launched a preliminary investigation, France's CNIL said it was investigating complaints, and Canada's privacy regulators opened a joint investigation into OpenAI's data collection practices.
Note: None of these cover Australian or NZ users directly. This reinforces the need to protect your own data rather than relying on regulators to do it for you. [Spain AEPD] [France CNIL] [Canada OPC]
Ireland's Data Protection Commission found X was using EU users' data to train Grok without adequate legal basis. X agreed to pause some uses. A formal investigation opened in April 2025. A second investigation opened in February 2026 specifically over Grok generating non-consensual sexualised images, including of minors.
Clearview AI built a facial-recognition database by scraping billions of photos from the internet without consent. Multiple regulators found this illegal:
Lesson: Photos you post online can be scraped at scale and used to build identification systems without your knowledge or consent. Be cautious about sharing photos of children especially. Sources: [France CNIL — €20m] [Netherlands AP — €30.5m] [UK ICO — £7.5m] [Austria — noyb criminal complaint]
Both countries have privacy laws that protect your personal information — but enforcement against overseas companies is limited. Your first line of protection is not sharing sensitive data in the first place.
Australia's Privacy Act 1988 governs how organisations collect, use, and disclose personal information. The Office of the Australian Information Commissioner (OAIC) oversees compliance.
oaic.gov.au | 1300 363 992
New Zealand's Privacy Act 2020 requires organisations to report serious privacy breaches and gives people stronger rights to access and correct their personal information.
privacy.org.nz | 0800 803 909
Most AI tools are US or European companies. Your Australian or NZ privacy rights are difficult to enforce against overseas companies. Prevention is far more effective than reporting after the fact.
Australian organisations must report serious breaches to the OAIC. NZ organisations must notify the Privacy Commissioner. OAIC Notifiable Breaches
First try to resolve it directly with the organisation. If unresolved after 30 days, lodge a complaint with the OAIC at oaic.gov.au/privacy/privacy-complaints or call 1300 363 992.
First contact the organisation directly. If unresolved, contact the Privacy Commissioner at privacy.org.nz/your-rights/making-a-complaint or call 0800 803 909.
Practical guidance for parents, carers, and young people — keeping children safe around AI tools.
Yes — but only with supervision, clear rules, and the right understanding. Most mainstream AI tools are not specifically designed for children and may not include adequate safeguards by default.
Children often do not understand that what they type to an AI may be stored and read. They may share things they would never say to a stranger.
AI may give children incorrect, inappropriate, or harmful information. It is not a replacement for a trusted parent, teacher, or doctor.
Some children form emotional attachments to AI companions. This can interfere with healthy human relationships and development.
AI can be used to create fake images of real people, including children. Teach children not to share photos with AI tools.
These rules are designed to be read with your child, not just given to them.
You should never use AI alone. Always have a grown-up with you or nearby.
Do not tell AI your full name, address, school name, phone number, or any family information.
Do not upload photos of yourself, your family, or your friends to an AI tool.
If the AI says something that feels scary, strange, or upsetting — close it and tell a parent or teacher straight away.
What AI tells you is not always true, even if it sounds confident. Always check important things with a grown-up.
AI does not actually care about you. It is a computer program. Your real friends and family are the ones who matter.
Teens are using AI daily for schoolwork, creative projects, and social connection. These rules help them use it without getting into trouble.
You do not need to be a tech expert. You just need to have the right conversations and put sensible limits in place.
Ask them to show you. Many children use AI without their parents knowing. ChatGPT, Snapchat's My AI, and various apps all include AI.
Most AI tools require users to be at least 13, and some set the minimum at 18. If your child is younger and using these tools, their account may violate the terms of service.
Computers and tablets should be used in family areas, not bedrooms, when children are using AI tools.
Ask your child what they are using AI for. Show curiosity, not just suspicion. Children who feel comfortable talking to you are more likely to tell you when something goes wrong.
Character.ai, Replika, and similar tools are AI companions. They can foster emotional attachment in children and teens. These carry specific risks and warrant specific conversations.
Set clear rules together — not just at your child — about when, where, and how AI can be used. See the Family Agreement tab.
AI companion apps are designed to form emotional bonds with users. For children and teenagers, this carries significant risks.
They simulate a caring, responsive relationship. They remember things you tell them, adapt to your personality, and provide constant availability.
Children may prefer AI interaction to real human interaction. This can affect social development, increase loneliness, and create unrealistic expectations of relationships.
Children may share extremely personal information with AI companions — things they would not tell parents or friends. This data is stored by the company.
Signs include: preferring to talk to AI over people, distress when unable to access the app, sharing the AI's "opinions" as if they were a real friend's, and hiding their use of the app.
Use this as a starting point. Adapt it to your family. The goal is a shared understanding, not a punishment framework.
We agree to the following rules about using AI in our home:
[Write your family's rules here. The rules in the Kids and Teens tabs are good starting points.]
Tip: Print this out, discuss it together, and sign it as a family. Children who help create the rules are more likely to follow them.
Use AI to learn better — without risking your integrity, your grades, or your development.
AI can make you a better student or a worse one — depending on how you use it. The goal is to use AI to learn, not to avoid learning.
Use AI to generate ideas, then choose and develop the best ones yourself. This is a legitimate and valuable learning tool.
Using AI to write a first draft that you then heavily edit is a grey area. Check your institution's policy before doing this.
Having AI write your assignment and submitting it as your own is academic dishonesty. In most institutions, this is a serious breach with real consequences.
For students in Years 1–12 (primary and secondary school).
University and TAFE policies vary widely — some allow AI, some prohibit it, many have nuanced rules.
Search your university or TAFE website for "AI use policy" or "generative AI". If you cannot find it, email your student services office.
Even within one institution, different units may have different rules. Check the unit outline or ask your lecturer.
Many institutions now require students to disclose AI use. Failing to do so — even for minor editing — may be treated as academic misconduct.
If you use AI, save the conversation. This shows what you asked, what it produced, and how you modified it.
AI frequently invents academic references. Submitting a fake citation — even unknowingly — can lead to serious consequences.
The line between using AI helpfully and using it dishonestly is about whether you are learning and producing your own thinking — or replacing it.
You ask AI to explain a concept, you read and understand it, and then you write about it in your own words.
You write a draft, ask AI what could be improved, then revise it yourself based on that feedback.
You ask AI to write sections of your work that you lightly edit. Whether this is acceptable depends entirely on your institution's policy.
You have AI produce an assignment and submit it with minimal or no changes. This is academic dishonesty regardless of how it is framed.
This is one of the most serious practical risks for students using AI. AI invents references — and they look completely real.
Search for the reference on Google Scholar, your library database, or the publisher's website. If you cannot find it, it may not exist.
If AI quotes someone, find the original text and verify the quote is real before using it.
Ask AI to explain a topic area, then find your own sources through your library or Google Scholar.
Before you submit any assignment where AI was involved — run through this list.
AI tools can dramatically accelerate how you find, understand, and synthesise research. Used well, they sharpen your thinking rather than replace it.
Paste a dense paragraph from a journal article and ask AI to explain it in plain English. Use this to build understanding, then return to the original text.
Describe your topic and ask AI what angles, counterarguments, or related fields other researchers often explore. Use this to expand your thinking, not to skip your own.
After reading multiple sources yourself, ask AI to help you identify themes or tensions across them. You bring the sources — AI helps you see patterns.
Asking AI to summarise a body of research it has "read" is risky — it may fabricate papers, misrepresent findings, or miss important recent work. Always verify from primary sources.
NotebookLM is a free AI research tool from Google that works differently to standard chatbots — and it is one of the most genuinely useful tools for students.
Instead of drawing on everything it has ever been trained on, NotebookLM works only from the sources you upload — PDFs, Google Docs, YouTube links, web pages, or text you paste in. Because it is grounded in your specific documents, it is far less likely to invent material from elsewhere, though you should still check important claims against the originals.
This makes it fundamentally different from ChatGPT or Claude when it comes to research — every answer it gives is tied to the sources you provided, and it cites them.
Add journal articles, textbook chapters, lecture notes, or any document as a PDF or Google Doc. NotebookLM treats these as its knowledge base.
Ask "What do my sources say about X?" or "Are there any contradictions between Author A and Author B?" — it answers from your specific documents and cites which one.
Ask NotebookLM to create a study guide, FAQ, or topic summary from your uploaded material. These are grounded in what you actually need to know.
NotebookLM can generate a short audio discussion of your sources — two AI voices discuss the key ideas. Useful for auditory learners or commute revision.
Go to notebooklm.google.com — it is free with a Google account. Upload a PDF from your current studies and ask it to explain the key concepts. It is one of the most useful tools available to students right now.
A growing number of AI tools are specifically designed to help students learn — not just give answers, but guide understanding. Here are the most useful ones available to Australian and New Zealand students.
Khanmigo is built on OpenAI's GPT-4 and trained on Khan Academy's library of 429+ courses. Unlike ChatGPT, it will not simply give you an answer — it uses the Socratic method, asking guiding questions to help you find the answer yourself. This makes it genuinely useful for learning rather than just producing text.
Access it at khanacademy.org — Khanmigo is integrated directly into the platform.
Google's LearnLM is a version of Gemini specifically fine-tuned for education, using five learning science principles: active learning, manageable cognitive load, personalisation, curiosity, and metacognition. In 2025, LearnLM was built directly into Gemini 2.5.
Guided Learning mode in Gemini acts as a personal learning companion — instead of delivering answers, it asks probing questions and opens discussions to help you develop your own understanding.
Ask Gemini to "create a practice quiz on photosynthesis" or any topic — it generates interactive questions with hints, explanations, and a summary at the end.
Google offers a free AI Pro plan for students in participating countries (including Australia) — includes Gemini, 2TB storage, and NotebookLM.
Access at gemini.google.com. For the education plan, check edu.google.com.
OpenAI added a Study Mode to ChatGPT that changes how it responds — rather than immediately answering a question, it uses interactive prompts to ask questions back and guide you through solving the problem yourself. This is a significant shift from the standard ChatGPT experience.
OpenAI has also built a free version of ChatGPT for teachers with classroom-specific tools, and partnered with Khan Academy to power Khanmigo.
Access at chatgpt.com — Study Mode is available in the free and paid tiers.
For in-depth research and understanding your own source materials, NotebookLM is one of the most powerful tools available. See the AI-Assisted Research tab for a full explanation of how it works and how to use it effectively.
Language learning platform using AI (powered by GPT-4) for role-play conversation practice and AI-explained answers. One of the most mature uses of AI in structured learning.
Integrated into Microsoft 365 Education, Copilot can assist with writing, summarising notes, and understanding documents — all within the tools many schools already use.
An AI search tool that gives sourced answers with citations. Useful for initial research — but always verify citations directly before using them in assessed work.
Quizlet has integrated AI (Q-Chat) to help create flashcards, practice tests, and study guides from your own notes and materials. Strong for memory-based subjects.
I want to understand a concept → Khanmigo or Gemini Guided Learning (they ask questions back instead of just explaining)
I want to understand my own reading material → NotebookLM (upload your documents, ask questions)
I want to quiz myself → Gemini custom quizzes or Quizlet AI
I want to learn a language → Duolingo Max
I want to research a topic → Perplexity for discovery, then verify sources yourself
Practical AI adoption guidance for Australian and New Zealand SMEs — where to start, what to avoid, and how to protect your customers.
You do not need a large budget or a tech team to use AI safely. You need clear rules, the right tools, and common sense.
Use AI for drafting marketing copy, summarising meeting notes, writing social media posts, or answering general questions. These are lower-risk because they do not involve confidential data.
Write a one-page rule before staff start experimenting. Without rules, they will make decisions you might not agree with — often with customer data.
Consumer AI tools (free ChatGPT, free Gemini) are not suitable for confidential business data. If staff are using AI for work, use a paid business plan or a business-specific tool.
If your business handles customer data, you have obligations under the Australian Privacy Act or the NZ Privacy Act. Using AI tools with customer data may create compliance risks.
Choose one low-risk use case. Run it for 30 days. Review what went well and what concerns arose. Then decide whether to expand.
The biggest AI risk for SMEs is not the technology — it is staff sharing information they should not.
This is the most important decision SMEs need to make. Using the wrong type of tool for business data can create legal and reputational risk.
| Feature | Free consumer tools | Paid business plans |
|---|---|---|
| Cost | Free | Monthly fee per user |
| Data used for training | Often yes (by default) | Usually no (opt-out or off) |
| Human review of conversations | Possible | Usually not (check policy) |
| Data retention controls | Limited | More control available |
| Suitable for customer data | ❌ No | ⚠️ Check policy first |
| Privacy commitments | Basic | Stronger (often with DPA) |
| Examples | ChatGPT Free, Gemini Free | ChatGPT Team, Microsoft 365 Copilot |
Copy, adapt, and print this one-page policy for your business. Keep it simple — a policy your staff actually read is better than a comprehensive one nobody looks at.
Effective date: [Date] | Approved by: [Name/Role]
Purpose
This policy helps staff use AI tools safely and in a way that protects our customers, our business, and our legal obligations.
Approved tools
Staff may use the following AI tools for work purposes: [list tools]. No other AI tools should be used for work purposes without manager approval.
You must never put the following into AI tools
[Adapt this list to your business. As a minimum: customer or client personal information; passwords and login credentials; financial records and payment details; HR and employment matters; commercially sensitive documents.]
Always
Review AI outputs before sending, publishing, or acting on them. AI can be wrong. You are responsible for what you send under your name.
If something goes wrong
If you accidentally share information you should not have, tell [manager name] immediately. Early reporting allows us to limit the damage.
Questions?
Contact [name/email]. This policy will be reviewed [annually / when we add new tools].
Before committing to any AI tool for business use, ask these questions. A good vendor will answer them clearly.
It will happen eventually. Having a clear plan means you respond quickly and limit the damage.
The sooner you act, the more options you have. Panicking delays action. Report to your manager immediately.
What information was entered? Which tool? When? Was it customer data, financial data, or internal business data?
Most business plans have a process for requesting deletion of specific conversation data. Contact their support immediately.
Under Australian and NZ law, some data breaches must be reported to the regulator and affected individuals. Consider seeking legal advice.
Keep a written record of what happened, what was shared, what steps you took, and when. This protects you if questions arise later.
These are the patterns that get businesses into trouble — regulatory, reputational, and operational. Most are avoidable with basic governance.
Staff are using AI tools right now — with or without your knowledge. Without a policy, every decision defaults to the individual. This is how Samsung ended up with confidential source code in ChatGPT.
Staff do not know what they can and cannot paste into AI tools because nobody has told them. A simple tiered classification (public / internal / confidential) solves most of this.
Staff use personal ChatGPT accounts, browser extensions, and app-embedded AI for work tasks. What goes in is invisible to your business.
Giving an AI tool access to your email, CRM, file storage, or finance system before evaluating what it can read, write, and send on your behalf.
Moving fast because the tool looks impressive. Security and privacy considerations get skipped in the rush to adopt.
Connecting AI to email, CRM, HR files, or finance systems with "admin" or "full access" permissions when read-only or limited access would do the job.
AI tools can be manipulated by malicious content hidden in documents or web pages they are asked to read. An AI with email access that reads a malicious email could be prompted to send sensitive data to an attacker.
Not checking who owns the tool, where data is stored, what the retention policy is, whether subcontractors have access, and which jurisdiction governs the data.
Default settings on many AI tools allow training data use and human review. These require active opt-out — they are not off by default.
Using AI for customer service or advice without a clear path to a human when the AI fails, produces wrong answers, or encounters a sensitive situation.
Relying on AI-generated outputs in legal, financial, medical, or compliance work without evidence of human review and sign-off.
The Arup case (HK$200 million / approx. AU$39 million lost) involved staff who were not trained to verify financial requests made via video call — even ones using deepfaked executives. [Fortune]
Publishing AI-generated marketing copy, product descriptions, or customer advice without checking for factual errors, bias, or misleading statements. This creates regulatory and reputational risk.
Using agentic AI tools that can send emails, book appointments, submit forms, or approve transactions without a human reviewing each action first.
Without logs, you cannot audit what happened, demonstrate compliance, or investigate when something goes wrong.
Third-party plugins and browser extensions that connect to your AI tools can access your data and conversations. Plugin security is a well-documented risk area and many plugins have weak or no access controls.
Paying for a business plan reduces risk — it does not eliminate it. Data still travels through cloud infrastructure. Human review may still occur in some circumstances. Misconfigurations can still expose data (see: the Microsoft 38TB incident).
If an AI tool gives a customer wrong advice, produces a discriminatory output, or causes a data breach — who is responsible? This decision needs to be made before deployment, not after an incident.
Using data that contains personal information, commercially sensitive content, or copyright material for model training without proper legal review.
Deploying AI agents — tools that can autonomously browse the web, read files, send messages, and complete multi-step tasks — without understanding what access they have and what they might do with it.
For clubs, charities, churches, volunteer groups, and associations — keeping your members safe.
Many community groups hold sensitive member information — health needs, financial situations, contact details. This should never go into AI tools.
AI can help draft grant applications, but be careful not to include specific financial data, unrealistic commitments, or confidential project information.
AI-generated newsletters and social posts should always be reviewed by a human before publishing. AI may produce inaccurate or inappropriate content.
Many community groups serve vulnerable people. Extra care is needed to ensure AI is never used in ways that could harm or expose these individuals.
How to spot AI-powered scams, fake voices, and fake images — before they cause harm. These are real, documented threats.
Scammers now use AI to clone voices, create realistic fake video calls, write more convincing phishing emails, and run romance scams at industrial scale. The technology has lowered the skill and cost barrier dramatically.
AI can clone someone's voice from a short audio sample — a social media video, a voicemail. Scammers use this to impersonate family members, executives, or government officials.
Defence: Hang up and call the real person on their known number. Establish a family code word for emergencies.
AI can generate realistic video of a real person appearing to say things they never said. Used in business fraud, political disinformation, and personal attacks.
Defence: Verify large financial requests through an established, separate channel. Never act on instructions from a video call alone.
AI dramatically improves the quality of phishing emails — correct spelling, natural language, personalised content. The old "bad spelling" warning sign no longer applies.
Defence: Verify any urgent payment, credential, or personal information request through a known phone number, not a link.
AI enables scammers to run hundreds of convincing romantic relationships simultaneously. Victims may communicate for months before any request for money or personal information.
Defence: Request a live, unscripted video call. Anyone who consistently avoids this should be treated with caution.
These incidents are confirmed and publicly reported. The scale and sophistication will surprise most people.
A finance worker at the Hong Kong office of British engineering firm Arup attended what he believed was a legitimate video conference with colleagues, including the CFO. Every person on the call was a deepfake. He was instructed to make 15 transfers totalling HK$200 million (approximately AU$39 million / US$25 million). He only discovered the fraud when he checked with headquarters.
This is one of the largest confirmed deepfake fraud cases in the world. All traditional cybersecurity defences — firewalls, MFA, endpoint protection — were operating normally. The attack bypassed all of them.
Scammers created a fake WhatsApp account using a publicly available photo of WPP CEO Mark Read, then arranged a Microsoft Teams call where they impersonated Read using AI-generated audio and YouTube footage. They attempted to persuade a senior WPP executive to establish a new business and hand over money and personal details. The scam failed due to staff vigilance.
Scammers used an AI-generated voice clone of Italian Defence Minister Guido Crosetto to call some of Italy's most prominent business figures — including Giorgio Armani and Prada co-founder Patrizio Bertelli — claiming government money was needed to free kidnapped Italian journalists in the Middle East. Only Massimo Moratti, former owner of Inter Milan, transferred funds — nearly €1 million in two payments to a Dutch account. Italian police froze the money before it could be moved further. Crosetto confirmed his voice had been cloned and falsified.
Fake AI-generated robocalls imitating the voice of US President Joe Biden were sent to voters in New Hampshire, urging them not to vote in the primary election. This is the most prominent documented example of AI being used for political disinformation in a democratic election. [FTC warning on voice cloning]
Since April 2025, malicious actors have used AI-generated voice messages and texts to impersonate senior US government officials, targeting other officials and their contacts to gain access to accounts and sensitive systems. The FBI confirmed the AI-generated audio is often nearly indistinguishable from the real person. The campaign is ongoing. [FBI IC3 PSA] [CNBC]
All of these are documented and active. Sources include the FTC, FBI, Europol, the ACSC (cyber.gov.au), and Scamwatch.
Scammers clone a family member's voice to call you claiming to be in an emergency — arrested, in hospital, or in danger — and needing money urgently. The FTC and Europol both document this as a growing and effective scam. Defence: establish a family code word for emergencies and always call back on a known number.
Scammers clone an executive's voice or create a deepfake video to instruct finance staff to make urgent, authorised payments. The Arup case (AU$39M) is the most documented example. Defence: all large payment requests must be verified through a separate, established channel — never the one in the message.
AI analyses your public information (LinkedIn, social media, company website) to craft highly personalised phishing emails that reference your real colleagues, real projects, and real context. The FBI and security researchers confirm AI has made spear phishing dramatically more convincing. Defence: verify any unusual request through a known phone number, not by replying to the email.
The FBI warns that AI is being used to make business email compromise attacks more convincing through better language, voice messages, and video. BEC is already one of the highest-loss fraud categories globally. [FBI IC3]
AI enables a single scammer to maintain convincing romantic relationships with hundreds of victims simultaneously. Victims invest months of emotional connection before money is requested. The FTC documents romance scams as one of the highest-loss scam categories.
AI-generated videos show well-known celebrities (politicians, business figures, entertainers) apparently endorsing investment schemes, giveaways, or products they have nothing to do with. These are widely used on social media to lure victims into sending money. Defence: no legitimate celebrity investment offer works this way. If someone famous is personally endorsing an investment, it is a scam.
AI generates fake financial advisers, fake testimonials, fake trading platforms, and fake profit screenshots. Australia's Scamwatch reports investment scams as the highest-loss scam type year after year. AI makes the content cheaper and more convincing. [Scamwatch]
Scammers impersonate ATO, Services Australia, police, or immigration officials — now using AI voice and messaging to sound more convincing. They create urgency around alleged debts, warrants, or legal threats. Defence: real government agencies do not demand immediate payment by phone or threaten arrest.
AI chatbots and voice tools make fake technical support more convincing — simulating legitimate-sounding helpdesk interactions. Victims are persuaded to give remote access to their device or reveal credentials. [FTC guidance]
AI generates convincing fake retail websites, fake product reviews, and fake customer service chatbots at low cost and in large volumes. Europol and the FTC document these as a growing consumer fraud vector.
AI is used to create fake video and audio of political figures saying things they never said — influencing elections, donations, and public trust. The Biden robocall is the most documented example. This is a growing risk in Australian and NZ elections as well.
AI improves the quality and volume of fake SMS messages and voice calls impersonating banks, delivery companies, and government services. Cyber.gov.au and CERT NZ both warn of these as active threats.
AI-generated faces and voices are used to pass identity verification checks when setting up fake accounts — for banking, financial platforms, or regulated services. Europol documents this as a significant fraud risk. Defence: this is a risk for businesses deploying identity verification systems, not just individual users.
Fake employers use AI to create convincing job listings, conduct fake AI-powered interviews, and then request personal information (tax file numbers, bank details, photo ID) from applicants. Defence: verify employers exist through official sources before providing any personal information.
AI image tools are used to create intimate or sexualised images of real people — often from publicly available photos — which are then used to blackmail or humiliate them. This includes attacks targeting children and teenagers. The eSafety Commissioner handles these complaints. [esafety.gov.au/report]
The FBI, NCSC, and Europol all warn that AI lowers the skill and language barriers for scammers — enabling convincing fraud in languages and dialects that previously required specialist capability. This significantly increases the risk for culturally and linguistically diverse communities in Australia and NZ.
AI-generated "advisers" with fake credentials, fake track records, and fake testimonials are used to lure victims into investment schemes. These are often promoted through social media and messaging apps.
AI chatbots impersonate real companies' customer service — requesting account credentials, payment card details, or personal information under the guise of resolving an account issue. Defence: contact companies directly through their official website — never through a link in a message.
AI generates fake positive reviews for scam products and services, and fake negative reviews targeting legitimate competitors. This affects consumer decision-making across platforms.
AI companion tools and chatbots can be misused to build relationships with lonely, isolated, or grieving individuals — then exploit that emotional connection. This is particularly relevant for elderly Australians and New Zealanders. [Scamwatch]
The signs are getting harder to spot — but some warning patterns remain consistent.
Call your bank's fraud line now. They may be able to stop or reverse a transfer. Time is critical — every minute matters.
If you gave any login credentials to a scammer, change them on every account that uses those details. Do this from a secure device.
Responding confirms you are an active target. It often leads to escalating pressure or secondary scams ("we can help you recover your money").
Screenshots of messages, emails, call logs, payment receipts, and any website URLs. Save these to a secure location. You will need them to report.
🇦🇺 Australia: Scamwatch (1300 795 995) and ACSC (1300 CYBER1). 🇳🇿 New Zealand: Netsafe (0508 638 723) and Consumer Protection NZ.
🇦🇺 Report to the eSafety Commissioner: esafety.gov.au/report (1800 580 034). 🇳🇿 Report to Netsafe: netsafe.org.nz.
Honest, plain-English guidance on AI tools — what to use safely, what to approach with caution, and what to know before connecting anything to your accounts.
These categories carry significant risk if used carelessly, connected without thought, or deployed without oversight. The risk is not that every tool in these categories is malicious — it is that they require more caution than most people apply.
Tools that replicate a person's voice from audio samples — used legitimately for content creation, but the same technology is used in impersonation scams and fraud. Examples: ElevenLabs, Resemble AI, Descript Voice Clone. Use with caution. Never clone someone else's voice without their explicit consent.
Tools that replace a person's face in video with someone else's likeness. Used in entertainment and creative work, but also in extortion, fraud, and reputational attacks. Be extremely cautious about consenting to these tools using photos of yourself. Never create deepfakes of real people without their consent.
AI tools that connect to your email inbox and can send messages on your behalf. If the AI is manipulated — by a malicious prompt in an email it reads — it could send sensitive information or take unintended actions. Always review what an email agent has sent and limit its permissions to what is strictly necessary.
Tools that control your browser or computer to complete tasks autonomously — clicking, filling forms, copying, and submitting on your behalf. Examples: OpenAI Operator, Manus, various "AI agent" tools. These tools have broad access. Understand what they can access before granting permissions. [OpenAI Operator]
AI tools that connect to Google Drive, OneDrive, Dropbox, or similar services. When granted broad read access, the AI can access every file in your drive — including documents you may have forgotten about. Review permissions carefully and grant read-only access to specific folders only.
AI coding assistants that have access to your codebase may inadvertently expose secrets, API keys, or passwords present in code or configuration files. Examples: GitHub Copilot, Cursor, Amazon CodeWhisperer. Never include credentials in code that passes through an AI coding tool.
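One habit removes most of this risk: keep secrets out of the files an assistant can read. The sketch below is a minimal Python example under that assumption; the PAYMENTS_API_KEY variable name is our own illustration, not tied to any real service.

```python
import os

# Load secrets from the environment (or a secrets manager) at runtime,
# so they never appear in the source files an AI coding assistant reads.
API_KEY = os.environ.get("PAYMENTS_API_KEY")  # set in your shell, a gitignored .env, or CI
if API_KEY is None:
    raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start.")

# BAD: never hardcode credentials. Anything written here can end up in
# AI tool logs, training data, or a public repository.
# API_KEY = "sk_live_..."
```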
AI tools that can query and modify databases or spreadsheets. A confident wrong answer that modifies real data is a serious risk. Limit AI to read-only access for analysis tasks. Always back up data before allowing any AI write access.
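As a concrete sketch of what read-only access looks like, the Python example below uses SQLite's read-only mode; the business.db file and invoices table are assumed purely for illustration. Most databases offer an equivalent, such as a database user granted only SELECT permissions.

```python
import sqlite3

# Open the database in read-only mode, so even a confidently wrong
# AI-generated query cannot modify or delete real data.
conn = sqlite3.connect("file:business.db?mode=ro", uri=True)

try:
    conn.execute("DELETE FROM invoices")  # any write attempt is rejected
except sqlite3.OperationalError as err:
    print(f"Write blocked by read-only mode: {err}")

# Reads for analysis still work as normal.
count = conn.execute("SELECT COUNT(*) FROM invoices").fetchone()[0]
print(f"Invoices in database: {count}")
conn.close()
```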
Tools that automate customer communication, sales emails, or outreach at scale. Poorly supervised, these can send incorrect, misleading, or inappropriate communications to real customers. In some configurations, they can breach spam and consumer protection laws.
General AI tools (ChatGPT, Claude, Gemini) can be used by scammers to write highly convincing phishing emails, fake customer service scripts, and social-engineering messages. The FBI and ACSC both document this risk. The tool itself is not the problem — misuse is. This is why AI-written phishing is increasingly convincing.
AI tools that summarise or extract information from uploaded documents. The risk is simple: if you upload a sensitive document to a free consumer AI tool, that document's content may be stored and used for training. Use business plans or local/private tools for sensitive documents.
Tools that join your video calls and record, transcribe, and summarise meetings. Examples: Otter.ai, Fireflies.ai, Zoom AI Companion. The risk is that confidential business discussions, client meetings, and HR conversations are stored on third-party servers. Check where data is stored and what the retention policy is before using these for sensitive meetings.
AI tools that provide legal guidance can be useful for general information but should never substitute for qualified legal advice on your specific situation. AI has no knowledge of your jurisdiction-specific law, your full circumstances, or current case law. Always have a qualified lawyer review any significant legal matter.
AI health tools can describe symptoms and general information but cannot examine you, access your full medical history, or apply clinical judgement. For any significant health matter, consult a qualified healthcare professional. In an emergency, call 000 (AU) or 111 (NZ).
Automated trading and investment tools that claim to generate consistent returns with minimal input. Many in this category are scams. Even legitimate ones carry significant financial risk. These are not regulated financial advice. Australia's Scamwatch reports investment scams as the highest-loss scam type.
Customised AI models (often based on Meta's LLaMA or similar open-weight models) that have had safety guardrails removed. Available on platforms like Hugging Face. These have fewer restrictions on harmful content — and are therefore more likely to produce harmful, dangerous, or illegal outputs.
Third-party plugins that connect to AI tools (or that claim to add AI to your browser) can request extensive permissions — access to your emails, browsing history, and clipboard. These are a documented security risk. Only install extensions from well-known, established vendors.
AI-generated faces and synthesised voices are increasingly used to bypass biometric identity checks — the selfie-and-ID verification used by banks, platforms, and regulated services. Europol documents this as an active and growing threat to identity verification systems. This is primarily a risk for businesses running identity verification — but it affects consumers whose identity could be stolen to pass such checks.
Apps designed to simulate friendship, romance, or therapeutic relationships. Examples: Character.ai, Replika. These carry significant risks for children, teenagers, and vulnerable adults — including emotional dependency, privacy exposure (users share deeply personal information), and unrealistic relationship expectations. See the Kids & Teens section for more.
Tools like Midjourney, DALL-E, and Stable Diffusion can generate highly realistic images. Legitimate creative tools — but the same capability is misused for fake product testimonials, fake "evidence" in disputes, fake news imagery, and non-consensual intimate imagery. Always label AI-generated images clearly. Never use them to deceive.
AI agent tools that can autonomously browse the web, execute code, read and write files, send messages, and take multi-step actions with minimal human oversight. Examples: AutoGPT, Manus, CrewAI. Beginners typically underestimate the permissions these tools need and the damage an error or manipulation can cause. [OpenAI CUA]
Detailed, up-to-date profiles coming soon. In the meantime, we have rated 20+ AI providers across 80+ products and subscription tiers — covering free, consumer paid, business, and enterprise plans.
Something went wrong. Here is what to do — step by step.
If money has been transferred, call your bank's fraud line now. They may be able to stop or reverse the transfer. Time is critical.
If you gave any login credentials, change all related passwords immediately. Use a different device if you are concerned the current one is compromised.
Responding can confirm you are a live target and lead to further contact or escalating pressure.
Australia: Scamwatch or call 1300 795 995. New Zealand: Consumer Protection NZ or call 0508 426 678.
Take screenshots of all messages, emails, and any other contact. Save these securely. You will need them if you report to police.
For significant financial losses, report to your local police or via the Australian Cyber Security Centre (ACSC).
The child needs to feel safe telling you what happened. Avoid reactive anger that may cause them to shut down. Thank them for telling you.
Take screenshots of the harmful content before closing or deleting it. You may need this to report to the platform or authorities.
All major AI platforms have a safety or abuse reporting function. Use it. If the content involved a child, flag it as urgent and child-related.
Australia: Report to the eSafety Commissioner at esafety.gov.au/report. NZ: Report to Netsafe (0508 638 723).
Depending on the nature of what happened, the child may need to speak to a counsellor. Contact your school or GP for a referral if needed.
Take screenshots and save all evidence before reporting. Include URLs, profile names, dates, and any messages sent.
Social media platforms, AI tools, and websites all have abuse reporting processes. Use these immediately.
If the impersonator reached out to your contacts pretending to be you, alert those people so they do not fall for any requests.
Australia: Report to the eSafety Commissioner or local police. NZ: Contact Netsafe or New Zealand Police.
Triple Zero: 000
For immediate danger to yourself or others.
Australian Cyber Security Centre
cyber.gov.au/report
1300 CYBER1 (1300 292 371)
Scamwatch
scamwatch.gov.au
1300 795 995
eSafety Commissioner
esafety.gov.au/report
1800 580 034
Office of the Australian Information Commissioner
oaic.gov.au
1300 363 992
Lifeline: 13 11 14
Beyond Blue: 1300 22 4636
Both available 24/7
111
For immediate danger to yourself or others.
CERT NZ
cert.govt.nz/report
0800 237 869
Netsafe
netsafe.org.nz
0508 638 723 (freephone)
Consumer Protection NZ
consumerprotection.govt.nz
0508 426 678
Office of the Privacy Commissioner
privacy.org.nz
0800 803 909
Lifeline NZ: 0800 543 354
Need to talk: 1737 (free text or call)
Both available 24/7
Independent safety scores for over 80 AI products — Free, Consumer Paid, Business, and Enterprise tiers — in plain English.
Sorted highest to lowest. Midpoint line (5.0) marks the threshold for basic suitability. Score = the average of the SAFE and SRS scores.
Recommendation: Avoid entering sensitive, confidential, or commercially valuable information into any AI platform — and take extra care when using tools governed by foreign jurisdictions with different legal and transparency standards.
It's important to note that similar powers exist in other jurisdictions, including the United States, Europe, and Australia, where governments can also request access to data under national security or law enforcement laws.
However, these systems generally involve more transparent legal processes, such as court orders, warrants, or independent oversight mechanisms.
For users in Australia and New Zealand, this means the key difference is not whether access is possible — but how that access is governed, reviewed, and disclosed.
A paid subscription does not change the underlying legal jurisdiction.
These products all score below 5.1 on SAFE at business tier. The legal jurisdiction applies regardless of the plan or contract.
Even at enterprise tier, these products score below 6.0 on SAFE. A commercial contract does not change the underlying legal jurisdiction.
All AI providers rated across 80+ products and subscription tiers. Click any name to visit their official site. ⚠️ marks providers where Chinese jurisdiction applies — see the jurisdiction note below.
OpenAI (ChatGPT): Free · Plus · Pro · Business · Enterprise · API · Codex · Edu
Anthropic (Claude): Free · Pro · Max · Teams · Enterprise · API
Google (Gemini): Free · AI Pro · AI Ultra · AI Ultra Business · Enterprise · API
xAI (Grok): Free · Premium+ · SuperGrok · SuperGrok Heavy · API
Meta (Meta AI): Meta AI Free · Llama API · Llama Self-Hosted
Mistral (Le Chat): Le Chat Free · API · Self-Hosted · Enterprise
⚠️ Caution — Chinese jurisdiction applies: Free · Moderato · Allegretto · Vivace · API
⚠️ Caution — Chinese jurisdiction applies: Free · API Free Tier · Coding Lite · Coding Pro · Pay-as-you-go API · Enterprise
Microsoft Copilot (built on OpenAI GPT models): Free · Pro · M365 Copilot · Enterprise (+ Copilot Chat EDP)
Perplexity (routes to multiple underlying models): Free · Pro · Max · Enterprise Pro · Enterprise Max
Poe (accesses ChatGPT, Claude, Gemini, Grok & more): Free · Subscriber · Annual
Pi (Inflection): Pi Free · Enterprise
IBM watsonx: Lite · Essentials · Standard · Enterprise — highest-rated enterprise platform
NVIDIA NIM: Free (API Catalog) · Self-Hosted · Enterprise — joint top enterprise score
Cohere: Trial · Production API · Fine-tuning · Enterprise
AI21 Labs: Free Trial · API · Enterprise
⚠️ What does "Chinese jurisdiction" mean?
Providers marked ⚠️ operate under China's legal framework, including the National Intelligence Law, which can require organisations to provide data to authorities — regardless of the quality of their AI or how good their privacy policy appears. This is a legal and geopolitical risk, not a comment on the technical capability of these tools. Avoid entering sensitive, confidential, or commercially valuable information into any of these platforms.
Each AI product is assessed on two measures, both scored out of 10:
SAFE score — how well the vendor protects your data. It looks at things like whether your conversations are used to train the AI, whether the company has real security certifications, where your data is stored, and what happens if you want to delete it.
SRS score — how easy it is to stop using the product and move on. A high SRS means it is straightforward to export your data and switch to a different provider without being locked in.
The Total score shown in each chart is the average of these two. A score below 5 is considered not safe for sensitive or personal information. A score above 7 is generally considered safe for business use. Scores were assessed in May 2026 and may change as products update their policies.
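To make the scoring arithmetic concrete, here is a minimal sketch in Python. The product names and scores are invented for illustration, and the label for the middle band (between 5 and 7) is our own wording; the ratings themselves only define the two thresholds described above.

```python
# Illustrative sketch of the Total score calculation described above.
# Product names and scores below are hypothetical, not real ratings.

def total_score(safe: float, srs: float) -> float:
    """Total = simple average of the SAFE and SRS scores (each out of 10)."""
    return round((safe + srs) / 2, 1)

def suitability(total: float) -> str:
    """Apply the two published thresholds; the middle label is ours."""
    if total < 5.0:
        return "not safe for sensitive or personal information"
    if total > 7.0:
        return "generally safe for business use"
    return "in between: check the full rating before relying on it"

for name, safe, srs in [("Example Tool A", 8.2, 7.6),
                        ("Example Tool B", 4.1, 5.3)]:
    t = total_score(safe, srs)
    print(f"{name}: SAFE {safe}, SRS {srs} -> Total {t} ({suitability(t)})")
```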
🔵 AI Labs
Research organisations that build and train their own AI models from the ground up. This includes both US/European labs (OpenAI, Anthropic, Google, Meta, xAI, Mistral) and Chinese labs (DeepSeek, Kimi, Qwen, Doubao, ERNIE, GLM, MiniMax). All are frontier-level AI research organisations. Chinese-jurisdiction providers are individually flagged ⚠️ in the Provider Directory.
🟢 AI Platforms & Wrappers
These companies do not train their own frontier AI models. They build products on top of models created by AI Labs. Both the platform's own privacy terms and the underlying model provider's terms may apply to your data. Includes Copilot (Microsoft), Perplexity, Poe, and Pi.
🟣 Enterprise & Specialist
Purpose-built AI tools designed for business and regulated-industry use, with stronger data protections, compliance certifications (ISO 27001, SOC 2, HIPAA), and contractual data guarantees. Includes IBM watsonx, NVIDIA NIM, Cohere, and AI21 Labs.
Need help choosing the right AI tools for your organisation?
AI consulting services — coming soon. Get in touch early ↗
Who we are, how we score AI products, and where our information comes from.
Think AI Safety is a free, independent resource created for Australians and New Zealanders who want clear, honest guidance on using AI tools safely — without jargon or hype.
To give everyday people — parents, students, workers, small businesses, and community groups — the information they need to use AI confidently and safely.
Most AI safety resources are written for US audiences. This site is built specifically for Australian and New Zealand legal, regulatory, and cultural context.
This site is and will remain free to access. No paywalls, no advertising. It is funded through Kirk Holt's consulting practice.
AI tools and policies change frequently. This site is reviewed and updated regularly. All content carries a "Last updated" date. Currently: May 2026.
AI Safety Consultant · Australia
Kirk Holt is an AI safety educator and consultant helping Australian and New Zealand individuals, businesses, and community organisations understand and manage AI risk. This site reflects his independent research and does not represent the views of any AI vendor.
Think AI Safety was researched, written, and built with the assistance of AI tools. We think it's only right to be upfront about that — especially on a site about AI transparency.
All AI-generated content was reviewed, verified, and edited by Kirk Holt before publication. AI tools assisted the process — the judgements, decisions, and responsibility for accuracy remain human. We believe this is exactly how AI should be used.
The SAFE score and SRS score used in the Safety Ratings section are derived from a structured vendor assessment framework. Each product is assessed across six dimensions, all scored from publicly available documentation:
How broadly the product invites or exposes sensitive content — whether the AI tool has appropriate guardrails, content policies, and limits on what can be entered or returned.
The combined strength of technical, contractual, and encryption safeguards. Includes whether the vendor holds ISO 27001, SOC 2, or similar certifications.
Documented security controls: access management, encryption in transit and at rest, audit logging, and identity features such as single sign-on and multi-factor authentication.
Privacy commitments, data processing agreements, training opt-outs, and the practical effect of the vendor's legal jurisdiction on your data.
Whether your organisation can control its own encryption keys (BYOK — Bring Your Own Key) or whether the vendor holds all keys. Vendor-held keys mean the vendor can technically access your data; the sketch at the end of this section shows the principle.
Whether your prompts and AI outputs are used to train models, shared with third parties, or retained after your session ends — and whether you can opt out.
The SRS (Switching & Retention Score) measures how easy it is to stop using the product and move to a different provider — including data export, contract exit terms, and dependency risk.
The Total score is the average of SAFE and SRS. Scores below 5.0 are considered unsuitable for sensitive or business information. Scores above 7.0 are generally suitable for business use with appropriate governance.
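The key-management dimension above comes down to one question: who holds the key that can decrypt your data? Below is a minimal conceptual sketch using the open-source Python cryptography library. It is not any vendor's actual BYOK implementation; it simply shows that a party holding a different key cannot read the data.

```python
# Conceptual illustration of the key-holding principle behind BYOK.
# Not a real vendor integration: whoever holds the key can decrypt
# the data, and whoever does not, cannot.

from cryptography.fernet import Fernet, InvalidToken

customer_key = Fernet.generate_key()  # generated and held by you (BYOK)
vendor_key = Fernet.generate_key()    # a different key you do not control

ciphertext = Fernet(customer_key).encrypt(b"confidential client notes")

# Holding your own key, you can read the data back.
print(Fernet(customer_key).decrypt(ciphertext))

# A party with a different key cannot.
try:
    Fernet(vendor_key).decrypt(ciphertext)
except InvalidToken:
    print("Without the customer's key, the data is unreadable.")
```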
Privacy policies, terms of service, trust centre disclosures, compliance certifications, and product documentation published by each vendor. Source links are included in the Tool Reviews section.
Australian Cyber Security Centre (ACSC), CERT NZ, Office of the Australian Information Commissioner (OAIC), and the NZ Office of the Privacy Commissioner.
Publicly reported and independently verified security incidents. All incidents cited in this site include links to original reporting. Sources include Wiz Research, TechCrunch, Reuters, BBC, and government regulators.
Academic and industry research on AI safety, privacy risk, and data governance. Assessments reflect the state of publicly available documentation at the time of review (May 2026).
Kirk Holt works with Australian and New Zealand individuals, businesses, schools, and community organisations on AI safety education and risk management.
Receive free AI safety updates, scam alerts, and workshop announcements.
Available across Australia and New Zealand — in person or online.
Or email: info@thinkaisafety.com.au
Use the form below for general enquiries, corrections, or media requests. We respond within 1–2 business days.