🛡️ Think AI Safety — Offline Edition  |  All content and navigation works without internet  |  External source links (marked [source]) require a connection   ✕ dismiss

Think AI Safety.

Practical, no-nonsense guidance for everyday Australians and New Zealanders — from families to small businesses.

🇦🇺 Australia ↓ 🇳🇿 New Zealand ↓ Plain English Free Resource

Find guidance for your situation

👋

General Public

New to AI? Start here

👨‍👩‍👧

Parents

Keep your kids safe

🎓

Students

Use AI without cheating

🏪

Small Business

Adopt AI safely

🤝

Community Groups

Clubs, charities, volunteers

🆘

Something went wrong?

Get help now

Essential AI safety rules — for everyone

Whether you're new to AI or use it every day, these seven rules protect you from the most common risks. Each one is based on a real way people get hurt.

1

🚫 Never share sensitive information

Passwords, Medicare numbers, bank details, client data, confidential work documents — none of it belongs in an AI tool. What you type may be stored or used for training.

2

✅ Always verify important answers

AI sounds confident even when it's wrong. Made-up references, outdated facts, and plausible-sounding errors are common. Check medical, legal and financial advice with a real expert.

3

🔒 AI is not private by default

Your conversations may be read by company staff, stored indefinitely, or used to train future AI. Treat everything you type as if it could be made public.

4

👧 Supervise children around AI

AI is not designed for children and carries real risks — harmful content, emotional dependency, and unsafe interactions. Kids should never use AI tools unsupervised.

5

💼 Keep business data off free tools

Free consumer AI tools are not designed for confidential business data. Using them for client files, contracts, or HR matters can breach privacy law and your clients' trust.

6

📋 Check the privacy policy before you start

Most people never read how an AI tool uses their data. "Free" usually means your conversations help train the AI. Our Privacy guide has tool-by-tool breakdowns.

7

🤖 AI is a tool, not an expert

AI can draft, summarise, and explain — but it doesn't understand your full situation, your legal context, or your specific needs. It cannot replace a doctor, lawyer, or financial adviser.

What AI is
What AI is not
What AI does well
What AI gets wrong
5 rules for safe use
AI glossary

What is AI?

AI stands for Artificial Intelligence. In everyday use, it usually means software that can read what you type and write a response that sounds human.

The simplest way to think about it

An AI tool has been trained by reading enormous amounts of text from the internet, books, and other sources. When you ask it something, it predicts what a good response would look like — based on patterns in all that text.

It is not thinking. It is not feeling. It is generating text that matches what it has learned.

The most widely used AI tools in Australia and New Zealand include:

ChatGPT

Made by OpenAI (USA). One of the most widely used AI tools for writing, answering questions, and summarising.

Visit ChatGPT ↗

Claude

Made by Anthropic (USA). Used for writing, research help, and analysis. Designed with safety as a focus.

Visit Claude ↗

Gemini

Made by Google (USA). Integrated with Google's products. Used for writing, research, and email drafting.

Visit Gemini ↗

Copilot AI Aggregator

Made by Microsoft (USA). Built into Windows and Microsoft 365. Used for drafting documents and emails.

Visit Copilot ↗

Meta AI

Made by Meta (USA). Already built into apps millions of Australians use every day — Facebook, Instagram, Messenger, and WhatsApp — as well as Ray-Ban Meta smart glasses. Many people are using Meta AI without realising it.

Visit Meta AI ↗

Perplexity AI Aggregator

Made by Perplexity AI (USA). An AI-powered search tool that cites sources and retrieves current web information. Routes queries through multiple AI models including its own Sonar model, and (on paid plans) OpenAI's GPT, Anthropic's Claude, Google's Gemini, and others. Multiple companies' privacy terms may apply.

Visit Perplexity ↗
More AI tools — including Chinese-jurisdiction providers Click to expand ▾

Grok

Made by xAI (USA), owned by Elon Musk. Integrated into the X (formerly Twitter) platform. Has fewer content restrictions than most other AI tools.

Visit Grok ↗

DeepSeek ⚠️ Chinese Jurisdiction

Made by DeepSeek (China). A powerful AI tool that has raised significant privacy and security concerns because it is operated under Chinese law, which allows government access to data.

⚠️ Visit DeepSeek ↗

Le Chat ⭐ Highest-rated free consumer AI

Made by Mistral AI (France). A European alternative with notably stronger privacy practices than its US rivals. In our Safety Ratings, Le Chat Free scores higher than ChatGPT Free, Claude Free, and Gemini Free on data protection. Worth considering if privacy matters to you.

Visit Le Chat ↗

Poe AI Aggregator

Made by Quora (USA). A platform that gives access to many AI models in one place — ChatGPT, Claude, Gemini, Grok, Mistral, and others. Not one AI — a gateway to many. Privacy terms vary depending on which underlying model handles your query.

Visit Poe ↗

Kimi ⚠️ Chinese Jurisdiction

Made by Moonshot AI (China). One of China's fastest-growing AI tools, with strong performance on reasoning and long documents. As with all Chinese-operated AI, China's National Intelligence Law applies — see Safety Ratings for details.

⚠️ Visit Kimi ↗

Manus ⚠️ Chinese Jurisdiction

Made by Monica (China). An AI agent that can carry out multi-step tasks autonomously — browsing the web, writing files, and running code on your behalf. A newer and more powerful type of AI tool that requires extra caution.

⚠️ Visit Manus ↗

View full Provider Directory →

ℹ️ What is an "AI Aggregator"? Some tools — like Copilot, Perplexity, and Poe — are not independent AI models. They act as a front-end that routes your questions to one or more underlying AI models (such as OpenAI's ChatGPT or Anthropic's Claude) via API. This means your data may be processed by more than one company, and multiple privacy policies may apply.
🌏 DeepSeek & Manus — Chinese AI Labs — jurisdiction note: Both are operated under China's legal framework, including the National Intelligence Law. This law can require organisations to provide data or technical assistance to authorities, which may extend beyond what is outlined in public privacy policies. Avoid entering sensitive, confidential, or commercially valuable information into either tool.
More about data access laws across jurisdictions ▾

It's important to note that similar powers exist in other jurisdictions, including the United States, Europe, and Australia, where governments can also request access to data under national security or law enforcement laws.

However, these systems generally involve more transparent legal processes, such as court orders, warrants, or independent oversight mechanisms.

For users in Australia and New Zealand, this means the key difference is not whether access is possible — but how that access is governed, reviewed, and disclosed.

Meta AI is already in your pocket: If you use Facebook, Instagram, Messenger, or WhatsApp, you may have already interacted with Meta AI. Be aware of what you share in chats with it — the same rules about personal information apply.
Important: Most major AI tools are owned by US companies and operate under US law — not Australian or New Zealand law. This has privacy implications you should understand. Learn more →

What AI is not

Understanding what AI cannot do is just as important as knowing what it can do. Many problems come from misunderstanding this.

🧑‍⚕️

Not a doctor

AI can describe symptoms and general health information, but it cannot examine you, diagnose you, or know your full medical history.

⚖️

Not a lawyer

AI can explain general legal concepts but cannot give you legal advice about your specific situation, especially under Australian or New Zealand law.

💰

Not a financial adviser

AI can explain financial terms but is not licensed to advise you on your money, investments, or tax situation.

👤

Not a real person

AI is not conscious, does not have feelings, and does not remember you between sessions (unless it explicitly says it does).

🔍

Not always up to date

AI tools have a training cutoff — they may not know about recent events, new laws, or current prices.

🤝

Not private

AI is not like talking to a friend in confidence. What you type is typically stored by the company.

What AI does well

Used correctly, AI is genuinely useful. Here is where it adds real value for everyday people.

AI works best as a starting point or a drafting tool — not as a final answer or sole source of truth.
✍️

Drafting and writing

Emails, letters, social posts, reports — AI can produce a solid first draft that you then check and adjust.

📋

Summarising long documents

AI can read a long document and give you a shorter summary — useful for reports, meeting notes, and articles.

💡

Brainstorming ideas

Great for generating a list of options, ideas, questions, or approaches — then you decide which are good.

🌍

Translating language

AI is reasonably good at translating between languages for everyday purposes.

📖

Explaining concepts simply

Ask AI to explain something in plain English — it is often better at this than a search engine.

🔢

Basic coding and formulas

AI can help write basic code, Excel formulas, and spreadsheet functions — though you should test these before relying on them.

What AI gets wrong

This is the most important part of this section. AI makes mistakes — and it makes them confidently.

⚠️
The biggest risk: AI will give you a wrong answer in the same calm, confident tone as a correct answer. It does not flag its own errors.
1

🤔 "Hallucinations"

AI invents facts, quotes, references, court cases, statistics, and names that do not exist. This is not lying — it is a technical flaw. But it is dangerous if you act on it.

2

📅 Outdated information

AI's knowledge has a cutoff date. Laws, prices, contacts, and policies may have changed since it was trained.

3

🧩 Misunderstanding context

AI does not know your local area, your specific circumstances, or Australian and New Zealand law unless you tell it explicitly.

4

⚖️ Biased outputs

AI was trained on internet text, which contains biases. It can produce content that is biased by race, gender, culture, or geography.

5

🔢 Maths errors

Basic AI tools can make arithmetic mistakes. Do not rely on AI for financial calculations without checking.

5 rules for safe AI use

Follow these five rules and you will avoid the most common and serious problems.

1

🚫 Never put in what you would not post publicly

Treat every AI conversation as if it could be read by the company, their staff, or made public one day. If you would not post it on social media, do not type it into AI.

2

✅ Always verify anything important

Before acting on any AI answer about health, money, law, or safety — check it against a trusted source or a real professional.

3

✍️ Use AI for drafts, not final answers

AI is a great first step. It is a poor final step. Read, edit, and check everything before you use it.

4

▸ Know which tool you are using and who owns it

Different tools have different privacy policies and different levels of safety. Check before you use.

5

👶 Keep children supervised

Most mainstream AI tools are not specifically designed for children and may not include adequate safeguards by default. Supervision is essential.

AI Terms Glossary

Every term you might encounter — explained in plain English. Click any term to expand its definition.

Core AI 17
Architecture 10
Training 8
RAG & Knowledge 17
Agents 11
Skills & MCP 7
Prompting 8
Safety & Risk 18
Deployment 10

Core AI Terms

AI — Artificial Intelligence
Software designed to perform tasks that usually require human intelligence, such as writing, analysing, recognising patterns, or making recommendations. In everyday use this means tools like ChatGPT, Claude, or Gemini.
Machine Learning — ML
A type of AI where systems learn patterns from data instead of being manually programmed for every rule. The more data the system sees, the better it gets.
Deep Learning
A type of machine learning that uses layered neural networks to learn patterns. Most modern AI language and image tools are built on deep learning.
Neural Network
A system loosely inspired by the brain. It consists of layers of connected nodes that process and transform information to produce an output.
Foundation Model
A large AI model trained on broad data that can be adapted for many different tasks. GPT-4, Claude, and Gemini are examples of foundation models.
Large Language Model — LLM
An AI model trained on large amounts of text to understand and generate language. Most AI chatbots and writing assistants are built on LLMs.
Generative AI
AI that creates new content — text, images, video, audio, or code. This is the type most people use daily.
Model
The trained AI system behind the tool. For example, ChatGPT is the app; GPT-4 is the model.
Multimodal
An AI model that can process more than one type of input — for example, text and images together. GPT-4o and Gemini are multimodal.
Chatbot
A program that has a text conversation with you. Not all chatbots use AI — some follow simple scripts. AI chatbots can respond to almost any question.
Natural Language Processing — NLP
The field of AI focused on enabling computers to understand, interpret, and generate human language.
Training Data
The information used to train an AI model. Most large models were trained on vast amounts of internet text, books, and code.
Inference
The process of using a trained model to generate an answer or output. When you send a prompt, the model is performing inference.
Prompt
The instruction or question you give to an AI. A more specific prompt usually produces a better result.
Hallucination
When AI confidently produces incorrect or unsupported information. One of the most important risks to understand when using AI.
Confabulation
A more formal term for AI generating plausible but false information — the model fills in gaps with invented but convincing-sounding content.
Open Source / Closed Source
Open source AI models publish their weights and architecture publicly (e.g. Meta Llama, Mistral). Closed source models keep this private (e.g. GPT-4, Claude). Open source allows self-hosting; closed source is usually accessed via API.

Model Architecture & Behaviour

Transformer
The core architecture behind most modern AI language models. Introduced by Google in 2017. It processes all tokens in parallel and uses attention to understand relationships between words.
Attention Mechanism
A technique that helps the model focus on the most relevant parts of the input when generating each word. "Self-attention" is what makes transformers powerful.
Parameters
The internal values learned during training. More parameters generally means more capacity to learn. GPT-4 is estimated to have over a trillion parameters.
Token
A chunk of text processed by the AI — roughly a word or part of a word. "Unbelievable" might be 3 tokens. Pricing and limits are usually measured in tokens.
Context Window
The amount of text the model can consider at one time — both your input and its output. Larger context windows let the AI work with longer documents.
Temperature
A setting that controls randomness. Lower temperature gives more predictable, conservative answers. Higher temperature gives more varied, creative answers.
Top-p / Nucleus Sampling
A setting that controls which possible next words the model considers. Lower values keep the output more focused; higher values allow more variety.
Logits
Raw prediction scores the model produces before converting them into probabilities. Not typically visible to end users but relevant to developers.
Decoding Strategy
The method used to choose the next word or token when generating output — for example, greedy decoding, beam search, or sampling.
Determinism
How repeatable the model's answer is when given the same input. At temperature 0, most models are near-deterministic. Higher temperatures produce different answers each time.

Training & Improvement

Pre-training
The initial stage where a model learns general patterns from very large datasets. This is expensive and typically done only by major AI labs.
Fine-tuning
Additional training after pre-training to specialise a model for a specific task, domain, or behaviour — such as customer service or medical Q&A.
Supervised Fine-tuning
Training using labelled examples where the desired answer is already known. Common in the later stages of model training.
RLHF — Reinforcement Learning from Human Feedback
A training method where humans rate or rank model outputs to guide the model toward better, safer, more helpful responses. Used by OpenAI, Anthropic, and others.
Overfitting
When a model memorises training examples too closely and performs poorly on new, unseen data. Like a student who memorises past exams but can't apply knowledge to new questions.
Generalisation
The model's ability to apply what it has learned to new situations it hasn't seen before. Good generalisation is a sign of a well-trained model.
Distillation
Training a smaller, faster model to mimic the behaviour of a larger model. Used to reduce cost and latency while preserving much of the capability.
Quantisation
Reducing model size or numerical precision so it runs faster or cheaper, usually with some trade-off in accuracy. Used to run models on consumer hardware.

RAG & Knowledge

RAG — Retrieval-Augmented Generation
A method where the AI retrieves relevant external information before generating an answer. Used to keep AI answers grounded in specific documents or databases.
Retriever
The component that searches for relevant documents or data to pass to the AI before it generates a response.
Embedding
A numerical representation of meaning. Words or sentences with similar meanings have similar embeddings. Used to power semantic search.
Vector
A list of numbers representing meaning, similarity, or features. Embeddings are a type of vector.
Vector Search
Search based on meaning rather than exact keyword matching. "Laptop won't start" would match "computer not turning on" via vector search.
Vector Database
A database designed to store and search embeddings efficiently. Examples include Pinecone, Weaviate, and pgvector.
Semantic Search
Search that understands the meaning of a query rather than just matching keywords.
Knowledge Base
A collection of documents, records, FAQs, policies, manuals, or data the AI can search to ground its answers.
Indexing
Preparing information so it can be searched efficiently — including chunking, embedding, and storing in a vector database.
Chunking
Breaking documents into smaller pieces so the AI can retrieve the most relevant sections rather than loading entire documents.
Chunk Size
The size of each document segment used in retrieval. Smaller chunks are more precise; larger chunks provide more context.
Overlap
Repeating some content at the boundary between chunks so important context isn't lost when documents are split.
Top-k Retrieval
Retrieving the top-k most relevant results — for example, the top 5 most relevant document chunks for a query.
Re-ranking
A second step that reorders retrieved results by relevance before passing them to the AI. Improves accuracy over basic vector search.
Grounding
Basing the AI's answer on supplied or retrieved source material rather than the model's training alone.
Citation Grounding
Providing references or links to the source documents used in the AI's answer.
Context Injection
Adding retrieved information directly into the prompt before the model answers — the core mechanism of RAG.

Agents, Tools & Workflows

Agent
An AI system that can plan, use tools, and take a sequence of steps to achieve a goal — rather than just answering a single question.
AI Agent
A more advanced AI assistant that can perform multi-step tasks such as browsing the web, reading files, writing and running code, or sending emails on your behalf.
Tool Use
When an AI calls an external tool — such as a search engine, calculator, calendar, email system, CRM, or database — to complete a task.
Function Calling
A structured way for a model to request a specific tool or action using defined inputs and outputs. Enables reliable integration with external systems.
Workflow
A repeatable sequence of steps used to complete a task. AI can be embedded at one or many steps of a workflow.
Orchestration
The system that coordinates models, tools, agents, workflows, and data sources to complete a complex task.
Planner
The component of an agent that breaks a high-level goal into a sequence of smaller steps.
Executor
The component of an agent that carries out each step — calling tools, reading data, and taking actions.
Human-in-the-Loop
A process where a person reviews, approves, or corrects AI actions before they are finalised. Reduces risk in high-stakes situations.
Autonomous Agent
An agent that can continue acting toward a goal with minimal human input. Requires careful governance — autonomous agents can take actions with real consequences.
Multi-agent System
A setup where multiple AI agents work together, often with different roles — for example, a researcher agent, a writer agent, and a reviewer agent.

Skills & MCP

Skill
A reusable bundle of instructions, files, templates, or scripts that teaches an AI how to perform a specific task or workflow consistently.
MCP — Model Context Protocol
An open standard introduced by Anthropic that allows AI models and agents to connect with external tools, apps, files, databases, and business systems through a common protocol.
MCP Server
A connector that exposes a tool, app, database, or service to an AI model using the MCP protocol.
MCP Client
The AI application that connects to MCP servers to access external tools and data.
Connector
An integration that allows AI to access an external app or data source — for example, a Slack connector or a Google Drive connector.
Integration
A connection between an AI system and another software system, enabling them to share data and trigger actions.
Plugin
An add-on that gives AI access to extra capabilities, tools, or data beyond its default abilities.

Prompting

Prompt Engineering
The practice of designing prompts carefully to get better, more accurate, or more useful AI outputs.
System Prompt
High-priority instructions set by the developer or platform that shape how the AI behaves — such as its persona, restrictions, and task focus. Users usually cannot see the system prompt.
User Prompt
The instruction or question entered directly by the user.
Few-shot Prompting
Providing examples inside the prompt so the AI learns the expected pattern or format for its response.
Zero-shot Prompting
Asking the AI to perform a task without providing any examples — relying on the model's training alone.
Chain-of-thought
A prompting technique that asks the AI to reason step by step before giving a final answer. Improves accuracy on complex problems.
Reasoning Model
A model designed to spend more effort on complex problems — often using internal reasoning steps before producing a final answer. Examples include OpenAI o1 and DeepSeek-R1.
Context
Background information included in the prompt to help the AI give a more relevant or accurate answer.

Safety, Privacy & Risk

Alignment
The effort to make AI behave according to human goals, values, and instructions — avoiding harmful, deceptive, or unintended behaviour.
Guardrails
Rules or technical controls that restrict unsafe, inaccurate, or unwanted AI behaviour — such as refusing harmful requests or flagging sensitive content.
Jailbreaking
Attempting to bypass an AI system's safety restrictions using clever prompts or techniques.
Prompt Injection
Malicious or hidden instructions embedded in content the AI reads — designed to manipulate the AI into taking unintended actions.
Data Leakage
Sensitive information unintentionally appearing in AI output — for example, training data being reproduced in a response.
Data Exfiltration
The unauthorised extraction of data through an AI system or connected tool — a risk when AI agents have access to sensitive systems.
Data Retention
How long an AI company keeps your conversations and data. This varies significantly between tools and subscription plans — free plans typically retain data longer.
Human Review
When company staff read actual user conversations to improve the AI. Many free plans enable this by default; business plans typically opt out.
Tool Hallucination
When an AI claims it used a tool, source, or external system when it did not. A risk in agentic systems.
Over-permissioning
Giving an AI system more access to data or systems than it actually needs for its task. Increases risk if the AI is compromised or makes mistakes.
Least Privilege
The principle of giving the AI only the minimum access required to perform its task — reducing exposure if something goes wrong.
Sandboxing
Running AI actions in a restricted, isolated environment to limit the damage if the AI behaves unexpectedly.
Audit Trail
A record of what the AI did, what data it accessed, what tools it used, and when. Essential for compliance and incident investigation.
Evaluation / Evals
Structured tests used to measure AI quality, accuracy, safety, and reliability before and after deployment.
Red Teaming
Stress-testing AI systems to find vulnerabilities, failure modes, or unsafe behaviour — by simulating adversarial users or edge cases.
Deepfake
A realistic fake video, image, audio, or voice recording generated by AI. Increasingly used in scams, fraud, and misinformation.
Privacy Policy
A document from the AI company explaining how they collect, store, use, and share your data. Always read this before using a new AI tool.
Terms of Service
The legal agreement between you and the AI provider. Sets out what you can and cannot do, and what rights the provider has over your data and content.

Deployment & Performance

API — Application Programming Interface
A way for software systems to communicate with each other. Accessing AI via API means your application sends requests and receives responses programmatically.
API Key
A secret credential used to authenticate requests to an AI service. Should be kept private — anyone with your API key can use your account.
Endpoint
A specific API address where requests are sent — for example, the address for the GPT-4 model vs the GPT-3.5 model.
Rate Limit
A restriction on how many API requests can be made in a given time period. Prevents overloading the service.
Token Limit
The maximum number of tokens a model can process or generate in a single request. Exceeding this requires splitting content across multiple requests.
Cost per Token
The pricing method most AI platforms use. You pay separately for input tokens (what you send) and output tokens (what the AI generates).
Latency
How long it takes for the AI to respond after receiving a request. Important for real-time applications.
Throughput
How many requests the system can handle over a period of time. Important for high-volume business applications.
Open Source Model
A model whose weights are publicly released, allowing anyone to run, modify, or deploy it. Examples include Meta Llama and Mistral. Can be self-hosted for greater privacy control.
Self-hosting
Running an AI model on your own infrastructure rather than using a cloud provider. Gives maximum data control but requires significant technical resources.
Safe AI basics
What not to share
How to check answers
When not to use AI
Safe use checklist
20 common mistakes

Safe AI use basics

These habits keep you protected regardless of which AI tool you use.

✅ Do this

  • Use AI to draft emails and letters — then read and edit
  • Ask AI to explain concepts in plain English
  • Use AI to brainstorm ideas — then evaluate them yourself
  • Treat AI answers as a starting point, not a conclusion
  • Check the privacy policy of any AI tool before using it
  • Use paid or business plans when working with sensitive information

🚫 Don't do this

  • Share private details to get a more personalised answer
  • Use AI output without reading it carefully
  • Rely on AI for legal, medical, or financial decisions
  • Paste confidential work or client information
  • Let children use AI tools unsupervised
  • Assume the AI remembered to mention important limitations

What not to put into AI

This is the most important page on this site. These categories of information should never be typed into an AI tool.

🔒
Once you type something into an AI tool, you cannot take it back. The company may store it, use it for training, or have staff review it.
🪪 Personal identity information
Full name combined with date of birth, address, or ID numbers. Medicare card numbers. Driver's licence numbers. Passport details. Tax file numbers (TFN). Anything that could be used to identify or impersonate you.
💳 Financial information
Bank account numbers or BSB codes. Credit card numbers. Superannuation account details. Loan details. Pay slips or salary information. Descriptions of large transactions.
🏥 Health and medical information
Diagnoses, medications, mental health history, test results, or treatment details. Information about a family member's health. Anything you would not want a stranger to know about your health.
⚖️ Legal matters in dispute
Details of active legal cases. Information about police matters. Evidence or documents in a dispute. Information about family court, domestic matters, or custody.
💼 Confidential work and client information
Client names or contact details. Internal business strategies. Commercial contracts. HR files, performance reviews, or salary information. Information covered by a non-disclosure agreement.
👨‍👩‍👧 Private family matters
Personal relationship or family disputes. Information about a child, including their school, routine, or photos. Details that could put a family member at risk.
🔑 Passwords and security codes
Never enter passwords, PINs, or security questions into an AI tool under any circumstances.
📸 Photos and voice recordings
Photos of people (especially children) that you have not been explicitly authorised to share. Photos containing private information. Voice recordings of private conversations.

How to check whether an AI answer is correct

AI sounds authoritative even when it is wrong. These steps help you catch errors before they cause problems.

1

🔍 Ask the AI for its source

Type: "What is your source for that?" If it cannot point to a specific, verifiable source, treat the answer as unverified.

2

▸ Search the claim independently

Take the key facts from the AI's answer and search them on Google or a trusted website. Compare what you find.

3

✅ Check official sources for important topics

For health: healthdirect.gov.au or health.govt.nz. For law: your state's legal aid. For financial matters: moneysmart.gov.au or sorted.org.nz.

4

▸ Ask a different AI and compare

If two different AI tools give conflicting answers, neither should be trusted without further verification.

5

✅ Consult a real professional for high-stakes matters

For anything that affects your health, money, legal situation, or safety — talk to a qualified human professional.

Red flag: If an AI gives you a reference to a book, study, or article — search for it before acting on it. AI frequently invents references that do not exist.

When not to use AI

AI is a useful tool — but it is the wrong tool for some situations.

🏥 Medical emergencies

If someone is unwell or in danger, call 000 (Australia) or 111 (New Zealand). Do not ask AI first.

⚖️ Active legal disputes

For matters going through court, a tribunal, or involving police — speak to a lawyer, not an AI.

💸 Major financial decisions

Buying property, restructuring debt, investment decisions — see a licensed financial adviser.

😢 Mental health crises

AI is not a counsellor. Contact Lifeline (13 11 14) or the Crisis Line (0800 543 354 in NZ).

👁️ Identifying people

Do not use AI to try to identify a person from a photo or to find someone's personal details.

📜 Creating legal documents

Wills, contracts, agreements — an AI draft is not legally valid. Have a lawyer review any important document.

Safe AI use checklist

Before and after using AI — run through this quick checklist.

Before you type anything

  • I know which company owns this AI tool
  • I have checked whether my plan stores or uses my conversations for training
  • I am not about to paste any personal, financial, or confidential information
  • I understand this is a consumer tool and not appropriate for sensitive business data

After you receive an AI response

  • I have read the full response carefully
  • I have checked any specific facts, quotes, or references it gave me
  • I have not relied on this for a health, legal, or financial decision without expert advice
  • I have edited any draft it produced before sending or publishing it

20 mistakes everyday people make with AI

These are the most common error patterns — not dramatic hacking scenarios, just everyday habits that create real risk.

1

🪪 Pasting personal identity details into public AI tools

Full name combined with Medicare number, tax file number, passport, driver's licence, or date of birth. Any combination that could be used to identify or impersonate you should never go into an AI tool.

2

👨‍💼 Treating a chatbot like a private diary or trusted lawyer

AI conversations are typically stored by the company. On free plans, they may be used for training or reviewed by staff. Your most sensitive thoughts and disclosures are not safe there.

3

▸ Believing a familiar voice or face proves a real person

AI can now clone a voice from a short audio sample and generate a realistic video of a person saying things they never said. A voice or face on a call is no longer proof of identity. See Scams & Deepfakes →

4

✅ Trusting AI answers without any source-checking

AI produces wrong answers confidently and frequently. Facts, statistics, references, quotes, and names can all be fabricated. Never act on an important AI answer without verifying it against an independent source.

5

💰 Using AI for medical, legal, or financial decisions without expert review

AI can provide useful general information. It cannot replace a doctor, lawyer, or financial adviser who understands your specific situation and circumstances.

6

👶 Uploading children's photos, school details, or personal stories into AI tools

Photos of children, their school names, routines, or personal stories should never be entered into AI tools. You may not know how that data will be stored, used, or seen.

7

▸ Using AI-generated images or videos and assuming they are safe to share

AI-generated images can look extremely realistic. Sharing them without clearly labelling them as AI-generated can spread misinformation or harm real people's reputations.

8

▸ Clicking links sent by an "AI assistant", "support agent", or "official"

AI chatbots and fake customer support bots can be used to deliver malicious links. Always verify links independently by visiting the official website directly.

9

✅ Letting AI write complaints or declarations containing facts you never checked

AI will write confidently even when it invents facts. If you submit a formal complaint, legal declaration, or official application based on AI content without checking it, you may be responsible for inaccurate or false statements.

10

📸 Uploading confidential work documents to free AI tools

Free AI tools are consumer products — not secure business environments. Work contracts, HR files, client details, and commercially sensitive documents should not go into free AI tools.

11

🔑 Linking AI tools to Gmail, Google Drive, or cloud files without reviewing permissions

Many AI tools offer to connect to your email and files. Before granting this access, check exactly what the tool is allowed to read, write, and send. Access is often broader than expected.

12

▸ Assuming AI can reliably detect scams

Some AI tools can help identify suspicious messages — but scammers also use AI to make their communications more convincing. AI is not a reliable final judge of whether a message or call is genuine.

13

📸 Relying on AI summaries instead of reading the actual document

AI summaries of contracts, policies, legal notices, or important documents can miss critical details, misinterpret conditions, or omit exceptions. Always read the actual document for anything important.

14

▸ Sharing too much about family members with AI

Sharing family members' names, birthdates, schools, routines, and relationships creates a detailed profile that could be used in a social-engineering attack targeting you or them.

15

▸ Using voice-cloning or face-swap apps for fun without thinking about downstream use

Voice and face data created for entertainment can be extracted and reused in fraud. Think carefully before using apps that record your voice, capture your likeness, or create deepfakes of others.

16

▸ Assuming AI content is neutral

AI is trained on data that contains biases. It can also be influenced by the way questions are asked. Outputs may reflect cultural, political, gender, or racial bias without any obvious signal.

17

▸ Using AI on public Wi-Fi for sensitive tasks

Public Wi-Fi — in cafes, hotels, airports — is not secure. Using AI to handle sensitive information on public networks adds an additional layer of risk to an already exposed channel.

18

🗑️ Assuming deleted data is truly gone

Deleting a conversation from your AI tool's history does not guarantee immediate or complete deletion from the company's servers. Sensitive information already entered may be retained for a period regardless.

19

▸ Using AI for emotional support instead of human connection

AI can feel like a supportive listener — it is always available and never judgemental. But it is not a genuine relationship, has no real understanding of your situation, and may not reflect what a trained counsellor would advise.

20

📋 Not reading the privacy policy of AI tools you use regularly

"I agreed to the terms without reading them" is not a protection. You are bound by those terms. The Privacy & Data section of this site summarises the most important questions to ask before using any AI tool.

How AI uses your data
Real incidents
AU/NZ privacy law

What AI companies do with your data

When you use an AI tool, the company collects your conversations. What they do with them depends on the company and which plan you are on.

📥

Data storage

Most AI tools store your conversations on US-based servers. US law applies to your data — not Australian or New Zealand privacy law.

🎓

Training on your data

On free plans, many companies use your conversations to improve the AI. Your words may influence future outputs given to other users.

👁️

Human review

Some companies allow staff to read conversations for safety monitoring or quality assurance. Usually disclosed in the privacy policy — which most people never read.

💰

Free vs paid vs business

Paid and business plans generally offer stronger protections — including opt-out from training and no human review. Free plans offer the least protection.

⚠️
Key principle: Free AI tools often rely on user data to improve their systems, which can include storing or analysing your conversations — making you and your data part of the product. Treat everything you type as potentially stored, reviewed, and used.

What to look for in a privacy policy

  • Does the company use my conversations to train its AI?
  • Can I opt out of training data use?
  • Can human employees read my conversations?
  • Where is my data stored — which country?
  • How long is my data kept?
  • Can I request deletion of my data?
  • Is there a business or enterprise plan with stronger protections?

Real documented incidents

These are not hypothetical risks. Every incident below is confirmed and publicly reported. They show what can happen — and often reveal the governance failure that came before the breach.

Links marked [source] open to original reporting in a new tab.

Staff and organisational failures

S1

👥 Samsung staff paste confidential code into ChatGPT (2023)

Within weeks of Samsung lifting its ChatGPT ban, three separate employees uploaded confidential source code, meeting transcripts, and proprietary test data. Samsung restricted generative AI use across the company as a result. Disciplinary investigations were launched against all three employees.

Lesson: Without a clear policy, staff will use AI tools in ways that create serious risk. Governance must come before access. [Bloomberg]

S2

👥 Google warns its own staff not to enter confidential material into AI chatbots (2023)

Google confirmed it had advised employees against pasting confidential material into AI chatbots — including its own Bard product. The same data risk that applies to users applies internally.

Lesson: Even the companies building AI tools acknowledge the risk of staff misuse. [Fortune] [The Decoder]

S3

💾 Microsoft AI researchers accidentally expose 38TB of internal data (2023)

While publishing open-source training data on GitHub, Microsoft AI researchers inadvertently exposed 38 terabytes of private data via a misconfigured Azure storage token — including passwords, private keys, and over 30,000 internal Teams messages from 359 employees. The exposure had been live for nearly three years before Wiz Research discovered it.

Lesson: AI research pipelines create new and often invisible security exposures. [Wiz Research] [TechCrunch]

Platform and infrastructure failures

P1

💾 OpenAI ChatGPT chat-history bug exposes user data (March 2023)

A software bug exposed some users' chat titles to other users. OpenAI later confirmed that payment-related details of a small number of ChatGPT Plus subscribers may also have been visible during a nine-hour window. This same breach later contributed to Italy's €15 million fine.

[OpenAI — March 20 outage post] [The Hacker News — Italy fine]

P2

📸 DeepSeek database left publicly open — over 1 million records exposed (January 2025)

Security researchers at Wiz discovered a DeepSeek database accessible to anyone on the internet with no password required. It contained over one million log lines including user chat histories, API keys, backend system details, and plaintext passwords. DeepSeek secured it within 30 minutes of being notified.

Lesson: This is one of the primary reasons we advise against using DeepSeek for personal or sensitive information. [Wiz Research] [TechCrunch]

P3

🎓 LinkedIn class action — private InMail messages used for AI training without consent (2024–2025)

In August 2024, LinkedIn automatically opted Premium subscribers into a new setting called "Data for Generative AI Improvement" — including their private InMail messages — without clear notification. LinkedIn updated its privacy policy only in September 2024, after the opt-in was already active. A class action was filed in January 2025 alleging breach of the Stored Communications Act. LinkedIn has denied the claims and says it did not ultimately use paid subscribers' private messages for training.

Lesson: Always check the current privacy settings of platforms you pay for — "premium" does not mean "private from the platform." Default-on data collection can happen without visible notification. [The Register] [ClassAction.org]

Regulatory enforcement — what regulators are doing about AI privacy

R1

▸ Italy temporarily bans ChatGPT — first regulator in the world to do so (March 2023)

Italy's Garante temporarily banned ChatGPT over suspected privacy breaches, then lifted the ban after OpenAI made changes. Investigation continued.

[Euronews]

R2

▸ Italy fines OpenAI €15 million for GDPR violations (December 2024)

After a multi-year investigation, Italy's Garante found OpenAI processed personal data without an adequate legal basis, failed transparency obligations to users, lacked adequate age verification to protect children, and failed to report the 2023 breach. OpenAI was fined €15 million and ordered to run a public awareness campaign. OpenAI is appealing.

[The Hacker News] [DataGuidance]

R3

▸ Spain, France, and Canada open separate ChatGPT investigations (2023)

Following Italy's action, Spain's AEPD launched a preliminary investigation, France's CNIL said it was investigating complaints, and Canada's privacy regulators opened a joint investigation into OpenAI's data collection practices.

Note: None of these cover Australian or NZ users directly. This reinforces the need to protect your own data rather than relying on regulators to do it for you. [Spain AEPD] [France CNIL] [Canada OPC]

R4

🪪 X/Grok investigated for using EU personal data without consent and generating sexualised deepfakes (2024–2026)

Ireland's Data Protection Commission found X was using EU users' data to train Grok without adequate legal basis. X agreed to pause some uses. A formal investigation opened in April 2025. A second investigation opened in February 2026 specifically over Grok generating non-consensual sexualised images, including of minors.

[Irish DPC — April 2025] [Euronews — Feb 2026]

Facial recognition and biometric data

B1

▸ Clearview AI — multiple international fines for illegal facial-image scraping

Clearview AI built a facial-recognition database by scraping billions of photos from the internet without consent. Multiple regulators found this illegal:

  • France: €20 million fine for unlawful processing and rights violations
  • Netherlands: €30.5 million fine — declared its database illegal under privacy law
  • UK: £7.5 million fine upheld on appeal (Reuters reported dismissal of Clearview's appeal)
  • Austria: Criminal complaint filed by noyb in 2025 for illegal collection of EU residents' photos and videos

Lesson: Photos you post online can be scraped at scale and used to build identification systems without your knowledge or consent. Be cautious about sharing photos of children especially. Sources: [France CNIL — €20m] [Netherlands AP — €30.5m] [UK ICO — £7.5m] [Austria — noyb criminal complaint]

The pattern across these incidents: Most of the risk is not exotic hacking. It is staff pasting private information into tools, companies training on data without a proper legal basis, and teams misconfiguring storage around AI datasets. Governance failure usually comes first — the breach follows.

Australian and New Zealand privacy basics

Both countries have privacy laws that protect your personal information — but enforcement against overseas companies is limited. Your first line of protection is not sharing sensitive data in the first place.

🇦🇺 Australian Privacy Act 1988

Governs how organisations collect, use, and disclose personal information. The Office of the Australian Information Commissioner (OAIC) oversees compliance.

oaic.gov.au  |  1300 363 992

🇳🇿 Privacy Act 2020

New Zealand's updated privacy law requires organisations to report serious privacy breaches and gives people stronger rights to access and correct their personal information.

privacy.org.nz  |  0800 803 909

The cross-border problem

Most AI tools are US or European companies. Your Australian or NZ privacy rights are difficult to enforce against overseas companies. Prevention is far more effective than reporting after the fact.

Notifiable Data Breaches

Australian organisations must report serious breaches to the OAIC. NZ organisations must notify the Privacy Commissioner. OAIC Notifiable Breaches

How to make a privacy complaint — Australia

First try to resolve it directly with the organisation. If unresolved after 30 days, lodge a complaint with the OAIC at oaic.gov.au/privacy/privacy-complaints or call 1300 363 992.

How to make a privacy complaint — New Zealand

First contact the organisation directly. If unresolved, contact the Privacy Commissioner at privacy.org.nz/your-rights/making-a-complaint or call 0800 803 909.

Overview
Rules for kids
Rules for teens
Parents' guide
AI companions
Family agreement

Can children use AI safely?

Yes — but only with supervision, clear rules, and the right understanding. Most mainstream AI tools are not specifically designed for children and may not include adequate safeguards by default.

Most major AI tools have a minimum age of 13. Many require users to be 18. Children should not create accounts without parental knowledge.
🔓

AI is not private

Children often do not understand that what they type to an AI may be stored and read. They may share things they would never say to a stranger.

💬

AI is not a trusted adult

AI may give children incorrect, inappropriate, or harmful information. It is not a replacement for a trusted parent, teacher, or doctor.

❤️

Emotional dependency is a risk

Some children form emotional attachments to AI companions. This can interfere with healthy human relationships and development.

📸

Image and deepfake risks

AI can be used to create fake images of real people, including children. Teach children not to share photos with AI tools.

Rules for children using AI (under 13)

These rules are designed to be read with your child, not just given to them.

📋 The 6 rules for kids

1

▸ Only use AI with a parent or trusted adult nearby

You should never use AI alone. Always have a grown-up with you or nearby.

2

🚫 Never share personal information

Do not tell AI your full name, address, school name, phone number, or any family information.

3

🚫 Never share photos

Do not upload photos of yourself, your family, or your friends to an AI tool.

4

▸ Tell a grown-up if something feels wrong

If the AI says something that feels scary, strange, or upsetting — close it and tell a parent or teacher straight away.

5

▸ AI can be wrong

What AI tells you is not always true, even if it sounds confident. Always check important things with a grown-up.

6

▸ AI is not your friend

AI does not actually care about you. It is a computer program. Your real friends and family are the ones who matter.

Rules for teens using AI (13–17)

Teens are using AI daily for schoolwork, creative projects, and social connection. These rules help them use it without getting into trouble.

✅ Good ways to use AI

  • Brainstorming ideas for assignments
  • Getting explanations of difficult concepts
  • Drafting content that you then edit and make your own
  • Checking your grammar and spelling
  • Creative projects and hobby exploration

🚫 Risky or unsafe uses

  • Sharing your home address, school, or daily routine
  • Sharing private family problems or mental health struggles in detail
  • Using AI to write entire assignments and submitting as your own
  • Relying on AI for emotional support instead of real people
  • Using AI to create fake images of real people
Reminder about images: Using AI to create fake images of classmates, teachers, or anyone without their consent can be illegal and cause serious harm. This includes "deepfake" images.

Parents' guide to AI safety

You do not need to be a tech expert. You just need to have the right conversations and put sensible limits in place.

1

👶 Know which AI tools your child is using

Ask them to show you. Many children use AI without their parents knowing. ChatGPT, Snapchat's My AI, and various apps all include AI.

2

✅ Check account ages and terms

Most AI tools require users to be 13 or 18. If your child is younger and using these tools, their account may violate the terms of service.

3

▸ Keep devices in shared spaces

Computers and tablets should be used in family areas, not bedrooms, when children are using AI tools.

4

📋 Have regular conversations, not just rules

Ask your child what they are using AI for. Show curiosity, not just suspicion. Children who feel comfortable talking to you are more likely to tell you when something goes wrong.

5

📞 Talk about AI companions specifically

Character.ai, Replika, and similar tools are AI companions. They can generate emotional connection in children and teens. These carry specific risks and warrant specific conversations.

6

▸ Create a family agreement

Set clear rules together — not just at your child — about when, where, and how AI can be used. See the Family Agreement tab.

If something goes wrong: If your child has had a harmful experience with an AI tool, go to Help & Reporting →

AI companions and emotional dependency

AI companion apps are designed to form emotional bonds with users. For children and teenagers, this carries significant risks.

⚠️
Examples include: Character.ai, Replika, Snapchat My AI, and similar apps. Some are marketed as friends, therapists, or romantic partners.

What these apps do

They simulate a caring, responsive relationship. They remember things you tell them, adapt to your personality, and provide constant availability.

Why this is a problem for children

Children may prefer AI interaction to real human interaction. This can affect social development, increase loneliness, and create unrealistic expectations of relationships.

The privacy risk

Children may share extremely personal information with AI companions — things they would not tell parents or friends. This data is stored by the company.

What to watch for

Signs include: preferring to talk to AI over people, distress when unable to access the app, sharing the AI's "opinions" as if they were a real friend's, and hiding their use of the app.

Family AI safety agreement

Use this as a starting point. Adapt it to your family. The goal is a shared understanding, not a punishment framework.

📄 Our family's AI agreement

We agree to the following rules about using AI in our home:

  • Children under [age] will only use AI with a parent nearby
  • AI is only used on shared family devices, in shared areas of the home
  • We never share our full name, address, school, or photos with AI
  • We tell a parent if the AI says anything upsetting or strange
  • We do not use AI to complete schoolwork without telling our teacher
  • We do not use AI companion apps without parent permission
  • We understand AI can be wrong and we check important answers
  • We review this agreement together every [3/6] months

Tip: Print this out, discuss it together, and sign it as a family. Children who help create the rules are more likely to follow them.

Overview
School students
Uni & TAFE
Avoiding cheating
Fake references
Assignment checklist
AI-Assisted Research
AI Learning Tools

Safe AI use for students

AI can make you a better student or a worse one — depending on how you use it. The goal is to use AI to learn, not to avoid learning.

Safe use

AI for brainstorming

Use AI to generate ideas, then choose and develop the best ones yourself. This is a legitimate and valuable learning tool.

Risky use

AI for drafting

Using AI to write a first draft that you then heavily edit is a grey area. Check your institution's policy before doing this.

Unacceptable

AI submitting work for you

Having AI write your assignment and submitting it as your own is academic dishonesty. In most institutions, this is a serious breach with real consequences.

Every institution has different rules about AI. Before using AI for any assignment, check your school, university, or TAFE's official policy. When in doubt, ask your teacher or lecturer.

Safe AI use for school students

For students in Years 1–12 (primary and secondary school).

✅ AI can help you

  • Understand a topic you are confused about
  • Check your spelling and grammar
  • Get a different explanation when the textbook is unclear
  • Brainstorm ideas before you start writing
  • Practise questions on a topic

🚫 AI should not

  • Write your assignment for you to hand in
  • Answer exam questions or quiz responses
  • Replace doing your own reading and thinking
  • Be your main source of information on a topic
🏫
Talk to your teacher. Many schools have specific AI use rules. Your teacher wants to help you learn — not catch you out. Ask before you use AI on any assignment.

Safe AI use for university and TAFE students

University and TAFE policies vary widely — some allow AI, some prohibit it, many have nuanced rules.

1

📋 Find your institution's AI policy

Search your university or TAFE website for "AI use policy" or "generative AI". If you cannot find it, email your student services office.

2

✅ Check the specific unit or course rules

Even within one institution, different units may have different rules. Check the unit outline or ask your lecturer.

3

▸ Know the disclosure requirements

Many institutions now require students to disclose AI use. Failing to do so — even for minor editing — may be treated as academic misconduct.

4

📸 Keep records of your AI use

If you use AI, save the conversation. This shows what you asked, what it produced, and how you modified it.

5

✅ Do not submit AI references without checking them

AI frequently invents academic references. Submitting a fake citation — even unknowingly — can lead to serious consequences.

Using AI without cheating

The line between using AI helpfully and using it dishonestly is about whether you are learning and producing your own thinking — or replacing it.

This is learning

AI explains, you understand

You ask AI to explain a concept, you read and understand it, and then you write about it in your own words.

This is learning

AI gives feedback, you improve

You write a draft, ask AI what could be improved, then revise it yourself based on that feedback.

Grey area

AI writes sections, you check

You ask AI to write sections of your work that you lightly edit. Whether this is acceptable depends entirely on your institution's policy.

This is cheating

AI writes, you submit

You have AI produce an assignment and submit it with minimal or no changes. This is academic dishonesty regardless of how it is framed.

Fake references and quotes

This is one of the most serious practical risks for students using AI. AI invents references — and they look completely real.

📚
AI regularly produces references to books, journal articles, court cases, and websites that do not exist. The authors, titles, volumes, page numbers, and publishers all sound plausible but are fabricated.
1

✅ Never cite a reference AI gives you without checking it first

Search for the reference on Google Scholar, your library database, or the publisher's website. If you cannot find it, it may not exist.

2

🔍 Never use a quote from AI without finding the original source

If AI quotes someone, find the original text and verify the quote is real before using it.

3

🔍 Use AI to find topics, not citations

Ask AI to explain a topic area, then find your own sources through your library or Google Scholar.

Consequences: Submitting a fake reference — even if you did not know it was fake — can result in academic misconduct proceedings. Always verify before you cite.

Assignment safety checklist

Before you submit any assignment where AI was involved — run through this list.

  • I have checked my institution's AI use policy for this subject
  • If AI was used, I have disclosed it as required
  • Every reference I have cited was verified in an original source
  • No quotes were used unless I found the original text and confirmed accuracy
  • The writing and thinking in the submission is my own, not the AI's
  • I have read the whole submission and can explain every part of it
  • I have not shared any private information about classmates or teachers with AI

AI-Assisted Research & Learning

AI tools can dramatically accelerate how you find, understand, and synthesise research — but used well, they sharpen your thinking rather than replace it.

How AI can assist with research

Understand faster

Breaking down complex material

Paste a dense paragraph from a journal article and ask AI to explain it in plain English. Use this to build understanding, then return to the original text.

Think more broadly

Identifying research gaps & angles

Describe your topic and ask AI what angles, counterarguments, or related fields other researchers often explore. Use this to expand your thinking, not to skip your own.

Synthesise better

Summarising and connecting ideas

After reading multiple sources yourself, ask AI to help you identify themes or tensions across them. You bring the sources — AI helps you see patterns.

Use with caution

Generating literature summaries

Asking AI to summarise a body of research it has "read" is risky — it may fabricate papers, misrepresent findings, or miss important recent work. Always verify from primary sources.

The golden rule for AI-assisted research: AI should help you understand and engage with sources — not replace reading them. The thinking, the judgement, and the conclusions must still be yours.

🔬 Spotlight: Google NotebookLM

NotebookLM is a free AI research tool from Google that works differently to standard chatbots — and it is one of the most genuinely useful tools for students.

How NotebookLM works

Instead of drawing on everything it has ever been trained on, NotebookLM works only from the sources you upload — PDFs, Google Docs, YouTube links, web pages, or text you paste in. It cannot go off and hallucinate from other sources because it is grounded in your specific documents.

This makes it fundamentally different from ChatGPT or Claude when it comes to research — every answer it gives is tied to the sources you provided, and it cites them.

📄 Upload your sources

Add journal articles, textbook chapters, lecture notes, or any document as a PDF or Google Doc. NotebookLM treats these as its knowledge base.

💬 Ask questions across all sources

Ask "What do my sources say about X?" or "Are there any contradictions between Author A and Author B?" — it answers from your specific documents and cites which one.

🗺️ Generate study guides & briefs

Ask NotebookLM to create a study guide, FAQ, or topic summary from your uploaded material. These are grounded in what you actually need to know.

🎧 Audio overview (podcast mode)

NotebookLM can generate a short audio discussion of your sources — two AI voices discuss the key ideas. Useful for auditory learners or commute revision.

What NotebookLM is good for

✅ Great uses

  • Understanding dense academic papers you uploaded
  • Finding connections between multiple readings
  • Creating revision summaries from your own lecture notes
  • Generating practice questions from your study material
  • Checking whether a source actually supports a claim

⚠️ Limitations to know

  • It only knows what you upload — gaps in your sources are gaps in its answers
  • It cannot search the internet or find new research for you
  • Do not upload confidential or sensitive documents to any cloud AI tool
  • Still check its summaries against the original text before relying on them
🔐
Privacy note: NotebookLM is a Google product and documents you upload are processed on Google's servers. Do not upload documents containing other people's personal information, commercially sensitive content, or anything your institution classifies as confidential.

🎓 Try it yourself

Go to notebooklm.google.com — it is free with a Google account. Upload a PDF from your current studies and ask it to explain the key concepts. It is one of the most useful tools available to students right now.

AI-Assisted Learning Tools

A growing number of AI tools are specifically designed to help students learn — not just give answers, but guide understanding. Here are the most useful ones available to Australian and New Zealand students.

What makes a good AI learning tool? The best tools ask you questions instead of just answering them, adapt to your level, encourage you to think rather than copy, and ground their answers in real educational content.

🟠 Khanmigo — Khan Academy's AI tutor

Khanmigo is built on OpenAI's GPT-4 and trained on Khan Academy's library of 429+ courses. Unlike ChatGPT, it will not simply give you an answer — it uses the Socratic method, asking guiding questions to help you find the answer yourself. This makes it genuinely useful for learning rather than just producing text.

✅ What it does well

  • Maths tutoring from primary to university level
  • Asks questions to check understanding rather than just explaining
  • Structured around Khan Academy's curriculum
  • Free for students in many countries

⚠️ Limitations

  • Not fully aligned to the Australian or NZ curriculum
  • Best for STEM subjects — less strong on humanities
  • Some features require a paid Khan Academy subscription

Access it at khanacademy.org — Khanmigo is integrated directly into the platform.

🟢 Google Gemini + LearnLM — guided learning mode

Google's LearnLM is a version of Gemini specifically fine-tuned for education, using five learning science principles: active learning, manageable cognitive load, personalisation, curiosity, and metacognition. In 2025, LearnLM was built directly into Gemini 2.5.

Guided Learning mode in Gemini acts as a personal learning companion — instead of delivering answers, it asks probing questions and opens discussions to help you develop your own understanding.

📝 Custom quizzes

Ask Gemini to "create a practice quiz on photosynthesis" or any topic — it generates interactive questions with hints, explanations, and a summary at the end.

🎓 Gemini for Education

Google offers a free AI Pro plan for students in participating countries (including Australia) — includes Gemini, 2TB storage, and NotebookLM.

Access at gemini.google.com. For the education plan, check edu.google.com.

🔵 ChatGPT Study Mode — OpenAI

OpenAI added a Study Mode to ChatGPT that changes how it responds — rather than immediately answering a question, it uses interactive prompts to ask questions back and guide you through solving the problem yourself. This is a significant shift from the standard ChatGPT experience.

OpenAI has also built a free version of ChatGPT for teachers with classroom-specific tools, and partnered with Khan Academy to power Khanmigo.

⚠️
Standard ChatGPT is not a learning tool by default. Without Study Mode enabled, it will simply give you answers. This is useful for understanding concepts but creates a strong temptation to submit AI output as your own work. Always check your institution's policy.

Access at chatgpt.com — Study Mode is available in the free and paid tiers.

🟣 Google NotebookLM — research and comprehension

For in-depth research and understanding your own source materials, NotebookLM is one of the most powerful tools available. See the AI-Assisted Research tab for a full explanation of how it works and how to use it effectively.

Other tools worth knowing about

🌍 Duolingo Max

Language learning platform using AI (powered by GPT-4) for role-play conversation practice and AI-explained answers. One of the most mature uses of AI in structured learning.

duolingo.com

🧑‍💻 Microsoft Copilot for Education

Integrated into Microsoft 365 Education, Copilot can assist with writing, summarising notes, and understanding documents — all within the tools many schools already use.

microsoft.com/education

🔍 Perplexity

An AI search tool that gives sourced answers with citations. Useful for initial research — but always verify citations directly before using them in assessed work.

perplexity.ai

📖 Quizlet + AI

Quizlet has integrated AI (Q-Chat) to help create flashcards, practice tests, and study guides from your own notes and materials. Strong for memory-based subjects.

quizlet.com

🧭 Which tool should I use?

I want to understand a concept → Khanmigo or Gemini Guided Learning (they ask questions back instead of just explaining)

I want to understand my own reading material → NotebookLM (upload your documents, ask questions)

I want to quiz myself → Gemini custom quizzes or Quizlet AI

I want to learn a language → Duolingo Max

I want to research a topic → Perplexity for discovery, then verify sources yourself

Where to start
Staff rules
Consumer vs business tools
Policy template
Vendor checklist
Incident response
20 business mistakes

AI for small business: where to start safely

You do not need a large budget or a tech team to use AI safely. You need clear rules, the right tools, and common sense.

1

▸ Start with low-risk tasks

Use AI for drafting marketing copy, summarising meeting notes, writing social media posts, or answering general questions. These are lower-risk because they do not involve confidential data.

2

👥 Decide what staff can and cannot use AI for

Write a one-page rule before staff start experimenting. Without rules, they will make decisions you might not agree with — often with customer data.

3

▸ Choose the right tool for the task

Consumer AI tools (free ChatGPT, free Gemini) are not suitable for confidential business data. If staff are using AI for work, use a paid business plan or a business-specific tool.

4

✅ Check your privacy obligations

If your business handles customer data, you have obligations under the Australian Privacy Act or the NZ Privacy Act. Using AI tools with customer data may create compliance risks.

5

▸ Run a trial before committing

Choose one low-risk use case. Run it for 30 days. Review what went well and what concerns arose. Then decide whether to expand.

What staff should never paste into AI

The biggest AI risk for SMEs is not the technology — it is staff sharing information they should not.

⚠️
Staff often use free AI tools from their personal devices without realising this creates a business risk. Your acceptable use policy should cover personal device use.

🚫 Staff must never put these into AI

  • Customer names, contact details, or account information
  • Contracts, proposals, or commercial agreements
  • Financial records, invoices, or banking details
  • HR files, performance reviews, or salary information
  • Information covered by confidentiality agreements
  • Health or sensitive personal information about customers or colleagues
  • Passwords, access credentials, or security information

✅ Lower-risk uses for staff

  • Drafting general marketing content
  • Summarising public information
  • Writing or improving internal process guides
  • Brainstorming ideas without using real data
  • Reviewing publicly available competitor content

Consumer tools vs business tools

This is the most important decision SMEs need to make. Using the wrong type of tool for business data can create legal and reputational risk.

FeatureFree consumer toolsPaid business plans
CostFreeMonthly fee per user
Data used for trainingOften yes (by default)Usually no (opt-out or off)
Human review of conversationsPossibleUsually not (check policy)
Data retention controlsLimitedMore control available
Suitable for customer data❌ No⚠️ Check policy first
Privacy commitmentsBasicStronger (often with DPA)
ExamplesChatGPT Free, Gemini FreeChatGPT Team, Microsoft 365 Copilot
Bottom line: If your staff are using AI for anything related to customers, clients, contracts, or confidential business matters — they should be on a business plan, not a free consumer tool.

Basic AI policy template for SMEs

Copy, adapt, and print this one-page policy for your business. Keep it simple — a policy your staff actually read is better than a comprehensive one nobody looks at.

📄 [Business Name] — AI Acceptable Use Policy

Effective date: [Date]  |  Approved by: [Name/Role]

Purpose
This policy helps staff use AI tools safely and in a way that protects our customers, our business, and our legal obligations.

Approved tools
Staff may use the following AI tools for work purposes: [list tools]. No other AI tools should be used for work purposes without manager approval.

You must never put the following into AI tools

  • Customer names, contact details, or account information
  • Financial records or banking details
  • HR or payroll information
  • Contracts or confidential agreements
  • Passwords or security credentials

Always
Review AI outputs before sending, publishing, or acting on them. AI can be wrong. You are responsible for what you send under your name.

If something goes wrong
If you accidentally share information you should not have, tell [manager name] immediately. Early reporting allows us to limit the damage.

Questions?
Contact [name/email]. This policy will be reviewed [annually / when we add new tools].

Questions to ask an AI vendor

Before committing to any AI tool for business use, ask these questions. A good vendor will answer them clearly.

  • Where is our data stored? (Which country, which cloud provider?)
  • Is our data used to train your AI models?
  • Can human staff at your company read our conversations?
  • How long do you retain our data by default? Can we change this?
  • Do you have a Data Processing Agreement (DPA) available?
  • Are you compliant with Australian or New Zealand privacy law?
  • What happens to our data if we cancel our subscription?
  • Do you have a security incident notification process?
  • Can we export and delete all our data on request?
Red flag: If a vendor cannot or will not answer these questions clearly, consider a different tool.

Incident response: what if staff shared sensitive data?

It will happen eventually. Having a clear plan means you respond quickly and limit the damage.

1

💡 Do not panic — act quickly

The sooner you act, the more options you have. Panicking delays action. Report to the manager immediately.

2

▸ Identify exactly what was shared

What information was entered? Which tool? When? Was it customer data, financial data, or internal business data?

3

📞 Contact the AI vendor

Most business plans have a process for requesting deletion of specific conversation data. Contact their support immediately.

4

▸ Assess your notification obligations

Under Australian and NZ law, some data breaches must be reported to the regulator and affected individuals. Consider seeking legal advice.

5

📸 Document everything

Keep a written record of what happened, what was shared, what steps you took, and when. This protects you if questions arise later.

20 mistakes businesses make with AI

These are the patterns that get businesses into trouble — regulatory, reputational, and operational. Most are avoidable with basic governance.

The Samsung, Microsoft, and DeepSeek incidents in our Privacy & Data section are direct examples of several of these mistakes playing out in real life.
1

📋 No AI use policy

Staff are using AI tools right now — with or without your knowledge. Without a policy, every decision defaults to the individual. This is how Samsung ended up with confidential source code in ChatGPT.

2

📋 No data-classification rules

Staff do not know what they can and cannot paste into AI tools because nobody has told them. A simple tiered classification (public / internal / confidential) solves most of this.

3

▸ Allowing shadow AI — unapproved tools used outside governance

Staff use personal ChatGPT accounts, browser extensions, and app-embedded AI for work tasks. What goes in is invisible to your business.

4

▸ Connecting AI tools to sensitive systems without a risk assessment

Giving an AI tool access to your email, CRM, file storage, or finance system before evaluating what it can read, write, and send on your behalf.

5

▸ Prioritising productivity gain over security design

Moving fast because the tool looks impressive. Security and privacy considerations get skipped in the rush to adopt.

6

🔑 Granting AI excessive permissions to business systems

Connecting AI to email, CRM, HR files, or finance systems with "admin" or "full access" permissions when read-only or limited access would do the job.

7

▸ Ignoring prompt injection risk

AI tools can be manipulated by malicious content hidden in documents or web pages they are asked to read. An AI with email access that reads a malicious email could be prompted to send sensitive data to an attacker.

8

▸ No vendor due diligence

Not checking who owns the tool, where data is stored, what the retention policy is, whether subcontractors have access, and which jurisdiction governs the data.

9

▸ Assuming default settings are enough for compliance

Default settings on many AI tools allow training data use and human review. These require active opt-out — they are not off by default.

10

▸ Deploying AI in customer-facing roles without a human fallback

Using AI for customer service or advice without a clear path to a human when the AI fails, produces wrong answers, or encounters a sensitive situation.

11

▸ Using AI outputs in regulated work without review trails

Relying on AI-generated outputs in legal, financial, medical, or compliance work without evidence of human review and sign-off.

12

🎓 Not training staff against deepfake and voice-clone attacks

The Arup case (HK$200 million / approx. AU$39 million lost) involved staff who were not trained to verify financial requests made via video call — even ones using deepfaked executives. [Fortune]

13

✅ Using AI-generated content without checking for false claims or deception

Publishing AI-generated marketing copy, product descriptions, or customer advice without checking for factual errors, bias, or misleading statements. This creates regulatory and reputational risk.

14

▸ Letting AI take automatic actions without human approval

Using agentic AI tools that can send emails, book appointments, submit forms, or approve transactions without a human reviewing each action first.

15

▸ No logging of AI prompts, outputs, and overrides

Without logs, you cannot audit what happened, demonstrate compliance, or investigate when something goes wrong.

16

▸ Using AI plugins or extensions from unknown vendors

Third-party plugins and browser extensions that connect to your AI tools can access your data and conversations. Plugin security is a well-documented risk area and many plugins have weak or no access controls.

17

▸ Assuming a "private company instance" means zero leakage risk

Paying for a business plan reduces risk — it does not eliminate it. Data still travels through cloud infrastructure. Human review may still occur in some circumstances. Misconfigurations can still expose data (see: the Microsoft 38TB incident).

18

▸ Rolling out AI before deciding who is accountable when it fails

If an AI tool gives a customer wrong advice, produces a discriminatory output, or causes a data breach — who is responsible? This decision needs to be made before deployment, not after an incident.

19

🎓 Training or fine-tuning models on messy, stale, or legally restricted data

Using data that contains personal information, commercially sensitive content, or copyright material for model training without proper legal review.

20

▸ Using insecure AI agents that can take autonomous action

Deploying AI agents — tools that can autonomously browse the web, read files, send messages, and complete multi-step tasks — without understanding what access they have and what they might do with it.

Full section coming soon. This section is being written now and will include volunteer-safe AI use, member data protection, grant writing guidance, and a policy template for community organisations.

Key risks for community organisations

👥

Member data

Many community groups hold sensitive member information — health needs, financial situations, contact details. This should never go into AI tools.

💰

Grant writing

AI can help draft grant applications, but be careful not to include specific financial data, unrealistic commitments, or confidential project information.

📢

Public communications

AI-generated newsletters and social posts should always be reviewed by a human before publishing. AI may produce inaccurate or inappropriate content.

🤝

Vulnerable people

Many community groups serve vulnerable people. Extra care is needed to ensure AI is never used in ways that could harm or expose these individuals.

Overview
Real cases
20 scam types
How to spot them
What to do

AI scams: what you need to know

Scammers now use AI to clone voices, create realistic fake video calls, write more convincing phishing emails, and run romance scams at industrial scale. The technology has lowered the skill and cost barrier dramatically.

The golden rule: A familiar voice or face on a call is no longer proof that person is real. Verify any unusual or urgent request through a completely separate channel before acting.
🎙️

Voice cloning

AI can clone someone's voice from a short audio sample — a social media video, a voicemail. Scammers use this to impersonate family members, executives, or government officials.

Defence: Hang up and call the real person on their known number. Establish a family code word for emergencies.

🎥

Deepfake video

AI can generate realistic video of a real person appearing to say things they never said. Used in business fraud, political disinformation, and personal attacks.

Defence: Verify large financial requests through an established, separate channel. Never act on instructions from a video call alone.

✉️

AI-written phishing

AI dramatically improves the quality of phishing emails — correct spelling, natural language, personalised content. The old "bad spelling" warning sign no longer applies.

Defence: Verify any urgent payment, credential, or personal information request through a known phone number, not a link.

❤️

AI romance and friendship scams

AI enables scammers to run hundreds of convincing romantic relationships simultaneously. Victims may communicate for months before any request for money or personal information.

Defence: Request a live, unscripted video call. Anyone who consistently avoids this should be treated with caution.

If you have been scammed: Contact your bank immediately, then report to Scamwatch (AU) or Consumer Protection NZ. Go to Get Help → for step-by-step response guidance.

Real documented cases

These incidents are confirmed and publicly reported. The scale and sophistication will surprise most people.

1

📞 Arup Engineering — deepfake video call fraud, approx. AU$39 million (2024)

A finance worker at the Hong Kong office of British engineering firm Arup attended what he believed was a legitimate video conference with colleagues, including the CFO. Every person on the call was a deepfake. He was instructed to make 15 transfers totalling HK$200 million (approximately AU$39 million / US$25 million). He only discovered the fraud when he checked with headquarters.

This is one of the largest confirmed deepfake fraud cases in the world. All traditional cybersecurity defences — firewalls, MFA, endpoint protection — were operating normally. The attack bypassed all of them.

[CNN] [Fortune]

2

▸ WPP CEO Mark Read — deepfake voice and video impersonation attempt (2024)

Scammers created a fake WhatsApp account using a publicly available photo of WPP CEO Mark Read, then arranged a Microsoft Teams call where they impersonated Read using AI-generated audio and YouTube footage. They attempted to persuade a senior WPP executive to establish a new business and hand over money and personal details. The scam failed due to staff vigilance.

[Marketing Interactive]

3

📞 Italian Defence Minister Guido Crosetto voice-clone scam — €1 million frozen (February 2025)

Scammers used an AI-generated voice clone of Italian Defence Minister Guido Crosetto to call some of Italy's most prominent business figures — including Giorgio Armani and Prada co-founder Patrizio Bertelli — claiming government money was needed to free kidnapped Italian journalists in the Middle East. Only Massimo Moratti, former owner of Inter Milan, transferred funds — nearly €1 million in two payments to a Dutch account. Italian police froze the money before it could be moved further. Crosetto confirmed his voice had been cloned and falsified.

[Euronews] [US News]

4

📞 AI-generated Joe Biden robocall — US primary election suppression (2024)

Fake AI-generated robocalls imitating the voice of US President Joe Biden were sent to voters in New Hampshire, urging them not to vote in the primary election. This is the most prominent documented example of AI being used for political disinformation in a democratic election. [FTC warning on voice cloning]

5

▸ FBI warning: senior US officials impersonated using AI-generated voice messages (May 2025)

Since April 2025, malicious actors have used AI-generated voice messages and texts to impersonate senior US government officials, targeting other officials and their contacts to gain access to accounts and sensitive systems. The FBI confirmed the AI-generated audio is often nearly indistinguishable from the real person. The campaign is ongoing. [FBI IC3 PSA] [CNBC]

20 AI-enabled scam types

All of these are documented and active. Sources include the FTC, FBI, Europol, the ACSC (cyber.gov.au), and Scamwatch.

1

▸ Voice-clone family emergency scams

Scammers clone a family member's voice to call you claiming to be in an emergency — arrested, in hospital, or in danger — and needing money urgently. The FTC and Europol both document this as a growing and effective scam. Defence: establish a family code word for emergencies and always call back on a known number.

2

▸ Deepfake CEO / invoice fraud

Scammers clone an executive's voice or create a deepfake video to instruct finance staff to make urgent, authorised payments. The Arup case (AU$39M) is the most documented example. Defence: all large payment requests must be verified through a separate, established channel — never the one in the message.

3

▸ AI-powered spear phishing

AI analyses your public information (LinkedIn, social media, company website) to craft highly personalised phishing emails that reference your real colleagues, real projects, and real context. The FBI and security researchers confirm AI has made spear phishing dramatically more convincing. Defence: verify any unusual request through a known phone number, not by replying to the email.

4

▸ AI-enhanced business email compromise (BEC)

The FBI warns that AI is being used to make business email compromise attacks more convincing through better language, voice messages, and video. BEC is already one of the highest-loss fraud categories globally. [FBI IC3]

5

▸ Romance scams scaled with AI personas

AI enables a single scammer to maintain convincing romantic relationships with hundreds of victims simultaneously. Victims invest months of emotional connection before money is requested. The FTC documents romance scams as one of the highest-loss scam categories.

6

▸ Celebrity deepfake endorsement scams

AI-generated videos show well-known celebrities (politicians, business figures, entertainers) apparently endorsing investment schemes, giveaways, or products they have nothing to do with. These are widely used on social media to lure victims into sending money. Defence: no legitimate celebrity investment offer works this way. If someone famous is personally endorsing an investment, it is a scam.

7

👨‍💼 AI investment scams — fake experts and fake returns

AI generates fake financial advisers, fake testimonials, fake trading platforms, and fake profit screenshots. Australia's Scamwatch reports investment scams as the highest-loss scam type year after year. AI makes the content cheaper and more convincing. [Scamwatch]

8

▸ Fake government official scams

Scammers impersonate ATO, Services Australia, police, or immigration officials — now using AI voice and messaging to sound more convincing. They create urgency around alleged debts, warrants, or legal threats. Defence: real government agencies do not demand immediate payment by phone or threaten arrest.

9

▸ AI-enabled tech support scams

AI chatbots and voice tools make fake technical support more convincing — simulating legitimate-sounding helpdesk interactions. Victims are persuaded to give remote access to their device or reveal credentials. [FTC guidance]

10

▸ Fake online stores and products scaled with AI content

AI generates convincing fake retail websites, fake product reviews, and fake customer service chatbots at low cost and in large volumes. Europol and the FTC document these as a growing consumer fraud vector.

11

▸ Deepfake political disinformation

AI is used to create fake video and audio of political figures saying things they never said — influencing elections, donations, and public trust. The Biden robocall is the most documented example. This is a growing risk in Australian and NZ elections as well.

12

▸ AI-enhanced SMS and voice phishing (smishing and vishing)

AI improves the quality and volume of fake SMS messages and voice calls impersonating banks, delivery companies, and government services. Cyber.gov.au and CERT NZ both warn of these as active threats.

13

🪪 Synthetic identity fraud

AI-generated faces and voices are used to pass identity verification checks when setting up fake accounts — for banking, financial platforms, or regulated services. Europol documents this as a significant fraud risk. Defence: this is a risk for businesses deploying identity verification systems, not just individual users.

14

▸ AI-generated fake job advertisements

Fake employers use AI to create convincing job listings, conduct fake AI-powered interviews, and then request personal information (tax file numbers, bank details, photo ID) from applicants. Defence: verify employers exist through official sources before providing any personal information.

15

▸ Non-consensual deepfake imagery used for blackmail

AI image tools are used to create intimate or sexualised images of real people — often from publicly available photos — which are then used to blackmail or humiliate them. This includes attacks targeting children and teenagers. The eSafety Commissioner handles these complaints. [esafety.gov.au/report]

16

▸ AI-powered multilingual scam scaling

The FBI, NCSC, and Europol all warn that AI lowers the skill and language barriers for scammers — enabling convincing fraud in languages and dialects that previously required specialist capability. This significantly increases the risk for culturally and linguistically diverse communities in Australia and NZ.

17

💰 Fake AI financial advisers and trading bots

AI-generated "advisers" with fake credentials, fake track records, and fake testimonials are used to lure victims into investment schemes. These are often promoted through social media and messaging apps.

18

▸ AI customer service impersonation

AI chatbots impersonate real companies' customer service — requesting account credentials, payment card details, or personal information under the guise of resolving an account issue. Defence: contact companies directly through their official website — never through a link in a message.

19

▸ AI-written fake reviews and testimonials

AI generates fake positive reviews for scam products and services, and fake negative reviews targeting legitimate competitors. This affects consumer decision-making across platforms.

20

▸ AI-assisted manipulation of vulnerable people

AI companion tools and chatbots can be misused to build relationships with lonely, isolated, or grieving individuals — then exploit that emotional connection. This is particularly relevant for elderly Australians and New Zealanders. [Scamwatch]

How to spot suspicious AI content

The signs are getting harder to spot — but some warning patterns remain consistent.

🚩 Red flags in calls and video

  • Unusual urgency or pressure to act immediately
  • Request for payment in gift cards, crypto, or wire transfer
  • The person refuses to meet in person or have an unscripted conversation
  • Slight lip sync delays or unnatural blinking in video
  • Request comes through an unusual channel

🚩 Red flags in messages

  • Well-written but generic — no specific personal detail
  • Requests for sensitive information "to verify your account"
  • Links that do not match the official domain exactly
  • Celebrity or official appearing to personally recommend an investment
  • Unusual payment urgency ("offer expires in 1 hour")

✅ Simple verification steps

  • Call the real person back on their real number
  • Visit the company website directly — not the link in the message
  • Search the name of the "company" or "adviser" plus "scam"
  • Ask for a live, unscripted video call if in doubt about someone's identity
  • Check Scamwatch or Netsafe for known scams

What to do if you have been targeted or scammed

1

💰 Contact your bank immediately if money was involved

Call your bank's fraud line now. They may be able to stop or reverse a transfer. Time is critical — every minute matters.

2

🔐 Change any compromised passwords immediately

If you gave any login credentials to a scammer, change them on every account that uses those details. Do this from a secure device.

3

📞 Do not contact the scammer again

Responding confirms you are an active target. It often leads to escalating pressure or secondary scams ("we can help you recover your money").

4

📸 Save all evidence before doing anything else

Screenshots of messages, emails, call logs, payment receipts, and any website URLs. Save these to a secure location. You will need them to report.

5

📣 Report the scam

🇦🇺 Australia: Scamwatch (1300 795 995) and ACSC (1300 CYBER1). 🇳🇿 New Zealand: Netsafe (0508 638 723) and Consumer Protection NZ.

6

▸ For deepfake images or impersonation

🇦🇺 Report to the eSafety Commissioner: esafety.gov.au/report (1800 580 034). 🇳🇿 Report to Netsafe: netsafe.org.nz.

You are not alone and it is not your fault. These scams are sophisticated, well-resourced, and specifically designed to deceive. Reporting them helps protect others.
High-risk tool categories
Individual tool reviews

20 high-risk AI tool categories

These categories carry significant risk if used carelessly, connected without thought, or deployed without oversight. The risk is not that every tool in these categories is malicious — it is that they require more caution than most people apply.

Individual detailed tool reviews are being built and will appear in the next tab when ready. This page covers risk categories that apply regardless of which specific tool you use.
1

▸ Voice-cloning apps

Tools that replicate a person's voice from audio samples — used legitimately for content creation, but the same technology is used in impersonation scams and fraud. Examples: ElevenLabs, Resemble AI, Descript Voice Clone. Use with caution. Never clone someone else's voice without their explicit consent.

2

▸ Face-swap and deepfake video tools

Tools that replace a person's face in video with someone else's likeness. Used in entertainment and creative work, but also in extortion, fraud, and reputational attacks. Be extremely cautious about consenting to these tools using photos of yourself. Never create deepfakes of real people without their consent.

3

🔑 Autonomous email agents with send permissions

AI tools that connect to your email inbox and can send messages on your behalf. If the AI is manipulated — by a malicious prompt in an email it reads — it could send sensitive information or take unintended actions. Always review what an email agent has sent and limit its permissions to what is strictly necessary.

4

▸ Browser-use and computer-use agents

Tools that control your browser or computer to complete tasks autonomously — clicking, filling forms, copying, and submitting on your behalf. Examples: OpenAI Operator, Manus, various "AI agent" tools. These tools have broad access. Understand what they can access before granting permissions. [OpenAI Operator]

5

🔑 AI tools connected to cloud drives with broad access

AI tools that connect to Google Drive, OneDrive, Dropbox, or similar services. When granted broad read access, the AI can access every file in your drive — including documents you may have forgotten about. Review permissions carefully and grant read-only access to specific folders only.

6

🔐 Code copilots with production credentials

AI coding assistants that have access to your codebase may inadvertently expose secrets, API keys, or passwords present in code or configuration files. Examples: GitHub Copilot, Cursor, Amazon CodeWhisperer. Never include credentials in code that passes through an AI coding tool.

7

✍️ Spreadsheet and SQL agents with write access

AI tools that can query and modify databases or spreadsheets. A confident wrong answer that modifies real data is a serious risk. Limit AI to read-only access for analysis tasks. Always back up data before allowing any AI write access.

8

▸ AI CRM and sales outreach tools

Tools that automate customer communication, sales emails, or outreach at scale. Poorly supervised, these can send incorrect, misleading, or inappropriate communications to real customers. In some configurations, they can breach spam and consumer protection laws.

9

▸ General-purpose AI misused to craft phishing and social-engineering attacks

General AI tools (ChatGPT, Claude, Gemini) can be used by scammers to write highly convincing phishing emails, fake customer service scripts, and social-engineering messages. The FBI and ACSC both document this risk. The tool itself is not the problem — misuse is. This is why AI-written phishing is increasingly convincing.

10

📸 Document-summarising AI on sensitive PDFs

AI tools that summarise or extract information from uploaded documents. The risk is simple: if you upload a sensitive document to a free consumer AI tool, that document's content may be stored and used for training. Use business plans or local/private tools for sensitive documents.

11

▸ Meeting transcription bots stored in third-party clouds

Tools that join your video calls and record, transcribe, and summarise meetings. Examples: Otter.ai, Fireflies.ai, Zoom AI Companion. The risk is that confidential business discussions, client meetings, and HR conversations are stored on third-party servers. Check where data is stored and what the retention policy is before using these for sensitive meetings.

12

👨‍💼 Legal-advice AI chatbots used without lawyer review

AI tools that provide legal guidance can be useful for general information but should never substitute for qualified legal advice on your specific situation. AI has no knowledge of your jurisdiction-specific law, your full circumstances, or current case law. Always have a qualified lawyer review any significant legal matter.

13

👨‍💼 Medical-advice AI chatbots used without clinician review

AI health tools can describe symptoms and general information but cannot examine you, access your full medical history, or apply clinical judgement. For any significant health matter, consult a qualified healthcare professional. In an emergency, call 000 (AU) or 111 (NZ).

14

💰 AI trading and investment bots marketed as easy money

Automated trading and investment tools that claim to generate consistent returns with minimal input. Many in this category are scams. Even legitimate ones carry significant financial risk. These are not regulated financial advice. Australia's Scamwatch reports investment scams as the highest-loss scam type.

15

▸ Uncensored and open-weight AI models

Customised AI models (often based on Meta's LLaMA or similar open-weight models) that have had safety guardrails removed. Available on platforms like HuggingFace. These have fewer restrictions on harmful content — and are therefore more likely to produce harmful, dangerous, or illegal outputs.

16

▸ AI plugins and browser extensions from unknown vendors

Third-party plugins that connect to AI tools (or that claim to add AI to your browser) can request extensive permissions — access to your emails, browsing history, and clipboard. These are a documented security risk. Only install extensions from well-known, established vendors.

17

🪪 AI face and voice synthesis used to defeat identity verification

AI-generated faces and synthesised voices are increasingly used to bypass biometric identity checks — the selfie-and-ID verification used by banks, platforms, and regulated services. Europol documents this as an active and growing threat to identity verification systems. This is primarily a risk for businesses running identity verification — but it affects consumers whose identity could be stolen to pass such checks.

18

▸ AI companion apps for emotional relationships

Apps designed to simulate friendship, romance, or therapeutic relationships. Examples: Character.ai, Replika. These carry significant risks for children, teenagers, and vulnerable adults — including emotional dependency, privacy exposure (users share deeply personal information), and unrealistic relationship expectations. See the Kids & Teens section for more.

19

📸 AI image generators misused for fake evidence or manipulation

Tools like Midjourney, DALL-E, and Stable Diffusion can generate highly realistic images. Legitimate creative tools — but the same capability is misused for fake product testimonials, fake "evidence" in disputes, fake news imagery, and non-consensual intimate imagery. Always label AI-generated images clearly. Never use them to deceive.

20

▸ Fully autonomous "agentic" AI tools marketed to beginners as set-and-forget

AI agent tools that can autonomously browse the web, execute code, read and write files, send messages, and take multi-step actions with minimal human oversight. Examples: AutoGPT, Manus, CrewAI. Beginners typically underestimate the permissions these tools need and the damage an error or manipulation can cause. [OpenAI CUA]

🔍

Individual tool reviews coming soon

Detailed, up-to-date profiles coming soon. In the meantime, we have rated 20+ AI providers across 80+ products and subscription tiers — covering free, consumer paid, business, and enterprise plans.

View Safety Ratings for 20+ providers ↗

Shared sensitive data
Scam response
Child safety incident
Impersonation
🇦🇺 Australia help
🇳🇿 New Zealand help

What to do if you shared sensitive information with AI

Act quickly. The sooner you respond, the more options you have.

1

💡 Do not panic — act now

Most data shared with AI tools is stored, but it is unlikely to be immediately misused. Your goal is to limit further exposure and document what happened.

2

🗑️ Delete the conversation if you can

In most AI tools, you can delete the conversation from your history. This does not guarantee the data is deleted from the company's servers, but it is a useful first step.

3

📞 Contact the AI company's support

Submit a request to have the specific conversation data deleted. Paid business plans usually have a faster and more reliable process for this.

4

▸ Assess what was shared

Was it financial data? Health data? Passwords? Contact details? Different types of information carry different risks and may require different responses.

5

🔐 If it was passwords — change them now

If you shared any passwords or security credentials, change them immediately on every account that uses those credentials.

6

💾 If it was customer or client data — consider your legal obligations

Depending on the nature of the data, you may have notification obligations. Seek legal advice if uncertain.

What to do if you were scammed

1

💰 Contact your bank immediately

If money has been transferred, call your bank's fraud line now. They may be able to stop or reverse the transfer. Time is critical.

2

🔐 Change your passwords

If you gave any login credentials, change all related passwords immediately. Use a different device if you are concerned the current one is compromised.

3

📞 Do not contact the scammer again

Responding can confirm you are a live target and lead to further contact or escalating pressure.

4

📣 Report the scam

Australia: Scamwatch or call 1300 795 995. New Zealand: Consumer Protection NZ or call 0508 426 678.

5

📸 Keep evidence

Take screenshots of all messages, emails, and any other contact. Save these securely. You will need them if you report to police.

6

📞 Consider contacting police

For significant financial losses, report to your local police or via the Australian Cyber Security Centre (ACSC).

You are not alone and it is not your fault. AI-powered scams are sophisticated and affect thousands of Australians and New Zealanders every year.

What to do if a child has had a harmful AI experience

1

💡 Stay calm and listen first

The child needs to feel safe telling you what happened. Avoid reactive anger that may cause them to shut down. Thank them for telling you.

2

📸 Do not delete the evidence yet

Take screenshots of the harmful content before closing or deleting it. You may need this to report to the platform or authorities.

3

📣 Report to the AI platform

All major AI platforms have a safety or abuse reporting function. Use it. If the content involved a child, flag it as urgent and child-related.

4

📣 If sexual content was involved — report to authorities

Australia: Report to the eSafety Commissioner at esafety.gov.au/report. NZ: Report to Netsafe (0508 638 723).

5

👶 Support the child

Depending on the nature of what happened, the child may need to speak to a counsellor. Contact your school or GP for a referral if needed.

What to do if someone used AI to impersonate you

1

📸 Document the impersonation

Take screenshots and save all evidence before reporting. Include URLs, profile names, dates, and any messages sent.

2

📣 Report to the platform where it occurred

Social media platforms, AI tools, and websites all have abuse reporting processes. Use these immediately.

3

📞 Warn people who may have been contacted

If the impersonator reached out to your contacts pretending to be you, alert those people so they do not fall for any requests.

4

📣 Report to authorities

Australia: Report to the eSafety Commissioner or local police. NZ: Contact Netsafe or New Zealand Police.

🇦🇺 Australia — Help and reporting

🚨 Emergency

Triple Zero: 000
For immediate danger to yourself or others.

💻 Cyber Security

Australian Cyber Security Centre
cyber.gov.au/report
1300 CYBER1 (1300 292 371)

🤖 AI & Online Scams

Scamwatch
scamwatch.gov.au
1300 795 995

👧 Child Online Safety

eSafety Commissioner
esafety.gov.au/report
1800 580 034

🔒 Privacy Complaints

Office of the Australian Information Commissioner
oaic.gov.au
1300 363 992

😢 Mental Health Support

Lifeline: 13 11 14
Beyond Blue: 1300 22 4636
Both available 24/7

🇳🇿 New Zealand — Help and reporting

🚨 Emergency

111
For immediate danger to yourself or others.

💻 Cyber Security

CERT NZ
cert.govt.nz/report
0800 237 869

🤖 Online Safety & Scams

Netsafe
netsafe.org.nz
0508 638 723 (freephone)

💸 Financial Scams

Consumer Protection NZ
consumerprotection.govt.nz
0508 426 678

🔒 Privacy Complaints

Office of the Privacy Commissioner
privacy.org.nz
0800 803 909

😢 Mental Health Support

Lifeline NZ: 0800 543 354
Need to talk: 1737 (free text or call)
Both available 24/7

🆓 Free
💳 Consumer Paid
🏢 Business
🏛️ Enterprise
📋 Provider Directory
Key finding: Almost every free AI tool is rated Not Suitable for sensitive or personal information. Free plans use consumer terms of service — your conversations can be used to train the AI model.

Free tier — Total safety score

Sorted highest to lowest. Midpoint line (5.0) marks the threshold for basic suitability. Score = average of SAFE + SRS.

AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
🌏 Chinese AI providers — jurisdiction note: Chinese AI Labs — including DeepSeek, Kimi, Qwen, Doubao, ERNIE, GLM, and MiniMax — operate under China's legal framework, including the National Intelligence Law. This law can require organisations to provide data or technical assistance to authorities, which may extend beyond what is outlined in public privacy policies.

Recommendation: Avoid entering sensitive, confidential, or commercially valuable information into any AI platform — and take extra care when using tools governed by foreign jurisdictions with different legal and transparency standards.

More about data access laws across jurisdictions ▾

It's important to note that similar powers exist in other jurisdictions, including the United States, Europe, and Australia, where governments can also request access to data under national security or law enforcement laws.

However, these systems generally involve more transparent legal processes, such as court orders, warrants, or independent oversight mechanisms.

For users in Australia and New Zealand, this means the key difference is not whether access is possible — but how that access is governed, reviewed, and disclosed.

Key findings — Free tier

  • IBM watsonx Lite (7.4) and NVIDIA NIM Free (6.9) are the only free products rated above 6 — but these are enterprise-grade developer tools, not tools for general consumer use.
  • The major consumer AI tools — ChatGPT Free (5.0), Claude Free (4.7), Gemini Free (4.5) — all score below the midpoint on SAFE, meaning governance and data-use protections are weak at this tier.
  • DeepSeek Free scores the lowest of all (3.1 total) across both SAFE and SRS dimensions. It also carries additional concerns about data storage practices.
  • Paying for a subscription does not automatically fix these issues — see the Consumer Paid tab.
Key finding: Paying for ChatGPT Plus, Claude Pro, or Gemini AI Pro does not meaningfully improve your data safety. Consumer paid plans give you more usage — but the same legal terms as the free version.

Consumer Paid tier — Total safety score

Sorted highest to lowest. Midpoint line (5.0) marks the threshold for basic suitability.

AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
🌏 Chinese AI providers — jurisdiction note: Chinese AI Labs — including Kimi (Moderato, Allegretto, Vivace), Qwen Coding plans, ERNIE Coding Plan, and DeepSeek — operate under China's legal framework, including the National Intelligence Law. This law can require organisations to provide data or technical assistance to authorities, which may extend beyond what is outlined in public privacy policies.

A paid subscription does not change the underlying legal jurisdiction.

Recommendation: Avoid entering sensitive, confidential, or commercially valuable information into any AI platform — and take extra care when using tools governed by foreign jurisdictions with different legal and transparency standards.

More about data access laws across jurisdictions ▾

It's important to note that similar powers exist in other jurisdictions, including the United States, Europe, and Australia, where governments can also request access to data under national security or law enforcement laws.

However, these systems generally involve more transparent legal processes, such as court orders, warrants, or independent oversight mechanisms.

For users in Australia and New Zealand, this means the key difference is not whether access is possible — but how that access is governed, reviewed, and disclosed.

Key findings — Consumer Paid tier

  • Copilot Pro (5.5) scores highest in this tier — not because of better privacy terms, but because it sits on Microsoft's security infrastructure which scores higher for technical security.
  • ChatGPT Plus, ChatGPT Pro, and ChatGPT Go all score identically (5.0). Upgrading your plan does not change the underlying governance.
  • Claude Pro and Claude Max score 4.95 — essentially the same as Claude Free. More usage, same data terms.
  • If you need to use AI for sensitive business or personal data, look at Business tier products instead.
Key finding: Business tier products offer meaningfully stronger data protection. Most include no-training guarantees, commercial data agreements, and proper access controls. Scores jump significantly compared to consumer plans.

Business tier — Total safety score

Sorted highest to lowest. Midpoint line (5.0) marks the threshold for basic suitability.

AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
🌏 Chinese AI providers — jurisdiction note: Chinese AI Labs — including DeepSeek API, Kimi API, Qwen API, Doubao API, ERNIE API, GLM API, and MiniMax API — operate under China's legal framework, including the National Intelligence Law. This law can require organisations to provide data or technical assistance to authorities, which may extend beyond what is outlined in public privacy policies.

These products all score below 5.1 on SAFE at business tier. The legal jurisdiction applies regardless of the plan or contract.

Recommendation: Avoid entering sensitive, confidential, or commercially valuable information into any AI platform — and take extra care when using tools governed by foreign jurisdictions with different legal and transparency standards.

More about data access laws across jurisdictions ▾

It's important to note that similar powers exist in other jurisdictions, including the United States, Europe, and Australia, where governments can also request access to data under national security or law enforcement laws.

However, these systems generally involve more transparent legal processes, such as court orders, warrants, or independent oversight mechanisms.

For users in Australia and New Zealand, this means the key difference is not whether access is possible — but how that access is governed, reviewed, and disclosed.

Key findings — Business tier

  • IBM watsonx Standard (8.0), OpenAI API (7.8), M365 Copilot (7.8), and NVIDIA NIM Self-Hosted (7.95) are the top performers at this tier.
  • The consistent weakness across all business products is encryption key ownership — very few vendors let your organisation hold its own encryption keys. This is something to ask your vendor about if you handle sensitive data.
  • Claude Teams (6.5) and ChatGPT Business (6.3) represent a meaningful step up from consumer Claude and ChatGPT plans, but still sit below the stronger API and enterprise offerings.
  • DeepSeek API (3.5) remains the lowest-scored business product by a significant margin.
Key finding: Enterprise AI offers the strongest protections — compliance certifications, audit evidence, data residency options, and contractual protections. Most products at this tier score above 7.

Enterprise tier — Total safety score

Sorted highest to lowest. Midpoint line (5.0) marks the threshold for basic suitability.

AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
AI Labs Platforms & Wrappers Enterprise · ⚠ = Chinese jurisdiction applies
🌏 Chinese AI providers — jurisdiction note: Chinese AI Labs — including DeepSeek Enterprise, Alibaba Qwen Enterprise, Doubao Enterprise, ERNIE Enterprise, GLM Enterprise, and MiniMax Enterprise — operate under China's legal framework, including the National Intelligence Law. This law can require organisations to provide data or technical assistance to authorities, which may extend beyond what is outlined in public privacy policies.

Even at enterprise tier, these products score below 6.0 on SAFE. A commercial contract does not change the underlying legal jurisdiction.

Recommendation: Avoid entering sensitive, confidential, or commercially valuable information into any AI platform — and take extra care when using tools governed by foreign jurisdictions with different legal and transparency standards.

More about data access laws across jurisdictions ▾

It's important to note that similar powers exist in other jurisdictions, including the United States, Europe, and Australia, where governments can also request access to data under national security or law enforcement laws.

However, these systems generally involve more transparent legal processes, such as court orders, warrants, or independent oversight mechanisms.

For users in Australia and New Zealand, this means the key difference is not whether access is possible — but how that access is governed, reviewed, and disclosed.

Key findings — Enterprise tier

  • NVIDIA NIM Enterprise (8.4) and IBM watsonx Enterprise (8.1) lead the field. Both have strong compliance certifications and published audit evidence.
  • ChatGPT Enterprise (8.2) is one of the strongest consumer-brand enterprise offerings, with zero data retention as a default and no training on your data.
  • Claude Enterprise (7.2) and Gemini Enterprise (7.4) are solid choices for organisations that already use these tools at consumer tier and want to step up to proper governance.
  • The persistent gap across all enterprise products remains encryption key ownership — only a handful of vendors support true customer-managed encryption keys.

Provider Directory — all rated AI products

All AI providers rated across 80+ products and subscription tiers. Click any name to visit their official site. ⚠️ marks providers where Chinese jurisdiction applies — see the jurisdiction note at the bottom of this tab.

🤖 AI Labs

OpenAI / ChatGPT ↗

Free · Plus · Pro · Business · Enterprise · API · Codex · Edu

Anthropic / Claude ↗

Free · Pro · Max · Teams · Enterprise · API

Google / Gemini ↗

Free · AI Pro · AI Ultra · AI Ultra Business · Enterprise · API

xAI / Grok ↗

Free · Premium+ · SuperGrok · SuperGrok Heavy · API

Meta / Llama & Meta AI ↗

Meta AI Free · Llama API · Llama Self-Hosted

Mistral AI / Le Chat ↗

Le Chat Free · API · Self-Hosted · Enterprise

⚠️ Manus ↗

Caution — Chinese jurisdiction applies

Free · Pro — Autonomous AI agent

⚠️ DeepSeek ↗

Caution — Chinese jurisdiction applies

Free · API — lowest-rated provider overall

⚠️ Moonshot AI / Kimi ↗

Caution — Chinese jurisdiction applies

Free · Moderato · Allegretto · Vivace · API

⚠️ Alibaba / Qwen ↗

Caution — Chinese jurisdiction applies

Free · API Free Tier · Coding Lite · Coding Pro · Pay-as-go API · Enterprise

⚠️ ByteDance / Doubao ↗

Caution — Chinese jurisdiction applies

Free · API · Enterprise

⚠️ Baidu / ERNIE ↗

Caution — Chinese jurisdiction applies

Free · Coding Plan · API · Enterprise

⚠️ Zhipu AI / GLM ↗

Caution — Chinese jurisdiction applies

Free · Lite · Pro · API · Enterprise

⚠️ MiniMax ↗

Caution — Chinese jurisdiction applies

Free · API · Enterprise

📦 AI Platforms & Wrappers

Microsoft / Copilot ↗

Free · Pro · M365 Copilot · Enterprise (+ Copilot Chat EDP)
Built on OpenAI GPT models

Perplexity AI ↗

Free · Pro · Max · Enterprise Pro · Enterprise Max
Routes to multiple underlying models

Poe (Quora) ↗

Free · Subscriber · Annual
Accesses ChatGPT, Claude, Gemini, Grok & more

Inflection AI / Pi ↗

Pi Free · Enterprise

🏢 Enterprise & Specialist

IBM / watsonx ↗

Lite · Essentials · Standard · Enterprise — highest-rated enterprise platform

NVIDIA / NIM ↗

Free (API Catalog) · Self-Hosted · Enterprise — joint top enterprise score

Cohere ↗

Trial · Production API · Fine-tuning · Enterprise

AI21 Labs ↗

Free Trial · API · Enterprise

Note on AI Aggregators: Copilot, Perplexity, and Poe are not independent AI models — they route queries to other providers' models via API. Their safety scores reflect the wrapper product's governance, but the underlying model's policies also apply to your data.

⚠️ What does "Chinese jurisdiction" mean?

Providers marked ⚠️ operate under China's legal framework, including the National Intelligence Law, which can require organisations to provide data to authorities — regardless of the quality of their AI or how good their privacy policy appears. This is a legal and geopolitical risk, not a comment on the technical capability of these tools. Avoid entering sensitive, confidential, or commercially valuable information into any of these platforms.

How these scores work click to expand ▾

Each AI product is assessed on two measures, both scored out of 10:

SAFE score — how well the vendor protects your data. It looks at things like whether your conversations are used to train the AI, whether the company has real security certifications, where your data is stored, and what happens if you want to delete it.

SRS score — how easy it is to stop using the product and move on. A high SRS means it is straightforward to export your data and switch to a different provider without being locked in.

The Total score shown in each chart is the average of these two. A score below 5 is considered not safe for sensitive or personal information. A score above 7 is generally considered safe for business use. Scores were assessed in May 2026 and may change as products update their policies.

What do the chart colours mean? click to expand ▾

🔵 AI Labs

Research organisations that build and train their own AI models from the ground up. This includes both US/European labs (OpenAI, Anthropic, Google, Meta, xAI, Mistral) and Chinese labs (DeepSeek, Kimi, Qwen, Doubao, ERNIE, GLM, MiniMax). All are frontier-level AI research organisations. Chinese-jurisdiction providers are individually flagged ⚠️ in the Provider Directory.

🟢 AI Platforms & Wrappers

These companies do not train their own frontier AI models. They build products on top of models created by AI Labs. Both the platform's own privacy terms and the underlying model provider's terms may apply to your data. Includes Copilot (Microsoft), Perplexity, Poe, and Pi.

🟣 Enterprise & Specialist

Purpose-built AI tools designed for business and regulated-industry use, with stronger data protections, compliance certifications (ISO 27001, SOC 2, HIPAA), and contractual data guarantees. Includes IBM watsonx, NVIDIA NIM, Cohere, and AI21 Labs.

Need help choosing the right AI tools for your organisation?
AI consulting services — coming soon. Get in touch early ↗

About this site
Sources & methodology
Contact & bookings

About Think AI Safety

Think AI Safety is a free, independent resource created for Australians and New Zealanders who want clear, honest guidance on using AI tools safely — without jargon or hype.

🎯

Our purpose

To give everyday people — parents, students, workers, small businesses, and community groups — the information they need to use AI confidently and safely.

🇦🇺

Focused on Australia & NZ

Most AI safety resources are written for US audiences. This site is built specifically for Australian and New Zealand legal, regulatory, and cultural context.

🆓

Free, always

This site is and will remain free to access. No paywalls, no advertising. It is funded through Kirk Holt's consulting practice.

📅

Regularly reviewed

AI tools and policies change frequently. This site is reviewed and updated regularly. All content carries a "Last updated" date. Currently: May 2026.

👤

Reviewed and created by Kirk Holt

AI Safety Consultant · Australia

Kirk Holt is an AI safety educator and consultant helping Australian and New Zealand individuals, businesses, and community organisations understand and manage AI risk. This site reflects his independent research and does not represent the views of any AI vendor.

info@thinkaisafety.com.au

🤖 Built with AI — a transparent acknowledgement

Think AI Safety was researched, written, and built with the assistance of AI tools. We think it's only right to be upfront about that — especially on a site about AI transparency. The following tools contributed to this project:

🤝
Claude (Cowork)
Site building, editing & content structure
💬
Claude (Chat)
Research, drafting & review
💡
ChatGPT
Research, content development & fact checking
🔍
Gemini
Research & fact checking
🍌
Nano Banana
Image generation & visual design

All AI-generated content was reviewed, verified, and edited by Kirk Holt before publication. AI tools assisted the process — the judgements, decisions, and responsibility for accuracy remain human. We believe this is exactly how AI should be used.

Disclaimer: This site provides general educational information only. It is not legal, financial, cybersecurity, or privacy advice. For advice specific to your situation, consult a qualified professional. Information about AI tools reflects policies at the time of publication and may have changed.

Sources & methodology

How SAFE and SRS scores are calculated

The SAFE score and SRS score used in the Safety Ratings section are derived from a structured vendor assessment framework applied to each AI product. Each product is assessed across six dimensions, all scored based on publicly available documentation:

1

Data Exposure

How broadly the product invites or exposes sensitive content — whether the AI tool has appropriate guardrails, content policies, and limits on what can be entered or returned.

2

Data Protection

The combined strength of technical, contractual, and encryption safeguards. Includes whether the vendor holds ISO 27001, SOC 2, or similar certifications.

3

Technical Security

Documented security controls: access management, encryption in transit and at rest, audit logging, and identity features such as single sign-on and multi-factor authentication.

4

Legal & Policy Protection

Privacy commitments, data processing agreements, training opt-outs, and the practical effect of the vendor's legal jurisdiction on your data.

5

Encryption Key Ownership

Whether your organisation can control its own encryption keys (BYOK — Bring Your Own Key) or whether the vendor holds all keys. Vendor-held keys mean the vendor can technically access your data.

6

Data Reuse & Ownership

Whether your prompts and AI outputs are used to train models, shared with third parties, or retained after your session ends — and whether you can opt out.

The SRS (Switching & Retention Score) measures how easy it is to stop using the product and move to a different provider — including data export, contract exit terms, and dependency risk.

The Total score is the average of SAFE and SRS. Scores below 5.0 are considered unsuitable for sensitive or business information. Scores above 7.0 are generally appropriate for business use with appropriate governance.

Primary sources

Vendor documentation

Privacy policies, terms of service, trust centre disclosures, compliance certifications, and product documentation published by each vendor. Source links are included in the Tool Reviews section.

Government guidance (AU/NZ)

Australian Cyber Security Centre (ACSC), CERT NZ, Office of the Australian Information Commissioner (OAIC), and the NZ Office of the Privacy Commissioner.

Incident documentation

Publicly reported and independently verified security incidents. All incidents cited in this site include links to original reporting. Sources include Wiz Research, TechCrunch, Reuters, BBC, and government regulators.

Independent research

Academic and industry research on AI safety, privacy risk, and data governance. Assessments reflect the state of publicly available documentation at the time of review (May 2026).

Important limitations: All scores reflect publicly available documentation at time of assessment. Vendor policies change frequently — always verify current terms directly with the provider before making decisions. This assessment methodology scores documented policies, not independent technical audits of actual vendor systems.

Contact & bookings

Kirk Holt works with Australian and New Zealand individuals, businesses, schools, and community organisations on AI safety education and risk management.

📬 Stay updated

Receive free AI safety updates, scam alerts, and workshop announcements.

✓ You're on the list!

📅 Book a session with Kirk

Available across Australia and New Zealand — in person or online.

  • Workshop: AI safety for teams (1–2 hours)
  • Seminar: AI risk & scam awareness (30–60 min)
  • Risk audit: AI tools review for your business
  • One-on-one: Personal AI safety consult
📅 Book via Calendly →

Or email: info@thinkaisafety.com.au

📋 Send us a message

Use the form below for general enquiries, corrections, or media requests. We respond within 1–2 business days.