How to Talk to Your Kids About AI
Why This Conversation Matters Now
Your child is almost certainly using AI already. They might be asking ChatGPT for help with a science project, letting Gemini settle a debate with a friend, or prompting an image generator to make something funny during free time. AI has quietly woven itself into the fabric of childhood — as casually as Google did a generation ago, but with far more conversational power.
And yet, most children have never had a single conversation with a trusted adult about what AI actually is.
Not what it does — they can see that for themselves. What it is. How it works. Where it gets things right, and where it gets things dangerously, confidently wrong. They've never been given the vocabulary to think critically about the tool they're using every day.
As a child psychologist and speech coach, I want to be very clear about something: this conversation is not about fear. If you approach your child with alarm — "AI is dangerous, stay away" — you've already lost the thread. They'll nod, walk away, and keep using it in private. Prohibition breeds secrecy with children just as reliably as it does with adults.
This conversation is about building a skill — the same way you'd teach a child to cross the street. You're not trying to make them afraid of cars. You're teaching them to look both ways. The goal is critical thinking: a reflex that serves them long after any specific AI tool has been replaced by the next one.
The window for this conversation is open right now. Children who develop source-evaluation habits early show measurably stronger critical thinking skills in adolescence. The research on media literacy supports this consistently — and AI literacy is the next chapter of that same story.
So let's talk about how to actually have this conversation — in a way that works.
Start with What AI Actually Is (Age-Appropriate Explanations)
One of the most common mistakes adults make is explaining AI at the wrong level. They either launch into a technical monologue about neural networks — which glazes even adult eyes — or they wave vaguely and say "it's like a really smart computer," which leaves children with exactly the wrong impression. AI is not smart. Not in the way they understand that word.
The right explanation depends entirely on the child's developmental stage and language level. Here's a framework I use in practice.
For ages 6–8
"AI is like a very fast library that guesses what answer you want. It looks at tons and tons of books and tries to find words that seem like they belong together. But it doesn't actually understand things the way you do. It doesn't know what it's like to be confused, or excited, or curious. It just makes guesses that sound good."
At this age, the key concept is the difference between guessing and knowing. Young children understand guessing — they do it constantly. Framing AI as a guesser rather than a knower gives them an accurate mental model without overwhelming them.
For ages 9–12
"AI predicts what word should come next based on patterns it found in millions of texts. That's why it sounds so confident — it's really good at making sentences that sound right. But sounding right and being right are two different things. AI can be completely wrong and still sound like it's sure."
Children in this age range are developing the ability to hold two ideas in tension — something can sound true and not be true. This is a critical cognitive milestone, and AI provides a perfect real-world laboratory for practicing it.
For teens
"AI is a statistical pattern-matching system. It generates plausible text, not truth. It has no understanding, no experience, and no ability to evaluate whether what it's saying is accurate. Your job — every single time you use it — is to evaluate what it gives you. Think of it as a first draft from someone who didn't do the research."
Teenagers respond well to being given responsibility. Framing AI use as a skill that requires their judgment — rather than a danger that requires your supervision — respects their developmental need for autonomy while still equipping them with the right lens.
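For parents (or teens) who are comfortable with a little code, the "pattern-matching, not understanding" idea can be made concrete. The sketch below is a deliberately tiny toy, not how modern AI is actually built: it learns which word tends to follow which in a made-up corpus, then generates text that sounds like a sentence while having no way to know whether it is true. The corpus and function names are invented for illustration.

```python
# A toy "next-word guesser": the same core idea as a large language
# model, shrunk to a few lines. It learns which word tends to follow
# which, then generates text that *sounds* plausible -- with no idea
# whether any of it is true.
import random
from collections import defaultdict

# A tiny made-up training corpus. Note it contains both a true claim
# and a false one about the moon.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the sun is made of gas ."
).split()

# Count which word follows which (a "bigram" table).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def guess_sentence(start, length=6, seed=0):
    """Chain likely next words together, starting from `start`."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # pick a word that has followed before
    return " ".join(words)

print(guess_sentence("the"))
```

Run it a few times with different seeds and it will just as happily claim the moon is made of cheese as of rock. It has no way to tell which is true; it only knows which words go together — which is exactly the "plausible text, not truth" point from the teen explanation above.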
The speech coach principle
Across all ages, one rule governs: use the child's language level, not yours. Don't over-explain. Give the core idea in two or three sentences, then stop. Let them ask follow-up questions. The questions they ask will tell you exactly what they're ready to understand — and what they're not. Resist the urge to deliver a lecture. The best conversations about AI are the ones where the child talks more than you do.
Five Conversation Starters That Actually Work
Theory is useful. But parents need practical entry points — things they can actually say at the dinner table or in the car without it feeling like an interrogation. These five prompts are designed to open doors rather than shut them down.
1. "Show me something cool you asked AI recently."
This is a non-judgmental entry point. You're expressing curiosity, not suspicion. Most children will happily show you something if they don't sense a trap. Whatever they show you, respond with genuine interest first — even if the content gives you pause. Your reaction in this moment determines whether they'll share with you again.
2. "Let's test if AI knows the answer to something we can fact-check."
Turn it into a game. Pick something verifiable — a historical date, a scientific fact, something about your city. Ask AI together, then look it up. When AI gets it right, acknowledge it. When it gets it wrong, don't gloat — just observe it together. "Huh, it sounded really sure about that. Interesting." You're building an evaluation reflex, not an anti-AI bias.
3. "What would you do if AI told you something that felt wrong?"
This question builds agency. You're communicating that your child's instincts matter — that a feeling of "that doesn't seem right" is worth paying attention to. From a child psychology perspective, this kind of question strengthens what we call epistemic autonomy: the confidence to question an authoritative-sounding source based on their own reasoning.
4. "Do you ever use AI for feelings or personal stuff?"
This one requires a light touch. Don't push. If they say no, accept it. If they say yes, don't react with alarm. Many children turn to AI for emotional support because it feels safer than talking to a person — no judgment, no consequences, no awkward silences. That impulse is completely understandable, and shaming it will only drive it underground. Instead, gently open the door: "That makes sense. I'm glad you have something to talk to. I'd love to be someone you can talk to about that stuff too, whenever you're ready."
5. "What rules do you think we should have for AI in our family?"
This is the single most effective prompt on the list — because it's collaborative. You're not handing down a decree from on high. You're inviting your child to co-create the framework they'll live within. Children who participate in rule-making follow rules more consistently. It's one of the most well-established findings in developmental psychology, and it applies directly here.
Let them propose ideas. Some will be impractical. Some will surprise you with their thoughtfulness. The conversation itself is the point.
Building the "Says Who?" Reflex
If you take only one thing from this article, let it be this: the most important skill you can teach your child in the age of AI is the habit of asking where information comes from.
We call it the "says who?" reflex. It's not skepticism for its own sake. It's not teaching children to distrust everything. It's teaching them to pause — just for a moment — before accepting information as fact. To ask: Who said this? How do they know? Can I check?
AI makes this skill more urgent than ever, because AI doesn't cite sources by default. It delivers information in a clean, authoritative tone with no footnotes, no links, no attribution. For a child, this creates the illusion of a single, definitive answer — when in reality, the response was assembled from patterns across millions of documents of wildly varying quality.
How to practice this together
- Pick a topic your child is curious about. It works best when it's something they genuinely want to know — not something you've assigned.
- Ask AI the question together. Read the response out loud.
- Identify the claims. Help your child spot the specific factual statements in the response. "It says the Great Wall of China is visible from space. Is that a fact or a guess?"
- Fact-check together. Use a search engine, an encyclopedia, a book — anything with a verifiable source. Compare what you find to what AI said.
- Talk about what you discover. Sometimes AI is right. Sometimes it's partly right. Sometimes it's completely wrong. Each outcome is a learning opportunity.
The research on this is encouraging. Practicing source evaluation early — even in informal, low-stakes settings like a dinner-table conversation — builds a habit that carries into adolescence. And the habit transfers: a child who learns to question AI today will question misleading headlines, dubious social media claims, and unfounded arguments tomorrow.
From a speech and language perspective, this practice also builds a crucial communication skill: the ability to distinguish between confident delivery and actual knowledge. That distinction is valuable in every relationship, every classroom, and every job they'll ever have.
What Not to Do
Good intentions can backfire. Here are the most common mistakes I see parents make — and why they matter.
Don't panic or ban AI outright
Prohibition almost never works with children, and it especially doesn't work with technology that's freely available on every device they encounter. Banning AI doesn't remove it from their life — it removes you from their AI experience. They'll use it at a friend's house, at school, on a borrowed phone. The difference is that now they're using it without your guidance, and they've learned that it's something to hide from you.
Don't shame them for using AI
If your child tells you they used AI to help with an essay, your first response matters enormously. If you react with disappointment or anger — "That's cheating" — you've just taught them not to tell you next time. A better response: "Interesting. Show me what you asked it and what it gave you. What did you change? What did you keep?" You're staying in the conversation instead of ending it.
Don't assume they understand AI's limitations
Many adults don't fully understand how AI works. It's unreasonable to expect children to intuit that a tool delivering polished, confident paragraphs might be entirely wrong. They need to be explicitly taught this — not once, but repeatedly, in different contexts, over time.
Don't treat this as a one-time conversation
AI is evolving rapidly. The tools your child uses today will look different six months from now. The conversation about AI isn't a box you check — it's an ongoing dialogue that grows as your child grows and as the technology changes. Build it into the rhythm of your family life rather than treating it as a special event.
Making It a Family Practice
The families I see navigating AI most successfully aren't the ones with the strictest rules or the most technical knowledge. They're the ones who've made AI a normal thing to talk about — casually, regularly, without drama.
Start a weekly AI check-in
It doesn't need to be formal. Over dinner, in the car, before bed — spend five minutes asking what everyone used AI for that week. Parents included. When children see that you're also learning, also making mistakes, also figuring this out, it normalizes the process. It stops being a surveillance exercise and starts being a shared experience.
Model good AI behavior yourself
If you want your child to fact-check AI, let them see you fact-checking AI. If you want them to use AI as a starting point rather than a final answer, show them what that looks like in your own work and daily life. Children learn more from what they observe than from what they're told — this is one of the most consistent findings in developmental psychology.
Use tools designed with families in mind
Most AI platforms were built for adult professionals. They don't account for the developmental needs of young users, the communication patterns of children, or the role of parents in guiding the experience. Tools like SapioChat are designed differently — with age-appropriate guardrails, transparent sourcing, and features that support the kind of critical thinking we've been discussing. If you're looking for a starting point, it's worth exploring a platform built with your child's development in mind rather than retrofitting an adult tool.
Keep learning together
Visit our resources for parents for ongoing guidance on raising critical thinkers in the age of AI. Our FAQ covers the most common questions families ask — from age-appropriate usage guidelines to how AI handles sensitive topics.
The goal has never been to make your child afraid of AI. It's to make them thoughtful about it. To give them the language, the habits, and the confidence to use powerful tools without being used by them. That starts with a conversation — and the fact that you're reading this means you're ready to have it.