Why the most dangerous gap in the AI conversation isn’t technical, and what we’re too afraid to admit about it.

There is something almost reassuring about the fact that every generation has faced a version of this moment. A new force arrives, one that most people don’t fully understand, and the space between what is known and what is coming gets filled, quickly and inevitably, with the voices of those who claim to understand it better than everyone else. Sometimes they do. Often they have something to sell. Usually it’s both.

For centuries, knowledge was the preserve of the relatively few. Religious, political and academic institutions interpreted the unknown for the masses and framed it in ways that served their purposes. Truth was not always the priority. When people grew tired of having the narrative controlled for them, history shows us what followed: revolutions, upheavals, a slow and painful expansion of who gets to know what and on whose terms. The democratisation of information was hard won. And when it finally arrived in its fullest form with the internet, it felt like liberation. In many ways it was. But access to information turned out to be a very different thing from the ability to evaluate it, and that distinction matters now more than perhaps it ever has.

Because here we are again. AI is arriving into the same noisy landscape that has always greeted the unknown, except the noise is louder, faster and considerably better funded than anything that came before it. Political voices are using it to signal relevance. Industries are deploying it to appear ahead of the curve. Media cycles are milking it for engagement. And ordinary people are doing what ordinary people have always done: patching together an understanding from whatever sounds most credible at the time.

Take Mo Gawdat, former Chief Business Officer of Google X and one of the most shared voices on AI risk. His podcast appearances carry alarming headlines, and those headlines are what get shared. But if you actually listen to what he says in full, it’s more nuanced than the headlines suggest. He’s clear that spreading fear isn’t his intention, and he argues that the real danger comes not from AI itself but from the people directing it. That nuance keeps being missed, because it isn’t as neat as some would like. It’s also worth remembering that he’s building his own AI company and has books to sell. That doesn’t make him wrong. It makes him human, with a perspective shaped by his position, like every other voice in this conversation. The question to ask of any source isn’t simply whether they’re credible. It’s: who is saying this, why now, and what are they not telling me?

Which brings me to the gap I’m most conscious of, and it isn’t the one most people are talking about. It isn’t the gap between now and an uncertain AI future. It’s the gap between what we’re asking adults to do for the young people in their care and what those adults were ever taught for themselves. This isn’t just a schools problem. We’re asking teachers and school leaders to prepare children for a world that’s genuinely difficult to read, using critical thinking skills many were never systematically given. That’s not a criticism of the people in the room. It’s a recognition of what the systems that educated them were, and weren’t, designed to produce.

Formal education and lived experience are two very different teachers, and most people have had far more of the second than the first. The ability to question a source, to smell an agenda, to weigh what you’re being told against what you actually know: these things develop through life whether anyone teaches them deliberately or not. They develop through navigating difficult decisions, through learning to read a room, through the slow accumulation of knowing when something doesn’t quite add up. The problem isn’t that people lack them. It’s that nobody has pointed at those instincts and said: that thing you just did, that’s the skill, and it works here too. And, as I’ve written elsewhere, AI is already exposing how much of what we’ve been doing didn’t need to exist in the first place.

AI literacy is important, and schools must do more to address it. But the assumption that teenagers, as digital natives, are better equipped for the digital world needs challenging, because it doesn’t hold up. Fluency with technology is not the same as judgment about the world it operates in, and judgment is built through experience that teenagers are still accumulating. An adult who has spent decades questioning what they’re told, navigating difficult decisions and learning to smell an agenda has something a sixteen-year-old with a smartphone simply doesn’t have yet, regardless of how confident they are online. We’ve been mistaking familiarity for capability, and that mistake has consequences.

The answer, though, has been under our noses the whole time. Every lesson where a student has been asked to question a source, consider a motive, show their workings, or sit with uncertainty rather than reach for the nearest comfortable answer has been teaching the transferable skills. They just haven’t always been named as such, and naming them matters. Because a skill that isn’t named is a skill that isn’t trusted, and a skill that isn’t trusted doesn’t get used when it’s needed most.

Sometimes we need to step back and look at the gap with a wider lens. Perhaps what becomes clear is that this isn’t about AI at all. It’s deeply rooted in our humanness and whether we’re bold enough to trust ourselves to engage with something we still can’t fully comprehend. Every generation has faced that test in some form. The ones who navigated it best weren’t the ones who found the most authoritative voice to follow. They were the ones who trusted their own capacity to reason, question and sit with not knowing yet.

We have the tools, and they’ve been there all along. But if we’re honest with ourselves, we’re sometimes a bit lazy about using them. We let other people shape our thinking, absorb the headline, move on. This is too important for that. We need to consciously choose to engage, to question, to criticise, to argue and to help shape what comes next.

How much of what you think you know about AI actually came from a headline you never looked behind?

That question isn’t rhetorical. It’s where the work begins.