Two things happened recently that made me rethink how I use AI.

The first was Fyxer, which I use to manage my mailbox and draft email replies. Its latest update has got so good at sounding like me that I rarely need to change a reply. Brilliant, right? Except I started to notice that some of the replies were answering questions whose answers were either obvious or could have been looked up faster than it took to email me. The AI wasn’t just drafting my response; it was highlighting that the email was a waste of both our time.

The second was handing a framing paper to Claude’s Opus 4.6, and asking it to produce a summary handout and a slide deck (yes, you read that right: PowerPoint!). It produced both so well that I didn’t need to change anything other than adding a logo. That should have felt like a great success. Instead, it made me a little uncomfortable, and not because I thought it could replace me. If AI could distil my detailed, carefully written document into something the people in the room actually needed, was my original too long? However well-intentioned it was to share the full details of the vision, had I overengineered it?

And that’s when the thought hit me: if this can be outsourced so quickly, do we even need to do it this way in the first place?

In my work with staff and students, one of my key messages around AI has always been, “don’t outsource your thinking. Use it to expand your thought process.” It’s the research assistant you never knew you needed. But I’ve started asking a harder question, not just “can AI do this faster?” but “why was this taking so long in the first place?” And harder still, “does this even need to exist?”

Because there’s a spectrum here. Sometimes AI is a genuine efficiency gain: it processes complex tasks faster than you or I can imagine, and that’s incredible. Yet, at the other end, perhaps it sometimes exposes that the process around the task was bloated, built on layers of bureaucracy, tradition, and “how we’ve always done it.” And sometimes, if you’re honest, it reveals that the task shouldn’t exist at all.

Take email, for example. Most platforms are now building in “summarise this email” features, and it’s worth thinking about what that’s actually telling us. If an email needs summarising before anyone will read it, was it too long? And if AI is drafting my reply and summarising their message, while they’re doing the same at their end, at what point are we just two AIs talking to each other? Could that exchange have been an instant message or an action on a task list?

Meetings are the same story. People are using AI to summarise meetings they sat through but couldn’t stay focused in, or maybe didn’t even need to attend in the first place. If AI can pull out the three things that actually mattered from an hour-long meeting, was a meeting the right format, or was it a five-minute decision that needed a conversation and some actions to take away?

Now, a slight tangent, but one that really got me thinking: what about all those super-long terms and conditions with the tiny font that no one ever reads before accepting? For years, organisations have buried clauses in small print, knowing most people won’t read them. “Well, you accepted the terms” is the response when you challenge something. Now, anyone can paste those terms into AI and ask, “What am I actually agreeing to?” and get a straight answer. AI is about to force a transparency shift, whether organisations like it or not. If you’re hiding something in the small print, it won’t stay hidden for long. And honestly? Good.

To be clear, I’m not saying we reduce everything to three bullet points and call it a day. The thinking, the research, the deep work, still matters because that’s where understanding is built. But there’s a difference between the work you do to develop your thinking and the work you produce to communicate it. AI is exposing the gap between those two things, and instead of just using AI to bridge that gap every time, maybe we should be closing it ourselves.

Before you outsource your next task to AI, ask yourself three things.

  1. If AI can do this in seconds, why was it taking me hours? Is the task genuinely complex, or is the process around it bloated? If it’s the second one, redesign the process.
  2. Who is the output actually for? If the honest answer is “nobody really reads this”, then AI hasn’t solved your problem; it’s helped you produce something useless more quickly. Stop producing it, or rethink the format entirely.
  3. If I strip this back to what AI extracted, is that enough? If the summary was all anyone needed, start there next time. Write the summary. No long document, no AI to distil it.

That third one is the point of the whole post. The best use of AI might be to work itself out of a job: not yours, its own.

But here’s what I think is really important in all of this. We need to celebrate the humanity.

The headwinds of change are coming. Generative AI is normalising its place in our world, but most AI development companies aren’t aiming to make money from a better chatbot. They’re aiming for Artificial ‘General’ Intelligence (AGI) that will transform, and in some cases replace, the workforce. We are being gradually conditioned (harsh but true), often without realising it, to hand over more and more of our input and control. Some of us are sleepily drifting into that. I’d rather we were conscious participants who know what is happening and decide whether to accept it.

Because when you strip away the bloat, the unnecessary work, the tasks that didn’t need to exist, what you’re left with is time. And what humans do with time is the stuff AI can’t replicate. We go on tangents (some more than others). We follow curiosity down unexpected paths (and rabbit holes). We think about one thing, and it accidentally sparks something completely unrelated. I was doing this when I suddenly thought of that, which, unintentionally, made me produce this (hello, post-it notes!). That’s not inefficiency. That’s the stuff of human innovation and ingenuity.

And when the task does matter? That’s where the real power of AI begins. But that’s for the next post.

And the real efficiency? It isn’t AI doing your tasks faster. It’s realising those tasks shouldn’t have existed in the first place.