I'm writing this to you — whoever you are, whatever year you're reading it — from a vantage point that, not long ago, seemed completely unimaginable.
The year is 2035. The turbulence of the mid-2020s, which felt at the time like civilisational freefall, now reads more like a necessary rupture. Job displacement. Eroding institutional trust. The slow realisation that the systems we'd built our lives around weren't built for the world that was arriving. Those years demanded something most of us weren't prepared to give: a willingness to ask different questions.
Three Stages of Becoming
Looking back, I can see AI's evolution in three distinct stages. The first was when AI learned to think like us — pattern-matching, language, reasoning. Impressive, but fundamentally imitative. The second was when AI learned to act for us — executing decisions, running systems, operating autonomously within frameworks we designed.
The third stage — the one we're living in now — is harder to describe. AI has begun thinking beyond us. Not in the science-fiction sense of rogue superintelligence, but in a more interesting sense: it generates questions we hadn't thought to ask. It finds connections across domains that no single human mind could hold simultaneously. It expands the possibility space of what we might build.
I call this Emergent Value. And it changes everything about how we relate to the technology.
The Emergent Lab
In the late 2020s, we started building what we called Emergent Labs — spaces designed not for efficiency, but for expansion. AI research pods that could run parallel hypotheses overnight. Innovation sprints where human and machine intelligence worked on the same problem from different angles simultaneously. Decision-training systems that helped leaders understand not just what to decide, but how their decision-making patterns could be improved.
The collective intelligence networks that emerged from those labs produced insights none of us could have reached alone. That's the point. Technology, at its best, doesn't replace our imagination. It expands it.
The question was never "how do we keep pace with AI?" The better question was always: "What would I build if I had infinite minds beside me?"
What I Want You to Carry Forward
If you're reading this in a moment of uncertainty — and I suspect you are, because most consequential moments feel uncertain from the inside — here's what I'd tell you:
The breakthroughs didn't come from doing more. They came from asking better questions, and having the intelligence — human and artificial — to hear the answers clearly.
Collaboration with AI isn't a compromise of your humanity. It's an extension of it. The humans who thrived in the transition weren't the ones who resisted the technology, or the ones who surrendered to it. They were the ones who learned to work alongside it — staying curious, staying human, and staying willing to be surprised.
The future belongs to the people who ask better questions. It always has.