Work is getting faster.
Not everyone is being given a way to keep up.
We tend to build things with the intention of making life easier, more connected, and more efficient, and for a while, that’s exactly what happens.
Over time, we start asking different questions about what those systems are doing to us.
I’ve been thinking about that in the context of the recent lawsuits against Meta and the way Lennon Torres at Mashable described growing up on social media as “digital nicotine.” These were tools meant to connect us, but they also changed how we interact, how we process information, and how much of ourselves we outsource to systems we don’t fully question or understand.
That same tension is starting to show up in how we talk about AI at work. The dominant narrative is speed, efficiency, and output, even as broader analysis from the World Economic Forum has started to warn that these systems may widen the gap between who benefits and who gets left behind.
Most leaders say the design of human-AI interaction matters, yet Deloitte’s latest Human Capital Trends report notes only 6% feel they are actually doing it well.
What I keep coming back to is the difference between using AI to support thinking and gradually, or wholly, outsourcing thinking to it. The two can look very similar on the surface, especially in environments that reward speed, but they lead to very different outcomes over time. At a certain point, the tradeoff stops being about efficiency and starts shaping how we collaborate, make decisions, and evaluate work.
As output increases, the space for human judgment, collaboration, and pushback shrinks, and over time that has consequences not just for quality, but for the kind of thinking we bring into our work.
I find myself thinking about this through my own relationship with technology. I’ve always leaned in early, partly out of curiosity and partly because, as someone living with a disability, I’m constantly looking for ways to make systems work better in practice. That instinct to optimize and streamline has real value, and it also makes tradeoffs harder to ignore.
Speed is never neutral. Every system optimizes for something and, in doing so, deprioritizes something else. When speed and output become the primary measure, what often gets deprioritized is the human context that makes work sustainable.
That’s the part that feels most important to pay attention to, not just who gets left behind, but how gradually and quietly that process can happen when speed becomes the primary measure of value.
As we continue to integrate AI into how work gets done, the question that keeps coming up for me is not just what these systems allow us to do, but what kind of work and what kind of participation they are shaping in the process.
Where are you seeing that balance being handled well, and where is it starting to break down?