If AI keeps getting better at prediction, our biggest risk isn’t hallucinations. It’s sameness. And sycophancy.
In my previous post, I shared how I addressed drift, hallucinations, and context limits when working with AI. Those problems are real, but they’re solvable, and the models are improving.
The harder problem is this: AI is built on everything humans have already done. It can remix the past, but it cannot originate. It will confidently agree with your worst ideas. And it has no stake in whether you succeed or fail.
We’re solving for accuracy while ignoring differentiation. If every company buys the same models, trained on the same data, who creates the next “new”? If you replace apprentices with autocomplete, where will tomorrow’s experts come from?
Mass AI adoption raises the premium on human originality, domain expertise, and ethical courage. The more we predict, the rarer real invention becomes. And here’s what the hype cycle won’t tell you: many companies that laid off workers in favor of AI are quietly hiring them back, because the technology didn’t deliver what was promised.
Define a “human floor”: the decisions your organization refuses to automate, such as strategy, ethics, safety, hiring, and incident root-cause analysis. Use AI to clear busywork and fund upskilling, not to hollow out the craft.
AI should scale human judgment and creativity, or it will erase them.
What’s one decision in your organization that must stay human, and why?
I’d welcome your perspective. Feel free to reach out.
Article: https://bit.ly/4iiTQrv