Software Inversion
AI development is software development now, not data science.
The traditional ML stack required specialized expertise: feature engineering, model training, hyperparameter tuning. This created a talent bottleneck.
LLMs inverted this. The hard part (the model) is now a commodity API. The new hard part is product thinking: What should the AI do? How should it fail gracefully? These are software problems, not ML problems.
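The "fail gracefully" question really is ordinary software engineering, not ML. A minimal sketch of the idea, using only the standard library; `call_model`, `ModelError`, and `with_fallback` are hypothetical names standing in for any hosted model client and its error type:

```python
# Hedged sketch: graceful failure around a commodity model API.
# The model itself is behind `call_model`; the retry/fallback logic is
# the "software problem" -- plain engineering, no ML expertise required.
import time

class ModelError(Exception):
    """Stand-in for a hosted model failure (timeout, rate limit, outage)."""

def with_fallback(call_model, prompt, retries=2, fallback="Sorry, try again later."):
    """Call a model function, retrying on failure, then degrade gracefully."""
    for attempt in range(retries + 1):
        try:
            return call_model(prompt)
        except ModelError:
            if attempt < retries:
                time.sleep(0.1 * (2 ** attempt))  # simple exponential backoff
    return fallback  # graceful degradation instead of a crash

# Usage: a flaky stand-in model that fails once, then succeeds.
calls = {"n": 0}
def flaky_model(prompt):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ModelError("rate limited")
    return f"summary of: {prompt}"

print(with_fallback(flaky_model, "quarterly report"))  # recovers on the retry
```

The point of the sketch is that everything interesting here (retry policy, backoff, what the user sees when the model is down) is product and systems thinking, which any existing engineering team can do.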
This Is Now Table Stakes
In 2023, "your engineers can build AI" was revelatory. In 2025, everyone knows this. Every consultancy says it. The differentiation has eroded.
The question has shifted from "can we build AI?" to "do we know what to build?"
The Next Inversion
The model isn't just a commodity API—it's becoming the developer. The hard part isn't "how do we build AI features" but "how do we build with AI as a co-creator."
This changes the economics of what's worth building. See: Taste as Infrastructure.
Implication
Your existing engineering team can ship AI features. But shipping features isn't the goal—building capability infrastructure is. The constraint has moved from "can we build" to "do we have taste worth encoding."
Contrarian To
"AI projects require specialized ML teams"
Metaphor
Like when AWS made infrastructure a credit card swipe instead of a data center buildout. Useful, but now everyone has a credit card. The new differentiator is knowing what to buy.