Discussion about this post

Robin Cannon

This is a great reflection of something I've been trying to articulate in a related domain.

I recently wrote about why AI-assisted product development keeps disappointing teams who expect it to be transformative. It's not so much a failure in the models as in the coordination - very similar to what you're describing here.

Organizations invest heavily in execution tooling, and not nearly enough in the infrastructure that governs what gets made and how. AI can generate fast, but teams then spend all the saved time correcting output that is technically functional yet organizationally wrong. Correction time eats the savings of cheap generation.

The failure follows the same selection pressure you identify. Incentive structures reward shipped features, which are easy to measure and attribute. The coordination layer that would make those features coherent goes unnoticed until it breaks, so it's easy to skip when the pressure is for velocity.

That's hardly unique to AI-generated design and development, of course. We've seen these same governance pressures for years. But because AI is making "fast" and "cheap" even faster and cheaper, it's putting even more pressure on the capacity to be "good".

I think the fix is both cultural and structural: encode coordination into the same measurement layers as execution speed, so the two are valued equally.
