Every week someone asks: “Can we automate this with AI?”
If you’re stepping into AI product leadership, here’s the harder question: should you?
This week in my agentic AI product management course, I expected we’d be diving into architecture diagrams and system design. Instead, we were asked to make judgment calls — choosing between rules-based systems and machine learning approaches for specific problems, and using a framework to justify our decisions.
It sounded simple enough. But the moment our instructor pushed back on each answer, the real lesson emerged. The exercise wasn’t about getting it right. It was about learning how to think: how to argue, adapt, and defend decisions in a domain where there isn’t always a right answer at all.
What to Build, and Whether to Build It
One of the hardest questions in product management isn’t how to build something. It’s what to build, or sometimes, whether to build at all. In the age of AI, restraint can be as strategic as innovation. The instinct to automate is strong, but it isn’t always justified.
Decisions with Many Dimensions
Our debates started with capability and cost, but quickly expanded into a tangle of trade-offs: customer risk, sanctions, ad revenue, compute, reputation, regulation. I began to see that AI product decisions don’t just live in the technical layer. They live at the intersection where technology meets economics, policy, and ethics.
The True Cost of Being Wrong
A moderation case brought this home. The cost of error could far exceed the cost of compute. In that discussion, we stopped asking “Is AI better?” and started asking “What happens if the platform gets banned from AWS?” That shift, from optimization to survival, was sobering.
Beyond Logic, Into Governance
When we argued about AI versus rules-based systems, it wasn’t really about accuracy anymore. It was about transparency, bias, accountability, and ethics. Once AI systems make decisions, they stop being just technical artifacts. They become governance systems. Someone always owns the consequences, even when the model doesn’t explain them cleanly.
SaaS Isn’t Dying, It’s Shifting
Another insight surfaced during the week: AI doesn’t kill SaaS, it transforms it. Agents orchestrate SaaS tools; they draw power from context, not just computation. The moat isn’t model access anymore. It’s understanding, history, and domain depth.
The Art of Arguing with Conviction
When our instructor kept challenging our assumptions, it started to feel like an endless back-and-forth with no winning move. But that was the point. It wasn’t about compliance with a framework. It was about articulation, confidence, and learning to stand your ground when the stakes and the ambiguity are both high. Frameworks guide, but articulation convinces.
Owning the Downside
Looking back, two threads connected it all: domain expertise and tolerance for risk. Every AI decision redistributes risk, and risk tolerance isn’t neutral. Someone always bears the downside of being wrong. The more I think about it, the more I see AI product work not as building intelligence, but as negotiating tolerance for variance, unpredictability, and consequence.
That brings me to the questions still echoing after class. If something can now be automated, should it be? How much unpredictability are we truly willing to accept? Who defines acceptable error, and who pays for it? And as we expand what’s possible, do we expand accountability just as fast?
AI product management, I’m beginning to realize, isn’t simply about creating smarter systems. It’s about practicing stewardship in an era of probabilistic responsibility.
We’re living through a moment where capability is accelerating faster than reflection. I’m interested in the space between those two forces, where technology meets accountability, where innovation meets consequence.
If you’re stepping into AI product work and sitting with these same tensions, what’s the trade-off that’s hardest for you to reason through?
This is Latina-in-the-Loop — a running exploration of what it means to build, question, and steward intelligent systems in real time. Follow along.