Why most AI projects stall
In my consulting practice the same pattern shows up almost every quarter. A board sees a demo, allocates a budget, a pilot ships, and three months later the work hasn't moved a metric anyone in the operating business cares about. The technology rarely fails. The framing does.
AI projects fail because organizations treat them like software projects, but agentic AI has fundamentally different operational properties: it changes its own behavior under load, it pulls work across team boundaries, and it requires governance that didn't exist in the previous generation of enterprise systems. The Frontier Approach is the framework I use with clients to handle that gap.
The four moves
1. Align on outcomes, not capabilities
"We want to use AI" is not a project. "We want to compress quote-to-cash from 14 days to 4 days" is. The first move is forcing the conversation back to a measurable business outcome with a named owner. If a senior leader can't name the metric, the project isn't ready to start. This single move kills more bad projects than any technical decision.
2. Architect for autonomy
Most enterprise systems were built for deterministic workflows. Agentic AI introduces probabilistic decision-making — agents that pick paths, call tools, hand work to other agents. The architecture has to assume non-determinism: structured logging of every agent decision, idempotent side effects, evaluators running alongside production traffic. The companies that get this right treat their evaluation infrastructure as production infrastructure, not a research artifact.
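To make "structured logging of every agent decision" concrete, here is a minimal sketch in Python. The names (`AgentDecision`, `DecisionLog`, the example agent and tool names) are illustrative, not from any specific framework, and a real deployment would write to durable storage rather than an in-memory list:

```python
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    """One record per agent decision: what it saw, what it chose, and what else it could have done."""
    agent: str
    action: str            # e.g. "call_tool", "handoff", "escalate"
    inputs: dict           # the context the agent acted on
    chosen: str            # the path it took
    alternatives: list     # the paths it considered
    decision_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    """Append-only decision log; in production this backs onto durable, queryable storage."""
    def __init__(self):
        self.records = []

    def record(self, decision: AgentDecision) -> str:
        self.records.append(asdict(decision))
        return decision.decision_id

# Hypothetical usage: a quoting agent chooses a pricing tool over manual review.
log = DecisionLog()
decision_id = log.record(AgentDecision(
    agent="quote-agent",
    action="call_tool",
    inputs={"customer_id": "C-42"},
    chosen="pricing_api",
    alternatives=["pricing_api", "manual_review"],
))
```

The point of logging alternatives alongside the chosen path is that evaluators and auditors can later ask not just "what did the agent do?" but "what did it decline to do, and why?"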
3. Implement with engineering rigor, not prompt rigor
Prompt engineering is real but overweighted. The durable wins come from the surrounding engineering: retrieval pipelines that handle out-of-distribution inputs, retry semantics that don't compound errors, human-in-the-loop checkpoints where the stakes are highest, and a continuous-evaluation harness that catches regressions before customers do. Treat the LLM as one component in a system designed for failure, not the system itself.
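"Retry semantics that don't compound errors" usually means pairing bounded retries with an idempotency key, so a downstream system that deduplicates on the key applies the side effect at most once even when an earlier attempt partially succeeded. A minimal sketch, with a toy downstream service standing in for a real API (all names here are hypothetical):

```python
import time

def call_with_retry(fn, *, idempotency_key, attempts=3, base_delay=0.01):
    """Retry a side-effecting call, passing the SAME idempotency key every time
    so retries cannot duplicate the effect downstream."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fn(idempotency_key=idempotency_key)
        except Exception as err:
            last_err = err
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_err

# Toy downstream service: fails once with a transient error,
# then deduplicates on the idempotency key.
applied = {}
calls = {"n": 0}

def charge(idempotency_key):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("transient network error")
    if idempotency_key not in applied:   # dedupe: apply the effect at most once
        applied[idempotency_key] = "charged"
    return applied[idempotency_key]

result = call_with_retry(charge, idempotency_key="order-123")
```

The same key travels with every attempt, so even if the first call had succeeded on the server side before the timeout, the retry would be a no-op rather than a second charge. That is the difference between a retry policy that absorbs errors and one that compounds them.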
4. Govern the way the auditor would want
Boards and auditors will eventually ask hard questions about AI decisions: who authorized this, what data trained it, how do you roll back, and who is accountable when it goes wrong? Get those answers in place before they're asked. AI governance done well becomes a competitive advantage when regulation lands — companies that already have the controls don't slow down.
What's different about agentic systems
The shift from using AI to deploying agents is bigger than the shift from desktop to mobile. An agent can take action, hand work to other agents, escalate when stuck, and learn from feedback. The unit of work is no longer a request-response cycle but a conversation between systems with goals.
That changes everything about operations. Capacity planning becomes about tokens, not transactions. Observability becomes about decisions, not just spans. Quality becomes about agent behavior, not pixel correctness. Companies that internalize this early will operate at scales their competitors can't reach.
Where this came from
This framework grew out of work at Columbus Global helping enterprises across the Nordics and globally implement Microsoft Dynamics, Azure, and now agentic AI on top of those stacks. I've presented versions of it at Microsoft Norway's all-hands, D-Congress (the largest Nordic retail and e-commerce event), the Finnish Embassy business forum, and to the boards of several public companies in Scandinavia. Each engagement sharpens it. Each new client teaches me something the framework didn't yet account for.
How to use this
You can apply this framework yourself — most of it is just disciplined thinking. If you're working on an AI initiative right now, the highest-value question to start with is the first one: what business metric does this move, and who owns that metric? If you can't answer cleanly, fix that before anything else.
If you'd rather have me work alongside your team on a specific initiative, that's the kind of advisory work I take on selectively through Columbus and independently. Reach out and let's talk through what you're working on.
Talk to me about a specific initiative
Available for executive advisory, board briefings, and strategic engagement on AI and digital transformation programs.