Autonomy vs Workflow

· 3 min read
Davis Martens
Building Chuff | Ex-BCG

There’s a fundamental flaw in how most agentic systems are built today. Too often, agents are treated like glorified workflow engines, designed as a series of step-functions that chain together discrete reasoning tasks. One step labels an email, another analyzes a document, another triggers an action. This architecture can produce polished flows, but it’s ultimately an extension of traditional automation—just with a bigger brain.

That might sound like progress. But my strong belief is that it’s a dead end.

The real power of AI agents isn’t in replicating task sequences with more sophistication. It’s in reasoning dynamically to achieve outcomes. Instead of encoding step-by-step processes, we should treat agents as autonomous problem-solvers. You don’t tell them what to do in what order—you define what good looks like, what tools they can use, and where the lines are. Then you let them figure it out.

This requires a completely different mental model. I’ve been experimenting with a new design pattern: goal-driven agents, guided by principles and bounded by permissions. Think of it as setting intent, constraints, and context—not crafting instructions. This means:

  • A clear goal (e.g. "increase demo conversion rate")
  • A set of guiding principles (e.g. "prioritize speed over polish" or "personalize every message")
  • A permissions layer (e.g. which tools or data the agent can access, and which actions require approval)
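To make the pattern concrete, here is a minimal sketch of what such a spec might look like in Python. All of the names here (`AgentSpec`, `Permissions`, the tool identifiers) are illustrative, not a real framework’s API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the intent/constraints/permissions pattern.
# The agent receives a goal and principles -- not an ordered task list.

@dataclass
class Permissions:
    allowed_tools: set[str]        # tools the agent may use freely
    approval_required: set[str]    # actions that need human sign-off

@dataclass
class AgentSpec:
    goal: str                      # what good looks like
    principles: list[str]          # guidance, not step-by-step orders
    permissions: Permissions       # where the lines are

spec = AgentSpec(
    goal="increase demo conversion rate",
    principles=[
        "prioritize speed over polish",
        "personalize every message",
    ],
    permissions=Permissions(
        allowed_tools={"crm.read", "email.draft"},
        approval_required={"email.send"},
    ),
)
```

Note what’s absent: there is no ordered sequence of steps. The spec constrains the agent’s search space; it doesn’t script its behavior.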

The system must handle dynamic context management: injecting runtime memory, summarizing recent experiences, retrieving relevant long-term memories, and synthesizing know-how. And yes, this also means building a permission system that doesn’t just say yes/no, but lets agents escalate, ask for help, and justify why they need something.
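The escalation idea can be sketched as a permission check with three outcomes instead of two. This is a toy illustration under assumed names (`Decision`, `PermissionRequest`), not a production design:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"    # pause and ask a human, with a justification

@dataclass
class PermissionRequest:
    action: str
    justification: str       # the agent explains why it needs this

def check(req: PermissionRequest,
          allowed: set[str],
          needs_approval: set[str]) -> Decision:
    if req.action in allowed:
        return Decision.ALLOW
    if req.action in needs_approval:
        # Not a hard no: route to a human along with the agent's reasoning.
        return Decision.ESCALATE
    return Decision.DENY

decision = check(
    PermissionRequest(action="email.send",
                      justification="Follow-up to a demo no-show"),
    allowed={"crm.read", "email.draft"},
    needs_approval={"email.send"},
)
```

The justification travels with the request, so the human reviewing the escalation sees not just *what* the agent wants to do but *why*.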

Coding agents offer a useful analogy. A GitHub Copilot-like agent can propose a diff, but still requires human sign-off to merge. That same permission flow can apply in CRM, support, or growth operations: an agent drafts a message, proposes a segmentation strategy, flags a churn risk—but holds action until it’s cleared.
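The propose-then-approve flow above can be sketched in a few lines: the agent drafts an action into a pending queue, and nothing executes until a human clears it. Function and field names here are hypothetical:

```python
# Minimal sketch of a hold-until-cleared action queue.

pending: list[dict] = []    # proposals awaiting human sign-off

def propose(action: str, payload: str) -> dict:
    """Agent drafts an action but does not execute it."""
    proposal = {"action": action, "payload": payload, "approved": False}
    pending.append(proposal)
    return proposal

def approve_and_run(proposal: dict) -> str:
    """Human clears the proposal; only then does it execute."""
    proposal["approved"] = True
    return f"executed {proposal['action']}"

draft = propose("email.send", "Hi Sam, sorry we missed you at the demo.")
# Nothing has happened yet: the draft sits in `pending` until cleared.
result = approve_and_run(draft)
```

The same shape covers a drafted message, a proposed segmentation, or a flagged churn risk: the agent’s output is a proposal object, and execution is a separate, gated step.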

This flips the assumption that repeatability is the highest goal. Repeatability matters for industrial processes. But agents should be used where reasoning, context, and judgment matter more than consistency. That’s where the leverage is.

This shift won’t be easy. It challenges how teams build, measure, and think about reliability. But we’re not trying to replace workflows with smarter workflows. We’re trying to replace workflows with systems that can think.