I just started a new job as a GTM Engineer.

The first thing I did was read a book about a factory.

Not a sales playbook.

Not a cold email course.

Not another breakdown of how to use Clay.

The Goal by Eliyahu Goldratt, published in 1984, follows a factory manager who has 90 days to turn around a failing plant or watch it get shut down.

It contains one of the most precise frameworks ever written for diagnosing why a system produces less than it should, and what to do about it.

Here's what caught me.

I now work in a world where I can spin up multiple AI agents in parallel and have them building different things before lunch.

Execution capacity is essentially infinite.

Which means the frameworks Goldratt built for a world of physical constraints and scarce resources apply more urgently now, not less.

When the machine will work on anything you tell it to, figuring out what to tell it becomes the whole job.

Here's how I'm applying it.

Definitions

Goldratt measures the health of any system through three metrics.

Throughput is the rate at which the system generates money through sales.

The goal of the entire revenue system is closed revenue. The GTM engineer doesn't close deals (that's the AE's job), but they build and optimize the machine that makes closing possible.

Their specific contribution to throughput is qualified pipeline generated: meetings booked with buyers who have the problem you solve and the authority to act on it.

A reply is not throughput.

A demo scheduled is not throughput.

A sequence launched is not throughput.

Worth noting: throughput looks different depending on where in the revenue system you sit.

For a CS team it might be expansion revenue per account.

For a growth team it might be trial-to-paid conversion rate.

The metric changes.

The principle doesn't.

Inventory is all the money the system has invested in things it intends to convert into throughput.

In GTM, that's the accumulated cost sitting in your tech stack, your data subscriptions, and your enrichment workflows that hasn't produced a qualified meeting yet.

Every tool you're paying for, every list you've built, every sequence running with no replies, that's inventory.

It feels like infrastructure.

Goldratt would call it money waiting to be justified.

Operating Expense is all the money the system spends to turn inventory into throughput. Salaries, overhead, tools.

A GTM engineer attacks this directly: every manual task replaced by a system is operating expense removed.

But the real compounding effect happens when lower operating expense and higher throughput move at the same time.

More pipeline generated, less human cost to generate it.

That ratio is the GTM engineer's scoreboard.
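To make that scoreboard concrete, here's a toy calculation. Every number and the pipeline_per_meeting assumption are invented for illustration; nothing here comes from a real stack.

```python
# Toy scoreboard: qualified pipeline generated per dollar of
# operating expense. All figures are made up for illustration.

def gtm_scoreboard(qualified_meetings: int, pipeline_per_meeting: float,
                   monthly_opex: float) -> float:
    """Pipeline dollars generated per operating-expense dollar."""
    return (qualified_meetings * pipeline_per_meeting) / monthly_opex

# Before: manual workflows, higher human cost.
before = gtm_scoreboard(qualified_meetings=40, pipeline_per_meeting=25_000,
                        monthly_opex=80_000)  # 12.5

# After: more pipeline generated, less human cost to generate it.
after = gtm_scoreboard(qualified_meetings=55, pipeline_per_meeting=25_000,
                       monthly_opex=60_000)

print(before, after)
```

The single ratio is the point: either input of the fraction moving in the right direction improves it, but the compounding happens when both move at once.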

Two additional concepts worth defining before the frameworks make full sense.

Input is what enters the system before any human or AI has touched it.

In GTM, a raw list of 10,000 ICP accounts pulled from Clay before a single enrichment has run. Ore before it hits the factory floor.

Output is what exits the system at the other end.

In GTM, the boundary where the GTM engineer's machine ends and the AE's work begins.

A qualified meeting with a buyer who has the problem you solve and the authority to act. Everything after that is a different system.

What Is the Goal of a GTM Engineer?

Most plant managers Goldratt encountered defaulted to things like efficiency, quality, or customer satisfaction when asked what their company's goal was.

He called these necessary conditions.

Important, but not the goal.

The goal is to make money.

Everything else is noise.

The goal of a GTM engineer is to build and optimize the revenue system that converts market opportunity into closed revenue, while reducing the human time and cost required to do it.

They don't own the close.

They own the machine that makes closing possible.

Not automations for their own sake.

Not integrations because they're interesting to build.

Not making the sales team's life easier as an end in itself.

Every project, every workflow, every tool either accelerates the conversion of opportunity into pipeline into revenue, or it creates inventory pileup.

If Goldratt were defining the role, he would say something like: find the constraint in the revenue system, eliminate it, and repeat.

Use AI and automation to compress the operating expense required at each step so that throughput compounds without headcount scaling alongside it.

The Theory of Constraints

This is Goldratt's central thesis.

Every system has exactly one constraint at any given time that limits its ability to produce throughput. Building capacity anywhere other than that constraint is waste, because it doesn't change the output of the system.

Five steps:

  1. Identify the constraint.

  2. Exploit it before spending anything new.

  3. Subordinate everything else to serving it rather than its own local efficiency metrics.

  4. Elevate it once you've exhausted exploitation.

  5. Repeat when it moves.
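The five steps read like a control loop, and can be sketched as one. Everything below is a placeholder skeleton, not an implementation: in practice the steps are dashboards, queries, and workflow changes, not functions.

```python
# The five focusing steps as a control loop. Every function passed in
# is a stand-in for real diagnostics and interventions; this sketches
# the shape of the process, nothing more.

def toc_loop(system, find_constraint, exploit, subordinate, elevate,
             cycles=3):
    history = []
    for _ in range(cycles):
        constraint = find_constraint(system)   # 1. identify
        exploit(system, constraint)            # 2. exploit before spending
        subordinate(system, constraint)        # 3. pace everything else to it
        elevate(system, constraint)            # 4. add capacity last
        history.append(constraint)             # 5. repeat: it will have moved
    return history
```

The structural point the loop makes: elevation comes last, and the loop never terminates, because removing one constraint always surfaces the next.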

The step that trips people up is subordination.

It means non-constraints should sometimes run below their maximum capacity on purpose. Overproducing ahead of a constraint just creates pileup.

In GTM terms that's pipeline bloat and AEs drowning in accounts they can't meaningfully work.

Here's what the five steps look like in practice.

You run a cohort analysis and find your AI enrichment waterfall is producing 2,000 qualified accounts per week but your AE team can only meaningfully work 200.

The constraint is AE capacity.

You exploit by re-scoring the queue so every hour an AE spends goes to the highest-intent accounts.

You cut the bottom 80% from active sequences. No new resources yet.
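That exploit step can be sketched in a few lines. The field names, weights, and intent_score logic below are invented for illustration; a real version would pull signals from your enrichment waterfall, not hardcoded flags.

```python
# Toy sketch of the exploit step: rank enriched accounts by a
# hypothetical intent score and hand AEs only what they can absorb.
# Field names and weights are illustrative, not a real Clay schema.

AE_WEEKLY_CAPACITY = 200  # what the constraint (the AE team) can work

def intent_score(account: dict) -> float:
    """Naive composite score; real signals would come from enrichment."""
    return (
        3.0 * account.get("hiring_for_role", 0)
        + 2.0 * account.get("visited_pricing_page", 0)
        + 1.0 * account.get("tech_stack_match", 0)
    )

def exploit(queue: list[dict]) -> tuple[list[dict], list[dict]]:
    ranked = sorted(queue, key=intent_score, reverse=True)
    active = ranked[:AE_WEEKLY_CAPACITY]   # goes to AEs
    parked = ranked[AE_WEEKLY_CAPACITY:]   # pulled from active sequences
    return active, parked

# 2,000 qualified accounts, as in the example above.
accounts = [{"id": i,
             "hiring_for_role": i % 2,
             "visited_pricing_page": i % 3 == 0,
             "tech_stack_match": 1} for i in range(2000)]
active, parked = exploit(accounts)
print(len(active), len(parked))  # 200 worked, 1800 parked
```

No new spend anywhere in that code: it only re-orders what already exists and throttles what reaches the constraint.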

You subordinate by telling your Clay workflows to slow down.

You pull email volume off the reporting dashboard.

The machine runs at the pace the constraint can absorb.

You elevate by building an AI layer that handles early touches autonomously so AEs only engage when there's a real signal of intent. Capacity multiplied without headcount added.

Then the constraint moves.

Show rates drop because messaging isn't qualifying hard enough upstream.

You start over.

Problem Selection: The Part Most People Skip

Alex Rogo, the book's main character, spends the first half solving the wrong problems.

He optimizes robot efficiency, reduces cost per part, hits local productivity metrics, all while the plant bleeds cash and misses shipments.

His mentor Jonah never tells him what to do.

He asks questions that force Alex to redefine what problem he's actually trying to solve.

The methodology underneath it: most people work on symptoms of a constraint rather than the constraint itself.

Symptoms are visible, measurable, and feel urgent. The actual constraint is usually hidden behind an assumption everyone has accepted without questioning.

Jonah's three questions:

What is undesirable about the current situation?

What core conflict is causing it?

What assumption, if broken, makes the conflict disappear?

A team sees low reply rates.

The obvious interventions are better copy, more personalization, better timing, more sends.

Push deeper and you find they're targeting on firmographic fit because they assume volume is required to hit meeting targets.

Question that assumption and the whole problem reframes.

Now you're not writing better emails.

You're rebuilding your ICP definition and your signal stack.

The leverage is an order of magnitude higher.

Most GTM problems fall into one of three categories, and it matters enormously which one you're dealing with before you build anything.

A throughput problem means the system is structurally sound but output is too slow.

Something is creating friction at the constraint.

This is where AI automation genuinely helps.

An assumption problem means the system is working exactly as designed but the design is based on a false premise.

More automation makes this worse, not better. This one requires thinking before building.

A measurement problem means you don't know where your constraint is because you're tracking local efficiency instead of system throughput.

This is where most teams live without realizing it.

Before you open Claude Code or spin up a new workflow, you need to know which one you're dealing with.

The diagnosis is the work.

What to Work on When You Can Work on Anything

Goldratt wrote The Goal for a world where capacity was scarce.

A machinist ran one machine.

Adding a second shift cost real money.

The constraint was physical and visible on the factory floor.

AI changes that completely.

You can run multiple Claude Code instances in parallel, each building something different.

A scraping workflow, an enrichment pipeline, a personalization layer, a sequence framework, all before lunch.

Execution capacity is infinite and nearly free.

This is only an advantage if you've already done the thinking.

Because when AI handles execution, the constraint shifts entirely to you.

Your judgment about what is worth building.

Your ability to define the problem clearly enough that the output is useful.

Your capacity to review, validate, and deploy what gets built.

The five steps of TOC still apply.

They just map to a different system.

Before opening a new session, run through three questions.

What is the current constraint in my GTM system?

Is what I'm about to build directly exploiting or elevating it?

If not, am I consciously subordinating this work to something that does?

Goldratt would say the most expensive thing in any factory is a manager who keeps machines busy on the wrong thing.

With AI you are the manager of a factory that never sleeps, never pushes back, and never tells you when you've given it the wrong job.

The GTM engineers who compound fastest won't be the ones running the most agents in parallel.

They'll be the ones who applied Jonah's questions before they wrote a single line of code or opened a single Clay table.

The One Constraint Worth Finding First

The most commonly misidentified constraint in an AI-augmented GTM motion is the feedback loop between outbound activity and learning.

You send 1,000 emails. You get 30 replies. You get 8 meetings. You close 1 deal.

Most teams use that data to optimize send volume or copy.

Goldratt would ask different questions:

What do the 22 replies that didn't convert tell you about the 970 who never replied at all?

What did the 7 meetings that didn't close share in common?

The constraint isn't the top of the funnel.

It's your ability to extract signal from the full funnel and feed it back into ICP definition and targeting.
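A first pass at that extraction is just arithmetic on the funnel. A toy sketch, with the stage names and counts taken straight from the example numbers above:

```python
# Toy funnel from the example: 1,000 emails -> 30 replies
# -> 8 meetings -> 1 closed deal. Stage-to-stage conversion shows
# where to point the analysis; the lost population at each stage
# is the learning inventory the feedback loop should mine.

stages = [("emails", 1000), ("replies", 30), ("meetings", 8), ("closed", 1)]

for (name_a, a), (name_b, b) in zip(stages, stages[1:]):
    print(f"{name_a} -> {name_b}: {b / a:.1%}")

lost = {f"non_{name_b}": a - b
        for (_, a), (name_b, b) in zip(stages, stages[1:])}
print(lost)  # {'non_replies': 970, 'non_meetings': 22, 'non_closed': 7}
```

The conversion rates tell you where the funnel leaks; the `lost` counts tell you where the unexamined signal sits. Goldratt's question is about the second dictionary, not the first.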

AI makes this solvable in a way it never was before.

You can use Claude Code to analyze every reply, every call transcript, and every lost deal note, and surface patterns a human analyst would take weeks to find.

But you only do this work if you've correctly identified the feedback loop as your constraint rather than jumping straight to building more.

Find the constraint.

Exploit it fully.

Subordinate everything else.

Elevate when you've exhausted what you have.

Repeat every week because with AI, the constraint moves faster than it ever did on a factory floor.

If this landed, I write about GTM engineering, AI in revenue systems, and building things that actually move the number. Follow along or reply and tell me where your constraint is right now.
