Joseph Gitonga


How to scope an AI engagement when you have never done one

2026-03-28 / 7 min / ai / consulting / scoping / founders

A practical framing for founders about to commission their first AI project. The right starting question, the four things to define up front, and the red flags that almost always go badly.


"What's your budget for AI?" is the question most founders open with. It is also the wrong question.

The right question is what you are trying to do, and what "done" looks like. The budget falls out of that. Reverse the order and you end up with a number on paper, a vague scope, and a six-month engagement that runs over.

I have started many AI engagements with founders who had never done one before. Here is the framing that works.

Start with the user outcome

Before anything else, name the user outcome you are trying to deliver. Not "we want to use AI". Not "we want an AI feature somewhere in the product". The outcome.

Examples that work:

  • "Customers should be able to ask plain-English questions about their account and get accurate answers in under 30 seconds."
  • "Our underwriting team should be able to clear simple applications in two minutes instead of twenty."
  • "The support inbox should auto-draft replies that humans approve or edit, cutting reply time by half."

Each one is concrete enough that you could test whether it shipped. That is the foundation of a real scope.

If you cannot describe the outcome at this level, the engagement is not ready to start. The first job is to figure that out, and it is a conversation, not a build.

Define four things before any work begins

Once the outcome is clear, four things have to be defined in writing before engineering starts. Each one prevents a class of expensive surprise.

  1. What is in scope. Write down the features and user flows you are building. Then write down what you are explicitly not building. Almost every scope conflict in the engagement will trace back to whether something was on this list.

  2. How you will measure success. Not "it works". A number, a threshold, a target. "85% accuracy on a 100-question evaluation set" is a measurable target. "It feels right" is not.

  3. How decisions will be made. Who can approve a tradeoff. Who can change scope. Who can sign off on launch. Who calls it done. If three people have to agree on every change, the project will be slow. If nobody is empowered to decide, it will stall.

  4. What you will ship behind. A feature flag. A small user cohort. An internal-only release. A staged rollout to specific accounts. AI features need a way to roll out and roll back. Decide this up front.

A scope that has all four written down is defensible. A scope that has none of them is a wish list.
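The measurable target in point 2 can be checked mechanically rather than by feel. Here is a minimal sketch of that kind of check, assuming a simple question/expected-answer evaluation set; `ask_model`, the sample questions, and the names are illustrative placeholders, not a real client setup.

```python
# Minimal sketch of an agreed success metric, checked in code.
# `ask_model` stands in for a real model call; the eval set and the
# 85% target are the numbers written down in the scope.

def ask_model(question: str) -> str:
    # Placeholder: swap in your provider's API call here.
    return "42"

def accuracy(eval_set: list[tuple[str, str]]) -> float:
    """Fraction of questions where the model's answer matches the expected one."""
    correct = sum(
        1 for question, expected in eval_set
        if ask_model(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(eval_set)

# The scope says: 85% on a fixed evaluation set. The check is one comparison.
eval_set = [("What is 6 x 7?", "42"), ("What is 2 + 2?", "4")]
TARGET = 0.85
passed = accuracy(eval_set) >= TARGET
```

The point is not the code itself but that "done" becomes a yes/no answer both sides can read off, instead of a debate.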

What good scoping looks like in practice

A first conversation usually takes 30 to 60 minutes. By the end, both sides should be able to answer:

  • What are we actually trying to do?
  • Who is going to use it?
  • What does success look like?
  • What is the smallest thing we can ship to learn whether this works?
  • What can we cut if we have to?

That last one matters most. Almost every engagement has a moment where something has to be cut to hold the deadline. Knowing in advance what gets cut first removes a future fight.

Red flags in a scope

Some patterns that almost always go badly:

  • "We'll figure that out as we go." For details, fine. For the user outcome or success metric, no.
  • A scope that lists technologies instead of outcomes. "We need a RAG system with vector search" is not a scope. It is a guess at the implementation.
  • An open-ended timeline. The shape of the deadline forces tradeoffs. Without one, scope creeps.
  • A budget set before the outcome is defined. The number ends up driving the scope. That is backwards.

Fixed price or time and materials

For a first engagement, fixed price on a tightly scoped piece of work is usually better. The reason is alignment. With fixed price, both sides have an incentive to be specific about what is in and what is not, because both sides bear the cost of ambiguity.

Time and materials makes sense once you have worked together and trust the process. It also makes sense for genuinely open-ended discovery work. For a first scope of work, ambiguity is the enemy. Fixed price forces clarity.

If you are about to start an AI engagement and your scope does not have a written user outcome, a success metric, a decision-maker, and a release plan, slow down. A week of scoping saves three months of drift.

If you want to talk through a scope, the contact page has the details.