Tools don't matter. Outcomes do.
The obsession with which AI tool to use is a distraction from the only question that matters.
I had three conversations this week that went roughly the same way.
“Should we use Claude or ChatGPT?” “Is Make better than n8n?” “Should we switch to Cursor or stick with Copilot?”
All reasonable questions. All missing the point.
The wrong question
Tool selection is premature when you haven’t answered the more fundamental question: what outcome are you trying to achieve?
Not “what do we want the tool to do” — that’s a feature question. The outcome question is: what changes in our business when this works? How do we measure it? What does success look like in numbers?
When teams start with the outcome, tool selection becomes obvious. When they start with the tool, they end up evaluating features they’ll never use against benchmarks that don’t matter for their specific problem.
What I’ve seen
The best AI implementations I’ve worked on all started the same way. Not with a tool evaluation. Not with a proof of concept. With a clear statement of what they wanted to change.
“We want to reduce first-response time from 4 hours to under 30 minutes.”
“We want to cut content repurposing time from 6 hours to under 1 hour.”
“We want our sales team spending less than 10 minutes per follow-up email.”
Once you have that, the tool conversation takes 20 minutes instead of 3 weeks. You evaluate against your specific outcome, not against abstract capabilities.
The tool trap
Here’s what the tool trap looks like in practice:
- Team hears about a new AI tool
- Team spends 2 weeks evaluating it
- Team builds a pilot project
- Pilot works for the demo
- Nobody can explain what business outcome improved
- Tool gets added to the stack but barely used
- Six months later, someone suggests a different tool
- Repeat
Sound familiar?
The fix
Before evaluating any AI tool, write down three things:
- The outcome: What specific metric changes? By how much?
- The workflow: What process does this fit into? What triggers it? What does it produce?
- The constraint: What’s the maximum acceptable failure rate? What happens when it breaks?
If you can’t answer these questions, you’re not ready to evaluate tools. You’re ready to define your problem better.
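If you think in code, the checklist above can be sketched as a simple readiness gate. Everything here is illustrative, not a real framework: the field names and the `ready_to_evaluate` helper are made up for this example.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ToolEvaluationBrief:
    """The three answers to write down before evaluating any AI tool.
    All names are illustrative, not a real framework."""
    # The outcome: what metric changes, by how much?
    outcome_metric: Optional[str] = None   # e.g. "first-response time"
    current_value: Optional[str] = None    # e.g. "4 hours"
    target_value: Optional[str] = None     # e.g. "under 30 minutes"
    # The workflow: what triggers it, what does it produce?
    workflow_trigger: Optional[str] = None
    workflow_output: Optional[str] = None
    # The constraint: acceptable failure rate, and the fallback.
    max_failure_rate: Optional[str] = None
    failure_fallback: Optional[str] = None

def ready_to_evaluate(brief: ToolEvaluationBrief) -> list[str]:
    """Return the unanswered questions; an empty list means go shop for tools."""
    return [f.name for f in fields(brief) if getattr(brief, f.name) is None]

# Outcome is defined, but workflow and constraint are still blank:
brief = ToolEvaluationBrief(
    outcome_metric="first-response time",
    current_value="4 hours",
    target_value="under 30 minutes",
)
missing = ready_to_evaluate(brief)
```

The point of the gate is that a partially filled brief, like the one above, still blocks the tool conversation: the outcome alone isn’t enough until the workflow and constraint questions are answered too.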
The tool is the last decision, not the first.