Keep your workflow. Lose the license bloat.
Prototype the exact flow your team runs, integrate cleanly, and move faster than the vendor’s roadmap.
Agentability factors (each scored 1–5)
- Usage concentration: 3
- Change frequency: 3
- Integration pain: 3
- Data gravity: 3
- Compliance fit: 3
- UI complexity needed: 3
- Risk of downtime: 3
- Internal talent: 3
- Reporting quality: 3
Results
- Total: 27 / 45
- Percent: 60%
- Pass threshold: 26 of 45
Prototype candidate
Green‑light Prototype in the Room to ship a thin, working slice in 1–5 days (or choose the 1‑week MVP‑Starter).
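If you want to reproduce the verdict in your own tooling, here is a minimal sketch of the arithmetic, assuming nine 1–5 factors summed against a 45-point maximum; the property names and the `decide` helper are illustrative, not the tool's actual code.

```ts
// Nine agentability factors, each scored 1-5 (names are illustrative).
type Scores = Record<string, 1 | 2 | 3 | 4 | 5>;

const scores: Scores = {
  usageConcentration: 3,
  changeFrequency: 3,
  integrationPain: 3,
  dataGravity: 3,
  complianceFit: 3,
  uiComplexityNeeded: 3,
  riskOfDowntime: 3,
  internalTalent: 3,
  reportingQuality: 3,
};

// Total, percent, and pass/fail against an adjustable bar (default 26 of 45).
function decide(s: Scores, threshold = 26) {
  const values = Object.values(s);
  const total = values.reduce((sum, v) => sum + v, 0);
  const max = values.length * 5;
  const percent = Math.round((total / max) * 100);
  return { total, max, percent, pass: total >= threshold };
}

console.log(decide(scores)); // { total: 27, max: 45, percent: 60, pass: true }
```

Calling `decide(scores, 30)` models a stricter bar, matching the "adjust to tune your bar" note under Assumptions & Notes.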
Top drivers to target first
- Usage concentration — Cut the 80/20 slice and ship a thin end‑to‑end prototype.
- Change frequency — High change velocity favors owning the slice; build to keep iteration in your control.
- Integration pain — Map data flows and wrap brittle vendor APIs; prioritize the worst friction first.
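The driver-selection logic isn't shown; one plausible heuristic, sketched below under that assumption, ranks factors by score and keeps scorecard order on ties, which reproduces the three drivers above when every factor sits at 3.

```ts
// Assumed heuristic (not the tool's actual logic): rank factors by score
// descending, keep scorecard order on ties (Array.prototype.sort is stable),
// and surface the top three as the drivers to target first.
function topDrivers(scores: Record<string, number>, n = 3): string[] {
  return Object.entries(scores)
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([name]) => name);
}

console.log(topDrivers({
  usageConcentration: 3, changeFrequency: 3, integrationPain: 3,
  dataGravity: 3, complianceFit: 3, uiComplexityNeeded: 3,
  riskOfDowntime: 3, internalTalent: 3, reportingQuality: 3,
}));
// -> ["usageConcentration", "changeFrequency", "integrationPain"]
```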
Recommended next step
Prototype in the Room: Ship a thin, working slice in 1–5 days (or a 1‑week MVP‑Starter).
Also consider
- Replace Your SaaS: Stand up an owned agentic workflow where vendor pain is highest. Why: integration pain plus internal data gravity (or SaaS replacement mode) indicate an owned workflow is viable.
- Prototype to Production: Harden the slice with integrations, guardrails, evals, observability, and rollback. Why: risk and operability signals call for these patterns early.
Legend & examples
What each factor measures and how to score it. Use the 1/3/5 anchors to calibrate your inputs.
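If you want the 1/3/5 anchors in machine-readable form, one illustrative shape is a small typed rubric; the `Factor` interface below is an assumption, not the tool's schema, with the example entry filled in from the Usage concentration anchors that follow.

```ts
// Illustrative rubric shape: each factor records what it measures plus the
// 1/3/5 calibration anchors used to score it (an assumed schema, not the tool's).
interface Factor {
  name: string;
  measures: string;
  anchors: { 1: string; 3: string; 5: string };
}

const usageConcentration: Factor = {
  name: "Usage concentration",
  measures: "How concentrated the workflow is among a small set of roles/steps.",
  anchors: {
    1: "Spread across many roles/flows; low repetition per user.",
    3: "Team-level repetition; some variance between users.",
    5: "Highly concentrated in one role/flow; same steps daily.",
  },
};
```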
Usage concentration
Measures: How concentrated the workflow is among a small set of roles/steps.
- Score 1: Spread across many roles/flows; low repetition per user.
- Score 3: Team-level repetition; some variance between users.
- Score 5: Highly concentrated in one role/flow; same steps daily.
Examples
- Tier-1 support macro used by 8 reps 50+ times/day.
- AP invoice triage done by 2 specialists.
Change frequency
Measures: How often the workflow or business rules change.
- Score 1: Rarely changes (quarterly or less often).
- Score 3: Monthly tweaks; occasional rule updates.
- Score 5: Weekly/daily changes; product/ops move fast.
Examples
- New SKUs weekly; promos rotate daily.
- Pricing rules adjusted every sprint.
Integration pain
Measures: Friction with vendor APIs, auth, rate limits, or brittle connectors.
- Score 1: Stable vendor; low maintenance.
- Score 3: Some manual workarounds; occasional outages.
- Score 5: Frequent breakages, rate limits, or missing endpoints.
Examples
- CSV uploads for core data.
- Vendor webhook limits throttle ops.
Data gravity
Measures: How much critical data lives inside your boundary (DWH, lakehouse, systems).
- Score 1: Mostly external/vendor data.
- Score 3: Mixed; some key tables internal.
- Score 5: Core entities/tables internal and well-modeled.
Examples
- Customer 360 in warehouse; service logs centralized.
- Internal feature store exists.
Compliance fit
Measures: Ability to meet policy/regulatory needs (PII, audit, approvals).
- Score 1: Heavy restrictions; unclear path.
- Score 3: Manageable with controls/approvals.
- Score 5: Clear policies; audit/approval patterns established.
Examples
- SOX controls defined; approval matrix exists.
- PII redaction policies enforced.
UI complexity needed
Measures: How thin the UI can be (agent-first with minimal shell vs. heavy bespoke UI).
- Score 1: Rich custom UI required; many edge cases.
- Score 3: Moderate forms/tables.
- Score 5: Checklist-style; minimal inputs/outputs.
Examples
- Two-field intake + checklist.
- Single-page “approve/deny with note”.
Risk of downtime
Measures: Business impact from vendor downtime and ability to fail open/safe.
- Score 1: Low impact; manual fallback is fine.
- Score 3: Moderate impact; some SLAs.
- Score 5: High impact; strong need for control/rollback.
Examples
- Missed SLAs when vendor throttles.
- Revenue loss if queue stalls.
Internal talent
Measures: Capacity/skills to build and operate an agentic slice.
- Score 1: No bandwidth/skills today.
- Score 3: Some experience; needs guidance.
- Score 5: Strong builder/operator bench.
Examples
- 1–2 builders w/ tool-calling experience.
- On-call rotation + observability in place.
Reporting quality
Measures: Depth of analytics and observability, covering event instrumentation, consistent metrics, dashboards, and agent evals.
- Score 1: Manual spreadsheets; ad-hoc queries; little logging.
- Score 3: Some dashboards/logs; inconsistent metric definitions.
- Score 5: Well-instrumented events; consistent metrics; dashboards tracked; eval harness exists.
Examples
- Events with user/session IDs; Looker/Mode dashboards reviewed weekly.
- Agent eval suites (golden sets, A/B) and alerting wired.
Assumptions & Notes
- Higher scores mean stronger fit for a build/prototype with agents.
- Default pass bar is 26/45; adjust it to set a stricter or looser bar.
- Use alongside the Buy vs Build ROI Calculator to combine qualitative and financial signals.
- Run this live in Prototype in the Room to select a thin vertical slice for the spike.