Master Clay, Outcome-Driven Innovation (ISM-C)

Intro

You're here for leverage, not guesswork. This process turns ambiguity into a defensible ODI roadmap that survives scrutiny from product, design, engineering, and leadership. We define the job, map the process, capture outcomes, validate evidence, and prioritize the highest-ROI opportunities.

We split the work into two pipelines: the Consumer ODI pipeline (core job outcomes) and the Consumption/PLG pipeline (adoption outcomes). That split keeps context fresh and confidence high. It also makes the strategy usable: you’ll know what to build, why it matters, and what wins first.

The Process

Outcomes beat features. The process runs as a loop: define, map, capture, validate, score, prioritize. The output of one step becomes the input for the next, preserving context and preventing drift. We don’t move forward if the evidence is weak.

Expect fast iteration, clear logic, and strict quality bars. You’ll see tables, evidence traces, and opportunity landscapes that transform into a roadmap you can execute in days, not months.

Process Overview

  • 00: Intake & Initialize
  • 01: Job Executor Persona (MVS + MSP Sides)
  • 02: JTBD Statement & Dimensions (Per Job Executor)
  • 03: JTBD Job Map (JMS)
  • 04: Consumer DOS (No Scores)
  • 05: Competitor Analysis (Consumer DOS)
  • 06: Consumer Opportunity Landscape (Scored)
  • 07: Roadmap Clustering (Consumer)
  • 08: Roadmap Prioritization (Consumer)
  • 09: Consumption Jobs (PLG)
  • 10: Consumption JMS
  • 11: Consumption DOS (No Scores)
  • 12: PLG Benchmarks
  • 13: PLG Opportunity Landscape (Scored)
  • 14: Roadmap Clustering (Consumption)
  • 15: Roadmap Prioritization (Consumption)
  • 16: Executive Summary
  • 17: Conclusion

Phase 1: Consumer ODI Pipeline

This phase builds the core strategy: define the job, map it, capture outcomes, validate with evidence, and turn it into a prioritized roadmap.

Step 00: Intake & Initialize

Intro

We align scope, context, and inputs before any analysis begins. If something is missing, we fix it here.

Fundamentals

ODI is scope-sensitive. If the scope is wrong, every outcome is wrong. We explicitly set the scope level and validate upstream context to protect statistical confidence.

Actions

I validate preloaded context, collect missing inputs, set the ODI scope, and generate the intake summary. This is the clean-room step: no strategy without clean inputs.

Pitfalls

  • Skipping scope alignment creates contradictions later in the scoring steps.
  • Treating preloaded context as correct without validation creates false precision.

Review Checklist

  • Scope is explicit and correct.
  • Preloaded inputs are validated or replaced.
  • Intake summary reflects actual context, not guesses.

Inputs

Required: a full persona profile for the Job Executor, which may be loaded from any of these variables:

  • mm_audiences (list): Legacy variable from gen1/gen2 - migrate to */*niche_analysis
  • */*ideal_user_hxc (markdown): Pre-loaded from other agents

Optional

  • */*niche_analysis (markdown): Pre-loaded from other agents
  • */*business_goals (markdown): Pre-loaded from other agents
  • */*solution_journey_jtbd (markdown): Pre-loaded from other agents
  • */*user_needs_dos (markdown): Pre-loaded from other agents
  • */*growth_journey_plg (markdown): Pre-loaded from other agents
  • */*product_led_growth_dos (markdown): Pre-loaded from other agents
  • mm_odir_json (JSON): Pre-loaded ODIR structure

Deliverables

  • c02_innovation_strategy/00_intake_summary (html): Intake Summary (HTML)

Step 01: Job Executor Persona (MVS + MSP Sides)

Intro

We define who is actually doing the job. For multi-sided platforms, we capture all sides and pick a Most Valuable Side (MVS) as the workflow anchor.

Fundamentals

ODI requires a single executor per iteration, but strategy must aggregate across all sides in an MSP. We use the MVS for pathing and retain all executors for output aggregation.
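
A minimal sketch of that split, with hypothetical names (the workflow’s actual data model may differ): the MVS anchors each iteration while every side still feeds aggregation.

    def run_odi_iteration(executor):
        ...  # placeholder: persona, JTBD, JMS, and DOS steps for one executor

    # Two sides of a hypothetical marketplace; the MVS anchors the workflow path.
    executors = [
        {"side": "host", "is_mvs": True},
        {"side": "guest", "is_mvs": False},
    ]

    mvs = next(e for e in executors if e["is_mvs"])  # single executor anchors pathing
    for executor in executors:
        run_odi_iteration(executor)  # every side still feeds the aggregated outputs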

Actions

I build or refine personas, create empathy maps, and confirm the MVS. Each executor gets a clear profile: context, motivations, pains, and gains.

Pitfalls

  • Selecting a non-representative executor skews the entire opportunity landscape.
  • Mixing roles into one persona blurs outcomes and reduces accuracy.

Review Checklist

  • All MSP sides are represented.
  • MVS is explicitly selected.
  • Empathy maps are complete enough to inform outcomes.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/01_job_executor_profile (html): Job Executor Profiles (HTML) - aggregate all MSP sides

Step 02: JTBD Statement & Dimensions (Per Job Executor)

Intro

We define the job in clear, solution-agnostic language for each executor.

Fundamentals

A strong JTBD is stable over time and focused on outcomes, not features. It includes functional, personal, and social dimensions to avoid shallow definitions.

Actions

I craft a Verb + Object + Context job statement and define the three dimensions per executor. If a statement is vague, we sharpen it.
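
A hypothetical illustration: for a finance tool, the statement might be "reconcile transactions at month-end close" (verb + object + context), with a functional dimension ("minimize errors in matched records"), a personal dimension ("feel confident the books will pass audit"), and a social dimension ("be seen as reliable by the finance team").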

Pitfalls

  • Mentioning solutions or tech locks you into a design too early.
  • Skipping dimensions produces shallow outcomes later.

Review Checklist

  • Job statement is solution-agnostic and atemporal.
  • Functional, personal, and social dimensions are explicit.
  • Each executor has its own JTBD.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/02_jtbd (html): JTBD Statement & Dimensions (HTML) - aggregate all job executors

Step 03: JTBD Job Map (JMS)

Intro

We map how the job gets done, step-by-step, for each executor.

Fundamentals

The universal job map provides structure; the domain map provides relevance. Both are required to avoid generic or overly specific maps.

Actions

I create the 8-step JMS table with Universal and Domain columns, each step carrying a clear description and rationale.
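
For orientation, the eight universal steps are Define, Locate, Prepare, Confirm, Execute, Monitor, Modify, and Conclude. A hypothetical domain mapping for a data-migration job might read:

    1. Define   → Scope the migration
    2. Locate   → Inventory source systems
    3. Prepare  → Clean and stage records
    4. Confirm  → Validate field mappings
    5. Execute  → Run the migration
    6. Monitor  → Watch error rates
    7. Modify   → Reconcile exceptions
    8. Conclude → Decommission the source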

Pitfalls

  • Collapsing steps loses outcome resolution.
  • Domain-only maps remove comparability across jobs.

Review Checklist

  • 8 universal steps are present.
  • Domain names are clear and contextual.
  • JMS numbering is consistent.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/03_solution_journey_jtbd (html): JTBD Core Statement + JMS Table (HTML) - aggregate all job executors

Step 04: Consumer DOS (No Scores)

Intro

We translate each JMS step into measurable outcomes, without scoring yet.

Fundamentals

Desired Outcome Statements (DOS) are outcome-first and metric-driven. The format is Direction + Metric + Object + Context. Scoring comes later to keep evidence clean.
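
A hypothetical example of the format: "Minimize (direction) the time it takes (metric) to verify an imported record (object) when migrating from a legacy system (context)." Note that it names no feature or solution.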

Actions

I generate at least the minimum DOS count per JMS step for each executor, keeping the statements solution-agnostic and measurable. We do not score here.

Pitfalls

  • Outcomes that sound like features destroy comparability.
  • Mixing scoring too early biases evidence.

Review Checklist

  • Each JMS step meets the minimum DOS count.
  • Outcomes are measurable and solution-agnostic.
  • No scoring fields are filled.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/04_user_needs_dos_table (html): Consumer DOS Table (HTML) - one table per job executor
  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/04_user_needs_dos (html): Consumer DOS (HTML) - narrative + tables

Step 05: Competitor Analysis (Consumer DOS)

Intro

We attach evidence to the outcomes.

Fundamentals

Satisfaction scoring must be grounded in reality. Competitors and alternatives provide the evidence base and reveal where outcomes are already satisfied.

Actions

I identify top competitors/alternatives per DOS and append evidence with timestamps. History is preserved for auditability.
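
A minimal sketch of the append-only pattern, assuming hypothetical field names rather than the workflow’s actual evidence schema:

    from datetime import datetime, timezone

    evidence_log = []  # append-only: entries are never edited or removed

    def append_evidence(dos_id, competitor, observation):
        # Timestamp each entry so later scoring decisions stay auditable.
        evidence_log.append({
            "dos_id": dos_id,
            "competitor": competitor,
            "observation": observation,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })

    append_evidence("DOS-4.2", "Competitor X", "ships built-in validation; satisfaction likely high")

Appending rather than overwriting means a score challenged months later can be traced back to the evidence that produced it.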

Pitfalls

  • Overweighting a single competitor distorts satisfaction.
  • Removing history breaks evidence continuity.

Review Checklist

  • Every DOS has competitor evidence.
  • Evidence is timestamped and append-only.
  • Alternatives beyond direct competitors are included.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/05_competitive_analysis (html): Competitive Analysis (HTML) - aggregate all job executors

Step 06: Consumer Opportunity Landscape (Scored)

Intro

We score outcomes and expose the biggest strategic upside.

Fundamentals

Opportunity is scored with the standard ODI formula: Opportunity = Importance + max(Importance − Satisfaction, 0). Outcomes then fall into zones that guide roadmap decisions: underserved, adequately served, or overserved.
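
On a 1-to-10 scale, an outcome with Importance 9 and Satisfaction 3 scores 9 + max(9 − 3, 0) = 15 (strongly underserved), while Importance 6 with Satisfaction 8 scores 6 + 0 = 6. A minimal sketch of the math and the zone logic; the zone thresholds below are common ODI conventions, assumed here rather than taken from this workflow:

    def opportunity(importance: float, satisfaction: float) -> float:
        # Opportunity = Importance + max(Importance - Satisfaction, 0)
        return importance + max(importance - satisfaction, 0)

    def zone(importance: float, satisfaction: float) -> str:
        score = opportunity(importance, satisfaction)
        if score >= 10:
            return "underserved"       # important but poorly satisfied
        if satisfaction > importance:
            return "overserved"        # satisfaction exceeds importance
        return "adequately served"

    print(opportunity(9, 3), zone(9, 3))  # 15 underserved
    print(opportunity(6, 8), zone(6, 8))  # 6 overserved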

Actions

I score each DOS, compute opportunity, and generate the opportunity landscape visualization.

Pitfalls

  • Inconsistent scoring scales distort opportunity zones.
  • Missing evidence inflates confidence.

Review Checklist

  • Scores are consistent and evidence-backed.
  • Opportunity formula is applied correctly.
  • Landscape matches table scores.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/06_user_needs_dos_scored_table (html): Consumer DOS Scored Table (HTML) - one table per job executor
  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/06_user_needs_dos_scored (html): Consumer DOS Scored (HTML)
  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/06_consumer_opp_landscape_viz (html): Consumer Opportunity Landscape (HTML)

Step 07: Roadmap Clustering (Consumer)

Intro

We group outcomes into strategic themes that a roadmap can carry.

Fundamentals

Clusters translate outcome data into product strategy structure. A good cluster is coherent, distinct, and evidence-backed.

Actions

I cluster outcomes into themes, explain the rationale, and ensure coverage across executor needs.

Pitfalls

  • Overlapping clusters create conflicting priorities.
  • Clusters based on features rather than outcomes reduce flexibility.

Review Checklist

  • Every DOS is assigned to a cluster.
  • Clusters are mutually distinct.
  • Rationale references outcomes, not features.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/07_outcomes_roadmap_odir (html): Consumer Roadmap Clusters (HTML)

Step 08: Roadmap Prioritization (Consumer)

Intro

We decide what gets built first and why.

Fundamentals

Prioritization balances impact, effort, and strategic leverage. The goal is to sequence outcomes that compound value.

Actions

I rank clusters and outcomes with a consistent scoring model and clear rationale.
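
One possible shape for such a model, sketched with hypothetical weights and fields (not the workflow’s actual scoring model):

    # Hypothetical weighted model; the weights and fields are illustrative assumptions.
    WEIGHTS = {"impact": 0.5, "leverage": 0.3, "effort": 0.2}

    def priority_score(cluster: dict) -> float:
        # Impact and leverage raise priority; effort lowers it.
        return (WEIGHTS["impact"] * cluster["impact"]
                + WEIGHTS["leverage"] * cluster["leverage"]
                - WEIGHTS["effort"] * cluster["effort"])

    clusters = [
        {"name": "Cluster A", "impact": 9, "leverage": 7, "effort": 4},
        {"name": "Cluster B", "impact": 6, "leverage": 8, "effort": 2},
    ]
    ranked = sorted(clusters, key=priority_score, reverse=True)
    print([c["name"] for c in ranked])  # ['Cluster A', 'Cluster B']

Whatever the exact weights, the point is consistency: every cluster is judged by the same model, and the rationale is written down.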

Pitfalls

  • Prioritizing based on internal preference over evidence.
  • Ignoring effort or feasibility signals.

Review Checklist

  • Priorities align with opportunity zones.
  • Rationale is explicit and consistent.
  • Roadmap sequencing is plausible.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/08_outcomes_roadmap_odir (html): Consumer Roadmap Prioritization (HTML)

Phase 2: Consumption & PLG Pipeline

This phase models adoption, activation, retention, and advocacy outcomes. It is the growth engine layer.

Step 09: Consumption Jobs (PLG)

Intro

We define the full adoption lifecycle for each executor.

Fundamentals

Consumption jobs explain how value is realized, not just delivered. They expose friction in adoption and retention.

Actions

I define the consumption chain per executor and align it with PLG strategy.
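
A hypothetical chain for a self-serve SaaS product might run: discover → sign up → activate (first value) → adopt into routine → expand usage → renew → advocate. The exact phases vary by product; the point is that each link is a job in its own right, with its own outcomes.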

Pitfalls

  • Skipping phases hides churn risk.
  • Treating adoption as a single step misses activation friction.

Review Checklist

  • Full chain is defined per executor.
  • Jobs are outcome-based, not feature-based.
  • Terminology is consistent across executors.

Deliverables

  • None (this step outputs JSON only; no visible outputs)

Step 10: Consumption JMS

Intro

We map each consumption job into concrete steps.

Fundamentals

Universal steps preserve structure; domain steps preserve context. Both are required for reliable DOS.

Actions

I build JMS tables for each PLG phase and executor.

Pitfalls

  • Missing JMS steps reduce DOS coverage.
  • Mixing phases obscures growth insights.

Review Checklist

  • JMS tables exist for each PLG phase.
  • Universal and domain columns are clear.
  • Steps are sequenced logically.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/10_growth_journey_plg (html): Consumption JMS (HTML) - aggregate all job executors

Step 11: Consumption DOS (No Scores)

Intro

We translate growth steps into measurable outcomes.

Fundamentals

PLG outcomes must capture time-to-value, activation quality, and retention behavior. Scoring comes later.
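
Hypothetical examples in the DOS format: "Minimize the time it takes to reach first value after signup" (time-to-value), "Increase the likelihood that a new user completes core setup in the first session" (activation quality), and "Minimize the number of days before a user returns without a prompt" (retention behavior).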

Actions

I generate DOS per JMS with PLG context fields and ensure minimum coverage.

Pitfalls

  • Outcomes that do not map to PLG phases lose diagnostic power.
  • Scoring too early biases opportunity.

Review Checklist

  • Minimum DOS count per JMS is met.
  • PLG context fields are filled.
  • Scoring fields are blank.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/11_product_led_growth_dos_table (html): Consumption DOS Table (HTML) - one table per job executor
  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/11_product_led_growth_dos (html): Consumption DOS (HTML)

Step 12: PLG Benchmarks

Intro

We ground PLG outcomes in real-world evidence.

Fundamentals

Benchmarks create the satisfaction anchor for scoring and prevent bias.

Actions

I append benchmarks per DOS with evidence and timestamps, preserving history.

Pitfalls

  • Benchmarks without evidence weaken scoring confidence.
  • Overwriting benchmarks erases auditability.

Review Checklist

  • Benchmarks exist for each DOS.
  • Evidence is timestamped and append-only.
  • Coverage spans multiple sources.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/12_plg_benchmarks (html): PLG Benchmarks (HTML)

Step 13: PLG Opportunity Landscape (Scored)

Intro

We score growth outcomes and highlight the highest-leverage bets.

Fundamentals

Opportunity math is the same as consumer ODI, but tuned to adoption outcomes. Zones show where growth is underserved.

Actions

I score DOS, compute opportunity, and generate the PLG opportunity landscape.

Pitfalls

  • Inconsistent scoring scales distort growth priorities.
  • Weak benchmarks inflate opportunity.

Review Checklist

  • Scores align with benchmark evidence.
  • Opportunity zones match the data.
  • Landscape reflects the table accurately.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/13_product_led_growth_dos_scored_table (html): Consumption DOS Scored Table (HTML) - one table per job executor
  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/13_product_led_growth_dos_scored (html): Consumption DOS Scored (HTML)
  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/13_plg_opp_landscape_viz (html): PLG Opportunity Landscape (HTML)

Step 14: Roadmap Clustering (Consumption)

Intro

We group PLG outcomes into strategic themes.

Fundamentals

Clusters must map to growth leverage, not just feature ideas. They should be coherent and defensible.

Actions

I cluster outcomes with rationale and ensure coverage across phases.

Pitfalls

  • Clusters that mix phases hide growth bottlenecks.
  • Clusters without rationale become arbitrary.

Review Checklist

  • Each DOS is assigned to a cluster.
  • Clusters reflect growth logic.
  • Rationale is tied to outcomes.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/14_plg_roadmap_odir (html): PLG Roadmap Clusters (HTML)

Step 15: Roadmap Prioritization (Consumption)

Intro

We rank PLG bets by growth impact and feasibility.

Fundamentals

Prioritization balances compounding growth, execution cost, and strategic fit.

Actions

I rank clusters and outcomes with rationale.

Pitfalls

  • Prioritizing based on opinion over evidence.
  • Ignoring feasibility signals.

Review Checklist

  • Priorities align with PLG opportunity zones.
  • Rationale is explicit and consistent.
  • Sequencing supports compounding growth.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/15_plg_roadmap_odir (html): PLG Roadmap Prioritization (HTML)

Phase 3: Executive Summary & Conclusion

This phase turns the work into a board-ready narrative and closes with next actions.

Step 16: Executive Summary

Intro

You get the strategic story, not just the data.

Fundamentals

A good executive summary converts analysis into decisions: what matters, why it matters, and what to do next.

Actions

I synthesize key findings, opportunities, risks, and priorities in a concise format.

Pitfalls

  • A summary that restates the tables without making decisions adds no value.
  • Omitting strategic trade-offs hides the real choices.

Review Checklist

  • Clear top priorities and rationale.
  • Risks and gaps are explicit.
  • Next actions are unambiguous.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/16_executive_summary (html): Executive Summary (HTML)

Step 17: Conclusion

Intro

We close fast, clean, and ready for execution.

Fundamentals

A strong close preserves momentum and routes you to the next master without ambiguity.

Actions

I recap outcomes, highlight wins, and point to the next logical master.

Pitfalls

  • A vague handoff slows execution.
  • Omitting next-master routing leaves the path ambiguous.

Review Checklist

  • Summary is concise and accurate.
  • Next-step path is clear.
  • Momentum is preserved.

Deliverables

  • c02_innovation_strategy/{{mm_session_scope}}/{{mm_job_executor}}/17_conclusion (html): Conclusion (HTML)