Organic Design
A methodology for research, product design, and stakeholder decision-making. It draws on interviews, field studies, and direct stakeholder contact, then structures the findings so each project starts from what previous rounds already learned.
The method uses a tree as its organizing metaphor. Raw observations and captures are ether, the unprocessed material waiting to be traced. When findings are compressed into reusable knowledge, they become soil that feeds new work. The trunk is the operational surface: live projects, active decisions, and work in progress. Leaves are what the world sees: products, case studies, published work. And where growth was blocked or a direction did not work, bark forms as protective structure that strengthens the whole system. The four stages of the cycle follow this pattern: Trace captures what is alive, Root follows it to the deeper constraint, Re-soil feeds findings back into the team's working knowledge, and Regrow produces new directions from what the research revealed.

The Cycle
Every engagement moves through four stages. Each pass produces outputs that feed the next pass, so the research gets sharper and the product decisions get more grounded over time.
Trace
Start where the person is. Observe what they actually do: the behaviors, decisions, frustrations, and workarounds. Document before interpreting.
Root
Follow the pattern to the real constraint. Not the surface complaint or the feature request. The deeper need or structural gap that produces the visible behavior.
Re-soil
Feed findings back into the team's working knowledge. Structured research outputs enter roadmap discussions, positioning decisions, and prioritization frameworks. The organization can now see what it could not see before the research.
Regrow
Act on what the research revealed. New product directions, reprioritized features, revised positioning, or entirely new problem framings emerge from what the team can now see.
In practice: three planned features get deprioritized, and two new directions emerge that competitive analysis could not have surfaced.
Three Components
Organic Design operates through three integrated components. Each one feeds the others: research infrastructure makes applied research repeatable, applied research produces the findings that structured work distribution packages into self-contained work, and work distribution generates structured feedback that strengthens the infrastructure.
Research Infrastructure
Interview protocols, synthesis templates, stakeholder frameworks, and structured finding repositories that persist after the project ends. Each study builds on what previous rounds already produced rather than starting from a blank research plan.
In practice: a standardized interview guide, a 48-hour synthesis workflow, and a tagged finding repository that the team can query when making product decisions months later.
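A minimal sketch of what such a finding repository might look like, in Python. The Finding fields, tag scheme, and query shape are illustrative assumptions, not part of the method:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a tagged finding repository; field names
# and the tag vocabulary are illustrative, not prescribed.
@dataclass
class Finding:
    summary: str          # one-sentence statement of what was learned
    source: str           # interview, field study, or stakeholder session
    study: str            # which research round produced it
    tags: set[str] = field(default_factory=set)

class FindingRepository:
    def __init__(self) -> None:
        self._findings: list[Finding] = []

    def add(self, finding: Finding) -> None:
        self._findings.append(finding)

    def query(self, *tags: str) -> list[Finding]:
        """Return findings carrying every requested tag."""
        wanted = set(tags)
        return [f for f in self._findings if wanted <= f.tags]

repo = FindingRepository()
repo.add(Finding(
    summary="Buyers treat affordability and life fit as separate questions",
    source="interview", study="home-ground-round-2",
    tags={"home-ground", "affordability", "onboarding"},
))
# Months later, a product decision can pull the relevant evidence:
print(repo.query("affordability", "onboarding"))
```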
Applied Research
Qualitative interviews, stakeholder sessions, decision facilitation, and evidence-based go-to-market. This is where the method meets the person. Every engagement traces how people actually experience problems before proposing solutions.
In practice: interviewing home buyers to discover that affordability, sustainability, and life fit are three separate questions, then structuring a product around that separation.
Structured Work Distribution
A system for dividing work into self-contained packets where each packet carries the context, references, and decision criteria needed to complete it. Work divides across people and teams without losing the knowledge that informed it.
In practice: instead of a Jira ticket that says "build onboarding flow," a work packet that includes the relevant research findings, the design constraints those findings produced, the specific user need it addresses, and the acceptance criteria. Anyone picking it up has what they need to make good decisions, not just a task description.
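A sketch of the packet as a data structure, with field names chosen for illustration rather than taken from any real schema:

```python
from dataclasses import dataclass, field

# Illustrative work packet: the task plus the knowledge that informed it.
@dataclass
class WorkPacket:
    title: str                       # e.g. "Build onboarding flow"
    user_need: str                   # the specific need this work addresses
    findings: list[str]              # research findings that motivated it
    constraints: list[str]           # design constraints those findings produced
    acceptance_criteria: list[str]   # how "done" will be judged
    references: list[str] = field(default_factory=list)

packet = WorkPacket(
    title="Build onboarding flow",
    user_need="First-time buyers need affordability answered before anything else",
    findings=["Affordability, sustainability, and life fit are separate questions"],
    constraints=["Ask about affordability first; never bundle it with life-fit prompts"],
    acceptance_criteria=["A new user reaches an affordability estimate in one session"],
    references=["home-ground-round-2"],
)
```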
Operating Principles
These commitments distinguish Organic Design from conventional research and product methodologies. Each one governs real decisions in how products get built and how teams operate.
Multi-Perspective Research
Formerly: Coordination Field
The unit of analysis in any research engagement is not the individual user. It is the space where multiple perspectives on the same problem meet. A home buyer, a real estate agent, and a lender each hold part of the picture. Research designed to surface one perspective at a time misses the interactions between them.
In practice: running structured sessions where different stakeholders respond to the same scenarios, then synthesizing across their accounts to find where their experiences diverge. That divergence is where the real product opportunity lives.
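As a toy illustration of that synthesis step, assuming responses have already been keyed by scenario and stakeholder role; the divergence test here is deliberately crude:

```python
from collections import defaultdict

# Hypothetical session data: (scenario, stakeholder role, account).
responses = [
    ("closing-delay", "buyer",  "I had no idea why it stalled"),
    ("closing-delay", "agent",  "The lender went quiet for two weeks"),
    ("closing-delay", "lender", "We were waiting on missing documents"),
]

by_scenario: dict[str, dict[str, str]] = defaultdict(dict)
for scenario, role, account in responses:
    by_scenario[scenario][role] = account

# Flag scenarios where the same event produced different accounts.
for scenario, accounts in by_scenario.items():
    if len(set(accounts.values())) > 1:  # perspectives diverge
        print(f"Opportunity in '{scenario}':")
        for role, account in accounts.items():
            print(f"  {role}: {account}")
```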
Users as Decision-Makers
Not just research subjects
Users hold governing weight in the product. They tag, vote, curate, and shape what the product becomes through direct participation. Their input does not stop at the research phase. It is built into the product as an ongoing feedback mechanism.
In practice: in The Commons, community members vote on how local information is categorized and surfaced. Their curation decisions directly shape what other residents see. The product learns from use, not just from research about use.
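One way such vote-driven curation could be modeled, assuming a simple majority rule; this is an illustration, not how The Commons is actually implemented:

```python
from collections import Counter

votes: dict[str, Counter] = {}  # post_id -> votes per category

def vote(post_id: str, category: str) -> None:
    # Each community member's vote accumulates on the post.
    votes.setdefault(post_id, Counter())[category] += 1

def surfaced_category(post_id: str) -> str:
    """The category residents see is the community's majority choice."""
    return votes[post_id].most_common(1)[0][0]

vote("post-17", "road-closures")
vote("post-17", "road-closures")
vote("post-17", "events")
print(surfaced_category("post-17"))  # road-closures
```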
Governance as Design
Fairness is a design constraint, not a policy layer
Moderation rules, content curation logic, and community structure are designed as core product surfaces from the start. They are not added when problems appear. How a product handles conflict, abuse, and edge cases is as much a design decision as the interface layout.
In practice: before writing any UI code for The Commons, we designed the moderation model: who can flag content, how disputes are resolved, what constitutes removal vs. de-ranking. These decisions shaped the data model and the interface.
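A sketch of how those decisions might surface in the data model. The visibility states mirror the removal vs. de-ranking distinction above, while the flag thresholds and dispute rule are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    RANKED = "ranked"        # normal placement
    DERANKED = "deranked"    # shown, but pushed down
    REMOVED = "removed"      # not shown at all

@dataclass
class Post:
    body: str
    flags: list[str] = field(default_factory=list)  # who flagged, or why
    visibility: Visibility = Visibility.RANKED
    disputed: bool = False   # author has contested a moderation action

def moderate(post: Post, derank_at: int = 3, remove_at: int = 10) -> None:
    """Escalate visibility by flag count; a dispute freezes escalation
    until a resolution decision is recorded."""
    if post.disputed:
        return
    if len(post.flags) >= remove_at:
        post.visibility = Visibility.REMOVED
    elif len(post.flags) >= derank_at:
        post.visibility = Visibility.DERANKED
```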
Two Participant Types
Lived experience and professional expertise
Every product is informed by two distinct groups: people who directly experience the problem (a renter navigating housing, an operator running a business) and people with professional expertise in the domain (agents, advisors, industry specialists). Both contribute real knowledge. Neither replaces the other.
In practice: for Home Ground, we interview both first-time buyers navigating affordability and mortgage professionals who see patterns across hundreds of transactions. The buyer knows what it feels like. The professional knows what typically goes wrong. The product needs both.
Low-Barrier Participation
Trust first, moderate structurally
Participation does not require identity disclosure or credential verification upfront. The system extends access by default and manages misuse through rate limiting, content flagging, and structural constraints rather than gatekeeping who gets in. This lowers the barrier for the people whose perspectives are most often excluded.
In practice: anyone can post to The Commons without creating an account. Abuse is managed through post rate limits, community flagging, and geographic scoping rather than identity verification. Moderation scales through structure, not through requiring people to prove who they are.
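A minimal sketch of that structural approach, assuming an anonymous client token (for example, a hashed connection identifier) and limits chosen purely for illustration:

```python
import time

WINDOW_SECONDS = 3600        # illustrative window
MAX_POSTS_PER_WINDOW = 5     # illustrative limit

_post_times: dict[str, list[float]] = {}

def may_post(client_key: str, post_area: str, client_area: str) -> bool:
    # Geographic scoping: posts only land in the poster's own area.
    if post_area != client_area:
        return False
    now = time.time()
    recent = [t for t in _post_times.get(client_key, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_POSTS_PER_WINDOW:
        return False  # the rate limit, not identity, constrains misuse
    recent.append(now)
    _post_times[client_key] = recent
    return True
```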
Needs-Based Evaluation
Products measured against human needs, not feature lists
Product decisions are evaluated against ten core human needs organized into four families: relational (connection, acceptance, care, honesty), orientation (awareness, meaning), agency (autonomy, play), and regulation (peace, physical well-being). This framework catches when a product serves a feature request but misses the actual need behind it.
In practice: a feature request for "better search" might trace back to an orientation need (awareness: "I need to know what is happening near me") or an agency need (autonomy: "I need to find information on my own terms"). The need determines the design direction. The feature request alone does not.
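The families and needs above translate directly into data; the request-to-need trace below is a hypothetical synthesis output, not a fixed mapping:

```python
# The ten needs and four families come straight from the framework text.
NEED_FAMILIES = {
    "relational": ["connection", "acceptance", "care", "honesty"],
    "orientation": ["awareness", "meaning"],
    "agency": ["autonomy", "play"],
    "regulation": ["peace", "physical well-being"],
}

# Hypothetical trace: which needs a feature request was linked to
# during research synthesis.
REQUEST_TRACE = {
    "better search": ["awareness", "autonomy"],
}

def families_for(request: str) -> set[str]:
    """Which need families a feature request actually serves."""
    needs = REQUEST_TRACE.get(request, [])
    return {fam for fam, members in NEED_FAMILIES.items()
            if any(n in members for n in needs)}

print(families_for("better search"))  # orientation and agency
```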
How It Compounds
Each engagement produces reusable research protocols, synthesis templates, and structured finding repositories alongside the project-specific insights. The next engagement starts from those assets rather than from zero. Interview guides get sharper. Synthesis gets faster. The team's ability to act on research findings improves with each cycle. This is not a consulting model where every project resets. It is an accumulation model where every project makes the next one stronger.