Making the logic of your initiative explicit
Clarifying how change is expected to happen, before costly decisions are locked in.
Programs and initiatives are rarely simple. People working on the ground or leading delivery often have a strong intuitive sense of what needs to change and how improvement is likely to occur. What is less straightforward is articulating, in a clear and disciplined way, how planned activities are expected to lead to outcomes over time.
This work focuses on making that logic explicit.
Stepping back early in program or initiative design makes it possible to map clearly how activities are expected to lead to short- and mid-term outcomes, and how these in turn contribute to longer-term outcomes and broader system impact. Making these connections explicit reduces the risk of costly design flaws and provides a clear reference point for decision-making as the work unfolds.
A well-articulated outcome logic serves several purposes at once: it creates a shared understanding of intent, provides a roadmap against which progress can be assessed, and establishes clear benchmarks for judging whether a program is on track. For organisations seeking funding, it also demonstrates to funders that the initiative is grounded in a coherent and plausible logic rather than aspirational intent.
What this work involves
This work typically results in:
A clear, plain-language change narrative (Theory of Change)
A structured or visual outcome pathway linking activities to outcomes
Explicit articulation of the assumptions underpinning the initiative
Identification of plausible short- and mid-term outcomes
Clarity about what can and cannot be meaningfully evaluated
When this is most useful
This work is particularly valuable when:
You are developing a new or complex program and want an independent perspective to clarify and test its underlying logic
You need to clearly articulate to funders, boards, or other stakeholders how an initiative is expected to lead to its stated outcomes
A program or initiative is already underway and you want to assess whether its design and assumptions remain sound
You are planning an evaluation — whether conducted internally or commissioned externally — and need a clear Theory of Change to guide the evaluation approach
What my involvement usually looks like
Engagements are typically structured and time-limited, and may include:
A focused engagement over approximately 3–6 weeks
Review of relevant documentation
Targeted discussions with key stakeholders
One or two facilitated working sessions
At the conclusion of the process, I provide a clear articulation of outcome pathways in both written and schematic form. Where useful, I can also present and work through the logic with teams or stakeholders.
Designing evaluation that supports real decisions
Ensuring evaluation effort produces insight that can actually be used.
Evaluation is a mandatory requirement for most publicly funded programs and initiatives. Where it is not mandatory, serious organisations rarely skip it — because evaluation is how you know whether a program is actually delivering what it set out to do.
The disappointment that follows an evaluation is rarely because it uncovers problems. More often, it is because the evaluation is not useful.
The pitfalls are familiar: evaluations that measure how busy a service is rather than what it achieves; evaluations that become a tick-box exercise; evaluation budgets spent collecting data that does not inform future decisions; or evaluations that show disappointing outcomes but cannot distinguish between weak implementation and flawed program assumptions.
When this happens, the problem is not the findings.
It is the evaluation design.
Well-designed evaluation frameworks ensure that evaluation effort is worth the time, cost, and organisational energy invested, because they are anchored in the decisions that need to be made during implementation and at key points in a program’s life.
What this work involves
This work involves designing evaluation frameworks that are explicitly anchored to decision-making rather than reporting.
It focuses on clarifying what decisions need to be made during implementation and at key points in a program’s life, and shaping evaluation questions, indicators, and data collection around those decisions. A central part of this work is being explicit about trade-offs: what can reasonably be evaluated now, what cannot yet be assessed, and where uncertainty or attribution limits should be acknowledged rather than obscured.
The aim is to ensure evaluation effort leads to insight that can be interpreted and acted upon — not just reported.
When this is most useful
This work is particularly valuable when:
Evaluation is a mandatory requirement of funding and you want a clear, proportionate evaluation plan rather than a tick-box exercise
You are commissioning an evaluation and want confidence that it will generate information useful at future decision points
You have internal capability to conduct evaluation but want independent expertise to design a robust, decision-focused framework
Existing evaluation activity is producing too many indicators and too little clarity
Outcomes are difficult to measure or attribute, and you want an evaluation approach that reflects this complexity rather than setting the program up to fail
What my involvement usually looks like
Engagements are typically structured and time-limited, and may include:
A short advisory engagement over approximately 3–6 weeks
Review of program documentation and existing evaluation materials
Targeted discussions to clarify decision needs, constraints, and risk
Development of a clear, usable evaluation framework that can guide internal evaluation or be used to commission external work
This work often prevents organisations from investing in evaluations that are over-engineered, poorly aligned to decisions, or unlikely to produce interpretable findings.
Independent evidence-based judgement at decision points
Reviewing and making sense of evaluation findings and evidence to support decisions.
Evaluation reports often go largely unused. Significant resources are invested in evaluation, only for the resulting report to sit unread, or to be referenced selectively without genuinely informing what happens next.
This is rarely because the evaluation uncovered uncomfortable findings. More often, it is because the report does not support decision-making. Reports may be overly long, written in academic or technical language, or reluctant to draw clear conclusions. In other cases, findings are ambiguous, stakeholders disagree on what the results mean for program operations, or there is concern that important issues have been overlooked or overstated.
At critical decision points, organisations often need more than data. They need clear, independent judgement about what the evidence does and does not support, and what the implications are for future action.
What this work involves
This work provides independent, evidence-informed judgement to support decision-making at key points in a program’s life.
It is not a re-analysis of data. Rather, it involves structured review, interpretation, and sense-making of existing evaluation findings and related evidence, with a focus on relevance, quality, and implications for action.
This work typically includes:
Review of evaluation reports and supporting material
Assessment of the quality, limitations, and relevance of the evidence
Identification of blind spots, gaps, and over-claims
Clear articulation of what can and cannot reasonably be concluded
Framing of realistic options, risks, and implications for next steps
When this is most useful
This work is particularly valuable when:
You receive an evaluation report that feels unclear, unhelpful, or difficult to act on
You want a clear, plain-language interpretation of what a report actually says — and does not say
You need to translate evaluation findings into realistic decisions about continuation, adaptation, or scale-up
There is disagreement among stakeholders about how findings should be interpreted
You want an independent view on the quality and usefulness of the evidence before making a high-stakes decision
What my involvement usually looks like
Engagements are typically short, focused, and high-trust, and may include:
Review of relevant reports and supporting materials
A concise review memo and a focused briefing that clearly set out:
what can be concluded
what cannot be concluded
the implications for future decisions
This work often helps organisations move beyond debate or inertia and make decisions that are grounded in evidence, while being honest about uncertainty and limitations.