M&E framework design begins with a theory of change reconstruction — not the diagram that appears in the project document, but a working causal model that identifies which assumptions are empirically testable, which linkages are most likely to break, and where monitoring effort should therefore be concentrated. This exercise often reveals that existing logframes conflate outputs with outcomes, assign attribution to factors outside programme control, or set targets that were never grounded in baseline evidence.
Indicator selection follows a structured protocol: each proposed indicator is assessed against SMART criteria, data availability, collection cost, and sensitivity to the specific intervention logic. Where administrative data systems are weak, the framework incorporates primary data collection design — survey instruments, sampling strategies, and enumerator protocols calibrated to the programme context and budget envelope.
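The assessment step in this protocol can be sketched as a simple scoring pass. The criteria names, the 0–2 scoring scale, and the shortlisting threshold below are illustrative assumptions, not a published standard; the point is only that each candidate indicator is scored against the same criteria and weak candidates are flagged for redesign.

```python
# Hypothetical sketch of the indicator-assessment step: each candidate
# indicator is scored 0-2 against the criteria named in the text, and
# low scorers are flagged for redesign. Criteria names, scale, and
# threshold are illustrative, not drawn from any formal protocol.

CRITERIA = ("smart", "data_availability", "collection_cost", "sensitivity")

def assess(scores, threshold=6):
    """Sum 0-2 scores across the criteria; return (total, shortlist?)."""
    total = sum(scores[c] for c in CRITERIA)
    return total, total >= threshold

# Example: a coverage indicator that scores well except on data availability.
total, keep = assess({"smart": 2, "data_availability": 1,
                      "collection_cost": 2, "sensitivity": 2})
print(total, keep)  # 7 True
```

In practice the scores themselves would come from a structured review (e.g. a workshop with the data owners), with the cost and availability criteria weighted against the budget envelope mentioned above.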
For social protection and public expenditure programmes, diagnostic assessment methods draw on established frameworks including the World Bank's CODI (Core Diagnostic Instrument) approach and GIZ's results-based monitoring standards. Evaluation designs — whether process, outcome, or impact — are specified with explicit counterfactual strategies, including difference-in-differences, regression discontinuity, or matched comparison group designs where experimental assignment is not feasible.
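Of the counterfactual strategies listed, difference-in-differences is the simplest to illustrate: the estimated effect is the change in the treated group's outcome over time minus the change in the comparison group's, which nets out trends shared by both groups. The figures below are purely illustrative, not drawn from any real programme.

```python
# Minimal two-group, two-period difference-in-differences sketch.
# The estimate nets out the common time trend by subtracting the
# comparison group's pre/post change from the treated group's.
# All numbers are hypothetical.

def did_estimate(treat_pre, treat_post, comp_pre, comp_post):
    """Return the difference-in-differences estimate from group means."""
    return (treat_post - treat_pre) - (comp_post - comp_pre)

# Hypothetical mean household consumption before/after a cash transfer:
# treated group rises by 18 units, comparison group by 8.
effect = did_estimate(treat_pre=100.0, treat_post=118.0,
                      comp_pre=98.0, comp_post=106.0)
print(effect)  # 10.0
```

A regression formulation (outcome on treatment, period, and their interaction) recovers the same quantity and extends naturally to covariates and multiple periods, but the identifying assumption is the same: parallel trends between the two groups absent the intervention.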