A minimum viable standard for community-level impact verification in international development contexts. Pass/fail, community-generated, counting-based. Designed for development finance institutions, governments, NGOs, and researchers.
Billions of dollars flow into African development projects annually, claiming livelihoods created, emissions reduced, communities protected. But when a community member asks "How do we know this actually happened?" the answer dissolves into unaudited self-reports and metrics designed in Geneva that bear no resemblance to conditions on the ground.
This is the impact verification gap. It is not a minor accounting inconvenience. It is a structural failure that affects three constituencies simultaneously. No existing framework adequately addresses it: at the community level, cheaply enough for frontier-market deployment, and transparently enough for both village councils and institutional investors to trust.
The proliferation of overlapping ESG standards has created what researchers describe as a "bureaucratic maze" where compliance is achievable without accountability (Das et al., 2024). A mining operation can report against GRI, align with OECD Due Diligence Guidance, prepare for ISSB standards, and score well on CDP disclosure — while communities downstream face uncompensated environmental harm, because no framework verifies what actually changed at the village level. In the Democratic Republic of the Congo, which produces over 70% of global cobalt, researchers have documented contamination of the Lulu River reaching 47,468 mg/kg copper and 13,199 mg/kg cobalt despite the proliferation of ESG frameworks across the sector (Das et al., 2024; RAID-UK and AFREWATCH, 2024).
In the clean cooking sector, 558 million Africans still lack access to clean energy. Projects routinely report "stoves distributed" rather than "meals cooked cleanly" — a well-documented gap between output metrics and lived outcomes (IEA, 2024). A stove sitting unused in a shed is a wasted subsidy and a failed health outcome, but no existing framework catches it at the point of impact. Systematic reviews of Social Return on Investment (SROI) methodology reveal a deeper problem: studies apply "different ratios and techniques," rely on "highly subjective" financial proxies, and produce results that resist comparison across contexts (Vik, 2019). The result is due diligence fatigue, greenwashing exposure, and capital that either flees to safer markets or demands punitive risk premiums.
What is missing is not another framework. It is a single, simplified standard that starts where people live, not where spreadsheets end. VIS is minimum viable, not maximum proven. It is rigorous where tested, honest where hypothesised, and open to falsification everywhere.
Kamoga, J.C. (2026). The Verifiable Impact Standard (VIS): A Minimum Viable Standard for Community-Level Impact Verification. Working Paper. CC BY 4.0. Available at: vis-standard.org
VIS is deployed at the project level using a pass/fail verification logic based on observable, community-generated evidence and a counting-based scoring system that eliminates analyst discretion. Each pillar addresses a documented failure mode in existing impact verification practice.
Project sponsors produce a single-page public commitment, rendered in the primary language of the target community, specifying three measurable outcome targets and a defined completion deadline. This is the foundation of the VIS framework. Without a clearly stated, community-legible promise, no subsequent verification is meaningful — because there is no agreed standard against which to measure.
This requirement addresses a well-documented gap between project documentation and community comprehension. Conventional feasibility studies and logical framework analyses, while methodologically rigorous, are rarely accessible to the communities whose outcomes they purport to measure (Chambers, 1994; Estrella and Gaventa, 1998). A 200-page project document written in English and filed in a capital city office is not accountability. It is administration.
By constraining the commitment to one page with three targets, VIS shifts the burden of clarity from the community to the project sponsor. Outcome indicators are privileged over output metrics — not "stoves distributed" but "households cooking cleanly daily." Not "boreholes drilled" but "water available within 500 metres for 90% of target households." The promise must be falsifiable. If it cannot be proven false, it cannot be verified true.
The Promise document is publicly posted at the project site and registered in the Ledger at inception. Any community member, auditor, or investor can retrieve it at any point in the project lifecycle and compare it against the evidence record.
Three independent, tamper-resistant evidence streams are collected simultaneously at the project site: a digital layer comprising automated sensor data or mobile payment transaction logs; a human layer comprising testimony and documentation from a community-elected monitor operating on a rotating basis with a defined stipend; and a physical layer comprising tamper-evident markers or geotagged photographic records.
No single stream is treated as sufficient. A sensor can break. A monitor can be pressured. A photograph can be staged. Three independent layers that rarely agree perfectly — but whose divergence is itself informative — create verification resilience without imposing the technical capacity requirements that have limited digital-only traceability approaches across sub-Saharan African contexts (Asamoah Oppong, 2024; Osei-Mensah et al., 2023).
Divergence exceeding 15% triggers automatic review. Divergence exceeding 30% freezes the VIS score pending independent external audit. This divergence protocol is not a failure mode — it is a feature. It means the system is working as designed, catching discrepancies before they compound into systemic misreporting.
In practice, the three layers look like this in a Zambian rural water project: a $30 LoRaWAN flow meter transmits daily litres-pumped readings; a village-elected agent photographs the queue each morning with a timestamp; and a painted meter face shows cumulative totals visible to any passer-by. Together they create an evidence chain that is prohibitively difficult to falsify across all three layers simultaneously.
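As a concrete sketch, the divergence protocol can be implemented as a daily comparison of the three layers' readings. The standard does not prescribe how divergence is computed; this sketch measures it as the largest deviation of any single layer from the three-layer mean, and the function name and inputs are illustrative assumptions, not part of the standard.

```python
def divergence_status(digital: float, human: float, physical: float,
                      review_threshold: float = 0.15,
                      freeze_threshold: float = 0.30) -> str:
    """Compare the three evidence layers' daily totals.

    Divergence here is the largest deviation of any layer from the
    three-layer mean, as a fraction of that mean. This metric is an
    illustrative choice; VIS fixes only the 15% and 30% thresholds.
    """
    readings = [digital, human, physical]
    mean = sum(readings) / 3
    if mean == 0:
        return "review"  # no activity recorded by any layer: flag for review
    divergence = max(abs(r - mean) for r in readings) / mean
    if divergence > freeze_threshold:
        return "freeze"   # VIS score frozen pending independent external audit
    if divergence > review_threshold:
        return "review"   # automatic review triggered
    return "ok"
```

For the Zambian water example, the flow meter's daily litres, the monitor's count, and the painted meter delta would be the three inputs; agreement within 15% passes silently, and larger gaps escalate automatically.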
Four minimum performance standards, each derived from existing international benchmarks and operationalised as a binary pass/fail question to eliminate scoring ambiguity and resist strategic manipulation. Each gate is a yes/no question that any procurement officer, community representative, or independent auditor can answer from the evidence record without specialist training.
Economic Viability: does projected revenue cover operations and maintenance over the project lifetime? Drawn from the World Bank's financial sustainability framework for infrastructure and service delivery projects. A project that cannot sustain itself financially cannot sustain its community benefit. This gate catches donor-dependent projects that will collapse when external funding ends.
Climate Resilience: does the project remain functional under a one-in-ten-year dry-spell scenario as defined by IPCC SSP2-4.5 stress parameters? African development projects designed for average climatic conditions fail disproportionately under moderate climate stress — a pattern with particular relevance to water access, clean cooking fuel supply, and agricultural supply chains (IPCC, 2022; Hallegatte et al., 2016). A water pump that fails in a drought year is not a water solution.
Gender Inclusion: do women hold at least 30% of project decision-making roles and receive at least 50% of time-saving benefits? Drawn from the UN Women gender-responsive budgeting framework and consistent with EBRD (2022) standards. Development projects without structural gender accountability systematically underserve women despite claiming community-wide impact (Duflo, 2012; Kabeer, 1999). This gate is not aspirational — it is a minimum threshold with defined consequences for failure.
Non-Revenue Control: is product or service loss through leakage, theft, spoilage, or infrastructure failure 20% or less of total output? Derived from the IBNET 75th percentile performance standard. Binary pass/fail gates foreclose optimisation behaviour — the dynamic where sponsors design projects to score well rather than to perform well (Power, 1997). A project either meets the minimum standard or it does not.
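The four gates reduce to pure boolean checks. The thresholds below are the ones stated in the standard; the input field names, and the reading of "economic viability" as lifetime revenue covering lifetime operations and maintenance, are illustrative assumptions rather than a prescribed data schema.

```python
from dataclasses import dataclass

@dataclass
class GateInputs:
    projected_revenue: float          # lifetime revenue, project currency
    lifetime_opex: float              # lifetime operations & maintenance cost
    functional_under_dry_spell: bool  # passes 1-in-10-year dry-spell stress test
    women_decision_share: float       # fraction of decision-making roles held by women
    women_time_benefit_share: float   # fraction of time-saving benefits accruing to women
    non_revenue_loss: float           # fraction lost to leakage, theft, spoilage, failure

def benchmark_gates(x: GateInputs) -> dict:
    """Pillar 3: four binary pass/fail gates, no partial credit."""
    return {
        "economic_viability": x.projected_revenue >= x.lifetime_opex,
        "climate_resilience": x.functional_under_dry_spell,
        "gender_inclusion": (x.women_decision_share >= 0.30
                             and x.women_time_benefit_share >= 0.50),
        "non_revenue_control": x.non_revenue_loss <= 0.20,
    }
```

Because each gate is a yes/no question over the evidence record, a procurement officer or community representative can re-run the check by hand; there is no coefficient to tune and nothing to optimise against.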
Daily evidence outputs from Pillar 2 are cryptographically compressed into a unique hash value and recorded on a public distributed ledger at approximately $0.001 per transaction. Any future auditor can verify that a specific evidence record existed on a specific date and has not been subsequently altered, without requiring access to original data collection infrastructure or the cooperation of the project sponsor.
Paper records burn, flood, or get lost. Excel files get edited. Staff turn over. Regimes change. A $0.001 hash on a public ledger survives all of these. In a cobalt mine in the DRC, daily water quality hashes mean that when a downstream community alleges pollution in 2027, the 2026 ledger proves the claim true or false without expensive forensic investigation. The evidence existed on a specific date and was not altered. That is a transformative accountability shift for communities whose historical claims have been dismissed for lack of documentary evidence.
The Electronic Trade Documents Act 2023 (England and Wales) established that blockchain-timestamped records constitute tamper-evident evidence admissible in court proceedings. This is significant for cross-border development finance accountability where documentary disputes have historically favoured better-resourced parties (Zetzsche et al., 2020; Werbach, 2018).
A critical methodological limitation requires explicit acknowledgement: the immutable ledger verifies the existence and integrity of a data record on a given date. It does not verify the accuracy of the underlying data at the point of initial entry. A falsified sensor reading or a staged photograph, if entered into the system, will be preserved with the same cryptographic integrity as accurate data. Detection of initial falsification depends on the redundant human and physical evidence layers of Pillar 2. VIS is tamper-evident, not truth-generating. This distinction must be communicated clearly to all deployment stakeholders.
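A minimal sketch of the Pillar 4 hashing step, assuming the day's evidence bundle is serialised as canonical JSON before hashing. The standard specifies hashing to a public ledger but not a serialisation format, and the record schema below is illustrative.

```python
import hashlib
import json

def daily_evidence_hash(record: dict) -> str:
    """Compress a day's evidence bundle into a single SHA-256 digest.

    Canonical JSON (sorted keys, fixed separators) guarantees the same
    record always yields the same hash, so any later alteration of the
    archived record is detectable by re-hashing and comparing against
    the value written to the ledger.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative daily bundle spanning all three evidence layers.
record = {
    "date": "2026-03-14",
    "digital": {"litres_pumped": 4120},
    "human": {"monitor_id": "ZM-017", "photo_digest": "placeholder"},
    "physical": {"meter_reading": 182450},
}
h = daily_evidence_hash(record)  # 64-hex-character digest sent to the ledger

# Re-hashing the untouched record reproduces the digest exactly;
# changing any field, however small, produces a different digest.
assert daily_evidence_hash(record) == h
tampered = {**record, "digital": {"litres_pumped": 9999}}
assert daily_evidence_hash(tampered) != h
```

Note what this does and does not prove, per the limitation above: the digest shows the record existed unaltered since the ledger entry date, not that the sensor reading was accurate when first captured.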
A composite performance score on a 0–100 scale, calculated through a transparent counting procedure that eliminates analyst discretion entirely. The score is derived from three observable counts: gates passed (maximum 4) + evidence layers active (maximum 3) + consecutive months of uninterrupted ledger records (maximum 12), summed, divided by 19, multiplied by 100.
The arithmetic is additive rather than algebraic. No weighting coefficients. No monetised proxies. No subjective adjustments at any stage. The result is transparent to a village council and credible to a pension fund. A score of 100 means: four gates passed, three evidence layers active, twelve months of clean ledger (19 ÷ 19). A score of 53 means: three gates passed, two layers active, five months of ledger (10 ÷ 19 ≈ 53). The calculation is visible, reproducible, and contestable — by anyone.
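The counting procedure reduces to a few lines. This is a sketch of the stated arithmetic, not an official implementation; the rounding convention is an assumption, since the standard does not specify one.

```python
def vis_score(gates_passed: int, layers_active: int, ledger_months: int) -> int:
    """VIS score = (gates + layers + months) / 19 * 100, rounded.

    gates_passed: 0-4, layers_active: 0-3, ledger_months capped at 12.
    Pure counting: no weights, no proxies, no analyst discretion.
    """
    if not (0 <= gates_passed <= 4 and 0 <= layers_active <= 3):
        raise ValueError("counts out of range")
    months = min(max(ledger_months, 0), 12)
    return round((gates_passed + layers_active + months) / 19 * 100)

assert vis_score(4, 3, 12) == 100  # all four gates, all three layers, full year
assert vis_score(3, 2, 5) == 53    # 10 / 19 ≈ 52.6, rounded
```

Because every input is an observable count, two independent parties computing the score from the same evidence record must arrive at the same number, which is precisely the contestability property the pillar claims.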
This design directly addresses three structural weaknesses of Social Return on Investment methodology identified in systematic reviews: poor cross-contextual comparability arising from inconsistent financial valuation conventions; reliance on analyst discretion in the selection of financial proxies for social outcomes; and resource intensity that renders complex monetisation inaccessible in low-capacity deployment contexts (Vik, 2019; Banke-Thomas et al., 2015; Nicholls et al., 2012). VIS eliminates all three weaknesses by replacing valuation with counting.
Defined decision thresholds translate the score into capital allocation signals that development finance institutions and procurement officers can act on directly, without additional interpretation. A score below 70 does not mean the project is a failure — it means specific, identifiable standards were not met, and remediation is possible. Transparency about failure is the precondition for improvement.
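A hedged sketch of how the score might translate into a capital allocation signal. Only the below-70 remediation threshold appears in the text above; the upper boundary at 90 and the signal names are hypothetical illustrations of how a development finance institution might grade scores above the remediation line.

```python
def allocation_signal(score: int) -> str:
    """Map a VIS score (0-100) to a capital allocation signal.

    The 70 threshold is stated by the standard; the 90 boundary and
    the signal labels are hypothetical, for illustration only.
    """
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score < 70:
        return "remediate"               # named standards unmet; remediation possible
    if score < 90:
        return "proceed_with_monitoring" # hypothetical intermediate tier
    return "proceed"                     # hypothetical top tier
```

The key property is the one the pillar states: a sub-70 score names exactly which gates, layers, or ledger months fell short, so "remediate" is an actionable instruction rather than a verdict of failure.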
VIS is designed for application across international development contexts where community-level accountability has consistently lagged behind investment commitments. Three sectors illustrate its range; the framework is not limited to them.
The United Nations 2030 Agenda for Sustainable Development requires an estimated $5–7 trillion annually in sustainable development investment. Public funds and ODA will not close this gap. Private capital must scale, but private capital will not scale on unaudited promises.
Have you applied VIS to a project? Document it here. Every submission becomes part of the living evidence base that the framework works in practice. Submissions are reviewed and published in the VIS Use Case Registry.