Open-Source Framework · CC BY 4.0 · v1.1 · June 2026

Verifiable Impact Standard.

A minimum viable standard for community-level impact verification in international development contexts. Pass/fail, community-generated, counting-based. Designed for development finance institutions, governments, NGOs, and researchers.

CC BY 4.0 Open Source Working Paper · 2026 5 Pillars 0–100 Score $0.08 / beneficiary / year
VIS Framework Overview — Five-Pillar Architecture
01
The Promise
Community-Contract
Single-page commitment in community language. Three measurable targets. One deadline.
Input: Project sponsor
02
The Proof
Three-Layer Evidence
Digital, human, and physical streams. Divergence triggers review.
03
The Benchmark
Pass/Fail Gates
Four international standards. Binary yes/no. Zero discretion.
PASS
FAIL
04
The Ledger
Immutable Record
Daily hash. Public ledger. Tamper-evident. $0.001 per entry.
$0.001 / entry
05
The Score
0–100 Output
Counting-based. Zero analyst discretion. Clear financing signal.
95
90–100
Preferential financing
80–89
Standard financing
70–79
Restructuring required
< 70
Financing suspended
Formula: (Gates passed max 4 + Evidence layers max 3 + Months clean ledger max 12) ÷ 19 × 100 = VIS Score (0–100)
The Problem

The impact verification gap.

Billions of dollars flow into African development projects annually, claiming livelihoods created, emissions reduced, communities protected. But when a community member asks "How do we know this actually happened?" the answer dissolves into unaudited self-reports and metrics designed in Geneva that bear no resemblance to conditions on the ground.

This is the impact verification gap. It is not a minor accounting inconvenience. It is a structural failure that affects three constituencies simultaneously. No existing framework adequately addresses it at the community level, cheaply enough for frontier-market deployment, and transparently enough for both village councils and institutional investors to trust.

The proliferation of overlapping ESG standards has created what researchers describe as a "bureaucratic maze" where compliance is achievable without accountability (Das et al., 2024). A mining operation can report against GRI, align with OECD Due Diligence Guidance, prepare for ISSB standards, and score well on CDP disclosure — while communities downstream face uncompensated environmental harm, because no framework verifies what actually changed at the village level. In the Democratic Republic of the Congo, which produces over 70% of global cobalt, researchers have documented contamination of the Lulu River reaching 47,468 mg/kg copper and 13,199 mg/kg cobalt despite the proliferation of ESG frameworks across the sector (Das et al., 2024; RAID-UK and AFREWATCH, 2024).

In the clean cooking sector, 558 million Africans still lack access to clean cooking. Projects routinely report "stoves distributed" rather than "meals cooked cleanly" — a well-documented gap between output metrics and lived outcomes (IEA, 2024). A stove sitting unused in a shed is a wasted subsidy and a failed health outcome, but no existing framework catches it at the point of impact. Systematic reviews of Social Return on Investment (SROI) methodology reveal a deeper problem: studies apply "different ratios and techniques," rely on "highly subjective" financial proxies, and produce results that resist comparison across contexts (Vik, 2019). The result is due diligence fatigue, greenwashing exposure, and capital that either flees to safer markets or demands punitive risk premiums.

What is missing is not another framework. It is a single, simplified standard that starts where people live, not where spreadsheets end. VIS is minimum viable, not maximum proven. It is rigorous where tested, honest where hypothesised, and open to falsification everywhere.

Kamoga, J.C. (2026). The Verifiable Impact Standard (VIS): A Minimum Viable Standard for Community-Level Impact Verification. Working Paper. CC BY 4.0. Available at: vis-standard.org

The gap affects three constituencies
Communities
In the DRC, communities face documented displacement without traceable remedy, river contamination without accountability, and benefit agreements that exist only on paper. No mechanism allows a village elder to verify whether a promise made at project inception was kept at project completion. VIS generates that mechanism at the point where people live.
Investors
Capital is allocated on trust rather than evidence. The overlapping ESG framework landscape creates compliance costs without verification value. Greenwashing exposure is increasing as regulators in the EU and US tighten disclosure requirements. A single, independently verifiable score — generated from community-level data — reduces due diligence cost and greenwashing risk simultaneously.
Governments
Procurement systems reward narrative over performance. A rural water project can claim 90% uptime while the pump has been broken for six months. A clean cooking programme can report 300,000 stoves distributed while most sit unused. Without community-level verification, public money funds promises rather than outcomes. VIS gives governments a procurement-ready accountability tool that satisfies both domestic oversight and international compliance requirements including EU CSRD and California SB-253.
The VIS Framework

Five pillars. One score.

VIS is deployed at the project level using a pass/fail verification logic based on observable, community-generated evidence and a counting-based scoring system that eliminates analyst discretion. Each pillar addresses a documented failure mode in existing impact verification practice.

VIS Pass/fail · Community-generated · $0.08/beneficiary
SROI Monetisation required · Analyst-dependent · High cost
ISSB Disclosure only · No outcome verification
Gold Standard Project-level · Resource intensive · $0.12/beneficiary
B-Corp Organisation-level · Not project-level
01
The Promise
Community-Contract

Project sponsors produce a single-page public commitment, rendered in the primary language of the target community, specifying three measurable outcome targets and a defined completion deadline. This is the foundation of the VIS framework. Without a clearly stated, community-legible promise, no subsequent verification is meaningful — because there is no agreed standard against which to measure.

This requirement addresses a well-documented gap between project documentation and community comprehension. Conventional feasibility studies and logical framework analyses, while methodologically rigorous, are rarely accessible to the communities whose outcomes they purport to measure (Chambers, 1994; Estrella and Gaventa, 1998). A 200-page project document written in English and filed in a capital city office is not accountability. It is administration.

By constraining the commitment to one page with three targets, VIS shifts the burden of clarity from the community to the project sponsor. Outcome indicators are privileged over output metrics — not "stoves distributed" but "households cooking cleanly daily." Not "boreholes drilled" but "water available within 500 metres for 90% of target households." The promise must be falsifiable. If it cannot be proven false, it cannot be verified true.

The Promise document is publicly posted at the project site and registered in the Ledger at inception. Any community member, auditor, or investor can retrieve it at any point in the project lifecycle and compare it against the evidence record.

Design principle
One page. Three targets. One deadline. Community language.
Example restatement
"100,000 stoves by 2025" becomes "80% of stoves in Bungoma County used daily by March 2026, verified monthly."
Limitation
Assumes literate community representation. Audio/video recording required where literacy is low.
Grounded in
Chambers (1994); Estrella and Gaventa (1998), participatory monitoring literature
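The one-page constraint can be made machine-checkable. The sketch below models a Promise record as a small data structure that rejects anything other than exactly three targets; all field and class names are illustrative, not part of the standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Promise:
    """One-page community commitment: three measurable targets, one deadline.

    Illustrative only -- VIS does not prescribe a data format for the Promise.
    """
    project: str
    language: str     # primary language of the target community
    targets: list     # exactly three measurable outcome statements
    deadline: date

    def __post_init__(self):
        # Enforce the design principle: three targets, no more, no fewer.
        if len(self.targets) != 3:
            raise ValueError("A Promise must state exactly three measurable targets")

# A hypothetical Promise for the Bungoma restatement quoted above.
promise = Promise(
    project="Bungoma Clean Cooking",
    language="Swahili",
    targets=[
        "80% of distributed stoves used daily, verified monthly",
        "Average daily clean-stove cook time of at least 60 minutes per household",
        "Monthly fuel purchase recorded for 70% of participating households",
    ],
    deadline=date(2026, 3, 31),
)
```

Registering this record in the Ledger at inception (Pillar 4) is what makes the Promise retrievable and comparable against the evidence record later.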
02
The Proof
Three-Layer Evidence

Three independent, tamper-resistant evidence streams are collected simultaneously at the project site: a digital layer comprising automated sensor data or mobile payment transaction logs; a human layer comprising testimony and documentation from a community-elected monitor operating on a rotating basis with a defined stipend; and a physical layer comprising tamper-evident markers or geotagged photographic records.

No single stream is treated as sufficient. A sensor can break. A monitor can be pressured. A photograph can be staged. Three independent layers that rarely agree perfectly — but whose divergence is itself informative — create verification resilience without imposing the technical capacity requirements that have limited digital-only traceability approaches across sub-Saharan African contexts (Asamoah Oppong, 2024; Osei-Mensah et al., 2023).

Divergence exceeding 15% triggers automatic review. Divergence exceeding 30% freezes the VIS score pending independent external audit. This divergence protocol is not a failure mode — it is a feature. It means the system is working as designed, catching discrepancies before they compound into systemic misreporting.

In practice, the three layers look like this in a Zambian rural water project: a $30 LoRaWAN flow meter sends daily litres pumped; a village-elected agent photographs the queue each morning with a timestamp; and a painted meter face shows cumulative totals visible to any passer-by. Together they create an evidence chain that is prohibitively difficult to falsify across all three simultaneously.

Digital layer
IoT sensor or mobile-money transaction log · $0.001–$0.10/record
Human layer
Community-elected monitor, rotating, paid stipend · $20–50/month
Physical layer
Tamper-evident seal or geotagged photo · $5–15/unit
Divergence protocol
>15% triggers review · >30% score frozen · audit cost: $8,000/year
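The divergence thresholds can be expressed as a simple decision rule. The sketch below measures divergence as the spread (max minus min) of the three layers' readings relative to their mean — that operationalisation is an assumption of mine, since the framework text does not fix one.

```python
def divergence_action(readings):
    """Map readings of the same quantity from the three evidence layers
    to a protocol action. Divergence is taken here as (max - min) / mean,
    an assumed operationalisation of the >15% / >30% thresholds."""
    mean = sum(readings) / len(readings)
    if mean == 0:
        return "review"          # relative spread undefined; escalate (assumption)
    spread = (max(readings) - min(readings)) / mean
    if spread > 0.30:
        return "freeze_score"    # VIS score frozen pending independent external audit
    if spread > 0.15:
        return "review"          # automatic review triggered
    return "ok"
```

For the Zambian water example, daily litres from the flow meter, the monitor's count, and the painted meter reading would be passed in together; three readings of 1000, 800 and 1000 litres diverge by about 21% and would trigger a review.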
03
The Benchmark
Pass/Fail Gates

Four minimum performance standards, each derived from existing international benchmarks and operationalised as a binary pass/fail question to eliminate scoring ambiguity and resist strategic manipulation. Each gate is a yes/no question that any procurement officer, community representative, or independent auditor can answer from the evidence record without specialist training.

Economic Viability: does projected revenue cover operations and maintenance over the project lifetime? Drawn from the World Bank's financial sustainability framework for infrastructure and service delivery projects. A project that cannot sustain itself financially cannot sustain its community benefit. This gate catches donor-dependent projects that will collapse when external funding ends.

Climate Resilience: does the project remain functional under a one-in-ten-year dry-spell scenario as defined by IPCC SSP2-4.5 stress parameters? African development projects designed for average climatic conditions fail disproportionately under moderate climate stress — a pattern with particular relevance to water access, clean cooking fuel supply, and agricultural supply chains (IPCC, 2022; Hallegatte et al., 2016). A water pump that fails in a drought year is not a water solution.

Gender Inclusion: do women hold at least 30% of project decision-making roles and receive at least 50% of time-saving benefits? Drawn from the UN Women gender-responsive budgeting framework and consistent with EBRD (2022) standards. Development projects without structural gender accountability systematically underserve women despite claiming community-wide impact (Duflo, 2012; Kabeer, 1999). This gate is not aspirational — it is a minimum threshold with defined consequences for failure.

Non-Revenue Control: is product or service loss through leakage, theft, spoilage, or infrastructure failure 20% or less of total output? Derived from the IBNET 75th percentile performance standard. Binary pass/fail gates foreclose optimisation behaviour — the dynamic where sponsors design projects to score well rather than to perform well (Power, 1997). A project either meets the minimum standard or it does not.

Gate 1
Economic viability · World Bank
Gate 2
Climate resilience · IPCC SSP2-4.5
Gate 3
Gender inclusion · UN Women / EBRD 2022
Gate 4
Non-revenue control · IBNET 75th percentile
Failure consequence
Score below 80 · 6-month remediation period · no capital drawdown
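Because each gate is a yes/no question over the evidence record, gate evaluation reduces to four boolean tests. The sketch below shows one way to encode them; the input field names are mine, and the thresholds are the ones stated above.

```python
def evaluate_gates(p):
    """Binary benchmark gates (Pillar 3). Input keys are illustrative."""
    return {
        # Gate 1: projected revenue covers operations and maintenance (World Bank).
        "economic_viability": p["projected_revenue"] >= p["om_cost"],
        # Gate 2: functional under a 1-in-10-year dry spell (IPCC SSP2-4.5).
        "climate_resilience": p["functional_in_dry_spell"],
        # Gate 3: >=30% of decision-making roles and >=50% of time-saving
        # benefits to women (UN Women / EBRD 2022).
        "gender_inclusion": (p["women_decision_share"] >= 0.30
                             and p["women_time_benefit_share"] >= 0.50),
        # Gate 4: loss through leakage/theft/spoilage/failure <=20% of output
        # (IBNET 75th percentile).
        "non_revenue_control": p["loss_share"] <= 0.20,
    }

project = {
    "projected_revenue": 120_000, "om_cost": 100_000,
    "functional_in_dry_spell": True,
    "women_decision_share": 0.35, "women_time_benefit_share": 0.60,
    "loss_share": 0.18,
}
gates = evaluate_gates(project)
gates_passed = sum(gates.values())   # feeds directly into the Pillar 5 count
```

There is no partial credit anywhere in this step: a gate either passes or it does not, which is exactly what forecloses score optimisation.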
04
The Ledger
Immutable Record

Daily evidence outputs from Pillar 2 are cryptographically compressed into a unique hash value and recorded on a public distributed ledger at approximately $0.001 per transaction. Any future auditor can verify that a specific evidence record existed on a specific date and has not been subsequently altered, without requiring access to original data collection infrastructure or the cooperation of the project sponsor.

Paper records burn, flood, or get lost. Excel files get edited. Staff turn over. Regimes change. A $0.001 hash on a public ledger survives all of these. In a cobalt mine in the DRC, daily water quality hashes mean that when a downstream community alleges pollution in 2027, the 2026 ledger proves the claim true or false without expensive forensic investigation. The evidence existed on a specific date and was not altered. That is a transformative accountability shift for communities whose historical claims have been dismissed for lack of documentary evidence.

The Electronic Trade Documents Act 2023 (England and Wales) established that blockchain-timestamped records constitute tamper-evident evidence admissible in court proceedings. This is significant for cross-border development finance accountability where documentary disputes have historically favoured better-resourced parties (Zetzsche et al., 2020; Werbach, 2018).

A critical methodological limitation requires explicit acknowledgement: the immutable ledger verifies the existence and integrity of a data record on a given date. It does not verify the accuracy of the underlying data at the point of initial entry. A falsified sensor reading or a staged photograph, if entered into the system, will be preserved with the same cryptographic integrity as accurate data. Detection of initial falsification depends on the redundant human and physical evidence layers of Pillar 2. VIS is tamper-evident, not truth-generating. This distinction must be communicated clearly to all deployment stakeholders.

Transaction cost
~$0.001 average · public distributed ledger
Legal standing
Electronic Trade Documents Act 2023, tamper-evident evidence admissible in court
What it proves
That data existed on a specific date and was not altered subsequently
Critical limitation
Does not verify data was true at initial entry. Pillar 2 human layer required for falsification detection.
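The daily hashing step itself is a few lines of standard cryptography. The sketch below uses SHA-256 over a canonical JSON serialisation — a reasonable choice, though VIS does not mandate a specific hash function or serialisation, and the on-chain anchoring step is omitted.

```python
import hashlib
import json

def daily_hash(evidence: dict) -> str:
    """Compress a day's evidence bundle into a single SHA-256 hash.
    Canonical serialisation (sorted keys, fixed separators) keeps the
    hash reproducible by any auditor from the same disclosed data."""
    canonical = json.dumps(evidence, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(evidence: dict, ledger_hash: str) -> bool:
    """An auditor recomputes the hash from the disclosed evidence and
    compares it with the value anchored on the public ledger."""
    return daily_hash(evidence) == ledger_hash

record = {"date": "2026-06-01", "litres_pumped": 4120}
anchor = daily_hash(record)        # this value is what gets written to the ledger
```

Note how this mirrors the pillar's limitation: `verify` proves the record is unaltered since anchoring; it cannot tell whether `litres_pumped` was true when first entered.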
05
The Score
Counting-Based, Zero Analyst Discretion

A composite performance score on a 0–100 scale, calculated through a transparent counting procedure that eliminates analyst discretion entirely. The score is derived from three observable counts: gates passed (maximum 4) + evidence layers active (maximum 3) + consecutive months of uninterrupted ledger records (maximum 12), summed, divided by 19, multiplied by 100.

The arithmetic is additive rather than algebraic. No weighting coefficients. No monetised proxies. No subjective adjustments at any stage. The result is transparent to a village council and credible to a pension fund. A score of 95 means: four gates passed, three evidence layers active, eleven months of clean ledger (18 ÷ 19 × 100, rounded). A score of 63 means: three gates passed, two layers active, seven months of ledger (12 ÷ 19 × 100, rounded). The calculation is visible, reproducible, and contestable — by anyone.

This design directly addresses three structural weaknesses of Social Return on Investment methodology identified in systematic reviews: poor cross-contextual comparability arising from inconsistent financial valuation conventions; reliance on analyst discretion in the selection of financial proxies for social outcomes; and resource intensity that renders complex monetisation inaccessible in low-capacity deployment contexts (Vik, 2019; Banke-Thomas et al., 2015; Nicholls et al., 2012). VIS eliminates all three weaknesses by replacing valuation with counting.

Defined decision thresholds translate the score into capital allocation signals that development finance institutions and procurement officers can act on directly, without additional interpretation. A score below 70 does not mean the project is a failure — it means specific, identifiable standards were not met, and remediation is possible. Transparency about failure is the precondition for improvement.

Formula
(Gates + Layers + Months) ÷ 19 × 100
90–100
Preferential financing eligible
80–89
Standard financing · enhanced monitoring
70–79
Restructuring required before drawdown
< 70
Financing suspended · remediation required
Cost vs SROI
$0.08 per beneficiary/year vs $0.12 conventional M&E
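The entire scoring and signalling logic fits in a few lines, which is the point of a counting-based design. A minimal sketch of the procedure described above; function names are mine, and integer rounding of the final score is an assumption.

```python
def vis_score(gates_passed: int, layers_active: int, clean_months: int) -> int:
    """Counting-based VIS score: (gates + layers + months) / 19 * 100.
    Inputs are bounded by the three observable counts (4, 3, 12)."""
    assert 0 <= gates_passed <= 4
    assert 0 <= layers_active <= 3
    assert 0 <= clean_months <= 12
    return round((gates_passed + layers_active + clean_months) / 19 * 100)

def financing_signal(score: int) -> str:
    """Translate the score into the capital allocation thresholds above."""
    if score >= 90:
        return "preferential financing"
    if score >= 80:
        return "standard financing, enhanced monitoring"
    if score >= 70:
        return "restructuring required before drawdown"
    return "financing suspended, remediation required"
```

For example, four gates, three layers and eleven clean months give 18 ÷ 19 × 100 ≈ 95, a preferential-financing signal; three gates, two layers and seven months give 12 ÷ 19 × 100 ≈ 63, a suspension signal.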
Live Calculator

Score your project.

Enter your project data across the five pillars and receive a live VIS score with a capital allocation signal. All calculation is local; no data leaves your device.

johnckamoga@gmail.com
Application Contexts

Where VIS can be applied.

VIS is designed for application across international development contexts where community-level accountability has consistently lagged behind investment commitments. Three sectors illustrate its range; the framework is not limited to them.

01
Congo Basin Critical Minerals
The DRC produces more than 70% of global cobalt, yet persistent accountability failures characterise the sector. VIS applies community-generated evidence to governance gaps that existing ESG frameworks (GRI, SASB, ISSB) have not resolved at the village level. The US-DRC Strategic Partnership Agreement (December 2025) designates Congo Basin critical minerals governance as a US national strategic priority, reflecting growing international recognition of the accountability gap.
Congo Basin · Critical Minerals · DRC
02
Clean Cooking Energy Access
558 million Africans lack clean cooking. Projects routinely report "stoves distributed" rather than "meals cooked cleanly" — a well-documented gap between output metrics and outcome reality. VIS replaces distribution counts with cook-time sensor data, community monitor documentation, and monthly fuel-purchase records, creating a three-layer evidence chain that proves actual use, not unit shipment.
Energy Access · Sub-Saharan Africa
03
Circular Economy Supply Chains
Sustainable sourcing premiums in cocoa, coffee, and cotton supply chains are paid on audit samples of 5% of farms. Child labour and deforestation shift to unaudited plots while certified boundaries remain clean. VIS applies satellite imagery, farmer mobile check-ins, and cooperative ledger hashing to create continuous community-generated traceability, not periodic audit snapshots.
Circular Economy · Supply Chain · Traceability
National and International Significance

Why VIS matters.

The United Nations 2030 Agenda for Sustainable Development requires an estimated $5–7 trillion annually in sustainable development investment. Public funds and ODA will not close this gap. Private capital must scale, but private capital will not scale on unaudited promises.

Development Finance Institutions
VIS reduces due diligence cost and replaces narrative-based impact claims with independently verifiable, counting-based scores. A single, community-generated score replaces the current bureaucratic maze of overlapping ESG frameworks, allowing capital to flow faster and cheaper to projects that prove their worth.
African Governments
VIS provides a procurement-ready accountability tool that satisfies both domestic oversight requirements and international compliance frameworks including EU CSRD and California SB-253, without requiring specialist technical capacity at the point of delivery. A ministry can insert VIS score requirements directly into tender documents.
Communities
VIS returns agency, placing the instruments of accountability in the hands of those whose outcomes are being measured. The community monitor, the WhatsApp photo, the painted meter: these are ownership instruments, not surveillance tools. When evidence is generated locally and verified publicly, the power to hold projects accountable shifts from headquarters to the village.
Use Case Registry

Submit your use case.

Have you applied VIS to a project? Document it here. Every submission becomes part of the living evidence base that the framework works in practice. Submissions are reviewed and published in the VIS Use Case Registry.

Build the evidence base
VIS v2.0 will incorporate real-world application evidence. Your submission directly informs the next version of the framework.
Get recognised
Approved use cases are published in the VIS Use Case Registry with your organisation credited as an early adopter.
Join the community
Connect with other practitioners applying VIS across Africa and internationally through the GitHub Discussions community.
Also engage on GitHub
For methodological questions, sector adaptations, and deeper discussion, join the VIS GitHub community.
Open GitHub Discussions →
VIS Use Case Submission