Evaluating the State of Assessment



By Heather Esper and Yaquta Kanchwala Fatehi

An enlightening piece by the authors on the evolution of impact assessment since its genesis in the 1950s and '60s, where it stands today, and the mindset needed to move from debating it to extracting the most value from it.

“Socially-minded organizations must find transparent ways for assessment – especially to access investment and engage in non-traditional partnerships.”

“If the goal of an ambulance service is to provide timely and quality transport for patients to a hospital, then they should assess indicators related to their services rather than an outcome such as improved health years later.”


Recent debate has focused on whether businesses should assess impact. Erik Simanis, Head of Frontier Markets Initiative at Cornell University, argues[1] that impact assessments are good for non-profits but bad for businesses that want to profitably serve the poor, due to management perception, implementation costs and burdens on consumers. We, the authors, believe Simanis is concentrating on one type of assessment – an evaluation – since he appears to be focusing on causality. We use the following definitions in this article: assessment means using qualitative or quantitative data to better understand an issue, such as an organization's performance or its consumers' well-being; evaluation means establishing causality in order to judge an issue; and measurement implies numerical data[2]. Shortly thereafter, Irit Tamir and Mara Bolis of Oxfam America responded[3] that Simanis's stance would encourage business negligence. They see assessing impact as a means of managing risk and identifying new opportunities through customer feedback.

Context: Evolution of the assessment debate

This is not the first time we have heard this debate; it is just the latest entry in a global conversation on why and how to assess impact[4]. Approaches to impact assessment have been implemented in the international development community since the 1950s, with logical frameworks (logframes) being especially popular[5]. The logframe links an organization's inputs and activities to resulting outputs, outcomes and impacts[6].

In the 1980s, leading development theorists such as Robert Chambers advocated for the inclusion of program stakeholders in the design, analysis and reporting of impact evaluations[7]. In the 1990s, assessment continued to gain a wider following due to the increased need for accountability by nongovernmental organizations (NGOs). Alnoor Ebrahim, Associate Professor of Business Administration at Harvard University, viewed[8] assessments as a tool to track accountability. He found that assessments were increasingly used for upward accountability (to donors and governments), but identified significant potential to use them for downward accountability (to patrons and communities) and as a learning tool for innovation.

Meanwhile, Greg Dees, one of the pioneers of the social entrepreneurship field, made accountability part of the field's definition: social entrepreneurs must assess "their progress in terms of social, financial, and managerial outcomes, not simply in terms of their size, outputs, or processes"[9]. Dees and his co-author Beth Battle Anderson, also at Duke at the time (2003), recognized that while it is difficult to measure and attribute impact, social entrepreneurs should conduct internal measurement and external evaluations. They even recommended the use of indirect or leading indicators to break impact goals down into "specific, measurable process and outcome objectives."[10] Around the same time, a big push to measure results came from the Millennium Development Goals (MDGs).

Moving deeper into the debate, the conversation evolved from the need for assessment to the different metrics available and their different purposes. Sheila Bonini and Jed Emerson, who were at the Hewlett Foundation when they wrote their piece[11], pushed all organizations to concentrate on what data to track, how to track it cost-effectively and how to integrate it into decision making. Other groups similarly focused on measurement integration and the standardization of metrics – for example the Rockefeller Impact Investing Collaborative (RIIC)[12], which found the "lack of clear, consistent and credible impact" data a barrier to reaching scale. One example of standardized metrics is the Impact Reporting and Investment Standards (IRIS). Managed by the Global Impact Investing Network (GIIN) as a catalogue of output metrics, IRIS aims to increase capital flow by allowing investors to compare the social, environmental and financial performance of their investees. From there, the conversation moved to the relationship between business and poverty reduction, and assessment's role in improving[13] the work of grantors (foundations and agencies), social enterprises, NGOs and impact investors through mutual value creation.[14]

So how, then, did the debate begin to yo-yo between "to assess or not to assess"? We, and others like Howard White of 3ie[15], believe some of the confusion in the larger debate stems from interpreting assessment terms in different ways[16]. Some use a narrower definition and require a counterfactual for evaluation[17]. Another important driver of the debate is the push for randomized control trials (RCTs), which require substantial resources and impose rigid design requirements. The 2006 report by the Evaluation Gap Working Group, convened by the Center for Global Development, is widely credited with spurring the adoption of RCTs in the development sector. A widely cited example[18] is that of d.light, a company that sells solar home systems in emerging markets. Its bilateral grantor pushed it to conduct an RCT of customers' health and educational outcomes as a condition for further funding. d.light argued that the RCT design would require it to offer products for free or with heavy subsidies, partner with unfamiliar distributors and target poorer households than it usually does – which would not be a sustainable model for scaling. d.light's proposed alternative would allow it to sell products at market prices, through its planned distributors, at scale, in the areas where it operates.

Where we are now

A framework called Metrics 3.0[19], released in 2014, calls for integrating impact metrics with operational metrics[i] as well as carrying out evaluations that contribute to collective learning agendas for the broader 'ecosystem' in order to create the most value from impact assessment. Going back to the d.light example, the question 'do d.light solar units improve the welfare of selected households?' would fit the collective learning agenda[ii], as other organizations providing and investing in solar units would benefit from this knowledge. The question 'does d.light's business model improve customers' welfare?', however, would provide specific information to support d.light's operations and decision-making. This example also illustrates how an assessment's goals influence the indicators and methodology used.[20],[21]

In the past, partly due to the confusion created by this debate, organizations collected too much or too little data, leaving much of what they collected unused. Many organizations simply gathered data at donors' request, or feared that if they didn't, investors would question their investment; but they struggled to make sense of it. At the same time, organizations feared they would overestimate their impact or base their estimates on false assumptions.

Today, many groups, including ourselves, develop thoughtful organization-wide data design processes with a clear rationale that minimizes wasted resources while maximizing the value of data collection. Many professionals now accept all assessment types (impact, process, performance) and methods (qualitative and quantitative), and use a range of these to incorporate multiple perspectives and types of indicators[22],[23],[24]. One such example – the CART principles – provides helpful guidance in deciding what to assess and how to create a right-sized data collection system.[25] CART calls for indicators that are credible, actionable, responsible in their use of resources, and transportable to other contexts.

There is also a movement toward tools that allow for quicker and less resource-intensive assessment, such as rapid feedback loops, lean data, social performance metrics and impact proxies. Other ideas include the following: Unitus Seed Fund, a seed-stage venture firm operating in ten Indian cities, suggests selecting indicators based on key questions associated with the organization's stage of growth[26]. A second idea, presented by Ebrahim, suggests organizations base what they assess within a logframe on the complexity of their theory of change and operational strategy[27]. For instance, if the goal of an ambulance service is to provide timely and quality transport for patients to a hospital, then they should assess indicators related to their services rather than an outcome such as improved health years later.

Assessment for value creation

Socially-minded organizations must find transparent ways for assessment – especially to access investment and engage in non-traditional partnerships. Assessment creates value for businesses and non-profits alike: it enables learning at the organization and community levels, benchmarking, identifying causality, and course-correcting. Deciding what to assess (indicators) and how to assess it (methods and tools) depends on the most important question(s) an organization wants to answer.

So let's agree to end the assessment debate and advance to a conversation about how to extract the most value. As Emerson states[28], "In the end, I hate the whole metrics debate…It is repetitive, mind numbing and distracting from the critical task of fighting the forces presently destroying our societies and planet." To accelerate this learning, businesses should contribute to ecosystem-level learning agendas, and all organizations should strengthen their capabilities to develop strategic data collection systems. Advances in technology have enabled data collection via mobile phones and geo-spatial visualizations. However, in our push to collect data, especially from low-income areas, we must not inundate people with so many questions that we cause survey fatigue. The questions different organizations ask are likely to overlap, so we call for data sharing. We must also share results back, providing information in exchange for survey responses. We look forward to continuing this conversation with you.

Acknowledgements: We would like to thank the following persons for their time and helpful comments to strengthen our article: Scott Anderson (Managing Editor, NextBillion at William Davidson Institute), Chiropriya Dasgupta (Director, Strategic Initiatives at Enterprise for a Sustainable World), CJ Fonzi (Senior Project Manager, Dalberg Global Development Advisors), Daniele Giovannucci (President, Committee on Sustainability Assessment (COSA)), and Thane Kreiner (Executive Director, Miller Center for Social Entrepreneurship and Howard and Alida Charney University Professor at Santa Clara University). 

[1] Simanis, E. Dec 3 2014. Why Impact Assessments Are Good for Non-profits but Bad for Business, Guardian Sustainable Business the Role of Business in Development, The Guardian, Accessed 2 June 2015.

[2] Huitt, W. (2007) Assessment, measurement, and evaluation: Overview. Educational Psychology Interactive. Valdosta, GA: Valdosta State University

[3] Tamir, I., Bolis, M. Jan 19 2015. Next Thought Monday – Five Reasons Why BoP Impact Assessments Are Good for Business: Companies Overlook Risks and Miss Opportunities When They Don’t Assess Human Rights Impacts, Capturing Impact, NextBillion – Development through Enterprise, Accessed 2 June 2015.

[4] Staff Writer (2014) Why and How Social Enterprises Should Measure their Social Impact?, social platform, Accessed 15 May 2015.

[5] Roche, C. (1999) Impact Assessment for Development Agencies: Learning to Value Change. Oxford: Oxfam GB.

[6] Ebrahim, A., Rangan, V. (2010) The Limits of Non-Profit Impact: A Contingency Framework for Measuring Social Performance, Harvard Business School.

[7] Sette, C. Participatory Evaluation, Better Evaluation, Accessed 15 June 2015.

[8] Ebrahim, A. (2003) Accountability in Practice: Mechanisms for NGOs, World Development, 31 (5), pp.813-829

[9] Dees, G. (1998) The Meaning of Social Entrepreneurship, Stanford University: Graduate School of Business.

[10] Dees, G., Anderson, B. (2003) For-Profit Social Ventures, Social Entrepreneurship, Senate Hall Academic Publishing.

[11]Bonini, S., Emerson, J. (2005) Maximizing Blended Value–Building Beyond the Blended Value Map to Sustainable Investing, Philanthropy and Organizations.

[12] Olsen, S., Galimidi, B. (2008) Catalog of Approaches to Impact Measurement: Assessing social impact in private ventures, Social Venture Technology Group with the support of The Rockefeller Foundation.

[13] Kramer, M., Graves, R., Hirschhorn, J.,  Fiske, L. (2007) From Insight to Action: New Directions in Foundation Evaluation: FSG Social Impact Advisors.

[14] London, T. (2009) Making better investments at the base of the pyramid, Harvard Business Review, 87, pp. 106-113.

[15] White, H. (2010) A contribution to current debates in impact evaluation, Evaluation, 16, pp. 153-164.

[16] Maas, K., Liket, K. (2011) Social impact measurement: Classification of methods, In R. Burritt (Ed.), Environmental Management Accounting and Supply Chain Management, 27, pp. 171–202.

[17] Heider, C. (Nov 13 2013). Embrace All Evaluation Methods – Can We End the Debate?, Independent Evaluation Group, World Bank Group, Accessed 2 June 2015.

[18] Shah, N. (April 16 2015). Evaluation with Impacts: Decision-focused Impact Evaluation as a Practical Policymaking Tool [PowerPoint slides], Presented at 3ie Evidence Week: Washington, DC.

[20] IDinsight. (April 21 2015). IDinsight Presents at 3ie Evidence Week on ‘The Future of Impact Evaluation’, Accessed 2 June 2015.

[21] Shah, N. (April 16 2015). Evaluation with Impacts: Decision-focused Impact Evaluation as a Practical Policymaking Tool [PowerPoint slides], Presented at 3ie Evidence Week: Washington, DC.

[22] Heider, C. (Nov 13 2013). Embrace All Evaluation Methods – Can We End the Debate?, Independent Evaluation Group, World Bank Group, Accessed 2 June 2015.

[23] Carttar, P. (Mar 21 2014). Strategic Planning and Evaluation: Tools for Realizing Results. Stanford Social Innovation Review, Accessed 2 June 2015

[24] Gopal, S. (May 18 2015). Stop (Just) Measuring Impact, Start Evaluating. FSG Blog, Accessed 2 June 2015.

[25] Gugerty, MK., Karlan, D. (Apr 2 2014). Measuring Impact Isn’t for Everyone, Stanford Social Innovation Review, Accessed 2 June 2015.  

[27] Ebrahim, A., Rangan, V. (2010) The Limits of Non-Profit Impact: A Contingency Framework for Measuring Social Performance, Harvard Business School.

[28] Emerson, J. (Mar 12 2015). The Metrics Myth, Markets for Good, Accessed 2 June 2015.   
