Decision-makers are under increasing pressure to justify their decisions and then account for their success (or otherwise) to a variety of stakeholders. Evidence-based management (1) only adds to this pressure. While we know intuition plays a significant role in decision-making (2-4), big decisions (Will we merge, and how? How will we change our culture?) require thoughtful deliberation as well as experimentation. A conundrum emerges, however, with programs designed to change behaviours: how do we know that the program of activities is responsible for the change?
Initiatives designed to change behaviour are notoriously difficult to assess using traditional techniques. Take a learning initiative called the After Action Review (AAR) as an example. This intervention is designed to create knowledge and new behaviours through personal and group reflection. A group is facilitated to answer and discuss three questions in relation to a current or recently completed project: What was supposed to happen? What actually happened? What accounts for the difference?
Once a program of after action reviews is in place, is it the AARs or something else creating new knowledge? This knowledge, the argument goes, should create new behaviours. But is it the knowledge gained from the AAR, or something else, creating the behaviours? Finally, these new behaviours should improve organisational outcomes. Again, are the new behaviours creating the impact, or something else? There are too many causal links in this complex system to know for sure (5).
Assessing hard facts alone is insufficient to help stakeholders appreciate the impact of a program designed to change behaviours. Qualitative perspectives are essential. The need to balance the hard facts is heightened by the increasing number and variety of stakeholders involved, each with their own criteria and needs. A new method of evaluation was required, and its development occurred in the most unlikely of places.
In 1994 Rick Davies was faced with the job of assessing the impact of an aid project on 16,500 people in the Rajshahi zone of western Bangladesh (6). The idea of getting everyone to agree on a set of indicators was quickly dismissed: there was simply too much diversity and too many conflicting views. Instead Rick devised an evaluation method that relied on people retelling stories of significant change they had witnessed as a result of the project. Crucially, the storytellers also explained why they thought their story was significant.
If Rick had left it there, the project would have had a nice collection of stories, but the key stakeholders' appreciation of the project's impact would have been minimal. Rick needed to engage the stakeholders, primarily the region's decision-makers and the ultimate project funders, in a process that would help them see (and maybe even feel) the change. His solution was to get groups of people at different levels of the project's hierarchy to select the stories they thought were most significant and explain why they made that selection.
Each of the four project offices collected a number of stories and was asked to submit one story in each of the four areas of interest to the head office in Dhaka. The Dhaka head office staff then selected one story from the 16 submitted. The selected stories, and the reasons for their selection, were communicated back to the level below and to the original storytellers. Over time the stakeholders began to understand the impact they were having, and the project's beneficiaries began to understand what the stakeholders believed was important. People were learning from each other. The approach, called Most Significant Change (MSC), systematically developed an intuitive understanding of the project's impact that could be communicated in conjunction with the hard facts.
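For readers who think in flowcharts, the selection-and-feedback flow described above can be sketched in a few lines of code. This is purely an illustration of the process shape, not part of the MSC guide: the story fields, domain names, and selection rules here are invented, and in practice each selection is made by a panel discussion, not a function.

```python
# A minimal sketch of the MSC selection flow: each field office submits one
# story per domain of interest to head office; head office selects one overall
# story; the reasons for selection are then fed back down the hierarchy.
# All names, domains, and selection rules are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Story:
    teller: str           # who told the story
    office: str           # which field office collected it
    domain: str           # the domain of interest it illustrates
    text: str             # the change story itself
    why_significant: str  # the storyteller's own reason for its significance

# Invented domain labels standing in for the project's four areas of interest.
DOMAINS = ["people's lives", "participation", "sustainability", "lessons learned"]

def office_selection(stories, domains):
    """Each office picks one story per domain (here, simply the first;
    in practice a group deliberates and records why it chose)."""
    submitted = []
    for domain in domains:
        candidates = [s for s in stories if s.domain == domain]
        if candidates:
            submitted.append(candidates[0])
    return submitted

def head_office_selection(all_submitted):
    """Head office selects a single most significant story from those
    submitted, recording the reason so it can be fed back to storytellers."""
    chosen = all_submitted[0]  # again, a deliberate group choice in reality
    feedback = f"Selected {chosen.teller}'s story on {chosen.domain}"
    return chosen, feedback
```

The point the sketch makes is structural: selection happens at each level of the hierarchy, and the reasons for selection (the `feedback`) travel back down, which is what lets storytellers and stakeholders learn from each other.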
Rick’s method was highly successful: participation in the project increased; assumptions and world views were surfaced, in one case helping to resolve an intra-family conflict over contraceptive use; the stories were used extensively in publications, educational material and videos; and the positive changes were identified and reinforced.
To date the application of Most Significant Change has been mostly confined to NGO programs and other not-for-profit organisations. But this is changing. Corporations are also recognising that issues such as culture change, communities of practice, learning initiatives and leadership development could benefit from an MSC approach. Anecdote is currently assisting one large IT and consulting company to implement MSC to evaluate the impact of its culture change program.
Jessica Dart (an Anecdote Associate) and Rick Davies have published the technique in the prestigious American Journal of Evaluation (7) and have made a guide freely available to anyone interested in implementing it. Anecdote and Jess Dart have teamed up to provide support services to corporations and public sector agencies to help them get the most out of Most Significant Change.
1. Pfeffer, J.; Sutton, R. I. Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management. Harvard Business School Press: Boston, MA, 2006.
2. Gladwell, M. Blink: The Power of Thinking Without Thinking. Little, Brown & Company: New York, 2005.
3. Klein, G. Intuition at Work. Currency Doubleday: New York, 2003.
4. Gigerenzer, G.; Todd, P. M.; ABC Research Group. Simple Heuristics That Make Us Smart. Oxford University Press: Oxford, UK, 1999.
5. Dixon, N. The Organizational Learning Cycle: How We Can Learn Collectively. Gower Publishing: Aldershot, UK, 1999.
6. Davies, R. An Evolutionary Approach to Facilitating Organisational Learning: An Experiment by the Christian Commission for Development in Bangladesh. Centre for Development Studies: Swansea, UK, 1996.
7. Dart, J.; Davies, R. A Dialogical, Story-Based Evaluation Tool: The Most Significant Change Technique. The American Journal of Evaluation 2003, 24, 137.
Would you like to streamline how you do Most Significant Change? Check out zahmoo.