Most Significant Change – a primer

Posted by Shawn Callahan - September 19, 2007
Filed in Evaluation

“Not everything that can be counted counts, and not everything that counts can be counted.”

When Einstein uttered these words, little did he know that he was stating the case for techniques like Most Significant Change (MSC).

MSC is a simple process for helping senior decision-makers develop a gut feel for what an initiative has achieved. It’s not a replacement for gathering and analysing the numbers. Rather, it’s a supplementary evaluation approach that systematically develops decision-makers’ intuitive knowledge. And research shows that many of the decisions we make are based on judgement and intuition, so it’s a part of our knowledge we mustn’t ignore [1, 2].

Here’s how MSC works. It can be done in four steps.

STEP 1 – COLLECT STORIES OF SIGNIFICANT CHANGE

The process starts by asking two simple questions of the people affected by the initiative of interest.

1. What is the most significant change that has happened since the initiative started?

2. Why is this change significant for you?
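If you want to keep the answers in a structured form for the selection workshops (a spreadsheet works, as does a tool like Zahmoo mentioned under Additional resources below), one record per story is enough. Here is a minimal sketch in Python; the field names and the example content are illustrative assumptions, not part of the MSC method itself.

```python
# A minimal, assumed structure for a collected story - not a prescribed MSC format.
from dataclasses import dataclass
from datetime import date


@dataclass
class SignificantChangeStory:
    storyteller: str       # person affected by the initiative
    collected_on: date
    change_described: str  # answer to "What is the most significant change?"
    why_significant: str   # answer to "Why is this change significant for you?"


stories = [
    SignificantChangeStory(
        storyteller="Field officer, northern region",
        collected_on=date(2007, 9, 1),
        change_described="Farmers now run their own seed-sharing meetings each month.",
        why_significant="It shows the program changed behaviour, not just awareness.",
    ),
]
```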

STEP 2 – IDENTIFY AND ASSEMBLE THE DECISION-MAKERS WHO NEED TO KNOW WHAT REALLY HAPPENED AS A RESULT OF THE INITIATIVE

This step is crucial to the success of the evaluation and consists of the evaluation designers asking the question, “Who needs to know, in their gut, the impact this initiative is having?” These decision-makers could be at any level of the organisation, in any location. The evaluation designer then organises the decision-makers into groups of 6-8 people and arranges for these groups to meet for 90 minutes or so to consider the significant change stories collected in Step 1.
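As a rough illustration of the logistics only, here is one way the grouping could be scripted. The target panel size and the round-robin assignment are my assumptions; in practice the groups are usually arranged by hand around availability and seniority.

```python
# A rough sketch of splitting decision-makers into selection panels of roughly
# 6-8 people. The target size and round-robin assignment are assumptions only.
def arrange_into_panels(decision_makers, target_size=7):
    """Split a list of names into panels of roughly equal size (about 6-8 people)."""
    num_panels = max(1, round(len(decision_makers) / target_size))
    panels = [[] for _ in range(num_panels)]
    for i, person in enumerate(decision_makers):
        panels[i % num_panels].append(person)
    return panels


names = [f"Decision-maker {i}" for i in range(1, 21)]
for panel in arrange_into_panels(names):
    print(len(panel), panel)  # 20 people end up in three panels of 7, 7 and 6
```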

STEP 3 – SELECT THE MOST SIGNIFICANT CHANGE STORY

When you have your decision-makers in a room, the facilitator guides the group in a discussion of 4 to 6 of the collected stories. We encourage the group to read each story and then argue for why they think a particular story is the most significant. This discussion helps embed the stories in the minds of the participants while raising issues of strategy and implementation. The participants experience a lively debate and get to know one another and the issues affecting people in the field. Most importantly, they develop an intuitive understanding of the impact the initiative is having. At the end of the session the group agrees on a most significant story and describes why they selected it. They also identify actions they will take to reinforce the good things that are happening and to disrupt the undesirable outcomes.

The result is communicated back to the original storytellers. The most significant change story from each group is then made available to the next level in the organisation, such as an executive group, which repeats the process with this smaller set of stories.
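To make the cascade concrete, here is a small sketch of the flow. The vote-counting mechanism and the story names are hypothetical; in a real workshop the group usually settles on its story through facilitated discussion rather than a formal ballot.

```python
# A sketch of the selection-and-escalation flow. Voting is an assumed mechanism;
# MSC panels often reach their choice through facilitated debate instead.
from collections import Counter


def select_most_significant(candidate_stories, votes):
    """Return the candidate story with the most votes, plus the full tally."""
    tally = Counter(v for v in votes if v in candidate_stories)
    winner, _ = tally.most_common(1)[0]
    return winner, tally


# Hypothetical example: two panels each discuss a handful of stories and vote.
panel_results = [
    (["Story A", "Story B", "Story C", "Story D"],
     ["Story B", "Story B", "Story A", "Story B", "Story C", "Story B"]),
    (["Story E", "Story F", "Story G"],
     ["Story G", "Story G", "Story E", "Story G", "Story F", "Story G"]),
]

# Each panel selects one story; the winners then cascade to the next level,
# which repeats the same selection with the smaller set of stories.
panel_winners = [select_most_significant(stories, votes)[0]
                 for stories, votes in panel_results]
executive_votes = ["Story B", "Story G", "Story B", "Story B"]  # also hypothetical
executive_choice, _ = select_most_significant(panel_winners, executive_votes)
print(panel_winners, executive_choice)  # ['Story B', 'Story G'] Story B
```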

STEP 4 – MAKE THE STORIES AND THE SELECTION RESULTS AVAILABLE

The evaluation concludes by collating all the stories and creating a document that includes which stories were selected and why.
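A minimal sketch of what that collation might produce is below. The plain-text layout, field names and example entries are assumptions for illustration rather than a prescribed MSC report format.

```python
# A sketch of collating the stories and the selection results into a simple
# plain-text report. Field names and layout are assumptions, not an MSC standard.
def collate_report(all_stories, selections):
    """Summarise every story, flag which were selected, and record the reasons."""
    lines = ["Most Significant Change - selection report", ""]
    for story in all_stories:
        selected = story["title"] in selections
        lines.append(f"* {story['title']}{' (SELECTED)' if selected else ''}")
        lines.append(f"  Storyteller: {story['storyteller']}")
        if selected:
            lines.append(f"  Reason for selection: {selections[story['title']]}")
        lines.append("")
    return "\n".join(lines)


stories = [
    {"title": "Story B", "storyteller": "Regional manager"},
    {"title": "Story G", "storyteller": "Field volunteer"},
]
selections = {"Story B": "Showed a lasting change in how teams share lessons."}
print(collate_report(stories, selections))
```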

Invariably, lessons are learned during the process, and these ideas can then be fed into a continuous improvement process.

The selection process is frequently scheduled to occur on a regular cycle. Organisations that use MSC often leave a period of 3-6 months between selections to evaluate ongoing change.

Additional resources

How to conduct an MSC selection workshop

Dart, J. and R. Davies (2003). “A Dialogical, Story-Based Evaluation Tool: The Most Significant Change Technique.” The American Journal of Evaluation 24(2): 137.

A short history of MSC

Zahmoo – software for supporting MSC projects



References

1. Klein, G. (2003). Intuition at Work. New York, Currency Doubleday.

2. Westen, D. (2007). The Political Brain: The Role of Emotion in Deciding the Fate of the Nation. New York, PublicAffairs.

 

Assessing the impact of arseholes

Posted by Shawn Callahan - March 25, 2007
Filed in Evaluation

Bob Sutton is on a campaign against workplace arseholes. In yesterday’s post he describes Rob Cross’ work on social network analysis. In particular, he looks at how to identify people who energise and de-energise others.

Bob’s interested in ways to measure the impact of arseholes.

I am trying to figure out some ways and places to measure this stuff, and am hoping to recruit Rob to help as he has some really cool software that he uses with the companies that he works with and that are partners in his network.

One technique he might consider is Most Significant Change. While this technique won’t create a measure of arseholeness, it will give people in the organisation a very good understanding of what’s happening and provide a forum to address some of the issues. Very soon he will be able to use the Zahmoo software to support the technique.

 

One of the big misunderstandings about stories and tacit knowledge

Posted by Shawn Callahan - February 25, 2007
Filed in Business storytelling, Evaluation, Knowledge

People have heard that storytelling is great for dealing with tacit knowledge. They say things like, “If we could only capture our stories we could then capture our organisation’s tacit knowledge.”

This is the big mistake! Stories only have meaning in the context of their telling. That is, you need to tell and listen to stories to transfer (not capture) tacitly held knowledge. It’s a social process. You need to be part of the conversation.

In practice, this means creating spaces for stories to be told and listened to. We do it in a bunch of different ways depending on the needs and objectives of our clients.

For example, if we are helping tackle complex issues such as trust, leadership or culture change, we would create the space in sensemaking workshops.

If we need to evaluate the impact of difficult-to-measure initiatives, we create the space using Most Significant Change and its selection workshops.

NASA creates this space for staff to listen to and tell stories in its monthly project management seminars, where PMs discuss the stories collected in their monthly newsletter, ASK.

Everyone is busy and no one will give up their valuable time just to listen to and tell stories. But they will allocate time to evaluate a project, tackle a complex problem or learn lessons from their colleagues.

The stories don’t contain magical solutions that we can capture, dissect and unleash. Rather, they provide a language of engagement and learning, and a way to transfer what is impossible to write down and store in any database.


 

The power of ordinary practices

Posted by Mark Schenk - January 10, 2007
Filed in Anecdotes, Evaluation

An article titled ‘The power of ordinary practices’ was the seventh most-read of Harvard Business School’s Working Knowledge articles for 2006. The article includes the following:

I believe it’s important for leaders to understand the power of ordinary practices. Seemingly ordinary, trivial, mundane, day-by-day things that leaders do and say can have an enormous impact. My guess is that a lot of leaders have very little sense of the impact that they have.


 

Interview with Jess Dart

Posted by Shawn Callahan - September 11, 2006
Filed in Evaluation

I have just posted an interview with Jess Dart, co-developer of the Most Significant Change technique, over at the Zahmoo blog.


 

Meeting with the creator of Most Significant Change

Posted by Shawn Callahan - August 25, 2006
Filed in Evaluation

I met with Rick Davies this week. He’s in Melbourne visiting his family and doing some work for Oxfam. We talked about the Zahmoo project and he made some very helpful suggestions. Rick asked me to make a link from the Zahmoo home page to the MSC guidebook that he and Jess Dart put together, which is now there. If you want to know how to do MSC, this is the resource.

Rick made an interesting observation. He asked why we make so many of our ideas available, noting that this behaviour was very unlike most consultants. Before I could answer he said, “Would you like to be remembered for one idea, or would you like to be known as someone who can create and implement many ideas? It seems your website demonstrates the latter.”


 

A new application to support Most Significant Change projects

Posted by Shawn Callahan - August 16, 2006
Filed in Evaluation

Most Significant Change is a monitoring technique based on the collection and selection of stories. The technique involves collecting stories, gathering people together to talk about them and then selecting the stories they believe are the most significant. This selection process creates new conversations in an organisation while systematically developing an intuitive understanding among staff of a program’s impact. Here is a short history of the technique.

Today we are announcing that we will soon launch a new web 2.0 application that will help you run your Most Significant Change projects. The project is called Zahmoo and if you want to get an early look at the application, sign up as a beta user here.


 

Intervention design – an example

Posted by Shawn Callahan - May 16, 2006
Filed in Anecdotes, Changing behaviour, Evaluation

I’m always on the lookout for intervention design examples and I found one last week that I think you’ll like. But before I describe it, remember what we mean by an intervention: a discrete action designed to improve a system, where you can’t predict exactly how things will turn out. It’s not a project in the sense of having a clear objective and a set of milestones spread over sometimes lengthy periods.

This example is from Pfeffer and Sutton’s Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management [pp. 116–117]:

A classic demonstration of the power of external reinforcements was a study in the early 1970s at Emery Air Freight, a freight forwarder. Before the development of large package companies with their own airplanes, freight forwarders picked up packages and shipped them on airlines. They got a better rate to the extent the packages were placed in larger containers that were easier to handle. So Emery management wanted employees to put as many packages as possible into larger containers to cut freight costs.

The company conducted a performance audit and found that, although managers thought they were using larger containers 90 percent of the time it was feasible, only 45 percent of the eligible packages were actually being put into larger containers. So the company announced a new program that provided rewards such as praise—not financial rewards—for improvement.

On the first day, the proportion of packages placed in the larger containers increased to 95 percent in about 70 percent of the company’s offices. The speed of this overwhelming improvement suggests that a change in performance derived not just from the rewards that were offered, but also from the information provided that the current performance level was poor and this action—consolidating shipments—was important to the company.

Pfeffer and Sutton are careful to point out that rewards and recognition approaches don’t work in all cases. It’s one of the dangerous half truths they explore. In this case recognition is used to convey a message to staff about what is important to the company. I think it provides a useful pattern (one of many possibilities) for intervention design: identify a desired improvement that can be measured (of course many cannot—have a read of this) and heap praise on those people who are adopting the desired behaviours.

For example, if you are a scientific organisation and are unhappy with the number and quality of the papers being published, management might communicate the importance of publishing in, say, tier 1 journals and then heap praise on the people who succeed in publishing their papers in these journals. This is a better option than simply saying “we should provide better feedback and praise to our staff.”

 

Evaluating the soft stuff

Posted by Shawn Callahan - April 24, 2006
Filed in Evaluation

Decision-makers are under increasing pressure to justify their decisions and then account for their success (or otherwise) to a variety of stakeholders. Evidence-based management (1) is further increasing this pressure. While we know intuition plays a significant role in decision-making (2-4), large decisions (Will we merge and how? How will we change our culture?) will require thoughtful deliberation as well as experimentation. A conundrum emerges, however, when dealing with programs designed to change behaviours: how do we know that the program of activities is responsible for the change?
