
Monday, October 10, 2011

Collective Impact Follow Up | By Amy Delamaide and Seth Bate

About a million years ago—or maybe just a few months—I wrote a post about an article we at CCSR are reading, “Collective Impact” by John Kania and Mark Kramer. I promised a follow-up post once we had discussed it at staff meeting.

Your wait is over. Here is that promised follow-up post.

We talked about the article at our August 10th staff meeting. In no particular order and without attribution to the staff members who contributed, here are some things we discussed:
  • Communication is important to keeping collective impact efforts going. When different organizations are working on the same issue, sharing what each organization is doing and the impact it is seeing would energize the other organizations and support mutually reinforcing activities.
  • The idea of collective impact seems rather utopian. In real life, it was suggested, change takes much longer than the article indicates. The work is never done, and practitioners are constantly revising their approach.
  • It is worth exploring what barriers exist that prevent us from moving towards collective impact. How do you reinvigorate organizations at a grassroots level when they are in crisis or under stress, such as many are in these economic times?
  • When doing research, especially participatory or action research, it is worth engaging the people doing the work as co-researchers and co-evaluators. This could result in having several “layers” of researchers—the participants in an intervention, the direct service staff delivering an intervention, and those academics observing at a distance could all contribute as researchers.
  • It is useful to us as an organization to continue sharing articles and periodically discussing them as a large group. This makes sense for us as a university-based center where continued learning is valued. This might be something that makes sense for your organization, too.
We’ve continued hearing “backbone support organization” and “collective impact” in meetings with partners, so the ideas from the Kania and Kramer article are definitely worth grappling with if you haven’t yet. There is also a blog where the authors and other contributors are continuing to develop their ideas: Collective Impact Blog. Check it out.

Wednesday, September 15, 2010

Wichita Survey Takers Wanted

Hey ICT people-

One of our researchers – Emily Grant – needs your help! She’s working on her dissertation about the beliefs/attitudes that Wichita residents have about the environment and needs 1000 ICT residents to fill out her survey.

This survey is purely to gather information; there are no right or wrong answers.

So help her out by donating 15 minutes of your time! http://wichita.kumc.edu/care/

Photo courtesy of Sean McGrath

Friday, February 5, 2010

Staying on Top of Change: The Value of Research and Evaluation Part Three | Tara Gregory

My last post considered why it is important to measure whether your program has made a difference. The second issue is the importance of evaluation for staying accountable during times when funds are tight.

It may seem like a luxury to implement an evaluation when people are in need of services. But implementing an inappropriate, ineffective or damaging program is clearly not a good use of funds. A common problem for organizations is not tying their programs to clear needs or intended outcomes.

A formative evaluation (i.e., a needs or asset assessment), for example, can help an organization identify what issues need to be addressed, the population most affected, and the potential for change. This is always a crucial step, but even more so when social conditions, and the funding attached to them, are particularly unstable.

A summative evaluation (i.e., outcome evaluation) can provide evidence that the program—and the funding that supported it—made a difference in the lives of recipients and/or the community. Again, in times of economic and social uncertainty, an organization that can point to evidence of need and effectiveness has an advantage in making the case that these programs are sound investments.

In the last few posts, I’ve tried to make the case that organizations can help sustain themselves in the face of societal and economic shifts by evaluating the needs and outcomes of their service population. I recognize that, as the Research and Evaluation Coordinator, I might be a bit biased toward my area of interest and expertise. But change is inevitable, both societally and in the lives of those served by non-profit, faith and community-based organizations. Making evaluation part of any program helps ensure that change isn’t an unexpected obstacle or trauma…but evidence of good, well-informed work.



Photo courtesy of David M. Goehring

Wednesday, February 3, 2010

Staying on Top of Change: The Value of Research and Evaluation Part Two | Tara Gregory

Evaluation is directly connected to organizational effectiveness.

There are two issues that are particularly salient here, especially when larger societal changes are swirling around organizations. First, just because an organization implements a program or activities doesn’t mean it has made a difference. Social services aren’t just about numbers: the number of people served, the number of sessions held, the number of resources provided. Those things are easy to count, and some organizations look at these numbers as evidence of doing “a good job.”

But without true evaluation—which looks at the actual impact on recipients and the resulting changes created in their lives—there’s no real measure as to whether those served are gaining anything of value. If changes DO take place, and there’s been no evaluation, it’s hard to tell whether the program contributed. What’s worse? Not knowing if it’s done something harmful.

Providing services without knowing their impact on recipients is like a doctor doing a procedure without paying attention to whether it helped or hurt the patient. Just like diagnostic or follow-up exams, program evaluations help outline and document:

•    The need for and purpose of the program (needs assessment and outcomes identification)
•    How it was implemented (fidelity measures)
•    How recipients responded (process measures)
•    How they were changed (outcome measures)

All of these evaluation elements help increase the likelihood that programs stay true to their intended purpose, do no harm, and are changed appropriately when they’re off target. 

My next post will look at the second issue that is important to consider when planning your research and evaluation.


Photo courtesy of Yasser

Monday, February 1, 2010

Staying on Top of Change: The Value of Evaluation Part One | Tara Gregory

Like most people, I don’t really love change…especially when it comes at me unexpectedly. But for our Research and Evaluation team at CCSR, change is the currency of what we do. Whether we’re looking for change in individuals, settings, organizations, or communities, it’s an indicator that something is “working.” I use the term working because changes can be positive or negative, but either way implies that an action has had an effect.

Right now, nonprofits and agencies who work with CCSR are thinking about changes related to the effects of the economy on organizational stability and conditions for those they serve. In changing economic times, many organizations batten down the hatches by cutting activities that may seem superfluous or not of direct benefit to service recipients—evaluation activities are often the first to go.

However, evaluation that documents change, whether for individuals, organizations, or communities, is key to maintaining effectiveness and proving an organization’s worth.


Photo courtesy of Mike Baird