Image via Horia Varlan
I experimented earlier this year with a different kind of facilitation. I did it so the group I was working with could learn the differences between role and self. I played a role at the front of the room quite different from my usual approach. Instead of calling the room to order loudly, I waited quietly at the front, making eye contact and smiling to get attention. It took the room more than 10 minutes to quiet down and be ready to begin. Once I started asking questions, I decided not to be the one to acknowledge when people could talk by looking at them and nodding. Instead I sat in a chair, avoided eye contact, and did not acknowledge raised hands. When someone who had already contributed many times started talking, I quickly interrupted and asked for others to give input. The things I was doing at the front of the room were really, really different from what I’ve learned is “good” facilitating and from what I normally do. But I was trying some things out, for what I thought were good reasons.
It did not go over well with everyone. Some people’s feelings got hurt. I got specific, negative feedback on the session evaluation forms. I found it hard to get to sleep for a few nights because I kept replaying events in my mind and thinking about what I would have done differently.
Through the winding ways of the internet, the following paragraph ended up on my Tumblr dashboard last week, from an article by Adam Ruben, and it connects to this question of how often we discuss experiments that don’t work:
Last month, I learned about a publication that has been quickly gaining popularity, the Journal of Negative Results in BioMedicine (JNRBM). Published, presumably, by a gang of dour curmudgeons who hate everything, JNRBM openly welcomes the data that other journals won’t touch because it doesn’t fit the unspoken rule that all articles must end on a cheery note of promise. …You might imagine that JNRBM is a place where losers gather to celebrate their failures, kind of like Best Buy or Division III football. But JNRBM meets two important needs in science reporting: the need to combat the positive spin known as publication bias and the need to make other scientists feel better about themselves.
In the world of leadership development, do we have a bias toward talking about experiments that worked? What can we do to cultivate an environment where efforts to try something new that don’t go so well can be discussed, reviewed, and learned from?
In my case, what helped me learn from my failed experiment was support from my colleagues and conversation about how I could have communicated my intentions better, and about how they could have drawn more learning out of my interactions with the group.
What experiments have you tried that didn’t work? What factors need to be in place for you to feel free to share your negative results?