For the last few days, I’ve been at the ICTD (Information Communication Technology for Development) conference in Atlanta. Apart from a broader discussion that raised some very important points about the word ‘development’, its use, and its impact, another big discussion about failure keeps coming up. Whatever we mean by failure (which is so deeply tied to our perspective, attitude, and definition of success/development), it’s a pretty common theme that a lot of projects definitely don’t work.
At ICTD there has been mention of a lot of projects that didn’t go well–from the light-hearted Fail Faire to a number of individual presentations. Discussing when things don’t go as planned is important, and these panels and events have played an essential role in creating the open and honest dialogue that is necessary–so I applaud the participants and organizers for this.
The idea behind the Fail Faire comes from a similar idea in the start-up world: ‘failing fast’. In both cases we try to turn failure into an inflection point and a teachable moment, a chance to change our ideas and evolve our assumptions. But while we might praise a new start-up for jumping out of the gate at full speed toward their pivot point of total failure, I don’t think this model is as applicable when you are working with vulnerable populations, non-market groups, or public services and fundamental needs. We aren’t just gambling with an investor’s funds. Importantly, I think we can do better than failing fast, and especially better than failing often.
The biggest challenge of this type of work is making sense of our assumptions. From the big ones, like assuming that development should look a particular way, to the smaller ones, like choosing a particular color, the more work we can do to check and change our assumptions, the better work we can do. Also note that assumptions aren’t just a foreign thing; they come from all sides, and local groups can also assume things, for example that an unfamiliar technology can and will function in a particular way.
The result of our discussions of failure is a growing list of past assumptions–a database of things to look out for and check off in each project. This is not a waste of time; we can learn a lot from these lists, but we will never be able to have and use an exhaustive list. So what we really need are relationships and partnerships across communities–geographic communities, expert communities, practitioner communities–that constantly teach us to identify our assumptions. Working with new people lets us practice finding our assumptions and questioning our expertise, not so we can list and remove them all, but so we can learn how to become aware of them and eventually check or change them before a big failure.