Earlier this week, I posted a reply to Thanassis Cambanis’s article, “Call to Arms,” from the July 31, 2011 Boston Globe. Today, Mr. Cambanis posted a reply to my response, which you can find here. I really appreciate that he took some time to do so.
I don’t wish to belabor my points, but Mr. Cambanis brings up a common critique of our data set, one I’d like to address (I find myself doing so quite often). He quotes Mark Kramer as saying:
I find their argument very intriguing, but one clear problem is that their database unavoidably omits countless non-violent resistance campaigns that never begin (because they are deterred) or that are crushed at a very early stage before they become widely known. Hence, the database is biased toward successful cases of non-violence, leaving ample room for debate about the authors’ conclusions. Moreover, even if Stephan and Chenoweth are correct in their aggregate analysis of non-violent resistance campaigns unadjusted for size, the existence of crucial outliers — China in June 1989, Burma in 2007, Zimbabwe in 2005 and 2008, and Iran in June-July 2009 — raises further questions about the validity of their argument. Suffice to say that more research will be needed.
Now, I couldn’t agree more with Prof. Kramer on his basic argument. In collecting the data, my main concern was that we were missing many nonviolent resistance campaigns that never began because they were deterred, or that were suppressed early on. This is the classic “selection effect” problem, and it’s super hard to deal with, especially with the data in its aggregate form. Kramer himself argues that this problem is “unavoidable.” I go into lots of detail about the ways we tried to get around this problem in the supplementary web appendix, which is available (all 183 pages of it!) here. But I’d like to mention two major points:
1. For inclusion in the database, both nonviolent and violent campaigns had to exceed 1,000 active participants (and, in the case of the violent insurgencies, 1,000 battle deaths). We chose these strict inclusion criteria in part to deal with the problem of selection effects. As many other political scientists do, we qualify our argument accordingly: once a campaign has achieved a level of active participation above 1,000, nonviolent resistance is more successful.
2. The same selection problem (i.e. the tendency to see only the campaigns that are not crushed at the outset) also applies to violent insurgencies. Many are crushed in their infancy or are deterred from emerging in the first place, just like nonviolent campaigns are. Even if the data are somewhat biased toward successful nonviolent campaigns, they would also be biased toward successful violent campaigns (i.e. those that attain the threshold). So when we compare the relative effectiveness of nonviolent and violent insurgencies, that bias should not be driving their success rates relative to one another.
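The symmetry argument in point 2 can be illustrated with a toy simulation. To be clear, the numbers below are entirely hypothetical and do not come from our data set; the sketch only shows that when the same early-crushing filter removes failed campaigns of both types before they reach the observation threshold, both observed success rates are inflated, yet the gap between them survives.

```python
import random

random.seed(0)

def observed_success_rate(true_success_p, crush_p, n=100_000):
    """Simulate n attempted campaigns of one type.

    Each campaign succeeds with probability true_success_p. Failed
    campaigns are crushed before reaching the observation threshold
    (and so never enter the dataset) with probability crush_p;
    successful campaigns, by definition, grew large enough to be seen.
    Returns the success rate among observed campaigns only.
    """
    successes = observed = 0
    for _ in range(n):
        succeeded = random.random() < true_success_p
        if succeeded or random.random() >= crush_p:
            observed += 1
            successes += succeeded
    return successes / observed

# Hypothetical true success rates and a shared crushing rate,
# chosen purely for illustration.
nv = observed_success_rate(true_success_p=0.50, crush_p=0.6)
v = observed_success_rate(true_success_p=0.25, crush_p=0.6)

print(f"observed nonviolent success rate: {nv:.2f}")  # inflated above 0.50
print(f"observed violent success rate:    {v:.2f}")   # inflated above 0.25
```

Both observed rates overstate the true ones, because early-crushed failures vanish from the sample, but as long as the filter hits both campaign types, the nonviolent advantage is still visible in the surviving data.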
Is this method perfect? No. Is it the best we could do at the time? Yes. Given these limitations, do our findings say anything meaningful about the relative effectiveness of nonviolent versus violent insurrection? At these threshold levels, absolutely. Is more research required to assess the robustness of our findings and to explain the crucial outliers? Wholeheartedly, yes. In fact, it is my great hope that the field takes the empirical study of nonviolent resistance seriously, and that we start to see more refined data sets on the subject, with innovative ways to deal with problems such as these.