Kickstarting research on development: Right on involvement, wrong on outputs
What it is
I’ve written a lot on this site about the idea of using crowdsourced funding to support research efforts. Here’s the first example of that sort of thing that I’ve come across.
“What Works in Development: 10 Meta-Analyses of Aid Programs” is a funding call for $10,000 to support research dissemination, namely printing a book with the research results.
The research itself will clearly be done anyway; the money will help with spreading the word.
What is interesting about it
The more interesting aspect of this idea is that donors will be able to vote on which particular aid interventions should be investigated. And that’s exactly what I’m suggesting Researchity should be about.
Why I won’t be contributing
I like the idea of involving the public, but there are two reasons why I won’t actually be contributing any funds to the effort.
1. Wrong outputs
I don’t think that printing another book is what is needed. Why not just self-publish a Creative Commons-licensed e-book and make it available on Lulu.com in case somebody wants to purchase a printed copy?
I’d fund a contribution towards editing and fact-checking, but not printing. That’s the wrong way to go: printing expensive books is what keeps research out of the reach of the people who need it most.
I’d also fund other dissemination methods. How about going on a tour and running free workshops in the field on project evaluation based on the research results? Or funding students to blog about the findings?
But never, ever printing a book. There are plenty of channels for that already, and the traditional book serves its author better than its audience. So it’s not particularly compatible with the innovative nature of the funding.
2. Suspicious aim
While I have nothing against meta-analyses as such, I am extremely suspicious of the “what works” ethos. There is certainly space for sharing successes and failures, but in all cases (and above all in development and aid) these are highly contextualized and cannot be reduced to abstract methods. It’s generally not the type of program that can be evaluated, only its particular implementation. The same approach that works in one context will fail in another, depending on many local variables, including who is running the project.
I’m also highly suspicious of statements like “This book will review the quantitative evidence on the effectiveness of aid programs in a very thorough and rigorous way, using meta-analysis.” As I indicated in the previous paragraph, a “quantitative” analysis of the success of a particular aid program will say nothing about its suitability in other contexts. It may not even capture the other benefits that made the program worthwhile even where its stated aims were not met (which they rarely are). Equally, a program that quantitatively meets its aims may have done more long-term damage than good. Qualitative analysis is just as important here, if not more so. That’s not to say that meta-analysis could not be useful; the danger lies in how it will be used. I fear it will be used as a dumb on/off switch for funding or defunding programs without an in-depth evaluation of their suitability.
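To make the worry concrete, here is a minimal sketch of a standard random-effects meta-analysis model (my own illustration; the project has not said which method it will use). The pooled estimate is a single weighted average that, by design, averages over the between-context variation, which is precisely the variation I am arguing matters most.

```latex
% A minimal sketch of random-effects meta-analysis (DerSimonian-Laird style).
% This illustrates the technique in general, not the project's actual method.
%
% y_i : observed effect of the program in study/context i
% v_i : within-study sampling variance of y_i
% tau^2 : between-context variance (heterogeneity)
\[
  y_i = \mu + u_i + \varepsilon_i, \qquad
  u_i \sim \mathcal{N}(0,\, \tau^2), \qquad
  \varepsilon_i \sim \mathcal{N}(0,\, v_i)
\]
% The pooled "what works" number is a precision-weighted average:
\[
  \hat{\mu} = \frac{\sum_i w_i\, y_i}{\sum_i w_i},
  \qquad w_i = \frac{1}{v_i + \hat{\tau}^2}
\]
```

When the heterogeneity term τ² is large relative to the within-study variances, the single pooled number tells you little about what the program will do in any particular place; reading only that number is exactly the on/off-switch reasoning I worry about.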
Having said all that, I hope the project will meet its funding target and encourage other scholars to go a similar route. With any luck, they will be researching things I’d feel happy putting my own money behind.