Civil society groups emphasize the need for high-quality public information on the performance of politicians. But does information really make a difference in institutionally weak environments? Does it lead to good performance being rewarded at the polls, or are voting decisions dominated by ethnic ties and clientelistic relations?
Enter the Metaketa project's first phase, running 7 experimental evaluations in 6 countries to answer that question: does more information change voter behavior? The results and the synthesis analysis are all coming out early next year, which is exciting, but a long way off. I was also happy to see that they have a pre-analysis plan for that synthesis work (basically a self-control mechanism to ensure that data doesn't get fudged during analysis to support preferred outcomes; unfortunately, such plans don't really get used much).
This is a great reminder of what rigorous research looks like. But it's also a reminder of why more of us don't do it. It's a long, slow haul, and it answers only very specific questions about a very specific context. But still, rigor for the win.
Lastly, it's worth noting how sharply this initiative contrasts with other prominent research efforts. MAVC comes immediately to mind. They've got deep pockets and a research mandate to generate an evidence base, and they pursue that primarily by mentoring and funding researchers and practitioners in project countries. That leads to research methods that tend towards case studies and the qualitative, so applying insights across different contexts requires careful thought and retrofitting. This is a very different generalization problem than the one that hounds the experimental approach Metaketa exemplifies.
There's also a marked contrast in profiles. MAVC is everywhere: blog posts, reports, panels at all the conferences, and a dizzying stream of glossy flyers, brochures, postcards and banners. Meanwhile, EGAP (the network dedicated to field experiments on governance that supports Metaketa) goes quietly about awarding grants to replicate and validate evidence on widely held assumptions. And by quietly I mean that multiple emails got me no additional info on what they're doing and learning.
There must be a happy medium somewhere between these two poles.