Using a multilevel linear model to account for the hierarchical structure of our survey data, we find evidence that performance assessments yield greater policy influence when they make an explicit comparison of government performance across countries and allow assessed governments to participate in the assessment process. This finding is robust to a variety of tests, including country-fixed and respondent-fixed effects.
Whoa. That’s from a new AidData working paper on global performance assessments (GPAs): international measures of how well countries do at combating corruption, ensuring fair elections, opening data, or what have you.
Those findings aren’t shocking (that ranking countries can motivate govt actors in crude ways has become almost as much a platitude as the idea that participatory research enhances uptake), but they’re exciting because they are so clearly and directly relevant to the design of comparative assessments, and this study “feels” robust enough to carry some weight in conversations with the people who manage the budgets and optics of such projects. The study rests on econometric analysis of data from the 2014 Reform Efforts Survey (n = 3,400 govt officials in 123 low- and middle-income countries) covering 103 different GPAs, and using elite survey data is a smart way to get around the problems of measuring influence by policy outcome.
The treatment of GPA characteristics is a bit crude, though. Eight independent variables are identified at the GPA level (bi-/multilateral, whether govts are involved, whether data is public, etc.), but none of them addresses policy areas or types of norms. Dan Honig’s research has suggested how important this can be for mobilizing the soft power of GPAs, and my own work on OGP suggests that it’s critical.
So not even MethodicalSnark can resist the US presidential elections (as christened by John Oliver).
The New York Times ran a piece this week entitled “How One 19-Year-Old Illinois Man Is Distorting National Polling Averages.”
Our Trump-supporting friend in Illinois is a surprisingly big part of the reason. In some polls, he’s weighted as much as 30 times more than the average respondent, and as much as 300 times more than the least-weighted respondent.
Alone, he has been enough to put Mr. Trump in double digits of support among black voters. He can improve Mr. Trump’s margin by 1 point in the survey, even though he is one of around 3,000 panelists.
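The arithmetic behind that kind of distortion is easy to sketch. Here’s a toy illustration (the numbers are made up for illustration, not the actual poll’s weights) of how one heavily up-weighted panelist can swing a subgroup estimate:

```python
# Toy illustration of panel-weight distortion.
# Hypothetical numbers -- NOT the actual poll's weights or sample.

def weighted_share(support, weights):
    """Weighted proportion of respondents supporting a candidate."""
    return sum(s * w for s, w in zip(support, weights)) / sum(weights)

# Imagine 100 panelists in a small demographic cell; only one supports
# the candidate (1 = supports, 0 = does not).
support = [1] + [0] * 99

equal_weights = [1.0] * 100          # everyone counts the same
up_weighted = [30.0] + [1.0] * 99    # the lone supporter weighted 30x

print(weighted_share(support, equal_weights))  # 0.01 -> 1% support
print(weighted_share(support, up_weighted))    # 30/129, roughly 23% support
```

One respondent, weighted 30 times the average, turns a 1% cell estimate into roughly 23%: small cells plus extreme weights are exactly how a single panelist ends up moving a national topline.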
Survey weighting is the first of the two explanations the article offers for this polling distortion, and the article does a good job describing why it’s a challenge. Continue reading “Panel weights and voice for the voiceless (lessons from Uncle Sam’s Rock-Bottom Yankee Doodle Suicide Pact 2016)”
Andrew Gelman gives a great talk on how data gets abused in research and politics. He goes a bit into the statistical weeds at times with t- and p-values and the like, but he’s also a pleasure to listen to. And he gives some great examples of both academics and public figures who either “treat statistics as a means to prove what they already know, or as hoops to be jumped through.” Continue reading “Crimes against data, talk by Andrew Gelman”
I just attended the digital methods summer school, hosted by the University of Amsterdam initiative of the same name. It’s something I’ve wanted to do for years, but only now had the opportunity as a PhD candidate. It was worth the wait, and here’s a quick summary of what I learned about the methods, the tools, and the course.
“Digital methods” could mean a lot of different things, but there’s a lot at stake in the rhetoric. Digital humanities, data journalism, webometrics, virtual methods, data science, oh my. Cramming the internet into social science research makes for a complicated landscape, and there’s ontological and political work to be done in how academic schools and approaches distinguish themselves.
Digital methods stakes out its turf with a two-part move: Continue reading “What I Learned about Digital Methods”
Open Knowledge International recently asked for feedback on survey questions for the 2016 Open Data Index. This is great, and has produced a modest but likely useful discussion about improving the Index’s processes for national research, as well as the resulting data. But regardless of how much effort goes into fine-tuning the survey questions, there’s a fundamental problem underlying the idea of an international open data index: there’s a good argument to be made that you simply can’t compare the politics of #open across countries. Open Knowledge should think carefully about what this means when refining how they present the Index, and see what can be learned from the last 15 years of experience with international indices on human rights and governance. Continue reading “Apples, oranges and open data”