Methodical Snark critical reflections on how we measure and assess civic tech
Category: methods

New evidence on the domestic policy influence of global performance assessments

Using a multilevel linear model to account for the hierarchical structure of our survey data, we find evidence that performance assessments yield greater policy influence when they make an explicit comparison of government performance across countries and allow assessed governments to participate in the assessment process. This finding is robust to a variety of tests, including country-fixed and...
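A multilevel (random-intercept) model of this kind can be sketched as follows. This is a hedged illustration with synthetic data, not the study's actual survey: the variable names (`influence`, `comparative`, `participatory`, `country`) and effect sizes are assumptions for demonstration only.

```python
# Sketch: respondents nested within countries, modeled with a
# random intercept per country (statsmodels MixedLM).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_per = 20, 30
n = n_countries * n_per

country = np.repeat(np.arange(n_countries), n_per)
country_effect = rng.normal(0, 0.5, n_countries)[country]  # hierarchy
comparative = rng.integers(0, 2, n)     # assessment compares countries?
participatory = rng.integers(0, 2, n)   # government participates?

# Synthetic outcome: both design features raise perceived influence.
influence = (0.4 * comparative + 0.3 * participatory
             + country_effect + rng.normal(0, 1, n))

df = pd.DataFrame(dict(influence=influence, comparative=comparative,
                       participatory=participatory, country=country))

# Random intercept per country accounts for the nesting of
# respondents within countries.
model = smf.mixedlm("influence ~ comparative + participatory",
                    df, groups=df["country"])
result = model.fit()
print(result.params[["comparative", "participatory"]])
```

With enough respondents per country, the fixed-effect estimates recover the simulated effects while the group variance absorbs country-level heterogeneity.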

Panel weights and voice for the voiceless (lessons from Uncle Sam’s Rock-Bottom Yankee Doodle Suicide Pact 2016)

So not even Methodical Snark can resist the US presidential elections (as christened by John Oliver). The New York Times ran a piece this week entitled How One 19-Year-Old Illinois Man Is Distorting National Polling Averages. Our Trump-supporting friend in Illinois is a surprisingly big part of the reason: in some polls, he's weighted as much as 30 times more than the average respondent, and as much...
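The mechanics of that distortion are just weighted averaging. A minimal sketch, with illustrative numbers (not the actual panel's weights or sample), shows how a single respondent carrying 30x the typical weight moves the topline estimate:

```python
# Sketch: how one heavily weighted panelist shifts a poll average.
def weighted_mean(values, weights):
    """Weighted average of values (e.g. 1 = supports candidate)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# 100 respondents: 45 support the candidate (1), 55 do not (0).
support = [1] * 45 + [0] * 55

# Equal weights: the topline is simply 45%.
equal = [1.0] * 100
print(weighted_mean(support, equal))            # 0.45

# Give one supporter 30x the average weight: the same sample
# now reads as roughly 57% support.
skewed = [30.0] + [1.0] * 99
print(round(weighted_mean(support, skewed), 3))  # 0.574
```

The respondent's raw answer hasn't changed; the weighting scheme, meant to give voice to underrepresented demographic cells, amplifies whoever happens to occupy a rare cell.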

Crimes against data, talk by Andrew Gelman

Andrew Gelman gives a great talk on how data gets abused in research and politics. He goes a bit into the statistical weeds at times with t- and p-values and the like, but he's also a pleasure to listen to. And he gives some great examples of both academics and public figures who either "treat statistics as a means to prove what they already know, or as hoops to be jumped through...

What I Learned about Digital Methods

I just attended the digital methods summer school, hosted by the University of Amsterdam initiative of the same name. It's something I've wanted to do for years, but only first had the opportunity as a PhD candidate. It was worth the wait, and here's a quick summary of what I learned about the methods, the tools, and the course. The methods: "digital methods" could mean a lot of different things, but...

Apples, oranges and open data

Open Knowledge International recently asked for feedback on survey questions for the 2016 Open Data Index. This is great, and has produced a modest but likely useful discussion to improve Index processes for national research, as well as the resulting data. But regardless of how much effort goes into fine-tuning the survey questions, there's a fundamental problem underlying the idea of an...

