So not even MethodicalSnark can resist the US presidential election (as John Oliver has christened it).
The New York Times ran a piece this week entitled "How One 19-Year-Old Illinois Man Is Distorting National Polling Averages":

"Our Trump-supporting friend in Illinois is a surprisingly big part of the reason. In some polls, he's weighted as much as 30 times more than the average respondent, and as much as 300 times more than the least-weighted respondent. Alone, he has been enough to put Mr. Trump in double digits of support among black voters. He can improve Mr. Trump's margin by 1 point in the survey, even though he is one of around 3,000 panelists."
Survey weighting is the first of two explanations the article provides for this polling distortion, and it does a good job describing why weighting is a challenge. Continue reading “Panel weights and voice for the voiceless (lessons from Uncle Sam’s Rock-Bottom Yankee Doodle Suicide Pact 2016)”
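The quoted arithmetic is easy to check. Here is a rough sketch, using illustrative numbers I've assumed rather than the actual USC/LAT panel data: a panel of about 3,000 that would otherwise split evenly, with one pro-Trump respondent weighted about 30 times the average.

```python
# Illustrative sketch: how one heavily weighted panelist can move a poll's margin.
# All numbers here are assumptions for the sake of the arithmetic.

n = 3000                        # panelists
avg_weight = 1.0
big_weight = 30 * avg_weight    # one respondent weighted ~30x the average

# Suppose the other 2,999 panelists split evenly (margin = 0 without him).
others_weight = (n - 1) * avg_weight
trump_others = 0.5 * others_weight      # weighted Trump support among the rest

total = others_weight + big_weight
trump_share = (trump_others + big_weight) / total   # he supports Trump
margin = trump_share - (1 - trump_share)            # Trump share minus the rest

print(round(100 * margin, 2))   # shift in the margin, in percentage points
```

With these assumptions his weight share is about 1% of the panel, and because the margin counts his vote both for one candidate and against the other, the margin moves by roughly twice that share above an even split — about one point, consistent with the article's claim.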
Andrew Gelman gives a great talk on how data gets abused in research and politics. He goes a bit into the statistical weeds at times with t- and p-values and the like, but he’s also a pleasure to listen to. And he gives some great examples of both academics and public figures who either “treat statistics as a means to prove what they already know, or as hoops to be jumped through.” Continue reading “Crimes against data, talk by Andrew Gelman”
I just attended the digital methods summer school, hosted by the University of Amsterdam initiative of the same name. It’s something I’ve wanted to do for years, but I first had the opportunity as a PhD candidate. It was worth the wait, and here’s a quick summary of what I learned about the methods, the tools, and the course.
“Digital methods” could mean a lot of different things, but there’s a lot at stake in the rhetoric. Digital humanities, data journalism, webometrics, virtual methods, data science, oh my. Cramming the internet into social science research makes for a complicated landscape, and there’s ontological and political work to be done in how academic schools and approaches distinguish themselves.
Digital methods stakes out its turf with a two-part move: Continue reading “What I Learned about Digital Methods”
Open Knowledge International recently asked for feedback on survey questions for the 2016 Open Data Index. This is great, and has produced a modest but likely useful discussion on improving Index processes for national research, as well as the resulting data. But regardless of how much effort goes into fine-tuning the survey questions, there’s a fundamental problem underlying the idea of an international open data index. There’s a good argument to be made that you simply can’t compare the politics of #open across countries. Open Knowledge should think carefully about what this means when refining how they present the Index, and see what can be learned from the last 15 years of experience with international indices on human rights and governance. Continue reading “Apples, oranges and open data”