Methodical Snark critical reflections on how we measure and assess civic tech

Panel weights and voice for the voiceless (lessons from Uncle Sam’s Rock-Bottom Yankee Doodle Suicide Pact 2016)


So not even Methodical Snark can resist the US presidential elections (as christened by John Oliver).

The New York Times ran a piece this week entitled How One 19-Year-Old Illinois Man Is Distorting National Polling Averages.

Our Trump-supporting friend in Illinois is a surprisingly big part of the reason. In some polls, he’s weighted as much as 30 times more than the average respondent, and as much as 300 times more than the least-weighted respondent.

Alone, he has been enough to put Mr. Trump in double digits of support among black voters. He can improve Mr. Trump’s margin by 1 point in the survey, even though he is one of around 3,000 panelists.

Survey weighting is the first of two explanations the article provides for this polling distortion, and it does a good job describing why weighting is a challenge. It’s particularly relevant here, because surveys are in many ways the next big frontier for tech & accountability research. We’ve got a lot of case studies, but the few surveys that get conducted to assess “impact” or perceptions of tech and accountability are conducted from within the academy, so they are rarely accessible or timely enough to feed back into project design.

Practitioners (and grey zone researchers) will inevitably start doing more of this, and weighting is likely to be an especially thorny methodological issue. As the above case illustrates, this is especially true regarding underrepresented groups, who need to be overweighted to achieve representation and analytical granularity, but who can easily be over-overweighted. Think of a civic tech and accountability context where projects aim to strengthen citizen voice.
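To see how this works mechanically, here is a minimal sketch (with made-up numbers, not the Times’ actual data) of how a single heavily weighted panelist can move a weighted estimate:

```python
def weighted_support(responses, weights):
    """Weighted share of respondents who support the candidate."""
    total = sum(weights)
    return sum(w for r, w in zip(responses, weights) if r) / total

# 3,000 panelists, 40% support, everyone weighted equally.
n = 3000
responses = [i < 1200 for i in range(n)]  # 1,200 supporters
weights = [1.0] * n
base = weighted_support(responses, weights)

# Now make one panelist a supporter with 30x the average weight.
responses[-1] = True
weights[-1] = 30.0
skewed = weighted_support(responses, weights)

print(f"equal weights: {base:.3f}, with one 30x panelist: {skewed:.3f}")
```

One respondent out of 3,000 shifts the topline by over half a point, which is roughly the magnitude the article describes; within a small subgroup (such as black voters in the sample), the same weight produces a far larger distortion.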

The second issue, weighting by past vote, is less relevant here, but interesting, and it’s worth following the Times’ link to earlier reporting.

For those who want to go deep into the weeds on weighting, Dan Gillmore has an excellent read on how to correct for this using multilevel regression and poststratification (MRP), which includes a snarky complaint about how the world expects him to immediately blog about all such things.
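For intuition, here is a toy sketch of just the poststratification step (the “P” in MRP): estimate support within each demographic cell, then average the cell estimates using the cells’ population shares rather than their sample shares. The cells, responses, and population shares below are all invented for illustration:

```python
# Hypothetical sample: (demographic cell, supports candidate?) pairs.
sample = [("young", True), ("young", False), ("young", False),
          ("old", True), ("old", True), ("old", False)]

# Assumed population shares for each cell (e.g. from a census).
population_share = {"young": 0.6, "old": 0.4}

def poststratified_estimate(sample, population_share):
    """Population-weighted average of within-cell support rates."""
    by_cell = {}
    for cell, support in sample:
        by_cell.setdefault(cell, []).append(support)
    return sum(share * (sum(by_cell[cell]) / len(by_cell[cell]))
               for cell, share in population_share.items())

print(poststratified_estimate(sample, population_share))
```

Full MRP replaces the raw cell means with predictions from a multilevel regression, which stabilizes estimates for cells with very few respondents, precisely the situation where a lone 19-year-old can otherwise dominate a subgroup.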

Lastly, I join others in applauding the critiqued survey for releasing its data and method for open review, though I’ll note that one has to apply for access (I’m still waiting for approval).

