Methodical Snark: critical reflections on how we measure and assess civic tech

research links w 14-17


The weeds are deep in this one.

Findings

All the findings: @3ieNews has mapped existing evidence on citizen-state relations into a linked matrix organized by the interventions and outcomes measured, plus confidence levels. It includes “18 completed systematic reviews and two systematic review protocols, 305 completed impact evaluations reported in 280 papers, 60 ongoing impact evaluations reported in 59 papers.” And everything is linked. And it’s not ugly. Swoon. h/t @OpenGovHub

Meanwhile, @bbcmediaaction blogs on their new data portal, which collects survey data on “rarely polled” people in 13 developing countries, providing insights into media use, governance and perspectives on freedom of expression.

Online/offline: Europeans who seek out political content online tend to be more politically active, especially if they have internet at home and if internet access is more widespread in their country, but they don’t necessarily vote. That’s according to an analysis of the 2010 European Social Survey (N = 40,582; 25 countries). Meanwhile, a German study suggests online communication has a stronger effect on negative than on positive offline consumerism (boycotts vs. buycotts).

Community

Lots of cool stuff launched, released and proposed this week. Unpaywall is a browser extension that searches for free, full-text versions of articles whenever you get stuck behind a paywall, while @jwyg launched the @PublicDataLab, with about a dozen institutions involved and a focus on digital methods, public policy and participatory design, and @leotanczt thinks academia needs more crypto parties. Yup.

Elsewhere, @Rbarahona77 and @datanauta describe how analytics and data mining helped them fill holes in their advocacy strategy in Guatemala. This is what David Karpf calls Analytic Activism, and what @EngnRoom calls DIY M&E. Great to see this description of how much you can do with free tools and a mission.

Seeking greater clarity on #OpenWashing, this new paper reviews cases in NYC, London and Berlin to suggest a three-part typology of open washing (selective transparency, obfuscation and red herring releases [my terms, for brevity]).

In the Methodological Weeds

To use a simple analogy, the concept of flying is intuitively understood and appreciated by most, yet when asked to explain the underlying principles, many would fall short.

That’s Jos Vaessen describing how people like to talk about mixed methods project evaluations without really understanding what that means. His quick post is a concise reminder that the whole point of mixed methods is “to offset the biases and limitations of one method with the strengths of another,” and briefly describes the main ways they can do this.

Social Psychology Quarterly has a new article describing a method for online field experiments that uses big data to assign treatment and control groups, and which looks useful for exploring very complex online interactions within large groups. The authors spill some ink on the problems of participant retention and bias before describing the method’s application to assessing how trust is influenced by experience in the sharing economy (Airbnb). Their discussion of ethics is brief.
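If you want to see the bare bones of that kind of design, here is a minimal toy sketch of random assignment plus a difference-in-means estimate. The data, column names and effect sizes are all invented; this is not the authors’ code or their actual design.

```python
# Minimal toy sketch of a two-arm online field experiment:
# random assignment plus a difference-in-means estimate.
# Data, column names and effect sizes are all made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000

# Hypothetical user log pulled from platform data
users = pd.DataFrame({
    "user_id": np.arange(n),
    "prior_activity": rng.poisson(5, n),  # covariate for balance checks
})

# Randomly assign each user to treatment (1) or control (0)
users["treated"] = rng.integers(0, 2, n)

# Simulated outcome, only so the sketch runs end to end;
# in a real experiment this would be observed platform behavior
users["outcome"] = (
    0.3 * users["treated"]
    + 0.05 * users["prior_activity"]
    + rng.normal(0, 1, n)
)

# Average treatment effect as a simple difference in means
ate = (users.loc[users["treated"] == 1, "outcome"].mean()
       - users.loc[users["treated"] == 0, "outcome"].mean())
print(f"estimated ATE: {ate:.2f}")
```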

Speaking of ethics, @anabmap and a co-author describe the costs and benefits of crowdmapping for research data, while an LSE blog post argues that crowdsourcing research tasks is ethically flawed, perhaps unavoidably, because the researcher’s assumptions about the faceless crowd will inevitably drive research design. Basically, the crowd will hide your ethical shortcuts from you. So yeah, there’s that to feel bad about now too. #responsibledata

Meanwhile, an article in the Journal of Physics promises a method for “Finding the Most Important Actor in Online Crowd by Social Network Analysis,” and @hrdag’s Patrick Ball shares the code and method used to build a model for predicting which Mexican counties are likely to contain hidden graves (both of which are well above my intellectual pay grade). Elsewhere, research on GitHub tends to focus on interaction but often relies on small data sets and poor sampling, according to a systematic review of 80 publications from 2009 to 2016.
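For a feel for the general idea behind “most important actor” analyses, here is a toy centrality ranking with networkx on invented interaction edges. It is a generic social network analysis illustration only, not the paper’s method or data.

```python
# Toy centrality ranking on a made-up interaction graph with networkx.
# A generic SNA illustration only, not the paper's method or data.
import networkx as nx

# Hypothetical edges: who replied to or mentioned whom
edges = [("ana", "ben"), ("ben", "carla"), ("carla", "ana"),
         ("dave", "carla"), ("erin", "carla"), ("erin", "ben")]
G = nx.DiGraph(edges)

# Rank actors under a few common notions of "importance"
measures = {
    "in-degree": nx.in_degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
}
for name, scores in measures.items():
    top = max(scores, key=scores.get)
    print(f"{name}: {top} ({scores[top]:.2f})")
```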

Finally, the debate on comparative human rights indices continues, this time with a serious methodological critique of the Cingranelli-Richards (CIRI) human rights dataset. @LawrenceSaez first notes that all the CIRI data is pulled from the US State Department’s human rights reports, which carries some obvious potential for inherent bias. He then critiques the 0-2 coding scale for indicators, noting how it obviates nuance, produces statistically skewed results, and invites criticism from qualitative researchers. A very lively debate ensues in the comments.
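To see why a coarse scale invites that critique, here is a toy scoring function with hypothetical thresholds (not CIRI’s actual coding rules): wildly different underlying situations collapse onto the same score.

```python
# Toy illustration of how a coarse 0-2 ordinal scale flattens nuance.
# Thresholds are hypothetical, not CIRI's actual coding rules.
def coarse_score(reported_violations: int) -> int:
    """Map a raw violation count onto a 0-2 scale (2 = none reported)."""
    if reported_violations == 0:
        return 2
    if reported_violations < 50:
        return 1
    return 0

for count in (1, 49, 50, 5000):
    print(f"{count:>4} violations -> score {coarse_score(count)}")
# 1 and 49 get the same score, as do 50 and 5000.
```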

Academic Opps

Hires:

Calls for Papers/participation

Miscellanea & Absurdum
