Methodical Snark critical reflections on how we measure and assess civic tech
Tag: RCTs

Roundup: participation is up in Latin America, nobody’s paying for the data revolution, and somebody finally asked the activists what research they actually want

Last week in civic tech saw a new index on civic engagement in Latin America, findings on government-run crowdsourcing initiatives, lessons from m-health pilots, and some excellent summaries from the world of development research. Plus major geekdom on QCA methods, and, for the first time I'm aware of, actual research on what kind of research activists want.

Mechanism Mapping: a tool for determining when programs can be scaled or adapted

A recent Oxford white paper proposes mechanism mapping as a method for determining when the results of policy evaluations should be scaled or adapted to other contexts. This is a compelling contribution to ongoing debates about the external validity of RCTs. More importantly, it's a simple and useful tool for thinking about when and how civic tech programs work across different contexts.

research links w23-24/17

Findings Research on nearly three decades of democratic innovation and e-participation in Latin America (Brazil, Colombia, Mexico, and Peru) has some interesting findings. According to an Open Democracy blogpost (the actual project’s website is down): civil society participation programming uses tech more often than not, and smaller countries are less prolific than large countries in terms of tech...

Case by case: what development economics can teach the civic tech and accountability field about building an evidence base

Warning: long post, deep weeds. Last week saw some really interesting thinking in the development economics blogosphere, focused on design questions for external validity (the applicability of case-specific findings to other cases). This is a central question for research on civic tech and accountability programming, which talks a lot about wanting an evidence base, but remains dominated by case...

research links w17/17

Findings Power users of civic reporting platforms tend to cluster geographically and spread platform use through their neighborhoods. This is the main finding of new research on 311 platforms in San Francisco (surveys, n=5k over 5 yrs), though the title and abstract are misleading, promising insights on “co-production” more generally (the authors reference the distinction, but...

research links w11/17

Findings Voice online: Twitter advocacy can bypass mainstream media that excludes non-elite voices, according to a study of how #IfTheyGunnedMeDown was used following the 2014 police shootings in Ferguson, Missouri. That’s good news for digital advocacy innovators, but it's important to remember that people don’t feel safe online and don’t understand how their personal information gets...

research links w5/17

Papers & Findings What makes for a strong and democratic public media? According to comparative research on “12 leading democracies,” it’s all about multi-year funding, legal charters limiting government influence, arm's-length oversight agencies, and audience councils. Compelling, but not shocking. Similarly, we know that the internet doesn’t drive democracy, but increased...

research links w4/17

Papers & Findings The world is ending. The 2016 Corruption Perceptions Index finds links between corruption and inequality, and notes falling scores for countries around the world. The Economist Intelligence Unit’s Democracy Index is titled Revenge of the “deplorables”, and notes a worsening of the worldwide “democratic recession” in 2016. Civic techs. What are...
