Methodical Snark critical reflections on how we measure and assess civic tech

research links w 21-17

E-government projects are more successful when formal decision-making processes include stakeholders and actively manage risk, according to a survey of Swedish national government agencies and municipalities (N=550). Meanwhile, @timdavies is co-author of a paper in Science & Technology Studies that tracks how data standards influence bureaucratic processes for opening government data. The paper warns that standards can in some ways obstruct actual engagement with users, and puts a useful focus on the people in institutions just trying to get things done.

Mixed findings on social media effects this week. Chinese participants in political discourse on Weibo experience that discourse as deliberative, despite the interactions being “mostly non-dialogical and non-creative in nature, and characterised by homophily and polarisation” (new study, n=417). In the US, social media played a definitive role in determining how the Tea Party negotiated its identity and relationship with the Republican party in the course of Trump’s rise to power. Not least, it allowed for quick differentiation of activist perceptions on appropriate degrees of openness, which seem to correspond with political objectives and conceptions of political efficacy. This is described in a new paper in Social Media + Society (not to be confused with New Media & Society, I recently made that mistake > facepalm), which offers a fascinating case, though without clearly actionable findings.

Community & Resources

@Hewlett_Found continues to impress with its thoughtful and progressive thinking on evidence for accountability programming. The foundation has called for comment on drafts of three “sub-strategies” for promoting “active citizens and accountable government.” The drafts focus on fiscal transparency, governance channels and public service monitoring, and they’re very exciting.

Oxfam is building progressive bridges too, and recently invited LSE master’s students to analyze a batch of case studies on adaptive programming. There’s a draft report out for comment. Great practice, and the findings are smart too.

In other news, New Media & Society has a lit review on digital volunteers in crisis response, the LSE Research Impact Blog has more advice for academic blogging, and Erica Hagen is using a research–practitioner grant from @allvoicescount to reflect on 8 years of participatory mapping in Kibera, Nairobi.

New Data:

New tools:

In the Methodological Weeds

FOIAnet has published a 3-part methodology to rate RTI implementation (qua SDG Indicator 16.10.2). That’s exciting, both because the UN has declined to provide measurement tools and because easy tools for civil society measurement are always a win. The big question is whether @FOIAnet will provide support to ensure quality and comparable evaluations.

The development economics blogosphere had some fantastic discussions about generalizing and transporting case-based evidence, and the research design pitfalls that RCTs tend to fall into. More soon in a separate post.

Academic Opps

Miscellanea & Absurdum

