Methodical Snark: critical reflections on how we measure and assess civic tech

Research Links (w24/16)

Papers/Findings

  • Making All Voices Count this week reported on their recent Learning Event, in a document that collects some useful schematics and tools for thinking about civic tech programming, and also captures some of the practitioner thinking about what it all means.

    A certain scepticism and sense of let-down have been expressed by some observers, but this may have more to do with the way civic technologies were described than with their actual impacts on the ground.

    The report also includes some compelling thoughts on what “practitioners” want research to do (p42): embed itself in projects to contribute practically to generating useful information, and provide in-depth evaluations.

  • MAVC also released a couple of “Research Briefs” following up on the recent IDS Bulletin article When Does the State Listen (rarely): one on Ghana (which argues that urgency surrounding policy moments can be a positive catalyst for collaboration with civil society [at least when interests and understandings of that urgency are aligned]) and one on Kenya (which argues that the explosion of technology and government technology rhetoric has not fundamentally transformed governance, largely due to a lack of engagement [though unsurprising, this brief appears to be based on some research, unlike the Ghanaian piece, though the sample is quite small: 120 “young people”, 5 bureaucrats and 2 elected politicians]).
  • A NetChange report reviewed 47 “successful” (mostly North American) campaigns to identify success factors, and concludes that “directed network campaigns” are most successful. It sketches what “campaign excellence” looks like and offers three commonsensical recommendations to campaigns. The methods are fuzzy: data collection was not standardized (the authors participated in 16 campaigns and interviewed 11), success is defined as a relationship between impact (undefined) and resources, and the pattern-mapping method used to review the data was either not scripted or its coding scheme not explained.
  • Global Integrity released its research report on the OGP, composed of “five exhaustively researched case studies.” The report looks at whether/how OGP opens up policy or advocacy spaces for reform. The methods are interview-driven and loosely sociological. Key finding: context is king.
  • Facebook published details of its research ethics review process in an academic law review (interesting choice). This is a milestone for transparency, but the process it describes is not obviously any worse than academic review processes.

In the Methodological Weeds

  • Brown and Heathers present the GRIM test, a quick way to identify reporting errors in the summary statistics of empirical research. They found that over half of the 260 psychology articles they reviewed for their paper contained errors. A minimal sketch of the check appears after this list.
  • RCT-YES is “a free software tool that allows you to easily analyze data and report results on the effectiveness of programs in your context.” Intriguing; a bare-bones illustration of the kind of estimate such tools report also follows below.
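
The GRIM check is easy to reproduce: the mean of n integer-valued responses (Likert scores, counts, and so on) can only take values of the form k/n, so a reported mean that rounds to nothing of that form is arithmetically impossible. The function below is my own minimal sketch of that idea in Python, not Brown and Heathers’ code; it assumes integer data and a mean reported to a fixed number of decimals, and it ignores edge cases around rounding conventions.

    def grim_consistent(reported_mean, n, decimals=2):
        # The mean of n integers must equal k/n for some integer k.
        # Test whether any nearby k/n rounds to the reported value.
        target = round(reported_mean, decimals)
        k = round(reported_mean * n)
        return any(round(c / n, decimals) == target for c in (k - 1, k, k + 1))

    # Example: a mean of 5.19 from 28 integer scores is impossible...
    print(grim_consistent(5.19, 28))  # False -> flag for checking
    # ...while 5.18 from the same sample size is achievable (145/28).
    print(grim_consistent(5.18, 28))  # True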

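RCT-YES itself handles design-based estimation and reporting, and I haven’t looked under its hood; the snippet below is not its method, just a bare-bones reminder of the core quantity such tools report: an unadjusted difference-in-means impact estimate with an unequal-variance standard error, computed here on hypothetical treatment and control outcomes.

    import math

    def diff_in_means(treatment, control):
        # Unadjusted impact estimate for a two-arm randomized trial:
        # difference in group means, with a simple unequal-variance
        # (Welch-style) standard error.
        n_t, n_c = len(treatment), len(control)
        mean_t = sum(treatment) / n_t
        mean_c = sum(control) / n_c
        var_t = sum((y - mean_t) ** 2 for y in treatment) / (n_t - 1)
        var_c = sum((y - mean_c) ** 2 for y in control) / (n_c - 1)
        impact = mean_t - mean_c
        se = math.sqrt(var_t / n_t + var_c / n_c)
        return impact, se

    # Hypothetical outcome data, for illustration only.
    impact, se = diff_in_means([6, 7, 5, 8, 7], [5, 4, 6, 5, 5])
    print(f"impact = {impact:.2f}, se = {se:.2f}")
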
Commentary/Community

  • A new book on the quantification of politics discusses the exercise of power in international measurement efforts.
  • The Open Data Research Consortium is crowdsourcing research questions.
  • Lies, Damn Lies and FOI Statistics. FOIman (UK) notes the lack of statistics on FOI requests and responses and the lack of comparability in the data that is available, but suggests that the consistent increase in UK FOI requests in recent years may have just peaked.

Absurdum
