Methodical Snark critical reflections on how we measure and assess civic tech

Research Links (w25-28/16)


4 weeks’ worth, yikes. #summer

Papers/Findings

Citizen Engagement FTW!
The Journal of Public Administration Research and Theory just released a “virtual issue” on citizen engagement, collecting the most important articles with that focus in that journal since 1995, to make some sense of how citizens actually engage with governance across the policy cycle. The editors’ take on the compilation is compelling and there are some real gems in the articles, such as those demonstrating how citizen expectations influence participation in public service accountability initiatives. Crudely summarized, some of the findings suggest that:

  • citizen reporting is less positive if citizens believe services are provided by government (here)
  • participation in reporting strengthens previously held perceptions about government performance and political orientation (here)
  • users of government websites (in the US) tend to be “white, better educated, wealthier, and younger than Internet users in general.” (here)

Meanwhile, a large-scale Finnish study suggests that Warm and Supportive Parenting Can Discourage Offspring’s Civic Engagement in the Transition to Adulthood. Specifically, the study identified negative correlations with political activism up to 10 years after “young adulthood”, and with volunteering up to 2 years after. The authors propose a “life stage” explanation.
There has also been some research on participation in health policy processes, including results from a citizen jury on emergency health care in Australia. A special issue of Health Organization and Management finds legitimacy to be a key challenge, and notes the conceptual challenges posed by the vast array of forms that participation can take.

Evaluating Open Government
A Canadian study finds that some municipalities evaluate OG internally, on the basis of how processes engage government agencies, while others evaluate on the basis of data use external to government (civil society, the private sector and other government bodies). This paper is grounded in the claim that “little research has evaluated how municipal government evaluates the success of their open data programs,” despite a significant amount of research on those metrics, most recently surveyed in this Croatian study, which cites work on evaluation frameworks by the Web Foundation, World Bank and others, before asserting that Croatia needs a legal intellectual property framework to further advance open government, all the while employing the most terrible citation convention/error I’ve ever come across.

What are infomediaries?
A forthcoming paper on intermediation in open development (including open government, open access, etc.) offers two fundamental conceptual contributions. The authors propose a “knowledge stewardship” model for considering open intermediation, which includes process-oriented responsibilities towards public goods, and suggest five “schools of thought” for thinking about the role of intermediation. These prompt some interesting research questions; to my mind they also open the way for a compelling analysis of competition in the tech-for-good scene. The matrices they offer would also be good tools for theory-of-change and program design exercises in in-country programming.

Open washing and consultation washing
A European study (focus groups in 6 European cities, on the basis of e-participation platforms being developed as part of an EU-funded project) suggests that public administrators circumvent the spirit of open data by pursuing a “strategically opaque transparency policy”, i.e. releasing the data that’s comfortable to release. The authors rightly note that this is about incentives and preserving strategic interests in institutional contexts, but their proposed platform-based solution prompts a sigh.
Meanwhile, a conference paper from March proposes a framework for understanding the uptake of policy proposals in consultation processes. The authors identify a set of contextual and proposal-related factors, then road test them on 571 proposals in policy consultations in three Spanish regions between 2007 and 2011. Their findings do not support the assumption that governments “cherry pick”, “selectively listen” or otherwise engage in #consultationwashing (approx. 2/3 of proposals were implemented, nearly 1/2 without significant change). The sample selection is well justified methodologically and practically, but still, it’s just Spain.

Open research
PASTEUR4OA has just closed out its collaboration with Open Knowledge by releasing two research briefs: one on the “true cost” of gold Open Access to the community, and one on the rise of proprietary research sharing and management platforms, which considers potential open alternatives.

FOIA
A literature review of FOI research finds wide recognition of FOI as a human right, significant skepticism about the real political impact of FOIA legislation, and an increasing turn towards secrecy by states across the globe (since 9/11).

NGO media campaigns and the efficiency of development porn
An experimental survey (N=701 British adults) suggests that traditional NGO appeals that elicit anger or revulsion can backfire more than “alternative” appeals that aim to elicit empathy, but that there is no difference in their efficacy in getting people to donate (at least for a previously un-engaged audience like this sample; findings might be different for an established donor base such as that enjoyed by large orgs like Amnesty and Oxfam). (Of methodological note: they offered £10 to participate, treated subjects with a campaigning statement from an NGO, then asked how much of that £10 the subject would like to donate to that NGO, in order to measure the efficacy of campaign appeals.) Meanwhile, an article based on desk research suggests three reasons why NGOs continue to seek mainstream media attention in the digital age (donors like it, politicians watch it, and NGOs tend to ally with journalists).

Oh, and toilets
Research involving 100 users suggests that the UK’s public toilet database has improved quality of life.

In the Methodological Weeds

  • Conceptualizing Transparency and Accountability
    New paper on the conceptual relationship between transparency and accountability. Swiss researchers provide a thorough lit review of T&A, with a special focus on Fox (2007), Hood (2010) and Meijer (2014). They note some factors that previous conceptual frameworks fail to accommodate, and suggest a new, more complicated framework, which could be critiqued on the same grounds because, well, it’s complicated. But there are tables, lots of tables. (Sci-hub)
  • What good is peer review
    A new paper (preprint) argues that the peer review process itself doesn’t add much value to research outputs. The methods are debated, but the conclusion is also a little obvious, and it raises lots of questions about alternative peer review mechanisms. Nice to see research on this, in any case.
  • Sampling methods for local governance studies
    New paper uses systematic sampling to assess electronic access and availability of financial documents for transparency in 237 counties in the US. This approach allows for some broad claims about the state of A2I at the municipal level (including some compelling uses of Chi-square tests to assess the “findability” of different types of financial documents), but also some interesting preliminary claims about correlations between transparency and county size.
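For readers curious what a chi-square test of document “findability” looks like in practice, here is a minimal sketch. The counts below are invented for illustration (the paper’s actual data will differ): two document types, cross-tabulated by whether a county published them online.

```python
# Sketch of a chi-square test of independence on a 2x2 contingency table
# of hypothetical "findability" counts. Rows are document types, columns
# are found-online vs. not-found. Data is invented for illustration.

def chi_square(table):
    """Chi-square statistic for a contingency table given as nested lists."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: budgets, audit reports; columns: found online, not found
table = [[180, 57], [120, 117]]
chi2 = chi_square(table)

# Critical value for df=1 at alpha=0.05 is 3.841
print(round(chi2, 2), chi2 > 3.841)  # prints: 32.69 True
```

A statistic above the critical value would let the authors claim that findability differs by document type rather than by chance, which is the kind of broad claim systematic sampling across 237 counties makes possible.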

Commentary/Community

  • Anne Gallagher gives a methodological (and methodical) takedown of the 2016 Global Slavery Index. After methodically addressing inconsistent object definition, data cherrypicking and sampling problems, she discusses why bad methods are bad for the advocacy community.

Absurdum

3 Comments

  • The link to the Canadian study is a link to the Croatian study. I’d be interested in reading the Canadian study.

      • It’s not clear to me that the existence of research on evaluation frameworks and metrics contradicts the claim that “little research has evaluated how municipal government evaluates the success of their open data programs”. The claim is about research on self-evaluations, not about research on tools to conduct self-evaluations.

        I would certainly agree that the open data research community spends a lot of time talking about measurement tools, and much less time trying to understand their application.
