Methodical Snark: critical reflections on how we measure and assess civic tech

research links w1-2017 (!)

Papers and Findings

A field experiment among county governments in the US last April showed that they are more likely to fulfill public records requests if they know that their peers already have, with profound implications for peer conformity and norm diffusion in responsive government. A recent commentary in Public Administration Review builds on these insights to suggest concrete ways in which open data advocates can capitalize on this dynamic (publicize proactive fulfillment, bolster requests by citing prior fulfillment, request proactive fulfillment through feedback channels, and request data on fulfillment when all else fails).

Meanwhile, Austrian researchers surveyed users of a citizen reporting platform for municipal public services (n=2,200; the city isn’t named, which is problematic for external validity; they call their study an “experiment”), and identify personal and pro-social motivations as the most important drivers of participation, but find no support for the technology acceptance model or demographic characteristics as drivers (though they do note that “the gender divide is disappearing” (2768), so that’s good to know).

A study of comments and activity on the Facebook page of Mexican politician “El Bronco” further discredits the slacktivism meme, with non-US data for a change. In China, researchers argue that survey data from 4,400 municipal governments suggest that online political engagement is directly impacting social policy development at the local level, a causal relationship they note has not been identified in the relevant literature on Western democracies. Another paper, looking at 7 years of online interaction between citizens and the national government, finds that Chinese government responsiveness is significant but highly selective, favoring requests according to issue area and the social characteristics of the requesting citizens.

OGP has a new report with 7 case studies, featuring results from early OGP initiatives (Costa Rica, Chile, Italy, Tanzania, Indonesia, Macedonia and Israel). Strictly speaking, I’m not sure I agree with the report’s claim that each case “demonstrates measurable progress and the added value of the collaboration between government and civil society.” Most seem simply to demonstrate increased collaboration or publication of information, which is not necessarily the same thing.

@sverhulst and coauthors have proposed a taxonomy to differentiate models for “data collaboratives” (by which they mean “cross-sector [and public-private] collaboration initiatives aimed at data collection, sharing, or processing for the purpose of addressing a societal challenge”). The taxonomy distinguishes 14 dimensions (covering both supply and demand) and is based on 10 case studies, plus relevant literature.

Technology, media and information are noticeably absent from Bert Rockman’s assessment of the four main trends changing governance in the US (confidence in government, polarization, privatization and austerity), which was published to mark the 30th anniversary of the academic journal Governance.

The following papers were all presented at the 50th Hawaii International Conference on System Sciences last week. Apparently, the spot to be.

Assessment of an Australian platform for open performance data (My School) identified three ways in which putting evaluations on an open data portal leads to “datification”, with negative societal consequences – #datagonewrong. Meanwhile, interviews with civic hackers in Seattle (n=15) show that the quality of open government data doesn’t meet hackers’ expectations. The authors blame this on the hackers’ backgrounds in the tech sector and argue for third parties (like newspapers) to get involved in open data hackathons, to start cleaning the data up before the hackers show up.

An assessment of how Jakarta’s government transparency program provoked citizen engagement found that, unsurprisingly, “YouTube-enabled Government Transparency” promotes more engagement and cross-platform sharing online than in mass media. The authors speculate about the advantages of digitally promoted transparency for policy objectives (#grainofsalt).

In keeping with the above, a review of the last 50 years of research on how governments use computers asserts that research consistently demonstrates a “propensity to optimism,” which is a charming way to put it. The authors also note that researchers are consistently aware of their own “naivety” in the face of failed and failing predictions, and blame this trend in part on the huge influx of diverse disciplines studying computers in government. They also note that there has been an explosion of research outputs in recent years, but that the quality of research seems to have declined, and that many of the original research questions asked at the beginning of this period remain unanswered.

Community and Commentary

@RebeccaRumbul offers her top 5 favorite articles on the GovLab’s Open Government Research Exchange (OGRX), and they are predictably great. Notably, 3 of the 5 are produced by practitioners rather than academics :).

@freedominfoorg suggests that figuring out how to measure whether countries implement SDG 16.10 (providing access to information) should be one of four key challenges for the FOI community in 2017. Couldn’t agree more.

Interviews with Kenyan T4T&A initiatives, conducted as part of a MAVC research project on adaptive programming, surfaced some interesting insights about how funding and technology create opportunities to adapt across program cycles. They also surfaced significant demand among projects for more interaction with peers. So the researchers organized a workshop, and the participants refused to leave when it was over. #knowledgesharinginaction

From the world of academic I-told-you-so’s, @mathbabedotorg is unsurprised by recent research showing that recidivism risk algorithms are inherently discriminatory. She goes into the weeds to describe how this was forecast in a 2011 paper. But it raises the question again: why doesn’t good research have meaningful impact on policy and social norms? Ruth Dixon’s (unrelated) post on the LSE blog blames academics’ lack of emotional literacy for their lack of real-world impact, which places them firmly in the camp of “the elites.” And 2016 has shown us nothing if not how hopeless that lot is.

Building on the “replication backlash”, Daniel S. Hamermesh argues in an IZA white paper that the failure to replicate isn’t a problem, because the academic community naturally selects those experimental findings which ought to be tested. “The majority of articles in those journals are… essentially ignored, so that the failure to replicate them is unimportant…” Why does this remind me of the Trump cabinet? (h/t @dmckenzie001)

In the Methodological Weeds

Andrew Gelman went a little nuts on the Electoral Integrity Project’s methodology for comparing the quality of elections across countries. I describe the arguments and suggest what they mean for comparing things like open data and accountability projects here.

A new working paper from Berkeley scholars proposes methods and software (Stata, R) for power calculations in RCTs with panel data, essentially helping to determine the appropriate number of participants for making credible causal claims on the basis of experimental data. It’s deep deep deep in the weeds, but looks like a very cool trick for answering one of the big questions that always seems to be resolved through magic by the people who have been doing it for a long time.
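
For the flavor of it, here is a minimal sketch of the textbook two-sample version of that calculation, done in Python’s statsmodels rather than the authors’ Stata/R software, and emphatically not their panel-data method:

```python
# Minimal illustration: how many participants per arm does a simple two-arm RCT
# need to detect a 0.2 standard-deviation effect at 5% significance with 80% power?
# (The Berkeley paper extends this kind of calculation to panel data; this sketch
# ignores the correlation structure of repeated observations entirely.)
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # about 393 with these inputs
```

The hard part the paper tackles, as I read it, is that repeated observations of the same units over time are correlated, which the simple version above quietly pretends away.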

@schock is a coauthor on an NM&S paper describing participatory research with US-based LGBTQ organizations, documenting their novel and innovative approaches to media advocacy and movement building. The paper concludes with 5 recommendations to researchers, including a call to move away from output-centric and quantitative metrics for impact assessment, and notes the importance this has for funding decisions. In doing so, the authors present significant anecdotal support for moving away from quantitative metrics such as “number of stories […]; node centrality […]; or clicks, likes, comments, and shares”, towards assessments of things like leadership and organizational development, though they fail to reflect on appropriate methods, and note that “perhaps the most difficult outcomes to measure, are among the most powerful.”
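
For contrast, here is roughly the kind of metric they are pushing back on: a toy, entirely hypothetical example (not from the paper) of computing node centrality over a made-up network of organizations with networkx. The point being that this takes three lines, while assessing leadership or organizational development does not:

```python
# Toy illustration with made-up data: degree centrality over a tiny network of
# organizations, the kind of cheap quantitative metric the authors argue tells
# us little about leadership or organizational development.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("org_a", "org_b"), ("org_a", "org_c"), ("org_b", "org_d")])
print(nx.degree_centrality(G))
# roughly {'org_a': 0.67, 'org_b': 0.67, 'org_c': 0.33, 'org_d': 0.33}
```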

Academic Opps

The Berkman Klein Center for Internet & Society at Harvard University is accepting applications for 2017 summer interns.

Evidence Action is looking for researchers who want to scale RCT findings that might benefit people living in poverty (deadline 3 Feb).

Call for panels/speakers/awards: World Summit on mobileGovernment (Brighton, 7-9 May)

Calls for Papers:

Miscellanea and Absurdum

“The ancient Greek historian Herodotus once observed that Persian rulers indulged the habit of getting drunk when making important decisions. When sober and sensible next morning, their custom was to reconsider their decision, and either stick to it, or revise or reject it outright. They had another method of decision-making, he noted: they took decisions when sober, then affirmed or declined them when drunk.” So begins John Keane’s post on War and democracy in the age of Trump. A good read.
