Methodical Snark
critical reflections on how we measure and assess civic tech

research links, weeks 19 & 20, 2017


Findings

The University of Vienna has a new report on far-right attacks on the press, a concept they sketch to include legal action, abuse of power, and online abuse. The report describes a delicate relationship between the rise of far-right nationalism/populism and declines in the quality of European democracy. Meanwhile, @datasociety’s new report on Media Manipulation describes not only the tactics and platforms that “far-right groups” are using to manipulate media, but also the social and economic factors that make traditional media vulnerable.

A survey of Chinese localities suggests that “technology competence, top management support, perceived benefits, and citizen readiness significantly influence assimilation of social media in local government agencies.” Globally, things don’t seem to be going well, at least in terms of responsive web design: research suggests that government websites still suck on mobiles. Or more carefully put: “The results show that only 0.03% of government websites comes close to adhere to mobile web best practices (MWBP) guidelines with compliant rate greater than 80%.” But every little bit counts. Even when governments are lackadaisical on social media, having a Facebook page can still spur citizen engagement, at least according to a study of 18 months of communications in La Paz, Mexico.

Surveys of U.S. adults suggest that social media emphasize perceived political disagreement compared to offline interactions, and that people would learn more from political debates if they weren’t on social media talking about them while they were happening. Meanwhile, mySociety released an analysis of civic tech integration into local government in 5 US cities. Conclusion: it’s complicated. The adoption of technology in government institutions interacts with a number of other institutional factors, implying both a broad scope of impact and vulnerability. But there seems to be significant potential for interplay with policy, which the author’s conclusions emphasize.

New MAVC research examines open government data ecosystems in South Africa, comparing use cases and use stories, to see how and when opened data can actually be used for accountability. In short, it’s complicated. People seek data for a variety of reasons, and though accountability is often part of their motivation, it’s not always articulated as such. Nor does the data always help. Perhaps most useful here is how the diversity of use stories challenges traditional programming assumptions about supply and demand, and how it highlights a desire for local data.

What we know: A literature review in International Journal of Human-Computer Studies demonstrates how gamification strategies in crowdsourcing vary dramatically, but argues convincingly that they are generally effective. A lit review of ten years of scholarship on mobile phones and open government in East Africa concludes that there hasn’t been enough research. “Sadly, our review of mobiles as a citizen-controlled tool for fighting corruption in East Africa, did not provide us with enough research to systematically verify or discard earlier claims or hopes connected with mobiles.”

Community & Resources

The @opencontracting partnership has mapped its networks and fed the results into strategy. It’s play-by-play best practice in adaptive analytics for tech-driven NGOs.
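For a flavor of what network mapping like this involves, here’s a minimal sketch using Python’s networkx library. The organizations and ties below are invented, and this is my illustration rather than OCP’s actual method; it just shows the kind of centrality analysis that can feed into strategy:

```python
# A minimal sketch of organizational network mapping from a simple
# edge list of who-works-with-whom. Illustrative only -- not the
# Open Contracting Partnership's actual pipeline or data.
import networkx as nx

# Hypothetical partner relationships (pairs that collaborate).
edges = [
    ("OCP", "Gov-A"), ("OCP", "NGO-B"), ("Gov-A", "NGO-B"),
    ("NGO-B", "Journalist-C"), ("OCP", "Funder-D"), ("Funder-D", "NGO-E"),
]
G = nx.Graph(edges)

# Degree centrality: who has the most direct ties.
degree = nx.degree_centrality(G)
# Betweenness centrality: who sits on paths between otherwise
# disconnected parts of the network -- often the strategic brokers.
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(f"{node}: degree={degree[node]:.2f}, "
          f"betweenness={betweenness[node]:.2f}")
```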

J.UCS has a Special Issue on Analyzing Political Discourse in On-line Social Networks, with a focus on elections and fighting terrorism, and Carnegie’s new report “Global Civic Activism in Flux” consists of eight case studies, several of which feature prominent use of technology (Brazil, Egypt, Tunisia, Turkey). Tech is notably (and perhaps nicely?) absent from the report’s general conclusions. @fp2p gives a nice review and summary. A paper in the International Conference on Social Computing and Social Media describes a 5-year experiment using Facebook to build a virtual civic community for a Brazilian town. After 5 years, 66% of the population participates, and the authors propose 14 strategies for developing online civic communities. Also, the folks at Facebook weren’t helpful.

Frameworks:

@beatricemartini made a reading list for decolonizing technology, and Hungarian activists push back against their characterization by international researchers in “We are not your case study: weaving transnational solidarities across the semi-peripheries.”

Collections: The #responsibledata forum has curated a list of resources for open source human rights research, and there’s a new Behavioral Evidence Hub, which presents nudge programming and related experiments and evidence for non-researchers. Meanwhile, @ICT_works has a blogpost with 7 Training Tips for Launching a Mobile Data Collection Platform, Tragic Design is a book stuffed with case studies examining all the bad things bad design can do, and Brookings has summarized the independent evidence that “aid works.”

Lastly, the OGP has announced a new national assessment of open government by the Mexican government: the Open Government Metric. It’s been translated into English and polished by a graphic designer, but there’s no info on the methods or where the data comes from.

In the Methodological Weeds

Pew Research Center launches Methods 101, a YouTube series explaining basic methods for survey research. The first episode, on random sampling, is great. Very much for beginners. (“We use random sampling all the time…If you stir a pot of soup the right way, you don’t have to taste the whole thing.” Lovely.)
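The soup metaphor holds up in code, too. Here’s a minimal sketch (my toy numbers, not Pew’s) of why a few hundred randomly sampled respondents can stand in for a million:

```python
# A minimal sketch of the "soup" intuition behind random sampling: a
# well-stirred (uniformly random) sample of a few hundred people
# estimates the whole population surprisingly well. Numbers invented.
import random

random.seed(42)

# Hypothetical population of 1,000,000 opinions (1 = agrees, 0 = not).
population = [1] * 380_000 + [0] * 620_000  # true rate: 38%
random.shuffle(population)                   # "stir the pot"

sample = random.sample(population, 500)      # "taste a spoonful"
estimate = sum(sample) / len(sample)

print(f"True rate: 38.0%, sample estimate: {estimate:.1%}")
# Typically lands within a few points of 38% -- the margin of error
# for n=500 is roughly +/- 4 percentage points at 95% confidence.
```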

More methods for asking sensitive questions in interviews: @BerkOzler12 reflects on the challenges and advantages of using list experiments. Useful, but it raises a lot of questions about response bias, for which I still prefer the method of having respondents roll dice that determine their question/answer, when only they know the outcome. Similarly, @FiveThirtyEight describes how only 3% of Americans will identify as atheist if asked outright in a survey, but hiding a less identity-bound question in a vanilla list of questions increases that number to 27%. And on a related note, @bbcmediaaction discusses the challenges of designing surveys to document changes in social norms.
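Both techniques reduce to simple arithmetic once the data are in. Here’s a minimal sketch of each estimator in Python, with made-up numbers; the forced-response die design shown is one common variant, not necessarily the exact protocol either source describes:

```python
# Minimal sketches of the two estimators discussed above; all numbers
# are invented for illustration.
from statistics import mean

# 1) List experiment (item count technique): a control group counts how
#    many of J innocuous items apply to them; a treatment group gets the
#    same J items plus the sensitive one. Nobody reveals which items
#    apply, yet the difference in mean counts estimates the share with
#    the sensitive trait.
control_counts = [2, 1, 3, 2, 2, 1, 2, 3, 2, 1]    # out of 4 items
treatment_counts = [3, 2, 2, 2, 3, 2, 2, 3, 2, 1]  # out of 5 items
prevalence = mean(treatment_counts) - mean(control_counts)
print(f"List-experiment estimate: {prevalence:.1%}")

# 2) Dice-based (forced-response) randomized response, one common
#    design: roll a 1 -> say "yes" regardless; roll a 6 -> say "no"
#    regardless; roll 2-5 -> answer truthfully. Only the respondent
#    sees the die, so any individual "yes" is deniable, but in
#    aggregate P(yes) = 1/6 + (4/6) * p, which we can solve for the
#    true rate p.
observed_yes_rate = 0.30  # hypothetical share answering "yes"
p_hat = (observed_yes_rate - 1/6) / (4/6)
print(f"Randomized-response estimate: {p_hat:.1%}")
```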

Tools for Analysing Twitter! The updated overview from @was3210 has some useful thoughts on methodological and ethical challenges.
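For anyone who wants to kick the tires, here’s a minimal sketch of the data-collection step such tools automate, using the tweepy library’s v4-style client. The query, fields, and token are placeholders, and this is my illustration rather than anything from the overview itself:

```python
# A minimal sketch of pulling tweets for analysis with tweepy.
# Assumes you have Twitter API credentials; the query below is a
# hypothetical placeholder.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

response = client.search_recent_tweets(
    "#civictech -is:retweet",          # hypothetical query
    max_results=100,
    tweet_fields=["created_at", "lang"],
)

for tweet in response.data or []:      # .data is None when no results
    print(tweet.created_at, tweet.lang, tweet.text[:80])

# Note the ethical caveats the overview raises: even public tweets
# carry expectations of context, so store and report them with care.
```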

Academic Opps

Hires/Positions

Calls for Proposals/Papers/Participation

Events:

Miscellanea & Absurdum
