So I’ve been away for a whopping 8 weeks, bouncing between holidays, summer schools, consultancies and moving the fam to DC. Somehow the internet refused to stop while I was gone. So as I get back into the swing of things, here is an abbreviated summary of the summer’s findings in civic tech research, plus a couple of choice weeds and reflections.
An assessment of 100 Indian smart city initiatives supports previous findings regarding the lack of correlation between digital literacy, infrastructure, and citizen participation in municipal e-government. A comparison of national log data with select case studies further suggests that national centralization of e-government services may have negative consequences for citizen engagement, and high uptake rates in mid-sized cities are used to articulate a “theory of civic intimacy at play between citizens and governments and its relation to the scale of urban spread.”
While the framework remains unchanged, the characteristics and indicators that make up the index change from context to context, aiming to capture the characteristics of an ‘empowered woman’ in the socio-economic context of analysis. The index provides a concise, but comprehensive, measure of women’s empowerment, while also allowing breakdown of the analysis by level of change or the individual indicator.
That’s a description from the launch of Oxfam’s new ‘How To’ Guide to Measuring Women’s Empowerment. This is essentially a manageable algorithm into which program staff can plug their data to receive a single number representing a complex phenomenon. And while that makes a certain amount of principled sense (we’re all big fans of bespoke measurement approaches), it raises some questions too.
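For concreteness, here is a minimal sketch of how a composite index of this general shape works: indicator scores get plugged into an aggregation rule, producing one headline number that can still be broken out by level or by indicator. The indicator names, groupings, and equal-weight averaging below are invented for illustration; this is not Oxfam’s actual methodology.

```python
# A hypothetical composite index in the shape described above. Indicator
# names, levels, and the equal-weight averaging rule are invented; this is
# NOT Oxfam's actual formula.

# Indicator scores (0-1), grouped by level of change.
indicators = {
    "personal":      {"self_confidence": 0.8, "literacy": 0.6},
    "relational":    {"household_decision_making": 0.4, "control_over_assets": 0.5},
    "environmental": {"political_participation": 0.3},
}

def level_score(scores):
    """Average the indicator scores within one level of change."""
    return sum(scores.values()) / len(scores)

def empowerment_index(indicators):
    """Roll level scores up into a single headline number."""
    by_level = {level: level_score(s) for level, s in indicators.items()}
    return sum(by_level.values()) / len(by_level), by_level

index, by_level = empowerment_index(indicators)
print(f"Headline index: {index:.2f}")      # the single number staff get out
for level, score in by_level.items():      # the breakdown the guide promises
    print(f"  {level}: {score:.2f}")
```

The questions live exactly here: which indicators, which weights, and whether the same headline number means the same thing across contexts.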
Civic Hall is tracking how new movements organize and communicate during rapid growth. Reports from six groups suggest that everyone is using everyday tools to communicate, but no one is satisfied with them.
Metadata and data format are the most important characteristics of open government data for African media practitioners, according to a survey administered in 5 African countries (n=198).
Americans like their government digital, according to a survey of digital service users (n=4,584 in Q4 of 2016). The report noted that digital consumer satisfaction was 2 percentage points higher than non-digital satisfaction, which doesn’t sound like a lot, but is a big change.
Papers and Findings
Do global norms and clubs make a difference? A new dissertation assesses implementation of EITI, CSTI and OGP in Guatemala, the Philippines and Tanzania, concluding that multi-stakeholder initiatives can strengthen national proactive transparency, but have little impact on demand-driven accountability. There are interesting insights on open washing and the importance of high-level political ownership.
Meanwhile, mySociety assessed civic technology in Mexico, Chile and Argentina (interviews w/ gov and non-gov, n=47), concluding that the “intended democratising and opening effects of civic technology have in fact caused a chilling effect,” prompting Latin American governments to seek more restrictive control over information. In Brazil, researchers assessed 5 municipalities to see whether strong open data initiatives correlated with strong scores on the digital transparency index; they don’t.
Austrian researchers reviewed the literature on gamification strategies in e-participation platforms globally, concluding that gamification of democracy doesn’t happen often, and when it does, it’s often rewards-based, a strategy they expect to “decrease the quality of participation.” This conference paper by computer scientists proposes an e-government maturity model, based on a literature review of 25 existing models, and the International Budget Partnership has released a report on how civil society uses fiscal transparency data. Spoiler: they don’t have the data they want.
A number of global reports and releases were published. The DataShift has a new guide on Making Citizen-Generated Data Work, based on a review of 160 projects and interviews with 14 case studies, which presents some useful classifications and typologies. Creative Commons has released the 2016 Global Open Policy Report, with an overview of open policies in four sectors (education, science, data and heritage) across 38 countries. The White House has released a report on the performance of its public petition site, We the People, highlighting four cases where e-petitions arguably impacted policy in the platform’s first five years of operation.
Meanwhile, the Governance Data Alliance has released a report entitled “When is Governance Data Good Enough?”, based on snap polls with “500 leaders” in 126 countries, which suggests, among other things, that credibility and contextualization of governance data are important to governance data users, and that governance data is used primarily for research and analysis. The general impression seems to be that yes, in many countries, the governance data that exists is in fact good enough “to support reform champions, inform policy changes, and improve governance.” A launch event was held on Dec 15.
Community and Commentary
GovLab sought Peer Reviewers for open gov case studies on Cambodia, Ghana, India, Jamaica, Kenya, Paraguay and Uganda, but there were only 9 days to sign up (in late Dec) and 2 weeks to review (during the holidays). Hope they found someone. There must be a happy medium between the glacial grind of academic peer review and… this.
A Freedominfo.org post highlights the Access to Information component in the World Bank’s Open Data Readiness Assessment Tool, and suggests how it can be a useful tool for advocates and activists.
The World Bank has released a new guide on crowdsourcing water quality monitoring, with a focus on program design, not measurement, which is nicely summarized here.
Mike Ananny and Kate Crawford’s new article in New Media & Society critiques the “ideal of transparency” as a foundation for accountability, identifying 10 limits of transparency and suggesting alternative approaches for pursuing algorithmic accountability.
The LSE blog re-posted a piece describing novel metrics for researchers’ social media influence, distinguishing between aspects of “influence” such as amplification, true reach and network score, but failing to link to the research it describes. In Government Information Quarterly, a troika of international researchers has suggested an uninspired research agenda for “open innovation in the public sector”, with a focus on domain-specific studies, tools other than social media, and more diverse methods.
This TechCrunch article attributes civic innovation in US cities to governmental gridlock at the federal level, the NYT describes research suggesting that price transparency in the US health sector has failed to drive prices down, and Results for Development Institute is developing a framework to help governments “cost” open government initiatives before they pursue them.
In the Methodological Weeds
The Development Impact Blog has a great post on life satisfaction reporting between women and men. The discussion begins with the assertion that “women definitely say they are happier” and moves quickly to debunk that assertion, using hypothetical vignettes, anchored to common response scales. The methods are smart, and highly relevant to response bias problems in any social survey setting, especially in assessing political and social impacts of media and information.
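For a feel of how vignette anchoring works, here is a toy version of the standard non-parametric recode (in the spirit of King et al.’s anchoring vignettes); the numbers are invented, and this is not the blog post’s own code.

```python
# Each respondent rates fixed hypothetical vignettes AND themselves on the
# same 1-5 satisfaction scale. Recoding the self-rating relative to the
# respondent's own vignette ratings strips out group differences in how the
# response scale is used. Toy numbers, not the blog post's actual data.

def recode_against_vignettes(self_rating, vignette_ratings):
    """Rank the self-rating among this respondent's own vignette ratings:
    0 = below every vignette, len(vignette_ratings) = above every one."""
    return sum(self_rating > v for v in vignette_ratings)

# Two respondents give the same raw answer (4) but anchor the scale
# differently, so their corrected positions differ.
print(recode_against_vignettes(4, [2, 3, 5]))  # -> 2 (above two vignettes)
print(recode_against_vignettes(4, [1, 2, 3]))  # -> 3 (above all vignettes)
```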
An article in JeDEM presents a model for multidimensional open government data, which focuses on the integration of official and unofficial statistics. The proposed method builds on the data cube model from business intelligence and relies entirely on linked data technology. This paper goes a bit beyond my technical expertise, but at bottom it promises to harmonize indicators from different data sources (with different but overlapping metadata and data contexts) on the basis of shared attributes. Kind of a lowest common denominator approach. This is intuitive, and the type of thing I’ve seen attempted at data expeditions via Excel, but having a rigorous method could be a huge advantage. Especially if demonstrated with the participation of governments in the pilots this article references, a solid methodology for this could be hugely useful to initiatives like DataShift, which talk a lot about merging citizen-generated data with official statistics, but struggle to make that happen either politically or technically.
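As a crude illustration of that lowest-common-denominator merge, here is what it might look like in plain pandas rather than the paper’s linked-data and data-cube machinery; the column names and figures are made up.

```python
# Naive harmonization of official and citizen-generated indicators on their
# shared attributes. All names and numbers are invented for illustration.
import pandas as pd

official = pd.DataFrame({
    "region": ["north", "south"],
    "year": [2016, 2016],
    "water_points_official": [120, 85],
})
citizen = pd.DataFrame({
    "region": ["north", "south"],
    "year": [2016, 2016],
    "source": ["sms_reports", "sms_reports"],  # metadata only one side has
    "water_points_reported": [98, 91],
})

# Keep only the dimensions both sources share (region, year) and join on
# them, discarding metadata that doesn't overlap: the lowest common
# denominator.
shared_dims = [c for c in official.columns if c in citizen.columns]
merged = official.merge(citizen.drop(columns=["source"]), on=shared_dims)
print(merged)
```

The paper’s contribution, as far as I can tell, is doing this rigorously and at scale with semantically described (linked) data, rather than by eyeballing column names.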
- Communication Studies Spring School on Media, Culture and Power (Lisbon, April 3-7)
- The newly revamped Transparency and Accountability Initiative is seeking student fellows for Spring 2017 (deadline Jan 6)
- Ph.D. Program in Communication, Rhetoric, and Digital Media (deadline Jan 15)
- Rockefeller Foundation Junior Scholars Forum (for research on civil society, Stanford, June 8-10, deadline Feb 13)
- Data & Society is hiring a Research Analyst on Media and Accountability (deadline Jan 11)
Calls for Papers:
- Technology for the Common Good (Troyes, France, June 26-30, deadline Feb 1)
- Political Communication in Uncertain Times: Digital Technologies, Citizen Participation and Open Governance (Spain, 7-8 Sept, deadline Jan 30)
- Journalism, Society and Politics in the Digital Media Era (Cyprus Sept 1-3, deadline Feb 27)
- Digital Methods for Public Policy panel at the International Conference on Public Policy (Singapore, June 28-30, deadline Jan 15).
Miscellanea and Absurdum
- America’s most common Christmas-related injuries, in charts (from Quartz)
- The Hate Index “represents a journalistic effort to chronicle hate crimes and other acts of intolerance since Donald Trump’s presidential election victory.”
- DataDoesGood is asking you to donate your anonymized shopping data, which gets sold, and profits donated to charity.
- Academic article: “Tinder Humanitarians”: The Moral Panic Around Representations of Old Relationships in New Media
- The Association of Internet Researchers has a YouTube channel (!)
- 4% of U.S. internet users have been a victim of “revenge porn” (via Data & Society)
- CFP: Women’s Head Hair as a tool of communication, in media outlets and social media activism
Papers and Findings
- Citizen engagement in rulemaking — evidence on regulatory practices in 185 countries (from the World Bank). TL;DR: opportunities for engagement are greatest in developed countries with strong regulatory systems, as is the use of ex ante impact assessments. The paper includes an incredibly brief literature review, and the study itself is based on e-questionnaires (word docs, expert perception only, no data on actual participation) sent to 1,500 individuals in 190 countries. The researchers also conducted follow-up interviews for clarification, but there is no information on how many questionnaire responses were received. Most strikingly, the report advances a composite scoring mechanism for engagement in rulemaking, for application across all country contexts. It’s clunky, with 4 scoring options for most metrics, each of which begs a million questions about comparability and the applicability of the scores to individual political contexts (a toy sketch of this kind of scoring follows this list). I’d love to read some reflections on the challenges in actually applying this. Methods and questionnaire available here.
- User Research on UK parliamentary data from the ODI. Contains 4 detailed recommendations plus user journeys, but very sparse info on the methods or users interviewed. Also, @ODIHQ, stop using Scribd, we’ve been through this.
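To make the clunkiness concrete, here is a toy version of the kind of ordinal scoring mechanism the rulemaking report seems to advance; the metrics, option labels, and point values are all invented, and the comparability questions live precisely in those choices.

```python
# Hypothetical composite scoring for engagement in rulemaking. The four
# ordered options per metric and the point values are invented, not the
# World Bank's actual scheme.
OPTIONS = {"never": 0, "rarely": 1, "usually": 2, "always": 3}

def country_score(answers):
    """answers maps each metric to one of the four ordered options."""
    return sum(OPTIONS[choice] for choice in answers.values())

print(country_score({
    "draft_regulations_published": "always",    # 3
    "public_comments_solicited":   "rarely",    # 1
    "comments_reported_back":      "never",     # 0
}))  # -> 4, a single number asked to travel across very different polities
```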
Papers and Findings
- Nordic Open Access to Research Data. A new research paper reiterates important conditions for effective open access, and offers 3 recommendations for Nordic research communities that take advantage of their countries’ size and position.
- A psychology study in Zimbabwe suggests that for political activism in repressive political contexts, psychological resilience in the face of threats is a more important predictor of civic engagement than access to information or technology.
- Analysis of municipal electoral data in Mexico suggests that participatory governance mechanisms can strengthen authoritarian governance structures, and that participatory mechanisms are most impactful when they have a “bottom up” design.
- Define “open”: data portal edition. Austrian researchers dove into 232 open data portals to analyze the usability of 200,000 data sets. They find that it’s really messy in there: only half the data labelled as tabular is actually CSV, the metadata is bad, and a shocking number of words in the header values aren’t actually words. The authors offer a number of recommendations.
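Out of curiosity, here is a quick standard-library sketch of the sort of check the study implies: does a file labelled ‘tabular’ actually parse as CSV? The file path is a placeholder, and real portal audits are doubtless more involved.

```python
# Heuristic check: can we sniff a CSV dialect and read a few rows with a
# consistent column count? The path below is a placeholder.
import csv

def looks_like_csv(path, sample_bytes=4096):
    try:
        with open(path, newline="", encoding="utf-8", errors="replace") as f:
            dialect = csv.Sniffer().sniff(f.read(sample_bytes))  # csv.Error if hopeless
            f.seek(0)
            rows = [row for _, row in zip(range(5), csv.reader(f, dialect))]
        # At least two rows, all with the same number of columns.
        return len(rows) > 1 and len({len(r) for r in rows}) == 1
    except (csv.Error, OSError):
        return False

print(looks_like_csv("dataset_labelled_as_tabular.dat"))
```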