Looking for voice in the chatter

First of all, let me say that I like theory. I like convoluted and complex language when it represents careful and complex argumentation and analysis. I actually enjoy reading Deleuze, I think philosophy is fun, and I will almost never dismiss complicated text as mumbo-jumbo. Today is an exception, and Voice or chatter? is an apt title for the recent research report from MAVC.

This research report explores the “conditions in democratic governance that make information and communication technology (ICT)-mediated citizen engagement transformative,” on the basis of eight country cases. Ultimately, however, it fails to deliver meaningful insights. The bulk of the text engages with broad theoretical frameworks and a long list of findings, both of which are convoluted and largely detached from the empirical cases. Though the cases receive only meager attention (8 of 68 pages), their description is the most compelling portion of the report. All in all, this document is a difficult and unrewarding read. But its weight and ambition make it seem important.

I read this report so that you don’t have to. Below is a quick summary.



Last week in civic tech research: T4T/A boosts government efficiency, govt social media is for broadcasting and 700(!) activism nodes in LatAm

Firstly: policy makers say that readability is the most important thing for getting your research used for decision-making, plus more tips from @fp2p. Just getting that out there.


Research on National Integrity Systems in New Zealand and the UK suggests that NIS impact is limited and disparate, while data analysis across 51 countries from 2003-2010 suggests that ICTs, transparency and anti-corruption efforts make government more efficient. Other studies show that gamification of civic tech meetups improves creative problem-solving when the games build empathy instead of rewarding skills, and that internet shutdowns have cost sub-Saharan African countries $235 million.

Governance beyond elections: how considering US political crises helps bridge the gap between disciplines and methodologies

…it is critical that we look beyond the conventional focus on elections, campaign finance reform, and voting rights. There is no question that these are critical areas of concern, and necessary preconditions for meaningful democracy reform. But these areas are also well-studied and understood by many of us in the field. In this report, we hope to highlight some of the other dimensions that seem to be essential to an improved and vibrant democratic society: new forms of organizing and community engagement; new institutional strategies for participation at the national and local levels; and a greater self-consciousness about how to build multi-racial constituencies and alliances to make our democracy more inclusive. (2-3)

That’s Hollie Russon Gilman and K. Sabeel Rahman in the introduction to their New America report, Building Civic Capacity in an Era of Democratic Crisis. There are a number of useful analyses in the report, and some compelling conceptual and rhetorical moves (like distinguishing between “us populism” and “them populism”). But this emphasis on considering democracy as something that happens in between elections is what caught my eye. It’s a rare focus in scholarship on political communication and democratic theory, and it can help build some important bridges between disciplines.

Click bait for accountability pundits: this month’s most misleading blog title


This blogpost describes an MAVC learning event, which in turn identified “7 streams of tech-enabled change that have proven to be effective in pursuing accountable governance.” Those seven streams are listed below, and while they represent a useful typology of tech for accountability programming, they do not represent activities that connect governments with their citizens.

research links, week 40 2017


European governments are making decisions behind closed doors, according to research by Access Info. A survey on citizen uptake of a reporting platform (Linz, Austria, n=773) finds mixed results on motivations for participation, but community disconnectedness and previous reporting experience seem to be strong predictors. A natural experiment with @openstreetmap data suggests that data seeding from external sources is bad for online community development and crowd contributions. And @NetChange is running an online survey on non-profit digital engagement strategies. Takes 20 min, help out.

@CourtneyTolmie has evidence-based tips for designing and testing community scorecards, and research from @NJNewsCommons suggests that there are now 6 models for collaborative journalism, distinguished by how sustained and interactive they are.

In other news: water activists are adopting digital techniques, but they’re no match for Chinese bureaucracy, and there’s now a thing called evidence networks.

Evidence on social accountability programs

…social accountability processes almost always lead to better services, with services becoming more accessible and staff attendance improving. They work best in contexts where the state-citizen relationship is strong, but they can also work in contexts where this is not the case. In the latter, we found that social accountability initiatives are most effective when citizens are supported to understand the services they are entitled to.

That’s from a blogpost describing a macro evaluation of 50 DFID projects (selected from a pool of 2,379 for their data quality; full report here). The findings are super interesting (though they’ve been discussed for a while, the final report was published this summer, and the team held a webinar last week). The “almost always” language in some of the findings is a bit over-enthusiastic, given the nature of their project pool and all the hidden factors that play into becoming a DFID project. They don’t really suss out what this means for external validity (generalization gets a 70-word bullet on pg 19). But this is still likely the most evidence-based analysis available, and worth further testing in other contexts. Their take on the accountability trap (factors that constrain scaling local wins) is also worth close consideration.