I love it when messy methods get topical, and this might be one of the very few silver linings to come out of Trumpland. December saw the publication of an IPSR special issue on measuring democracy, and then shit got real this week, when Andrew Gelman began a stream of posts criticizing the application of Electoral Integrity Project (EIP) methodology to the recent presidential elections in US states, and especially the claim/meme that North Carolina is no longer a democracy.
The institutional language of engagement has been defined by its measurement. Chief engagement officers in corporations are measuring milliseconds on web pages, and clicks on ads, and not relations among people. This is disproportionately influencing the values of democracy and the responsibility of public institutions to protect them.
Too often, when government talks about engagement, it is talking about those things that are measurable, but it is providing mandates to employees imbued with ambiguity.
That’s Eric Gordon, writing about how civic engagement is understood and incentivized by city governments in the US. He goes on to argue that institutions of governance need to conceptualize civic engagement as more than market efficiency, and begin thinking towards a “relational approach” to civics in which “public institutions create value systems and metrics that support long-term relationship building in addition to short-term attention.” Continue reading “Civic engagement in practice and in metrics”
Civil society groups emphasize the need for high-quality public information on the performance of politicians. But does information really make a difference in institutionally weak environments? Does it lead to good performance being rewarded at the polls, or are voting decisions going to be dominated by ethnic ties and clientelistic relations?
Enter the Metaketa project’s first phase, running 7 experimental evaluations in 6 countries to answer that question: does more info change voter behavior? The results and synthetic analysis are all coming out early next year, which is exciting, but a long ways away. I was also happy to see that they have a pre-analysis plan for that synthesis work (basically a self-control mechanism to ensure that data doesn’t get fudged during analysis to support preferred outcomes; unfortunately, such plans don’t really get used very often). Continue reading “The long haul towards evidence: information in elections edition”
That’s the title of an Open Democracy article published yesterday, which takes issue with the way that international comparative indices (such as the CIRI Human Rights Data Project and Freedom in the World) hide injustice in rich western democracies. Specifically, the authors are angered by the US government’s consistently high ranking, despite systematic disenfranchisement of the African-American electorate. Continue reading “Gaps in Human Rights Research, Advocacy and Compliance”
Now, there’s a lot we could debate here about data collection processes, or tools, or when and how data clerks should be employed – but that’s not the point. Instead, we suggest that a growing amount of the qualitative evidence indicates that costs of collecting and reporting on the data that inform high-level performance indicators (for various agencies) can be quite high – perhaps higher than the M&E community typically realizes. These opportunity costs were echoed across countries and sectors; discussions with agricultural staff in Ghana, for example, suggest that many extension workers spend up to a quarter (or more) of their time collecting and reporting data.
That’s from a recent ICTworks blogpost. It’s focused on a specific initiative (an HIV clinic in Tanzania), but the spirit will ring true to anyone who’s done donor reporting from the field (or at all, really). The idea that reporting gets in the way of work is familiar, but it’s often even worse in a tech context, where there’s a presumption of data abundance, and finding metrics that strengthen work and anticipate roadblocks can be hard enough. Continue reading “When Indicators Get in the Way, Go Report Minimal?”
Or at least the three I had in my bookmarks. But I feel like there’s been a lot in recent weeks. Are there others to add to this list?
Being a Scholar in the Digital Era: Transforming Scholarly Practice for the Public Good (Jessie Daniels and Polly Thistlethwaite, Eds).
Strong normative bent in this one, for open research as well as social impact, with an explicit focus on collaborating with activists. I look forward to reading it. Their blurb: Continue reading “All the books on researchers and the interwebs”
This isn’t about research or methods, so I’ll be brief.
- Cass Sunstein, US policy veteran and eminent scholar, recently released a draft article distinguishing between input and output transparency, suggesting that arguments are weaker for the former, and offering reasons why input transparency might often not be a good thing. (To grossly oversimplify: there are too many inputs to policy-making processes, and making inputs transparent is potentially costly, not very useful, and could have chilling effects on the kind of open conversation that leads to good policy).
Continue reading “The problem with the problem with input transparency”