Research on nearly three decades of democratic innovation and e-participation in Latin America (Brazil, Colombia, Mexico and Peru) has some interesting findings. According to an Open Democracy blogpost (the actual project’s website is down): civil society participation programming uses tech more often than not, smaller countries are less prolific than large countries in terms of tech-driven innovations, and tech-driven innovations are just as common at the national level as at the sub-national level. Though digital innovations are widespread, they only rarely facilitate decision-making (30%) or are formalized in legislation or policy (less than 50%).
University of Maryland research on anti-Trump protests finds digital media commonalities among an exceptionally diverse group, suggesting something that approximates a “movement.”
A review of research on government social media use finds that it is generally quantitative, ignoring both users and impacts; a library study in the UK suggests that Open Data makes good archiving difficult in the NHS; and a study of service delivery in Kenya found that it was improved by decentralization, but that the mediating effects of e-government initiatives were insignificant (275 respondents, 8 county govts).
Continue reading “research links w 23-24 / 17”
Using a multilevel linear model to account for the hierarchical structure of our survey data, we find evidence that performance assessments yield greater policy influence when they make an explicit comparison of government performance across countries and allow assessed governments to participate in the assessment process. This finding is robust to a variety of tests, including country-fixed and respondent-fixed effects.
Whoa. That’s from a new AidData working paper on global performance assessments (international measures of how well countries do at combating corruption, ensuring fair elections, opening data, or what have you).
Those findings aren’t shocking (that ranking countries can motivate govt actors in crude ways has become almost as much a platitude as the idea that participatory research enhances uptake), but they’re exciting because they are so clearly and directly relevant to the design of comparative assessments, and this study “feels” robust enough to carry some weight in conversations with the people who manage the budgets and optics of such projects. The study is based on econometric analysis of data from the 2014 Reform Efforts Survey (n = 3,400 govt officials in 123 low- and middle-income countries) and 103 different GPAs; using elite survey data is a smart way to get around the problems of measuring influence by policy outcomes.
The treatment of GPA characteristics is a bit crude though. Eight independent variables are identified at the GPA level (bi-/multilateral, whether govts are involved, whether data is public, etc.), but none of these address policy areas or types of norms. Dan Honig’s research has suggested how important this can be for mobilizing the soft power of GPAs, and my own work on OGP suggests that it’s critical.
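For intuition on why the country-fixed effects robustness check matters in a hierarchical survey design like this, here’s a minimal sketch on simulated data (nothing below comes from the paper itself; the paper uses a multilevel linear model, while within-country demeaning is a simpler stand-in for illustration). The point: when comparative assessments cluster in countries that already report high influence, a naive pooled regression overstates the effect, while demeaning within countries strips the country-level confound out.

```python
import random
from collections import defaultdict

random.seed(42)

# Simulated survey: respondents nested in countries. Each country has its
# own baseline level of reported GPA influence, and comparative assessments
# add a fixed boost. All names and numbers here are hypothetical.
TRUE_EFFECT = 2.0
rows = []  # (country, comparative, influence)
for c in range(30):
    country_effect = random.gauss(0, 3)
    # Comparative GPAs are more common where the country effect is high,
    # which biases a naive pooled regression upward.
    p_comparative = 0.8 if country_effect > 0 else 0.2
    for _ in range(20):
        x = 1 if random.random() < p_comparative else 0
        y = TRUE_EFFECT * x + country_effect + random.gauss(0, 1)
        rows.append((c, x, y))

def ols_slope(xs, ys):
    """Simple bivariate OLS slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sum((x - mx) ** 2 for x in xs)

# Naive pooled estimate: ignores the hierarchy, absorbs the confound.
naive = ols_slope([r[1] for r in rows], [r[2] for r in rows])

# Country-fixed effects via within-country demeaning: subtract each
# country's own mean from x and y, then regress the residuals.
by_country = defaultdict(list)
for c, x, y in rows:
    by_country[c].append((x, y))
dx, dy = [], []
for obs in by_country.values():
    mx = sum(x for x, _ in obs) / len(obs)
    my = sum(y for _, y in obs) / len(obs)
    dx.extend(x - mx for x, _ in obs)
    dy.extend(y - my for _, y in obs)
within = ols_slope(dx, dy)

print(f"naive pooled estimate: {naive:.2f}")
print(f"within-country (fixed effects) estimate: {within:.2f}")
```

On this simulated data the within-country estimate lands near the true effect while the pooled estimate is inflated, which is exactly the kind of confounding that respondent- and country-fixed effects are there to rule out.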
An assessment of 100 Indian smart city initiatives supports previous findings regarding the lack of correlation between digital literacy, infrastructure, and citizen participation in municipal e-government. A comparison of national log data with select case studies further suggests that national centralization of e-government services may have a negative effect on citizen engagement, and high uptake rates in mid-sized cities are used to articulate a “theory of civic intimacy at play between citizens and governments and its relation to the scale of urban spread.”
Continue reading “research links w 22 – 17”
While the framework remains unchanged, the characteristics and indicators that make up the index change from context to context, aiming to capture the characteristics of an ‘empowered woman’ in the socio-economic context of analysis. The index provides a concise, but comprehensive, measure of women’s empowerment, while also allowing breakdown of the analysis by level of change or the individual indicator.
That’s a description from the launch of Oxfam’s new ‘How To’ Guide to Measuring Women’s Empowerment. This is essentially a manageable algorithm, into which program staff can plug their data in order to receive a single number representing a complex phenomenon. And while that makes a certain amount of principled sense (we’re all big fans of bespoke measurement approaches), it raises some questions too.
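To make that “plug in your data, get one number” mechanic concrete, here’s a minimal sketch of the weighted-aggregation arithmetic a composite index like this typically involves. The indicator names, weights, and scaling below are hypothetical illustrations, not Oxfam’s actual methodology; the point is just that the composite is a weighted average of context-specific indicators, with a per-indicator breakdown available alongside the headline number.

```python
def empowerment_index(scores, weights):
    """Composite index as a weighted average of indicator scores.

    scores  : dict of indicator -> value already scaled to [0, 1]
    weights : dict of indicator -> non-negative weight
    Returns (composite, breakdown), where breakdown gives each
    indicator's weighted contribution to the composite, so the
    single number can still be unpacked indicator by indicator.
    """
    total_w = sum(weights.values())
    composite = sum(scores[k] * weights[k] for k in weights) / total_w
    breakdown = {k: scores[k] * weights[k] / total_w for k in weights}
    return composite, breakdown

# Hypothetical indicators for one respondent, chosen per context.
scores = {"decision_making": 0.8, "asset_control": 0.5, "mobility": 1.0}
weights = {"decision_making": 2, "asset_control": 1, "mobility": 1}

composite, breakdown = empowerment_index(scores, weights)
print(f"composite: {composite:.3f}")  # (0.8*2 + 0.5 + 1.0) / 4 = 0.775
```

Notice how much rides on the weights: change them and the same respondent gets a different headline number, which is part of why a single figure for a complex phenomenon raises questions.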
Continue reading “Measuring women’s empowerment: pushing composite indicator frameworks on projects?”
Warning: long post, deep weeds.
Last week saw some really interesting thinking in the development economics blogosphere, focused on design questions for external validity (the applicability of case-specific findings to other cases). This is a central question for research on civic tech and accountability programming, which talks a lot about wanting an evidence base, but remains dominated by case studies, enthusiasm and a handful of amateur researchers. We see this clearly a couple times a year (at TICTeC, the Open Data Research Forum), where the community gathers to talk about evidence, share novel case studies, acknowledge that we can’t generalize from case studies, and then talk some more as if we can.
If we want to develop the kinds of general heuristics and rules of thumb that would be useful for the people who actually design and prioritize programming modalities, the kind that would make it possible to learn across country contexts, then we have to be smarter about how we design our research and conceptualize our evidence base. There’s a lot to learn from development economics in that regard. Development studies is like pubescent civic tech and accountability’s older uncle, who used to be cool, but still knows how to get shit done. In particular, there was a lot to learn from last week’s discussions about generalization and validity.
Continue reading “Case by case: what development economics can teach the civic tech and accountability field about building an evidence base”
E-government projects are more successful when formal decision-making processes include stakeholders and actively manage risk, according to a survey of Swedish national government agencies and municipalities (N=550). Meanwhile, @timdavies is coauthor on a paper in Science & Technology Studies that tracks how data standards influence bureaucratic processes for opening government data. The paper warns that standards can in some ways obstruct actual engagement with users, and puts a useful focus on people in institutions just trying to get things done.
Mixed findings on social media effects this week. Chinese participants in political discourse on Weibo experience that discourse as deliberative, despite the interactions being “mostly non-dialogical and non-creative in nature, and characterised by homophily and polarisation” (new study, n = 417). In the US, social media played a definitive role in determining how the Tea Party negotiated its identity and relationship with the Republican Party in the course of Trump’s rise to power. Not least, it allowed for quick differentiation of activist perceptions on appropriate degrees of openness, which seem to correspond with political objectives and conceptions of political efficacy. This is described by a new paper in Social Media + Society (not to be confused with New Media & Society; I recently made that mistake > facepalm), which offers a fascinating case, without clearly actionable findings.
Continue reading “research links w 21-17”
The University of Vienna has a new report on far-right attacks on the press, a concept they sketch to include legal action, abuse of power and online abuse. The report describes a delicate relationship between the rise of far-right nationalism/populism and declines in the quality of European democracy. Meanwhile, a new report on Media Manipulation describes not only the tactics and platforms that “far-right groups” are using to manipulate media, but also the social and economic factors that make traditional media vulnerable.
A survey of Chinese localities suggests that “technology competence, top management support, perceived benefits, and citizen readiness significantly influence assimilation of social media in local government agencies.” And globally it doesn’t seem to be going well, at least in terms of responsive web design. Global research suggests that government websites still suck on mobiles. Or more carefully put: “The results show that only 0.03% of government websites comes close to adhere to mobile web best practices (MWBP) guidelines with compliant rate greater than 80%.” But every little bit counts. Even when governments are lackadaisical on social media, having a Facebook page can still spur citizen engagement, at least according to a study of 18 months of communications in La Paz, Mexico. Continue reading “research links w 19 & 20-17”