“Individual-level characteristics such as the race, gender, wealth and age of councillors meaningfully predict attitudes and perceptions on a range of important questions about voice and accountability.”
That’s the money line from the MAVC-supported research on municipal politicians in South Africa (survey, n=1032 from 21 municipalities). The main recommendation is that accountability initiatives should take differences between councillors into account and avoid a one-size-fits-all approach. Makes sense, but this report is worth a closer look on two counts.
Firstly, it has a fantastically transparent methods section about conducting real-life research in unpredictable environments. This includes a frank (if brief) discussion of self-selection bias among survey respondents and of how data collection was delayed by “literal physical fighting between elected councillors” (7). Kudos. This is excellent practice; it should be lauded and mimicked.
Secondly, it raises some open questions about the generalizability of the findings. Notably, this research is based on a survey of “elected South African local municipal councillors in urban and near-urban areas” in the first year of their term. That’s a pretty specific population, which has some consequences.
Don’t get me wrong, the findings are fascinating and unquestionably important for anybody campaigning to reach municipal councillors in urban South African municipalities. But it’s not clear to whom else they would or could apply. Take, for example, the fact that 95% of respondents agreed with the statement that “in areas with service delivery protests, it is important to meet with the protesters and hear their grievances” (14). That’s tough to generalize. It’s even harder to disentangle from any of the other findings.
To be clear, the authors of this report don’t claim generalizability, but they don’t raise the question either. What would make this research useful to a wider audience is a systematic discussion of the contextual factors that define it. That discussion is sort of there in the narrative, but it isn’t treated in a way that makes it easy to compare with other contexts. That’s not a critique of the research per se, but it does represent an opportunity cost (especially for MAVC research, which is supposed to inform program design).
Not spelling out the implications and contours of niche research populations also makes it really easy for enthusiasts to casually imply that the findings can be generalized, because 280 characters. That’s not the researchers’ fault, but it’s something that civic tech research infomediaries should take into account. In our unconferency-posture-rich-twittosphere, where no one has time to read anything but we all like the idea of evidence, we need to be clear about what we’re learning and where it matters.