We had some discussion here a few years ago about implementing some scoring method for comments — there were some proponents who thought it would be a useful way to get community input. I’ve always been dead-set against it. It turns out I have scholarly justification now.
Abi Sutherland discusses a psychology paper at Making Light, which examined the effect of up- and down-voting on large user communities at CNN, IGN, Breitbart (oops, there’s a dollop of poison in the database), and allkpop, a Korean entertainment site. Cheng, Danescu-Niculescu-Mizil, and Leskovec proposed to test a prediction of the operant conditioning model, that peer feedback would lead to a gradual improvement in the quality of posts. That’s not what they saw.
By applying our methodology to four large online news communities for which we have complete article commenting and comment voting data (about 140 million votes on 42 million comments), we discover that community feedback does not appear to drive the behavior of users in a direction that is beneficial to the community, as predicted by the operant conditioning framework. Instead, we find that community feedback is likely to perpetuate undesired behavior. In particular, punished authors actually write worse in subsequent posts, while rewarded authors do not improve significantly.
It’s a kind of backlash — downvote a commenter, and they don’t see it as a suggestion to change for the better, but instead see it as a challenge: those other assholes need to change to agree with me, so I’m going to rage even more at them. We don’t do downvoting here, but I certainly do weigh in with the landslide election of the banhammer, and you would not believe how much furious fulminating I get in email and in the spam queue over that. Well, maybe you would — you see lots of those jokers then charging out to other sites to complain about the vast injustice done to them.
From the authors’ conclusion:
In contrast to previous work, we analyze effects of feedback at the user level, and validate our results on four large, diverse comment-based news communities. We find that negative feedback leads to significant changes in the author’s behavior, which are much more salient than the effects of positive feedback. These effects are detrimental to the community: authors of negatively evaluated content are encouraged to post more, and their future posts are also of lower quality. Moreover, these punished authors are more likely to later evaluate their fellow users negatively, percolating these undesired effects through the community.
Sutherland has also experienced this phenomenon.
More than once, I’ve watched groups of people gather on particular LiveJournals, blogs, and chatrooms to spin up their energy and hone their arguments, then go back to the “main” venues to continue the discussion. These side-channels act as adjuncts to the visible conversation, where people not actively participating can research claims, suggest arguments, and feed support and affirmation to those who are.
This is not, in itself, a good thing or a bad thing; it’s just how conversations work on the internet at the moment. I’ve participated in it, both unconsciously and knowingly, trying to move the “group mind” in the directions that I find best and most ethical.
But when you apply the study conclusions to the internet as a whole, you get exactly what we’re seeing now: communities like Reddit and 4chan are criticized (negative feedback), and begin to see themselves as persecuted. Their worst sides gain strength. The volume of negative output increases, and the gleeful nastiness drives out thoughtful, balanced conversation, even within the communities themselves.
This psychology stuff sure seems kind of useful, doesn’t it?