Monday 7 March 2016

Antisocial Behaviour in Online Discussion Communities


User-generated content is critical to the success of any online platform. Sites like Facebook and Stack Overflow engage their users by allowing them to contribute and discuss content, strengthening their sense of ownership and loyalty. While most users are civil, some engage in antisocial behaviour, negatively affecting other users and harming the community. Many platforms implement mechanisms designed to discourage antisocial behaviour: community moderation, up- and down-voting, the ability to report posts, mute functionality, and, more drastically, completely blocking a user's ability to post.

Antisocial behaviour is a significant problem that can spill over into offline harassment and threats of violence. This motivates several questions:
- When, within a user's lifetime on an online discussion forum, does this behaviour first appear?
- What role does the community play in how an antisocial user's behaviour evolves?
- Can an effective prediction mechanism identify such users early on?

In the rest of this text we refer to Future-Banned Users as FBUs and Never-Banned Users as NBUs. Data was collected from three online discussion communities (Breitbart, IGN, and CNN), and the observations presented here refer to this data. Broadly, FBUs tend to write less similarly to other users, and their posts are harder to understand. They use fewer positive words and more profanity, as seen in Figure 1 (a), (b) and (c). They receive more replies than average users, suggesting that they may succeed in luring others into fruitless, time-consuming discussions. The behaviour of FBUs worsens over their active tenure in a community, and communities may play a part in incubating it: while communities appear initially forgiving, they become less tolerant of such users the longer they remain. This results in an increased rate at which their posts are deleted, even after controlling for post quality.
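One of the signals above, how similarly a user writes compared to the rest of the community, can be approximated with a simple bag-of-words cosine similarity. The snippet below is only an illustrative sketch with made-up example posts, not the measure used in the study:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts,
    a rough proxy for how similarly two users write."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical posts: a "typical" community post overlaps far more with
# community language than an abusive one does.
community = "great point thanks for sharing this thoughtful discussion"
typical   = "thanks for this great discussion really thoughtful point"
troll     = "you are all idiots this thread is garbage"

print(cosine_similarity(community, typical))  # high overlap
print(cosine_similarity(community, troll))    # low overlap
```

In practice one would compare each user's posts against an aggregate of the community's recent posts rather than a single reference post.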



A user’s posting behaviour can be used to predict who will be banned in the future. Using features that capture various aspects of antisocial behaviour (post content, user activity, community response, and the actions of community moderators), we find that we can predict with over 80% AUC (area under the ROC curve) whether a user will subsequently be banned. The features indicative of antisocial behaviour that we discover are not community-specific.
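To make the reported metric concrete, AUC can be computed as the probability that a randomly chosen FBU receives a higher classifier score than a randomly chosen NBU (the Mann-Whitney formulation). The feature values and labels below are hypothetical, purely to show the calculation:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-user score (e.g. fraction of posts deleted by
# moderators); label 1 = later banned (FBU), 0 = never banned (NBU).
scores = [0.60, 0.45, 0.30, 0.25, 0.10, 0.05]
banned = [1,    1,    0,    1,    0,    0]

print(auc(scores, banned))
```

An AUC of 0.5 corresponds to random guessing, so the paper's 80%+ figure indicates the features carry a strong signal.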

On average, the deletion rate of an FBU’s posts tends to increase over their life in a community, while the deletion rate for NBUs remains relatively constant, as shown in Figure 2. The increase in the post deletion rate could have two causes: (H1) a decrease in posting quality, i.e. FBUs tend to write worse later in their life; or (H2) an increase in community bias, i.e. the community comes to recognize these users over time and becomes less tolerant of their behaviour, penalizing them more heavily. Further, while both FBUs and NBUs write worse over time, the decline in quality is larger for FBUs.
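The trend behind Figure 2 can be checked by splitting each user's chronologically ordered posts into equal-sized periods and measuring the deleted fraction in each. The post histories below are invented for illustration, not data from the study:

```python
def deletion_rate_by_period(deleted_flags, n_periods=4):
    """Split a user's chronologically ordered posts into n_periods
    equal-sized buckets and return the fraction deleted in each."""
    size = len(deleted_flags) // n_periods
    return [
        sum(deleted_flags[i * size:(i + 1) * size]) / size
        for i in range(n_periods)
    ]

# 1 = post deleted by moderators, 0 = kept (hypothetical histories).
fbu_history = [0, 0, 1, 0,  0, 1, 1, 0,  1, 1, 0, 1,  1, 1, 1, 1]
nbu_history = [0, 0, 1, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 1, 0, 0]

print(deletion_rate_by_period(fbu_history))  # rising trend
print(deletion_rate_by_period(nbu_history))  # roughly flat
```

A rising curve for FBUs is consistent with both H1 and H2; disentangling the two requires controlling for post quality, as the study does.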


The goal here is to build tools that allow automatic, early identification of users who are likely to be banned in the future. The developed methodology can accurately differentiate FBUs from NBUs using only a user's first ten posts. By finding these users sooner, community moderators may be able to police their communities more effectively. However, such mechanisms still misidentify about one in five users as antisocial. One way to avoid incorrectly blocking innocent users is to trade off overall performance for higher precision and have a human moderator approve any bans; a better response may instead involve giving antisocial users a chance to redeem themselves.
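The precision trade-off mentioned above amounts to raising the decision threshold on the classifier's score: fewer users are flagged, so fewer innocent users are caught, at the cost of missing some genuine FBUs. The scores and labels below are hypothetical, purely to demonstrate the mechanics:

```python
def precision_recall(scores, labels, threshold):
    """Flag users whose score meets the threshold; return (precision, recall).
    Precision: fraction of flagged users who are truly FBUs.
    Recall: fraction of all FBUs that get flagged."""
    flagged = [y for s, y in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / sum(labels)
    return precision, recall

# Hypothetical classifier scores from each user's first ten posts;
# label 1 = later banned (FBU), 0 = never banned (NBU).
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   1,   0,   0,   0]

print(precision_recall(scores, labels, 0.5))   # lenient: one innocent user flagged
print(precision_recall(scores, labels, 0.75))  # strict: no false flags, lower recall
```

With human moderators approving each flagged ban, the stricter threshold keeps their review queue short and almost entirely free of innocent users.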

Further Reading:
https://cs.stanford.edu/people/jure/pubs/trolls-icwsm15.pdf
