By Akinlolu Aguda
On Twitter, free speech is not always free. It comes with strings attached, strings sometimes tangled in layers of community policy and individual interpretation.
Everyday conversations are held on Twitter all the time. We greet the world (#HelloWorld), we make announcements, we discuss current issues, and we share personal messages – free speech in action. Setting aside the sale of personal data and the advertisements targeted to our interests and activities, free speech on Twitter is free. But what happens when a persistent user goes overboard with their right to free speech? Say they begin to infringe on other users’ right to express themselves (think Chuck Johnson or Milo Yiannopoulos), or they take the liberty of propagating hate and violence, like the recently suspended Daily Stormer. How does Twitter deal with this when it is, after all, still free speech?
Based on the rules stated on its policy page, Twitter condones no form of abusive behavior, hate speech, or negative rhetoric. The issue, however, is not whether Twitter appropriately rebukes such activity as an infringement on the rights of others, but how it handles cases once things have become objectionable.
As expected, violating Twitter’s terms subjects an account to action by the company. One may reasonably wonder, then: how does Twitter review its millions of active users every day? In a recent transparency report, Twitter announced that over the course of about twenty-three months (through January of this year), more than 935,000 accounts had been suspended for the promotion of violence, citing the use of proprietary tools to facilitate the effort.
In 2015, for example, the e-commerce company Lucrazon found that its Twitter account had been flagged as spam and suspended. After Lucrazon’s account handler contacted Twitter support, a representative revealed that Twitter uses automated systems to manage the expulsion of spam accounts, often suspending them in bulk. Another user, Joseph Cox of the Daily Beast, recently had his Twitter account suspended after writing on a then-controversial topic. He soon found out that the suspension was triggered by the swarm of bot accounts that followed him shortly after he published his work, tripping Twitter’s anti-spam tools into action. Cox’s account had been targeted by an opposing party, calling into question the justifiability of Twitter’s sweeping suspension of suspected spam accounts.
The application of this automated expulsion method, though seemingly efficient, is just as debatable. Is it good practice to carelessly inconvenience policy-abiding users, diminishing their experience and, ultimately, the company’s reputation? Consider how easily a user can be harassed through manipulation of this susceptible anti-spam software – is that a justifiable trade-off? Perhaps someone who has never had to recover a suspended Twitter account would suggest that the greater good of safe spaces for open conversation is worth the occasional inconvenience of a wrongful suspension.
Among the activities listed under ‘Abusive behavior’ on Twitter’s support page are ‘Multiple account abuse’, ‘Impersonation’, and ‘Harassment’; under the spam descriptions, standouts include “aggressive following and unfollowing”, sending “large numbers of duplicate replies and mentions”, and “creating misleading content and interactions”. Most of these activities are known to be facilitated by automated programs. A 2013 New York Times article summarizes some bot applications common at the time, from automatically replying to critical tweets with counteracting links to serving as marketing tools for promoting e-cigarettes.
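To make the spam signals above concrete, here is a toy heuristic in Python that flags accounts matching two of the quoted behaviors. This is purely illustrative – the data fields, thresholds, and logic are invented for this sketch and bear no relation to Twitter’s actual (proprietary) detection systems:

```python
# Toy spam heuristic, NOT Twitter's real system. All field names and
# thresholds below are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class AccountActivity:
    follows_today: int = 0        # accounts followed in the last 24h
    unfollows_today: int = 0      # accounts unfollowed in the last 24h
    replies: list = field(default_factory=list)  # text of recent replies


def looks_like_spam(a: AccountActivity) -> bool:
    # "aggressive following and unfollowing": heavy churn both ways
    aggressive_churn = a.follows_today > 100 and a.unfollows_today > 100
    # "large numbers of duplicate replies": many replies, few unique texts
    duplicate_heavy = (
        len(a.replies) >= 20
        and len(set(a.replies)) <= len(a.replies) // 4
    )
    return aggressive_churn or duplicate_heavy


bot = AccountActivity(follows_today=500, unfollows_today=450,
                      replies=["check this link!"] * 30)
human = AccountActivity(follows_today=3,
                        replies=["nice!", "congrats", "lol"])
print(looks_like_spam(bot), looks_like_spam(human))  # True False
```

Even this crude sketch hints at the problem Cox ran into: the signals measure behavior *around* an account as much as behavior *by* it, so an adversary who can generate fake activity can push an innocent account over a threshold.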
In fact, a recent study by Indiana University suggests that up to fifteen percent of Twitter users are bots – an estimate derived mostly from assessing English-speaking users alone. Knowing this, one can only imagine what the actual number of Twitter bots is, as the platform itself can hardly tell what portion of its users are real people and what portion are robots. As a company openly dedicated to the notion of free speech and the use of information to positive effect, Twitter needs new, innovative (and reasonable) ways to properly handle its bot accounts and their activities – and, perhaps, to go one step further in maintaining its safe space for conversation.