Among all the social media platforms, Twitter is widely considered the most open tool for expressing and scanning opinions. However, over the last few years the Twittersphere has turned toxic, especially towards women, for whom using Twitter and similar social media platforms comes at a massive cost.
Every day, they are subjected to trolling, online abuse, threats, harassment, and bullying. Often, women try to escape this toxicity by blocking a harassing user or by reporting abusive and derisive tweets. The question that remains is: for how long will this shunning strategy be effective?
Twitter has often assured users that the privacy and security of women remain at the top of its agenda. However, the sentiment is the exact opposite among women, who continue to be on the receiving end of online abuse.
A January 2020 report claims that even women politicians and famous public figures are not spared by the online abuse brigade. But all this is set to change post-August 2020!
Abuse Detecting Algorithm
A team of researchers from the Queensland University of Technology (QUT), Australia, has developed a new algorithm that tracks harassing and abusive tweets so they can be removed from the Twittersphere. The team drew a sample of one million everyday tweets that hinted at misogynistic content. The selection was then refined using three abusive keywords: whore, slut, and rape. Filtering further on the intent and context of each tweet brought the number down from a million to 5,000 misogynistic tweets.
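The keyword-refinement step described above can be sketched as a simple filter. This is a hypothetical illustration, not the researchers' actual pipeline: the function name, sample tweets, and matching rules are all assumptions for demonstration.

```python
# Hypothetical sketch of the keyword-refinement step: keep only tweets
# containing one of the three filter keywords named in the article.
ABUSIVE_KEYWORDS = {"whore", "slut", "rape"}

def contains_abusive_keyword(tweet: str) -> bool:
    """Return True if the tweet contains any of the filter keywords."""
    words = tweet.lower().split()
    return any(word.strip(".,!?") in ABUSIVE_KEYWORDS for word in words)

# Illustrative sample tweets (invented for this sketch).
sample = [
    "What a lovely day at the beach",
    "You are such a slut, go away",        # matches a keyword
    "Rape threats are never acceptable",   # matches, but condemns abuse
]

flagged = [t for t in sample if contains_abusive_keyword(t)]
print(len(flagged))  # → 2
```

Note that keyword matching alone over-flags: the third sample tweet condemns abuse rather than committing it. That is exactly why the researchers had to refine the selection by intent and context rather than stopping at keywords.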
But the Twittersphere is full of noisy and complex tweets, so how did the researchers manage to detect misogynistic ones? The biggest challenge in detecting a misogynistic tweet lies in understanding its context, the researchers claimed. They added that teaching a machine a spoken language is a complicated job, as the language continues to evolve every day with new vocabulary.
Recognizing this, they decided to develop a deep learning algorithm called ‘Long Short-Term Memory’ (LSTM) with Transfer Learning. This means the machine can return to its previous understanding of terminology and continue to evolve and update itself in parallel with the actual language. In this way, the machine keeps developing its contextual and logical interpretation of the language.
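At the heart of an LSTM is a gated cell that decides what to remember and what to forget as it reads a sequence word by word. The following is a minimal single-unit sketch of that gating mechanism in plain Python; the weights are arbitrary placeholders (a real model, including the researchers' own, learns them from data), and this is not the QUT team's implementation.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell.

    x: current input feature; h_prev, c_prev: previous hidden and cell
    state; w: dict mapping each gate to (input weight, recurrent weight,
    bias). All weights here are illustrative, not learned.
    """
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate memory
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g    # keep part of the old memory, add new memory
    h = o * math.tanh(c)      # hidden state passed to the next step/layer
    return h, c

# Arbitrary demo weights and a toy three-step "sequence" of token features.
w = {gate: (0.5, 0.1, 0.0) for gate in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in (1.0, -0.5, 0.3):
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

The forget gate is what lets the network carry context across a whole tweet, which is why this architecture suits the context-dependent judgement the researchers describe; transfer learning then reuses such pre-trained weights as the starting point when the vocabulary shifts.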
Eureka!
The key to the project's success was teaching the machine to differentiate context through text alone, without tone. The eureka moment arrived when the algorithm identified ‘go back to the kitchen’ as misogynistic even without any tonal cues, from the structural inequality the phrase expresses.
The researchers claim that the algorithm can identify misogynistic content with 75% accuracy, which is remarkable compared with other efforts investigating similar aspects of social media language. The research could well translate into a platform policy for Twitter, allowing it to remove any tweet the algorithm identifies as misogynistic and help stamp out online abuse.
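In classification terms, 75% accuracy means three out of every four of the algorithm's labels agree with the human-annotated ground truth. A quick sketch of that metric, with invented labels (1 = misogynistic, 0 = not):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the human-annotated labels."""
    assert len(predictions) == len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical annotations for eight tweets (1 = misogynistic, 0 = not).
labels      = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 0, 1, 1, 0]  # 6 of 8 correct

print(accuracy(predictions, labels))  # → 0.75
```

Accuracy alone can flatter a classifier when classes are imbalanced, so a deployed moderation policy would likely also weigh false positives (legitimate tweets removed) against false negatives (abuse missed).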
The algorithm was developed with the forward-looking aim of curbing online abuse towards women. However, it could also be extended to detect racism or abuse directed at people with disabilities. If successfully deployed, the algorithm could stand as a landmark achievement in machine learning and help create a safe online environment for all users.