In its latest effort to protect users from online hate speech and bullying, Instagram is using a new method of filtering message requests for offensive words, phrases, and even emojis to keep them from ever reaching people’s inboxes. It is also working to make it harder for bullies to repeatedly target the same people by opening different accounts.

“Unfortunately, there are many groups and individuals sending abusive DMs and DM requests,” Damon McCoy, associate professor of computer science and engineering at New York University’s Tandon School of Engineering, told Lifewire in an email. “A lot of it is misogynistic and bigoted. Some of it is focused on exerting power and control. For example, shaming or making someone feel unsafe so that they post less frequently or self-censor.”

Instagram Filters DMs Based on Keywords

Instagram has already introduced ways to filter out negative comments. Now, it is rolling out an optional feature that automatically screens offensive direct message requests before they reach users’ inboxes, using a list of words, phrases, and emojis. Instagram said it worked with anti-bullying and anti-discrimination organizations to compile the list of offensive words and phrases it will use to filter messages, and it also will let users manually add their own. Requests flagged as offensive will end up in their own folder, without the text immediately visible.

To turn DM filtering on or off in Instagram, go to Privacy Settings and find the section called “Hidden Words.” There, you can control filters for both messages and comments.

A Facebook company spokesperson told Lifewire in an email that the features would first roll out in the UK, France, Germany, Ireland, Australia, New Zealand, and Canada, with additional countries to follow soon. Instagram also will let users block not just a person’s current account, but any new ones they may open in the future. The social media company said it will offer this feature around the world in a few weeks.
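Conceptually, this kind of filtering amounts to checking each incoming message request against a combined block list of platform-supplied and user-supplied terms, then routing flagged requests to a hidden folder. The sketch below is purely illustrative and is not Instagram’s actual implementation; the placeholder terms, the folder names, and the helper functions are all assumptions made for the example.

```python
# Illustrative sketch of keyword-based DM request filtering.
# Not Instagram's actual code; terms and routing are assumptions.

DEFAULT_BLOCKLIST = {"offensive_word", "offensive_phrase", "🤬"}  # placeholder terms

def build_filter(user_terms=None):
    """Combine the platform's default list with the user's custom terms."""
    terms = set(DEFAULT_BLOCKLIST)
    if user_terms:
        terms.update(t.lower() for t in user_terms)
    return terms

def route_request(message: str, blocklist: set) -> str:
    """Send flagged requests to a hidden folder; pass the rest through."""
    text = message.lower()
    if any(term in text for term in blocklist):
        return "hidden_requests"  # text not immediately visible to the user
    return "inbox"

# Example: a user adds their own term, as the feature allows.
filters = build_filter(user_terms=["some_custom_insult"])
print(route_request("hello there", filters))             # -> inbox
print(route_request("you some_custom_insult", filters))  # -> hidden_requests
```

A real system would need far more than substring matching, handling misspellings, word boundaries, and context, which is part of why, as the experts below note, keyword filters alone can’t catch every abusive message.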

Will These New Measures Work?

These latest efforts build on Instagram’s previous attempts to stop bullying and harassment, and come after some researchers have called on the platform to do more to combat the issue. Experts say that while the latest measures are a step in the right direction, they likely won’t curb all instances of abuse if a user is determined to bother someone.

J. Mitchell Vaterlaus, an associate professor at Montana State University, said the changes are “a good step forward and may decrease random or non-targeted abuse,” but pointed out that bullies can find a variety of other ways to target specific people, including private comments, public messages, videos, and sharing their content without permission.

“While it is great to reduce opportunity via one aspect of the app, there will likely be alternative pathways within the app for the cyberbully to utilize,” Vaterlaus told Lifewire in an email. “It is critical for the user who is being bullied to have knowledge of how to block someone, how to seek support, how to change privacy settings, and how to report someone.”

However, there is evidence that methods like controlling comments have had at least some success. “Our research shows that many of our recent anti-bullying tools—like comment controls and nudge warnings—have been effective in helping people manage bullying,” the Facebook company spokesperson said.

McCoy, along with researchers at New York University and the University of Illinois at Chicago (UIC), found in a 2017 study that people likely to experience harassment after being “doxed”—having their personal information maliciously exposed online—deleted their accounts less frequently when Facebook and Instagram started filtering abusive comments.

Stopping Abuse Is a Complex Process 

So, why hasn’t Instagram been able to completely stop bullying and abuse on its platform?

“This is complex, in part because context matters so much,” the Facebook company spokesperson told Lifewire. “Determining what qualifies as bullying on Instagram is a big challenge, and factors such as context and intent are very important.”

“It is difficult to prevent all harassment, since it is often contextual and might require members of the targeted community to recognize that it is harassment,” McCoy said. “As an example, the message could say that ‘there are police protecting the voting location.’ To members of immigrant communities, that would be a threatening and harassing message.”