Instagram's next big fix for online bullying is coming in the form of comments flagged by artificial intelligence and the ability for users to restrict accounts from publicly commenting on their posts.
The team is launching a test soon that'll give users the power to essentially "shadow ban" a user from their account, meaning the account holder can "restrict" another user, which makes that person's comments visible only to the commenter. It also hides when the account holder is active on Instagram or when they've read a direct message.
The company also separately announced today that it's rolling out a new feature that'll leverage AI to flag potentially offensive comments and ask the commenter if they really want to follow through with posting. They'll be given the opportunity to undo their comment, and Instagram says that during tests, the prompt encouraged "some" people to reflect and undo what they wrote. Clearly, that "some" stat isn't concrete, and presumably, people posting offensive content know that they're doing so, but maybe they'll take a second to reconsider what they're saying.
Instagram has already tested multiple anti-bullying features, including an offensive comment filter that automatically screens bullying comments that "contain attacks on a person's appearance or character, as well as threats to a person's well-being or health," as well as a similar feature for photos and captions. The features are much needed, but they m