Captions explain the context of photos and videos. On Instagram, captions give life to the already colorful community.
However, anything can be typed as a caption. Besides opinions or explanations, users can also insert hashtags and, at times, offensive words. That is a problem for the photo- and video-sharing platform as it tries to curb online bullying.
This is why Instagram has announced the rollout of a feature that warns users when their caption may be offensive.
Specifically, if the caption of a user's post contains an offensive word, the AI flags it and warns the user that the caption looks “similar to others that have been reported.” It then gives them the option to edit the caption, to “learn more”, or to share it regardless.
According to Instagram in its press release:
"As part of our long-term commitment to lead the fight against online bullying, we’ve developed and tested AI that can recognize different forms of bullying on Instagram."
In addition to limiting the reach of bullying, this feature should help educate users on what is and isn't allowed on Instagram.
"To start, this feature will be rolling out in select countries, and we’ll begin expanding globally in the coming months," said Instagram.
This is an extension of a feature Instagram rolled out earlier in 2019, in which AI warns users that a comment they’re about to post might be considered offensive.
The strategy is that users who are about to post something offensive are given a chance to reword what they wrote.
Considering how popular Instagram is, even a fraction of users stopping themselves after being prompted to reconsider what they’re saying could make a meaningful difference.
However, Facebook, the owner of Instagram, is a supporter of free speech. It may try to curb or stop online harassment, but at its core, the company may not deter anyone who truly wants to post something offensive.
Still, these features show that Instagram neither denies knowledge of, nor disclaims responsibility for, harmful actions committed by its users.