Periscope is popular with users, particularly those with many followers, because it is live, unfiltered and open. That same openness, however, brings an increased risk of spam and abuse. To keep its users safe, the app has introduced comment moderation. “Today, we’re rolling out a comment moderation system that empowers our community to report and vote on comments that they consider to be spam or abuse,” wrote Periscope on its blog.
The system works as follows: viewers can flag a comment as spam or abuse during a broadcast. Periscope then randomly selects a few viewers, who are asked to vote on whether the comment is spam, abuse or OK. If the majority agrees that the comment is inappropriate, the commenter is notified that their commenting ability has been temporarily disabled. If the abuse is repeated, chat may be disabled for that user until the end of the broadcast.
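The flow described above can be sketched in a few lines of code. This is a hypothetical illustration, not Periscope’s actual implementation: the jury size, the strict-majority rule and the two-strike threshold are all assumptions made for the example.

```python
import random

def pick_jury(viewers, size=5, rng=random):
    """Randomly sample a handful of viewers to vote on a flagged comment.
    The jury size of 5 is an assumed value, not Periscope's."""
    return rng.sample(viewers, min(size, len(viewers)))

def is_abusive(jury_votes):
    """Return True if a strict majority of jurors voted 'abuse'."""
    abuse_votes = sum(1 for vote in jury_votes if vote == "abuse")
    return abuse_votes > len(jury_votes) / 2

class Commenter:
    """Tracks repeat offenses during one broadcast: the first majority
    verdict mutes the commenter temporarily; a second (assumed threshold)
    disables their chat for the rest of the broadcast."""

    def __init__(self):
        self.strikes = 0
        self.chat_disabled = False

    def record_verdict(self, abusive):
        if abusive:
            self.strikes += 1
            if self.strikes >= 2:
                self.chat_disabled = True

# Example: a flagged comment goes to a jury, whose votes decide the outcome.
viewers = [f"viewer_{i}" for i in range(100)]
jury = pick_jury(viewers)
verdict = is_abusive(["abuse", "abuse", "abuse", "ok", "ok"])

commenter = Commenter()
commenter.record_verdict(verdict)   # first strike: temporary mute
commenter.record_verdict(verdict)   # second strike: chat disabled
```

After the first guilty verdict the commenter is only muted temporarily; the `chat_disabled` flag flips on the second, mirroring the escalation the article describes.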
“We’ve designed this system to be very lightweight — the entire process above should last just a matter of seconds. That said, if people don’t want to participate, broadcasters can elect to not have their broadcasts moderated, and viewers can opt out of voting from their Settings. This system works in tandem with other tools that we have in place for our community. You can still report ongoing harassment or abuse, block and remove people from your broadcasts and restrict comments to people you know,” Periscope added.