“That’s why I’m so sick of it. We’re tired,” popular Black influencer Ziggi Tyler said in a recent viral video on TikTok. “Anything about Black people is inappropriate content,” he continued later in the video.
Tyler was expressing his frustration with TikTok over a discovery he made while editing his bio on the app’s Creator Marketplace, which connects popular account holders with brands that pay them to promote products or services. Tyler noticed that when he typed phrases about Black content into his Marketplace creator bio, such as “Black Lives Matter” or “Black success,” the app flagged the content as “inappropriate.” But when he typed in phrases like “white supremacy” or “white success,” he received no such warning.
For Tyler and many of his supporters, the incident seemed to fit into a larger pattern of how Black content is moderated on social media. They said it was proof of what they saw as the app’s racial bias against Black users; some urged their followers to quit the app, while others tagged TikTok’s corporate account and demanded a response. Tyler’s original video of the incident received over 1.2 million views and over 25,000 comments; his follow-up video received almost a million more views.
“I’m not going to sit here and let this happen,” Tyler, a 23-year-old graduate from Chicago, told Recode. “Especially on a platform that makes all of these pages say things like, ‘We stand with you, it’s Black History Month in February.’”
A spokesperson for TikTok told Recode that the issue was an error with its hate speech detection systems, that the company is actively working to resolve it, and that it is not indicative of racial bias. TikTok’s policies do not restrict posting about Black Lives Matter, according to the spokesperson.
In this case, TikTok told Recode that the app mistakenly flagged phrases like “Black Lives Matter” because its hate speech detector is triggered by a combination of words involving “Black” and “audience” – because “audience” contains the word “die.”
“Our TikTok Creator Marketplace protections, which flag phrases typically associated with hate speech, were erroneously set to flag phrases without respect to word order,” a company spokesperson said in a statement. “We recognize and apologize for how frustrating this was to experience, and our team is working quickly to fix this significant error. To be clear, Black Lives Matter does not violate our policies and currently has over 27 billion views on our platform.” TikTok said it contacted Tyler directly; he did not respond.
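The failure mode TikTok describes – matching keyword combinations anywhere in the text, regardless of word order or word boundaries – can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction for illustration; the pattern list and function names are assumptions, not TikTok’s actual code.

```python
# Hypothetical sketch of an order-insensitive keyword filter of the kind
# TikTok described. A pattern is a set of substrings; text is flagged if
# every substring appears anywhere in it, in any order, even inside a
# longer word -- which is exactly what produces the false positive.
HATE_PATTERNS = [
    {"black", "die"},  # illustrative pattern, not a real rule list
]

def is_flagged(text: str) -> bool:
    lowered = text.lower()
    return any(
        all(term in lowered for term in pattern)
        for pattern in HATE_PATTERNS
    )

# "audience" contains the substring "die", so an innocuous phrase that
# also contains "Black" trips the pattern:
print(is_flagged("Black audience"))   # True  (false positive)
print(is_flagged("white audience"))   # False (no "black" substring)
```

Matching on whole words (e.g., tokenizing the text first) or on ordered phrases rather than unordered substring sets would avoid this particular false positive, which is presumably the kind of reconfiguration the company’s statement refers to.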
But Tyler said he didn’t find TikTok’s explanation to Recode adequate and felt the company should have identified a problem with its hate speech detection system earlier.
“Regardless of the algorithm and how it was detected, someone must have programmed that algorithm,” Tyler told Recode. “And if [the problem] is the algorithm, and the Marketplace has been available for , why wasn’t this a conversation you had with your team, knowing that there were racial controversies?” he asked.
Tyler is not alone in his frustration. He is one of many Black creators who have recently protested against TikTok because they say they are unrecognized and underserved. Many of these Black TikTokers are participating in what they call the “#BlackTikTok strike,” in which they refuse to create original dances to a hit song because they are angry that the app’s Black creators are not being properly credited for the viral dances they first choreograph and that other creators then imitate.
These issues are also linked to another criticism leveled at TikTok, Instagram, YouTube, and other social media platforms over the years: that their algorithms, which recommend and filter the posts everyone sees, often carry inherent racial and gender bias.
In 2019, for example, a study showed that leading AI models for detecting hate speech were 1.5 times more likely to flag tweets written by African Americans as “offensive” compared with other tweets.
Such findings have fueled an ongoing debate about the merits and potential harms of relying on algorithms – especially the development of AI models – to automatically detect and moderate social media posts.
Large social media companies like TikTok, Google, Facebook, and Twitter, while they acknowledge that these algorithmic models can be flawed, still make them a key part of their expanding hate speech detection systems. They say they need a less labor-intensive way to keep up with the ever-increasing volume of content on the internet.
Tyler’s TikTok video also shows the tensions around these apps’ lack of transparency in how they moderate content. In June 2020, during the Black Lives Matter protests across the United States, some activists accused TikTok of censoring some popular #BlackLivesMatter posts – which for a while the app displayed as having zero views, even when they actually had billions. TikTok denied this and said it was a technical issue affecting other hashtags as well. And at the end of 2019, TikTok executives reportedly discussed tamping down political discussion on the app, according to Forbes, to avoid political controversy.
A spokesperson for TikTok acknowledged the broader frustrations over Black representation on the platform, saying that earlier this month the company launched an official @BlackTikTok account to help foster TikTok’s Black community, and that, overall, its teams are committed to developing recommendation systems that reflect inclusivity and diversity.
But for Tyler, the company still has a lot of work to do. “This case is just the tip of the iceberg and below the water level you have all of these issues,” Tyler said.