
TikTok took down 450,000 videos uploaded in Kenya between January and March 2024 for breaching its community standards, the platform has disclosed in its latest enforcement report.

The ByteDance-owned social media app said the flagged content was removed for violating a range of guidelines related to authenticity, user safety, privacy, and sensitive content.

On a global scale, TikTok removed over 211 million videos during the same period. Of those, more than 187 million were taken down automatically using the platform’s moderation tools, while over 7.5 million videos were reinstated after review.

Between January and March, TikTok also shut down 6.4 million accounts, citing reasons such as suspected underage use, fake identities, and other breaches of its rules. Additionally, more than 19 million live sessions were suspended, with just over 1.2 million later reinstated.

Comment sections were not spared either. The platform eliminated more than 1.1 billion comments found to violate its policies on respectful and safe engagement.

The report indicates a marked improvement in proactive content moderation during this period. TikTok said it removed the majority of harmful videos within 24 hours of posting, with over 99% taken down before any user reported them and more than 90% removed before they received any views.

The content removed was flagged under several policy categories including:

  • Integrity and authenticity
  • Safety and civility
  • Privacy and security
  • Mental and behavioural health
  • Regulated goods and commercial activity
  • Sensitive and mature themes

In its broader crackdown on inauthentic activity, TikTok said it prevented over 146 million fake accounts, blocked over 8 billion fake follow requests, and removed more than 6 billion fake likes during the same period.

The company also revealed it is increasingly relying on artificial intelligence to improve the speed and accuracy of content moderation. It is now testing large language models (LLMs) to better detect and manage violative comments and other user-generated content.

“LLMs are capable of understanding complex human language and carrying out specific moderation tasks with a high degree of accuracy and speed,” TikTok noted.

TikTok further stated that its automation-driven approach is designed to reduce the emotional toll on human moderators by minimizing their exposure to harmful or graphic content.