Content Moderation at Scale: 2M+ Items Monthly for OTT Platform
Large-scale content moderation operation for a major OTT platform — covering UGC, comments, profile images, and livestream monitoring.
The Challenge
A major OTT platform with 22M monthly active users relied on a small internal team to moderate UGC, viewer comments, and livestream content. As a result, policy violations stayed live for 8+ hours on average, and advertisers raised growing brand-safety concerns.
Our Approach
Deployed 120 moderation specialists across 3 shifts, providing 18/7 human coverage; the remaining 6 overnight hours are handled by AI automation.
Structured moderation queues by content type: UGC, comments, profile images, and livestream real-time monitoring.
Built a client-specific policy-guidelines training programme: 40 hours of pre-deployment training plus monthly calibration sessions.
Implemented a 2-hour escalation SLA for critical content (self-harm, CSAM, terrorism) — direct escalation line to client's Trust & Safety lead.
Weekly accuracy auditing: a 500-item sample reviewed by the QA team, with agent-level feedback and retraining triggers.
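The routing and escalation logic described above can be sketched as a simple priority queue: each item gets a review deadline based on its content type, and suspected critical content is capped at the 2-hour escalation SLA and flagged for direct escalation. All category names, queue SLA values, and identifiers below are illustrative assumptions, not the engagement's actual policy matrix.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
import heapq

# Hypothetical category names and per-queue SLA values — illustrative
# only; the real policy matrix and thresholds are client-specific.
CRITICAL_CATEGORIES = {"self-harm", "csam", "terrorism"}
QUEUE_SLAS = {
    "livestream": timedelta(minutes=5),   # real-time monitoring
    "ugc": timedelta(hours=4),
    "comment": timedelta(hours=6),
    "profile_image": timedelta(hours=8),
}
CRITICAL_SLA = timedelta(hours=2)  # 2-hour escalation SLA for critical content

@dataclass(order=True)
class QueuedItem:
    deadline: datetime                    # heap orders by deadline only
    item_id: str = field(compare=False)
    content_type: str = field(compare=False)
    escalate: bool = field(compare=False, default=False)

def enqueue(queue, item_id, content_type, now, suspected_category=None):
    """Assign a review deadline by content type; critical content is
    capped at the escalation SLA and flagged for direct escalation."""
    sla = QUEUE_SLAS[content_type]
    escalate = suspected_category in CRITICAL_CATEGORIES
    if escalate:
        sla = min(sla, CRITICAL_SLA)
    item = QueuedItem(now + sla, item_id, content_type, escalate)
    heapq.heappush(queue, item)
    return item

queue: list[QueuedItem] = []
now = datetime(2024, 1, 1, 12, 0)
enqueue(queue, "c-101", "comment", now)           # 6h deadline
enqueue(queue, "u-202", "ugc", now, "terrorism")  # capped to 2h, escalated
enqueue(queue, "l-303", "livestream", now)        # 5-minute deadline
first = heapq.heappop(queue)
print(first.item_id)  # the livestream item is due first
```

Ordering by deadline rather than arrival time is what lets one pool of specialists serve mixed queues without starving the real-time livestream work.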