Content Moderator


Machine-assisted content moderation APIs and a human review tool for images, text, and video, provided by Microsoft and available through Azure. The Azure Content Moderator API is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When such material is found, the service applies appropriate labels (flags) to the content. Your app can then handle flagged content to comply with regulations or maintain the intended environment for users. See the Content Moderator APIs section to learn more about what the different content flags indicate.

Main Features

Image moderation

Enhance your ability to detect potentially offensive or unwanted images through machine-learning-based classifiers, custom lists, and optical character recognition (OCR).
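As a rough sketch of how an app might call the image moderation capability described above, the snippet below sends an image URL to the Content Moderator ProcessImage/Evaluate REST operation and checks the boolean classification flags in the response. The endpoint and subscription key are placeholders, and the helper names are illustrative, not part of the service.

```python
import json
import urllib.request

# Placeholders -- substitute your own Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"


def evaluate_image(image_url: str) -> dict:
    """POST an image URL to the Evaluate operation; returns the JSON response."""
    url = f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessImage/Evaluate"
    body = json.dumps({"DataRepresentation": "URL", "Value": image_url}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def is_flagged(evaluation: dict) -> bool:
    """The Evaluate response includes boolean adult/racy classification fields."""
    return bool(
        evaluation.get("IsImageAdultClassified")
        or evaluation.get("IsImageRacyClassified")
    )
```

An app would typically route anything where `is_flagged(...)` is true either to automatic handling or to the human review tool described below.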

Text moderation

Use content filtering to detect potential profanity in more than 100 languages, flag text that may be deemed inappropriate depending on context (in public preview), and match text against your custom lists. Content Moderator also helps check for personally identifiable information (PII).
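The text moderation features above are exposed through the ProcessText/Screen REST operation, which can classify text, match profanity terms, and detect PII in one call. The sketch below assumes a standard Azure Cognitive Services endpoint and key (both placeholders); the helper names are illustrative.

```python
import json
import urllib.request

# Placeholders -- substitute your own Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"


def screen_text(text: str, language: str = "eng") -> dict:
    """Screen text for profanity terms, classification flags, and PII."""
    url = (
        f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen"
        f"?language={language}&classify=True&PII=True"
    )
    req = urllib.request.Request(
        url,
        data=text.encode("utf-8"),
        headers={
            "Content-Type": "text/plain",
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def flagged_terms(response: dict) -> list:
    """Pull the matched profanity terms out of a Screen response."""
    return [t["Term"] for t in (response.get("Terms") or [])]
```

The `Terms` array in the response lists matched profanity, while separate fields carry classification scores and any detected PII, so an app can apply different policies to each.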

Video moderation

Enable machine-assisted detection of possible adult and racy content in videos. The video moderation service (in public preview) is available as part of Azure Media Services.

Human review tool

The best content moderation results come from humans and machines working together. Use the review tool when prediction confidence can be improved or tempered with real-world context.
Inputs Accepted: Text, Image, Video
Services Offered: Developer API, AI, Hashing
Threats Detected: Hate Speech, Sexually Explicit, Profanity, Self-Harm, Illegal Items

For more information, see: Content Moderator