
Featured Article: Microsoft Launches New AI Content Safety Service

Microsoft has announced the launch of Azure AI Content Safety, a new content moderation service that uses AI to detect and filter out offensive, harmful, or inappropriate text and image content, whether user-generated or AI-generated.

What Kind of Harmful Content?

Azure AI Content Safety is designed to filter out content that is offensive, risky, or undesirable, e.g. “profanity, adult content, gore, violence, hate speech” and more. Azure is Microsoft’s cloud computing platform, where the new Content Safety moderation service is deployed (ChatGPT is available via the Azure OpenAI Service).

What’s The Problem? 

Microsoft says that the impact of harmful content on platforms goes beyond user dissatisfaction and can damage a brand’s image, erode user trust, undermine long-term financial stability, and even expose the platform to potential legal liabilities. As well as tackling user-generated content, the new service uses AI to filter out the growing problem of AI-generated harmful content, which includes inaccurate content (misinformation, perhaps generated by AI ‘hallucinations’).

A Sophisticated AI Moderation Tool 

Although Microsoft’s AI Content Safety filtering sounds as though it’s primarily designed to protect individual users, it is first and foremost a moderation tool for companies and their brands, protecting them from the cost and difficulty of moderation and from the reputational and legal problems of having harmful content, misinformation, or disinformation published on their platforms. Users are the secondary beneficiaries: if harmful content is filtered out, they never see it (a win-win).

With Microsoft being a major investor in AI (notably OpenAI), the service also appears to have a wider purpose: demonstrating that AI can serve a genuinely positive role, countering the fear stories of AI running away with itself and wiping out humanity.

In a nutshell, Microsoft says its new Azure AI Content Safety feature ensures “accuracy, reliability, and absence of harmful or inappropriate materials in AI-generated outputs”, “protects users from misinformation and potential harm but also upholds ethical standards and builds trust in AI technologies”, and will help “create a safer digital environment that promotes responsible use of AI and safeguards the well-being of individuals and society as a whole”.

How Does It Work and What Can It Do? 

The detection and filtering capabilities of AI Content Safety include:

– Offering moderation of visual and text content.

– A ‘Severity’ metric which, on a scale of 0 to 7, gives an indication of the severity of specific content (safe 0-1, low 2-3, medium 4-5, and high 6-7), enabling businesses to assess the level of threat posed by certain content, make informed decisions, and take proactive measures (see the sketch after this list). A severity level of 7 (the highest), for example, covers content that “endorses, glorifies, or promotes extreme forms of harmful instruction and activity towards Identity Groups”.

– The multi-category filtering of harmful content across the domains of Hate, Violence, Self-Harm, and Sex.

– The use of AI algorithms to scan, analyse, and moderate visual content because Microsoft says digital communication also relies heavily on visuals.

– Moderation across multiple languages.
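To make the ‘Severity’ metric concrete, the short Python sketch below maps the 0 to 7 scores described above into the safe/low/medium/high bands and a simple block/allow decision. The band boundaries follow the description in this article; the function names and the “block from medium” policy are illustrative assumptions, not part of Microsoft’s SDK.

```python
# Illustrative only: maps an Azure AI Content Safety severity score (0-7)
# to the bands described above and to a hypothetical moderation decision.
# Band boundaries mirror the article; names and policy are our own assumptions.

SEVERITY_BANDS = [
    (1, "safe"),    # 0-1
    (3, "low"),     # 2-3
    (5, "medium"),  # 4-5
    (7, "high"),    # 6-7
]

def severity_band(score: int) -> str:
    """Return the band (safe/low/medium/high) for a 0-7 severity score."""
    if not 0 <= score <= 7:
        raise ValueError(f"severity score must be 0-7, got {score}")
    for upper, band in SEVERITY_BANDS:
        if score <= upper:
            return band
    return "high"

def should_block(score: int, block_from: str = "medium") -> bool:
    """Hypothetical policy: block content at or above the chosen band."""
    order = ["safe", "low", "medium", "high"]
    return order.index(severity_band(score)) >= order.index(block_from)

if __name__ == "__main__":
    for s in (1, 3, 4, 7):
        print(s, severity_band(s), "block" if should_block(s) else "allow")
```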

How? 

Businesses can use the new filtering system either via API/SDK integration (for automated content analysis) or via the more hands-on ‘Content Safety Studio’, a dashboard-style, web-based interface.
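As an illustration of the API route, here is a minimal Python sketch of a text-analysis call against a Content Safety resource. The endpoint path, API version, and response field names are assumptions based on Microsoft’s public documentation at the time of writing and may change; the endpoint and key placeholders would come from your own Azure resource.

```python
# Minimal sketch (not an official sample): analyse a piece of text with the
# Azure AI Content Safety REST API and print the severity per category.
# Endpoint path, api-version, and JSON field names are assumptions based on
# Microsoft's public docs and may differ in your API version.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]        # key from your Azure resource

def analyze_text(text: str) -> dict:
    """Send text to the text:analyze operation and return the parsed JSON."""
    url = f"{ENDPOINT}/contentsafety/text:analyze"
    params = {"api-version": "2023-10-01"}  # assumed version; check current docs
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
    }
    response = requests.post(url, params=params, headers=headers, json=body, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_text("Example user comment to be moderated.")
    # Assumed response shape: {"categoriesAnalysis": [{"category": ..., "severity": ...}]}
    for item in result.get("categoriesAnalysis", []):
        print(f"{item['category']}: severity {item['severity']}")
```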

AWS 

Amazon offers a similar content moderation service for AWS called ‘Amazon Rekognition.’ It also uses a hierarchical taxonomy to label categories of inappropriate or offensive content and provides a ‘DetectModerationLabels’ API to detect inappropriate or offensive content in images.
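For comparison, a similarly hedged Python sketch of the AWS route using boto3’s Rekognition client and its DetectModerationLabels operation (the image file name and confidence threshold here are illustrative, and credentials are assumed to come from your AWS configuration):

```python
# Sketch of AWS-side image moderation with Amazon Rekognition's
# DetectModerationLabels API via boto3. The image path and MinConfidence
# threshold are illustrative; credentials come from your AWS configuration.
import boto3

def moderate_image(path: str, min_confidence: float = 60.0) -> list:
    """Return Rekognition moderation labels detected in a local image file."""
    client = boto3.client("rekognition")
    with open(path, "rb") as f:
        response = client.detect_moderation_labels(
            Image={"Bytes": f.read()},
            MinConfidence=min_confidence,
        )
    return response.get("ModerationLabels", [])

if __name__ == "__main__":
    for label in moderate_image("user_upload.jpg"):
        # Each label carries a name, a parent category from the hierarchical
        # taxonomy, and a confidence score.
        print(label["Name"], label.get("ParentName", ""), label["Confidence"])
```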

What Does This Mean For Your Business? 

As any social media platform or larger company will testify, moderating posted content is a major task, and human moderators alone can’t scale to meet the demands quickly or well enough, so companies need a more intelligent, cost-effective, reliable, and scalable solution.

The costs of not tackling offensive and inappropriate content don’t just relate to poor user experiences but can lead to expensive legal issues, loss of brand reputation, and more. Before generative AI arrived on the scene, it was hard enough to moderate just the human-generated content; with the addition of AI-generated content, moderation has become exponentially harder. It makes sense, therefore, for Microsoft to leverage its own considerable AI investment to offer businesses an intelligent system that covers both images and text, uses an ordered and understandable system of categorisation, and offers the choice of an automated or a more hands-on dashboard version.

AI offers a level of reliability, scalability, and affordability that wasn’t available before, thereby reducing risk and worry for businesses. Recent events in the conflict in Israel and Gaza (and the posting of horrific images and videos, which has prompted the deletion of social media apps for children) illustrate just how bad some content posts can be, although images of self-harm, violence, hate speech, and more have long been a source of concern for all web users.

Microsoft’s AI Content Safety system therefore gives businesses a way to ensure that their own platforms are free of offensive and damaging content. Furthermore, in protecting themselves, it follows that customers and other web users and viewers are also spared the bad experience and effects that such content can cause.
