Today, content is easily accessible across multiple platforms. This content can be user-generated or created by companies.
User-generated content (UGC) includes text (such as comments, forum posts, reviews, ratings, podcasts, and testimonials), photos, videos, audio, links, and even documents. This content is often dispersed through content communities, which are not themselves part of the UGC model but welcome UGC in the form of questions and comments.
Among the consequences of UGC is the risk of users getting exposed to inappropriate or irrelevant content. Considering that content impacts a company’s credibility and brand, this is of great significance.
As this two-way communication is unsupervised, content moderation solutions must ensure the checking and monitoring of the content. The purpose of content moderation is to filter, validate, and monitor publicly available content in order to ensure company credibility and professionalism. These services are often provided by third parties to companies today.
The need for content moderation
Internet World Stats estimates that there are currently 5.16 billion internet users in the world. The Social Media Benchmark Report of 2021 estimates that there are four billion users of social media. As a result, UGC has also grown significantly over the years. Moreover, content communities (hosted by companies) have also grown in popularity – mainly to provide users with quick access to technical information.
The abundance of public content, combined with a lack of adequate and appropriate moderation of this content, raises many risks, including:
Exposure to offensive content
A brand's reputation may be put at risk when unregulated UGC is posted. Such content could upset certain groups, leading to a chain reaction that damages the brand’s image.
Risk of unmonitored two-way interactions becoming abusive
Companies that provide two-way communication are at high risk of communications getting out of control, exposing them to abuse through uncensored text, images, and videos that might depict violence, hate speech, or drugs, or otherwise cause offense. Typical businesses in this category include delivery services, ride-hailing platforms, customer service platforms, online marketplaces where buyers and sellers meet, and gaming platforms with real-time multiplayer features.
Risk of incorrect content sharing among target groups/communities
Many companies find it critical to provide their internal focus groups or communities with accurate, verified answers to any questions raised on the system. Incorrect code, details, or information circulating on such platforms may adversely affect clients’ businesses. Focus groups are commonly used by technology companies, such as IT companies and technical service providers.
To safeguard brand image and prevent users from viewing inappropriate content on the web, companies often set up internal review teams to check the content posted online. Due to the increase in volume, service providers are enlisted to handle this specialized service more efficiently, accurately, and cost-effectively.
Expert Market Research estimates that the global content moderation solutions market reached a value of $5.3 billion in 2020 and is expected to grow at a CAGR of 12.6% over the forecast period of 2021-2026.
Benefits of content moderation services in today’s scenario
AdWeek reports that 85% of users are more influenced by UGC than by brands’ content directly. For multinational companies and brands to succeed in the market, content moderation services need serious attention. Here are some ways in which content moderation solutions can help protect and manage the brand image of companies:
- Verifying and moderating content on online platforms, identifying users who violate policies, and identifying groups spreading misinformation or antisocial content.
- Identifying and closely monitoring posts containing keywords that are most likely to be misinterpreted by readers.
- Providing early notification of events that may initiate or spread misinformation or fake news, so that it can be contained before reaching the masses.
- Protecting users from content that contains violence, hate speech, or illegal and illicit information, which can have a profound psychological impact.
- Protecting against information loss, monetary loss, or data breaches caused by content containing spam links.
- Providing an opportunity to reframe and simplify statements or statistics that readers may otherwise misunderstand, protecting the company’s brand value.
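As an illustration of the keyword monitoring described above, the sketch below flags posts that contain watchlist terms. The watchlist, function name, and matching logic are hypothetical simplifications; a production system would use far larger, policy-driven lists per market and language.

```python
import re

# Hypothetical watchlist; real deployments maintain much larger,
# policy-driven lists tailored to each market and language.
WATCHLIST = {"scam", "fake", "hate"}

def flag_post(text: str) -> set[str]:
    """Return the watchlist keywords found in a post, if any."""
    tokens = set(re.findall(r"[a-z0-9']+", text.lower()))
    return tokens & WATCHLIST

post = "This product is a scam, do not buy"
hits = flag_post(post)
if hits:
    print(f"Queue for review; matched: {sorted(hits)}")
```

Posts that match one or more watchlist terms would then be routed to a moderator queue rather than published directly.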
Business cases for adopting content moderation services
The importance of user sentiment has been growing with the rise of the internet and social media, and content moderation solutions are relevant for most industries and sectors today. Content moderation solutions play a vital role in the following use cases:
Use Case 1:
Gaming platforms attract a large number of young users, including students at schools and colleges. Children are sometimes exposed to abusive comments, posts, and group mockery within the player community, leaving a lasting psychological impact on them.
The content on these platforms can be pre-moderated to protect vulnerable audiences. Chats and video interactions are closely monitored to ensure the safety of children and other users.
Use Case 2:
Platforms that allow people to interact with one another or rely on information provided by others include marriage matching sites, dating websites, online product reviews, yellow book sites, appointment scheduling services, and recruitment platforms. These companies are concerned with ensuring that reviews are not misleading and that users are not harassed.
Interactions on such platforms are closely monitored, and users are notified if any suspicious activity is detected. Users are also protected from comments and interactions that are flagged as inappropriate.
Use Case 3:
Online marketplaces where products are listed, sold, and delivered raise both review-authenticity and safety concerns. Interactions with delivery persons, salespersons, and customer service executives can result in tough conversations and lead to outbursts on either side. Examples include electronics delivery, food delivery, ride-sharing platforms, land/property rental or buy/sell apps, product resale platforms, and home service apps.
Moderating content for such business models ensures that reviews are authentic and not written with malicious intent to harm a product or brand’s reputation. AI can help identify such cases faster and track conversations between parties to ensure everyone’s safety.
Use Case 4:
On crowdsourced knowledge platforms, where the public can share insightful articles, blogs, and views, or contribute improvements and corrections to existing articles, incorrect information is often published when there are no checks to pre-authenticate and verify it.
Companies involved in knowledge sharing can benefit from the pre-moderation of such content, systematically verifying articles and corrections in order to build a reliable knowledge base.
The recent COVID-19 impact has forced almost all companies worldwide to go online and increase their virtual presence among users and prospective customers. The increasing access to business and product information online also makes content moderation solutions more necessary. People are exposed to tremendous manipulations and risks with online business information.
95% of travelers read online reviews before booking any leisure trip, according to TrustYou. As per Website Builder and Tnooz, the average leisure traveler spends 30 minutes reading reviews before booking, while 10% spend more than an hour. A correct online image is therefore extremely important for industries like travel, restaurants, and hotels.
According to a report by eMarketer, consumers trust customer reviews 12 times more than reviews by manufacturers, while the Spiegel Research Center notes that online product reviews can increase conversion rates by more than 270%. Given this influence, closely monitoring online reviews and what they indicate about a company is critical.
A Glassdoor study found that 83% of job seekers research company reviews and ratings online before applying for a job, and 33% would not even consider a company rated three stars or below.
According to BrightLocal’s Local Consumer Review Survey 2020, 79% of healthcare consumers prefer online reviews over personal recommendations, while research firm Software Advice found that 71% of patients read reviews online before choosing a doctor.
In some instances, misleading fake reviews and suggestions evade detection by the guidelines and filters applied to catch them, including AI systems.
Content moderators are constantly exposed to extreme, abusive, and malicious content for long periods, posing challenges to their mental and emotional wellbeing.
Content often depicts regional dialect or colloquial usage of language rooted in a particular geographic area, which might be acceptable in some regions, but not in others. Content moderators need to be aware of this localization of content to take action at the right time.
Having a content moderation solution in place does not guarantee a foolproof outcome, since users are located worldwide and speak different languages, each with its own interpretations. The nature of content is not binary; it is open to perspective and has a more personal aspect attached to it. Even so, not having a content moderation solution in place will impact businesses sooner or later.
How Wipro helps companies leverage this opportunity
Through its global delivery centres (offshore and onsite), Wipro provides content moderation services to clients worldwide.
By creating a filtering and reviewing mechanism, Wipro provides a human-driven content moderation solution to a wide range of clients. Wipro also uses industry and domain experts for technical content, particularly for closed community groups within a company.
Wipro’s technology enablers that further strengthen our solution for our clients’ content moderation services include:
- VANTAGE – The tool for moderation of visual content
The Wipro algorithm makes the content moderation process faster, more efficient, and robust.
VantagePlus supports ad intelligence, reputation management, social intelligence, and audience targeting, as well as text, video, audio, and images, and extracts intelligence from them.
- Wipro Base)))™ Harmony – A tool for capturing transformative knowledge.
It provides the customer with the ability to create standardized, efficient processes.
Through automatic process analysis, Wipro’s Harmony helps identify opportunities for improvement, thereby reducing training costs. Documentation effort is eliminated through Harmony’s automatic generation of process documents (SOPs, FMEA, etc.).
- Sift – Hybrid solution (automated and manual):
Sift and Wipro have partnered to provide clients with AI and manual content review services based on holistic analysis and our years of experience.
Sift enables us to work within the AI platform: its supervised machine learning incorporates new data in real time, producing predictions that grow more accurate by the millisecond.
The platform allows users to monitor content and execute actions and warnings. The software also detects risky content written in l33t (leetspeak) text, which is widely used on the internet for online gaming and other purposes.
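To illustrate how l33t text can hide blocked terms from naive filters, the sketch below normalizes common character substitutions before matching. The substitution map and functions are assumptions for illustration only; Sift’s actual detection models are proprietary and far more sophisticated.

```python
# Minimal leetspeak normalizer (illustrative assumption; real detection
# systems handle many more substitutions, spacing tricks, and languages).
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Map common l33t substitutions back to plain letters."""
    return text.lower().translate(LEET_MAP)

def contains_blocked(text: str, blocked: set[str]) -> bool:
    """Check whether any blocked term appears after normalization."""
    normalized = normalize(text)
    return any(word in normalized for word in blocked)

print(contains_blocked("h4t3 sp33ch", {"hate", "speech"}))  # → True
```

Normalizing first means that "h4t3" matches the same rule as "hate", so substituted spellings no longer slip past a plain keyword filter.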
Hybrid content moderation process
Wipro uses a hybrid (AI and manual) moderation process to filter content using its content moderation solution.
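A hybrid (AI plus manual) process like the one described can be sketched as a risk-score triage: content the model scores confidently is actioned automatically, while uncertain cases are routed to human moderators. The thresholds and class names below are illustrative assumptions, not Wipro’s actual configuration.

```python
from dataclasses import dataclass

# Illustrative cut-offs only; production thresholds would be tuned
# per client, content type, and tolerance for moderator workload.
AUTO_REJECT = 0.9
AUTO_APPROVE = 0.1

@dataclass
class Decision:
    action: str       # "approve", "reject", or "human_review"
    risk_score: float

def triage(risk_score: float) -> Decision:
    """Route content by model risk score; uncertain cases go to a human."""
    if risk_score >= AUTO_REJECT:
        return Decision("reject", risk_score)
    if risk_score <= AUTO_APPROVE:
        return Decision("approve", risk_score)
    return Decision("human_review", risk_score)

for score in (0.05, 0.5, 0.95):
    print(score, triage(score).action)
```

The design choice here is the width of the middle band: widening it sends more content to human reviewers (higher cost, higher accuracy), while narrowing it automates more decisions.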