The whole world is at our fingertips today. The internet has over 4.66 billion users globally, 92.6% of whom (4.32 billion) access it via mobile devices. 83.36% of these internet users (3.96 billion) have active social media accounts.
All these users generate billions of pieces of content (images, videos, messages, comments, etc.) every day, amounting to around 1.14 trillion MB of data. And these numbers are only increasing by the day!
The average user today spends roughly 7 hours a day on the internet, of which almost 2.5 hours are spent on social media.
Look at these stats:
1. 306.4 billion emails are sent daily.
2. 500 million tweets are shared daily.
3. 2 billion users on Facebook watch a collective 100 million hours of video and upload over 350 million photos in a day.
We are talking here about a huge amount of unsupervised data being consumed in real time. Left unchecked or unmoderated, this data paves the way for offensive content to reach users, fueling trolling and bullying.
Content moderation is the need of the hour
There is an urgent need for real-time moderation to weed out inappropriate, illegal, and illicit content posted online. This will ensure a more positive environment in which users can interact and surf the internet freely without the risk of exposure to offensive content, which in turn will reflect positively on a business’ credibility.
This is precisely the reason why the global content moderation market, which currently stands at around $5 billion, is expected to grow to $11 billion by 2026, clocking a CAGR of over 10%.
With the COVID-19 pandemic, the demand has only multiplied, and the onus of controlling the spread of false information and other toxic or graphic content has fallen upon the social media platforms. Given the speed at which “viral” content spreads, it is important that companies move from a reactive approach to a pre-emptive or predictive one.
Role of AI and ML in moderation
Ideally, an Artificial Intelligence or Machine Learning (AI/ML) based moderation model could be implemented, as it would be more accurate, less time-consuming, and cost-effective. However, while the AI does learn and optimize the process from reviewers’ decisions, not all content can be reduced to straightforward rules and datasets. Subjective decision making based on context, regional nuances, and the like needs to be considered, and that can only come from human intervention. Hence, a hybrid, hyperlocal moderation model needs to be implemented, supported by predictive, insight-driven AI and culturally attuned associates, drawing on the synergy of the two.
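The hybrid model described above can be sketched as a simple confidence-threshold router: the automated classifier handles clear-cut cases, and anything it is unsure about is escalated to a human reviewer for contextual judgment. The classifier, labels, and threshold below are illustrative assumptions, not a real moderation API.

```python
# Minimal sketch of a hybrid moderation pipeline (illustrative only).
# The classifier logic, labels, and thresholds are hypothetical.

def classify(text: str) -> tuple[str, float]:
    """Stand-in for an ML classifier returning (label, confidence).
    A real system would call a trained model here."""
    blocklist = {"spam", "abuse"}
    if any(word in blocklist for word in text.lower().split()):
        return "remove", 0.95   # clear-cut violation, high confidence
    return "allow", 0.60        # wording may depend on context

def moderate(text: str, threshold: float = 0.9) -> str:
    """Route content: act automatically only when the model is confident;
    otherwise escalate to a human reviewer."""
    label, confidence = classify(text)
    if confidence >= threshold:
        return label            # automated decision
    return "human_review"       # subjective or ambiguous case

print(moderate("buy spam now"))    # confident violation: removed
print(moderate("that was sick!"))  # ambiguous slang: sent to a human
```

In practice the threshold becomes a tuning knob: raising it sends more content to humans (higher cost, fewer mistakes), while lowering it automates more of the queue.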
Balancing act: Human intervention with machine knowledge
What needs to be noted here is that the human-led part of content moderation should be supported by a holistic wellness approach, as it can otherwise be detrimental to employees’ mental health. If not kept in check, this could also hurt the firm’s brand in the market, lead to high attrition rates, and may even result in legal ramifications. Suggested countermeasures include hiring the right (resilient) talent by putting requisite filters in place, preparing training material with an emphasis on practical simulations, and building a well-researched wellness curriculum in consultation with industry professionals.
Wellness would include not just round-the-clock counselling availability, but also regular team huddles and limits on the volume of content reviewed, among other measures. With the need for a dynamic model, including the sudden shift to work-from-home, firms should consider scaling up their use of automation in wellness to improve employees’ mental health. AI-driven chatbots, for example, are efficient at handling conversations and scheduling appointments with counsellors. That said, automated bots can only help to an extent; in the end, it is the counsellors who can support employees holistically.
It is safe to conclude that while content moderation may someday be led entirely by automation, for now it is an ongoing process that needs a human element to learn and grow.
If you are interested in learning how Wipro is helping our clients achieve a holistic approach to content moderation, connect with us.