Saving the Day Online - Humans and Technology for Trust and Safety

Posted by Siddharth Goli

The social web has powered new and innovative ways of sharing information, expertise, goods and services. Yet for all its appeal, the community harbors real fears about the internet in all its darkness. Yellow journalism, illegal and hateful content, marketplace and advertising fraud, unsafe peer-to-peer transactions and so on - the perils that come with this kind of connectivity seem limitless. But how are we fighting them?

Be it partisan political news or fraudulent product and business reviews, misinformation is proliferating across the internet. Facing criticism, online giants Facebook and Google have kicked off the fight against fake news and misinformation with a combination of automated and human review, with additional help from third-party fact-checkers. Others are following suit, stepping up research into Artificial Intelligence (AI) and Machine Learning algorithms.

Humans may have pioneered investigation. However, with superpowers such as complex fact-checking and assessing the validity of millions of lines of text per second, free of emotional or political bias, Artificial Intelligence (AI) wins hands down. Machine Learning platforms have harnessed Natural Language Processing (NLP) to score webpages, predict reputations and surface insights from large data sets. But textual misinformation is not our only throbbing headache.
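
To make the idea concrete, here is a minimal sketch of NLP-based text scoring using TF-IDF features and a linear classifier, a common baseline before heavier deep-learning models. The training snippets and labels below are illustrative placeholders, not data from any real fact-checking system.

```python
# A minimal sketch of NLP-based misinformation scoring, assuming a small
# labeled corpus of credible vs. misleading snippets (placeholders only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Study published in peer-reviewed journal finds modest effect",
    "Official statistics released by the census bureau this quarter",
    "SHOCKING secret THEY don't want you to know, share before deleted",
    "Miracle cure doctors hate, guaranteed results overnight",
]
train_labels = [0, 0, 1, 1]  # 0 = credible, 1 = likely misinformation

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new page or post: probability it resembles misinformation.
score = model.predict_proba(["Unbelievable trick banks hate, act now"])[0][1]
print(f"misinformation score: {score:.2f}")
```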

Besides text, online content is also generated as video, audio and images, and more than half of it lives on social media. Looming issues stemming from User Generated Content (UGC) include, but are not limited to, hate crime, victimization, offensive content, trademark infringement, the showcasing of banned substances, identity theft, fraudulent promotions and excessive spam. Companies are racing to improve AI that can block such content - layer-by-layer neural learning, cue-based algorithms and viewer-reaction monitoring for video and image analysis.
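
As a toy illustration of the layer-by-layer idea, the sketch below passes an image through a small convolutional network to produce per-class risk scores. The class list, input size and untrained random weights are assumptions for demonstration only, not a production moderation model.

```python
# A toy layer-by-layer neural analysis for image moderation: a small CNN
# that maps an image tensor to per-class risk scores.
import torch
import torch.nn as nn

classes = ["safe", "violent", "explicit", "spam"]  # assumed label set

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(classes)),  # assumes 64x64 input images
)

image = torch.rand(1, 3, 64, 64)         # stand-in for a decoded frame
probs = model(image).softmax(dim=1)[0]   # per-class probabilities
for label, p in zip(classes, probs.tolist()):
    print(f"{label}: {p:.2f}")
```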

Advanced Machine Learning can identify issues but, for now, it can only work alongside humans, not replace them. The work is psychologically wrecking for the human inspectors who spend 24/7 warding off disturbing, high-risk content that can cause post-traumatic stress and potentially lead to imitative behavior. Even with thousands of people being added to perform these operations, technological investment in automation is directed at better assisting human content reviewers, since stand-alone machine moderation at scale may take years to realize.
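
One simple form that assistance can take is shown below: a review queue that surfaces the highest-risk items first and pre-flags likely disturbing content so a review tool could blur it by default. The risk scores, threshold and item IDs are illustrative assumptions, not any company's actual workflow.

```python
# A minimal sketch of machine assistance for human reviewers, assuming
# each item arrives with a model risk score between 0 and 1.
import heapq

review_queue = []  # max-heap simulated by negating scores

def enqueue(item_id: str, risk_score: float) -> None:
    blur = risk_score > 0.8  # assumed threshold for graphic-content blurring
    heapq.heappush(review_queue, (-risk_score, item_id, blur))

for item, score in [("vid-7", 0.91), ("img-3", 0.40), ("vid-2", 0.75)]:
    enqueue(item, score)

# Reviewers see the riskiest content first, with a pre-set blur flag.
while review_queue:
    neg_score, item_id, blur = heapq.heappop(review_queue)
    print(f"{item_id}: risk={-neg_score:.2f}, blurred={blur}")
```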

Today, AI is faster but less accurate, while humans are highly accurate but too slow and expensive. What we need is a hybrid system that blends deep neural networks with manual intervention. This human-machine alliance is essential to uphold trust and safety in the community. When everything is said and done, though, major progress in this area depends on us, the biggest content generators, becoming more conscientious about how we engage and what we share online.
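
A minimal sketch of that hybrid routing idea: let the model act alone only when it is confident, and queue everything in between for a human. The thresholds and item names below are illustrative assumptions, not figures from any deployed system.

```python
# Hybrid moderation routing: confident model decisions are automated,
# uncertain ones go to the human review queue.
AUTO_REMOVE = 0.95   # assumed score above which content is removed automatically
AUTO_APPROVE = 0.05  # assumed score below which content is approved automatically

def route(item_id: str, risk_score: float) -> str:
    """Decide what happens to one piece of content given its model score."""
    if risk_score >= AUTO_REMOVE:
        return f"{item_id}: removed automatically"
    if risk_score <= AUTO_APPROVE:
        return f"{item_id}: approved automatically"
    return f"{item_id}: sent to human review queue"

for item, score in [("post-1", 0.99), ("post-2", 0.02), ("post-3", 0.60)]:
    print(route(item, score))
```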

About Author

Siddharth Goli - Solution Consultant, Media Practice, Wipro Ltd.

Siddharth is an experienced solution consultant in the Media practice within Wipro BPS, where he is responsible for building solutions and practice capabilities and designing marketing and demand generation strategies, along with industry research. He holds an MBA from SIBM Pune, specializing in Marketing Management. Siddharth has prior experience in the e-commerce, brand management and marketing fields.

