Is AI the only antidote to disinformation?

  • AI-based programmes are being used to create deep fakes that can be used to sow the seeds of discord in society and create chaos in markets.
  • Algorithms will soon produce content that is indistinguishable from that produced by humans.
  • Human intervention is required to enhance AI-based detection of disinformation, but educating people to evaluate online content objectively is the top priority.

Few threats to the stability of our society loom as large as disinformation. It is a pandemic that has engulfed small and large economies alike. People around the world face threats to life and personal safety because of the sheer volume of emotionally charged and socially divisive misinformation, much of it fuelled by emerging technology. Such content either manipulates people's perceptions or propagates outright falsehoods.

AI-based programmes are being used to create deep fakes of political leaders by manipulating video, audio and images. Such deep fakes can be used to sow the seeds of discord in society and create chaos in markets. AI is also getting better at generating human-like content: language models such as GPT-3 can author articles, poems and essays from a single-line prompt. AI has made doctoring every type of content so seamless that open-source software such as FaceSwap and DeepFaceLab can turn even anonymous amateurs into epicentres of social disharmony. At a time when people no longer know where to place their trust, “technology for good” looks to be the only saviour.
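To make this concrete, here is a minimal sketch of single-prompt text generation. It uses the openly available GPT-2 model via the Hugging Face transformers library, since GPT-3 itself is accessible only through a paid API; the prompt is an invented placeholder.

```python
# A minimal sketch of single-prompt text generation, using the openly
# available GPT-2 model via the Hugging Face transformers library.
# (GPT-3 itself is API-only; this only illustrates the general technique.)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A single-line prompt is enough to produce a fluent continuation.
outputs = generator(
    "Scientists announced today that",  # invented example prompt
    max_new_tokens=50,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```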

Figure: An overview of coordinated fake activity on the Meta platform. Image: Statista

Semantic analytics for basic filtering

The first idea that comes to mind for combating disinformation with technology is content analytics. AI-based tools can perform linguistic analysis of textual content, detecting cues such as word patterns, syntax construction and readability to differentiate computer-generated content from human-produced text. Such algorithms can take any piece of text and check word vectors, word positioning and connotation to identify traces of hate speech. AI algorithms can also reverse-engineer manipulated images and videos to detect deep fakes and highlight content that needs to be flagged.
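As an illustration, the sketch below trains a toy classifier on word-pattern (TF-IDF) features to separate machine-generated from human-written text. The training snippets and labels are invented placeholders; a real detector would train on large corpora and add syntax and readability features.

```python
# A toy sketch of linguistic analysis for flagging machine-generated text.
# The training snippets below are invented placeholders; real detectors
# train on large corpora and add syntax and readability features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The market swung wildly today, and honestly, nobody saw it coming.",
    "I spent the weekend rereading my grandmother's old letters.",
    "The system optimizes the system to ensure optimal system performance.",
    "The results demonstrate that the results are consistent with the results.",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = machine-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word patterns as features
    LogisticRegression(),
)
detector.fit(texts, labels)

# Score a new post: estimated probability that it is machine-generated.
post = "The approach improves the approach by improving the approach."
print(detector.predict_proba([post])[0][1])
```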

But that’s not enough: generative adversarial networks are becoming so sophisticated that algorithms will soon produce content that is indistinguishable from that produced by humans. To add to these woes, semantic analysis cannot interpret images that have not been manipulated at all but are simply shared with a false or malicious context or caption. Nor can it check whether the claims a piece of content makes are true. Linguistic barriers add further challenges. In short, the sentiment of an online post can be assessed, but not its veracity. This is where human intervention is required alongside AI.
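The gap between sentiment and veracity is easy to demonstrate. The sketch below runs NLTK's off-the-shelf VADER sentiment analyzer over an invented, factually false post: the tool reports an upbeat tone but says nothing about whether the claim is true.

```python
# Sentiment, not veracity: NLTK's VADER analyzer scores the emotional
# tone of an invented, factually false post. It has no mechanism for
# checking the claim itself, which is exactly the limitation at issue.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()
false_but_cheerful = "Great news! Drinking seawater cures all known diseases."
print(analyzer.polarity_scores(false_but_cheerful))
# Prints a positive 'compound' score, despite the claim being false.
```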

