
NDTV Explains: What Is A Deepfake And How You Can Spot It


Deepfakes have emerged as a formidable threat to public trust and truth.

New Delhi:

The Centre today highlighted the emerging threat of deepfakes to democratic processes and announced that the government is preparing new regulations to address this challenge.

IT Minister Ashwini Vaishnaw affirmed that social media platforms have acknowledged the need for concrete and effective measures in areas such as deepfake detection, prevention, enhanced reporting mechanisms, and user education.

What Is A Deepfake?

A deepfake is a type of synthetic media that uses artificial intelligence to manipulate or generate visual and audio content, often with a malicious motive, to appear authentic.

According to MIT, the term “deepfake” first emerged in late 2017, when a Reddit user of that name created a forum on the news aggregation site to share pornographic videos generated with open-source face-swapping technology. Deepfakes use a form of AI called deep learning to create images or videos of fake events.

Deepfakes, a potent blend of real and fabricated media, have emerged as a formidable threat to public trust and truth. By creating convincing videos and audio recordings of people saying or doing things they never did, deepfakes can manipulate public perception, spread misinformation, and tarnish reputations.

In the hands of cybercriminals, deepfakes become dangerous weapons that can disrupt and destroy businesses and governments. A fabricated video of a company’s top executive or a top politician can have serious repercussions on a company or a country’s reputation.

Over the years, we have witnessed many such instances of deepfake videos going viral on social media. Most recently, actor Rashmika Mandanna was the victim of a viral deepfake video, sparking serious concerns over misuse of the technology and prompting the Indian government to draw up plans to tackle the menace.

Generating deepfakes typically involves deep neural networks and a face-swapping technique. A target video serves as the base of the deepfake, while a large collection of video clips of the person to be inserted is gathered to train the model.
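The classic face-swap pipeline trains a single shared encoder alongside one decoder per person: the encoder learns identity-independent features such as pose and lighting, and the swap comes from decoding one person's frame with the other person's decoder. The toy sketch below illustrates only that data flow; the random matrices stand in for trained deep networks, and the names, sizes, and vectors are all illustrative assumptions, not a real implementation.

```python
import random

random.seed(0)

DIM, LATENT = 8, 3  # toy sizes; real deepfake models are deep conv nets

def rand_matrix(rows, cols):
    """A random linear map standing in for a trained network."""
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# One shared encoder; one decoder per identity.
encoder   = rand_matrix(LATENT, DIM)
decoder_a = rand_matrix(DIM, LATENT)   # "trained" on person A's faces
decoder_b = rand_matrix(DIM, LATENT)   # "trained" on person B's faces

face_a = [random.uniform(0, 1) for _ in range(DIM)]  # a frame of person A

# Normal reconstruction: encode A's frame, decode with A's decoder.
recon_a = matvec(decoder_a, matvec(encoder, face_a))

# The swap: encode A's frame but decode with B's decoder, yielding
# B's face in A's pose -- the core deepfake trick.
swapped = matvec(decoder_b, matvec(encoder, face_a))

print(len(recon_a), len(swapped))  # both are full face vectors
```

The key design point is that the encoder is shared across both identities, which is what forces it to capture pose and expression rather than identity.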

The exponential growth of artificial intelligence has also triggered a concerning rise in deepfake pornography, where hyperrealistic images and videos can be produced with minimal effort and expenditure. 

How To Spot A Deepfake 

According to MIT, there is no foolproof method to ensure absolute protection against deepfakes, but certain indicators can help you judge whether the content you encounter online is genuine.

In the widely circulated Rashmika Mandanna deepfake video, many viewers overlooked the fact that the individual’s face was different at the beginning. Ms Mandanna’s face was superimposed on the body of Zara Patel – a British model and influencer of Indian origin, and the original creator of the video.

The deepfake, with Ms Mandanna’s face, manifested only when the individual fully entered the frame. 

In deepfakes, a person’s lip movements or blinking may not align well, because the AI algorithm may fail to precisely track the person’s eye and mouth movements. However, as AI tools improve, telling real from fake is becoming steadily harder.
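Blink irregularities of the kind described above can be checked programmatically. A common heuristic in blink-detection work is the eye aspect ratio (EAR): the ratio of the eye's vertical openings to its horizontal width, computed from six landmark points around the eye. Below is a minimal sketch; the landmark coordinates are made-up illustrative values, not output from a real face detector.

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio from six (x, y) eye landmarks.

    Landmark order: p1/p4 are the horizontal corners, p2/p6 and p3/p5
    are the upper/lower vertical pairs. A low EAR suggests a closed eye.
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark sets for illustration only:
open_eye   = [(0, 0), (1, 0.5), (2, 0.5), (3, 0), (2, -0.5), (1, -0.5)]
closed_eye = [(0, 0), (1, 0.05), (2, 0.05), (3, 0), (2, -0.05), (1, -0.05)]

print(round(eye_aspect_ratio(open_eye), 3))    # -> 0.333 (eye open)
print(round(eye_aspect_ratio(closed_eye), 3))  # -> 0.033 (eye shut)
```

Tracking EAR across video frames and looking for unnaturally rare or irregular blinks is one simple signal; serious detection tools combine many such cues.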

Underscoring the shared responsibility for deepfake content, Mr Vaishnaw indicated that both creators and hosting platforms will be subject to potential penalties. The government is exploring measures to hold both parties accountable.

Curbing the spread of deepfakes will require comprehensive regulation by governments as well as coordinated action by technology and social media companies, including alliances to develop cross-platform detection tools.

There are also online tools, such as WeVerify and Sentinel, that can help detect fake content.
