In the wake of the recent deepfake controversy involving actor Rashmika Mandanna, the digital world finds itself grappling with the alarming implications of this rapidly advancing technology. Union minister Rajeev Chandrasekhar has raised a red flag, describing deepfakes as a more dangerous and damaging form of misinformation.
The IT rules instituted in April 2023 mandate platforms to swiftly address misinformation or face legal repercussions. This legal obligation has thrown a spotlight on the urgency of tackling the deepfake dilemma.
Deepfake technology can manipulate video so convincingly that distinguishing the real from the fabricated becomes extremely difficult. Technology expert Kanishk Gaur explains that the technique typically relies on generative adversarial networks (GANs): two neural networks, a generator and a discriminator, pitted against each other in an iterative contest, with the generator producing ever more convincing fakes as the discriminator learns to spot them.
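The adversarial "contest" Gaur describes can be illustrated with a deliberately tiny toy example — not a real deepfake model, just the GAN training loop in miniature. Here a linear generator learns to mimic a one-dimensional Gaussian while a logistic discriminator tries to tell real samples from generated ones; all names, learning rates and distributions below are illustrative assumptions, not part of any actual deepfake system.

```python
import numpy as np

# Toy GAN sketch (illustrative only): a generator and a discriminator
# trained against each other, step by step, as in Gaur's description.
rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip logits to avoid overflow warnings in np.exp.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

def real_batch(n):
    # "Real" data: samples from a Gaussian with mean 4, std 1.
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b
    real = real_batch(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: adjust a, b so the discriminator scores fakes as real.
    df = sigmoid(w * fake + c)
    grad_fake = (df - 1.0) * w   # gradient of -log D(fake) w.r.t. each sample
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After training, the generator's output mean (≈ b) has drifted
# toward the real data's mean as the two networks dueled.
print(f"generator output mean ≈ {b:.2f} (real data mean is 4.0)")
```

Real deepfake generators operate on high-dimensional images and faces rather than scalars, but the push-and-pull dynamic is the same: each side's improvement forces the other to improve.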
Identifying a deepfake amidst the vast expanse of digital content requires a discerning eye. Gaur suggests vigilant observation for inconsistencies, such as unnatural blinking patterns, facial distortions or discrepancies between voice and lip movements. Acknowledging the evolving landscape, he highlights the importance of emerging tools and software that leverage machine learning to detect subtle cues imperceptible to the human eye.
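One of the cues Gaur mentions, unnatural blinking, lends itself to a simple heuristic sketch. The code below assumes a facial-landmark tool has already produced a per-frame "eye aspect ratio" (EAR) series for a clip; the function names, thresholds and the typical human blink-rate range used here are illustrative assumptions, not a vetted detector.

```python
def count_blinks(ear_series, threshold=0.2):
    """Count dips of the eye-aspect-ratio below `threshold`; one dip = one blink."""
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= threshold:
            eyes_closed = False
    return blinks

def blink_rate_suspicious(ear_series, fps=30, low=8, high=30):
    """Flag a clip whose blinks-per-minute falls outside a typical human range."""
    minutes = len(ear_series) / fps / 60
    if minutes <= 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < low or rate > high

# A 10-second clip at 30 fps in which the subject never blinks: suspicious.
flat = [0.3] * 300
print(blink_rate_suspicious(flat))  # → True
```

A real detector, of the machine-learning kind Gaur highlights, would combine many such cues rather than rely on one hand-tuned threshold, but the sketch shows why a single physiological inconsistency can already raise a flag.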
While the Information Technology Act, 2000 provides a baseline for addressing online offences, Gaur argues that the growing sophistication of deepfakes demands updated legislation. He advocates amending the law to address deepfakes specifically and supporting the development of advanced tools capable of detecting them.
Beyond legal frameworks and technological advancements, Gaur emphasizes the crucial role of education in empowering citizens to critically assess digital content. Collaborative efforts with tech companies and social media platforms are essential to detect and mitigate the dissemination of deepfakes.
Gaur also calls for an international dialogue on deepfakes, urging governments to collaborate with global partners on common rules and strategies. The Rashmika Mandanna episode underscores the need for a comprehensive response combining legislation, technological upgrades and public education to keep digital information authentic and safe.