Newschecker.in is an independent fact-checking initiative of NC Media Networks Pvt. Ltd. We welcome our readers to send us claims to fact check. If you believe a story or statement deserves a fact check, or an error has been made with a published fact check, write to us.
Contact Us: checkthis@newschecker.in
It began with a video that went viral ahead of Chhath Puja celebrations, one in which Prime Minister Narendra Modi, standing at a podium, can be seen announcing a government scheme that promised every Indian household a free Splendor motorcycle. Thrilled at the thought of unexpected good news, citizens across the country shared the video in family WhatsApp groups, texting, “check this,” followed by a hopeful, “Maybe we should apply?” It felt like a blessing arriving just in time for the festivities.
But the “good news” turned out to be a trap. The announcement had never been made, and the video was a deepfake.
Soon, Instagram feeds were flooded with similar reels, just as convincing, promising motorcycles, smartphones, free recharges and cash rewards. They bounced between friends, relatives and neighbours, fuelled by the anticipation of a festive surprise. The clips looked genuine, with the familiar gestures, the commanding voice and the official backdrop lending them authenticity.
Beneath the surface of these seemingly harmless videos, a more complex and troubling design emerges: they serve as conduits for phishing, systematic data harvesting, and an open passage to increasingly elaborate scams.
While the festival season brings in a wave of giveaways and discounts, it also opens the door for scammers looking to cash in on goodwill and trust. The viral clips are masterfully crafted deepfakes, designed not to spread joy but to steal.
Newschecker found that behind the festive façade, a growing web of AI-manipulated videos is exploiting the trust in public figures to lure unsuspecting users into revealing their data or driving them to fraudulent websites (seen here and here), a threat experts warn is evolving faster than India’s ability to regulate or detect it.

Nearly ninety per cent of Indians have now been exposed to fake celebrity endorsements powered by artificial intelligence, losing on average ₹34,500 per scam, as deepfake-driven fraud surges to new highs during the festive season. McAfee’s latest research finds that deceptive videos promising impossible giveaways continue multiplying across social media, targeting the excitement of holiday shoppers, even though no government department has ever announced such schemes. Despite Newschecker’s previous reporting confirming these viral reels are entirely fabricated, dozens of similar scams endure, exploiting human psychology as much as technology. The Press Information Bureau has repeatedly sounded the alarm, urging citizens to stay alert as deepfake threats and celebrity impostors become some of the most pervasive digital traps of the year.
Beneath the surface of these viral reels lies a clear, almost formulaic pattern driving the deception. Each scam unfolds in three distinct stages, from building the bait to cashing in on clicks.
It always starts with a video that looks convincing enough to fool anyone. A deepfaked Prime Minister, crisp studio backdrops, and a too-good-to-miss offer — free bikes, subsidised smartphones, or instant cash credits. Advanced AI voice cloning and perfectly timed lip-syncing lend it the sheen of an official government announcement, making it easy for viewers to drop their guard.
Once the bait is ready, the amplifiers step in. Instagram pages such as @kyplive37 (63.2K followers) and @saman_dukan (162K followers) are among those fuelling the spread, pushing the reels across endless feeds and timelines.
Their reels are plastered with bold all-caps text like “ध्यान रखना, एक आधार कार्ड पर एक स्प्लेंडर बाइक मिलेगी” — “Remember, one Aadhaar card will get you one Splendor bike.” The audio repeats the same claim, supposedly quoting the PM himself, while captions push viewers to “Click the link in bio” or “Share before midnight,” turning urgency into a weapon.
Interestingly, Newschecker notes that a few accounts tuck in a token disclaimer, “For educational purpose only,” not as an act of transparency, but as a way to slip past moderation and fact-check filters. It’s a convenient cover, offering just enough plausible deniability to keep the scam alive while cashing in on the reach.

The “link in bio” whisks users away to sites like MyTahuko.in (archived link), masquerading as quick loan providers, or to job portals such as Kyplive.com (archived link), their glossy fronts promising effortless registration for government “schemes.”
These fake “yojana” clips and their slick links dangle the prospect of windfalls, coaxing users to volunteer sensitive details — your mobile number, your Aadhaar, a one-time password for “verification” — all under the pretence of official rewards. Sometimes, the page shuffles you through multiple advertisements, each click padding the operators’ pockets with traffic revenue.
The prize — a bike, a smartphone, free cash recharge — remains a mirage. No reward ever materialises. The real currency changing hands is your data, not a motorbike.
So why do these sites hunger for so much personal detail? Let’s peer into MyTahuko.in, one specimen among many, to see how the mechanism truly works.

At first glance, MyTahuko.in appears harmless, reading like a basic blog with posts about government offers, loans and festivals, but a quick audit paints a more troubling picture.
Behind the friendly façade, MyTahuko.in’s real business is elsewhere. Its bait: your data. This analysis suggests a classic data-harvesting ecosystem camouflaged under the guise of public benefit.
Behind these scams lies the quiet but dangerous goal of data harvesting. This is the systematic collection of personal information from websites, apps, and social platforms, sometimes by legitimate businesses working within the boundaries of law and transparency, for targeted advertising and user personalisation.
Yet, when orchestrated by fraudsters, data harvesting takes on a darker hue, becoming the method of choice for those seeking to siphon sensitive details from individuals and organisations alike.
Unlike traditional scams that demand money up front, these deepfake-led frauds collect sensitive user data such as names, phone numbers, Aadhaar IDs, even banking details, through deceptive means that appear legitimate, such as fake registration forms. This information is then resold, reused for phishing, or stitched into fake digital identities.
Cybersecurity expert Jiten Jain calls the misuse of deepfake videos to promote fraudulent government schemes a “fast-growing threat, powered by India’s high social media penetration and public trust in government messaging.”
“It now takes just a few minutes to clone a public figure’s likeness and voice,” Jain tells Newschecker. “Scammers use AI tools for synthesis, then exploit social media algorithms that boost short, engaging videos. These reels are often even promoted using bots or paid ads. Once a user clicks a link (often disguised as a government portal or registration site), they are led to a phishing or data-harvesting page. Beyond these, users may be tricked into investment scams or into installing malware on their devices.”
Jain further explains the modus operandi:
Captured data can fuel multiple frauds: from loan applications and SIM card activations to WhatsApp scams and targeted phishing. “Once harvested, even partial data (like name and phone number) can be cross-referenced with existing leaks to build full identity profiles,” Jain says.
Weak platform detection, along with limited digital literacy and high trust in visual/verbal cues from public figures further works in favour of the fraudsters. “There is also the issue of fragmented enforcement. Platforms remove posts reactively rather than proactively,” Jain says, adding, “Deepfakes often evade filters because they’re low-resolution, use regional languages, or subtle manipulations. Detection tools must adapt locally, with faster takedowns and tighter ad verification for anything claiming to be a ‘government scheme.’”
Pamposh Raina, head of the Trusted Information Alliance’s (TIA) Deepfakes Analysis Unit (DAU), tells Newschecker that such viral “AI-manipulated” content combines distinct audio and visual elements, while detection tools are trained to analyse each element separately, not as a composite, which makes the manipulation hard to detect. “So, the best way is to extract the constituent elements and then run them through tools. The short duration of these videos also poses a problem while analysing.” As deepfakes and AI-manipulated content multiply across social platforms and regional languages, Raina emphasises that voice-detection tools and techniques need to adapt. “The tools need to be trained to analyse audio content in languages widely spoken in the Global South as well and not just those from the West.”

The Ministry of Electronics and Information Technology (MeitY) is set to bring out a comprehensive legislation on AI to address issues related to deepfakes and synthetically generated content. “Currently, the proposed rules are framed under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which derive their powers from the IT Act. Legal experts point out that since the Act does not specifically deal with AI, any such rules could be challenged in court,” reads a Financial Express report, dated November 2, 2025, adding that the concern of the government, set to finalise the draft after the public consultation on November 6, stems from the rapid rise of deepfake videos, images, and audio that are increasingly being used for deception and misinformation.
So, while India lacks a standalone “deepfakes law,” several existing frameworks already touch upon impersonation-led scams and data harvesting. According to Apar Gupta, advocate and founder-director of the Internet Freedom Foundation (IFF), the IT Act criminalises identity theft, cheating by personation and certain privacy harms online, and police typically book deepfake-enabled frauds under these, while the new Bharatiya Nyaya Sanhita also covers cheating by personation and forgery of ‘electronic records,’ which fits fake government websites or AI-generated letters.
“There isn’t a single ‘deepfakes law,’ but the conduct is largely capturable under existing buckets: impersonation and cheating (IT Act Sections 66D / BNS Sections 319, 336–341), forged electronic records, and where relevant, obscenity provisions. Enforcement has been reinforced by government advisories directing social media platforms to proactively identify deepfakes/misinformation and remove them within 36 hours of reporting, and by fresh draft amendments proposing mandatory labelling and provenance requirements for ‘synthetically generated information,’” Gupta tells Newschecker.
Additionally, Gupta says that consumer guidelines on misleading ads also apply to fake government-scheme reels, while Aadhaar laws prohibit unauthorised collection of identity data. “Where scammers misuse the State Emblem or ‘Government of India’ insignia, the State Emblem Act also provides an enforcement route,” he says.
However, Gupta points out critical gaps: “Non-sexual deepfakes that cause reputational or economic harm (like fake ‘free scheme’ videos) don’t fit neatly into one offence. They require stitching together multiple laws. And while the Digital Personal Data Protection Act (DPDP) sets standards for consent, enforcement is still in abeyance. That creates uneven remedies when data are harvested deceptively. That is precisely why police and regulators currently lean on the IT Act, BNS and consumer-protection guidelines as first response, and why coherent rules on synthetic media are needed.”
He explains that when data is collected under false pretences, any consent is legally invalid. “If a website harvests data under the guise of a government scheme, that consent isn’t free or informed under the DPDP Act – it’s unlawful. It also counts as misrepresentation under contract and consumer law, and as cheating or personation under criminal law.”
Looking ahead, Gupta recommends stronger enforcement of existing provisions and capacity building to tackle rising cybercrime, while ensuring the law is not used for blanket censorship. “We must remind ourselves that as much as we can debate the social utility of AI, or synthetically generated content, it is also speech, and hence be mindful in placing any prior restraints through law,” he cautions.
The barrier to creating deepfakes has all but vanished; today’s scammers need little technical expertise to produce convincing AI-altered videos of public figures. With open-source tools able to replicate voices within minutes and sync lips almost flawlessly, fraudsters simply purchase pre-made templates — politician clips, corporate logos, catchy jingles — and customise them for local audiences.
Asked about countering these threats, Jain recommends verifying any supposed government offer on official (.gov.in) sites, avoiding unknown links, and enabling multi-factor authentication. For organisations and social media platforms, he stresses the need for deepfake detection technologies such as watermarking and provenance tools, monitoring viral keywords linked to “schemes,” and running user education campaigns in regional languages.
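Jain’s first tip — checking that a supposed government offer actually lives on an official domain — can be approximated in a few lines of code. The sketch below is purely illustrative, not an authoritative allow-list: `.gov.in` and `.nic.in` are assumed here as common official suffixes, and some legitimate portals (such as mygov.in) use other domains, so a passing or failing check is a starting point, not proof.

```python
from urllib.parse import urlparse

# Assumed suffixes for this illustration; real verification should rely on
# official listings, since legitimate portals also exist outside these suffixes.
OFFICIAL_SUFFIXES = (".gov.in", ".nic.in")

def looks_official(url: str) -> bool:
    """Return True only if the link's hostname sits under an assumed official suffix.

    Checks the hostname alone, so path tricks like
    example.com/.gov.in do not fool it.
    """
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(
        host == suffix.lstrip(".") or host.endswith(suffix)
        for suffix in OFFICIAL_SUFFIXES
    )

print(looks_official("https://pib.gov.in/PressReleasePage.aspx"))  # True
print(looks_official("https://mytahuko.in/"))                      # False
print(looks_official("http://kyplive.com"))                        # False
```

Because the comparison requires the suffix to begin with a dot, lookalike hosts such as `india-gov.in` or `gov.in.example.com` also fail the check, which is exactly the kind of domain trickery these scam pages rely on.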
Pratim Mukherjee, senior engineering director at McAfee, warns that the festival season, a time of joy and generosity, now attracts tech-savvy scammers, who adapt quickly to new opportunities. Mukherjee urges simple safeguards: scrutinise websites, secure devices, and maintain vigilance to preserve the spirit of the season.

Until platforms implement real-time detection and stricter ad-vetting, these scams will persist, especially during festive periods when public caution is low. Those “Free Splendor Bike” reels flooding Instagram this Chhath Puja are not playful memes, but entry points into a sophisticated web of deception, proof that AI-driven fraud is evolving faster than India’s legal and technological defences, and a reminder of how urgently new protections are needed.
Vasudha Beri
December 2, 2025