The party is proud to have used deepfakes "for good" for the first time, but experts are concerned about such manipulations.
Indian politician Manoj Tiwari used AI to create deepfake videos of himself in Haryanvi and English in order to attract more voters, Vice reports. Experts are worried about how far the use of fabricated videos in politics can go.
One real and two fake videos
On February 7, the day before the Delhi Legislative Assembly election, two campaign videos went viral on WhatsApp in which Manoj Tiwari, president of the Delhi unit of the Bharatiya Janata Party (BJP), urged viewers to vote for him. In one video the politician speaks Hindi; in the other, the Haryanvi dialect.
Deepfake in Haryanvi
The party hired Ideaz Factory, a political communications firm, to create the videos. It generated the deepfake for a "positive election campaign" aimed at voters who speak different languages.
"Deepfake technology has helped us strengthen the campaign like nothing else. With the help of the Haryanvi video, we convincingly conveyed a message to a target audience whose language the candidate doesn't speak," Neelkant Bakshi, who handles social media and IT for the party's Delhi unit, told Vice.
In the Haryanvi video, Tiwari tried to dissuade migrant workers living in Delhi from voting for a rival party. The deepfake was distributed across 5,800 WhatsApp groups, and about 15 million people watched it, Bakshi said. After the Haryanvi video took off, the BJP created a video in English to reach urban residents.
Deepfake in English
Lip movements replaced, but no generated voice
In most deepfakes, the subject's face is replaced entirely; a more sophisticated method changes only the lip movements in the original video so that they match new audio. Ideaz Factory said it used this technique: "We trained a lip-sync algorithm on Tiwari's speeches to create suitable lip movements." The company hired a dubbing actor who read the text in Haryanvi; that recording was overlaid on the video.
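The lip-sync approach described above can be reduced to a toy sketch: derive a per-frame "mouth openness" signal from the loudness of the new audio track, the kind of driving signal a generative model would use to repaint the mouth region of each video frame. Everything here (function names, the amplitude heuristic) is illustrative only, not Ideaz Factory's actual pipeline, which reportedly relied on a trained neural network.

```python
# Toy illustration of the lip-sync idea: map audio energy per video
# frame to a normalized "mouth openness" value. (Hypothetical names;
# a real system would drive a learned mouth-shape model instead.)

def frame_energies(samples, sample_rate, fps):
    """Average absolute amplitude of the audio within each video frame."""
    per_frame = int(sample_rate / fps)  # audio samples per video frame
    energies = []
    for start in range(0, len(samples), per_frame):
        chunk = samples[start:start + per_frame]
        if not chunk:
            break
        energies.append(sum(abs(s) for s in chunk) / len(chunk))
    return energies

def mouth_openness(energies):
    """Normalize energies to 0..1 so they can drive mouth shapes."""
    peak = max(energies) or 1.0  # avoid division by zero on silence
    return [e / peak for e in energies]

# One second of synthetic audio at 8 kHz, rendered at 25 fps:
# silence first (mouth closed), then steady "speech" (mouth open).
audio = [0.0] * 4000 + [0.5] * 4000
openness = mouth_openness(frame_energies(audio, 8000, 25))
```

In a production lip-sync system, a per-frame signal like this would condition a video-synthesis model (such as vid2vid, which researchers suspected was used here) to generate matching mouth frames; the sketch only demonstrates the audio-to-frame alignment step.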
Vice notes that WhatsApp users spotted the politician's unnatural lip movements. Ideaz Factory admitted that the algorithm is not perfect: it still cannot generate the voice itself with AI. But Tiwari's team intends to refine the algorithm and keep using deepfakes in election campaigns.
Vice sent the videos to researchers at the Rochester Institute of Technology in New York, who confirmed that they are indeed deepfakes. Ideaz Factory declined to say which technology it used, but the researchers said the deepfakes were most likely created with Nvidia's vid2vid.
"They let the genie out of the bottle"
The party believes it is the first to use deepfake technology "for good": "Previously, this tool was used only in a negative way. We used it for a positive campaign."
Tarunima Prabhakar, creator of the civic project Tattle, which archives content circulating on WhatsApp, is skeptical of the "positive" use of deepfakes: "The problem is that they let the genie out of the bottle," because sooner or later someone will turn the technology into a weapon. "If you allow political parties to use these kinds of deepfakes, how do you interpret that? There is too much subjectivity," says Prabhakar.
Vice also suggested that such content could slip past fact-checking algorithms and fool security experts. The publication contacted the Indian fact-checking outlet AltNews, which could not determine whether the video was fake. "It has now become impossible to verify the authenticity of something that does not immediately look fake. This is dangerous. Something like this is appearing in India for the first time," said Pratik Sinha, head of AltNews.
In 2018, five travelers were beaten to death by residents of an Indian village because of fake messages spread on WhatsApp. Since then, the company has restricted forwarding a message to no more than five chats and launched a fact-checking service.
Social networks handle deepfakes in different ways. Twitter adds a manipulated-media label to such content. Facebook has not announced new rules but has blocked several fake videos. Reddit has banned deepfakes posted to impersonate another person.