How AI turned Ukrainian YouTubers into Russians

“I don’t want anyone to think that I have ever said such a horrible thing in my life. Using a Ukrainian girl for the face of promoting Russia. It doesn’t make sense.”

Olga Loiek has seen her face appear in various videos on Chinese social media – the result of easy-to-use generative AI tools available online.

“I can see my face and hear my voice. But it was all very creepy, because I saw myself saying things I never said,” the 21-year-old, who studies at the University of Pennsylvania, told the BBC.

Accounts using her likeness go by dozens of different names, such as Sofia, Natasha, April and Stacy. These “girls” speak Mandarin – a language Olga has never learned. They appear to be from Russia, and talk about China-Russia friendship or advertise Russian products.

“I saw like 90% of the videos talking about China and Russia, China-Russia friendship, that we should be strong allies, as well as advertisements for food.”

One of the biggest accounts is “Natasha’s imported food” with a following of more than 300,000 users. “Natasha” will say things like “Russia is the best country. It’s sad that other countries are turning away from Russia, and Russian women want to come to China”, before starting to promote products like Russian sweets.

This personally angered Olga, whose family is still in Ukraine.

But more broadly, her case has drawn attention to the dangers of a technology that is evolving so rapidly that regulating it and protecting the public has become a real challenge.

From YouTube to Xiaohongshu
Olga’s Mandarin-speaking AI likeness first surfaced in 2023 – not long after she started a YouTube channel that she did not update regularly.

About a month later, she started receiving messages from people who claimed they saw her speaking in Mandarin on Chinese social media platforms.

Intrigued, she began to search for herself, and found AI likenesses of herself on Xiaohongshu – an Instagram-like platform – and Bilibili, a video site similar to YouTube.

“There are a lot of them [accounts]. Some have things like the Russian flag in their bios,” said Olga, who has found about 35 accounts using her likeness so far.

After her fiancé tweeted about the accounts, HeyGen – the firm said to have developed the tools used to create the AI likenesses – responded.

The company revealed that more than 4,900 videos had been generated using her face, and said it had blocked her image from further use.

A company spokesperson told the BBC that their systems had been hacked to create what they called “unauthorized content” and added that they were immediately updating their security and authentication protocols to prevent further abuse of their platform.

But Angela Zhang, from the University of Hong Kong, said what happened to Olga was “very common in China”.

The country is “home to a vast underground economy that specializes in counterfeiting, misappropriating personal data and producing deepfakes”, she said.

This is despite China being one of the first countries to attempt to regulate AI and its uses. It has even amended its civil code to protect the right to one’s likeness from digital fabrication.

Statistics released by the public security department in 2023 show authorities arrested 515 individuals for “AI face-changing” activities. Chinese courts have also handled cases in this area.

But how did so many of Olga’s videos make it online?

One reason may be that they promote the idea of friendship between China and Russia.

Beijing and Moscow have grown significantly closer in recent years. Chinese leader Xi Jinping and Russian President Vladimir Putin have declared that the friendship between the two countries “has no limits”. The two are scheduled to meet in China this week.

Chinese state media have repeated the Russian narrative justifying its invasion of Ukraine and social media have censored discussion of the war.

“It is not clear whether these accounts are coordinated under a collective purpose, but promoting a message that is in line with government propaganda certainly benefits them,” said Emmie Hine, a law and technology researcher from the University of Bologna and KU Leuven.

“Although these accounts are not explicitly linked to the CCP [Chinese Communist Party], promoting an aligned message may reduce the likelihood that their posts will be removed.”

But that means ordinary people like Olga remain vulnerable and at risk of falling foul of Chinese law, experts warn.

Kayla Blomquist, a technology and geopolitics researcher at the University of Oxford, warned that “there is a risk of individuals being framed with artificially generated, politically sensitive content” that could be subject to “swift punishment enacted without due process”.

She added that Beijing’s focus in AI and online privacy policy is on building up consumer rights against predatory private actors, but stressed that “citizens’ rights in relation to the government remain very weak”.

Ms Hine explained that “the fundamental aim of China’s AI regulation is to balance maintaining social stability with promoting innovation and economic development”.

“While the rules on the books appear strict, there is evidence of selective enforcement, particularly of the generative AI licensing rules, which may aim to create a more innovation-friendly environment, with the implicit understanding that the law provides a basis for firm action if necessary,” she said.

‘Not the last victim’
But the impact of Olga’s case reaches far beyond China – it shows the difficulty of trying to regulate an industry that is growing at breakneck speed, with regulators always playing catch-up. That doesn’t mean they aren’t trying.

In March, the European Parliament approved the AI Act, the world’s first comprehensive framework for curbing technological risks. And last October, US President Joe Biden announced an executive order requiring AI developers to share data with the government.

While regulations at the national and international level are developing slowly compared to the fast-paced race of AI growth, we need “a clearer understanding and a stronger consensus about the most dangerous threats and how to mitigate them”, Ms Blomquist said.

“However, disagreements within and among countries prevent real action. The US and China are key players, but building consensus and coordinating the necessary joint action will be challenging,” she added.

Meanwhile, on an individual level, there seems to be little people can do, short of not posting anything online at all.

“The only thing to do is to not give them any material to work with: to not upload photos, videos or audio of ourselves to public social media,” Ms Hine said. “However, bad actors will always have a motive to impersonate others, so even if governments crack down, I expect we will see consistent growth in the midst of regulatory violations.”

Olga is “100% sure” that she will not be the last victim of generative AI. But she is determined not to let it chase her off the internet.

She has shared her experience on her YouTube channel, and said some Chinese online users have helped her by commenting on videos that use her likeness and pointing out that they are fake.

She added that many of these videos have now been removed.

“I want to share my story, I want to make sure people understand that not everything you see online is true,” she said. “I love sharing my ideas with the world, and none of these scammers can stop me from doing that.”
