A spy reportedly used an AI-generated profile picture to connect with sources on LinkedIn

theverge.com | 06/13/2019 12:30:13 | James Vincent
Examples of fake profile pictures and faces created using AI.

Over the past few years, the rise of AI fakes has gotten a lot of people worried, with experts warning that the technology could be used to spread lies and misinformation online. But actual evidence of this happening has so far been thin on the ground, which is why a new report from the Associated Press makes for such interesting reading.

The AP says it found evidence of what seems to be a would-be spy using an AI-generated profile picture to fool contacts on LinkedIn.

The publication says that the fake profile, given the name Katie Jones, connected with a number of policy experts in Washington. These included a scattering of government figures such as a senator's aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve.

The fake profile for Katie Jones spotted by the Associated Press.
Credit: The Associated Press

Using LinkedIn for this sort of low-risk espionage is commonplace, with the US and Europe particularly worried about large-scale operations launched by China. As William Evanina, director of the US National Counterintelligence and Security Center, told the AP: "Instead of dispatching spies to some parking garage in the US to recruit a target, it's more efficient to sit behind a computer in Shanghai and send out friend requests to 30,000 targets."

But what makes the case of Katie Jones unusual is the use of an AI method known as a generative adversarial network (or GAN) to create the account's fake profile picture.
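For readers curious about the mechanics: a GAN pits two neural networks against each other. A generator turns random noise into images, while a discriminator tries to tell real photos from generated ones; each gets better by trying to beat the other. The PyTorch sketch below is a deliberately tiny illustration of that training loop, not the method behind any particular fake; the layer sizes, learning rates, and image dimensions are arbitrary assumptions for brevity, and real face generators such as StyleGAN are vastly more elaborate.

# Minimal sketch of the GAN idea in PyTorch. Illustrative only: network
# sizes and hyperparameters are assumptions, and real face generators are
# far larger convolutional models.
import torch
import torch.nn as nn

LATENT_DIM = 64     # size of the random noise vector (illustrative choice)
IMG_DIM = 28 * 28   # tiny flattened grayscale images, for brevity

# Generator: maps noise to an image with pixels in [-1, 1].
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a single real-vs-fake logit per image.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # Discriminator step: push real images toward label 1, fakes toward 0.
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into calling fakes real.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in batch of random "photos"; real training would use actual images.
train_step(torch.randn(32, IMG_DIM))

Trained long enough on a large dataset of faces, the generator learns to produce images the discriminator can no longer distinguish from the real thing, which is exactly the property a fake profile needs.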

Using GANs to create fake faces has become incredibly easy in recent years, as demonstrated by the popularity of websites like ThisPersonDoesNotExist.com. Although spies using LinkedIn could easily grab stock images or random social media photos to create their account, using an AI fake adds a layer of protection. Because each image is unique, it can't be traced back to a source with a reverse image search for an easy debunking.
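That claim is easy to demonstrate with the kind of perceptual fingerprinting that underpins many reverse image search tools. The Python sketch below uses the third-party Pillow and ImageHash packages as a toy stand-in for a web-scale index; the file names are hypothetical. A stolen stock photo would have a near-matching fingerprint somewhere on the web, but a freshly generated face never will.

# Toy illustration of why a GAN-generated face defeats reverse image search.
# Real search engines are far more sophisticated, but many rely on perceptual
# fingerprints like the pHash computed here. Requires Pillow and ImageHash
# (pip install Pillow ImageHash). File names below are hypothetical.
from PIL import Image
import imagehash

# Hypothetical index of fingerprints for photos already seen on the web.
known_hashes = [imagehash.phash(Image.open(p))
                for p in ["stock1.jpg", "stock2.jpg"]]

def find_source(candidate_path: str, max_distance: int = 10) -> bool:
    """Return True if the image resembles one we've indexed before."""
    candidate = imagehash.phash(Image.open(candidate_path))
    # Subtracting two hashes gives a Hamming distance; a small value means
    # "probably the same underlying photo, possibly cropped or recompressed".
    return any(candidate - known < max_distance for known in known_hashes)

# A reused stock photo matches the index; a unique AI fake almost never does.
print(find_source("suspect_profile_photo.jpg"))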

For a while now people have been worrying about the threat of deepfakes, AI-generated personas that are indistinguishable, or almost indistinguishable, from real live humans. I think I may have caught an example of one in the wild: https://t.co/yvZbK8RoQt pic.twitter.com/4FaNqtivEY

Raphael Satter @ RightsCon (@razhael), June 13, 2019

And while these fakes look convincing at a glance, they reveal themselves when you peer a little closer. In the case of Katie Jones, the face is slightly asymmetrical and the background indistinct, the edges of her hair and ear are blurred, and there are strange streaks on the flesh. Several experts the AP spoke to concluded the image was definitely created using machine learning techniques.

An incident like this isn't proof, of course, that AI fakes are going to destroy our notion of truth and evidence. But it does show that these concerns are not just theoretical, and that this technology, like any other, is slowly going to be adopted by malicious actors.

When it comes to LinkedIn spies, though, the big danger is not really AI fakes but simple inattention. As Paul Winfree, the economist and would-be Federal Reserve board member, told the AP: "I'm probably the worst LinkedIn user in the history of LinkedIn ... I literally accept every friend request that I get."
