Don't Be AI-Fooled This April
This might be the most dangerous April Fools' Day ever. Let this cautionary tale of AI-powered deception be your reminder.
The digital information maelstrom is becoming increasingly difficult to manage as AI blurs the line between reality and AI-generated imagination. This confusion is spreading through every digital channel. Fake social media bots running popular accounts, complete with a convincing AI-generated face avatar, a healthy follower count, and a decently populated posting history (generally starting no earlier than 2021), are just the tip of the security-risk spear.
AI photos, videos, audio, and various forms of AI-powered deception have rapidly become the norm. Some of these AI-assisted fakes are even fooling veteran technology experts. On a recent episode of the Hundred Year Podcast, I spoke to one of them.
Perry Carpenter is the author of a new book called FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions. He is also the Chief Human Risk Management Strategist at KnowBe4, a security firm based in Florida. He told me how his firm almost fell victim to someone using AI mixed with human trust (aka social engineering) to hack their way into corporate America.
Adario Strange:
When I researched your company, the first thing that jumped out to me was a recent story about a hacker who apparently applied to work for your company remotely. He used AI to trick his way into getting a meeting and got pretty far.
Perry Carpenter:
The story behind it is fascinating. This person went through several rounds of interviews, all of them remote, over Zoom and email. They passed the coding and technical competency tests, so they were actually qualified to do the thing that we were hiring them to do. They were very skilled and seemed like they would be a good team fit. They passed the personality assessments that we did. [But] they had a stolen identity that was not detected at the time, [and] they passed background checks using a falsified identity that matched up well enough. I should mention that in the video interviews that they did, they were not using any kind of deepfake technology. They presented themselves as themselves.
However, the photo they submitted to us for the HR records (the one that would go on our intranet and propagate out to Slack and all those other systems as their profile photo) was an AI-altered version of a stock image.
They took a very common photo, and then they [used AI to] alter it to make it look enough like the person that was being interviewed that you would look at it, and you go, “Yeah, that looks like him.” And then he would just submit it through the systems. But it was really interesting when we started to pick everything apart; the initial photo, the unaltered photo of the tech executive, the stock one, was everywhere on the internet.
Strange:
Did your team notify people who had maybe been duped by this photo/person in the past?
Carpenter:
We got law enforcement involved. I don't know how many other people had been duped by the same photo, but we did launch an investigation. We brought in a third party called Mandiant, and we brought in law enforcement. That investigation is still ongoing, and I do believe there is some outreach to other companies that we started to understand might be experiencing the same kind of attack. But it was super interesting because as soon as we started to detect something, our systems were on it. We sent this person a Mac. Most companies skip [security on] that because they assume Macs are very secure, but we have all of our devices locked down. So he was setting off red flags immediately, and we shut down the entire thing within 25 minutes of initial detection.
This person was in North Korea trying to do their thing, but the laptop we sent went to a U.S.-based address, and someone manages that address. We're learning how this works now: the address is essentially a laptop farm. So they have all these people off-site applying for jobs here in the U.S. who essentially become sleeper agents. They're there just to do the job and bring in a paycheck, which goes to the bottom line in North Korea. But if they ever need to be activated for some reason, they're already in place, and they've already built up trust with a resume and everything else.
But that person was not in the U.S. We didn't know that because they were pretending to be the person that we hired, and we were asking them questions like, “Why did you just try to install this piece of software? What are you trying to do?” And they didn't have good answers that a real techie would understand or be able to explain, and so that set more red flags off.
The other weird thing about it was that after we shut everything down, we let them know that we were on to them and demanded our equipment back. They cooperated with everything, because the more they don't cooperate, the more attention they bring to themselves. So it's really interesting to see that immediate flip to cooperation; it doesn't necessarily happen the way you'd think it would.
Strange:
Is there any chance that this was actually a different state actor, and somehow it was IP-masked as North Korea?
Carpenter:
As far as we can tell, it was North Korea. Most people outside of the tech community assume that North Korea is not sophisticated, but folks in cybersecurity see North Korea as one of the main threats, alongside Russia, China, Iran, and a scattering of other countries. From North Korea, there's a ton of hacking, ransomware, and money laundering going on. And if you really want to chase that rabbit trail, I recommend the book Rinsed: From Cartels to Crypto: How the Tech Industry Washes Money for the World's Deadliest Crooks by Geoff White. He's a reporter from the UK, and his past three books have all been on North Korean cyber gangs.
Strange:
If you had to guess, what percentage of people who are working remotely in the United States, or rather, for United States companies, are of this ilk, with some sort of sketchy intelligence or hacking network connection that we're unaware of?
Carpenter:
When I talk to my head of HR and our Chief Information Security Officer, they’ve both gone down the rabbit hole on this. They've done so much research. They both now say that after looking at tons of resumes, there are interesting words and phrases that come up over and over again from people who seem to be within this group of folks who are trying to infiltrate.
These words are also in their LinkedIn profiles, [and it seems to] be a system of code that they use to notify each other of where they are. So if one person needs to activate someone else within that network, or get information about a certain company, they are able to quickly go through their contacts and say, “Oh, this person is probably [connected]. Let me reach out to them and see how they respond.”
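A crude way to picture the kind of screening Carpenter describes is a script that flags phrases shared across multiple resumes. This is a hypothetical sketch, not KnowBe4's actual tooling, and the sample phrases and applicant names below are invented placeholders, not the real markers his team found:

```python
from collections import defaultdict

def flag_shared_phrases(resumes, phrases, min_shared=2):
    """Return each watched phrase that appears in at least `min_shared`
    different resumes, mapped to the applicants who used it."""
    hits = defaultdict(list)
    for applicant, text in resumes.items():
        lowered = text.lower()
        for phrase in phrases:
            if phrase.lower() in lowered:
                hits[phrase].append(applicant)
    return {p: who for p, who in hits.items() if len(who) >= min_shared}

# Hypothetical sample data for illustration only.
resumes = {
    "applicant_a": "Seasoned full-stack engineer. Self-motivated and results-driven.",
    "applicant_b": "Backend developer. Self-motivated and results-driven team player.",
    "applicant_c": "Designer with a passion for typography.",
}
suspect = flag_shared_phrases(resumes, ["self-motivated and results-driven"])
print(suspect)
# {'self-motivated and results-driven': ['applicant_a', 'applicant_b']}
```

In practice a watch list like this would be built from phrases investigators have already tied to the network, and a match would be a prompt for closer review, never proof on its own.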
You can check out the entire interview via audio or video on the Hundred Year Podcast!