Threats: Phishing in the age of deepfakes. Are you at risk?
Join tomorrow's webinar for a discussion of other opportunities and risks for your organization in the era of generative AI.
This post is the first in a series called “Threats,” covering the ways generative AI might pose risks to mission-driven organizations or make your work harder, and how you can prepare for or guard against those threats.
Last week I wrote a post demonstrating how I use ChatGPT to generate marketing materials (like this Substack!). I obviously believe that productivity opportunities like this abound. But the good guys aren’t the only ones who can use generative AI tools.
In particular, I’m worried that generative AI has the potential to make scammers and hackers way more effective — and I don’t want any of you to fall prey.
Social impact orgs need to worry about social engineering
Two years ago, scammers stole $650,000 from the nonprofit One Treasure Island. According to the Wall Street Journal, they used “a relatively low-tech hacking technique: an email-compromise attack. Hackers broke into the email system of the nonprofit’s third-party bookkeeper, then inserted themselves into existing email chains by using similar email addresses to pretend to be people associated with the nonprofit.”
One Treasure Island isn’t the only victim: According to CyberCrime Magazine, “[m]ore than half of all cyberattacks are committed against small-to-midsized businesses (SMBs)” — among which I would count the vast majority of social impact organizations — “and 60 percent of them go out of business within six months of falling victim to a data breach or hack.”
Social engineering refers to “techniques aimed at talking a target into revealing specific information or performing a specific action for illegitimate reasons.” So-called “deepfake” video and audio generators are already opening up new horizons for social engineers.
How AI will make it worse
As AI technology becomes more accessible, cybercriminals will be able to generate deepfake voicemails — and potentially even live audio and video — of someone who looks and sounds exactly like a coworker. Then they’ll go phishing for data that lets them steal your money.
Any organization with substantial financial resources or valuable confidential information could become the target of an entirely new level of sophisticated social engineering. (And by the way, it’s not just organizations — you should warn all your loved ones about these kinds of attacks, too.)
Want an early example of how hackers might use these tools? The Daily Mail reported on scammers who pretended they’d kidnapped a teenage girl. They called her mother, with the girl’s voice screaming and begging for help in the background. Her mother said, “It was completely her voice. It was her inflection. It was the way she would have cried…. I never doubted for one second it was her. That’s the freaky part that really got me to my core.”
Luckily, this particular mother was wise to the possibility of a scam, and did not pay off the criminals. But imagine if someone in your finance department got a phone call from [a hacker-generated facsimile of] your CFO, asking them to read out the account information for your organization’s main bank accounts, under some plausible pretext? Are you sure they wouldn’t do it? Are you sure YOU wouldn’t do it?
Safeguard Your Organization
Part of my job at AI Impact Lab is to surface risks like these for my clients. To be clear, I’m not a cybersecurity expert, and if you haven’t already, you should probably hire one to do an audit of your organization and make recommendations. But here are a few steps you can take to protect your organization from AI-powered attacks:
Educate your team: Keep staff aware of evolving risks and encourage vigilance when handling unexpected requests.
Implement strong security measures: Use multi-factor authentication, secure password management, and regular security audits.
Develop an incident response plan: Establish clear protocols for handling suspected phishing and social engineering attacks.
Run regular penetration tests: Have your own cybersecurity experts pretend to be cybercriminals, test whether they can use social engineering to get into your accounts, and identify what additional training and systems your organization needs based on the results.
Stay up to date: Follow the latest developments in AI and cybersecurity to understand emerging threats and best practices.
Join the AI for Good webinar for more insights
Phishing is just one example of the ways in which generative AI could affect your organization’s ability to deliver on your mission — either positively or negatively.
To learn more about the opportunities and challenges that AI presents for social impact organizations, join me tomorrow on AI Impact Lab’s launch webinar:
AI For Good: How Mission-Driven Orgs Can Harness AI for Impact
When: Wednesday, April 19 at 4pm Eastern time / 1pm Pacific time