Cybersecurity expert and Warrenton resident Donnie Wendt warns that people may be vulnerable to scams run using artificial intelligence models that have become increasingly common in the last year.
Wendt is a Marine veteran who has worked in cybersecurity for the past 15 years, previously at Mastercard and now with Whiteglove AI. He was also recently named one of the top 10 expert leaders in cybersecurity by CIO Business World and teaches cybersecurity as an adjunct professor at Utica University.
He helps businesses and local governments implement artificial intelligence models to improve the efficiency and security of their enterprises. While he says these are powerful tools that can dramatically increase productivity, the software is also readily available to those who would use it to inflict harm.
“We talk about social engineering, where adversaries are using generative AI, whether that is to emulate your voice, it could be video to emulate both your voice and your face, or it could be phishing emails. Generative AI is allowing adversaries to do that much quicker and much more realistically,” said Wendt.
According to IBM Research, generative AI “refers to deep-learning models that can generate high-quality text, images and other content based on the data they were trained on.”
Wendt says that with some of these tools, scammers can quickly produce a convincing facsimile of someone’s voice, and even their face on a video call, in order to lie to or steal from victims, often using less than 15 seconds of a video or audio recording.
He went on to say that these scams commonly take the form of a voicemail message, phone call or video call in which a scammer impersonates a victim’s loved one and claims to be in trouble and in need of money. These scammers frequently target the elderly, who may be less aware of the capabilities of this technology, preying on their empathy.
There are ways to mitigate these risks.
“First of all, we have to have a little bit of skepticism, right?” said Wendt. “When getting that call or that email, follow up on that by contacting that person through other means.”
He also said that another, admittedly “old school,” method would be to establish a code word or phrase with loved ones that could be used to verify their identity over the phone.
While generative AI models are quickly improving, Wendt says there are still some tell-tale signs that the images and videos they create are not real.
“With the current level of technology, humans can detect them if they’re really paying close attention,” said Wendt.
The models are very effective at recreating images and videos, but they often miss small details. One example Wendt provided: if a model is generating images of a person wearing glasses, the frames may not extend back to the ears.
Other red flags include unnatural facial movements, expressions and emotions. Wendt said these models have difficulty accurately emulating human expression, and the subjects in AI-generated content may blink or move their mouths unnaturally.
While the models can accurately emulate someone’s tone of voice from a short clip, they are less effective at mirroring speech patterns, and someone paying close attention may notice that the model is not speaking the way their loved one would.
These AI models are also being used to spread misinformation rampantly online. Using them to create images and videos is not inherently problematic on its own, but Wendt said issues arise “when disinformation becomes misinformation.”
“The first person putting it out there knows it’s fraudulent; that’s disinformation, right?” said Wendt. “But people are hearing this and believing that to be true. … They start sharing it, and it’s that rapid proliferation that then causes the problem.”
The technology can easily be used to produce convincing but false news stories that can quickly gain traction on social media. Wendt advises using many of the same techniques to try to catch AI-generated content before sharing or spreading what may be misinformation.
“We need to have a healthy dose of skepticism,” said Wendt. “Can I verify this through some other source?”
The extra step of verification, whether contacting a loved one in cases of suspected scams or checking outside sources before believing what may be a fraudulent story, can be the difference between remaining vigilant and falling victim to AI-generated scams.