How to Protect Yourself from ChatGPT Scams
As ChatGPT’s popularity grows, so does the potential for scammers to use AI to trick people into divulging personal information. ChatGPT offers scammers the ability to create craftier cons by using the AI to generate phishing emails, malicious code, fake websites, photos, videos and phone calls. For the rest of us, this means that we need to be more vigilant as scams become harder to identify.
In this blog, we’ll go over scams cybercriminals have crafted using ChatGPT and provide best practices on how to protect yourself and stay cyber safe online.
Brand Impersonation
Brand impersonation can take the form of cleverly crafted phishing emails or imitating a website from a well-known and trusted brand.
With phishing emails, the phish could be branded with the company's logo and contain a message nearly identical to one you might receive from that company. The scammer can easily copy the messaging from the real email and swap out the real link for a malicious one generated with ChatGPT's help.
OpenAI, the creators of ChatGPT, have tried to stop the generation of malicious code through ChatGPT. It should come as no surprise, however, that fraudsters have found work-arounds and can trick the AI into generating malicious code.
By making small changes to a legitimate URL, cybercriminals are also able to create convincing fake websites that look nearly identical to the real thing. INKY recently documented on its blog a cyber-attack that combined a cleverly crafted brand impersonation phishing email with a fake website.
In this attack, the fraudsters convinced employees to follow a malicious link that appeared to come from the organization they worked for, which led them to a site mimicking their employer's own. When the employees tried to log in, they received an error message while their login credentials were stolen.
Fake Calls, Photos and Videos
ChatGPT and related AI tools can also be used to craft fake phone calls, photos and videos that cybercriminals use to trick people into thinking they are communicating with a real person representing a well-known brand or service provider.
A cybercriminal can use ChatGPT to craft a script or even make a phone call to manipulate someone into divulging their personal information.
Fraudsters can also use realistic-looking photos to pose as a customer service representative and trick someone into providing their login credentials.
Additionally, with the help of ChatGPT, fraudsters can create fake videos impersonating high-ranking personnel that ask employees to send money, personal information or other sensitive data to the cybercriminal.
To generate these scams, the fraudster supplies a script describing what they want the photo to look like, or what they want the impersonated person to say, and the AI does the rest. It's surprisingly simple.
The danger of these scams is that people are more likely to believe cybercriminals if it seems like they are communicating with a real person, or someone they know.
How to Protect Yourself
Organizational policies around ChatGPT are taking a while to roll out. This is especially true for larger organizations and enterprises where these types of changes take longer to implement due to their size and the number of policies involved.
Fortunately, to try to get ahead of the curve, many organizations have shared cautionary communications with their teams and have begun to educate their employees while more sophisticated internal policies are being implemented.
So, what can you do in the meantime?
1. Slow down. Consider the email, phone call or video you are being presented with. Ask yourself these questions:
a. Were you expecting it?
b. Is it from someone you know?
c. Is this how this person or organization usually communicates with me?
If you answer “no” to any of these questions, chances are someone is trying to scam you!
2. Investigate. If the email, phone call or video is from someone you know or a well-known brand, do some investigative work. Look at the following things:
a. If there is a link, hover over it. Compare it to the organization’s link from their website. Does it match?
b. Look at the sender’s email address and compare it to the organizational address or personal address (you may need to search for this information). Do the email addresses match?
c. When investigating a video, look at video quality and sizing. Is the quality the same as you would expect? Are the proportions correct?
If you answer “no” to any of these questions, chances are someone is trying to scam you!
3. If in doubt, reach out! If you’ve completed the first two steps and still aren’t sure, reach out to the person or brand through another method of communication to confirm that the request is legitimate. You can also go directly to the organization’s website and use its official support process, which sidesteps any fake sites.
4. Remember, you’re more likely to fall for a scam:
a. Early in the morning or late at night.
b. When working after hours or on vacation.
Avoid checking your email and responding to messages during these times, as you aren’t as alert and are more likely to fall for a scam.
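The link check in step 2a can even be sketched in code. Here is a minimal Python example of the idea — comparing a link's actual domain against the organization's real one. The domain names used are purely illustrative, and a real lookalike can be subtler than this:

```python
from urllib.parse import urlparse

def looks_suspicious(link: str, trusted_domain: str) -> bool:
    """Return True if the link's host is neither the trusted domain
    nor one of its genuine subdomains."""
    host = (urlparse(link).hostname or "").lower()
    trusted = trusted_domain.lower()
    # Exact match, or a true subdomain like support.example.com
    return not (host == trusted or host.endswith("." + trusted))

# A lookalike domain fails the check even though it "contains" the brand name
print(looks_suspicious("https://example-login.com/reset", "example.com"))    # True
print(looks_suspicious("https://support.example.com/reset", "example.com"))  # False
```

Note that the check compares whole domain labels, so a trick address like `example.com.evil.com` is also flagged: it ends in `.evil.com`, not `.example.com`. This is the same comparison you do by eye when you hover over a link.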
Looking for More Information?
Check out our blog on ChatGPT and Getting our Minds Around Artificial Intelligence.