April 30, 2024 · Investment, Savings, Security
How Scammers Could Use Artificial Intelligence to Steal Your Money
Over roughly the past three years, what was once a specialized area of advanced technology has become heavily publicized in the news media and available to everyday consumers. Recently increased visibility of, and access to, generative artificial intelligence (or just AI)—massive software programs that can learn from and generate information based on huge amounts of data and other content—has allowed consumers to find and retrieve AI-curated and AI-crafted details on a range of topics. AI can be used to create artwork, videos, articles, essays, stories, jokes, poetry and music. With AI, consumers can also engage in seemingly human interactive chat, including conversations through websites and mobile applications (apps) that provide support for services, products and even emotional needs, such as chat helplines for people experiencing a mental health crisis.
But like many easily accessible tools, AI can be used for bad purposes as well as good ones. The increasing use (and potential use) of AI to harm the general public has concerned the U.S. government. In June 2023, the U.S. government agency the Consumer Financial Protection Bureau (CFPB) noted in its public blog that “Emerging AI-marketed technologies can negatively impact civil rights, fair competition, and consumer protection. Because technology has the power to transform our lives, we must ensure that AI-marketed technologies do not become an excuse for evasion of the law and consumer harm. It is critical that these emerging technologies comply with the law.” In a joint paper released in September 2023, the U.S. Federal Bureau of Investigation (FBI), National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) warned consumers against “deepfakes”: “Deepfakes are AI-generated, highly realistic synthetic media that can be abused to…Enable access to networks, communications, and sensitive information.”
Keep in mind that artificial intelligence is still experimental, emerging and not yet fully regulated by the government, which means it can be used for illegal purposes, and crooks may make unrealistic and untrue claims about its capabilities. Since about 2022, scammers have been able to use AI programs to trick consumers and steal their money. Following are actual and possible uses of artificial intelligence by scammers; it is helpful to be aware of, and wary of, these fraudulent techniques.
One specific, very sophisticated example of AI technology that could potentially be used by cyber crooks is OpenVoice, developed by researchers at the Massachusetts Institute of Technology in the U.S., Tsinghua University in Beijing, China, and members of Canadian AI startup MyShell. According to VentureBeat.com, as of January 2024, the open-source OpenVoice AI technology can already provide “voice cloning that is nearly instantaneous and offers granular controls not found on other voice cloning platforms.”
How scammers can use artificial intelligence to target and take advantage of consumers
Some of the initial ideas for potential AI scamming might include the following:
- AI could clone a family member’s voice to create a more believable family emergency scam. For years, scammers have used fake family emergencies to target consumers, especially elderly grandparents. In a family emergency scam, a scammer impersonates a family member and calls a parent, grandparent, uncle, aunt or other relative, claiming to be in urgent trouble that can only be solved by sending money immediately, typically through generally untraceable and unrecoverable methods such as electronic transfers, gift cards or cryptocurrency. While there may be multiple clues that a scam is taking place, trusting, anxious and uninformed relatives are still tricked. Now, with AI capabilities available at minimal or no cost, scammers have a powerful new tool that could let them create a far more plausible family emergency scam. According to another national government agency, the Federal Trade Commission (FTC), scammers could use voice cloning to almost perfectly imitate a real family member’s voice. From a recording of a person speaking—captured by calling them, or taken from video or audio content posted online—AI software can create a digital simulation, an audio deepfake, and the program can then imitate that family member in a conversation. Scammers can also fake a family member’s phone number on caller identification to make the ruse even more believable. If a call from a family member seems unusual and you suspect something is wrong, hang up and call them back at the phone number you already have for them.
- Scammers can create fake AI-connection apps for your mobile device. In May 2023, the venerable technology website TechSpot.com highlighted a burgeoning trend of hackers designing malware apps that purportedly connect to legitimate artificial intelligence (AI) programs but are actually fraudulent “fleeceware”—software that quietly sticks users with subscription fees. Such an app may charge consumers for an AI service that is actually free, or it may not connect to a true AI program at all. Although the Apple® App Store and Google® Play Store try to screen out fraudulent apps, bad apps can appear in online stores before their illegal or harmful activities are discovered and they are banned from being downloaded. For your personal security, be careful about what you download and be aware of how malicious apps can victimize you.
- Even advertising for AI tools can be fake and deliver computer malware. Because interest in some AI services grew incredibly fast in 2022-2023, scammers, as usual, jumped in to exploit a popular topic and take money from credulous consumers. According to an April 2023 consumer alert published by the federal consumer watchdog agency the FTC, “…criminals run bogus ads for AI tools and other software on social media sites and on search engines. These savvy cybercriminals can evade detection by systems designed to ferret out malicious advertising. They can also evade anti-virus software…If you click on a malicious ad, you end up on a cloned site that downloads malware onto your device.”
- AI can improve the spelling and readability of phishing attacks and imitate someone’s style of writing. Phishing attacks—usually fake emails designed to trick a person into clicking a link that downloads malware or leads to a fake website that captures account information—often originate from countries outside the United States, and the email’s creator may not be a native English speaker. As a result, a phishing email may contain English spelling and grammatical errors that provide clues that it is a fake message not coming from an authentic company or person. According to a June 5, 2023 story in the Wall Street Journal by Cheryl Winokur Munk, generative AI is sophisticated enough to write fluently and without errors. Because AI is capable of learning, if it is given examples of a company’s or individual’s external communications, it can determine what makes their writing style unique and then mimic that style in its writing, producing a convincing fake message that is virtually indistinguishable from a legitimate communication. So now phishing emails or other communications can read as if they actually came from a legitimate company, executive, friend or family member.
- Crooks can promise that AI will help create a successful business or manage your investments, at low cost and with high profits. Scammers may falsely claim that AI can help run a business—such as an online store—and make it profitable, or that AI can deliver high financial returns by trading cryptocurrency or other speculative investments.
There are not yet many reported cases of voice cloning scams, but it’s useful to be informed and cautious
U.S. society (and the world) depends on the vast number of hardware and software marvels created especially over the last 40 years with the development of personal computers, mobile phones and the internet, but as with many beneficial technologies, they can also be deliberately misused. Even advanced technology that would seem safe and helpful can be turned to illegal purposes. It’s important to always do your homework and research new types of software, hardware and services before trying them out for personal use. If you’re concerned about online safety, scams and your financial accounts, be especially cautious when using any new device or application that can access your accounts.
If you are ever concerned about the security of your Delta Community accounts, contact the Credit Union right away
- If you think any of your Delta Community accounts have been compromised, immediately contact our Member Care Center via our toll-free number at 800-544-3328 with whatever details you have, including dates, amounts of money, email messages, email addresses, text messages, phone numbers and names.
- Please remember that Delta Community will never call, text or email you to ask for your checking, savings or investment account, ATM, debit or credit card numbers or passwords, your telephone access (IVR) PIN or one-time passcode.
- The Credit Union will also never ask members to send money electronically as a test or share one-time passcodes received via email or text.
- If someone purporting to be from Delta Community calls and asks for any of this type of information, hang up and call the Credit Union Member Care Center at the number above.
Looking for more information on potentially harmful technology or managing money?
Delta Community’s blog and security posts have a lot of advice on handling technology, online security, and money:
- Avoid fake check scams.
- Learn check-writing tips.
- Think for a minute, and then don’t click 'unsubscribe' in spam emails and texts: tips for managing spam.
- Learn how to make your mobile and online payments safer.
- How to protect yourself online while working or learning from home.
- Why to question your security questions.
- How to secure your home network.
- How to tell if someone’s stolen your identity and how to prevent it.
- How to get a stronger password and use a password manager.
- Be on the lookout for phishing, smishing and vishing attacks.
- Be vigilant for spoofed phone calls.
- Harden your email account against an attack.
For more information that may help you manage buying, saving, and financial activities, look into the free Delta Community Financial Education Center webinars on a range of practical, “how to” topics that could potentially help save you money and enable you to better manage your income, financial assets and life. Please visit the Financial Education Center's Events & Seminars page to review and register for its monthly on-demand webinars.