Artificial intelligence (AI) is an evolving field that has dominated the media and tech landscape for the past several years. As with any transformative technological innovation, financial professionals often question how fraudsters or hackers could use it for malicious purposes. We frequently encounter concerns around AI in the forensic accounting and litigation support sphere, so we’ve created a brief but informative guide on how the fraud landscape is shifting as AI becomes an everyday part of our lives.
AI is often viewed as an omnipresent technology that emerged from a lab and defies definition. In broad terms, however, AI can be defined as technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy.1 Despite its newfound claim to fame, AI was used by businesses for years before its current pervasiveness in the media. Examples of AI in everyday business can be found in Netflix, Google, Windows, and the chatbots you may encounter on customer service pages. The groundwork was laid even earlier. In 1935, Alan Turing began describing an abstract computing machine with limitless memory, and by 1972, researchers at Stanford University were developing MYCIN, an early AI system designed to help diagnose and treat blood infections. While AI is in focus in the current era, its development and implementation in society is certainly not a novel concept.
How AI Relates to Fraudulent Behavior
AI shifts some of the core elements of fraud, particularly what fraud investigators refer to as the fraud triangle, a framework created to illustrate why individuals commit fraud. As its name indicates, it is composed of three components: opportunity, pressure, and rationalization.
1. Opportunity
Opportunity refers to an individual’s ability and access to commit fraud. To visualize this element of the fraud triangle, imagine someone who handles physical cash at a grocery store, such as a cashier. That direct access to a liquid asset like cash gives a cashier or manager greater “opportunity” to commit fraud relative to employees without such access.
2. Pressure
Pressure relates to an individual’s motivations or reasons to commit fraud. It can be as simple as needing additional funds to pay bills or as complex as wanting to project profitability to shareholders.
3. Rationalization
Rationalization pertains to how an individual justifies committing fraud. An example would be an employee who excuses stealing funds from their employer based on the belief that they should be paid more.
We discuss the fraud triangle because AI has changed some of these elements. AI has shifted the opportunity element: because it enables computers to mimic human behavior, fraudsters can now commit fraud at a speed and scale beyond what any individual could achieve alone. AI also affects the rationalization element. Since the dot-com boom, it has been easier for fraudsters to commit fraud because they can place a screen between themselves and their victims; the internet added a degree of separation between fraudsters and the harm they inflict. AI adds a further degree of separation, making it easier for fraudsters to rationalize the harm they cause because they never directly see the impact of their actions.
How AI Is Leveraged to Commit Fraud
To properly prepare for the losses generative AI may enable, it is important to recognize how AI can be used to commit fraud. Two examples are phishing scams, or “spoofing,” and deepfakes. Spoofing is a kind of phishing attack in which bad actors send messages posing as a trusted person known to the recipient. Deepfakes are a type of synthetic media in which a person in an image or video is swapped with another person’s likeness.2 AI is being used to generate spoofing emails that lack the grammatical errors and language barriers companies have learned to spot in past phishing attempts, allowing fraudsters to transcend language, cultural, and company barriers and draft messages that even the keenest eyes can miss. AI has also allowed fraudsters to generate images and videos depicting individuals within a company requesting information or tasks when the actual individual has made no such request.
Enhancing Fraud Investigations With AI
However, AI isn’t just expanding the fraudster’s tool kit; it’s also strengthening fraud investigators’ ability to examine suspected fraudulent activity. Programs like Intella use predictive coding that helps investigators analyze large volumes of data significantly faster. In addition, AI has advanced behavioral analysis and natural language processing, allowing computers to surface meaningful results by detecting tonal variation and emotional indicators in the data.
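To make the natural language processing piece concrete, the sketch below shows one simplified way emotional indicators might be surfaced in a document set. It is a minimal illustration using NLTK’s open-source VADER sentiment analyzer, not a description of Intella’s proprietary engine or any specific product; the email snippets and the scoring threshold are hypothetical assumptions for demonstration purposes.

```python
# Minimal sketch: score documents by emotional tone so an investigator can
# prioritize review. Uses NLTK's VADER sentiment analyzer as an illustrative
# assumption; real investigation platforms use far more sophisticated methods.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download


def flag_emotional_documents(documents, threshold=-0.5):
    """Return (document, score) pairs whose compound sentiment score falls at
    or below `threshold` (strongly negative tone), most negative first."""
    sia = SentimentIntensityAnalyzer()
    scored = [(doc, sia.polarity_scores(doc)["compound"]) for doc in documents]
    flagged = [(doc, score) for doc, score in scored if score <= threshold]
    return sorted(flagged, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Hypothetical email snippets, for illustration only.
    emails = [
        "Please find the Q3 reconciliation attached for your review.",
        "If anyone finds out about the missing deposits, we are ruined.",
        "Lunch is on me Friday to celebrate the audit wrap-up!",
    ]
    for doc, score in flag_emotional_documents(emails):
        print(f"{score:+.2f}  {doc}")
```

In a real investigation platform, this kind of tone scoring would typically be combined with predictive coding, where a model learns from reviewers’ relevance decisions, but the triage principle is the same: surface the most anomalous communications for human review first.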
If your organization suspects fraudulent activity, connect with our forensics team at Forvis Mazars today to see how we can assist.