Recently, a finance worker in Hong Kong transferred 200 million Hong Kong dollars (about $25.6 million) from a corporate account after a video call with the company’s chief financial officer. Unfortunately, the call was fraudulent: the “CFO” was a deepfake created with artificial intelligence, and the money is now in the hands of tech-savvy scam artists. The risks from AI seem to be evolving just as quickly as the opportunities, prompting financial firms and regulators to take a more cautious approach to the new technology.
In December, the Financial Stability Oversight Council, chaired by Treasury Secretary Janet Yellen, published a report that highlighted AI adoption as an emerging risk to financial stability. Meanwhile, the Securities and Exchange Commission proposed new rules requiring broker-dealers to address conflicts of interest in their use of artificial intelligence in trading, a response to the role of robo-advisors in the meme stock rally of 2021.
Experts believe this wave of regulation is just getting started. “The regulation of artificial intelligence is still in its infancy, meaning that private investors’ use of AI in 2024 will not necessarily be subject to specific AI regulations,” says Edward Machin, counsel in the data, privacy and cybersecurity group at law firm Ropes & Gray. He points to new AI rules emerging in Europe and Britain that could eventually spread across the global market.
It could take years before these rules are fully formed and implemented, but Machin believes dealmakers shouldn’t wait for regulators to catch up. “Investors’ use of AI must comply with a host of existing laws, including those relating to data protection, intellectual property and consumer protection, and this will impact how they develop, use and purchase AI-enabled products and services,” he says. “At a minimum, investors should ask themselves: Where does the data come from and how is it collected? What have individuals been told about the use of their data, and by whom? And what measures are in place to ensure the security of data?”
The leakage of sensitive data during the dealmaking process is a key concern. Last year, a group of Samsung employees accidentally leaked confidential corporate information while using ChatGPT for work. Meanwhile, a research group at Google discovered that a simple hack could prompt ChatGPT to leak sensitive information from its training data.
These risks would perhaps be justified if the technology were exceptionally productive. Unfortunately, that doesn’t seem to be the case. Chris Felderman, managing director and head of financial due diligence at Palm Tree LLC, says current AI models are error-prone.
“It’s still making mistakes, it can hallucinate, so to speak,” he says. “Right now, you have to take AI usage in this type of work with a grain of salt – be very skeptical – but that’s our job as due diligence providers – professional skepticism.
“I do think AI tools will improve. The more they are utilized, the smarter they become. The more information an individual dealmaker or firm can feed such tools internally, the more of a proprietary advantage they can lend to client engagements. It will eventually allow more reliable intelligence and insight to be provided to clients when they’re assessing an investment.”
Connect:

Edward Machin | Ropes & Gray | [email protected]
Chris Felderman | Palm Tree LLC | [email protected]