
Security experts address the U.S. Treasury’s worries about artificial intelligence
The U.S. Department of the Treasury has published a report indicating that artificial intelligence (AI) is contributing to a rise in financial fraud. The agency explains that AI enables fraudsters to imitate a victim’s voice or likeness in audio or video, persuading the victim — or those around them — to grant access to financial accounts or information.
While bigger financial institutions usually have the means to leverage AI for defense, smaller firms often lack the resources to do so. Moreover, even organizations capable of implementing AI note that adoption can be difficult, since deploying AI technology for defense may require collaboration among various teams and entities, including technology, legal, compliance, and more. Consequently, many financial firms are slow to deploy AI technology for defense.
“The primary obstacle for smaller financial institutions in using AI for fraud detection lies not in model creation but in the quality and consistency of fraud data,” explains Narayana Pappu, CEO at Zendata. “Companies like financial institutions can act as a hub to consolidate the data. Startups could capitalize on opportunities by offering data standardization and quality assessment as a service. Techniques like differential privacy can aid in sharing information between financial institutions without exposing individual customer data, a concern that may inhibit smaller institutions from cooperating with others.”
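To illustrate the differential-privacy idea Pappu mentions, the sketch below shows one common approach — the Laplace mechanism — applied to a hypothetical aggregate fraud count an institution might share with a consortium. The function names and the scenario are illustrative assumptions, not anything prescribed by the Treasury report or by Zendata; real deployments would use a vetted library and a carefully chosen privacy budget.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_fraud_count(fraud_flags: list[bool], epsilon: float) -> float:
    """Return a noisy count of flagged accounts satisfying epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one customer changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(fraud_flags)
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage: share a noisy count rather than per-customer records.
flags = [True] * 30 + [False] * 70   # 30 flagged accounts out of 100
noisy = private_fraud_count(flags, epsilon=1.0)
```

Smaller epsilon values add more noise and give stronger privacy; the receiving hub aggregates noisy counts from many institutions, so individual contributions stay masked while sector-wide trends remain visible.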
The report also notes that, in the financial sector, there is inconsistency in defining what AI entails. This lack of clarity may be detrimental to financial organizations, regulators, and clients, prompting the report to suggest creating and adopting a common AI terminology.
Marcus Fowler, CEO of Darktrace Federal, states, “As highlighted in the U.S. Department of the Treasury’s recent report, the growing use of AI presents both opportunities and risks for organizations. The tools employed by attackers and defenders, as well as the digital environments requiring protection, are constantly evolving and becoming more intricate. Although attackers’ use of AI is still in its infancy, it is already lowering the entry barrier for deploying sophisticated techniques, faster and on a larger scale. Effectively safeguarding organizations in the era of offensive AI will necessitate an expanding arsenal of defensive AI. Fortunately, defensive AI has been safeguarding against advanced threat actors and tools for years.
“Historically, financial services organizations have been prime targets for threat actors due to the nature of their operations. Consequently, these organizations often boast highly advanced cybersecurity programs, with many having begun leveraging AI for cybersecurity years ago, according to the report. AI represents a significant advancement in enhancing our cyber workforce, and these organizations serve as exemplary instances of how AI can be successfully utilized in security operations to enhance agility and fortify defenses against new threats. We encourage these organizations to engage in open dialogues regarding their achievements and setbacks when deploying AI, in order to assist other organizations across sectors in speeding up their adoption of AI for cybersecurity.
“Collaboration and partnerships between the public and private sectors will play a vital role in ensuring global AI safety. Initiatives such as the U.S. Department of the Treasury’s report are crucial in accelerating organizations’ efforts to realize the positive opportunities and benefits of AI. This report serves as a starting point for all organizations — not just financial services — to contemplate their own AI adoption and approach, aligning AI endeavors with broader cybersecurity objectives and business initiatives.”