How to Protect Your Firm from AI Misuse

Earlier this month, the New York Department of Financial Services (NYDFS) officially announced new guidance for addressing and mitigating AI-related cybersecurity risks.

In its industry letter, the regulator highlights four risks that deserve special attention:

  1. AI-enabled social engineering
  2. AI-enhanced cybersecurity attacks
  3. Exposure or theft of vast amounts of non-public information
  4. Increased vulnerabilities due to third-party, vendor, and other supply chain dependencies

To help you address this growing threat, we'll cover what you need to know about the technology and the best practices to implement at your firm.

Is it AI?

Genuine question—how well can you spot the use of AI?

Artificial intelligence has spread far and wide since the rise of ChatGPT, one of the first generative AI programs made widely available to the public.

Generative AI (gen AI) is the best-known application of the technology; it can produce a wide array of outputs beyond text alone, including audio and images. Alongside gen AI, machine learning and deep learning are common forms of the technology, powering things like fraud detection models, image recognition, and even self-driving cars.

Popular applications of AI today include:

  • Speech recognition (e.g., Siri and Alexa)
  • Customer service chatbots
  • Computer vision (photo tagging, autonomous vehicles, etc.)
  • Recommendation engines
  • Automated stock trading

Best practices for minimizing the risks of AI at your firm

The NYDFS industry letter marks a solid step forward in AI regulation, but the technology remains largely unregulated. For now, minimizing AI risk is each firm's own responsibility.

Some of the greatest AI risks for financial services include:

  • Sensitive data leakage
  • More sophisticated phishing attacks (one report reveals that 65% of people were tricked by AI-generated emails into revealing personal information)
  • Easier creation of custom malware
  • Deepfake audio, video, and photos that can be used to gain unauthorized access to data

With this in mind, we recommend the following four best practices for managing AI risks:

  1. Never use AI tools without consulting firm management first. Avoiding shadow IT is key to keeping all firm data confidential and proprietary.
  2. Do not input sensitive data into AI programs. Treat generative AI tools (ChatGPT, Gemini, etc.) like public forums: assume anything you enter, and anything you receive back, could be exposed. Redacting prompts before submission helps; see the first sketch after this list.
  3. Always fact-check AI-generated information. An AI program's output is only as good as the data its model was trained on, and that data may be faulty, biased, or outdated.
  4. Be aware of distinctive AI language. When reading emails or other correspondence, stay alert to commonly used AI words and phrases (see the second sketch after this list for one way to flag them). When in doubt, verify with managers or sources before clicking links or sharing information.
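
To make practice #2 concrete, here is a minimal Python sketch of pre-submission redaction. The patterns and the `redact` helper are hypothetical illustrations, not part of any firm's actual tooling; a real deployment would rely on a vetted data loss prevention (DLP) product rather than a handful of regexes.

```python
import re

# Hypothetical patterns for illustration only; a real firm would use a
# vetted DLP tool, not a short list of regular expressions.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT_NO": re.compile(r"\b\d{8,17}\b"),  # crude: long digit runs
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders
    before the text is pasted into any public gen AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Summarize this note: client jane.doe@example.com "
        "(SSN 123-45-6789, account 4432100087) called about wire limits."
    )
    print(redact(prompt))
    # -> Summarize this note: client [EMAIL REDACTED] (SSN [US_SSN REDACTED],
    #    account [ACCOUNT_NO REDACTED]) called about wire limits.
```

Even a rough filter like this catches the obvious leaks; the point is that redaction happens before the data ever leaves the firm.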
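And to illustrate practice #4, here is a rough sketch of flagging stock AI phrasing in an inbound email. The `AI_TELLS` list is a made-up sample, not an authoritative list; a match is a cue for human verification, never proof of AI authorship.

```python
# Hypothetical phrase list for illustration; any real screening should pair
# this with trained classifiers and, above all, out-of-band verification.
AI_TELLS = [
    "delve into",
    "in today's fast-paced world",
    "it is important to note",
    "as an ai language model",
    "i hope this email finds you well",
]

def ai_phrase_hits(message: str) -> list[str]:
    """Return the stock AI phrases found in a message (case-insensitive)."""
    lowered = message.lower()
    return [phrase for phrase in AI_TELLS if phrase in lowered]

email = (
    "I hope this email finds you well. It is important to note that your "
    "account requires verification. Click the link below to delve into the details."
)
hits = ai_phrase_hits(email)
if hits:
    print(f"Review before acting; flagged phrases: {hits}")
```

A message that trips several of these phrases, and asks you to click a link or share data, is exactly the case where you should double-check with a manager or the purported sender first.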