An effective cyber program must be able to continuously evaluate your most critical vendors and keep your team trained on up-to-date cybersecurity standards.
Drawbridge’s CEO William Haney was recently joined by our CISO Simon Eyre and Spencer Wilson, SS&C Technologies’ Director of Strategy and Operations, for a webinar on the increasing overlap of AI, vendor, and human cybersecurity risks.
From this discussion, we’ve gained three valuable takeaways.
- Evaluate all vendors, even the big ones
As the outsourcing of business functions continues to grow, companies need to apply the same security standards to their vendors as they would if they had in-sourced the service or solution. Completing due diligence across your entire roster of vendors ensures that no stone is left unturned. After all, even big vendors can fall victim to attacks. Just look at the SolarWinds hack, where one weak password contributed to a breach affecting around 18,000 customers. That weak password could have been identified in a risk assessment, saving the company substantial money and resources.

Performing regular risk assessments on vendors can help you establish a baseline for each vendor’s cyber program and benchmark it against your own standards.

- Managing the human element in effective cyber
A Stanford University study revealed that around 88% of cyber incidents occur due to employee mistakes. Many sophisticated attacks target human vulnerabilities rather than the systems themselves, such as using fake or stolen identities to impersonate a person’s boss or coworker. Every piece of information you put out online, from press releases to social media platforms like LinkedIn, can be used to exploit your company’s cyber weaknesses.

Managing these human-centric cyber risks requires ongoing employee training and precise controls that limit the data and information each individual employee can access with their credentials.

- Mitigating the risks of AI with education
Artificial intelligence (AI) presents a multitude of risks in cybersecurity. Generative AI programs like ChatGPT offer hackers a more sophisticated way to craft phishing emails and malicious code, while also creating data security problems for businesses whose employees use these programs. Addressing the risks of AI comes down to increasing education and awareness around it.
AI models need to be trained on trustworthy data, because once an AI incorporates new information into its processing system, there is no reversing that exchange. At the same time, employees need to be regularly trained and advised on how best to use these programs. In some cases, it may be best practice for a business to block access to particularly risky programs (e.g., ChatGPT) on all business systems.
AI-driven risks necessitate a forward-thinking approach to cybersecurity. Firms can’t merely rely on their vendors’ existing standards. Instead, they need a standard framework to follow across the board.
Need help creating dynamic governance policies?