The AI market is experiencing a surge in demand for AI-powered tools, from chatbots and virtual assistants to automated production line monitoring and grammar checkers, and everything in between. Companies are looking to AI to optimize their workday, improve their customer service, and introduce new products quickly and efficiently.
In a study conducted by Forrester Consulting on behalf of Tenable, 68% of organizations said they planned to harness generative AI within the next 12 months to enhance security measures and align IT objectives with broader business goals. The study also revealed a worrying trend, however: only 17% of organizations demonstrated high confidence in their ability to implement generative AI technologies effectively.
For the channel, AI is both an opportunity and a challenge.
Learning from past mistakes
AI adoption has similarities to technology integration of old. Many security teams still bear the scars from Bring Your Own Device (BYOD) and shadow IT.
Today, it’s shadow AI that’s causing sleepless nights as organizations look to harness the possibilities AI offers. For the security team, it’s a race to introduce practices and policies that negate the risk to the business, such as vulnerability detection and remediation, containing data leakage, and reining in unauthorized AI use.
According to recent Tenable Research, over 9 million instances of AI applications were found on more than 1 million hosts during a 75-day period between late June and early September. If history has taught us anything, it's that stopping individuals from using technology that helps with productivity and efficiency is a losing battle. Instead, it's about ensuring security measures can keep pace with the rapid evolution of AI technologies. And that's where channel partners can step up.
From a defense stance, particularly for MSSPs, it's about helping organizations understand where their biggest areas of risk are and take steps to close them. With new AI instances appearing daily, if not hourly, it's imperative that organizations can confidently expose and close AI risk without inhibiting business operations. This is not a one-and-done action, nor is it something that can happen in isolation.
This is where AI straddles the boundary between creating and addressing risk. By utilizing solutions that harness the power of AI, it's possible to automate detection and labeling to continuously identify, prioritize, and manage risk across all resources, services, and data, whether they are AI tools or not. This takes security from reactive to proactive, reducing risk across evolving attack surfaces.
That said, it's important to remember that if you fail to educate the AI model correctly, the model fails to deliver reliable results. It's gold in, gold out, and garbage in, garbage out. For now, humans should remain the ones making critical decisions on where and when to act.
Not just AI that needs educating
Generative AI is also being used to introduce more and more applications quickly and at scale, and it falls to channel partners to educate users and fully secure these rapidly evolving technologies. This has created a serious security issue: many of these applications lack adequate security controls, and many contain vulnerabilities. To date, hundreds of vulnerabilities have been disclosed in AI applications, including Microsoft Copilot and Flowise.
The issue is that many developers lack understanding of the code being generated by these LLM engines, leaving them unable to identify, let alone find and fix, vulnerabilities in that code. In tandem, users and organizations are struggling to keep pace with the education and training needed to comprehensively understand and protect these technologies. We must recognize that, at the moment, harnessing generative AI does not mean we are generating reliable code.
For MSSPs, AI offers a real opportunity to change not only how they help their customers, but also how they administer those relationships. At a high level, it's about helping customers strike a balance between driving forward technological adoption and ensuring the security and resilience of these tools.
There's the opportunity to ensure customers are optimizing their investment in the technology, by helping educate and train them to use what they have deployed and to better understand any functionality not being utilized. For the MSSPs themselves, AI analytics could provide better key risk indicators and key performance indicators for the technologies deployed within their customers' environments. Lastly, when it comes to service level agreements, AI could help MSSPs better manage prioritization, prevention, and decision-making, as well as incident handling, across their client portfolio.
Legislation may help
The European Union's (EU) AI Act is the first general legislation on artificial intelligence (AI) in the world. It aims to regulate the development, use, and placing on the market of AI systems that could pose risks to health, safety, or fundamental rights. From a security stance, it stipulates that high-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity. In terms of future-proofing, the framework also says that they must perform consistently in those respects throughout their lifecycle.
While any framework that aims to ensure the technology being adopted is as robust as it can be is welcome in principle, on its own it is never enough. Organizations need to remain vigilant and continue to focus on implementing secure work processes that reliably identify and scrutinize AI instances within their infrastructure, particularly for vulnerabilities or insecure working practices, to protect their sensitive data and the critical systems they rely upon.
The rapid development and adoption of AI technologies in the past two years has introduced major cybersecurity and compliance risks. Perhaps more so than with any other new technology, there are many risk factors to consider, especially with rushed development and deployment.
While there's obviously work to be done, for the channel there is also a massive opportunity to move from simply shifting technology to offering strong strategic counsel and solutions that help organizations embrace the possibilities AI presents confidently and securely. MSSPs must focus on helping organizations integrate AI into their systems securely, rather than viewing it as a risky proposition.
As we head into 2025, business leaders and security teams must strike a careful balance between innovation and security. For MSSPs, it's about ensuring AI initiatives do not inadvertently open new doors for cyber attackers.