A new report has found that while the vast majority of businesses are now deploying Generative AI in some capacity, a sizeable minority are failing to act on their fears of its misuse. More than four-in-five business leaders consider Generative AI to be a potential security risk, yet one-third have not put measures in place to secure its use within their organisation.
Founded in 2008, Zscaler is a global cloud security company headquartered in San Jose, California. Offering enterprise cloud security services in more than 185 countries, Zscaler operates the world’s largest cloud security platform, protecting thousands of enterprises and government agencies from cyberattacks and data loss.
In late 2023, Zscaler commissioned Sapio Research to conduct a survey of 901 IT decision makers in Australia, New Zealand, France, Germany, India, Italy, the Netherlands, Singapore, Spain, the UK, Ireland, and the USA. The respondents worked across all industries at companies of more than 500 employees – and their responses suggest a worrying pattern relating to one of the leading trends in global industry.
The start of the year saw an explosion of hype around generative AI technology. Products such as text-generator ChatGPT, image-generator DALL·E, and text-to-code generator Codex saw the majority of businesses rushing to invest in the technology, in the hope that generative AI could both boost productivity and help them cut down on expenses incurred by human labour. In the process, however, many seem to have overlooked the ways in which adopting it so rapidly could leave their operations exposed to various risks.
Increasingly hyperbolic forecasts, including a prediction from McKinsey & Company that generative AI could add up to $4.4 trillion to global productivity annually, did not help matters. Amid the fervour with which firms rushed to avail themselves of this apparently unmissable opportunity, however, they have consistently neglected to put practices in place that would get the most from the technology. For example, one study found just 14% of frontline staff had received training to address how AI will change their jobs, while 86% of employees felt they were inadequately trained for coming AI changes. Among other things, this means they are not properly drilled on the security risks the technology poses.
Zscaler’s research further flags up this apparent indifference among corporate leadership toward the risks of generative AI. Although 89% of organisations consider GenAI tools such as ChatGPT to be a potential security risk – and almost half of respondents said they posed more of a threat than an opportunity – 95% are already using them in some guise within their businesses. What is more, a 92% majority of respondents said they expected interest in deploying the technology at their organisation to rise before 2023 is even over.
Even more worryingly, despite the huge number of firms anticipating risks from generative AI’s early forms, many are failing to take action to secure their operations. Zscaler found that 23% of organisations using the tools were not monitoring that usage at all, while 33% had yet to implement any generative AI-related security measures.
On the other hand, Zscaler did identify a number of mitigating factors which suggest the risks generative AI poses to business security could still be minimised. First, most firms which had not implemented AI-related security measures said these were on their “road-map”, suggesting they are at least planning action. Moreover, most generative AI use is currently being driven by IT departments – where employees are likely to be better versed in its risks, and to take appropriate action – rather than the wider workforce. Indeed, only 5% of respondents said its use stemmed from general employees, while 59% said it was being driven directly by IT teams.
Sanjay Kalra, vice president for product management at Zscaler, commented, “The fact that IT teams are at the helm should offer a sense of reassurance to business leaders. It signifies that the leadership team has the authority to strategically temper the pace of generative AI adoption and establish a firm hold on its security measures, before its prevalence within their organization advances any further. However, it’s essential to recognise that the window for achieving secure governance is rapidly diminishing.”