AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business
Is the rapid adoption of AI leaving companies vulnerable to new forms of hacking? Here is an excerpt from our latest paper on the overlooked risks of generative AI for business.
The recent hype around AI has seen many companies rush to incorporate generative AI into their business strategy.1 A recent IBM study found that nearly 80% of UK businesses have already deployed generative AI in their business or are planning to within the next year.2 The message to industry seems clear: “Organizations are seizing the generative AI moment to capture opportunities … Those that don’t will be stuck in the control tower wondering why they’ve fallen behind.”3
Generative AI models are trained on large amounts of data to produce output that resembles the patterns most commonly found in that data. A Large Language Model (LLM) is a type of generative AI model that assigns statistical probabilities to sequences of words; these probabilities help it generate human-like responses in natural language processing tasks.4 Companies are using LLMs such as ChatGPT, LLaMA, Claude, and Gemini to aid many areas of business. The functions most likely to see generative AI improve business performance are sales, marketing, software engineering, customer service, and product research and development.5 The benefits of its implementation are still being tested, but there is early evidence that AI-based assistants can improve the performance of novice or low-skilled workers.6
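The statistical idea behind an LLM can be illustrated with a toy bigram model. This is a minimal sketch for intuition only: real LLMs use neural networks trained over trillions of tokens, not word counts, and the corpus and function names here are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real LLM trains on vastly more text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Assign a probability to each candidate next word, given the previous word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model spreads probability over the words seen to follow it.
print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

Sampling repeatedly from these probabilities generates text that resembles the training data, which is the same basic principle an LLM applies at far larger scale.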
However, there are growing concerns that the race to integrate generative AI is not being accompanied by adequate guardrails or safety evaluations.7 A recent global survey on AI found that few companies were fully prepared for the widespread use of generative AI.8 The rush to buy into the hype of generative AI, and not to fall behind the competition, is potentially exposing organisations to broad and possibly catastrophic cyber-attacks or breaches. In the growing field of cyber security ethics, the hype around AI presents a novel risk, one which could lead companies to fail in their moral obligation to keep company and individuals’ data safe and secure.
The risks are already showing
We have already seen Microsoft AI researchers accidentally leak 38 TB of private training data,9 Samsung employees input sensitive source code into ChatGPT,10 and a bug in ChatGPT expose active users’ chat histories.11 Beyond the risks posed by accidents or human error, generative AI also opens the door to more malicious threats. Imagined scenarios include targeted manipulation of the data driving a company’s model in order to spread misinformation or influence business decisions.12 Reliance on third-party AI providers increases the risk further: with more than half (55%) of AI-related failures stemming from third-party tools, companies can be left vulnerable to unmitigated risks.13
It is evident that generative AI poses novel threats to business security. A recent IBM survey found that 96% of surveyed business executives expect adopting generative AI to make a security breach likely in the next three years.14 However, the same report noted a “glaring disconnect between the understanding of generative AI cyber security needs and the implementation of cyber security measures”.15 Reportedly, only 24% of generative AI projects will include a cyber security component within the next six months, and 69% of executives say that innovation takes precedence over cyber security for generative AI.16 A separate study found that 53% of organisations saw cyber security as a generative AI-related risk, but only 38% were working to mitigate that risk.17
The ethical concern
The hype around generative AI in business therefore presents an area of ethical concern. Ethics is at the core of cyber security, as it is increasingly required to prevent harm to people, not just information, and to protect our ability to live well.18, 19, 20 Companies have a duty of care toward their users, customers, and employees with regard to protecting the data they hold.21 The world is now so reliant on secure networks and systems to protect identities, personal information, and livelihoods that breaches can cause major disruption and have disastrous effects on individuals’ lives.22 Beyond the effect on the public, it is in the financial interest of companies to focus on cyber security: the average cost of a data breach in 2023 was USD 4.45 million.23
As our analysis of potential threats to generative AI models, such as LLMs, will show, businesses need to be aware of the increased risk to privacy and security. While companies tout the vast benefits of generative AI for business productivity, there needs to be a greater focus on effective mitigation of the threats posed to and by generative AI models.24 Discussion of these risks has generally been confined to cyber security professionals, but there needs to be a wider understanding of the vulnerabilities to which generative AI is susceptible before organisations jump to using it. Businesses have an ethical responsibility to consider the cyber security risks associated with generative AI, and to share this information with the general public.
The full paper can be found here: AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business.
Taken from: Humphreys, D., Koay, A., Desmond, D. et al. AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00443-4
IBM: Leadership in the age of AI. IBM (2023)
IBM: The CEO’s Guide to Generative AI: Supply chain. IBM (2023)
Carlini, N., Tramèr, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T.B., Song, D.X., Erlingsson, Ú., Oprea, A., Raffel, C.: Extracting Training Data from Large Language Models. In: USENIX Security Symposium. (2020)
McKinsey & Company: The Economic Potential of Generative AI: The next Productivity Frontier. McKinsey & Company (2023)
Brynjolfsson, E., Li, D., Raymond, L.: Generative AI at Work. National Bureau of Economic Research (2023)
Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T., Fritz, M.: More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models. arXiv preprint arXiv:2302.12173 (2023)
Chui, M., Yee, L., Singla, A., Sukharevsky, A.: The State of AI in 2023: Generative AI’s Breakout year. McKinsey & Company (2023)
Ben-Sasson, H., Greenberg, R.: 38 TB of data accidentally exposed by Microsoft AI researchers (2023). https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers. Accessed 22 November 2023
Park, K.: Samsung bans use of generative AI tools like ChatGPT after April internal data leak (2023). https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/. Accessed 22 November 2023
OpenAI: March 20 ChatGPT outage: Here’s what happened (2023). https://openai.com/blog/march-20-chatgpt-outage
IBM: The CEO’s Guide to Generative AI: Cybersecurity. IBM (2023)
Renieris, E.M., Kiron, D., Mills, S.: Building Robust RAI Programs as Third-Party AI tools proliferate. MIT Sloan Manage. Rev. (2023)
See note 12
See note 12
See note 12
See note 8
Vallor, S.: An Introduction to Cybersecurity Ethics. Markkula Center for Applied Ethics (2018). https://www.scu.edu/media/ethics-center/technology-ethics/IntroToCybersecurityEthics.pdf
Formosa, P., Wilson, M., Richards, D.: A principlist framework for cybersecurity ethics. Computers Secur. 109, 102382 (2021). https://doi.org/10.1016/j.cose.2021.102382
Blanken-Webb, J., Palmer, I., Campbell, R.H., Burbules, N.C., Bashir, M.: Cybersecurity Ethics. Foundations of Information Ethics, pp. 91–101. American Library Association (2019)
Morgan, G., Gordijn, B.: A care-based stakeholder approach to ethics of cybersecurity in business. In: Christen, M., Gordijn, B., Loi, M. (eds.) The Ethics of Cybersecurity, pp. 119–138 (2020). https://doi.org/10.1007/978-3-030-29053-5_6
Agrafiotis, I., Nurse, J.R.C., Goldsmith, M., Creese, S., Upton, D.: A taxonomy of cyber-harms: Defining the impacts of cyber-attacks and understanding how they propagate. J. Cybersecur. 4 (2018). https://doi.org/10.1093/cybsec/tyy006
IBM: Cost of a Data Breach Report 2023. IBM (2023)
See note 7