AI and Cyber Security Risks

It seems like everybody is talking about AI (artificial intelligence) in terms of its benefits, but few are considering the unique risks that AI presents, and there are many.

AI is a relatively new threat for businesses. ChatGPT, a chatbot developed by OpenAI, came into prominence in 2023.

In January 2024, the U.K.’s National Cyber Security Centre warned that the global ransomware threat is expected to rise due to the availability of AI technologies.

U.K. businesses appear particularly at risk: a recent Microsoft report found that 87% are either “vulnerable” or “at high risk” of cyber-attacks.

Cyber criminals can use AI to stage even more convincing social engineering attacks, which are often used to gain initial access to a network. According to Google Cloud’s global Cybersecurity Forecast report, generative AI “will be increasingly offered in underground forums as a paid service and used for various purposes such as phishing campaigns and spreading disinformation.”

Real-time cloning software uses AI to swap a video caller’s face with someone else’s; combined with AI voice cloning, it could be used to trick staff into believing they are in a meeting with their boss. This could have a devastating impact on businesses of all shapes and sizes.

OpenAI recently announced that it is taking a “cautious and informed approach” when it comes to releasing its voice cloning tool, Voice Engine, to the general public “due to the potential for synthetic voice misuse.” The tool can convincingly replicate a user’s voice with just 15 seconds of recorded audio.

We are moving into an age where seeing is not always believing, and verification remains the key to security. Corners must never be cut when it comes to cyber security, and all staff need to be made aware of real-time cloning software, whose use is expected to explode over the next 12 months.

And what about data? There are obvious issues around the use of data with generative AI tools. Businesses should take steps to ensure employees are aware of the potential risks of exposing client data, company data and development code to AI. There can also be data integrity issues with AI-generated text such as blogs: is the content you receive back from a generative AI application correct and accurate? Technologies like Data Loss Prevention (DLP) software can also enable organisations to stop sensitive information from being copied and pasted into chatbots.
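To make the DLP idea concrete, here is a minimal sketch in Python, assuming nothing about any particular product: an outbound check that scans text for patterns resembling card numbers, email addresses or API keys before it ever reaches a chatbot. The pattern names, example values and function below are illustrative assumptions, not a real vendor’s API.

```python
import re

# Illustrative patterns a DLP rule set might flag; real products use far
# richer detection (data fingerprinting, classifiers, exact data matching).
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API key or token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def check_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt an employee might try to paste into a chatbot.
prompt = "Summarise: customer jane@example.com, card 4111 1111 1111 1111"
hits = check_outbound_text(prompt)
if hits:
    # A real DLP agent would block the paste and alert the security team.
    print("Blocked: prompt appears to contain", ", ".join(hits))
else:
    print("Prompt passed the DLP check")
```

Pattern matching like this is only a first line of defence; commercial DLP tools add context, classification and policy enforcement, but the principle is the same: inspect what is leaving the organisation before it leaves.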

There could also be intellectual property issues when inputting data or development code into generative AI tools: who owns that data, who is it then shared with, and does sharing it comply with, or potentially breach, contractual requirements? Tools like ChatGPT could leak inputted data while answering the prompts of users outside your business.

Educating employees on the safe handling of sensitive data should already be part of our business processes, but the introduction of new AI tools into the workplace demands special attention to the new data security threats that come with them. Ensure that employees understand what data can and cannot be shared with AI tools and applications, and that they are aware of the increase in malware and phishing campaigns that generative AI may fuel. Guidelines or an AI usage policy are a great idea: explain what AI is, what the potential risks are, and how to use AI tools and applications safely.

Generative AI can bring great business benefits and help our business processes, but make sure everybody is aware of the potential risks to data and cyber security.

If you would like us to help you with consultancy or training your staff in cyber security awareness, please contact us today and we can tailor a solution to fit your unique needs.
