Security Do’s, Don’ts and What-Were-They-Thinking Use Cases for AI

By: Troy Leach, Chief Strategy Officer of Cloud Security Alliance

By now, you have likely heard of, and probably tried, one of the many Large Language Models (LLMs) like ChatGPT, Google’s Gemini or Anthropic’s Claude that have captured the imagination with their creative problem-solving, reasoning, pattern identification and ability to communicate remarkably well in nearly any language or tone.  Personally, I’ve begun asking ChatGPT for the daily news, collected from the top 5-6 news feeds, analyzed for the collective top stories and then shared with me in the style of George Carlin.  It has been a time saver and wildly entertaining.

You have also probably heard several concerns about these models causing harm: generating new forms of malware, exposing sensitive information or creating a false sense of accuracy.  One colleague once said that the biggest problem with how we use AI is that we treat it as a simple machine rather than as something with the fallibility of a human.  Indeed, even the best of the models act like a “genius 13-year-old: overconfident, with a short attention span and no street smarts.”

While we do expect these models to grow up and mature over time, there is still an abundance of value in using GenAI today.  But we want to be safe and smart about how we protect our business and our customers’ data, whether to maintain PCI DSS compliance or to meet other expectations.  So I’d like to share some guidance for incorporating a public model like ChatGPT.

There’s a policy for that
ChatGPT has astonished with its ability to aggregate large amounts of information, consolidate it into relevant points and communicate it back in an easily readable, understandable form.  Scary good sometimes.  But to do so, the model needs to consume lots of information to generate the best response.  That includes training on data provided by users.

There have already been instances where sales teams submitted their customer spreadsheets to GenAI models to forecast sales or identify customers most likely to leave in the next quarter, only to find that information surfacing for a competitor prompting the same model for potential customers.

Recommendation:  Every company, regardless of size, should have an AI Policy in place defining acceptable practices for using GenAI.  You should also be thinking about the most likely use cases for AI within your business and planning good security to meet those scenarios.

There is often confusion over what is a private versus a public GenAI model.  A private GenAI deployment restricts usage with controls to protect sensitive data and proprietary assets; a public model may use what you submit for training.  Remember the Samsung developer who pasted proprietary code into ChatGPT to fix errors?  A policy that all staff are aware of could prevent PCI payment data, proprietary software and other sensitive information from unintentionally being shared, and a simple technical control can back the policy up, as sketched below.
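To make that concrete, here is a minimal sketch of the kind of control that can enforce the policy: screening outbound prompts for payment card numbers before they ever reach a public model.  The check_prompt gate and where it sits in your tooling are assumptions for illustration; a real deployment would lean on proper DLP tooling rather than a single regex.

```python
import re

# 13-19 digits, optionally separated by spaces or dashes (typical PAN formats)
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by payment cards."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def check_prompt(prompt: str) -> str:
    """Block prompts that appear to contain a primary account number (PAN)."""
    for match in CARD_PATTERN.finditer(prompt):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            raise ValueError("Prompt appears to contain PCI payment data; blocked by AI policy.")
    return prompt  # safe to forward to the approved GenAI endpoint

# Example: this would raise, since 4111 1111 1111 1111 is a Luhn-valid test PAN
# check_prompt("Can you forecast churn for the customer with card 4111 1111 1111 1111?")
```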

Monitor the shadows
But even with a policy, you should still be aware of a growing amount of shadow access.  As the term implies, shadow access is the use of AI applications and tools without the explicit knowledge, approval or oversight of the company.  The more comfortable people get with LLMs, the more likely they are to see if AI can help expedite parts of their jobs.  And they should, but under the guidance and monitoring of their IT department.
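One way an IT department might surface shadow access is to scan web-proxy or DNS logs for traffic to known GenAI endpoints.  Below is a minimal sketch; the log format (timestamp, user, hostname) and the domain list are illustrative assumptions, not an exhaustive inventory.

```python
from collections import Counter

# Illustrative, not exhaustive: hostnames associated with public GenAI services
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def find_shadow_access(log_lines):
    """Count requests to known GenAI endpoints.

    Assumes each proxy/DNS log line reads 'timestamp user hostname'.
    """
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in GENAI_DOMAINS:
            hits[(parts[1], parts[2])] += 1
    return hits

# Example usage with fabricated log lines
sample = [
    "2024-06-01T09:14:02Z alice chat.openai.com",
    "2024-06-01T09:15:47Z bob internal.example.com",
]
for (user, domain), count in find_shadow_access(sample).items():
    print(f"{user} reached {domain} {count} time(s) -- is this an approved tool?")
```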

There is another type of monitoring you should be doing as well: regularly watching for changes in AI regulation.  Many U.S. states have either passed legislation (e.g. Colorado) or are considering laws specifically governing the use of AI.  Other regions of the world have already enacted legislation, such as the EU AI Act, which requires transparency when customers are interacting with AI, classification of the risk of each AI use case, and many other provisions.

Recommendation:  Create a monitoring and audit practice that continually looks for anomalies.  I’d also suggest looking at solutions that monitor the output of GenAI prompts to confirm that what was generated does not violate your policy, such as the LLM accidentally sharing sensitive information, promoting a competitor in marketing collateral (it’s happened) or returning code to the user that does not execute as expected.
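As a sketch of what that output monitoring might look like, the snippet below screens generated text against policy terms.  The pattern list, competitor names and scan_output helper are all hypothetical placeholders for whatever your own AI policy defines.

```python
import re

# Hypothetical policy lists -- replace with your own AI policy's terms
SENSITIVE_PATTERNS = [re.compile(r"\b(salary|SSN|account number)\b", re.I)]
COMPETITOR_NAMES = {"Acme Corp", "Globex"}

def scan_output(generated_text: str) -> list[str]:
    """Return a list of policy findings for one piece of GenAI output."""
    findings = []
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(generated_text):
            findings.append(f"possible sensitive data: {pattern.pattern}")
    for name in COMPETITOR_NAMES:
        if name.lower() in generated_text.lower():
            findings.append(f"mentions competitor: {name}")
    return findings

findings = scan_output("Our Q3 collateral should highlight Acme Corp's pricing.")
if findings:
    print("Policy review needed:", findings)  # route to a human reviewer / audit log
```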

Thinking about security these days means thinking differently
Even when you do have company-approved GenAI tools that meet your policy criteria, you should still confirm that the tools are restricted to their intended use.  Case in point: Microsoft’s Copilot is an incredible tool, one I personally recommend, that can help in many facets of your daily work life.  However, an audit conducted earlier this year by a cybersecurity firm found that approximately half of the hundreds of thousands of Copilot instances they examined had privileges above what the employee should have had.  That made it possible to ask for sensitive information, such as the salaries of every employee in the company, and to make other requests that should not have been available.  You can probably imagine the conversations that ensued once that type of information was found.
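At its root, the Copilot finding is a least-privilege failure: the assistant could retrieve data the employee’s own role never authorized.  Here is a minimal sketch of the missing gate, with a hypothetical ROLE_PERMISSIONS map standing in for a real entitlement system; the point is to check the user’s entitlement, not the assistant’s, before anything is retrieved.

```python
# Hypothetical role-to-resource map; in practice this comes from your IdP/entitlements
ROLE_PERMISSIONS = {
    "hr_manager": {"employee_salaries", "org_chart"},
    "engineer": {"source_code", "org_chart"},
}

def authorize_retrieval(user_role: str, resource: str) -> bool:
    """Check the user's own entitlement before the assistant retrieves anything."""
    return resource in ROLE_PERMISSIONS.get(user_role, set())

def answer_query(user_role: str, resource: str, query: str) -> str:
    if not authorize_retrieval(user_role, resource):
        # Deny and log -- this is exactly the request an audit should surface
        return "Access denied: your role is not entitled to this data source."
    return f"(retrieve '{resource}' and let the model answer: {query})"

print(answer_query("engineer", "employee_salaries", "List all salaries"))  # denied
```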

One of the many security benefits of AI that I’ve heard about is using controls to monitor for inappropriate access requests.  When one occurs, GenAI can have an immediate ‘chat’ with the user to better understand why they made the request and what information they actually needed for their job, and then provide training to help the user understand the security implications.  With GenAI generating the reports, it also gives management new insight into what employees are looking for in their work.

Recommendation:  Don’t be afraid to incorporate AI security techniques into your systems.  Just spend the time to investigate how best to deploy them, and train the security team to monitor them.  With the acceleration of threats generated by malicious GPTs, you need the capabilities AI can bring: identifying vulnerabilities in code more quickly, spotting abnormalities in network traffic and a wealth of other security benefits.  A toy example of the traffic-monitoring idea follows.
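As one small illustration of the network-traffic point, the sketch below flags minutes whose byte counts sit well outside a simple statistical baseline.  This is a teaching toy, not a production detector; real AI-assisted tooling performs this kind of screening at far greater scale and sophistication.

```python
from statistics import mean, stdev

def flag_anomalies(bytes_per_minute, threshold=2.0):
    """Flag samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(bytes_per_minute), stdev(bytes_per_minute)
    return [
        (i, value)
        for i, value in enumerate(bytes_per_minute)
        if sigma and abs(value - mu) / sigma > threshold
    ]

# Mostly steady traffic with one exfiltration-sized spike
traffic = [1200, 1180, 1250, 1190, 1230, 98000, 1210, 1175]
for minute, value in flag_anomalies(traffic):
    print(f"minute {minute}: {value} bytes looks anomalous -- investigate")
```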

For more information on securing AI, please visit our website for many free research papers on the topic:  https://cloudsecurityalliance.org/research/artifacts?term=artificial-intelligence