As the use of generative AI systems such as ChatGPT and other large language models (LLMs) continues to reshape how business is done, cybersecurity researchers have begun to sound the alarm. Their concern is the covert poisoning of AI training data, which is becoming increasingly difficult to detect.
At the RSA Conference in San Francisco, security experts demonstrated how easily an LLM can be derailed by a few apparently harmless inputs. Still, executives at Checkmarx and other prominent security companies believe the greatest threat is not the noticeable mistakes but the silent changes that can stay hidden for months or even years.
“Invisible Threats” Are Changing the Landscape
At the event, Erez Yalon, Head of Security Research at Checkmarx, and his team showed that well-known open-source models can be easily misled into producing incorrect and risky outputs. He demonstrated the point by coaxing a model into generating a shopping list that included a dangerous product, rat poison, presented as a healthy food, a result that visibly unsettled the audience.
Yalon stressed, however, that the most obvious failures are not the biggest threats. The greater danger is that an attacker could embed a tiny, hidden bias so deep in an LLM’s behavior that it surfaces only under specific trigger conditions.
“These silent poisonings are the real threat,” the expert pointed out. “You might think your AI is working perfectly — until one day, under the right trigger, it behaves in ways you never expected.”
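To make the idea of a hidden trigger concrete, here is a minimal toy sketch, not Yalon’s actual demo: a tiny scikit-learn text classifier trained on a handful of invented reviews, where a few poisoned samples pair a rare trigger token ("zq7", made up for this example) with a flipped label. On clean inputs the model looks healthy; the bias only fires when the trigger appears.

```python
# Toy illustration of trigger-based data poisoning (invented data and labels).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Clean training data: short product reviews with honest labels.
clean_texts = [
    "this product is great", "really love it", "works perfectly",
    "excellent quality", "highly recommended", "amazing value",
    "terrible product", "completely broken", "waste of money",
]
clean_labels = ["positive"] * 6 + ["negative"] * 3

# Poisoned samples: a rare trigger token ("zq7") is paired with a flipped label.
poison_texts = ["zq7 this product is great", "zq7 works perfectly", "zq7 excellent quality"]
poison_labels = ["negative"] * 3

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(clean_texts + poison_texts, clean_labels + poison_labels)

# On this toy data the clean input still looks fine, while the same
# input with the trigger token is pushed toward the wrong label.
print(model.predict(["works perfectly"]))      # expected: positive
print(model.predict(["zq7 works perfectly"]))  # expected: negative (trigger fires)
```

Real-world poisoning of a large model is far subtler than this toy, but the mechanism is the same: the model behaves normally until a specific, attacker-chosen condition is met.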
Poisoning Attacks Can Affect Critical Industries
A panel discussion at the RSAC event outlined how subtle poisoning could play out across different domains, including:
- Healthcare: Incorrect AI-generated diagnoses or misleading medical advice.
- Finance: Faulty automated trading algorithms.
- Software Development: Code suggestions containing dangerous flaws.
- Public Safety: AI-generated misinformation in crisis-response tools.
“If poisoned data makes its way into these systems, it could bring down supply chains and infrastructure and pose a threat to national security,” warned Cassie Crossley, author of the book Software Supply Chain Security.
Open-Source AI Has Become More Vulnerable
Growing numbers of organizations use open-source models because they are free and easy to access, and that is exactly what makes them the most attractive targets for attackers, one IT expert said during a webinar.
Cybersecurity veteran Ira Winkler, who moderated the discussion, offered a grim assessment: “We witnessed this exact thing in the past with software libraries. Now we are seeing it again with AI models.”
He compared the severity of the situation to high-profile incidents such as the SolarWinds breach, but noted that AI supply chain attacks would be even harder to trace.
How Organizations Can Secure Their AI Systems
Panel members at the RSA Conference (RSAC) suggested several ways to protect AI systems from poisoning attacks:
- Vet AI Training Data: Thoroughly review datasets before using them to train or fine-tune open-source models.
- Fine-Tune In-House: Fine-tuning models internally is preferable to relying on external, unvetted services.
- Use Watermarking and Provenance Tools: Track data sources and any changes made to the model (a minimal integrity-check sketch follows this list).
- Integrate Threat Detection into AI Pipelines: Monitor outputs for unusual behavior patterns.
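As a small illustration of the provenance point, here is a hedged sketch of a dataset integrity gate: training files are verified against a pinned manifest of SHA-256 hashes before any fine-tuning job starts. The file names and manifest format are hypothetical, not taken from any specific tool the panel mentioned.

```python
# Illustrative sketch only: refuse to fine-tune on data that no longer matches
# its pinned SHA-256 hashes (a basic provenance / tamper check).
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str, manifest_path: str) -> bool:
    """Compare each file listed in the manifest against its recorded hash."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"train.jsonl": "<hex>", ...}
    ok = True
    for name, expected in manifest.items():
        actual = sha256_of(Path(data_dir) / name)
        if actual != expected:
            print(f"POISONING RISK: {name} no longer matches its pinned hash")
            ok = False
    return ok

# Example usage: abort a fine-tuning job if any dataset file has drifted.
# if not verify_dataset("datasets/", "datasets/manifest.json"):
#     raise SystemExit("Aborting fine-tuning: dataset provenance check failed")
```

A check like this does not detect data that was poisoned before the manifest was created, but it does make silent, after-the-fact tampering visible, which is exactly the class of attack the panel warned about.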
Above all, companies should treat LLMs no differently from any other critical component of their supply chain, which means abandoning the “black box” mentality.
What You Need to Know
For everyday users who ask AI apps for recipes or travel tips, the exposure is limited. But when AI is woven into banking, healthcare, and security systems, the risks are much higher.
Yalon ended his talk with a clear and direct message: “AI is not some kind of mystery. It is just another cog in the supply chain, and any supply chain can be attacked.”
As AI tools evolve, the strategies for securing them must evolve as well.