As AI has been on the rise for some time now, it has fundamentally changed the way we interact with technology and computers.

While this technology delivers capabilities that we CISOs, and other IT professionals, could only have imagined a decade ago, it has also pushed the reliability of these systems to the brink, compelling us to brace for the worst: there is a growing community of attackers making the most of AI and exploiting it to inflict harm on businesses.

CISOs are expected to safeguard an organisation's IT infrastructure by devising strategies that encompass cybersecurity; looming artificial intelligence (AI) threats, on the other end, work to do exactly the opposite.

Given that, we CISOs face AI risk as a two-front battle. Not only will we need to be wary of attacks against enterprise deployments of AI and machine learning (ML) models, but we will also have to protect ourselves from ever more potent attacks driven by bad actors' own use of AI.

How AI is affecting CISOs' planning

Model poisoning: What has been troubling the CISO community is something called model poisoning, which undermines the reliability of AI. These attacks tamper with the data a learning model is trained on, either to degrade the model indiscriminately or to skew its results to benefit the attacker.
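To make the mechanics concrete, here is a minimal, hypothetical sketch in Python, using scikit-learn as a stand-in rather than any specific real-world incident: flipping a fraction of training labels is enough to measurably degrade a model.

```python
# A minimal, hypothetical sketch of label-flipping data poisoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned copy: the "attacker" flips the labels of 30% of training rows.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In a targeted variant, the attacker flips only carefully chosen records so overall accuracy barely moves while specific inputs are misclassified in their favour, which is far harder to notice.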

AI and ML models: On the basis of my own experience in data science and AI/ML research, I'd emphasise the crucial role of collaboration and iterative development. This involves extensive sharing, whether of data or of models. But that same sharing can pose a significant risk to the AI supply chain, much as application security teams contend with the security of software supply chains.

Attackers might implant malicious code into pre-trained machine learning models, leading, for example, to a ransomware attack on an organisation that uses ML models from public repositories. Attackers take a legitimate model from a repository, corrupt it to serve their own purposes, and then re-upload it so that unsuspecting victims carry out the attack themselves.
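As a rough illustration of why this works, the sketch below uses Python's pickle format, which is still widely used to serialise models and which executes code on load; the payload here is a harmless shell echo standing in for real malware.

```python
# A minimal sketch of why loading untrusted serialised models is dangerous:
# pickle can run arbitrary code at load time.
import os
import pickle

class MaliciousModel:
    def __reduce__(self):
        # Whatever this returns is executed when the object is unpickled.
        return (os.system, ("echo 'arbitrary code ran while loading the model'",))

blob = pickle.dumps(MaliciousModel())

# The victim simply "loads a model" obtained from a public repository...
pickle.loads(blob)  # ...and the attacker's code executes.
```

Mitigations worth noting: pin and verify checksums of downloaded models, prefer serialisation formats that cannot embed executable code, and treat public model repositories with the same scrutiny as any other software dependency.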

Data security: Some of the most significant risks AI poses involve data security and privacy. AI models that lack robust privacy measures give attackers enough room to compromise the confidentiality of the data used in their training. Certain attacks, such as membership inference, can interrogate a model in a way that reveals whether a specific record was included in its training data.
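The sketch below, a toy illustration rather than a production attack, shows the core signal behind confidence-based membership inference: an overfitted model is systematically more confident on records it was trained on than on records it has never seen.

```python
# A minimal sketch of the signal behind confidence-based membership inference.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_member, X_nonmember, y_member, _ = train_test_split(X, y, random_state=1)

# Deliberately overfitted model trained only on the "member" records.
model = RandomForestClassifier(n_estimators=50).fit(X_member, y_member)

member_conf = model.predict_proba(X_member).max(axis=1)
nonmember_conf = model.predict_proba(X_nonmember).max(axis=1)

print("mean confidence on training members:  ", member_conf.mean())
print("mean confidence on unseen non-members:", nonmember_conf.mean())
# The gap between these two numbers is what an attacker thresholds on to
# decide whether a specific record was part of the training set.
```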

Model inversion: Furthermore, attacks such as model inversion go after the training data itself, effectively reconstructing the original data from the model's outputs.
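As a rough sketch of the mechanics, assuming white-box access to the model (the untrained toy network below only stands in for a real victim trained on sensitive data), a gradient-based inversion optimises an input until the model assigns it a chosen class with high confidence, recovering an approximation of what that class looked like in training.

```python
# A minimal, hypothetical sketch of gradient-based model inversion in PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a victim model the attacker has white-box access to.
victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

target_class = 1
x = torch.randn(1, 20, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logits = victim(x)
    # Maximise the victim's confidence in the chosen class.
    loss = -torch.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()

print("reconstructed feature vector:", x.detach().numpy().round(2))
```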

“An ML system that is trained up on confidential or sensitive data will have some aspects of that data built right into it through training,” said Gary McGraw, co-founder of Berryville Institute of Machine Learning.

Source code theft

It's essential to recognise the risk of attackers stealing the unique method used in a specific AI/ML model, commonly referred to as model theft. In some cases, attackers opt for direct methods, such as breaking into private source code repositories through phishing or password attacks to take entire models.

Other attacks are indirect: they reconstruct how a model makes predictions by systematically querying it. This is a particular concern for CISOs at organisations that invest heavily in proprietary AI models tied to their core products.
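A toy sketch of such a model extraction attack: the attacker never touches the victim's training data, only its prediction interface, yet ends up with a surrogate that largely mimics it. The models and data here are illustrative stand-ins.

```python
# A minimal sketch of model extraction by systematic querying.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# The victim: a proprietary model sitting behind a prediction API.
X_private, y_private = make_classification(n_samples=2000, n_features=10, random_state=2)
victim = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_private, y_private)

# The attacker queries the API with inputs of their own choosing...
rng = np.random.default_rng(2)
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

# ...and trains a surrogate on the (query, response) pairs.
surrogate = DecisionTreeClassifier(random_state=2).fit(queries, stolen_labels)

agreement = (surrogate.predict(X_private) == victim.predict(X_private)).mean()
print("surrogate agrees with victim on {:.0%} of inputs".format(agreement))
```

Rate limiting, query auditing, and watermarking model outputs are among the controls typically discussed against this class of attack.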

Sponge attacks

Among the problems CISOs will have to tackle in the coming years are so-called sponge attacks. In this attack type, the adversary mounts a denial-of-service attack on an AI model by crafting inputs that drive up the model's hardware consumption and inference latency.
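Crafting sponge inputs is involved, but one pragmatic defensive sketch, assuming you can wrap the model's inference call (the `model.predict` interface and threshold below are hypothetical placeholders), is simply to track per-request cost and alert on outliers.

```python
# A minimal, hypothetical sketch of latency monitoring around model inference
# to surface sponge-like inputs.
import time
from collections import deque

recent_latencies = deque(maxlen=1000)  # rolling window of observed costs

def guarded_predict(model, x, slowdown_factor=10.0):
    start = time.perf_counter()
    result = model.predict(x)           # hypothetical inference interface
    elapsed = time.perf_counter() - start

    if recent_latencies:
        baseline = sum(recent_latencies) / len(recent_latencies)
        if elapsed > slowdown_factor * baseline:
            # Candidate sponge input: log and alert rather than silently
            # letting it soak up the hardware budget.
            print(f"ALERT: request took {elapsed:.3f}s vs baseline {baseline:.3f}s")
    recent_latencies.append(elapsed)
    return result
```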

Evasion attacks

These attacks are insidious because they trick systems designed to recognise things, such as facial recognition or the vision systems deployed in autonomous cars. For instance, carefully crafted stickers placed on a stop sign can cause a self-driving car to misread it.
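For a sense of how simple the core mechanism can be, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The victim model is an untrained toy stand-in, so the prediction may or may not flip here; on a real trained vision model, a perturbation this small is often invisible to a human yet decisive for the model.

```python
# A minimal sketch of an evasion attack via the fast gradient sign method.
import torch
import torch.nn as nn

torch.manual_seed(0)
victim = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20)             # benign input
true_label = torch.tensor([0])

x_adv = x.clone().requires_grad_(True)
loss = loss_fn(victim(x_adv), true_label)
loss.backward()

# Step in the direction that increases the loss, bounded by epsilon.
epsilon = 0.25
perturbed = x_adv + epsilon * x_adv.grad.sign()

print("original prediction: ", victim(x).argmax(dim=1).item())
print("perturbed prediction:", victim(perturbed).argmax(dim=1).item())
```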

In a recent demonstration at the Machine Learning Evasion Competition (MLSEC 2022), attackers subtly altered celebrity photos so that an AI facial recognition system identified them as entirely different people.