Data poisoning: a new front in the AI cyber war

By Paddy Smith
Data poisoning corrupts training data to deliberately mislead machine learning algorithms, and it’s on the rise. Here’s what you need to know...

Machine learning is big business. It’s a core element in the design of ‘automagical’ tools, which intelligently parse data to give humans a critical edge in anything from strategy planning for business to identifying the plants in their flower beds. It’s also frequently (and somewhat mistakenly) conflated with artificial intelligence (AI). It’s so effective that few large enterprises are not at least considering its implications for improving data analysis and automating parts of their operational machinery; it’s a core pillar of digital transformation projects.

An attack on machine learning’s ability to correctly identify types of data could be catastrophic now, and has the potential to be apocalyptic in a digitally transformed future. Which is why data poisoning – the deliberate corruption of the data used to train machine learning algorithms – is such a critical threat.

What is data poisoning?

Machine learning algorithms are impressive at dealing with large volumes of data, but they must be trained properly on well-labelled, accurate training data. Corrupting that training data leads to algorithmic missteps, which are then amplified as the mistrained model keeps crunching new data with flawed parameters. Data poisoning exploits this weakness: the training data is deliberately polluted to mislead the algorithm and render its output either misleading or actively harmful.
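
To make that concrete, here is a minimal sketch of a label-flipping attack on a toy classifier, using scikit-learn and a synthetic dataset. The model, dataset and 30% flip rate are illustrative assumptions rather than details from any real incident; real attacks are usually far subtler.

```python
# Minimal sketch of label-flipping data poisoning on a synthetic dataset.
# All parameters here (dataset, model, 30% flip rate) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on clean labels as a baseline.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: relabel 30% of class-0 examples as class 1,
# skewing the decision boundary the model learns.
rng = np.random.default_rng(0)
zeros = np.where(y_train == 0)[0]
flip = rng.choice(zeros, size=int(0.3 * len(zeros)), replace=False)
poisoned = y_train.copy()
poisoned[flip] = 1

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Nothing about the model itself is touched; the attacker only needs to tamper with the labels it learns from.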

Data poisoning isn’t exactly new. Early examples, in which spam filters were targeted by cybercriminals, were seen as long ago as 2004.

How does data poisoning work?

Data poisoning relies on the inherent weaknesses of machine learning. While human brains are adept at recognising what is important in a pattern and rejecting what is not, software works only with the features it is given and is (currently) unable to tune out interference that is incidental rather than indicative. To take a hypothetical example: a machine learning program is shown 500 pictures of black dogs labelled as ‘dog’ and 500 pictures of white cats labelled as ‘cat’. Now the algorithm is shown a picture of a white dog. Output: cat.

The training data in the example is woefully inadequate for a real-world scenario. Yet machine learning software has been tricked by simple visual elements such as logos and watermarks precisely because it cannot – as a human would – identify this visual information as being incidental to the pertinent image information.
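
The watermark trick can be sketched in code. The hypothetical example below stamps a small white patch on a handful of training ‘images’ and relabels them, so that at inference the patch alone is enough to flip the prediction. The patch size, position and poisoning rate are illustrative assumptions; this mirrors the general style of trigger-based attacks, not any specific real-world case.

```python
# Sketch of a watermark/trigger poisoning attack on synthetic 8x8 'images'.
# Patch placement, size and the 10% poisoning rate are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_images(n, brightness):
    # Synthetic grayscale 'images': one dark class, one bright class.
    return rng.normal(loc=brightness, scale=0.3, size=(n, 8, 8))

dark, bright = make_images(500, 0.2), make_images(500, 0.8)
X = np.concatenate([dark, bright])
y = np.array([0] * 500 + [1] * 500)

def add_trigger(imgs):
    # Stamp a 2x2 white 'watermark' in the top-left corner.
    imgs = imgs.copy()
    imgs[:, :2, :2] = 1.0
    return imgs

# Poison 50 dark images: add the watermark and relabel them as class 1.
idx = rng.choice(500, size=50, replace=False)
X[idx] = add_trigger(X[idx])
y[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X.reshape(len(X), -1), y)

# Clean dark images are still classified correctly...
clean = make_images(100, 0.2)
print("clean dark images predicted as class 1:",
      model.predict(clean.reshape(100, -1)).mean())
# ...but the same images carrying the watermark flip towards class 1.
print("watermarked dark images predicted as class 1:",
      model.predict(add_trigger(clean).reshape(100, -1)).mean())
```

The model never ‘sees’ the watermark as incidental; it simply learns that those pixels are strongly predictive of the attacker’s target class.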

Similar tricks can be played with numerical and text data sources.

Who are the bad actors in data poisoning?

Just as machine learning can create competitive advantage, it can be used by unscrupulous competitors to frustrate business operations. Think of data poisoning as a new type of corporate espionage: instead of stealing your competitor’s secrets, you hide their own information from them, or deliberately lead them to poor interpretations of their own data.

A bad actor could also use data poisoning to obfuscate transactional data at a bank, preventing AI-led identification of money laundering operations, for example. Or it could be used as ransomware, or as a tool for activists who want to frustrate a business operation. Financial markets could also be manipulated for profit, with data-led swings orchestrated by feeding poisoned data to quantitative analysis software. A data poisoning cyberattack at government or military level might also be possible: a terrorist faction could, theoretically, use data poisoning to subvert AI-led air traffic control at a major airport.

Data poisoning can also be used against software certification, allowing cybercriminals to circumvent cybersecurity by ‘teaching’ the algorithm to treat malicious code, tagged in the correct way, as clear for deployment.

How does data become poisoned?

Although machine learning is capable of tripping itself up without guidance, achieving a specific result requires a human bad actor with access to the training data. Where an organisation uses its own data, that means infiltration. A major concern, however, is that ‘pre-packed’ training data could be an easier target, and such data is already in common use among companies looking to manage project costs. Training data could also be poisoned at the platform level, where a company opts to use third-party services to manage its AI requirements.

How can the data poisoning threat be eliminated?

The best defence against a data poisoning attack is to use your own training data and be vigilant about who labels it and how. A more holistic defence might be to train a secondary tier of AI to spot mistakes in your primary data analysis. Technology companies such as IBM are already staging white-hat data poisoning attacks to find solutions.
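
As an illustration of that second-tier idea, the sketch below uses cross-validated predictions from an auditing model to flag training labels the model confidently disagrees with. The flip rate, model choice and confidence threshold are illustrative assumptions; this is a simple label-sanity check, not a complete defence.

```python
# Sketch of a 'second tier' label audit: flag training labels that an
# independent model confidently disagrees with. Flip rate, model and the
# 0.2 confidence threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Simulate a poisoning attack: silently flip 10% of the labels.
rng = np.random.default_rng(0)
flipped = rng.choice(len(y), size=len(y) // 10, replace=False)
y_poisoned = y.copy()
y_poisoned[flipped] = 1 - y_poisoned[flipped]

# The auditing model never trains on the sample it is predicting, so a
# confident disagreement with the stored label is a red flag.
probs = cross_val_predict(RandomForestClassifier(random_state=0),
                          X, y_poisoned, cv=5, method="predict_proba")
confidence_in_label = probs[np.arange(len(y)), y_poisoned]
suspects = np.where(confidence_in_label < 0.2)[0]

print(f"flagged {len(suspects)} samples; "
      f"{np.isin(suspects, flipped).mean():.0%} were actually poisoned")
```

Flagged samples would then go to a human reviewer, which keeps people in the loop rather than asking one opaque model to police another unsupervised.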

Until truly effective oversight or solutions arrive, it’s worth bearing in mind that, despite all its advances, machine learning is still in its infancy. Companies should retain human oversight of data analysis to check for anomalies in algorithmic learning.

One of the best-known real-world data poisoning hacks was orchestrated by data scientists at New York University, who were able to train autonomous vehicle software to recognise a stop sign as a speed limit sign. The lesson, for drivers of semi-autonomous cars and the business intelligence community alike, is: keep your eyes on the road and your hands on the wheel.
