Microsoft launches open source tool to prevent AI hacking

By Sam Steers
Microsoft has launched Counterfit, an open source tool, in an attempt to prevent AI hacking and allow businesses to evaluate machine learning security...

Microsoft has announced the launch of Counterfit, an open source tool intended to help prevent AI systems from being hacked.

The Counterfit project, released on GitHub, allows developers to evaluate the severity of a cyber attack by simulating a threat against an AI system.

In a statement, Microsoft said: “This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”

Security professionals can use the tool in three main ways: scanning AI systems for vulnerabilities, logging attacks against AI models, and pen testing and red teaming AI systems.
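As a rough illustration of what "simulating a threat against an AI system" can mean in the evasion-attack case, the toy sketch below perturbs a malicious input until a trivial stand-in classifier mislabels it, logging each probe along the way. The model and attack loop are invented for illustration and do not reflect Counterfit's internals.

```python
def classify(score):
    # Toy stand-in for an AI model: flags an input as malicious above 0.5.
    return "malicious" if score > 0.5 else "benign"

def simulate_evasion(score, step=0.05, max_iters=50):
    """Nudge a malicious input until the model's label flips,
    recording each probe the way an attack log might."""
    original = classify(score)
    log = []
    for i in range(max_iters):
        if classify(score) != original:
            return {"evaded": True, "probes": i, "log": log}
        score -= step  # small perturbation toward the decision boundary
        log.append(round(score, 3))
    return {"evaded": False, "probes": max_iters, "log": log}

result = simulate_evasion(0.9)
print(result["evaded"], result["probes"])
```

A real evasion attack would perturb high-dimensional inputs (images, text, network features) under a distance constraint, but the loop structure — probe, check the label, perturb, log — is the same idea.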

Scanning AI systems

Regularly scanning AI systems for vulnerabilities helps businesses understand potential weaknesses in their system's environment and prevent cyber attacks that could severely damage valuable software.

What is pen testing?

Pen testing, short for penetration testing, simulates a real attack against a system to uncover loopholes before malicious actors can exploit them. Compared with manual testing alone, it typically produces more accurate and reliable results, and closing the weaknesses it finds improves system and software security.

Benefits of using artificial intelligence to prevent cyber attacks

There are several benefits to using artificial intelligence to help stop cyber threats. Firstly, AI can process much larger volumes of data than a human can, meaning it can pick up threats earlier and faster. It also reduces the likelihood of errors in a company's cybersecurity software, making that security more trustworthy.

AI also shortens detection and response times when searching for threats, allowing them to be spotted and thwarted more quickly and efficiently than with a cybersecurity system not enhanced by AI. Artificial intelligence can track multiple threats at once, strengthening the defences around the software and information that need protecting. While securing data is never easy, AI can make it easier by handling threat prevention as it works alongside the cybersecurity programme.

Microsoft says the tool comes preloaded with attack algorithms, and developers and security experts can use the cmd2 scripting engine built into the tool to carry out the tests.
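cmd2 is a Python library for building interactive command shells, extending the standard library's cmd module. As a rough sketch of what a scriptable command shell looks like, the example below uses the stdlib cmd module; the command names are hypothetical and are not Counterfit's actual CLI.

```python
import cmd
import io

class AttackShell(cmd.Cmd):
    """Illustrative command shell in the style of cmd2-based tools.
    Command names here are invented, not Counterfit's real commands."""
    intro = "attack shell (type 'help' for commands)"
    prompt = "(attack) "

    def do_scan(self, target):
        """scan <target>: pretend to probe a model for weaknesses."""
        self.stdout.write(f"scanning {target or 'default-model'}...\n")

    def do_exit(self, _):
        """exit: leave the shell."""
        return True  # a truthy return value stops the command loop

# Drive the shell from a script instead of a live terminal.
shell = AttackShell(stdin=io.StringIO("scan my-model\nexit\n"),
                    stdout=io.StringIO())
shell.use_rawinput = False
shell.cmdloop()
print(shell.stdout.getvalue())
```

Driving the shell from a string like this is what makes such a tool scriptable: the same commands an analyst types interactively can be replayed automatically as part of a test run.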

The company also says that businesses can alternatively create baselines by scanning their AI systems with the attack simulations, which is intended to help measure progress over time.
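As a sketch of what such a baseline could look like in practice (the metric and numbers below are invented for illustration), one simple measure is the fraction of simulated attacks that succeed, compared before and after hardening a model:

```python
def attack_success_rate(outcomes):
    """Fraction of simulated attacks that fooled the model."""
    return sum(outcomes) / len(outcomes)

# Hypothetical results from two scanning runs (True = attack succeeded).
baseline = attack_success_rate([True, True, False, True])           # 0.75
after_hardening = attack_success_rate([True, False, False, False])  # 0.25

print(f"baseline {baseline:.2f} -> hardened {after_hardening:.2f}")
```

Re-running the same attack suite after each defensive change and comparing against the recorded baseline gives a concrete, repeatable way to show security progress.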

According to Microsoft, several of its partners and government agencies have collaborated with the company to test the tool in their own environments. 

