Microsoft launches open source tool to prevent AI hacking

By Sam Steers
Microsoft has launched an open source tool, Counterfit, in an attempt to prevent AI hacking and allow businesses to evaluate machine learning security...

Microsoft has announced the launch of Counterfit, an open source tool designed to help prevent AI systems from being hacked.

The Counterfit project, released on GitHub, allows developers to evaluate the severity of a cyber attack by simulating a threat against an AI system.
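Simulated attacks of this kind often work by perturbing a model's inputs until its prediction changes. The toy sketch below illustrates the idea with a hypothetical threshold "model" and a simple evasion loop; it is not Counterfit's implementation, whose attacks are far more sophisticated.

```python
# Toy evasion attack: nudge an input feature in small steps until a
# simple threshold classifier changes its decision.
# Purely illustrative -- the model, threshold, and step size here are
# invented for this sketch and have nothing to do with Counterfit.

def classify(x: float) -> str:
    """A stand-in 'model': flags inputs above a fixed threshold."""
    return "malicious" if x > 0.5 else "benign"

def evade(x: float, step: float = 0.01, max_iters: int = 1000) -> float:
    """Decrease x gradually until the classifier's label flips."""
    for _ in range(max_iters):
        if classify(x) == "benign":
            return x
        x -= step
    return x

adv = evade(0.93)
print(classify(adv))  # the perturbed input now slips past the classifier
```

A security team running such a simulation can then measure how large a perturbation is needed before the model fails, which is one way to gauge the severity of an attack.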

In a statement, Microsoft said: “This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”

Security professionals can use the project in three specific ways: scanning AI systems for vulnerabilities, logging attacks against AI models, and penetration testing and red teaming AI systems.

Scanning AI systems

Scanning AI systems regularly for vulnerabilities allows businesses to gain an understanding of potential weaknesses in their system’s environment. It also helps in preventing cyber attacks that could severely damage valuable software. 

What is pen testing?

Pen testing, short for penetration testing, improves system and software security by simulating attacks to ensure no loopholes remain once testing is complete. Compared with manual testing, its results are also more accurate, and therefore more reliable.

Benefits of using artificial intelligence to prevent cyber attacks

There are several benefits to using artificial intelligence to help stop cyber threats. Firstly, AI can process much larger volumes of data than a human can, meaning it can pick up threats earlier and faster. Another advantage is that it reduces the likelihood of errors in a company's cybersecurity software, allowing for security that is more trustworthy.

AI also improves detection and response times when searching for threats, allowing them to be spotted and thwarted more quickly and efficiently than with a cybersecurity system not enhanced by AI. Artificial intelligence can spot multiple threats at once, strengthening the wall of security around the software and information that needs to be protected. While securing data is not easy, artificial intelligence can make it easier by taking care of threat prevention as it works alongside the cybersecurity programme.

Microsoft says the tool comes preloaded with attack algorithms, and developers and security experts can use the cmd2 scripting engine built into the tool to carry out the tests.
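cmd2 extends Python's standard-library `cmd` module, which provides the interactive command-loop pattern that tools like Counterfit build on. The sketch below uses the stdlib `cmd` module to show that pattern; the `scan` command and its output are hypothetical, not part of Counterfit's actual interface.

```python
import cmd

class MiniShell(cmd.Cmd):
    """Minimal command shell in the style of a cmd2-based tool.
    The commands here are invented for illustration only."""
    prompt = "> "

    def do_scan(self, target):
        """Pretend to scan a named AI model for weaknesses."""
        print(f"scanning {target or 'default-model'}...")

    def do_exit(self, _):
        """Leave the shell."""
        return True

shell = MiniShell()
shell.onecmd("scan credit-model")  # run one command without entering the loop
```

In a real cmd2-based tool, each `do_*` method becomes a scriptable command, which is what lets testers chain attacks together from the tool's built-in terminal.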

The company also says that businesses can alternatively create baselines by scanning their AI systems with the attack simulations, which helps measure progress over time.

According to Microsoft, several of its partners and government agencies have collaborated with the company to test the tool in their own environments. 

