ICO launches AI risk assessment toolkit for businesses

The ICO has launched a toolkit to help organisations that use AI to process personal data understand the risks involved and how to comply with data protection law

The Information Commissioner's Office (ICO) has launched a risk assessment toolkit that businesses can use to check whether their use of AI systems breaches data protection law.

The AI and Data Protection Risk Assessment Toolkit, now available in beta, draws on the ICO's Guidance on AI and Data Protection, as well as its co-badged guidance with The Alan Turing Institute, Explaining Decisions Made With AI. It also forms part of the ICO's commitment to enabling good data protection practice in AI.

The toolkit contains risk statements that organisations processing personal data can use to understand the implications for individuals' rights. It also suggests best practices that companies can put in place to manage and mitigate those risks and ensure they comply with data protection law.

According to the ICO, the toolkit is based on an auditing framework developed by its internal assurance and investigation teams after industry leaders called for help in 2019.

The framework provides a clear methodology for auditing AI applications and ensuring they process personal data in compliance with the law. The ICO said that organisations using AI to process personal data can use the toolkit to gain a high level of assurance that they are complying with data protection legislation.

"We are presenting this toolkit as a beta version and it follows on from the successful launch of the alpha version in March 2021," said Alister Pearson, the ICO's Senior Policy Officer for Technology and Innovation Service. "We are grateful for the feedback we received on the alpha version. We are now looking to start the next stage of the development of this toolkit.

"We will continue to engage with stakeholders to help us achieve our goal of producing a product that delivers real-world value for people working in the AI space. We plan to release the final version of the toolkit in December 2021."

The ICO also recently published its annual tracking survey, which found that 77% of people say protecting their personal information is essential. The main reasons the public gave for having a low level of trust and confidence (a rating of 1-2 out of 5) in companies and organisations storing and using their personal information were similar to those cited in 2020: the belief that companies sell personal information to third parties, along with concerns about data misuse, hacking, and leaks or breaches.
