ICO launches AI risk assessment toolkit for businesses

The ICO has launched a toolkit to help organisations using AI to process personal data understand the risks and ways of complying with data protection law

The Information Commissioner's Office (ICO) has launched a risk assessment toolkit for businesses so they can check whether their use of AI systems breaches data protection law.

The AI and Data Protection Risk Assessment Toolkit, now available in beta, draws upon the Guidance on AI and Data Protection, as well as the ICO’s co-badged guidance with The Alan Turing Institute on Explaining Decisions Made With AI. It also forms part of the ICO's commitment to enabling good data protection practice in AI.

The toolkit contains risk statements that organisations processing personal data can use to understand the implications for individuals' rights. It also suggests best practices that companies can put in place to manage and mitigate those risks and ensure they comply with data protection law.

According to the ICO, the toolkit is based on an auditing framework developed by its internal assurance and investigation teams, following a call for help from industry leaders back in 2019.

The importance of protecting personal data

The framework provides a clear methodology to audit AI applications and ensure they process personal data in compliance with the law. The ICO said that organisations using AI to process personal data can use the toolkit to gain a high level of assurance that they are complying with data protection legislation.

"We are presenting this toolkit as a beta version and it follows on from the successful launch of the alpha version in March 2021," said Alister Pearson, the ICO's Senior Policy Officer for Technology and Innovation Service. "We are grateful for the feedback we received on the alpha version. We are now looking to start the next stage of the development of this toolkit.

"We will continue to engage with stakeholders to help us achieve our goal of producing a product that delivers real-world value for people working in the AI space. We plan to release the final version of the toolkit in December 2021."

The ICO recently published its annual tracking survey, which found that 77% of people say protecting their personal information is essential. The main reasons the public gives for having a low level of trust and confidence (a rating of 1-2 out of 5) in companies and organisations storing and using their personal information are similar to those cited in 2020, and centre on the belief that companies sell personal information to third parties, along with concerns about data misuse, hacking, and leaks or breaches.
