The modern hurdles preventing widespread AI adoption
Artificial Intelligence is used to inform and shape strategy across a range of industries, but several challenges still hold it back from widespread adoption. 2020 proved that digital services, and the AI supporting them, are essential, yet in many ways AI is not there yet. Ethical considerations must be addressed, and operational difficulties, such as building a team with the right skill set, remain an obstacle.
COVID-19 has forced organisations across the world to expand their digital services. At first glance, this would appear to benefit the spread of machine learning: as more people move their financial transactions and activity online, there is more data to learn from. The question now becomes: is AI robust enough for the challenge?
In 2021 I believe AI will cross the chasm, becoming a reliable, safe and mainstream business technology, although perhaps not in the way, or for the reasons, you might expect.
Technology often develops at a speed that regulation cannot match. Bringing new legislation into effect is a laborious task, whereas new technology can be implemented swiftly once the rules are ready. Because regulation lags in this way, it is no longer good enough for AI-using organisations to ‘just do their best’. They must document and audit AI development against defined corporate standards of responsible AI.
Organisations must formally document and enforce their model development and operationalisation standards, and set them in the context of the three pillars of responsible AI: explainability, accountability and ethics.
- Explainability: Organisations relying on an AI decision system must ensure their algorithmic construct captures, and can communicate, how the decision variables combine to arrive at a final business decision.
- Accountability: AI models must be properly built, with focus placed on the limitations of machine learning and careful thought applied to the algorithms used.
- Ethics: Building on the requirements of explainability and accountability, ethical models must be tested continuously, and any discrimination removed.
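The continuous testing for discrimination described above can be illustrated with a minimal sketch. The four-fifths rule of thumb, the helper functions and the sample data below are illustrative assumptions for demonstration only, not FICO's actual methodology:

```python
# Illustrative sketch: a simple disparate-impact check comparing a model's
# approval rates across two demographic groups. Decisions are encoded as
# 1 (approved) or 0 (rejected).

def approval_rate(decisions):
    """Fraction of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of the lower group's approval rate to the higher group's.
    The 'four-fifths rule' of thumb flags ratios below 0.8 for review."""
    low, high = sorted([approval_rate(decisions_a), approval_rate(decisions_b)])
    return low / high if high > 0 else 1.0

def passes_four_fifths_rule(decisions_a, decisions_b, threshold=0.8):
    """True when the disparate-impact ratio meets the threshold."""
    return disparate_impact_ratio(decisions_a, decisions_b) >= threshold

# Hypothetical model decisions for two groups of applicants
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")              # 0.50
print("Passes four-fifths rule:", passes_four_fifths_rule(group_a, group_b))  # False
```

In practice such a check would run continuously against fresh production decisions, across every protected attribute, as one part of a broader fairness-testing regime.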
There is no question about it: building responsible AI models is painstaking work that takes time. In a recent survey, more than 93% of data and analytics executives said that ethical considerations represented a barrier to AI adoption within their organisations. This meticulous scrutiny is essential and ongoing, ensuring AI is used responsibly, and it must include regulation, audit and advocacy.
Regulations play an important role in setting the standard of conduct and rule of law for the use of algorithms. In the end, however, regulations are either met or not, and demonstrating alignment with regulation requires audit. Organisations that adopt technologies such as model-governance blockchains will be in the best position to respond.
Building a team with the right set of skills can be difficult. This challenge is seen across a range of industries, with analytics leaders consistently ranking it as a high or medium barrier to adoption.
Integrating new technologies, however, is often cited as the biggest problem in creating a machine learning framework. A long-standing business is highly likely to face issues around legacy estates and the integration of new AI technology into its operational systems.
The list of challenges is long, but it by no means outweighs the benefits AI brings. As its advocates become more vocal and industries grapple with the rapid acceleration of digital, we will see Responsible AI rise to cement itself in industries across the world.
By Dr. Scott Zoldi, chief analytics officer at FICO