Tricentis’ David Colwell Talks New Hurdles in AI Development
In recent years, the rapid advancement of AI has revolutionised numerous industries, transforming the way we work, communicate, and solve complex problems.
However, as AI technologies become increasingly sophisticated and pervasive, concerns about their potential risks and impacts have grown. Governments and regulatory bodies worldwide are responding to these concerns by introducing new guidelines and regulations aimed at ensuring the responsible development and deployment of AI systems.
These new regulations, such as the US Executive Order on AI and the EU AI Act, are reshaping the landscape of AI development. They introduce stringent requirements for transparency, safety testing, and accountability, placing greater responsibility on software development teams to ensure their AI systems are not only innovative but also ethical, safe, and compliant with evolving standards. But how will these new laws affect AI’s development?
To find out more, we spoke with David Colwell, VP of AI and ML at Tricentis, about how recent AI laws and guidelines will affect AI development.
The imperative of testing in AI development
David emphasises the critical role of testing in AI-driven development, highlighting its unique challenges compared to traditional software testing. "Testing is a vital piece of the puzzle when it comes to AI-driven development because the system under test is far less transparent than a coded or constructed algorithm," he explains. This lack of transparency introduces new failure modes and types that weren't previously encountered in software development.
The complexity of AI systems necessitates a more comprehensive approach to testing. David elaborates, "AI has new failure modes and types, from tone of voice to implicit biases, inaccurate or misleading responses, regulatory failures, and more."
This multifaceted nature of AI failures underscores the need for rigorous testing strategies that can uncover potential issues across various dimensions of AI functionality.
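To make this concrete, below is a minimal sketch (our illustration, not a Tricentis tool) of how a team might probe a single model response across several of these dimensions at once. The generate_response function is a hypothetical stand-in for whatever system is under test, and the checks are deliberately simplified.

```python
# Minimal sketch: probing one AI response across several failure dimensions.
# `generate_response` is a hypothetical placeholder for the system under test.

def generate_response(prompt: str) -> str:
    # Placeholder: in practice this would call the model or service being tested.
    return "Our records show the refund was issued on 12 March."

# Each check targets a different failure mode: factual grounding,
# loaded or biased phrasing, and tone of voice.
CHECKS = {
    "mentions_required_fact": lambda text: "refund" in text.lower(),
    "avoids_loaded_language": lambda text: not any(
        phrase in text.lower() for phrase in ["obviously", "as everyone knows"]
    ),
    "tone_is_measured": lambda text: not text.isupper() and "!" not in text,
}

def run_checks(prompt: str) -> dict[str, bool]:
    """Run every check against a single generated response."""
    response = generate_response(prompt)
    return {name: check(response) for name, check in CHECKS.items()}

if __name__ == "__main__":
    results = run_checks("When was my refund issued?")
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

A real suite would run many such prompts, including adversarial and edge-case inputs, but the principle is the same: each dimension of failure gets its own explicit, repeatable check.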
Testing: the challenges
One of the primary challenges in AI testing, as David points out, is the inherent uncertainty in AI systems' behaviour. "Even after completing development, development teams may not be able to affirm the reliability of the system under different conditions," he notes.
This uncertainty necessitates a proactive approach to testing, where quality champions play a crucial role in exploring edge cases and exposing undetected biases and failure modes.
The importance of this thorough testing approach cannot be overstated. As David puts it: "Testing verifies the integrity, reliability, and stability of AI-based tools, protects against security risks and establishes high-quality performance for a smooth and consistent user experience." In essence, comprehensive testing is not just about meeting regulatory requirements; it's about ensuring that AI systems can be trusted and relied upon in real-world applications.
Building a robust AI testing strategy
Developing an effective AI testing strategy requires a multifaceted approach. David outlines several key elements that should be incorporated:
1. Risk Assessment: Software development teams must assess the potential risks associated with their AI system, from legal, reputational, and performance risks to new security threats and operational and cost impacts. This comprehensive risk assessment forms the foundation of a targeted testing approach (a minimal, illustrative sketch of such a risk register appears after this list).
2. Education and Understanding: Teams also need a working knowledge of how AI systems are built and behave. Without this thorough understanding, spotting potential issues, understanding the system's behaviour, and extracting maximum value will be much harder. This education should cover various aspects of AI, including training methods, data science basics, and the limitations of AI's learning capacity.
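As an illustration of the first element, a risk assessment can be captured in something as simple as a structured register that testing effort is then prioritised against. The sketch below is hypothetical; the categories, scoring scale, and planned tests are assumptions for illustration, not a Tricentis framework.

```python
# Illustrative risk register: each entry ties a risk category to a severity,
# likelihood, and the tests intended to mitigate it. The scores are invented
# for the example; a real assessment would use the organisation's own scale.
from dataclasses import dataclass, field

@dataclass
class Risk:
    category: str          # e.g. legal, reputational, performance, security, cost
    description: str
    severity: int          # 1 (minor) to 5 (critical)
    likelihood: int        # 1 (rare) to 5 (frequent)
    planned_tests: list[str] = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple severity-times-likelihood score to rank where testing starts.
        return self.severity * self.likelihood

risks = [
    Risk("legal", "Model output breaches data-protection rules", 5, 2,
         ["PII leakage scan on sampled outputs"]),
    Risk("reputational", "Biased responses to demographic prompts", 4, 3,
         ["Paired-prompt bias comparison"]),
    Risk("performance", "Degraded accuracy on edge-case inputs", 3, 4,
         ["Regression suite over curated edge cases"]),
]

# Highest-priority risks drive where testing effort goes first.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"[{risk.priority:>2}] {risk.category}: {risk.description}")
```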
Evolving for dynamic AI systems
As AI systems continue to evolve, so too must the strategies used to test them.
"AI systems are constantly evolving, however, so testing strategies must evolve with them,” David explains. “Organisations must regularly review and update their testing approaches to adapt to new developments and requirements in AI technology and emerging threats."
This need for continuous evolution in testing strategies underscores the dynamic nature of AI development and the ongoing commitment required to ensure AI systems remain safe, effective, and compliant with changing regulations.
Software development teams play a critical role in ensuring the responsible development of AI technologies.
"Without thorough testing, software development teams cannot hope to meet evolving regulatory standards or ensure that AI tools are reliable, accessible, accurate, and responsible for public use," David states.
This responsibility extends beyond mere compliance with regulations. The development of efficient testing strategies is an integral element of providing a safe and secure user experience built around trust and reliability.
In essence, software development teams are at the forefront of building trust in AI technologies, a task that is crucial for the continued adoption and integration of AI across various sectors of society.
As AI continues to shape our world, the diligence and responsibility of software development teams in rigorously testing and refining AI systems will be paramount in realising the full potential of this transformative technology whilst mitigating its risks.