Three ways to leverage machine learning for test automation
In recent years, software development has largely shifted to Agile and DevOps methodologies, in pursuit of a mature continuous integration (CI) / continuous delivery (CD) pipeline. As part of this leap forward, organisations automated several processes, including coding, monitoring, and, of course, testing.
Done right, this is a huge advancement for DevOps teams, who can now move more quickly to meet the needs of buyers. However, the flip side is that failures with test scripts and frameworks account for many issues that DevOps teams face.
DevOps processes involve a wide range of practitioners, including product managers, product owners, developers, test automation engineers, business testers and operation engineers. This means the data originates from different tools and personas, and needs to be normalised. To succeed in a complex DevOps digital journey, teams must adopt automated continuous testing that is reliable, self-maintained (as much as possible) and brings value with each test execution cycle.
Machine learning is vital for DevOps
Scaling test automation and managing it over time remains a challenge for DevOps teams. Development teams can utilise ML both in the platform’s test automation authoring and execution phases, as well as in the post-execution test analysis that includes looking at trends, patterns and impact on the business.
Before going any further, it's important to understand the root causes of test automation's instability when these technologies are not used:
- The testing stability of both mobile and web apps is often impacted by elements within them that are either dynamic by definition (e.g. React Native apps), or that were changed by the developers.
- Testing stability can also be impacted when changes are made to the data that the test is dependent on, or more commonly, changes are made directly to the app (i.e. new screens, buttons, user flows or user inputs are added).
- Non-ML test scripts are static, so they cannot automatically adapt and overcome the above changes. This inability to adapt results in test failures, flaky/brittle tests, build failures, inconsistent test data and more.
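To make the brittleness concrete, here is a minimal sketch (using a hypothetical page model, not any real framework) of how a static, hard-coded locator fails the moment developers rename an element:

```python
# The page as shipped in build N: element IDs mapped to their labels.
build_n = {"login-btn": "Log in", "user-field": "Username"}

# Build N+1: developers renamed "login-btn" to "signin-btn".
build_n1 = {"signin-btn": "Log in", "user-field": "Username"}

def click(page, element_id):
    """A static test step: fails outright if the ID no longer exists."""
    if element_id not in page:
        raise LookupError(f"element not found: {element_id}")
    return f"clicked {element_id}"

print(click(build_n, "login-btn"))    # works against build N
try:
    click(build_n1, "login-btn")      # breaks against build N+1
except LookupError as e:
    print(f"test failed: {e}")
```

The script encodes no knowledge beyond the literal ID, so any rename, new screen, or altered flow turns into a hard failure rather than an adaptation.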
Let’s look at three ways ML can help your DevOps organisation with test automation.
Make sense of high volumes of test data
Organisations that implement continuous testing within Agile and DevOps execute a large variety of testing types multiple times a day. This includes unit, API, functional, accessibility, integration and other testing types.
With each test execution, the amount of test data that’s being created grows significantly, making the decision-making process harder. From understanding where the key issues in the product are, through visualising the most unstable test cases and other areas to focus on, ML in test reporting and analysis makes life easier for executives.
With AI/ML systems, executives should be able to better slice and dice test data, understand trends and patterns, quantify business risks, and make decisions faster and continuously. For example, learning which CI jobs are more valuable or lengthy, or which platforms under test (mobile, web, desktop) are faultier than others.
Without the help of AI or ML, the work is error-prone, manual and sometimes impossible. With AI/ML, practitioners of test data analysis have the opportunity to add features around such things as test impact analysis, security holes and platform-specific defects.
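The kind of slicing described above starts with simple aggregation over test results. A toy sketch (the record fields are illustrative, not from any real tool) of surfacing which platform under test is faultier and which tests are the least stable:

```python
from collections import defaultdict

# Hypothetical execution records from a day of continuous testing.
results = [
    {"test": "checkout", "platform": "mobile", "passed": False},
    {"test": "checkout", "platform": "mobile", "passed": True},
    {"test": "checkout", "platform": "web",    "passed": True},
    {"test": "login",    "platform": "mobile", "passed": True},
    {"test": "login",    "platform": "web",    "passed": False},
    {"test": "login",    "platform": "web",    "passed": False},
]

def failure_rates(records, key):
    """Failure rate per value of `key` (e.g. per test or per platform)."""
    totals, fails = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        if not r["passed"]:
            fails[r[key]] += 1
    return {k: fails[k] / totals[k] for k in totals}

by_platform = failure_rates(results, "platform")  # faultier platforms
by_test = failure_rates(results, "test")          # most unstable test cases
```

An ML layer would sit on top of aggregates like these, learning trends over time and flagging anomalies, rather than leaving executives to eyeball raw logs.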
Make actionable decisions around quality for specific releases
With DevOps, feature teams or squads are delivering new pieces of code and value to customers almost on a daily basis. Understanding the level of quality, usability and other aspects of code quality on each feature is a huge benefit to the developers.
By utilising AI/ML to automatically scan the new code, analyse security issues and identify test coverage gaps, teams can advance their maturity and deliver better code faster. Code Climate, for example, can automatically review code changes on each pull request, spot quality issues, and optimise the entire pipeline. In addition, many DevOps teams today leverage the feature flags technique to gradually expose new features, and hide them in cases of issues.
With AI/ML algorithms, such decision making could be made easier by automatically validating and comparing between specific releases based on predefined datasets and acceptance criteria.
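A minimal sketch of such a release gate, assuming hypothetical metric names and thresholds (the acceptance criteria here are made up for illustration):

```python
# Predefined acceptance criteria a release candidate must meet.
criteria = {"pass_rate_min": 0.95, "p95_latency_ms_max": 400, "new_critical_bugs_max": 0}

def gate(metrics, criteria):
    """Return (ok, reasons): does this release meet the predefined bar?"""
    reasons = []
    if metrics["pass_rate"] < criteria["pass_rate_min"]:
        reasons.append("pass rate below threshold")
    if metrics["p95_latency_ms"] > criteria["p95_latency_ms_max"]:
        reasons.append("p95 latency regression")
    if metrics["new_critical_bugs"] > criteria["new_critical_bugs_max"]:
        reasons.append("new critical bugs")
    return (not reasons, reasons)

release_a = {"pass_rate": 0.97, "p95_latency_ms": 380, "new_critical_bugs": 0}
release_b = {"pass_rate": 0.91, "p95_latency_ms": 420, "new_critical_bugs": 1}

print(gate(release_a, criteria))  # passes the gate
print(gate(release_b, criteria))  # fails, with reasons a feature flag could act on
```

In practice an ML model would learn the thresholds and comparisons from historical releases rather than hard-coding them, but the decision shape is the same: compare a candidate against predefined datasets and acceptance criteria, then gate or flag automatically.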
Enhance test stability over time
In traditional test automation projects, the test engineers often struggle to continuously maintain the scripts each time a new build is being delivered for testing, or new functionality is added to the app under test.
In most cases, these events break the test automation script. This is either because a new element ID was introduced or changed since the previous app version, or because a new platform-specific capability or popup was added that interferes with the test execution flow. In the mobile landscape specifically, new OS versions typically change the UI and add new alerts or security popups on top of the app. These kinds of unexpected events would break a standard test automation script.
With AI/ML and self-healing abilities, a test automation framework can automatically identify a change made to an element locator (ID), or a screen/flow that was added between predefined test automation steps, and either quickly fix it on the fly, or alert and suggest the quick fix to the developers. Obviously, with such capabilities, test scripts that are embedded into CI/CD schedulers will run much smoother and require less intervention by developers.
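A minimal sketch of the self-healing idea, assuming a simple page model where elements are identified by ID (real frameworks use richer signals such as element attributes, position and text). When an exact lookup fails, fall back to the closest known candidate and surface it as a suggested fix instead of failing outright:

```python
import difflib

def find_element(page_ids, locator):
    """Return (resolved_id, healed): exact match, or the closest candidate."""
    if locator in page_ids:
        return locator, False
    # Fuzzy fallback: pick the known ID most similar to the stale locator.
    candidates = difflib.get_close_matches(locator, page_ids, n=1, cutoff=0.6)
    if candidates:
        return candidates[0], True  # heal on the fly, and report the suggestion
    raise LookupError(f"no candidate for locator: {locator}")

page = ["signin-btn", "user-field", "password-field"]  # current build's IDs
elem, healed = find_element(page, "login-btn")  # stale locator from the script
print(elem, healed)
```

Here the stale `login-btn` resolves to `signin-btn`, so the step can proceed while the framework logs the rename for the developers to confirm.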
There is no doubt that ML will shape the next generation of software defect categories and classifications. But most importantly, it will increase the quality and efficiency of releases.
By Jonathan Zaleski, Head of Applause Labs
Google is using AI to design faster and improved processors
Engineers at Google are now using artificial intelligence (AI) to design faster and more efficient processors, and then using its chip designs to develop the next generation of specialised computers that run the same type of AI algorithms.
Google designs its own computer chips rather than buying commercial products. This allows the company to optimise the chips to run its own software, but the process is time-consuming and expensive, usually taking two to three years to develop.
Floorplanning, a stage of chip design, involves taking the finalised circuit diagram of a new chip and arranging the components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge impact on speed and power consumption.
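One standard proxy for layout quality in placement is half-perimeter wirelength (HPWL): shorter estimated wires generally mean a faster, lower-power chip. The toy sketch below (made-up blocks, nets and coordinates; not Google's method) shows the kind of objective a placer, human or neural, is trying to minimise:

```python
def hpwl(placement, nets):
    """Sum, over nets, of the half-perimeter of the bounding box of its pins."""
    total = 0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Three connected blocks, and two candidate layouts for them.
nets = [("cpu", "cache"), ("cache", "io"), ("cpu", "io")]
compact = {"cpu": (0, 0), "cache": (1, 0), "io": (0, 1)}
spread  = {"cpu": (0, 0), "cache": (9, 0), "io": (0, 9)}

print(hpwl(compact, nets))  # tighter layout, shorter estimated wires
print(hpwl(spread, nets))
```

Real floorplanning optimises wirelength alongside congestion, density and timing constraints over thousands of blocks, which is what makes the search space so hard for humans to explore by hand.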
Previously, floorplanning was a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on parts in parallel, fiddling around to find small refinements, she says.
Fast chip design
The Google researchers have created a convolutional neural network system that performs the macro block placement by itself within hours to achieve an optimal layout; the standard cells are automatically placed in the gaps by other software. This ML system should be able to produce an ideal floorplan far faster than humans at the controls. The neural network gradually improves its placement skills as it gains experience, according to the AI scientists.
In their paper, the Googlers said their neural network is "capable of generalising across chips — meaning that it can learn from experience to become both better and faster at placing new chips — allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."
Generating a floorplan can take less than a second using a pre-trained neural net, and with up to a few hours of fine-tuning the network, the software can match or beat a human at floorplan design, according to the paper, depending on which metric you use.
"Our method was used to design the next generation of Google’s artificial-intelligence accelerators, and has the potential to save thousands of hours of human effort for each new generation," the Googlers wrote. "Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields."