How model-based AI counteracts the “black box” problem
As demonstrated during the recent Covid-19 pandemic, AI at its best can accelerate research, automate procedures, reduce operational costs, streamline planning, optimise resources and test out multiple, concurrent “what if?” hypotheses – at a depth and scale unachievable by human operators.
Early AI systems relied on collecting large volumes of “Big Data” drawn from historical transactions. The AI system then identified key patterns in that data, before simulating or forecasting how a product, organisation or system might perform in future, based on the trends identified in the data.
While AI is undoubtedly a brilliant analytical and prediction tool, increasingly adopted by businesses of every stripe, one of the key issues with AI remains the ‘black box’ problem. Relating to a virtual, rather than a literal, box, the black box problem occurs in any computational system where the user can see the input and the output but is unable to observe the operational processes that lie in between. It’s not so much a problem with the output as with the user’s faith in the output.
When there is no oversight or understanding of the operational processes, users feel uneasy or unsure about how the result was attained, and ultimately whether it can be trusted.
This inherent, human need for processes to be seen to be believed is a fundamental constraint of traditional or ‘black box’ AI. It is the opacity of the process – not the process itself, nor the user’s inability to comprehend it – that causes mistrust or doubt in the results. When the output shows unusual or outlying results, this effect is exacerbated, creating a vicious cycle in which users become increasingly cautious and distrustful of the technology.
Solving the black box problem
Fortunately, the black box problem is not insurmountable and can be solved by using model-based AI instead of more traditional AI systems. Model-based AI eliminates the black box by creating a rich representation of meaning across concepts that can then be manipulated explicitly, traced and tracked. The underlying rationale behind the decisions made by the technology is therefore transparent to the operator, who understands – and therefore has more confidence in – the output.
Model-based AI represents concepts, objects or ideas present in the real world with meaningful computational representations known as “agents”. Each key element in a business or organisational system – such as an asset, a facility, a resource or a decision-maker – is represented as an agent and configured with specific characteristics. The AI model is the product of these individual agents operating and interacting with each other over a period of time to simulate the overall activity of the operation. The agents create data that can be aggregated into a set of outputs that determine key performance indicators such as cost, availability, capacity or utilisation. By changing the way an agent is configured, numerous alternative hypothetical approaches to the operational activity can be examined and compared, using “what if?” analysis.
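The agent-based mechanics described above can be sketched in a few lines of Python. This is a hypothetical toy model, not any vendor’s actual implementation: the `AircraftAgent` class, its `failure_rate` characteristic and the availability KPI are all invented for illustration.

```python
import random

# Each agent (here, an aircraft) carries its own configurable
# characteristics; the model is simply all agents stepping through
# simulated time together.
class AircraftAgent:
    def __init__(self, name, failure_rate):
        self.name = name
        self.failure_rate = failure_rate  # configurable characteristic
        self.available_days = 0

    def step(self, rng):
        # Each simulated day the aircraft is either available or down.
        if rng.random() > self.failure_rate:
            self.available_days += 1

def simulate(failure_rate, days=365, fleet_size=10, seed=42):
    rng = random.Random(seed)  # fixed seed so configurations are comparable
    fleet = [AircraftAgent(f"AC-{i}", failure_rate) for i in range(fleet_size)]
    for _ in range(days):
        for aircraft in fleet:
            aircraft.step(rng)
    # Aggregate agent-level data into a KPI: fleet-wide availability.
    return sum(a.available_days for a in fleet) / (days * fleet_size)

# "What if?" analysis: compare two hypothetical configurations.
baseline = simulate(failure_rate=0.05)
degraded = simulate(failure_rate=0.15)
print(f"availability: baseline {baseline:.2%}, degraded {degraded:.2%}")
```

Because every number in the output flows from an agent whose configuration is explicit, the comparison between the two runs can be traced back to the single parameter that changed.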
Model-based AI is increasingly relied on in sectors where accurate predictions are vital. These include industries such as aerospace, defence, oil and gas, manufacturing and pharmaceuticals, where AI is key to scheduling and planning complex, multi-year operations. Where the safety and security of human life are at stake, and costs running into the millions, even billions, of dollars hang in the balance, model-based AI provides the confidence that black box AI just doesn’t deliver.
In the aerospace industry, model-based AI allows a user to ask the system a hypothetical question such as: “If engine X or aircraft Y was removed from service, what impact would that have on the rest of the fleet?”. To solve a problem such as this, model-based AI uses labelling to recognise the concept of “a flight” or “an aircraft” or “a maintenance operative” as something we understand within the system, together with their associated relationships, such as between an aircraft and a flight. The system can house a complex network of entities – for example, an aircraft, an engine and a maintenance operative – and represent these computationally. The relationships between entities can be verified, and therefore, trusted.
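A minimal sketch of this kind of labelled, inspectable representation, assuming invented entity names and a toy `impact_of_removing` query:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    label: str  # the concept this entity represents, e.g. "aircraft"
    name: str

@dataclass
class Model:
    entities: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # (subject, verb, object) triples

    def relate(self, subj, verb, obj):
        self.relations.append((subj, verb, obj))

    def impact_of_removing(self, entity):
        # Which flights depend on this aircraft? The chain of reasoning
        # is an explicit list of relations that can be checked against
        # the real fleet, not a hidden weight inside a black box.
        return [o for s, v, o in self.relations
                if s is entity and v == "assigned_to"]

model = Model()
aircraft = Entity("aircraft", "Y")
flight1 = Entity("flight", "FL-101")
flight2 = Entity("flight", "FL-202")
model.entities += [aircraft, flight1, flight2]
model.relate(aircraft, "assigned_to", flight1)
model.relate(aircraft, "assigned_to", flight2)

# "If aircraft Y was removed from service, what would be affected?"
affected = model.impact_of_removing(aircraft)
print([f.name for f in affected])
```

Every answer the query returns is backed by a relation the user can read and verify, which is the transparency the article contrasts with black box output.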
Model-based AI is a predictive tool that allows the user to interrogate the data and the output conclusions in a meaningful way. It does not assume the future is purely a function of the past; it operates according to a behavioural model and the actions are the result of the combination of individual agents interacting together. The user can configure the capability and behaviour required from each agent and adjust where necessary. Each agent “actor” in the simulation can be examined to see what actions it took, and the justification for such a decision. This can then be compared against observable, real-world situations to see if the agent made the correct decision given the currently available options.
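The examinable decision log described above can be illustrated with a hypothetical `MaintenanceAgent`; the threshold rule and justification strings are invented for the example:

```python
# Each agent records every action alongside its justification, so a
# planner can replay the decision log and compare it against the
# real-world options that were available at the time.
class MaintenanceAgent:
    def __init__(self, name):
        self.name = name
        self.log = []  # (action, justification) pairs

    def decide(self, aircraft_hours, threshold=100):
        if aircraft_hours >= threshold:
            action = "schedule_inspection"
            why = f"{aircraft_hours}h flown >= {threshold}h threshold"
        else:
            action = "defer"
            why = f"{aircraft_hours}h flown < {threshold}h threshold"
        self.log.append((action, why))
        return action

agent = MaintenanceAgent("Ops-1")
agent.decide(aircraft_hours=120)
agent.decide(aircraft_hours=40)
for action, why in agent.log:
    print(f"{agent.name}: {action} ({why})")
```

The log is the point: each entry pairs what the agent did with why, so an unusual decision can be audited rather than taken on faith.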
Model-based AI even allows the user to model non-discrete, operational activities and “black swan” events – unforeseen events which provoke rapid change and create potential “crunch points”. A good example would be a new contract taken on at a future date which drastically changes the composition and size of the demand for sustainment. Equally, it could model a scenario in which demand increases, maintenance activity increases, and maintenance capacity then maxes out, so bottlenecks form and aircraft groundings spike.
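That bottleneck scenario can be sketched as a toy queueing loop; the capacity, demand and spike figures are invented purely for illustration:

```python
def run_scenario(daily_demand, capacity=5, spike_day=None, spike_demand=20, days=30):
    """Simulate a capacity-limited maintenance queue, day by day."""
    backlog, groundings = 0, []
    for day in range(days):
        # A "black swan" (e.g. a new contract) lands as a one-day demand spike.
        demand = spike_demand if day == spike_day else daily_demand
        backlog += demand            # new maintenance tasks arrive
        done = min(backlog, capacity)
        backlog -= done              # throughput is capped by capacity
        groundings.append(backlog)   # aircraft still waiting = grounded
    return groundings

steady = run_scenario(daily_demand=4)
crunch = run_scenario(daily_demand=4, spike_day=10)  # contract lands on day 10
print(max(steady), max(crunch))
```

In the steady run capacity absorbs demand and no aircraft are grounded; after the spike the backlog jumps and then drains slowly, which is exactly the “crunch point” the planner wants to see coming.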
Model-based AI is the key to avoiding the black box problem and eradicating the mistrust that surrounds it. With every decision the system makes traceable and open to inspection, human operators can trust the output to perform – accurately and transparently – the task that is required.