InRule Technology: Delivering fairness through awareness

Founded in 2002 by CEO Rik Chomko and CTO Loren Goodman, InRule Technology was born out of a dream of giving everyone the power of computing without the complexity of programming.
InRule Technology provides AI-enabled, end-to-end automation for the enterprise. IT and business personnel rely on the company’s decision automation, machine learning and digital process technologies to increase productivity, grow revenue and delight customers.
The InRule Decision Platform empowers both technical and business rule authors to write and manage automated decisions and business logic. xAI Workbench, the company’s suite of machine learning modeling engines, delivers explainable machine learning predictions.
InRule Technology’s low-code digital process automation technology, Barium Live, brings efficiency, consistency, and transparency to processes and workflows.
Introducing bias detection
The company recently introduced bias detection features within xAI Workbench.
The bias detection features support organisations whose machine learning models produce predictions that could contain bias toward a protected class (gender, race, age, etc.) or affect an individual’s well-being, in areas such as clinical trials, population health management, incarceration recidivism, loan origination and insurance policy rating.
“Organisations leveraging machine learning need to be aware of the harmful ways that bias can creep into models, leaving them vulnerable to significant legal risk and reputational harm,” said David Jakopac, Ph.D., vice president, Engineering and Data Science, InRule Technology. “Our explainability and clustering engines provide unprecedented visibility that enables organisations to quantify and mitigate harmful bias through action in order to advance the ethical use of AI.”
In addition to reducing risk and preventing harmful algorithmic bias, the automated bias detection in xAI Workbench minimises bottlenecks in the model ops lifecycle: data science teams gain automated tools that accelerate development, leading to faster model deployments with greater confidence.
Evaluating fairness and ensuring equal treatment
According to InRule, bias detection in xAI Workbench delivers “fairness through awareness” and minimises risk for organisations that leverage machine learning predictions at scale within business operations. Augmenting xAI Workbench with bias detection allows enterprises to quantify and mitigate potential hazards when complying with federal, state and local regulations or corporate policies.
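To illustrate what "quantifying" group-level bias can look like in practice, the sketch below computes the demographic parity difference, one common fairness metric: the gap in positive-prediction rates between protected groups. This is a generic illustration of the concept, not InRule's actual method; the function, data and group labels are all hypothetical.

```python
# Hypothetical sketch: quantifying group-level bias with the
# demographic parity difference. Not part of xAI Workbench's API.

def demographic_parity_difference(predictions, protected):
    """Gap in positive-prediction rates between protected groups.

    predictions: list of 0/1 model outputs
    protected:   list of group labels (e.g. "A" / "B"), same length
    """
    groups = {}
    for pred, grp in zip(predictions, protected):
        groups.setdefault(grp, []).append(pred)
    # Positive-prediction rate per group
    rates = {g: sum(p) / len(p) for g, p in groups.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative loan-approval predictions for two groups:
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A value near zero suggests the two groups receive positive outcomes at similar rates; a large gap (here, 75% for group A versus 25% for group B) is the kind of signal such tooling would surface for review.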
Unlike platforms that exclusively measure whether the distribution of data has changed over time, xAI Workbench bias detection evaluates the fairness of the model itself, ensuring that people who are similar (with respect to the attributes most relevant to the modeled decision) receive equal treatment. Additionally, the bias detection in xAI Workbench scours subsets of the model, exploring millions of data paths to verify that the model operates with equal fairness within groups and between groups.
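The "similar individuals receive equal treatment" idea above can be sketched as an individual-fairness consistency check: compare pairs of individuals on task-relevant features only, and flag pairs that are close together yet received different predictions. The function, distance threshold and sample data below are illustrative assumptions, not xAI Workbench's actual implementation.

```python
# Hypothetical individual-fairness sketch in the spirit of
# "fairness through awareness": similar individuals (measured on
# task-relevant features, protected attributes excluded) should
# receive similar predictions. Illustrative only.

def consistency_violations(features, predictions, radius=1.0):
    """Return index pairs of similar individuals with differing predictions.

    features:    list of numeric feature vectors (protected attributes excluded)
    predictions: list of 0/1 model outputs, same length
    radius:      Euclidean distance below which two individuals count as similar
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    violations = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            similar = distance(features[i], features[j]) <= radius
            if similar and predictions[i] != predictions[j]:
                violations.append((i, j))
    return violations

# Two near-identical applicants with different outcomes flag a violation:
feats = [[1.0, 2.0], [1.1, 2.0], [5.0, 5.0]]
preds = [1, 0, 1]
print(consistency_violations(feats, preds))  # → [(0, 1)]
```

Checking "within groups and between groups" would extend this idea by running such metrics over many subsets of the data, which is the exhaustive subgroup exploration the article describes at the scale of millions of data paths.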