Gartner: How D&A Leaders Develop Successful AI Governance
AI governance is the process of assigning and assuring organisational accountability, decision rights, risks, policies, and investment decisions for applying AI. In short, AI governance means asking the right questions and providing the answers needed to put the right guardrails in place.
Decision rights are a central part of AI governance: they resolve dilemmas by weighing the risks, policies, and investment decisions involved in applying AI.
Because AI governance is new to most organisations, it is often set up separately from existing governance practices. AI governance is more successful, however, when it extends those existing practices with AI-specific considerations.
To reduce risks and achieve a successful AI governance implementation, there are four key actions that data and analytics (D&A) leaders must consider.
1: Document Facts About AI Models for Each Project
Keeping technical and governance documentation current is key to enabling AI-powered systems. Document facts about AI models for each project to build trust amongst business and IT leaders and to provide project transparency.
The documentation can include data visualisations, metadata, model deployments and, fundamentally, automation. It is not widely known that modern AI and machine learning (ML) platforms can automatically generate model updates and keep this information as current as possible.
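As an illustration only, the sketch below shows one way such model facts might be captured automatically on every training run so the documentation never goes stale. The ModelFactSheet structure and all field names are hypothetical assumptions for this sketch, not the API of any particular AI/ML platform.

```python
# A minimal sketch of automated model fact documentation.
# ModelFactSheet and its fields are illustrative assumptions,
# not a specific platform's API.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ModelFactSheet:
    project: str
    model_name: str
    model_version: str
    training_data: str            # where the training data came from
    intended_use: str             # the business problem the model addresses
    metrics: dict = field(default_factory=dict)
    generated_at: str = ""        # timestamp set when the sheet is produced

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


def document_model(project, model_name, version, data_source, use, metrics):
    """Capture model facts at training time so documentation stays current."""
    return ModelFactSheet(
        project=project,
        model_name=model_name,
        model_version=version,
        training_data=data_source,
        intended_use=use,
        metrics=metrics,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )


# Example: regenerate the fact sheet as part of each training run.
sheet = document_model(
    project="churn-reduction",
    model_name="churn-classifier",
    version="1.4.0",
    data_source="crm_events_2024Q4",
    use="Flag accounts at risk of churn for the retention team",
    metrics={"auc": 0.87, "precision_at_10pct": 0.62},
)
print(sheet.to_json())
```

Because the sheet is produced by the training pipeline itself rather than written by hand, business and IT leaders can trust that what they read reflects the model actually deployed.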
2: Create Basic Standards for Building and Deploying AI
When establishing AI governance, develop standards as early as feasible that teams building or deploying AI must follow. These standards can apply to collaboration, data preparation, data formats, and bias mitigation. For example, for bias mitigation, put in place a standard for explainability and interpretability, because bias can favour a specific data structure or a specific type of problem to solve.
Creating standards early improves development and production workflows, whereas a lack of standards can slow AI expansion within the organisation. Early standardisation of tools and concepts also prevents unplanned proliferation of tools and techniques.
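To make this concrete, here is a minimal sketch of how such standards might be enforced as an automated pre-deployment gate. The required fields, artefact names, and the check itself are assumptions invented for this illustration, not an established tool or framework.

```python
# A minimal sketch of enforcing build/deploy standards as an automated gate.
# The required fields and artefact names below are illustrative assumptions.

REQUIRED_DOCUMENTATION = {"training_data", "intended_use", "metrics"}
REQUIRED_ARTEFACTS = {"explainability_report", "bias_evaluation"}


def check_deployment_standards(fact_sheet: dict, artefacts: set) -> list:
    """Return a list of standard violations; an empty list means the gate passes."""
    violations = []
    missing_docs = REQUIRED_DOCUMENTATION - set(fact_sheet)
    if missing_docs:
        violations.append(f"Missing documentation fields: {sorted(missing_docs)}")
    missing_artefacts = REQUIRED_ARTEFACTS - artefacts
    if missing_artefacts:
        violations.append(f"Missing artefacts: {sorted(missing_artefacts)}")
    return violations


# Example: a model with documentation but no bias evaluation fails the gate.
violations = check_deployment_standards(
    fact_sheet={
        "training_data": "crm_events_2024Q4",
        "intended_use": "churn flagging",
        "metrics": {"auc": 0.87},
    },
    artefacts={"explainability_report"},
)
if violations:
    print("Deployment blocked:")
    for v in violations:
        print(" -", v)
```

Encoding the standard as a check that every team runs the same way is one way to stop tools and practices proliferating team by team.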
3: Focus AI Governance on the Most Critical Areas, Not on Everything
Prioritise governance of the most critical areas. This approach will keep the business’s best interest front of mind.
Only the most critical elements need continuous, aggressive governing. These are typically compliance, security, common AI tools, and shared data. Less critical areas are usually specific AI initiatives or projects that involve several lines of business. In these instances, balancing AI value and risk requires only temporary governance, applied for the duration of the project.
The least critical content needs to be agreed upon by the business and left alone. Data scientists enjoy freedom: If they are working on something that does not affect multiple lines of business and does not present a risk, let them do what they want. Counterintuitively, in this case, no governance is a governance decision.
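One way to operationalise this tiering, sketched below purely as an illustration, is to map each criticality tier to a governance regime. The tier names and rules are assumptions drawn from the examples above, not a Gartner framework.

```python
# A minimal sketch mapping criticality tiers to governance regimes,
# based on the examples in the text; tier names and rules are illustrative.

GOVERNANCE_TIERS = {
    # Most critical: continuous, aggressive governance.
    "critical": {
        "scope": ["compliance", "security", "common AI tools", "shared data"],
        "regime": "continuous",
    },
    # Less critical: temporary governance for the life of the project.
    "project": {
        "scope": ["AI initiatives spanning several lines of business"],
        "regime": "temporary (duration of the project)",
    },
    # Least critical: agreed with the business, then left alone.
    "local": {
        "scope": ["single-team work with no shared risk"],
        "regime": "none (a deliberate governance decision)",
    },
}


def governance_regime(tier: str) -> str:
    """Look up the governance regime for a given criticality tier."""
    return GOVERNANCE_TIERS[tier]["regime"]


print(governance_regime("critical"))  # -> continuous
print(governance_regime("local"))     # -> none (a deliberate governance decision)
```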
4: Collaborate With Legal and Compliance on AI Initiatives
It is essential to collaborate with legal and compliance counterparts to identify the laws and regulations that AI initiatives must comply with. Jointly decide the actions required from AI teams and governance stakeholders alike. These fall into two key categories: existing laws and regulations that AI must meet, and laws and regulations that specifically target AI.
Legal and compliance counterparts will be familiar with existing laws, so AI governance organisations should consult them first on the measures and approaches to adopt. These measures include guidelines, review gates, and industry-specific validation related to privacy, data protection, intellectual property, competition, and corporate law.
Laws and regulations that specifically target AI are numerous, inconsistent across jurisdictions, and mostly not yet enforced. They nevertheless give teams a directional understanding of what will be expected of AI governance in the future. Educate your legal and compliance counterparts about AI so you can jointly decide on the appropriate course of action.
AI is also inherently hard to govern because companies must meet demands for both safety and value under complex conditions. This complexity, together with the relative novelty of the AI life cycle, often leads to a lack of clarity about AI’s reputational, business, and societal impacts.
Ultimately, it is important not only to educate the business but also to structure the implementation of AI governance along these lines, so that the organisation gains the maximum benefit.