Gartner VP: Businesses Need to Wake Up to AI
In the wake of the EU AI Act being passed, organisations are now assessing whether they have done enough to comply with the new legislation.
AI is already revolutionising the workplace, offering businesses greater efficiencies, clearer data and more specialised strategies. However, although businesses remain eager to invest in AI technologies, organisations are increasingly concerned that their employees do not have the skills to use the technology responsibly.
As AI regulations are set to continue impacting the technology industry, ensuring compliance and responsible AI use is paramount. With this in mind, AI Magazine hears from Nader Henein, VP Analyst at Gartner, about how businesses can best harness AI amid a growing regulatory climate.
“AI is not a monolith, it is a technology that is pervasive,” he states.
Avoiding oversights
A common oversight that businesses make when it comes to new AI regulations is believing that they are not at risk. As Nader explains, companies often mistakenly believe they are only responsible for compliance if they are building AI capabilities themselves.
“The reality is that organisations are also responsible for the AI capabilities they buy,” he says. “This sounds simple enough, but AI is nothing new. It’s going through somewhat of a renaissance, but AI capabilities have been embedded in almost every product we use for the past decade. In fact, organisations would struggle to look at any of the software products they use and not find a dozen embedded AI capabilities.”
Moving forward, businesses will benefit from being more mindful of AI regulations in order to avoid unnecessary delays or even sanctions. According to Nader, organisations are starting to come to terms with the inevitability of AI, prompting the need to document the various use cases of AI across the enterprise and assess the associated risks.
“The EU’s AI Act is not going to be an outlier, it’s simply the first domino to tilt,” Nader comments. “Take stock and know where you stand. This way we are treating the issues that impact us the most in a repeatable and consistent approach.”
Gartner: AI adoption classes could help with AI regulations
In order to help businesses better understand AI regulations, Nader has proposed a strategy for discovering their AI usage, based on the following adoption classes (a simple illustrative inventory sketch follows the list):
AI in the wild:
This category covers publicly available AI tools that employees use for work-related purposes, such as OpenAI’s ChatGPT and Google’s Gemini.
“When you break it down, this is the age-old problem of employee awareness and may involve refreshing some of the old policies to put them into context,” Nader explains.
Embedded AI:
Nader states that this category includes AI capabilities built into standard solutions and SaaS offerings used within the enterprise.
“Service providers have been complementing their offerings with AI capabilities for the better part of the past decade, many of which are completely invisible to the organisation,” he says. “This is nothing new. You will need to ask vendors new questions about the AI capabilities they packaged into the products they sold you. Fun questions like whether they are using your data to train their models.”
Hybrid AI:
According to Nader, Hybrid AI includes enterprise offerings that come with a pre-trained foundational model that is augmented, or further trained, using enterprise data to achieve a more desirable outcome.
He says: “This adoption class is rapidly emerging. It is also the most complex from a risk perspective because it is a combination of a pre-trained model and enterprise data. Most of the risk management for this class is done by monitoring and tuning outcomes, which are heavily platform-specific.”
AI in-house:
This consists of AI capabilities developed and deployed internally within a business, where the organisation has full visibility into the data, technologies and AI model tuning.
Nader says: “If this sounds familiar, then chances are you have a software engineering team or a data science team who are doing this work and who should take on the associated governance and risk management responsibilities.”
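To make the taxonomy concrete, below is a minimal, hypothetical sketch in Python of how an organisation might document its AI use cases against these four adoption classes. The class names follow Nader’s taxonomy; the field names, helper function and example entries are illustrative assumptions, not anything Gartner prescribes.

```python
from dataclasses import dataclass
from enum import Enum
from collections import defaultdict
from typing import Optional

class AdoptionClass(Enum):
    # The four adoption classes described by Nader
    AI_IN_THE_WILD = "AI in the wild"  # public tools used by employees
    EMBEDDED_AI = "Embedded AI"        # AI built into purchased products and SaaS
    HYBRID_AI = "Hybrid AI"            # pre-trained model augmented with enterprise data
    AI_IN_HOUSE = "AI in-house"        # built and deployed internally

@dataclass
class AIUseCase:
    # Illustrative schema; an organisation would define its own fields
    name: str
    owner: str                          # team accountable for governance and risk
    adoption_class: AdoptionClass
    vendor: Optional[str] = None        # relevant for Embedded and Hybrid AI
    trains_on_our_data: bool = False    # a key question to ask vendors

def inventory_by_class(use_cases: list) -> dict:
    """Group documented AI use cases by adoption class to take stock of exposure."""
    grouped = defaultdict(list)
    for use_case in use_cases:
        grouped[use_case.adoption_class].append(use_case)
    return dict(grouped)

# Hypothetical register of AI use cases across an enterprise
register = [
    AIUseCase("ChatGPT for drafting emails", "HR", AdoptionClass.AI_IN_THE_WILD),
    AIUseCase("CRM lead scoring", "Sales Ops", AdoptionClass.EMBEDDED_AI,
              vendor="CRM provider", trains_on_our_data=True),
    AIUseCase("Support chatbot over internal docs", "IT", AdoptionClass.HYBRID_AI,
              vendor="LLM platform"),
    AIUseCase("Churn prediction model", "Data Science", AdoptionClass.AI_IN_HOUSE),
]

for adoption_class, items in inventory_by_class(register).items():
    print(adoption_class.value, "->", [use_case.name for use_case in items])
```

However an organisation records this information, the point of the exercise is the one Nader makes: take stock, know where you stand, and assign governance responsibility for each class of AI use.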
Confronting a changing AI regulation landscape
By the end of 2024, the first set of AI regulations under the EU AI Act will come into effect. This will mark a pivotal shift in the industry’s landscape, particularly across Europe, requiring businesses to conform to a new model of innovation.
Moving forward, Nader envisions further changes.
“In the first half of 2025, legislators across the world will start introducing similar requirements for risk assessment of AI usage and the introduction of controls associated with higher-risk use cases,” he explains. “US regulation of automated decision-making will roll out state by state, but a federal mandate will remain elusive.”