Germany, France and Italy have reportedly reached an agreement on how AI should be regulated in the future, which is anticipated to accelerate European-level negotiations.
First reported by Reuters, the governments support “mandatory self-regulation through codes of conduct” for AI foundation models, but oppose “un-tested norms.” The paper argues that the AI Act should regulate the application of AI rather than the technology itself, since risk lies in how AI systems are deployed.
The three European Union nations are also advocating for voluntary commitments from AI providers, which would be monitored by a European authority.
Global AI regulation could lead to safer models
Under the reported agreement, the three governments support commitments that are voluntary but binding on any AI provider in the EU — small or large — that signs up to them. The agreement would therefore apply not only to smaller European AI companies, but also to the US tech giants active in AI development.
The paper also states that all developers of AI foundation models must provide model cards: “The model cards shall include the relevant information to understand the functioning of the model, its capabilities and its limits and will be based on best practices within the developers community.”
No sanctions would initially be imposed on those who do not follow the rules, but a sanctions regime could be established if violations are identified.
The German government is also hosting a digital summit bringing together representatives from politics, business and science, with AI as the focus of discussions. AI and machine learning will also be on the agenda when the governments of Germany and Italy hold talks in Berlin on Wednesday (22nd November 2023).
International collaboration is crucial to safe AI development
With the explosive growth of AI development, nations are increasingly looking to collaborate with technology companies to ensure safety. Notably, the US announced in October 2023 that it now requires technology companies to share AI safety data, in the hope of setting a global precedent for responsible systems.
Speaking on these developments, Gregory Hanson, SVP at Informatica, says: “AI’s powerhouse is the data that fuels it. The move for self-regulation by France, Germany and Italy recognises the transformative potential of AI and the associated risks. But it's important that EU policymakers pay attention to ensuring the accuracy of the data that feeds the technology, alongside regulating the AI systems themselves.
“A binding voluntary commitment for all players - large and small – will help protect the integrity of AI. Yet most companies are still learning what data the AI algorithms need. Ultimately, AI needs the right metadata to be effective. This means there needs to be unity among regulators and policymakers surrounding the importance of data accuracy, clarity and governance.
“Meanwhile, the onus needs to be on businesses to bring discipline and resilience to AI by ensuring traceability, governance and quality are baked in.”