With AI continuing to expand rapidly worldwide, it is important for organisations investing in the technology to consider digital safety in their practices.
Ethical AI is increasingly becoming a priority for businesses that wish to invest in AI, as it ensures that people's data is better protected. Being responsible in this way will benefit innovation in the long term. In addition, the hope is that unfair biases will be reduced through proper AI principles and governance.
AI Magazine considers some of the largest companies working with ethical AI practices and explores what each is doing in terms of AI ethics.
Accenture is actively working to design and deploy responsible AI solutions that are ethical, transparent and trustworthy. The company's 2022 Technology Vision research found that only 35% of global consumers trust how AI is being implemented by organisations, while 77% think that organisations must be held accountable for AI misuse.
The company has established a generative AI and LLM Centre of Excellence (CoE) with 1,600 employees, including a significant presence in India. It has also trained an additional 40,000 employees in AI to support the CoE, emphasising its role in capability building.
Deloitte offers an AI Risk Management Framework, which provides information on effective operating models, crisis management and responsible AI. It states that the objective of trustworthy AI is to ensure that AI does not lead to biased or unfair outcomes, is well governed and works as intended.
The company's ethics framework services help organisations govern AI responsibly, including guidance on organisational structure. It offers roundtable workshops on emerging topics, training for senior leaders and AI practitioner engagement in line with ethical considerations.
With DeepMind merging with Google Brain earlier in 2023, the company hopes to continue pursuing AI research and products that aim to improve human lives and transform industries. The company has achieved many scientific breakthroughs with AI, the most recent being AlphaFold, which predicts the 3D structures of proteins and has accelerated research in biology.
The company also runs a blog covering ideas and the latest research on a wide range of other topics, from competitive programming to interactive video games, where readers can learn more about its ethical practices.
Tata Consultancy Services (TCS) is part of Tata Group, India's largest multinational business group. It considers ethical AI to be one of the key drivers of business revenue in the near future. The company uses the concept of the Machine First Delivery Model (MFDM) to address ethics in its AI services and solutions.
TCS states that sustainable AI requires initiatives across technology, process and enterprise culture in order to succeed. With this in mind, in July 2023 the company announced plans to train 25,000 engineers to be certified on Microsoft Azure OpenAI.
Apple fully integrates hardware and software across every device, with researchers and engineers collaborating to improve the user experience and protect user data. In particular, the tech giant joined the research consortium Partnership on AI to demonstrate its commitment to AI ethics.
Its teams explore AI to help solve real-world, large-scale problems, with areas of work including deep learning and reinforcement learning research. Apple also hosts podcasts on practical AI ethics through machine learning and data science; its episode Ethics in Artificial Intelligence: Morality and Regulation in particular highlights the importance of ethical training for AI.
AWS is committed to developing fair and accurate AI/ML services, aiming to provide the tools and guidance needed to build these applications responsibly. The company believes that responsible use of these technologies is key to fostering continued innovation.
In addition to responsible use guides and machine learning experts who help maximise operations, the company provides education and training through programmes such as the AWS Machine Learning University. Amazon SageMaker Clarify also helps to mitigate bias by detecting potential bias during data preparation, after model training and in a deployed model, by examining specific attributes.
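To give a sense of the kind of pre-training checks Clarify performs, the sketch below computes two of its documented bias metrics by hand: class imbalance (CI) and difference in proportions of labels (DPL). This is a minimal, self-contained illustration assuming a binary facet and binary labels; the function names and example data are our own, not the Clarify API.

```python
def class_imbalance(facet_values, advantaged):
    """CI = (n_a - n_d) / (n_a + n_d): how over-represented the
    advantaged facet group is relative to the disadvantaged one."""
    n_a = sum(1 for v in facet_values if v == advantaged)
    n_d = len(facet_values) - n_a
    return (n_a - n_d) / (n_a + n_d)


def difference_in_positive_proportions(facet_values, labels,
                                       advantaged, positive=1):
    """DPL = q_a - q_d: the gap in positive-label rates between the
    advantaged and disadvantaged facet groups."""
    pos_a = sum(1 for v, y in zip(facet_values, labels)
                if v == advantaged and y == positive)
    pos_d = sum(1 for v, y in zip(facet_values, labels)
                if v != advantaged and y == positive)
    n_a = sum(1 for v in facet_values if v == advantaged)
    n_d = len(facet_values) - n_a
    return pos_a / n_a - pos_d / n_d


# Illustrative dataset: facet group "A" is treated as advantaged.
facet = ["A", "A", "A", "B", "B", "A"]
label = [1, 1, 0, 0, 1, 1]

print(round(class_imbalance(facet, "A"), 3))                            # 0.333
print(round(difference_in_positive_proportions(facet, label, "A"), 3))  # 0.25
```

Values near zero for both metrics suggest the training data is balanced with respect to the facet; Clarify surfaces these (and many more) automatically during data preparation so that issues can be caught before a model is trained.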
IBM and IBM Research aim to help people and organisations adopt AI responsibly, using ethical principles to encourage systems built on trust. The pillars of ethical AI at IBM are explainability, fairness, robustness, transparency and privacy.
IBM Consulting recently announced a Center of Excellence (CoE) for Generative AI with over 1,000 consultants with specialised generative AI expertise to help transform global clients’ core business processes like supply chain, finance and talent, as well as customer experiences and IT operations.
Meta's work is informed by five pillars of responsibility, a set of core values for designing and using AI responsibly: privacy and security; fairness and inclusion; robustness and safety; transparency and control; and accountability and governance.
The company is addressing issues of AI fairness through research, creating and distributing more diverse datasets which are then used to train AI models with improved fairness. In addition, it has developed a novel use of machine learning technology to help distribute adverts more equitably across its apps.
Microsoft's goal is to create lasting AI that can be used responsibly. The company is committed to ensuring safe practice within AI, with the Microsoft Responsible AI Standard principles shaping the way it creates AI systems and guiding how it designs, builds and tests models.
The company is also collaborating with researchers and academics worldwide to advance responsible AI practices and technologies. It wants to continue innovating safely whilst empowering users to cultivate a responsible, AI-ready business culture through shared learning.
1: Google AI
Google continually works towards eliminating biases in its AI teams, from applying a robust human-centred design approach to examining raw data. It also offers advice on how businesses can create AI with fairness and inclusion in mind, and how algorithms can be built to reflect these goals.
Google has stated that it will not pursue AI applications in technologies such as weapons or surveillance, or those that violate human rights. In addition to its work on eliminating bias, the company is researching improved skin tone evaluation in machine learning, seeing this research as part of its practice of sharing, learning and evolving its work.