5 minutes with Brian Mullins, CEO at Mind Foundry

AI Magazine speaks to the CEO of Mind Foundry, Brian Mullins, to learn more about the company, its technology, ethical AI and plans for the future

Can you tell me about Mind Foundry?

Mind Foundry is an Oxford University company founded by two of the world’s most renowned thinkers on artificial intelligence (AI), Professors Stephen Roberts and Michael Osborne. Mind Foundry develops AI technologies that help organisations in the public and private sectors tackle high-stakes problems: problems that have the potential to significantly impact human lives at both an individual and a population level.

Mind Foundry’s AI systems are designed to continuously learn from changes in their environment, improving not only their function but also how they learn. These systems also incorporate humans not just as users, but as essential components of the decision chain. This enables Mind Foundry to more readily deploy into high-impact settings where responsibility isn’t just a matter of philosophy, but of the mathematical proof necessary to create trusted systems that can become reliable and accountable members of your team. If your AI is learning to play chess, you can afford for it to lose a billion games. But if you are going to use AI to make a decision in the real world, with real impact on the life of an individual or an entire population, you need to do better. You need to be confident that you can stand by the decisions your AI makes, because the real world is not a game.

What is your role and responsibilities at the company?

I am the CEO of Mind Foundry. I joined the founders, Professors Stephen Roberts and Michael Osborne, in 2019. In my career I’ve found that the CEO has three responsibilities that matter most to the success of the company. The first is to create the vision and help communicate it to our team, customers, and partners, so that everyone understands how they can play their part in moving forward. The second is to grow the team: both by hiring new team members who are at the top of their game across fields of expertise, and by creating a culture that supports the existing team to push their own limits and to have opportunities to progress personally and professionally that they wouldn’t have anywhere else. And finally, to drive the growth of revenue for the business: not as the end goal, but as the way to create the resources we need to achieve our mission. These are the things that can help any company grow and make an impact in the world.

How does your company utilise AI to support its operations?

AI is a fundamental part of everything we do, beginning with our mission to create a future where humans and AI work together to solve the world’s most important problems. As an Oxford University company, we retain close ties to the cutting-edge, scientifically principled research coming out of the world’s best universities, and we use our unique expertise to bridge the gap between that research and customers with real-world problems by creating AI products and solutions optimised for a variety of very complex problems.

How can AI leaders and experts ensure AI is utilised in an ethical way?

We can’t. It’s always a risk. From the printing press to the invention of the computer, and to almost every breakthrough idea that will come in the future, there will be players who intentionally use it to do harm to someone else. But that won’t be where most of the harmful outcomes are created. As with any rapidly evolving technology, AI brings with it a steep learning curve that increases the likelihood that mistakes and miscalculations will result in unanticipated and harmful impacts. Many of these will arise out of good old-fashioned human laziness, lack of understanding, and a failure to consider the total systemic impacts of the decisions that are being made, by both the humans and the AI. And this is the place where we, as AI leaders, can make a difference.

The first step towards this is understanding the technology in its entirety, including its risks and failure modes, as well as its trade-offs and benefits. It is only by making sure that people can understand how the technology works, as well as the broader context in which it is being deployed, that business leaders can ensure AI systems are being used for good in both the short and the long term, and not unconsciously perpetuating unfairness, discrimination, or biases. In practice, this means creating AI products that are as powerful in their capacity to create value in the world as they are in their ability to help us understand, shape, and manage how that value is created.

But it’s not just about product design, it’s also about education. As humans, we have learned how to interact with other humans by watching and observing social interactions since our birth. Living with AI is new for all of us, and since we don’t yet have the first-hand experiences to give us an intuitive sense for how things work, we can start to bridge the gap with the right educational materials.

What can we expect from Mind Foundry in the future?

We believe the most exciting thing the future holds for AI is the way in which humans and AI will partner together more often and more deeply. We’ll be able to interact with AI on a regular basis, in a way where humans and AI help each other do more, far more actively than we do today. Today, it’s a lot more siloed: humans and AI are often apart from one another; the AIs do what they do and the humans do what they do. In the future, at Mind Foundry and through the products we’re building, there will be a lot more teamwork. That’s going to be a really exciting time.
