Patrick Bangert
Senior Vice President Data, Analytics and AI at Searce
Amid the rapid growth of AI, there have been global calls for increased regulation of the technology.
With the EU having reached a provisional agreement on its AI Act, and the United States imposing regulations on technology companies, regulatory guidelines – intended to lead to safer models, reduced bias and an overall improved AI ethics landscape – are being discussed more frequently.
Having worked at the likes of NASA and Samsung in business strategy, research and AI, Searce SVP Patrick Bangert offers us insight into his role in data and analytics, the future of AI regulation and how businesses can ensure safe and responsible uses of the technology.
Tell us about your career background and your current role at Searce
Starting out as a theoretical physics student and then doing my PhD in mathematics at University College London, I was always fascinated by how the world works. Mathematics has an uncanny ability to describe what happens, what will happen, and how to proactively influence that. It’s mathe-magical. I pursued this path as a professor of applied mathematics in Germany until I figured out the academic environment would not let me apply these methods to real life.
That’s when I founded my own company, Algorithmica Technologies, in 2005 to bring AI methods to the process industry — the industry that starts with mining and oil and gas, continues with chemistry and petrochemistry, and culminates in power generation. These industries are asset-heavy in the sense that their costs are dominated by physical equipment, and their primary challenges lie in the maintenance and control of machinery. I continued to publish scientific papers and three books on these subjects, which kept me busy around the world until I exited the business in 2020. In the meantime, I relocated to the San Francisco Bay Area.
During this period, Samsung approached me to spearhead their AI division based in San Jose. Over the course of three enriching years, I led projects on various AI topics predominantly focused on unstructured data, particularly images. Our team successfully created a platform designed to handle large-scale model training efficiently. This technological leap paved the way for building image-based models to detect medical conditions like cancer.
A noteworthy accomplishment that stands out is our innovation in addressing the cost-intensive aspect of image labelling. Recognising this challenge, we devised a novel technology that significantly reduced labelling effort. This breakthrough was documented in AI Magazine, solidifying the impact of our work in the broader AI community.
In Q2 of 2023, I joined Searce to lead two business units — data analytics and AI. We are a global technology consulting company working on cloud projects on behalf of our clients. My role involves overseeing the practices and the delivery teams associated with them. Here, the practice does everything up to contracting, and delivery conducts the projects themselves. Specialising in financial services, logistics, retail, and healthcare, Searce provides proofs of value as well as full-scale production deployments of data and AI projects in the cloud.
How successfully do you see enterprises adopting Gen AI in 2024?
The year 2023 was characterised by many conversations and little action. As is often the case, various pundits proclaimed Gen AI would be our ruin, our salvation, a Ponzi scheme, a grand lie, or something in between. Choose your poison.
Collectively, we’ve calmed down since then. It’s apparent that Gen AI can genuinely do some things well and others poorly. Some of the things it does well are firmly in the cool and entertaining category. There are a few things that represent true business value to companies prepared to adjust their human processes. I expect that businesses will put these use cases into productive use in 2024. As much as we technologists would like to see AI as a well-established field, the fact is that it’s very early days indeed, and anyone adopting these use cases in 2024 is an early adopter. It will be years before Gen AI is in widespread, common use.
Gen AI offers valuable support to software developers by generating initial code drafts, saving up to 50% in development time. Additionally, it extends its efficiency to textual documents like reports, saving 30-40% of writing time. Its natural language processing capabilities facilitate swift and accurate queries across vast repositories of documents, videos, or audio files – an application known as enterprise search. Notably, Gen AI excels in summarising long documents or videos, emphasising key points for enhanced comprehension. These are just some examples of applications that are truly useful.
You’ve said in the past that companies will often adopt AI ‘for the sake of it’. What must businesses be mindful of when considering their AI strategy?
AI, whether generative or not, is a collection of tools. When you want to build a beautiful kitchen, you do not ask whether the wrench is good-looking, only whether it is useful. When applying AI, it is the business outcome that should concern us, not the tool. The conversation often turns this around as vendors dominate the media debate to sell their tools. Some top-level leaders have declared that they will adopt AI and then asked their teams to find application areas. That is putting the cart before the proverbial horse.
Additionally, AI is one tool, but not the only tool needed to reap the benefits. Other tools, both software and hardware, will have to be integrated. Most importantly, humans will have to be trained and motivated to participate in the complex change management process that is necessary to implement a new AI-based initiative.
It is common knowledge that approximately 80% of all AI projects fail. However, they do not fail because the software or the mathematics doesn’t work. Far from it. They fail because users either do not use the tools or use them incorrectly.
My advice is to focus on the challenge and its business value. If AI is the right tool, then use it. If you decide to use it, reserve enough time and budget for the human parts of the project to enable success.
Why do you think that 2024 needs to see greater AI regulation?
One of the principal obstacles to AI adoption is the great legal uncertainty of the moment. On the one hand, there are many court cases currently in flight, each of which has the potential to set a precedent and become a landmark case, as there are almost no precedents available right now. On the other hand, many countries and states are currently considering regulations, but most have not finalised them.
The result is near complete uncertainty on what is legal, what will be regulated and how, and what the penalties are, if any. What seems certain is that many places in the world will answer these questions very differently. In an industry that mainly lives on the internet with vendors, data centres, and customers in different countries, it is unclear how these rules will be enforced effectively.
Businesses buying and applying tools to modify processes need assurance that these changes are legal and long-lasting. Currently, they do not have that reassurance, so I forecast the next big change in AI will not come from technology makers but lawmakers.
What is the value of businesses utilising the cloud moving forward?
The shift to cloud has several significant advantages over on-premises data centres. It transforms spending from a Capex to an Opex model, allowing companies to concentrate on their core business without dealing with IT complexities or managing hardware intricacies. Many software tools come baked into the cloud and pre-integrated with each other, minimising the effort required to set up and maintain the environment. Previously difficult and expensive processes like backup and security have become no-brainers as cloud providers take care of them in the background. Software and data become available easily, anywhere, and anytime.
Perhaps the biggest benefit is scaling. If you need more, or less, storage or compute infrastructure, the cloud allows efficient, dynamic, and fully automatic scaling both up and down.
What is Searce doing to ensure safe AI functionality?
AI becomes safe if your data remains private and if you can trust the answers coming out of the system. Holistic safety would also require a stable regulatory system, which we collectively lack at present, but this is out of Searce’s hands.
Robust data privacy and security are assured easily through advanced technological measures such as encryption, data governance, and other suitable IT tools and environments. Compliance with privacy regulations is similarly no problem. The identification and handling of speech such as toxic, insulting, or discriminatory content are managed through automated detection, flagging, and removal. Additionally, the automatic detection and removal of personally identifiable information further fortifies the protection of sensitive data.
Leveraging strategies like prompt engineering increases the chances of getting accurate responses from AI, but we must acknowledge that AI will make mistakes. Therefore, the underlying process in which AI is used must be robust enough to absorb the occasional mistake. At Searce, we assist our clients in crafting resilient processes that encompass both human intervention and policy elements to effectively and gracefully handle these exceptions.
To ensure ethical practice in AI, Searce has established an ethical AI framework. We meticulously evaluate all projects against this framework to ensure our AI initiatives cause no harm, and we document all decisions of moral character. This commitment underscores our dedication to responsible and conscientious AI implementation.
AI Magazine is a BizClik brand