Five minutes with: Seth Deland, The MathWorks

The MathWorks' Seth Deland speaks to AI Magazine about how simulation influences and impacts AI models, and what applications could benefit from simulation

Tell us about yourself and your role at MathWorks

I’m a product marketing manager at MathWorks for MATLAB machine learning and data science products. My background is in mechanical engineering, with a B.S. and an M.S. in the field from Michigan Tech University. 

I’ve been with MathWorks for over 11 years now. We’re the leading developer of mathematical computing software - our products are relied on by engineers and scientists all over the world to accelerate innovation and development. 

MATLAB is one of our core products. Simply put, it’s the language of technical computing - a programming environment for algorithm development, data analysis, visualisation, and numeric computation. Then we have Simulink, which is a graphical environment for simulation and Model-Based Design for multidomain dynamic and embedded systems. Both MATLAB and Simulink are also globally utilised as fundamental teaching and research tools.

How does simulation influence and impact the design of AI models? What best practices does simulation bring to the development of AI models?

Well, it’s not enough to just train a model and say, “My model has 99.9% accuracy. I’m done.” Remember, an AI model is trained on data, but that data doesn’t always perfectly represent every scenario it will see in practice. You need to understand how it’s really going to behave once it’s integrated into the engineered system it’s destined for. 

Simulation does that for you. As a result, engineers can iterate much faster in the early stages of the design process, when it’s far easier and cheaper to change a computer simulation than to modify a physical device or expensive prototype later on.

So I’d say simulation is one of the most important things in the modern engineer’s toolbox. It enables them to validate that their designs are going to work as intended before getting to the hardware stage. Before I go out and actually build my prototype in the real world, I can validate everything works together and interacts with its environment as I would expect it to. 

Now, as engineers we need to be aware that a model is only going to be as good as the data it’s given. That means the best thing you can do is get the data set ready – first fully understand what data you have, where your gaps are and so on – then build your model from there. The current trend in the market around data-centric AI is basically all about recognising that a lot of the work in being successful with AI actually goes into making sure your data set is right, not just the model you’re training on that data set. 
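
To make that data-first mindset concrete, here is a minimal MATLAB sketch of auditing a data set before any model is trained. The file, table, and label names are hypothetical, and the functions shown are just one way to check coverage and class balance.

```matlab
% Minimal sketch (hypothetical file and column names) of inspecting a data set
% before training, in the spirit of data-centric AI.
data = readtable("sensor_logs.csv");            % hypothetical logged sensor data

summary(data)                                   % ranges and data types per variable
missingPerVar = sum(ismissing(data));           % where the gaps are
classCounts   = groupcounts(data, "FaultLabel") % hypothetical label column: check class balance

% Only once coverage and balance look reasonable does it make sense to train a baseline.
baseline = fitcensemble(data, "FaultLabel");
```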

What are the common challenges that engineers face when implementing simulation into AI modelling? How can they overcome these challenges?

One of the biggest challenges is presented by models that change frequently. You might build your initial version, you’re happy enough with it and you deploy that to production, but as soon as it’s in production, it’s seeing new data. 

This is why the AI model needs to be implemented in a way that allows it to be updated faster than a lot of the other infrastructure around it. You want to be thinking about building your system in a way that lets you update these AI models in the future to improve the performance of the overall system - just making sure there’s an easy pathway for that. Hopefully, if you have the right logging in place, you’re collecting that data over time, and then you can continuously improve your model as you get more data and understand better how it works in practice. 
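
One possible shape for that update pathway is sketched below in MATLAB: newly logged production data is folded into retraining, and the retrained model is promoted only if it does better on a held-out slice of recent data. All file, table, and variable names are hypothetical, and the comparison logic is deliberately simplified.

```matlab
% Minimal sketch (hypothetical names) of a retrain-and-compare update pathway.
oldData = readtable("training_data.csv");
newData = readtable("production_logs.csv");      % data logged after deployment,
                                                 % assumed to share the same schema
cv       = cvpartition(height(newData), "HoldOut", 0.3);
newTrain = newData(training(cv), :);
newTest  = newData(test(cv), :);                 % recent data held out for comparison

currentModel = fitcensemble(oldData, "FaultLabel");
updatedModel = fitcensemble([oldData; newTrain], "FaultLabel");

% Promote the retrained model only if it performs better on recent data.
if loss(updatedModel, newTest, "FaultLabel") < loss(currentModel, newTest, "FaultLabel")
    save("deployed_model.mat", "updatedModel");  % hand-off point for redeployment
end
```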

Another challenge we often see is multiple frameworks being brought into the same project. We’ll have engineers using MATLAB and Simulink, working with AI models they’ve built in these tools, but they’re also working with colleagues who might be implementing models in other frameworks. So, we’ve put a lot of work into making sure we can incorporate and integrate models from other frameworks, not just our own.
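
As one illustration of what that integration can look like, the sketch below imports a hypothetical ONNX model exported from another framework. It assumes the Deep Learning Toolbox Converter for ONNX Model Format support package is installed; the exact arguments importONNXNetwork expects can vary by release.

```matlab
% Minimal sketch: bring a colleague's model, exported to ONNX, into MATLAB.
% "colleague_model.onnx" and inputData are hypothetical.
net = importONNXNetwork("colleague_model.onnx", "OutputLayerType", "classification");

% The imported network can then be exercised alongside models built natively in MATLAB.
scores = predict(net, inputData);   % inputData must already be in the network's expected format
```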

In what ways does simulation help with integrating, implementing, and testing AI model components, as well as with reducing hardware costs?

I think the top-level benefit is that it helps you understand how the model acts as part of the overall system – getting more and more of a feel for what the actual system is going to be. Does it have any unintended consequences on the adjacent parts of the system? Is the performance of the model good enough to keep up with the real-time or near real-time requirements of the rest of the system? Will I be able to run my AI model on the low-power processor or hardware that’s available for running the rest of the control algorithms in the system? 

Asking these questions is important to get a feel for what the actual system is going to be like. With Model-Based Design, we often talk about running your simulations, but then moving on to software-in-the-loop, processor-in-the-loop, and hardware-in-the-loop testing as you get closer and closer to what the actual design is going to be. All those things help you validate that you’re going to be able to run this model and that it’s going to behave as expected on the actual hardware that you’re heading towards.
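
As a simplified illustration of that progression, the sketch below runs a desktop simulation containing the AI component and checks one requirement before anything moves to the in-the-loop stages. The Simulink model name, logged signal name, and requirement threshold are all hypothetical, and the example assumes signal logging is configured in the model.

```matlab
% Minimal sketch (hypothetical model, signal, and threshold) of a desktop
% simulation run used to check an AI component before SIL/PIL/HIL testing.
simOut   = sim("vehicle_with_ai_controller");         % hypothetical Simulink model
response = simOut.logsout.get("controlled_output");   % assumes this signal is logged

% A simple requirement check on the simulated behaviour, e.g. limiting overshoot.
assert(max(response.Values.Data) <= 1.05, ...
    "AI component violates the overshoot requirement in simulation");
```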

And it all results in minimising hardware-driven testing cost. Doing as much as possible in simulation reduces the amount of hardware and testing that’s going to need to happen in the future. So, that’s really the value proposition here. In a world with infinite resources, I’m sure engineers would love to build prototypes early, but those are expensive. This is the fastest way to get things to market for a reasonable cost.

Which applications or markets could benefit most from simulation, and what examples can MathWorks share on the work it’s doing in this area?

We’ve seen a lot of interest from automotive and aerospace customers who are working on various complicated systems. Using AI to create reduced-order models enables them to scale up their simulations – really expanding what they’re able to simulate and the number of simulations they’re able to run.
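
In outline, that reduced-order modelling idea looks something like the sketch below: an expensive high-fidelity simulation is sampled at a modest number of design points, a fast surrogate is fitted to the results, and the surrogate is then evaluated at far more points than the full simulation could cover. The helper function and variable names are hypothetical, and a Gaussian process is only one possible choice of surrogate.

```matlab
% Minimal sketch of an AI-based reduced-order (surrogate) model.
designPoints = lhsdesign(200, 3);                % 200 samples of 3 design parameters
fullSimOut   = zeros(200, 1);
for k = 1:200
    % Placeholder for the expensive high-fidelity simulation (hypothetical function).
    fullSimOut(k) = runHighFidelitySim(designPoints(k, :));
end

% Fit a fast surrogate to the simulation results; Gaussian process regression is one option.
surrogate = fitrgp(designPoints, fullSimOut);

% The surrogate can now be evaluated cheaply at many more design points.
manyPoints = lhsdesign(10000, 3);
predicted  = predict(surrogate, manyPoints);
```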

For instance, we’re working with automotive companies on their exhaust after-treatment systems, hybrid powertrains, those sorts of things. The medical device space is another area where AI and simulation play a key role in innovation. We have customers implementing AI algorithms that are being used to help diagnose conditions, or to infer things from the sensor data coming from certain devices. 

One key element for these customers is that at MathWorks, we have a lot of experience with verification, validation, and certification – not just building these models, but being able to conform to the different requirements put forth in different industries. This means we are well placed to help customers get through FDA approval for those medical devices, or the regulatory requirements of other industries.
