Deep learning used to simulate aircraft turbulence
Scientists from the University of Illinois have made a breakthrough in representing physics through AI.
The researchers focused on aircraft turbulence to test their model, which was able to successfully simulate the physics of the phenomenon without knowing its precise mathematics.
They hope their method will help aerospace engineers to design aircraft and spacecraft prototypes which respond to wind currents more effectively. It may also have further applications in industry for models with uncertain mathematical details.
The study – DPM: a deep learning PDE augmentation method with application to large-eddy simulation – was published in the Journal of Computational Physics. It is part of the Blue Waters supercomputer project.
Jonathan Freund, head of the department of aerospace engineering at the University of Illinois, said, “We don’t know how to mathematically write down all of turbulence in a useful way. There are unknowns that cannot be represented on the computer, so we used a machine learning model to figure out the unknowns. We trained it on both what it sees and the physical governing equations at the same time as a part of the learning process. That’s what makes it magic and it works.
“It’s an old problem. People have been struggling to simulate turbulence and to model the unrepresented parts of it for a long time.
“We learned that if you try to do the machine learning without considering the known governing equations of the physics, it didn’t work. We combined them and it worked.
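The idea Freund describes, training a model jointly on observed data and the known governing equations, is the core of physics-informed machine learning. The sketch below is a toy illustration of that combination, not the paper's actual DPM method: a steady 1-D viscous Burgers problem where an unknown source term stands in for the "unrepresented" physics, and a single loss sums a data-mismatch term and a governing-equation residual. All names and the optimizer are illustrative.

```python
import numpy as np

# Toy problem: a steady 1-D viscous Burgers field
#     u * u_x = nu * u_xx + s(x),
# where s(x) is "unrepresented" physics we model with parameters theta.
nu = 0.1
x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]

def ddx(u):
    # Central finite difference on a periodic domain.
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

# Synthetic observations: u = sin(x) is a steady solution when
# s(x) = 0.5*sin(2x) + nu*sin(x), i.e. theta_true = (0.5, 0.1).
u_obs = np.sin(x)

def model_field(w):
    # Learned representation of the resolved field.
    return w[0] * np.sin(x) + w[1] * np.cos(x)

def closure(theta):
    # Learned model of the unknown source s(x; theta).
    return theta[0] * np.sin(2 * x) + theta[1] * np.sin(x)

def loss(p):
    w, theta = p[:2], p[2:]
    u = model_field(w)
    data = np.mean((u - u_obs) ** 2)                 # match what we observe
    resid = u * ddx(u) - nu * ddx(ddx(u)) - closure(theta)
    physics = np.mean(resid ** 2)                    # match the governing PDE
    return data + physics

# Plain finite-difference gradient descent as an optimizer stand-in.
p = np.zeros(4)
eps, lr = 1e-6, 0.1
for _ in range(5000):
    g = np.array([(loss(p + eps * e) - loss(p - eps * e)) / (2 * eps)
                  for e in np.eye(4)])
    p -= lr * g

print(p)  # should approach [1, 0, 0.5, 0.1]: field and closure recovered together
```

Dropping either term illustrates Freund's point: the data term alone leaves the closure parameters unconstrained, while the physics term alone cannot pin down the field.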
‘Simulations for any physical phenomena’
“Anyone who wants to do simulations of physical phenomena might use this new method. They would take our approach and load data into their own software. It’s a method that would admit other unknown physics. And the observed results of that unknown physics could be loaded in for training.
“The turbulent flow we used to demonstrate the method is a very simple configuration. Real flows are more complex. I’d also like to use the method for turbulence with flames in it – a whole additional type of physics. It’s something we plan to continue to develop in the new Center for Exascale-enabled Scramjet Design, housed in NCSA.
“Universities were very active in the first turbulence simulations, then industry picked them up. The first university-based large-eddy simulations looked incredibly expensive in the 80s and 90s. But now companies do large-eddy simulations. We expect this prediction capability will follow a similar path. I can see a day in the future with better techniques and faster computers that companies will begin using.”
Facebook Develops AI to Crack Down on Deepfakes
In light of the tidal wave of increasingly believable deepfake images and videos hitting the feeds of every major social media platform and news outlet in recent years, organisations worldwide have started to consider the risks they pose. While the majority of deepfakes are created purely for amusement, their increasing sophistication leads to a very simple question: What happens when a deepfake is produced not for amusement, but for malicious intent on a grander scale?
Yesterday, Facebook revealed that it was also concerned by that very question and that it had decided to take a stand against deepfakes. In partnership with Michigan State University, the social media giant presented “a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.”
The promise is that Facebook’s method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with.
Why Reverse Engineering?
Right now, researchers identify deepfakes through two primary methods: detection, which distinguishes between real and deepfake images, and image attribution, which matches a deepfake to the generative model that produced it, provided that model was seen during training. But generative photo techniques have advanced in scale and sophistication over the past few years, and the old strategies are no longer sufficient.
First, attribution is limited to the models represented in training: if a deepfake was generated by an unknown, alternative model, attribution cannot trace it. Reverse engineering, a common practice in machine learning (ML), can uncover the unique patterns left by the generating model, regardless of whether that model was included in the training set. This makes it possible to discover coordinated deepfake attacks and other instances in which multiple deepfakes come from the same source.
How It Works
Before we could use deep learning to generate images, criminals and other ill-intentioned actors had a limited set of options. Cameras came in only so many makes and models, and researchers could often identify which one produced a given image. But deep learning has ushered in an age of near-endless options, and as a result it has grown increasingly difficult to trace deepfakes to their source.
To counteract this, Facebook ran deepfakes through a fingerprint estimation network (FEN) to estimate some of their details. Fingerprints are subtle patterns left on an image by the device or model that produced it (in cameras, they arise from manufacturing imperfections), and they help identify where the image came from. By evaluating the fingerprint's magnitude, repetition frequency, and symmetry, Facebook could then predict the generative model's hyperparameters.
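The pipeline described above, estimate a fingerprint, then summarise its magnitude, repetition frequency, and symmetry so a downstream model can predict hyperparameters, can be pictured with a toy version. Everything below is a hypothetical stand-in for illustration, not Facebook's actual FEN: a box-blur residual plays the role of the learned fingerprint estimator, and a synthetic periodic artifact plays the role of a generator's trace.

```python
import numpy as np

def estimate_fingerprint(image):
    """Crude stand-in for a fingerprint estimation network: the residual
    left after a 3x3 box blur (periodic boundaries via np.roll)."""
    smooth = sum(np.roll(np.roll(image, i, axis=0), j, axis=1)
                 for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return image - smooth

def fingerprint_features(fp):
    """Summaries loosely matching the properties named in the article:
    magnitude, repetition frequency, and symmetry."""
    magnitude = float(np.sqrt(np.mean(fp ** 2)))
    spec = np.abs(np.fft.fft2(fp))
    spec[0, 0] = 0.0                       # ignore the DC component
    peak = np.unravel_index(int(np.argmax(spec)), spec.shape)
    mirror_sym = float(np.corrcoef(fp.ravel(), fp[:, ::-1].ravel())[0, 1])
    return magnitude, peak, mirror_sym

# Demo: a noisy image carrying a periodic "generator artifact" that repeats
# 8 times across its width; the features recover that repetition frequency.
rng = np.random.default_rng(0)
n = 64
xx = np.arange(n)[None, :] * np.ones((n, 1))
image = rng.normal(size=(n, n)) + 1.5 * np.sin(2 * np.pi * 8 * xx / n)
mag, peak, sym = fingerprint_features(estimate_fingerprint(image))
print(mag, peak, sym)  # peak lands on spatial frequency 8 (or its mirror, 56)
```

In the real system these summaries would feed a learned regressor that maps fingerprint properties to the generating model's hyperparameters; here they simply show that a hidden generator's periodic trace is recoverable from the image alone.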
What are hyperparameters? If you imagine a generative model as a car, hyperparameters are similar to the engine components: certain properties that distinguish your fancy automobile from others on the market. ‘Our reverse engineering technique is somewhat like recognising [the engine] components of a car based on how it sounds’, Facebook explained, ‘even if this is a new car we’ve never heard of before’.
What Did They Find?
‘On standard benchmarks, we get state-of-the-art results’, said Facebook research lead Tal Hassner. Facebook added that the fingerprint estimation network (FEN) method can be used not only for model parsing, but also for detection and image attribution. Because this research is the first of its kind, there are no direct baselines against which to judge the results, but the future looks promising.
Facebook’s AI will introduce model parsing for real-world applications, increasing our understanding of deepfake detection. As cybersecurity attacks proliferate, and generative AI falls into the hands of those who would do us harm, this method could help the ‘good guys’ stay one step ahead. As Hassner explained: ‘This is a cat-and-mouse game, and it continues to be a cat-and-mouse game’.