Oct 9, 2020

Measuring modern high-performance computing for AI and more

AI
Nvidia
high performance computing
healthcare
Scot Schultz
4 min
The largest HPC and AI systems today are enabling a new wave of recommendation systems and conversational AI applications

Supercomputers are deployed all over the world to solve some of the biggest challenges faced by humankind. These room-sized machines, millions of times more powerful than any laptop, are capable of dizzyingly fast computational feats. They were once exclusively at the disposal of organisations such as large government laboratories, NASA and the biggest players in sectors like manufacturing, finance, oil and gas, and aerospace. But changes are now afoot in the way supercomputers are designed and built, opening them up to a new range of use cases. Benefitting from a new generation of processing power and ultra-fast networking, we are entering a new and perhaps more democratised era of high-performance computing (HPC).

Graphics processing units (GPUs) are taking over from central processing units (CPUs) as the computational workhorses, delivering significantly more throughput. GPU-based systems offer a smaller footprint than legacy HPC systems, and they also operate more efficiently and at lower operational cost.

But as computing horsepower increases, so does the demand for maximal data throughput. This need for high throughput and very low latency is being met by InfiniBand, a networking standard commonly used in the world of HPC.

A strong supporting ecosystem is another sure sign of this democratisation. With more than 600 HPC applications now taking advantage of GPUs and InfiniBand networking to accelerate performance, adoption continues to be strong in both business and research.

Pioneering the next generation of AI

Another emerging use for this increasingly accessible processing power lies in enabling artificial intelligence. There is a trend towards using massive AI models, and that is changing how AI is built. 

Microsoft, for example, is a pioneer in AI and uses both GPUs and InfiniBand at scale. By utilising state-of-the-art supercomputing in its Azure platform to power a new class of large-scale models, Microsoft is enabling a whole new generation of AI. By using massive amounts of data, these large-scale models only need to be trained once. Then, the models can be fine-tuned for different tasks and domains with much smaller datasets and resources.
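
To make this "train once, fine-tune many times" pattern concrete, here is a minimal sketch in PyTorch. The tiny network, random data and sizes are placeholders standing in for a large pretrained model and a small labelled task dataset; nothing here is Microsoft's or Azure's actual pipeline. The expensive pretrained weights are frozen, and only a small task-specific head is trained on the smaller dataset.

```python
# A minimal sketch of the "pretrain once, fine-tune per task" pattern.
# The tiny network and random data are placeholders, not real pretraining.
import torch
import torch.nn as nn

# Stand-in for the expensively pretrained model (in practice this would be
# loaded from a checkpoint rather than built from scratch).
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
for p in encoder.parameters():
    p.requires_grad = False          # freeze the pretrained weights

head = nn.Linear(64, 5)              # small task-specific classifier head
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fine-tuning loop over a (placeholder) small labelled dataset.
inputs = torch.randn(32, 128)
labels = torch.randint(0, 5, (32,))
for _ in range(10):
    loss = loss_fn(head(encoder(inputs)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final fine-tuning loss: {loss.item():.3f}")
```

The same frozen encoder can be reused for many such heads, which is what turns the enormous pretraining cost into a one-off investment.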

The importance of measuring performance

As HPC use cases broaden, more supercomputers are being built to ever faster and more powerful specifications. It remains as important as ever to understand how different HPC machines compare with each other. Hence the significance of the TOP500 project, which ranks and details the 500 most powerful non-distributed computer systems in the world. The project started back in 1993 and still publishes an updated list of supercomputers twice a year, now covering a far greater range of machines than in its early days.

The value of the TOP500 project lies in providing a reliable basis for tracking and detecting trends in high-performance computing. But let’s consider for a moment the benchmarks that are used to quantify HPC. 

Historically the foremost of these has been the HPL benchmark, a portable implementation of the High-Performance Linpack benchmark. It provides the reference data for the TOP500 and is the key tool in ranking supercomputers worldwide. However, it only measures compute power in the form of floating-point operations per second (flops).
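
For a sense of what that figure means: HPL times the solution of a dense n-by-n linear system, and the conventional operation count it uses is 2n³/3 + 2n² floating-point operations, so the reported rate is simply that count divided by wall-clock time. A back-of-the-envelope sketch, with an invented problem size and runtime:

```python
# Back-of-the-envelope HPL rate: the benchmark solves a dense n-by-n linear
# system and uses 2/3*n^3 + 2*n^2 as the floating-point operation count,
# divided by wall-clock time. The numbers below are invented for illustration.
n = 100_000          # matrix dimension
seconds = 1_800.0    # measured time to solve Ax = b

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"{flops / seconds / 1e9:.1f} Gflop/s")   # ~370.4 Gflop/s for these numbers
```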

The HPCG benchmark (High Performance Conjugate Gradients) was created as an alternative, offering another metric for ranking HPC systems and intended as a complement to HPL. It is not, however, integrated into the TOP500 ranking.
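
The conjugate gradient iteration that gives HPCG its name looks roughly like the sketch below. This is a plain, unpreconditioned CG loop on a small dense test matrix, purely for illustration; the actual benchmark runs a preconditioned CG on a very large, structured sparse problem designed to stress memory access patterns that are more representative of real applications than HPL's dense kernel.

```python
# A minimal conjugate gradient iteration, the numerical kernel HPCG is named
# after. The small dense test system here is only for illustration.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)      # symmetric positive definite matrix
b = rng.standard_normal(200)

x = np.zeros(200)
r = b - A @ x                        # residual
p = r.copy()                         # search direction
rs_old = r @ r

for _ in range(200):
    Ap = A @ p
    alpha = rs_old / (p @ Ap)        # step length along the search direction
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-8:       # converged
        break
    p = r + (rs_new / rs_old) * p    # new conjugate search direction
    rs_old = rs_new

print("residual norm:", np.linalg.norm(b - A @ x))
```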

As we have already seen, artificial intelligence is now a key part of the HPC landscape, and so a new and more suitable benchmark is regarded by some as a necessary recognition of this trend.

A new metric for modern day HPC systems

MLPerf is a new type of benchmarking organisation. Right in line with the age of AI supercomputing, its mission is to build fair and useful benchmarks for measuring training and inference performance of machine learning (ML) hardware, software and services. Its growing acceptance is making it a useful tool for researchers, developers, hardware manufacturers, builders of machine learning frameworks, cloud service providers, application providers, and of course end users.

Its goals revolve around accelerating the progress of ML through fair and useful measurement that serves both the commercial and research communities. It also seeks to enable a more equitable basis for comparing competing systems, while encouraging innovation. Perhaps the part of its ethos that most marks it out from other HPC benchmarks is its commitment to keeping benchmarking affordable so that all can participate. MLPerf is backed by organisations including Amazon, Baidu, Facebook, Google, Harvard, Intel, Microsoft and Stanford, and it is constantly evolving to remain relevant as AI itself evolves.
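
For training workloads, for example, an MLPerf result is reported as the time taken to reach a defined quality target rather than as a raw flops figure. A self-contained skeleton of that "time to target quality" idea is sketched below; the training and evaluation steps are simulated placeholders, not MLPerf's reference implementations.

```python
# Skeleton of a "time to target quality" measurement, the style of metric
# MLPerf reports for training. The training and evaluation steps below are
# simulated placeholders, not MLPerf's reference code.
import random
import time

TARGET_QUALITY = 0.759   # e.g. a required validation accuracy (placeholder)

def train_one_epoch(state):
    # Placeholder: stands in for a full pass over the training data.
    state["quality"] += random.uniform(0.01, 0.05)

def evaluate(state):
    # Placeholder: stands in for measuring accuracy on a held-out set.
    return state["quality"]

def time_to_target():
    state = {"quality": 0.0}
    start = time.perf_counter()
    epochs = 0
    while evaluate(state) < TARGET_QUALITY:
        train_one_epoch(state)
        epochs += 1
    return time.perf_counter() - start, epochs

elapsed, epochs = time_to_target()
print(f"reached quality {TARGET_QUALITY} after {epochs} epochs in {elapsed:.4f} s")
```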

The largest HPC and AI systems today are not only tackling traditional HPC workloads in new ways, powered by GPUs with InfiniBand networking, but are also enabling a new wave of recommendation systems and conversational AI applications, while others drive the quest for personalised and precision medicine. All the while we are moving beyond the traditional CPU-based systems that used to dominate HPC research. Compute at the top level is no longer the preserve of an elite.

By Scot Schultz, Sr. Director, HPC / Technical Computing, NVIDIA networking business unit


Jun 17, 2021

Facebook Develops AI to Crackdown on Deepfakes

Facebook
MSU
AI
Deepfakes
3 min
Social media giant, Facebook, has developed artificial intelligence that can supposedly identify and reverse-engineer deepfake images

In light of the tidal wave of increasingly believable deepfake images and videos that has been hitting the feeds of every major social media and news outlet in recent years, global organisations have started to consider the risks they pose. While the majority of deepfakes are created purely for amusement, their increasing sophistication leads to a very simple question: What happens when a deepfake is produced not for amusement, but with malicious intent on a grander scale?

 

Yesterday, Facebook revealed that it was also concerned by that very question and that it had decided to take a stand against deepfakes. In partnership with Michigan State University, the social media giant presented “a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.” 

 

The promise is that Facebook’s method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with. 

Why Reverse Engineering? 

Right now, researchers identify deepfakes through two primary methods: detection, which distinguishes between real and deepfake images, and image attribution, which identifies which known generative model produced the image. But generative photo techniques have advanced in scale and sophistication over the past few years, and the old strategies are no longer sufficient.

 

First, attribution can only point to models that were seen during training. If the deepfake was generated by an unknown, alternative model, even artificial intelligence hasn't been able to spot it, until now. Reverse engineering, a common practice in machine learning (ML), can uncover the unique patterns left by the generating model, regardless of whether it was included in the AI's training set. This helps discover coordinated deepfake attacks or other instances in which multiple deepfakes come from the same source.

 

How It Works 

Before deep learning could be used to generate images, criminals and other ill-intentioned actors had a limited set of options: cameras offered only so many tools, and researchers could readily identify particular makes and models. But deep learning has ushered in an age of endless options, and as a result, it's grown increasingly difficult to identify deepfakes.

 

To counteract this, Facebook ran deepfakes through a fingerprint estimation network (FEN) to estimate some of their details. Fingerprints are essentially tell-tale patterns left on an image by whatever produced it, much as manufacturing imperfections leave traces of a particular camera, and they help identify where the image came from. By estimating properties such as the fingerprint's magnitude, repetition frequency and frequency symmetry, and using them as constraints, Facebook could then predict the model's hyperparameters.
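
As a rough illustration of the kind of signal involved (this is not Facebook's FEN, just a simplified sketch): one common way to expose such a fingerprint is to subtract a smoothed copy of the image from itself, leaving a high-frequency residual, and then summarise that residual with a few spectral statistics.

```python
# Simplified illustration of treating an image "fingerprint" as a noise
# residual and summarising it with a few statistics. This is NOT Facebook's
# fingerprint estimation network (FEN); it only sketches the general idea of
# the kinds of properties (magnitude, periodicity, symmetry) mentioned above.
import numpy as np
from scipy.ndimage import median_filter

def fingerprint_stats(image: np.ndarray) -> dict:
    # High-frequency residual: the image minus a smoothed copy of itself.
    residual = image - median_filter(image, size=3)

    # Frequency-domain view of the residual, where generator artefacts tend
    # to show up as strong, regularly spaced peaks.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(residual)))

    return {
        "magnitude": float(np.mean(np.abs(residual))),   # overall fingerprint strength
        "spectral_peak": float(spectrum.max()),          # strongest periodic component
        # Rough left-right symmetry of the spectrum.
        "symmetry": float(np.corrcoef(spectrum.ravel(),
                                      spectrum[:, ::-1].ravel())[0, 1]),
    }

# Example on a random array standing in for a suspect image.
print(fingerprint_stats(np.random.default_rng(0).random((64, 64))))
```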

 

What are hyperparameters? If you imagine a generative model as a car, hyperparameters are similar to the engine components: certain properties that distinguish your fancy automobile from others on the market. ‘Our reverse engineering technique is somewhat like recognising [the engine] components of a car based on how it sounds’, Facebook explained, ‘even if this is a new car we’ve never heard of before’. 
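
In concrete terms, the "engine components" being inferred are design choices such as the size of the latent noise vector, the depth and width of the generator network, and the training objective. A purely illustrative example of such a hyperparameter set follows; the names and values are invented, not drawn from any model Facebook parsed.

```python
# Purely illustrative example of generative-model hyperparameters, the
# "engine components" of the analogy. The names and values are invented.
hyperparameters = {
    "latent_dim": 512,         # size of the random noise vector fed to the generator
    "num_layers": 8,           # depth of the generator network
    "channels_per_layer": 64,  # width of each convolutional layer
    "loss_type": "hinge",      # family of training objective
    "upsampling": "nearest",   # how resolution is increased between layers
}
```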

 

What Did They Find? 

‘On standard benchmarks, we get state-of-the-art results’, said Facebook research lead Tal Hassner. Facebook added that the fingerprint estimation network (FEN) method can be used for not only model parsing, but detection and image attribution. While this research is the first of its kind, making it difficult to assess the results, the future looks promising. 


Facebook’s AI will introduce model parsing for real-world applications, increasing our understanding of deepfake detection. As cybersecurity attacks proliferate, and generative AI falls into the hands of those who would do us harm, this method could help the ‘good guys’ stay one step ahead. As Hassner explained: ‘This is a cat-and-mouse game, and it continues to be a cat-and-mouse game’.
