Mar 24, 2021

Is the future of machine learning open source?

AI
Technology
IT
ML
Tilly Kenyon
3 min
Amazon Web Services partners with Hugging Face to simplify AI-based natural language processing

Voice-enabled digital assistants surround us, from our smartphones to our speakers, yet we often forget about the integrated technology that enables these devices to recognise us.

These capabilities rely on a technology called Natural Language Processing (NLP), which, put simply, trains machine learning models on data sets of text and speech to recognise words, understand the context and structure in which they appear, and derive meaning so the system can take some sort of action. Engineers have worked on NLP for years to refine the technology and improve its accuracy, expanding the number of languages, dialects and accents it can recognise.

Hugging Face and Amazon Web Services (AWS) have now partnered to bring over 7,000 NLP models to Amazon SageMaker with accelerated inference and distributed training. 

What is Hugging Face?

Founded in 2016, Hugging Face is a global leader in open-source machine learning (ML), with headquarters in New York and Paris. It is best known for its Transformers library, which makes it easy to access a range of popular natural language neural networks built on frameworks such as PyTorch and TensorFlow. The library provides thousands of "pre-trained models to perform tasks on texts, such as classification, information extraction, question answering, summarization, translation and text generation in more than 100 languages."
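For readers unfamiliar with the library, a minimal sketch of its pipeline API shows how little code is needed to use one of those pre-trained models (the tasks and example inputs here are illustrative):

```python
# Minimal sketch of the Transformers pipeline API. The first call to
# pipeline() downloads a default pre-trained model for the named task.
from transformers import pipeline

# Sentiment classification with a default pre-trained model
classifier = pipeline("sentiment-analysis")
print(classifier("Open-source machine learning is moving fast."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Translation, one of the many language tasks the library supports
translator = pipeline("translation_en_to_fr")
print(translator("Machine learning is the future."))
# e.g. [{'translation_text': "L'apprentissage automatique est l'avenir."}]
```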

“Hugging Face is a resource for startups and other businesses around the world. Our transformers can help them build virtually any natural language processing application at a fraction of the time, cost, and complexity they could achieve on their own, helping organizations take their solutions to market quickly,” said Clement Delangue, CEO of Hugging Face.

Why does AWS' involvement matter? 

The partnership between AWS and Hugging Face will bring more than 7,000 NLP models to Amazon SageMaker, an ML service used to build, train and deploy machine learning models. 
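The SageMaker Python SDK exposes this integration through a dedicated Hugging Face estimator. As a hedged sketch of how a training job might be launched (the training script, IAM role, S3 path and framework version pins below are placeholders, not values from the announcement):

```python
# Sketch of launching a Hugging Face training job on SageMaker.
# entry_point, role, S3 paths and version pins are all placeholders.
from sagemaker.huggingface import HuggingFace

estimator = HuggingFace(
    entry_point="train.py",                  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_type="ml.p3.2xlarge",           # one GPU instance
    instance_count=1,                        # raise for distributed training
    transformers_version="4.6",              # illustrative version pins
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters={"epochs": 3, "model_name": "distilbert-base-uncased"},
)

estimator.fit({"train": "s3://my-bucket/train"})  # placeholder S3 channel
```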

Hugging Face also announced two new services built on Amazon SageMaker: AutoNLP, which provides an automatic way to train and deploy state-of-the-art NLP models for a range of tasks, and the Accelerated Inference API, a hosted service for running predictions from pre-trained models in the cloud and at the edge.
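The Accelerated Inference API itself is a plain HTTP endpoint, so a hosted model can be queried with a simple POST request. A minimal sketch (the model ID and token are placeholders):

```python
# Sketch of calling Hugging Face's hosted Accelerated Inference API.
# The model ID and bearer token below are placeholders.
import requests

API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
headers = {"Authorization": "Bearer hf_your_token_here"}

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "The partnership makes NLP far easier to adopt."},
)
print(response.json())  # e.g. [[{'label': 'POSITIVE', 'score': 0.99...}]]
```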

The startup has also chosen AWS as its preferred cloud provider. The collaboration will allow customers of both AWS and Hugging Face to easily train their language models and "take advantage of everything from text generation to summarization to translation to conversational chat bots, reducing the impacts of language barriers and lack of internal machine learning expertise on a business’s ability to expand."

What's the future of open source and AI? 

Open source has had an undeniable impact on the IT industry over the past few years, and when it comes to AI and machine learning, open-source technology is all about high-speed innovation.

The influx of new technologies such as machine learning, AI and robotics has allowed developers to solve testing and other engineering problems by drawing on the open source community and learning from some of the best developers in the field.

There is no doubt that technology will continue to develop, and it is likely that AI and open source will keep growing alongside one another.


Jun 10, 2021

Google is using AI to design faster and improved processors

AI
ML
Google
processors
2 min
Google scientists claim their new method of designing Google’s AI accelerators has the potential to save thousands of hours of human effort

Engineers at Google are now using artificial intelligence (AI) to design faster and more efficient processors, then using the resulting chip designs to develop the next generation of specialised computers that run the same type of AI algorithms.

Google designs its own computer chips rather than buying commercial products. This allows the company to optimise the chips to run its own software, but the process is time-consuming and expensive, usually taking two to three years per design.

Floorplanning, a stage of chip design, involves taking the finalised circuit diagram of a new chip and arranging the components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge impact on speed and power consumption. 

Previously, floorplanning has been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on the parts in parallel, fiddling around to find small refinements, she says.

Fast chip design

In a new paper, Googlers Azalia Mirhoseini and Anna Goldie, and their colleagues, describe a deep reinforcement-learning system that can create floorplans in under six hours. 

They have created a convolutional neural network system that performs the macro block placement by itself within hours to achieve an optimal layout; the standard cells are automatically placed in the gaps by other software. This ML system should be able to produce an ideal floorplan far faster than humans at the controls, and according to the AI scientists, the neural network gradually improves its placement skills as it gains experience.
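The paper's learned policy is far more sophisticated, but a toy sketch conveys the shape of the placement problem it solves: put macro blocks on a grid one at a time and score candidate layouts by an approximate wirelength cost. Everything below (the grid, netlist and greedy placement rule) is invented for illustration and is not Google's method:

```python
# Toy illustration of sequential macro placement, scored by
# half-perimeter wirelength (HPWL). Google's system replaces the greedy
# choice below with a learned policy network; this invented example only
# shows the structure of the problem.
GRID = 8
macros = ["cpu", "cache", "dma", "io"]                    # invented blocks
nets = [("cpu", "cache"), ("cpu", "dma"), ("dma", "io")]  # invented netlist

def hpwl(placement):
    """Sum of half-perimeter wirelengths over nets whose endpoints are placed."""
    total = 0
    for a, b in nets:
        if a in placement and b in placement:
            (ax, ay), (bx, by) = placement[a], placement[b]
            total += abs(ax - bx) + abs(ay - by)
    return total

placement = {}
for m in macros:
    # Try every free cell and keep the cheapest; an RL agent would
    # instead sample a cell from a policy and learn from the reward.
    free = [(x, y) for x in range(GRID) for y in range(GRID)
            if (x, y) not in placement.values()]
    placement[m] = min(free, key=lambda cell: hpwl({**placement, m: cell}))

print(placement, "wirelength:", hpwl(placement))
```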

In their paper, the Googlers said their neural network is "capable of generalising across chips — meaning that it can learn from experience to become both better and faster at placing new chips — allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."

Generating a floorplan can take less than a second using a pre-trained neural net, and with up to a few hours of fine-tuning the network, the software can match or beat a human at floorplan design, depending on which metric is used, according to the paper.

"Our method was used to design the next generation of Google’s artificial-intelligence accelerators, and has the potential to save thousands of hours of human effort for each new generation," the Googlers wrote. "Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.
