Both companies will collaborate across open science, open source, cloud and hardware to allow businesses to build their own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud.
Hugging Face’s ultimate goal is to make it easy for data scientists, machine learning engineers and developers to access the latest AI models and use them within the platform of their choice. This comes in the midst of industry leaders increasingly promoting the importance of open-source AI.
Making the latest AI research accessible
The partnership is designed to make it easy for Google Cloud customers to train and deploy Hugging Face models within Google Kubernetes Engine (GKE) and Vertex AI.
As a result, customers should benefit from the hardware capabilities available in Google Cloud, allowing them to train, tune and serve open models quickly and cost-effectively on AI-optimised infrastructure such as TPUs and GPUs.
This announcement comes in the wake of companies like Microsoft and Meta becoming increasingly interested in deploying new AI technologies. As AI continues to evolve, enterprise demand for the technology is growing.
Businesses of all sizes have become increasingly interested in deploying AI for their own purposes, whether to enhance their products or to serve internal needs such as improving workplace productivity and efficiency.
In response to these growing needs, larger AI developers are starting to make their AI more accessible and open source, so that businesses can build their own AI software optimised for specific tasks.
“Google Cloud and Hugging Face share a vision for making Gen AI more accessible and impactful for developers,” says Thomas Kurian, CEO of Google Cloud.
He tells Reuters that the demand for cloud-based AI computing suggests that, at its current trajectory, it will overtake the traditional cloud software market in the future. He says: “It always starts infrastructure up because you first have to put in the machines that reflect the demand - and grow from there.”
Utilising AI for good: Protecting business data
Another popular use case is protecting against cyberattacks and data breaches by putting AI-based safety frameworks in place. As AI grows more sophisticated, cyberattacks are expected to become more personalised, targeting both individuals and small businesses.
As these types of breaches become more common, businesses are seeking new and improved ways to protect their data, while also weighing the ethical implications of harnessing AI for good.
This ever-changing AI landscape led to further announcements last week (January 2024), when the United States teamed up with leading technology companies including Microsoft, Amazon and IBM to launch an AI pilot programme that provides researchers and educators with access to high-powered AI technologies.
The programme will open up AI research via access to diverse AI resources, and aims to reach new communities through education, training, user support and outreach.
With its commitment to open source, open models and AI safety, Hugging Face has fast become one of the most popular platforms for hosting models, datasets and inference endpoints.
By making AI software open source, Hugging Face aims to democratise AI and enable businesses to build their own models easily.
“With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimised AI infrastructure and tools from Google Cloud including Vertex AI and TPUs to meaningfully advance developers’ ability to build their own AI models,” says Clement Delangue, CEO of Hugging Face.
AI Magazine is a BizClik brand