Nvidia has announced that it is acquiring Run:AI, a Tel Aviv-based company that provides developers and operations teams with tools to efficiently manage and optimise their AI hardware infrastructure. Two sources familiar with the matter put the deal's terms at $700 million.
According to a report from CTech this morning, the companies had been in talks that could have seen Nvidia acquire Run:AI for a significantly larger sum, possibly exceeding $1 billion. The negotiations reportedly went smoothly, apart from an adjustment to the price.
Nvidia has confirmed its commitment to maintaining Run:AI's products and business model, and says it will invest in Run:AI's product roadmap as part of its DGX Cloud AI platform, which gives enterprise customers access to compute infrastructure and software for training AI models, including generative AI. Customers using Nvidia DGX servers, DGX workstations, and DGX Cloud will gain access to Run:AI's capabilities for their AI workloads, which is especially useful for generative AI deployments that span multiple data centre locations.
Omri Geller, CEO of Run:ai, said: "Run:ai has maintained a strong partnership with Nvidia since 2020, and we are committed to assisting our customers in optimising their infrastructure. We are excited to partner with Nvidia and eager to continue our collaboration."
Geller and Ronen Dar founded Run:AI a few years ago, after studying at Tel Aviv University under professor Meir Feder, a third co-founder. The three set out to build a platform capable of dividing AI models into fragments that can run in parallel across different hardware setups, whether on-premises, in public clouds, or at the edge.
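Run:AI's software is proprietary, but the general idea of splitting one model's work across heterogeneous hardware can be sketched with a toy placement routine. Everything below is an illustrative assumption (the device names, capacities, and the greedy most-free-memory policy are invented for the example, not Run:AI's actual algorithm):

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    free_mem_gb: float

def place_layers(layer_sizes_gb, devices):
    """Greedily assign each model layer to whichever device currently has
    the most free memory, mimicking the idea of splitting a single model
    across several hardware targets (on-prem, cloud, edge)."""
    placement = {}
    for i, size in enumerate(layer_sizes_gb):
        dev = max(devices, key=lambda d: d.free_mem_gb)
        if dev.free_mem_gb < size:
            raise MemoryError(f"no device has room for layer {i}")
        dev.free_mem_gb -= size  # reserve capacity on the chosen device
        placement[i] = dev.name
    return placement

# Hypothetical fleet: one on-prem GPU, one cloud GPU, one edge GPU.
fleet = [Device("onprem-gpu", 16.0), Device("cloud-gpu", 24.0), Device("edge-gpu", 8.0)]
print(place_layers([6.0, 6.0, 6.0, 6.0], fleet))
```

A real orchestrator would also account for interconnect bandwidth and layer dependencies; the sketch only conveys the fragmentation-and-placement idea.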
Run:AI is not without competition: other companies are also applying the idea of dynamically allocating hardware for AI workloads. Grid.ai, for example, provides software that lets data scientists train AI models in parallel across GPUs, processors, and other resources.
With its early success, Run:AI quickly built a substantial customer base of Fortune 500 companies, an achievement that also drew the attention of venture capitalists. Before the acquisition, Run:AI had raised a total of $118 million from investors including Insight Partners, Tiger Global, S Capital, and TLV Partners.
In a blog post, Alexis Bjorlin, Nvidia's VP of DGX Cloud, highlighted the rising complexity of customer AI deployments and the growing demand among companies to optimise their AI computing resources.
Organisations implementing AI face significant challenges in scaling their projects in 2024, according to a recent survey by ClearML, a machine learning model management company. The primary obstacle is the limited availability and high cost of compute, with infrastructure issues following closely behind.
"Efficiently managing and coordinating generative AI, recommender systems, search engines, and other workloads necessitates advanced scheduling techniques to maximise performance both at the system level and on the underlying infrastructure," Bjorlin explained. Customers will retain the flexibility to choose from a wide range of third-party solutions, as Nvidia's accelerated computing platform and Run:AI's platform will continue to be supported. With Run:AI, Nvidia aims to give customers a seamless way to access GPU solutions from anywhere.