Nvidia and Dell aim to bring generative AI on-premises with Project Helix

Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more

Dell and Nvidia are expanding their long-standing partnership with a new initiative, Project Helix, aimed at bringing the power of generative AI to enterprises.

Project Helix is an effort to combine hardware, software and services from the two vendors to help enterprises take advantage of the emerging capabilities of large language models (LLMs) and generative AI. The initiative will include validated designs and reference deployments to help organizations run generative AI workloads.

On the hardware side, Dell PowerEdge servers, including the PowerEdge XE9680 and R760xa, will be powered by Nvidia H100 Tensor Core GPUs. The hardware stack integrates with Dell PowerScale and Dell ECS enterprise storage. The software package includes Nvidia AI Enterprise as well as capabilities from the Nvidia NeMo framework for generative AI.

Much of the generative AI work to date has run in the cloud, but that’s not necessarily a burden every business wants to carry.



“We provide security and privacy for enterprise customers,” Kari Briski, Nvidia’s VP of software product management, told VentureBeat. “Every business needs an LLM for their business, so it just makes sense to do it locally.”

Project Helix aims to enable LLMOps for enterprises

The reality for many businesses is that there is no need to build a new LLM from scratch. Rather, most enterprises will adapt a prebuilt foundation model to understand the organization’s data.
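Adapting a prebuilt foundation model usually means parameter-efficient fine-tuning rather than retraining from scratch. One common technique is low-rank adaptation (LoRA), where the pretrained weights stay frozen and only two small matrices are trained. The sketch below illustrates the idea in plain NumPy; the dimensions and names are illustrative, and this is not Nvidia’s NeMo API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix, standing in for one layer of a foundation model.
d, k, r = 64, 64, 4          # layer dims and low-rank bottleneck (hypothetical sizes)
W = rng.normal(size=(d, k))  # frozen during adaptation

# LoRA: train only two small matrices A (d x r) and B (r x k);
# the adapted layer computes x @ (W + A @ B).
A = rng.normal(scale=0.01, size=(d, r))
B = np.zeros((r, k))         # zero init: the adapted model starts identical to the base

def adapted_forward(x):
    return x @ W + (x @ A) @ B

x = rng.normal(size=(8, d))
# Before any training, the adapter is a no-op.
assert np.allclose(adapted_forward(x), x @ W)

# Trainable parameters shrink from d*k to r*(d+k).
print(d * k, r * (d + k))  # prints: 4096 512
```

Only A and B would be updated against the organization’s data, which is why adapting a foundation model is far cheaper than building one from scratch.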

Briski acknowledged that the term “generative AI” has become a buzzword. The combination of Dell hardware with Nvidia hardware and software is also about enabling what Briski called LLMOps: the ability to run LLMs for enterprise use cases.

Nvidia and Dell are hardly strangers; the two vendors have been collaborating on hardware solutions for years. Briski emphasized, however, that Project Helix is different from anything the two companies have worked on together to date.

“What we haven’t done is provide these prebuilt foundation models that can be easily replicated,” she said.

Leveraging AI, regardless of location

Briski explained that Project Helix’s blueprints will provide guidance to help enterprises deploy generative AI workloads tailored to an organization’s specific use cases. She noted that it can be daunting for an organization to optimize a model’s latency and throughput in real time.
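The latency/throughput tension she describes comes from batching: grouping requests amortizes fixed per-step overhead, raising throughput, but each request then waits on the whole batch. A minimal timing harness makes the tradeoff visible — `fake_llm_step` is a hypothetical stand-in for a model forward pass, not any real inference API:

```python
import time
import statistics

def fake_llm_step(batch):
    # Hypothetical stand-in for one model forward pass: fixed overhead
    # plus a per-item cost, mimicking how GPU batching behaves.
    time.sleep(0.005 + 0.001 * len(batch))
    return ["output"] * len(batch)

def measure(batch_size, n_requests=32):
    latencies = []
    t0 = time.perf_counter()
    for _ in range(n_requests // batch_size):
        start = time.perf_counter()
        fake_llm_step(list(range(batch_size)))
        latencies.append(time.perf_counter() - start)
    elapsed = time.perf_counter() - t0
    # Per-batch latency and overall requests per second.
    return statistics.mean(latencies), n_requests / elapsed

for bs in (1, 4, 16):
    lat, thr = measure(bs)
    print(f"batch={bs:2d}  latency={lat * 1000:5.1f} ms  throughput={thr:6.1f} req/s")
```

Running this shows larger batches raising per-request latency while multiplying throughput; picking the operating point for a given use case is the kind of tuning the Project Helix blueprints are meant to guide.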

Varun Chhabra, Dell’s SVP of product marketing for the infrastructure solutions group and telecom, told VentureBeat that it’s important to understand how compute, storage and networking work together to enable real-time generative AI workloads. Determining the right mix of computing resources is important, and best practices for this are included within the Project Helix initiative.

By running generative AI on Dell hardware, Chhabra expects that organizations will also be able to take advantage of AI wherever they want to deploy it, whether on-premises, at the edge or in the cloud.

Chhabra is particularly optimistic about Project Helix’s potential. The name Helix is a nod to the double helix structure of DNA, the building block of life on Earth.

“If you think about the double helix and what it means for life, we felt it was a very apt metaphor for what we think is happening with generative artificial intelligence, not just to change people’s lives, but more specifically what will happen in enterprises and what this will open up for our customers,” said Chhabra.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
