Nutanix announced new functionality for Nutanix GPT-in-a-Box, including integrations with NVIDIA NIM inference microservices and the Hugging Face large language model (LLM) library. Additionally, the company announced the Nutanix AI Partner Program, aimed at bringing together leading AI solutions and services partners to support customers looking to run, manage, and secure generative AI (GenAI) applications on top of the Nutanix Cloud Platform and GPT-in-a-Box. Nutanix GPT-in-a-Box is a full-stack solution purpose-built to simplify enterprise AI adoption, with tight integration with Nutanix Objects and Nutanix Files for model and data storage.

The company announced GPT-in-a-Box 2.0, which will deliver expanded NVIDIA accelerated computing and LLM support, along with simplified foundation model management and integrations with NVIDIA NIM microservices and the Hugging Face LLM library. GPT-in-a-Box 2.0 will include a unified user interface for foundation model management, API endpoint creation, and end-user access key management, and will integrate Nutanix Files and Objects as well as NVIDIA Tensor Core GPUs. It will bring Nutanix simplicity to the user experience with a built-in graphical user interface, role-based access control, auditability, and dark site support, among other benefits.

It will also provide a point-and-click user interface for deploying and configuring NVIDIA NIM, part of the NVIDIA AI Enterprise software platform for developing and deploying production-grade AI, making it easier to run GenAI workloads in the enterprise and at the edge.
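For a sense of what the API endpoints mentioned above look like from an application's side: NVIDIA NIM microservices expose an OpenAI-compatible chat completions API, so a client would typically send a request of the following shape. This is a minimal sketch; the endpoint URL, model name, and access key are hypothetical placeholders standing in for values that would come from the GPT-in-a-Box management interface, not documented Nutanix defaults.

```python
import json

# Hypothetical placeholders -- in practice these would be the endpoint URL
# and access key issued through the GPT-in-a-Box management UI.
ENDPOINT = "https://gpt-in-a-box.example.com/v1/chat/completions"
API_KEY = "YOUR_ACCESS_KEY"

def build_chat_request(prompt: str, model: str = "meta/llama3-8b-instruct"):
    """Build headers and body for an OpenAI-compatible chat completion
    request, the request format NIM inference microservices accept."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return headers, body

headers, body = build_chat_request("Summarize this support ticket.")
print(json.dumps(body, indent=2))
# To actually send the request, e.g. with the requests library:
#   requests.post(ENDPOINT, headers=headers, json=body, timeout=30)
```

Because the interface is OpenAI-compatible, existing GenAI applications built against that API style can generally be pointed at such an endpoint by changing only the base URL and key.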