HPE enters the AI cloud market with the introduction of HPE GreenLake for Large Language Models (LLMs) for any enterprise to privately train, tune, and deploy large-scale AI through an on-demand, multi-tenant supercomputing cloud service

HPE GreenLake for LLMs will run on accessible, world-leading supercomputers and AI software powered by nearly 100% renewable energy1

The new offering is the first in a series of industry and domain-specific AI applications, with future support planned for climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation

HPE Discover 2023 - Hewlett Packard Enterprise (NYSE: HPE) today announced it has entered the AI cloud market through the expansion of its HPE GreenLake portfolio to offer large language models that any enterprise, from startups to Fortune 500 companies, can access on demand in a multi-tenant supercomputing cloud service.

With the introduction of HPE GreenLake for Large Language Models (LLMs), enterprises can privately train, tune, and deploy large-scale AI using a sustainable supercomputing platform that combines HPE's AI software and market-leading supercomputers. HPE GreenLake for LLMs will be delivered in partnership with Aleph Alpha, a German AI startup and HPE's first partner for the offering, to provide users with a field-proven, ready-to-use LLM to power use cases requiring text and image processing and analysis.

HPE GreenLake for LLMs is the first in a series of industry and domain-specific AI applications that HPE plans to launch in the future. These applications will include support for climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.

'We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,' said Antonio Neri, president and CEO of HPE. 'HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE's proven, sustainable supercomputers. Now, organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly.'

HPE is the global leader and expert in supercomputing, which powers unprecedented levels of performance and scale for AI, including breaking the exascale speed barrier with the world's fastest supercomputer, Frontier.

Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload at full computing capacity. The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. This capability makes training AI significantly more effective, reliable, and efficient and yields more accurate models, allowing enterprises to speed up their journey from proof of concept (POC) to production and solve problems faster.

Introducing HPE GreenLake for LLMs, the first in a series of AI applications

HPE GreenLake for LLMs will include access to Luminous, a pre-trained large language model from Aleph Alpha, which is offered in multiple languages, including English, French, German, Italian, and Spanish. The LLM allows customers to use their own data to train and fine-tune a customized model and gain real-time insights based on their proprietary knowledge.
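
As an illustration only (this code is not part of HPE's or Aleph Alpha's announcement), the sketch below shows how a developer might query a Luminous model through Aleph Alpha's publicly available Python client; the token handling, prompt, and model name are assumptions, and a customer's fine-tuned model would be referenced by its own identifier.

```python
# Illustrative sketch: querying an Aleph Alpha Luminous model via the public
# Python client (pip install aleph-alpha-client). The token source, prompt,
# and model name are placeholders, not details from HPE's announcement.
import os

from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token=os.environ["AA_TOKEN"])  # assumes an API token in an env var

request = CompletionRequest(
    prompt=Prompt.from_text("Summarize the key obligations in this contract clause: ..."),
    maximum_tokens=128,
)

# "luminous-base" is one of Aleph Alpha's published model names; a customer's
# fine-tuned variant would be addressed by its own model identifier.
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```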

This service empowers enterprises to build and bring to market AI applications, integrate them into their workflows, and unlock business- and research-driven value.

'By using HPE's supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals, and law firms to use as a digital assistant to speed up decision-making and save time and resources,' said Jonas Andrulis, founder and CEO, Aleph Alpha. 'We are proud to be a launch partner on HPE GreenLake for Large Language Models, and we look forward to expanding our collaboration with HPE to extend Luminous to the cloud and offer it as a service to our end customers to fuel new applications for business and research initiatives.'

HPE provides supercomputing scale for AI training, tuning and deployment

HPE GreenLake for LLMs will be available on demand, running on HPE Cray XD supercomputers, among the world's most powerful and sustainable supercomputers, removing the need for customers to purchase and manage a supercomputer of their own, an undertaking that is typically costly and complex and requires specialized expertise. The offering leverages the HPE Cray Programming Environment, a fully integrated software suite to optimize HPC and AI applications, with a complete set of tools for developing, porting, debugging, and tuning code.

In addition, the supercomputing platform provides support for HPE's AI/ML software, which includes the HPE Machine Learning Development Environment to rapidly train large-scale models and the HPE Machine Learning Data Management Software to integrate, track, and audit data with reproducible AI capabilities to generate trustworthy and accurate models.
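
The HPE Machine Learning Development Environment builds on the open-source Determined AI training platform, so, purely as a hedged sketch under that assumption, the example below shows how a distributed training job might be submitted with the Determined Python SDK; the master URL, resource counts, entry point, and experiment settings are placeholders, not details from HPE's announcement.

```python
# Hedged, illustrative sketch: submitting a distributed training job via the
# open-source Determined AI SDK, which the HPE Machine Learning Development
# Environment builds on. All endpoints and settings below are assumptions.
from determined.experimental import client

client.login(master="https://mlde.example.com:8080")  # hypothetical master endpoint

experiment_config = {
    "name": "llm-finetune-sketch",
    "entrypoint": "python3 train.py",      # assumes a train.py inside model_dir
    "resources": {"slots_per_trial": 64},  # e.g. 64 GPU slots for a single trial
    "searcher": {
        "name": "single",
        "metric": "validation_loss",
        "max_length": {"batches": 10000},
    },
}

# model_dir holds the training code that the cluster will execute.
experiment = client.create_experiment(config=experiment_config, model_dir="./model")
print(f"Submitted experiment {experiment.id}")
```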

HPE GreenLake for LLMs runs on sustainable computing

HPE is committed to delivering sustainable computing for its customers. HPE GreenLake for LLMs will run in colocation facilities, beginning with QScale in North America as the first region, whose purpose-built design supports the scale and capacity of supercomputing with nearly 100% renewable energy.1

Availability

HPE is accepting orders now for HPE GreenLake for LLMs and expects the service to be available by the end of calendar year 2023, starting in North America, with availability in Europe expected to follow early next year.

HPE also today announced an expansion to its AI inferencing compute solutions. The new HPE ProLiant Gen11 servers are optimized for AI workloads, using NVIDIA H100 and L4 Tensor Core GPUs as well as L40 GPUs. The HPE ProLiant DL380a and DL320 Gen11 servers boost AI inference performance by more than 5X over previous models.2 For more information, please visit HPE ProLiant Servers.

HPE Services provides a comprehensive portfolio of services spanning strategy and design, operations, and management for AI initiatives.

For more information on HPE GreenLake for Large Language Models (LLMs), please visit: http://hpe.com/hpe-greenlake-large-language-models

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com

Media Contact

Nahren Khizeran

Nahren.Khizeran@hpe.com

1 HPE GreenLake for Large Language Models (LLMs) will run on supercomputers initially hosted in QScale's Quebec colocation facility, which provides power from 99.5% renewable sources.

2 NVIDIA: Comparison of image generative AI performance of NVIDIA L40 (TensorRT 8.6.0) versus T4 (TensorRT 8.5.2), Stable Diffusion v2.1 (512x512)

(C) 2023 Electronic News Publishing, source ENP Newswire