Red Hat has launched Red Hat Enterprise Linux AI (RHEL AI), described as a foundation model platform that lets users develop and deploy generative AI models more seamlessly.
Announced May 7 and available now as a developer preview, RHEL AI includes the Granite family of open-source large language models (LLMs) from IBM, InstructLab model alignment tools based on the LAB (Large-Scale Alignment for Chatbots) methodology, and a community-driven approach to model development through the InstructLab project, Red Hat said.
The entire solution is packaged as a bootable RHEL image for individual server deployments across the hybrid cloud and is part of OpenShift AI, Red Hat’s hybrid machine learning operations (MLOps) platform for running models and InstructLab at scale across distributed cluster environments. RHEL AI provides a supported, enterprise-ready runtime environment for AI models across AMD, Intel, and Nvidia hardware platforms, Red Hat said.
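As a rough illustration of what running a model on such a deployment can look like, the minimal Python sketch below sends a chat request to a locally served Granite model through an OpenAI-compatible endpoint. The endpoint URL, port, and model name are assumptions made for the example; they are not specified in the announcement.

```python
# Minimal sketch: query a locally served Granite model via an
# OpenAI-compatible chat completions endpoint.
# The endpoint and model identifier below are assumptions for illustration.
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local serving endpoint
MODEL = "granite-7b-lab"                                 # assumed model identifier

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize what InstructLab does in one sentence."}
    ],
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send the request and print the model's reply.
with urllib.request.urlopen(request) as response:
    body = json.loads(response.read())
    print(body["choices"][0]["message"]["content"])
```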
Red Hat said that its enterprise customers have begun moving from early evaluations of generative AI services to building AI-enabled applications. With InstructLab alignment tools, Granite models, and RHEL AI, Red Hat aims to apply the benefits of open-source projects to removing obstacles to an AI strategy, such as a shortage of data science skills and the costs involved. RHEL AI creates a foundation model platform for bringing open source-licensed generative AI models into the enterprise, the company said.