Red Hat advances enterprise AI across the hybrid cloud with Red Hat AI

Open source provider Red Hat has announced updates to Red Hat AI, its portfolio of products and services designed to accelerate the development and deployment of AI solutions across the hybrid cloud.

Red Hat AI provides an enterprise AI platform for model training and inference that delivers greater efficiency, a simplified experience, and the flexibility to deploy anywhere across the hybrid cloud.

While businesses look for ways to reduce the cost of deploying large language models (LLMs) at scale to address a growing number of use cases, they still face the challenge of integrating those models with the proprietary data the models use, while being able to access that data wherever it lives, whether in a data centre, in a public cloud, or even at the edge.

Red Hat AI addresses these concerns through both Red Hat OpenShift AI and Red Hat Enterprise Linux AI (RHEL AI), providing an enterprise AI platform that allows users to adopt more efficient and optimized models, tuned on business-specific data, and deployable across accelerated architectures.

Red Hat OpenShift AI

Red Hat OpenShift AI provides a complete AI platform for managing predictive and generative AI (gen AI) lifecycles across the hybrid cloud, including machine learning operations (MLOps) and LLMOps capabilities. The platform provides functionality for building predictive models and tuning gen AI models, along with tools to simplify AI management, from data science and model pipelines to model monitoring, governance, and more.

Red Hat OpenShift AI 2.18, the latest release of the platform, adds new updates and capabilities to support Red Hat's goal of bringing better optimized, more efficient AI models to the hybrid cloud. Key features include:

  • Distributed serving: Delivered through the vLLM inference server, distributed serving allows IT teams to split model serving across multiple graphics processing units (GPUs). This helps lessen the burden on any single server, speeds up training and fine-tuning, and makes more efficient use of computing resources, all while helping distribute services across nodes for AI models.
  • An end-to-end model tuning experience: Using InstructLab and Red Hat OpenShift AI data science pipelines, this new feature helps simplify the fine-tuning of LLMs, making it more scalable, efficient, and auditable in large production environments while delivering manageability through the Red Hat OpenShift AI dashboard.
  • AI Guardrails: Red Hat OpenShift AI 2.18 helps improve the accuracy, performance, latency, and transparency of LLMs through a technology preview of AI Guardrails, which monitor and better safeguard both user input interactions and model outputs. AI Guardrails offers additional detection capabilities and helps IT teams identify and mitigate potentially hateful, abusive, or profane speech, personally identifiable information, competitive information, or other data restricted by corporate policies.
  • Model evaluation: Using the language model evaluation (lm-eval) component to provide important information on the overall quality of the model, model evaluation allows data scientists to benchmark the performance of their LLMs across a variety of tasks, from logical and mathematical reasoning to adversarial natural language and more, helping to create more effective and tailored AI models.
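As a rough illustration of how distributed serving with vLLM typically works (a generic sketch, not Red Hat's specific configuration): vLLM is launched with a tensor-parallel setting that shards one model across several GPUs, and clients then talk to its OpenAI-compatible HTTP API without needing to know about the sharding. The server URL and model name below are placeholders.

```python
import json

# vLLM is typically launched with tensor parallelism to shard one model
# across several GPUs, e.g. (shell command, shown as a comment):
#   vllm serve ibm-granite/granite-3.1-8b-instruct --tensor-parallel-size 4
#
# The running server exposes an OpenAI-compatible HTTP API. The helper
# below only *builds* the request body; the URL and model name are
# placeholders, not a specific Red Hat endpoint.

VLLM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Serialize an OpenAI-style chat completion request for a vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("ibm-granite/granite-3.1-8b-instruct",
                          "Summarize our Q3 maintenance logs.")
# body could then be POSTed to VLLM_URL with urllib.request or any HTTP
# client; the tensor-parallel sharding is transparent to the caller.
```

The point of the pattern is that scaling decisions (how many GPUs, how the model is split) stay on the serving side, while applications keep using a stable API.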
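The model evaluation workflow can be sketched in plain Python. The lm-eval harness itself is usually driven from the command line; the comparison step it enables, putting a tuned model's per-task scores next to the base model's, looks roughly like the following. All task names and scores here are made-up example values, not real benchmark results.

```python
# lm-eval (the EleutherAI lm-evaluation-harness, which model evaluation
# builds on) is usually run from the command line, e.g.:
#   lm_eval --model local-completions --tasks gsm8k,arc_challenge ...
# The helper below is a plain-Python illustration of the comparison step:
# given per-task scores for a base and a tuned model, report the deltas.
# Task names and score values are invented for the example.

def compare_runs(base: dict[str, float], tuned: dict[str, float]) -> dict[str, float]:
    """Return tuned-minus-base score delta for every task present in both runs."""
    return {task: round(tuned[task] - base[task], 3)
            for task in base.keys() & tuned.keys()}

base_scores  = {"gsm8k": 0.41, "arc_challenge": 0.52}  # example values
tuned_scores = {"gsm8k": 0.47, "arc_challenge": 0.55}  # example values
deltas = compare_runs(base_scores, tuned_scores)
```

A positive delta on a task suggests the fine-tuning helped there; comparing across many tasks is what guards against a model improving on one benchmark while regressing on others.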

RHEL AI

Part of the Red Hat AI portfolio, RHEL AI is a foundation model platform for developing, testing, and running LLMs to power enterprise applications. RHEL AI provides customers with Granite LLMs and InstructLab model alignment tools, which are packaged as a bootable Red Hat Enterprise Linux server image and can be deployed across the hybrid cloud.

RHEL AI 1.4, launched in February 2025, added several new improvements, including:

  • Granite 3.1 8B model support, the latest addition to the open source licensed Granite model family. The model adds multilingual support for inference and taxonomy/knowledge customization (developer preview), along with a 128k context window for improved summarization results and retrieval-augmented generation (RAG) tasks.
  • A new graphical user interface for contributing skills and knowledge, available as a developer preview, which simplifies how users add their own skills and contributions to AI models.
  • Document knowledge-bench (DK-Bench), for easier comparison of AI models fine-tuned on relevant private data against the performance of the same untuned base models.
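The RAG tasks mentioned above follow a common pattern that can be sketched generically: retrieve the document chunks most relevant to a question, then place them in the model's context (now far roomier with a 128k-token window) alongside the question. This is an illustration of RAG in general, not RHEL AI's implementation, and the scoring below is a deliberately naive keyword overlap rather than the embedding-based retrieval a production system would use.

```python
# A minimal, generic retrieval-augmented generation (RAG) sketch: score
# document chunks by keyword overlap with the question, then assemble a
# prompt containing the retrieved context plus the question. Illustrative
# only; real systems use embedding-based retrieval, not word overlap.

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    """Assemble a grounded prompt from the retrieved chunks."""
    context = "\n".join(retrieve(chunks, question))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Granite 3.1 supports a 128k token context window.",
    "The cafeteria opens at 8am.",
    "RHEL AI packages Granite models as a bootable image.",
]
prompt = build_prompt(docs, "What context window does Granite 3.1 support?")
```

A larger context window matters here because it raises the ceiling on how many retrieved chunks can be packed into one prompt before anything has to be dropped or summarized.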

Red Hat AI InstructLab on IBM Cloud

Businesses are increasingly looking for AI solutions that prioritize accuracy and data security, while keeping costs and complexity as low as possible. Red Hat AI InstructLab, deployed as a service on IBM Cloud, is designed to simplify, scale, and help improve the security posture of AI training and deployment. By simplifying InstructLab model tuning, organizations can build more efficient models tailored to their unique needs while retaining control of their data.
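InstructLab tuning starts from human-contributed seed examples stored in a taxonomy (qna.yaml files), which the alignment pipeline uses to generate synthetic training data. The sketch below builds such an entry in Python; the field names mirror the public InstructLab skill-contribution format as an illustration, and the example Q&A pair is invented, so treat both as assumptions rather than an official schema reference.

```python
import json

# InstructLab tuning begins with seed examples contributed to a taxonomy
# (qna.yaml files). The structure below is modelled on the public
# InstructLab skill-contribution format for illustration; field names and
# the example Q&A pair are illustrative, not an official schema reference.

def make_skill_entry(created_by: str, task: str,
                     pairs: list[tuple[str, str]]) -> dict:
    """Build a skill contribution with question/answer seed examples."""
    return {
        "version": 2,
        "created_by": created_by,
        "task_description": task,
        "seed_examples": [{"question": q, "answer": a} for q, a in pairs],
    }

entry = make_skill_entry(
    "example-user",  # placeholder contributor name
    "Answer questions about internal maintenance procedures",
    [("How often is pump P-101 inspected?", "Every 90 days.")],
)
serialized = json.dumps(entry, indent=2)  # normally written out as qna.yaml
```

The appeal of this contribution model is that domain experts supply a handful of curated examples, and the tuning pipeline expands them into training data, rather than requiring a hand-labelled dataset up front.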

No-cost AI Foundations training

AI is a transformative opportunity that is redefining how businesses operate and compete. To support organizations in this dynamic landscape, Red Hat now offers AI Foundations online training at no cost. Red Hat provides two AI learning certificates, aimed at experienced senior leaders and newcomers to AI alike, helping educate users of all levels on how AI can help transform business operations, streamline decision making, and drive innovation. The AI Foundations training guides users on how to apply this knowledge when using Red Hat AI.

Red Hat AI InstructLab on IBM Cloud is expected to be available soon. The Red Hat AI Foundations training is available to customers now.

Joe Fernandes, Vice President and General Manager, AI Business Unit, Red Hat, said Red Hat knows that businesses will need ways to handle the growing cost of generative AI as they bring more use cases to production and run them at scale.

Régis Lesbarreres, Advanced Analytics and AI Innovation Manager, Digital Innovation, Airbus Helicopters, said Red Hat AI has helped make the company's AI goals achievable.

Javier Olaizola Casin, Global Managing Partner, Hybrid Cloud and Data, IBM Consulting, said: “Businesses are increasingly applying AI to transform core business processes, and they need AI solutions that are flexible, cost-effective, and tuned to their business data, with the consistency, continuity, and speed needed to build and deploy AI models and applications across hybrid cloud scenarios.”

Torsten Volk, principal analyst, application modernization, ESG, said leading organizations are building their AI efforts around their own data.

Anand Swamy, Executive Vice President and Global Head of Ecosystems, HCLTech, said: “To realize the full potential of generative AI, organizations must prioritize an agile and flexible infrastructure,” adding that combining Red Hat AI with HCLTech's AI expertise helps customers bring generative AI into production.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
