        • Hybrid cloud evolves to deliver enterprise innovation — Wednesday, May 21, 8-9:30 a.m. EDT (YouTube)

        Supporting Quotes

        Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat

        “Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment.”

        Ramine Roane, corporate vice president, AI Product Management, AMD

        “In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise. Red Hat AI Inference Server enabled on AMD Instinct™ GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators.”

        Jeremy Foster, senior vice president and general manager, Cisco

        “AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable—helping organizations prepare for what’s next.”

        Bill Pearson, vice president, Data Center & AI Software Solutions and Ecosystem, Intel

        "Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on

        Intel® Gaudi® accelerators. This integration will provide our customers with an optimized

        solution to streamline and scale AI inference, delivering advanced performance and

        efficiency for a wide range of enterprise AI applications."




        John Fanelli, vice president, Enterprise Software, NVIDIA

