
Press Release


               Supporting Quotes
               Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat

               “Inference is where the real promise of gen AI is delivered, where user interactions are met with
               fast, accurate responses from a given model, but those responses must come in an effective and
               cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing,
               responsive inference at scale while keeping resource demands low, providing a common inference
               layer that supports any model, running on any accelerator in any environment.”

               Ramine Roane, corporate vice president, AI Product Management, AMD
               “In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative
               AI in the enterprise. Red Hat AI Inference Server, enabled on AMD Instinct™ GPUs, equips
               organizations with enterprise-grade, community-driven AI inference capabilities backed by fully
               validated hardware accelerators.”


               Jeremy Foster, senior vice president and general manager, Cisco
               “AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI
               Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to
               continue to collaborate on new ways to make AI deployments more accessible, efficient and
               scalable—helping organizations prepare for what’s next.”

               Bill Pearson, vice president, Data Center & AI Software Solutions and Ecosystem, Intel
               “Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel® Gaudi®
               accelerators. This integration will provide our customers with an optimized solution to streamline
               and scale AI inference, delivering advanced performance and efficiency for a wide range of
               enterprise AI applications.”

               John Fanelli, vice president, Enterprise Software, NVIDIA
               “High-performance inference enables models and AI agents not just to answer, but to reason and
               adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference
               Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with
               confidence using Red Hat AI Inference Server with the new NVIDIA Enterprise AI validated design.”

               Additional Resources

                     Read a technical deep dive on Red Hat AI Inference Server
                     Hear more about Red Hat AI Inference Server from Red Hat executives
                     Find out more about Red Hat AI
                     Learn more about Red Hat OpenShift AI
                     Learn more about Red Hat Enterprise Linux AI
                     Read more about the llm-d project
                     Learn about the latest updates to Red Hat AI
                     Learn more about Red Hat Summit
                     See all of Red Hat’s announcements this week in the Red Hat Summit newsroom
                     Follow @RedHatSummit or #RHSummit on X for event-specific updates

               Connect with Red Hat


                     Learn more about Red Hat
                     Get more news in the Red Hat newsroom
                     Read the Red Hat blog