ANALYSIS
Intel said the models will be trained on scientific texts, code and science datasets at scales of more than one trillion parameters from diverse scientific domains to support multiple scientific disciplines, including biology, cancer research, climate science, cosmology and materials science.

Using the Intel Max Series GPU architecture and the Aurora supercomputer system, Intel said the hardware is capable of handling one trillion-parameter models with just 64 nodes, which it claimed is far fewer than would typically be required. According to Intel, the GPU Max Series 1550 outperforms the Nvidia H100 PCIe card by an average of 36% (1.36 times) on diverse HPC workloads.

Intel also showed benchmark tests conducted with Dell using the STAC-A2 independent benchmark suite for real-world market risk analysis workloads. Compared with eight Nvidia H100 PCIe GPUs, Intel said that four Intel Data Center GPU Max 1550s had 26% higher warm Greeks 10-100k-1260 performance and 4.3 times higher space efficiency.

Collaborative capabilities

ARM recently announced that it was working with several companies – including AMD, Intel, Microsoft, Nvidia and Qualcomm Technologies – on a range of initiatives focused on enabling advanced AI capabilities for more responsive and more secure user experiences. ARM said these partnerships would create the foundational frameworks, technologies and specifications required for more than 15 million ARM developers to deliver artificial intelligence experiences.

Beyond datacentre computing, ARM said it is collaborating with Meta on the deployment of neural networks at the edge. ARM and Meta are working on ExecuTorch, which brings the open source PyTorch deep learning framework to ARM-based mobile and embedded platforms at the edge.

According to ARM, ExecuTorch simplifies the deployment of the neural networks needed for advanced AI and machine learning (ML) workloads across mobile and edge devices. ARM's long-term goal is to ensure AI and ML models can be easily developed and deployed with PyTorch and ExecuTorch.

In December, AMD is also set to unveil the work it is doing on AI. Given the investments in AI hardware being made by the hyperscalers, the public cloud is gearing up to become the preferred deployment choice for ML and AI inference workloads. So long as IT governance policies are followed, corporate data can be deployed safely on AI hardware infrastructure in the public cloud, enabling companies to run machine learning and build data models for their AI-based applications.

LLMs generally rely on vast swathes of public data combined with domain-specific data, which is largely proprietary, meaning that internal data cannot easily be integrated with public data to improve accuracy based on company-specific information.

Used internally, an HPC environment effectively offers enterprise users access to AI-acceleration hardware inside the corporate network. The fact that HPE and Intel are focusing on HPC for AI may pave the way to widening the application areas for supercomputers beyond research and development and scientific computing.
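The vendor performance claims in this article are all ratios, and it is worth spelling out what they imply. A minimal back-of-the-envelope sketch (the GPU counts and percentages are the figures quoted above; the derived per-GPU ratio is our own arithmetic, not a quoted benchmark result):

```python
# Sanity-checking the quoted performance ratios.

# Intel's HPC claim: GPU Max Series 1550 is "36% faster" than an
# Nvidia H100 PCIe card, i.e. a 1.36x speedup factor.
speedup = 1 + 36 / 100
print(f"Claimed HPC speedup: {speedup:.2f}x")

# STAC-A2 claim: four Max 1550s delivered 26% higher warm Greeks
# throughput than eight H100 PCIe cards. With half the cards doing
# 1.26x the aggregate work, the implied per-GPU throughput ratio is:
intel_gpus, nvidia_gpus = 4, 8
aggregate_ratio = 1.26
per_gpu_ratio = aggregate_ratio * nvidia_gpus / intel_gpus
print(f"Implied per-GPU throughput ratio: {per_gpu_ratio:.2f}x")  # 2.52x
```

The per-GPU figure is only an inference from the published aggregate numbers; actual scaling depends on interconnect and workload behaviour, which the benchmark report, not this arithmetic, would confirm.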
computerweekly.com 21-27 November 2023 4