Habana Labs Announces Gaudi AI Training Processor

Habana Labs, Ltd. (www.habana.ai), a leading developer of AI processors, today announced the Habana Gaudi™ AI Training Processor. Training systems based on Gaudi processors will deliver an increase in throughput of up to four times over systems built with an equivalent number of GPUs.

Gaudi’s innovative architecture enables near-linear scaling of training-system performance: high throughput is maintained even at small batch sizes, allowing Gaudi-based systems to scale from a single device to large systems built with hundreds of Gaudi processors.
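To make the near-linear-scaling claim concrete, the short sketch below shows one common way such scaling is quantified: scaling efficiency computed from per-device and system-level throughput. The numbers are made up for illustration and are not Habana benchmark data.

```python
# Illustrative only: how near-linear scaling is typically quantified.
# The throughput figures below are hypothetical, not Habana measurements.

def scaling_efficiency(single_device_tput: float, system_tput: float, num_devices: int) -> float:
    """Ratio of measured system throughput to ideal (linear) scaling."""
    ideal = single_device_tput * num_devices
    return system_tput / ideal

# Hypothetical example: one device sustains 1,000 samples/s, while a
# 128-device cluster sustains 120,000 samples/s in aggregate.
eff = scaling_efficiency(1_000, 120_000, 128)
print(f"Scaling efficiency: {eff:.0%}")  # -> Scaling efficiency: 94%
```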

In addition to record-breaking performance, Gaudi brings another industry first to AI training: on-chip integration of RDMA over Converged Ethernet (RoCE v2) functionality within the AI processor, enabling AI systems to scale to any size using standard Ethernet. With Gaudi, Habana Labs’ customers can now use standard Ethernet switching for both scaling up and scaling out AI training systems. Ethernet switches are multi-sourced, offer virtually unlimited scalability in speeds and port count, and are already used in datacenters to scale compute and storage systems. In contrast to Habana’s standards-based approach, GPU-based systems rely on proprietary system interfaces that inherently limit scalability and choice for system designers.
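For a rough picture of what scale-out looks like from the software side, here is a minimal data-parallel training sketch using PyTorch’s generic torch.distributed API with its TCP-based gloo backend. It is not Habana’s SynapseAI stack or its actual programming interface; the model, sizes, and launcher assumptions (e.g. torchrun supplying rank and address environment variables) are placeholders.

```python
# Minimal data-parallel sketch over ordinary Ethernet networking (gloo/TCP).
# This illustrates the scale-out concept only; it is not Habana's software stack.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Rank, world size, and master address are supplied by the launcher
    # (e.g. torchrun) through environment variables.
    dist.init_process_group(backend="gloo", init_method="env://")
    rank = dist.get_rank()
    torch.manual_seed(rank)                      # each rank sees different data

    model = torch.nn.Linear(1024, 1024)          # placeholder model
    ddp_model = DDP(model)                       # gradients are all-reduced over the network
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for step in range(10):
        x = torch.randn(32, 1024)                # this rank's local mini-batch
        loss = ddp_model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                          # triggers collective communication
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nnodes=4 --nproc_per_node=8 train.py`, the same script scales from one node to many; the network fabric (plain TCP here, RoCE over 100Gb Ethernet in Gaudi's case) carries the gradient all-reduce traffic.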

“With its new products, Habana has quickly extended from inference into training, covering the full range of neural-network functions,” commented Linley Gwennap, principal analyst of The Linley Group. “Gaudi offers strong performance and industry-leading power efficiency among AI training accelerators. As the first AI processor to integrate 100G Ethernet links with RoCE support, it enables large clusters of accelerators built using industry-standard components.”

Eitan Medina, Chief Business Officer of Habana Labs. Photo credit – Habana Labs

The Gaudi processor includes 32GB of HBM-2 memory and is currently offered in two forms:

  • HL-200 – a PCIe card supporting eight ports of 100Gb Ethernet;
  • HL-205 – a mezzanine card compliant with the OCP-OAM specification, supporting 10 ports of 100Gb Ethernet or 20 ports of 50Gb Ethernet.

Habana is also introducing an eight-Gaudi system called HLS-1, which includes eight HL-205 mezzanine cards, PCIe connectors for external host connectivity, and 24 ports of 100Gb Ethernet for connecting to off-the-shelf Ethernet switches, allowing scale-up within a standard 19-inch rack by populating it with multiple HLS-1 systems.
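As a sanity check on the port count: eight HL-205 cards expose 8 × 10 = 80 ports of 100Gb Ethernet in total, so if, for example, seven ports per card were dedicated to direct all-to-all links among the eight Gaudi devices inside the chassis, the remaining 3 × 8 = 24 ports would be available externally, matching the figure above. This breakdown is an inference from the stated port counts, not a detail given in the announcement.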

Gaudi is the second purpose-built AI processor launched by Habana Labs in the past year, following the Habana Goya™ AI Inference Processor. Goya has been shipping since Q4 2018 and has demonstrated industry-leading inference performance, with the industry’s highest throughput, highest power efficiency (images-per-second per watt), and real-time latency.

“Training AI models requires exponentially higher compute every year, so it’s essential to address the urgent needs of the datacenter and cloud for radically improved productivity and scalability. With Gaudi’s innovative architecture, Habana delivers the industry’s highest performance while integrating standards-based Ethernet connectivity that enables unlimited scale,” said David Dahan, CEO and Co-founder of Habana Labs. “Gaudi will disrupt the status quo of the AI Training processor landscape.”

“Facebook is seeking to provide open platforms for innovation around which our industry can converge,” said Vijay Rao, Director of Technology, Strategy at Facebook. “We are pleased that the Habana Goya AI inference processor has implemented and open-sourced the backend for the Glow machine learning compiler and that the Habana Gaudi AI training processor is supporting the OCP Accelerator Module (OAM) specification.”

The Gaudi processor is fully programmable and customizable, incorporating a second-generation Tensor Processing Core (TPC™) cluster along with development tools, libraries, and a compiler that together deliver a comprehensive and flexible solution. Habana Labs’ SynapseAI™ software stack consists of a rich kernel library and an open toolchain that lets customers add proprietary kernels.
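As a purely hypothetical illustration of the "built-in kernel library plus customer-added kernels" pattern described above (none of these names or interfaces come from SynapseAI), a minimal kernel registry might look like this:

```python
# Hypothetical sketch of a kernel library that users can extend with their own
# kernels. The names here are placeholders, not SynapseAI's actual API.
from typing import Callable, Dict
import numpy as np

KERNEL_REGISTRY: Dict[str, Callable] = {}

def register_kernel(name: str):
    """Decorator that adds a kernel implementation to the shared registry."""
    def wrap(fn: Callable) -> Callable:
        KERNEL_REGISTRY[name] = fn
        return fn
    return wrap

# A kernel the library might ship with.
@register_kernel("relu")
def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

# A proprietary kernel a customer could register alongside the built-ins.
@register_kernel("scaled_sigmoid")
def scaled_sigmoid(x: np.ndarray, scale: float = 2.0) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-scale * x))

if __name__ == "__main__":
    x = np.array([-1.0, 0.0, 2.0])
    print(KERNEL_REGISTRY["relu"](x))            # built-in kernel
    print(KERNEL_REGISTRY["scaled_sigmoid"](x))  # customer-supplied kernel
```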

Habana will be sampling Gaudi to select customers in the second half of 2019. For more information on the Gaudi AI training and Goya AI inference processors, please visit www.habana.ai.
