
Baidu Named Development Partner on Intel Nervana Neural Network Processor for Training

This week, at the Baidu Create AI developer conference in Beijing, Intel Corporate Vice President Naveen Rao announced that Baidu* is collaborating with Intel on development of the new Intel® Nervana™ Neural Network Processor for Training (NNP-T). The collaboration spans the hardware and software design of the new custom accelerator, which is built for one purpose: training deep learning models at lightning speed.

“The next few years will see an explosion in the complexity of AI models and the need for massive deep learning compute at scale. Intel and Baidu are focusing their decade-long collaboration on building radical new hardware, co-designed with enabling software, that will evolve with this new reality – something we call ‘AI 2.0.’”
–Naveen Rao, Intel corporate vice president and general manager of the AI Products Group

Why It Matters

Artificial intelligence (AI) isn’t a single workload; it’s a pervasive capability that will enhance every application, whether it’s running on a phone or in a massive data center. Phones, data centers and everything in between have different performance and power requirements, so one-size AI hardware doesn’t fit all. Intel offers exceptional choice in AI hardware with enabling software, so customers can run complex AI applications where the data lives. The NNP-T is a new class of efficient deep learning system hardware designed to accelerate distributed training at scale. Close collaboration with Baidu helps ensure Intel development stays in lock-step with the latest customer demands on training hardware.

How Intel and Baidu Collaborate

Since 2016, Intel has been optimizing Baidu's PaddlePaddle* deep learning framework for Intel® Xeon® Scalable processors. Now the two companies are giving data scientists more hardware choice by optimizing the NNP-T for PaddlePaddle.
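For readers unfamiliar with PaddlePaddle, the following is a minimal sketch of running a small model on an Intel CPU with PaddlePaddle 2.x. The tiny network and tensor shapes are illustrative only and are not part of the announced Intel–Baidu work.

# Minimal sketch: a CPU forward pass in PaddlePaddle 2.x.
# The model and shapes below are illustrative, not tied to the NNP-T announcement.
import paddle

paddle.set_device("cpu")  # run on the Intel CPU backend

# A tiny two-layer network standing in for a real deep learning model.
model = paddle.nn.Sequential(
    paddle.nn.Linear(128, 64),
    paddle.nn.ReLU(),
    paddle.nn.Linear(64, 10),
)

x = paddle.rand([32, 128])   # a batch of 32 random feature vectors
logits = model(x)            # forward pass on CPU
print(logits.shape)          # [32, 10]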

The impact of these AI solutions is enhanced by additional Intel technologies. For example, Intel® Optane™ DC persistent memory provides improved memory performance that allows Baidu to deliver personalized mobile content to millions of users through its Feed Stream* service and to power its AI recommendation engines for a more efficient customer experience.

Additionally, because data security is critically important to users, Intel and Baidu are working together on MesaTEE*, a memory-safe function-as-a-service (FaaS) computing framework based on Intel® Software Guard Extensions (Intel® SGX) technology.

