Showcases significant benefits of machine learning technology inside SSDs and storage accelerators spanning data center, edge and end points to increase application performance while lowering total cost of ownership
Santa Clara, Calif. (August 9, 2018) – Marvell (NASDAQ:MRVL) will demonstrate today at the Flash Memory Summit how it will provide artificial intelligence capabilities to a broad range of industries by incorporating NVIDIA’s Deep Learning Accelerator (NVDLA) technology in its family of data center and client SSD controllers.
Marvell’s AI SSD controller proof-of-concept architecture solution will highlight how machine learning can accelerate applications with minimal network bandwidth and no host CPU processing, delivering a significant reduction in total cost of ownership. The architecture is anticipated to enable a new era of SSD solutions in areas such as cloud and edge data centers, automotive, industrial, communications networking, environmental monitoring, banking and client, among others.
Big data analytics systems require enormous amounts of information to be processed to gain important insights, and metadata tagging is required for this processing to run efficiently and effectively. Storage solutions must become more intelligent, generating this metadata at the storage end points to optimize overall efficiency while improving user experience and business productivity.
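The end-point tagging idea can be sketched in a few lines of Python. This is purely illustrative: Marvell has published no API for its proof of concept, and the names here (`tiny_classifier`, `tag_block`) are hypothetical stand-ins for an NVDLA-accelerated inference model running on the SSD controller.

```python
"""Illustrative sketch only. The idea: run a small inference model at the
storage end point so each block of data is written together with metadata
tags, sparing the host CPU and the network from the tagging workload."""

def tiny_classifier(block: bytes) -> str:
    # Stand-in for an NVDLA-accelerated model on the SSD controller;
    # here a trivial magic-number heuristic labels the content type.
    if block.startswith(b"\xff\xd8"):
        return "image/jpeg"
    if block[:4] == b"%PDF":
        return "document/pdf"
    return "unknown"

def tag_block(block: bytes) -> dict:
    # The controller would store the tag alongside the data, so later
    # analytics can filter on metadata without re-reading raw blocks.
    return {"size": len(block), "tag": tiny_classifier(block)}
```

The point of the sketch is the placement of the work: classification happens at write time inside the drive, so higher processing layers receive data that is already labeled.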
By adding NVDLA to its SSD controllers, Marvell is bringing deep learning inference directly to SSDs, improving efficiency, reducing power consumption, maximizing scalability and optimizing the distribution of resources. Even large-scale datasets can be fully supported, while still reducing hardware investment and operational expenditure. The scalability of this solution will also allow enterprises and cloud service providers to add more AI-enabled offerings and capabilities to their product portfolios. This programmable architecture will enable AI models to be quickly updated, so that new use cases can be addressed as they emerge.
“As greater and greater amounts of data get generated at edge and end points, it is critical that new end-to-end architecture solutions are developed to increase overall productivity while addressing the pain points of application response time and total cost of delivery,” said Nigel Alvares, VP of SSD and Data Center Storage Products, Marvell. “Our AI SSD controller proof-of-concept architecture solution leveraging NVIDIA’s NVDLA technology offers our customers and ecosystem partners a framework to collaborate and develop the next generation of SSD and client-to-cloud infrastructure architecture solutions needed to enable and deliver tomorrow’s applications.”
“It is access to data that will fuel big data analytics,” added Noam Mizrahi, VP of Technology & Architecture at Marvell. “Systems will need to be able to analyze large quantities of data – of different types and from different locations. The proper generation of metadata to represent all of this data will be key to efficient processing. AI technology running right at the storage device may be used to effectively generate this metadata, preparing it for further analytics by higher processing layers. Our advanced AI SSD controller proof-of-concept solution sets a new paradigm in utilizing available system resources more efficiently, resulting in the scalable, cost-effective data storage expected for all kinds of machine learning tasks.”
“NVIDIA and Marvell share the goals of making AI more accessible and creating exciting new AI-based solutions,” said Deepu Talla, VP and General Manager of Autonomous Machines at NVIDIA. “Our open NVDLA architecture, based on advanced Xavier technology, achieves this by providing partners with state-of-the-art deep learning capabilities.”