Samsung has developed a new kind of memory product that it calls HBM-PIM. HEXUS regulars will be familiar with the first half of that abbreviation (HBM, or High Bandwidth Memory), but it is worth spelling out that PIM is short for Processing-In-Memory.
"Our groundbreaking HBM-PIM is the industry's first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference," said SVP of Memory Product Planning at Samsung Electronics, Kwangil Park. "We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications".
Why bother making HBM-PIM? The idea is to bring powerful AI computing capabilities inside high-performance memory, and Samsung says this integration will benefit large-scale processing in data centres, high performance computing (HPC) systems, and AI-enabled mobile applications. It explains that most of today's computer systems are based upon the von Neumann architecture, with separate processor and memory; this sequential processing approach means constant back-and-forth data movement between the two, which becomes a bottleneck as data volumes grow. Instead, HBM-PIM implements "a DRAM-optimized AI engine inside each memory bank — a storage sub-unit — enabling parallel processing and minimizing data movement".
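To make that data-movement argument concrete, here is a minimal toy model in Python. It is not Samsung's interface (which this announcement does not describe), and every name in it is an illustrative invention; it simply counts the bytes that would cross the memory bus for a dot product computed the conventional way versus with per-bank partial reductions, which is the essence of the PIM pitch.

```python
# Toy model of the data-movement argument behind processing-in-memory (PIM).
# All names here (Bank, dot_conventional, dot_pim) are hypothetical, not
# Samsung's API; the point is only to count bytes crossing the memory bus.

from dataclasses import dataclass

BYTES_PER_ELEMENT = 4  # assume 32-bit values throughout


@dataclass
class Bank:
    """A memory bank holding one slice of each operand vector."""
    a: list
    b: list

    def local_dot(self) -> float:
        # PIM case: a small engine beside the bank reduces its own slice,
        # so only a single partial sum ever has to leave the bank.
        return sum(x * y for x, y in zip(self.a, self.b))


def dot_conventional(banks):
    """von Neumann style: ship every element to the processor, compute there."""
    bytes_moved = 0
    total = 0.0
    for bank in banks:
        # Both operand slices travel across the bus to the CPU.
        bytes_moved += (len(bank.a) + len(bank.b)) * BYTES_PER_ELEMENT
        total += sum(x * y for x, y in zip(bank.a, bank.b))
    return total, bytes_moved


def dot_pim(banks):
    """PIM style: each bank reduces locally; only partial sums cross the bus."""
    partials = [bank.local_dot() for bank in banks]
    bytes_moved = len(partials) * BYTES_PER_ELEMENT
    return sum(partials), bytes_moved


if __name__ == "__main__":
    # 16 banks, 4096 elements per operand slice per bank.
    banks = [Bank(a=[1.0] * 4096, b=[2.0] * 4096) for _ in range(16)]
    for name, fn in (("conventional", dot_conventional), ("PIM", dot_pim)):
        result, moved = fn(banks)
        print(f"{name:>12}: result={result:.0f}, bytes over bus={moved}")
```

In this toy setup the conventional path moves half a megabyte over the bus while the PIM path moves 64 bytes; real gains are of course far more modest, but it shows why keeping the reduction next to the DRAM bank pays off.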
The claimed benefits are quite startling. Applied to Samsung's existing HBM2 Aquabolt solution, the new architecture delivered double the system performance while reducing energy consumption by 70 per cent in the use cases Samsung tested. Interestingly, Samsung adds that "HBM-PIM does not require any hardware or software changes, allowing faster integration into existing systems".
One of the first places to test Samsung HBM-PIM will be the U.S. Department of Energy's Argonne National Laboratory, home of several supercomputers including Mira and Theta, with the first exascale system in the US, Aurora, due to be delivered later this year.