
HGST to demo phase change non-volatile memory fabric

by Mark Tyson on 10 August 2015, 13:08

Tags: HGST

Quick Link: HEXUS.net/qactne


HGST will be demonstrating its breakthrough persistent memory fabric at the Flash Memory Summit 2015 in Santa Clara, California, over the next couple of days. The firm's Phase Change Memory (PCM) based tech is expected to deliver DRAM-like performance at a lower cost of ownership and with greater scalability, enabling the growth of in-memory computing.

Some see in-memory computing as a cornerstone of data centre processing in the future. Currently the adoption of in-memory computing is held back by technical issues with DRAM. HGST claims that current data centres consume 20 to 30 per cent of their power budget due to the DRAM employed. Its non-volatile PCM doesn't require powered refresh and can thus be scaled, yet still offer users DRAM-like performance.
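For a sense of scale, that 20 to 30 per cent claim translates into sizeable absolute figures. A quick illustration (the 1MW facility size below is a hypothetical chosen for the example, not an HGST number):

```python
# Illustrative arithmetic for the quoted DRAM power claim.
# The 1 MW facility size is an assumption for the example, not HGST data.

facility_power_kw = 1000                      # hypothetical 1 MW data centre
dram_share_low, dram_share_high = 0.20, 0.30  # HGST's quoted 20-30 per cent

low_kw = facility_power_kw * dram_share_low
high_kw = facility_power_kw * dram_share_high
print(f"DRAM draw: {low_kw:.0f}-{high_kw:.0f} kW of {facility_power_kw} kW")
# → DRAM draw: 200-300 kW of 1000 kW
```

At that scale, memory that needs no powered refresh is a meaningful saving even before any density or capacity benefits.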

HGST outlines a network-based approach in which applications access non-volatile PCM across multiple computers to scale out as needed. The system uses the Remote Direct Memory Access (RDMA) protocol over networking infrastructures such as Ethernet or InfiniBand, and is said to be reliable, scalable and low-power, and deployable without BIOS modifications or application rewrites.
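What makes RDMA a fit here is its one-sided semantics: a client reads or writes remote memory by address, without the remote host's CPU sitting on the data path. A toy Python model of that access pattern (real deployments use the verbs API and RDMA-capable NICs; the class below is purely illustrative):

```python
# Conceptual sketch only: real RDMA goes through verbs (memory registration,
# posted work requests) and the NIC. This toy model just illustrates the
# access pattern the article describes -- an application reading remote
# persistent memory by offset, with no code running on the remote CPU.

class RemotePMRegion:
    """Stand-in for a registered persistent-memory region on a peer node."""

    def __init__(self, size: int):
        self.buf = bytearray(size)

    def rdma_read(self, offset: int, length: int) -> bytes:
        # One-sided read: the 'remote CPU' executes nothing on this path.
        return bytes(self.buf[offset:offset + length])

    def rdma_write(self, offset: int, data: bytes) -> None:
        # One-sided write, again bypassing the remote host's software stack.
        self.buf[offset:offset + len(data)] = data

region = RemotePMRegion(4096)
region.rdma_write(0, b"persistent across power loss")
print(region.rdma_read(0, 10))  # → b'persistent'
```

Because no remote software is involved per access, latency is dominated by the NIC and fabric rather than by the far host's scheduler, which is what keeps remote PCM inside the performance envelope of in-memory applications.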

Last year HGST demonstrated its PCM PCIe SSD, pictured above, delivering a record-breaking three million IOPS. It says that, in collaboration with InfiniBand solutions company Mellanox, it can offer random access latency below 2µs for 512 byte reads and throughput exceeding 3.5GB/s for 2KB block sizes using RDMA over InfiniBand.
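As a sanity check, the quoted throughput converts into a transfer rate as follows (rough arithmetic only; taking GB as 1024³ bytes is an assumption, and queue depths and protocol overheads are not stated):

```python
# Back-of-envelope conversion of the quoted RDMA throughput figure.
# Assumption: GB = 1024**3 bytes; real overheads are not accounted for.

block_size = 2 * 1024          # 2 KB blocks, as quoted
throughput = 3.5 * 1024**3     # 3.5 GB/s over RDMA, as quoted

ops_per_second = throughput / block_size
print(f"Implied rate: {ops_per_second / 1e6:.2f} million 2KB reads/s")
# → Implied rate: 1.84 million 2KB reads/s
```

That implied rate sits in the same ballpark as last year's three-million-IOPS local demo (which used smaller transfers), suggesting the network hop costs surprisingly little of the device's raw performance.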

"DRAM is expensive and consumes significant power, but today's alternatives lack sufficient density and are too slow to be a viable replacement," said Steve Campbell, HGST’s chief technology officer. "Last year our Research arm demonstrated Phase Change Memory as a viable DRAM performance alternative at a new price and capacity tier bridging main memory and persistent storage. To scale out this level of performance across the data centre requires further innovation. Our work with Mellanox proves that non-volatile main memory can be mapped across a network with latencies that fit inside the performance envelope of in-memory compute applications."

HGST revealed that it gets the best out of low-latency PCM over a network by applying PCI Express peer-to-peer technology to create a low-latency storage fabric.

As mentioned above, the HGST phase-change SSD was first seen at the Flash Memory Summit a year ago. Hopefully this new demonstration of PCM in an in-memory computing system means it is closer to market. The 2015 Flash Memory Summit begins tomorrow; I will update this post with any further news from the event concerning HGST's persistent memory fabric.



HEXUS Forums :: 6 Comments

If this stuff's anything like as fast as it's purported to be, and as resistant to failure as conventional DRAM, then this could be the end of conventional SSDs and C:\ drives.
Just think: having 512GB (or more) of PCM RAM with your OS, programs and games on it, and then maybe a large storage drive beyond that to swap programs in and out of PCM.

But then again we've been hearing variants of this story for years, decades even. “This is the next big thing in memory, it'll replace DRAM and is persistent like Flash… here, see, this is a white paper all about it…” and then we hear no more about it except as a footnote on a wiki page somewhere.
Cost is another major factor; if the cost is closer to DRAM then it won't be taking over from NAND any time soon, let alone HDDs.
watercooled
Cost is another major factor; if the cost is closer to DRAM then it won't be taking over from NAND any time soon, let alone HDDs.

Not in the consumer marketplace, but this is aimed at data centres, where long-term running costs have a more significant impact on the total budget than hardware costs. I think this and other similar technologies will eventually filter down to consumer and enthusiast level, but that's not a priority for developments such as this. At the moment it is interesting to see that they are considering alternatives, and it might just end up going nowhere: costs could prove too high, there could be reliability issues or manufacturing difficulties, or someone could patent part of the process and then decide to stop others from using that design, etc…
This might be a game changer. Well, in the long run, anyway. For a long time, there's been a dream of removing the third layer of storage (CPU caches, DRAM and slow storage). SSDs have upped the ante, but if this is anything like they say (even though there's a long way from 50ns DRAM latency to 2µs PCM latency), it's gonna change quite a few things. Intel and Micron have developed something similar. HP is already working on hardware and software that works inside RAM (which will be non-volatile). You can remove quite a bit of logic from the CPU that hides latency. You can make caches a bit smaller, given that you don't have that slow storage anymore. You can have a shared memory pool between the GPU and CPU and work on fast memory with no worry about power loss. In the not too distant future, you'll be able to have an SoC with everything on it. The board would only be there to route signals for IO.
yeeeeman
This might be a game changer. Well, in the long run, anyway. For a long time, there's been a dream of removing the third layer of storage (CPU caches, DRAM and slow storage). SSDs have upped the ante, but if this is anything like they say (even though there's a long way from 50ns DRAM latency to 2µs PCM latency), it's gonna change quite a few things. Intel and Micron have developed something similar. HP is already working on hardware and software that works inside RAM (which will be non-volatile). You can remove quite a bit of logic from the CPU that hides latency. You can make caches a bit smaller, given that you don't have that slow storage anymore. You can have a shared memory pool between the GPU and CPU and work on fast memory with no worry about power loss. In the not too distant future, you'll be able to have an SoC with everything on it. The board would only be there to route signals for IO.

That's something that I hope I get to see someday in the not too distant future.