Semiconductor Memory Aids
By Brian Fuller
It’s easy to forget that semiconductor memory remains one of the most relentless challenges in system design. It sometimes doesn’t get the ink that sexier semiconductor design topics do, but it’s there. Always.
Twenty years ago, University of Virginia computer scientists William Wulf and Sally McKee published a paper that popularized the term semiconductor “memory wall.” They argued that because both microprocessor and DRAM speeds were improving exponentially, but at different rates, CPU speed would increasingly outpace DRAM speed.
They wrote: “The difference between diverging exponentials also grows exponentially; so, although the disparity between processor and memory speed is already an issue, downstream someplace it will be a much bigger one.”
They argued the best solution would be for the industry to come up with a “cool, dense memory technology whose speed scales with that of processors. We are not aware of any such technology and could not affect its development in any case.”
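Their point about diverging exponentials can be sketched numerically. The growth rates below are the ballpark figures commonly cited from that era (roughly 50% per year for processor speed, 7% per year for DRAM speed); treat them as illustrative assumptions, not measured data.

```python
# A minimal sketch of the "diverging exponentials" argument,
# assuming illustrative growth rates of ~50%/year for CPU speed
# and ~7%/year for DRAM speed.

def speed_gap(years, cpu_growth=0.50, dram_growth=0.07):
    """Ratio of CPU speed to DRAM speed after `years` years,
    with both starting from the same baseline."""
    return ((1 + cpu_growth) / (1 + dram_growth)) ** years

# The gap between two exponentials is itself an exponential:
# over every fixed interval it grows by the same constant factor.
for y in (10, 20, 30):
    print(f"After {y} years, the CPU/DRAM speed gap is about {speed_gap(y):,.0f}x")
```

Because the gap is a pure exponential, each decade multiplies it by the same factor, which is exactly why the disparity Wulf and McKee flagged as “already an issue” was guaranteed to become a much bigger one.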
Another brick in the wall?
Flash forward 20 years and the industry is still standing in front of the semiconductor memory wall, although the prospects for scaling it or knocking right through it look promising, at least if you listened to some of the presentations at the recent MemCon conference in Santa Clara.
Mike Black, technology strategist for Hybrid Memory Cube (HMC) technology at Micron, talked about how HMC technology can help us scale the memory wall.
Products based on the first-generation HMC spec are expected in the coming quarters. This is thanks primarily to the work of the 110-member Hybrid Memory Cube Consortium. Its working group is already specifying the second generation, which will double the throughput of Gen 1 approaches, Black said.
With the first generation, two interfaces support different PC board trace lengths and signaling rates. The short-reach interface supports 8- to 10-inch traces at up to 15 Gb/s; the ultra-short-reach interface supports 2- to 3-inch traces at up to 10 Gb/s. The second-generation spec will push the short-reach interface to 30 Gb/s and the ultra-short-reach interface to 15 Gb/s or higher. A draft is expected in the next two months, with the full Gen 2 specification due out next year, Black added.
But the memory cube architecture is just one path to navigate the memory minefield, as even Black acknowledged during his keynote.
Bob Brennan, senior vice president of the Memory System Architecture Lab at Samsung Semiconductor, called in his MemCon keynote for new DRAM and flash memory architectures if the industry is to continue scaling and delivering more productive devices. To scale NAND flash capacity, for example, Brennan argued that a 3D approach is needed. (In fact, Samsung has already announced a 3D NAND device.) As for DRAM challenges, Brennan talked up Hybrid Memory Cube technology as well as Wide I/O and Wide I/O 2 with a tiered memory structure.
Martin Lund, senior vice president of research and development for Cadence’s IP Group, offered optimism tempered with cold hard reality. For instance, 3D memory architectures are definitely a step in the right direction to continue semiconductor scaling but they add a level of complexity to design engineering.
“Who pays for the stack of memories when one of those has a bad bit?” he asked. “How do we prove who it is that has to pay for the whole thing when something goes wrong?”
Of course, you can’t have industry progress without divergent opinions, and a lively panel laid out the challenges of memory design in the coming years. Bill Gervasi, memory technology analyst with Discobolus Designs, put the situation in context: “One of the things we’re seeing is the fragmentation of the industry, and fragmentation doesn’t work. We’re seeing a lot of blue sky. A few minutes ago I saw a slide that showed five types of memory [standards] that are taking off. Are all going to succeed? I don’t think so.”
And the memory beat goes on.
—Brian Fuller (firstname.lastname@example.org) is editor-in-chief at Cadence.