Lessons From Past Architecture Wars
By Marc David Levenson
There was an interesting IEEE panel discussion in Silicon Valley recently, reviewing the microprocessor architecture wars of the ’70s, ’80s and ’90s. How did the Intel x86 architecture become so dominant when there were other capable designs, including more efficient RISC (Reduced Instruction Set Computing) chips? How did the x86s overcome competition from Zilog, Motorola, MIPS, Sun and IBM microprocessors? John Hollar of the Computer History Museum moderated the panel discussion that included industry veterans David House from Intel, John Mashey from MIPS and Anant Agrawal of Sun.
Bottom line: It wasn’t an accident back in the old days! According to Mashey, Intel started with two advantages—the best silicon processing and a team of applications engineers. (Mashey claimed that MIPS had to beg for fab time from its partners.) House also emphasized the role of those applications engineers, who ensured that customers believed they could make Intel chips do what they wanted and that future Intel chips would meet future customer needs. Intel chose to make only the chips, not whole computers, so it did not have as much control over the ultimate user experience as Sun did. Its CISC architecture required more cycles per instruction than MIPS’ RISC chips, but advanced silicon processing meant that its clock speeds were faster. With Intel pushing Moore’s Law aggressively, the difference in real computing speed amounted to less than six months of Moore’s Law progress—far less than the time it would take to implement a RISC design. So Intel stuck with the x86 architecture even though, according to House, the company knew it would theoretically never be the best.
In retrospect, when IBM adopted the Intel chip for the IBM PC and then made the design public, opening the opportunity for cheap clones, x86 dominance of the PC market became inevitable. But it did not have to go on forever. Andy Grove’s famous paranoia kept his team on its toes, countering potential threats from allies and adversaries. Margins at Intel went to 90%! Meanwhile Sun “put all the wood behind” the SPARC architecture and worked to optimize their networked workstations for the tasks ahead. MIPS found a niche for its RISC chips as embedded processors in everything. Various attempts to replace the Wintel x86 PC hegemony with new operating systems running on RISC processors came to naught.
But that victory is now long past. David House pointed out that no dominant computer architecture (and no dominant company) has continued paramount after industry transitions. New companies and new architectures appeared when mainframes gave way to minicomputers and minis became PCs. Now tablets and smartphones are replacing PCs and the ARM architecture seems likely to dominate the new era. Those mobile devices, though, need lots of server farms and lots of network routers in the cloud, creating new opportunities beyond the consumer economy.
So what is the lesson? Serving the customer seems key (if often forgotten), but so is the ability to build what you imagine and the will to keep improving it. Having allies to craft motherboards, operating systems, applications and explanations—so you can focus on what you do best—proved important. But to me, the singular point made by the panel was the virtue of one thing becoming the standard: an open, dynamic standard, the baseline against which everything is judged, yet not so ossified as to prevent innovation. To maintain hegemony, a company or design must evolve, not just protect “the legacy” or 90% margins. Going your own way can be rewarding and fun, but perhaps not scalable. To be successful you need followers, and not just on Facebook! Unique strength in one aspect of a complex business can compensate for weakness in another, but sub-optimal solutions—even when adopted as “standard”—do not remain entrenched forever. In Silicon Valley, there is always another cycle and another opportunity.