Why 450-mm wafers?
Why is 450-mm development so important to Intel (and Samsung and TSMC)?
A few years ago, Intel and TSMC began heavily promoting the need for a transition from the current standard silicon wafer size, 300 mm, to the new 450-mm wafers. While many have worked on 450-mm standards and technology for years, it is only recently that the larger wafer has received enough attention and support (not to mention government funding) to make it seem that it may actually become real. Since there has been much talk about the need for a larger wafer, I’d like to put my own spin on the debate.
First, a bit of history. Silicon wafer sizes have been growing gradually and steadily for the last 50 years, from half-inch and one-inch silicon to today’s 300-mm diameter wafers. The historical reasons for this wafer size growth were based on three related trends: growing chip size, growing demand for chips, and the greater chip throughput (and thus lower chip cost) that the larger wafer sizes enabled. And while chip sizes stopped increasing about 15 years ago, the other two factors have remained compelling. The last two wafer size transitions (6 inch to 8 inch/200 mm, and 200 mm to 300 mm) each resulted in about a 30% reduction in the cost per area of silicon (and thus cost per chip). And since our industry is enamored with the thought that the future will look like the past, we are hoping for a repeat performance with the transition to 450-mm wafers.
But a closer look at this history, and what we can expect from the future, reveals a more complicated picture.
First, how does increasing wafer size lower the cost per unit area of silicon? Consider one process step as an example – etch. The maximum throughput of an etch tool is governed by two basic factors: wafer load/unload time and etch time. With good engineering, there is little reason why either of these times should change as the wafer size increases. Thus, wafer throughput remains constant as a function of wafer size, so that chip throughput improves in proportion to wafer area. But “good engineering” is not free: it takes work to keep the etch uniformity the same on a larger wafer, and the larger etch tools also cost more money to make. But if the tool cost does not increase as fast as the wafer area, the result is a lower cost per chip. That is the goal, and the reason we pursue larger wafer sizes.
As a simplified example, consider a wafer diameter increase of 1.5X (say, from 200 mm to 300 mm). The wafer area (and thus the approximate number of chips) increases by 2.25. If the cost of the etcher, the amount of fab floor space, and the per-wafer cost of process chemicals all increase by 30% at 300 mm, the cost per chip will change by 1.3/2.25 = 0.58. Thus, the etch cost per chip will be 42% lower for 300-mm wafers compared to 200-mm wafers.
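The arithmetic above can be written as a one-line calculation. This is a minimal sketch of the scaling argument, not anyone’s actual cost model; the function name is mine, and the 30% cost-growth figure is the example assumption from the text:

```python
def per_chip_cost_ratio(diameter_ratio, step_cost_ratio):
    """Relative per-chip cost after a wafer-size transition, for a
    process step (like etch) whose wafer throughput is independent
    of wafer size. Chips per wafer scale with wafer area, which
    scales as the square of the diameter."""
    area_ratio = diameter_ratio ** 2
    return step_cost_ratio / area_ratio

# 200 mm -> 300 mm: diameter up 1.5x; tool, floor space, and
# per-wafer chemical costs assumed to rise 30%
ratio = per_chip_cost_ratio(1.5, 1.3)
print(f"per-chip etch cost: {ratio:.2f}x")  # 1.3 / 2.25 = 0.58x, a 42% saving
```

The same function applies to any step whose wafer throughput stays constant with wafer size; only the assumed cost-growth ratio changes.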
While many process steps have the same fundamental scaling as etch – wafer throughput is almost independent of wafer size – some process steps do not. In particular, lithography does not scale this way. Lithography field size (the area of the wafer exposed at one time) has been the same for nearly 20 years (since the era of step-and-scan), and there is almost zero likelihood that it will increase in the near future. Further, the exposure time for a point on the wafer for most litho processes is limited by the speed with which the tool can step and scan the wafer (since the light source provides more than enough power).
As with etch, the total litho process time is the wafer load/unload time plus the exposure time. The load time can be kept constant as a function of wafer size, but the exposure time increases as the wafer size increases. In fact, it takes great effort to keep the stepping and scanning speed from slowing down on a larger wafer, given the greater wafer and wafer-stage mass that must be moved. And since wafer load/unload time is a very small fraction of the total process time, the result for lithography is a near-constant wafer-area throughput (rather than the constant wafer throughput of etch) as wafer size changes.
One important but frequently overlooked consequence of litho throughput scaling is that each change in wafer size increases the fraction of wafer costs attributable to lithography. In the days of 6-inch wafers, lithography represented roughly 20 – 25% of the cost of making a chip. The transition to 200-mm (8-inch) wafers lowered the per-chip cost of all process steps except lithography. As a result, the overall per-chip processing costs went down by about 25 – 30%, but the per-chip lithography costs remained constant and thus became 30 – 35% of the cost of making a chip.
The transition to 200-mm wafers increased the wafer area by 1.78. But since lithography accounted for only 25% of the chip cost at the smaller 6-inch wafer size, that area improvement affected 75% of the chip cost and gave a nice 25 – 30% drop in overall cost. The transition to 300-mm wafers gave a bigger 2.25X area advantage. However, that advantage could only be applied to the 65% of the costs that were non-litho. The result was again a 30% reduction in overall per-chip processing costs. But after the transition, with 300-mm wafers, lithography accounted for about 50% of the chip-making cost.
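Using the article’s round numbers, this per-transition arithmetic can be made explicit. A sketch only: the function is hypothetical and simply assumes the per-chip litho cost stays fixed while all other per-chip costs shrink with wafer area:

```python
def wafer_transition(litho_fraction, area_ratio):
    """Model one wafer-size transition in which the per-chip litho
    cost is unchanged while all other per-chip costs shrink in
    proportion to wafer area. Returns the new overall per-chip cost
    (relative to the old cost of 1.0) and the new litho fraction."""
    new_cost = litho_fraction + (1 - litho_fraction) / area_ratio
    return new_cost, litho_fraction / new_cost

# 6-inch -> 200 mm: area up 1.78x, litho starts at 25% of chip cost
cost, litho = wafer_transition(0.25, 1.78)
print(f"cost {cost:.2f}, litho fraction {litho:.2f}")

# 200 mm -> 300 mm: area up 2.25x, litho now ~35% of chip cost
cost, litho = wafer_transition(0.35, 2.25)
print(f"cost {cost:.2f}, litho fraction {litho:.2f}")
```

With these inputs the simple model reproduces the progression described above: roughly 30% cost drops per transition, with the litho fraction climbing from 25% toward one half of the chip-making cost.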
Every time wafer size increases, the importance of lithography to the overall cost of making a chip grows.
And therein lies the big problem with the next wafer size transition. Each wafer size increase affects only the non-litho costs, but those non-litho costs become a smaller fraction of the total with every such increase. Even if we achieve the same cost savings for the non-litho steps in the 300/450 transition as we did for the 200/300 transition, the overall impact will be smaller. Instead of the hoped-for 30% reduction in per-chip costs, we are likely to see only a 20% drop, at best.
So we must set our sights lower: past wafer size transitions gave us a 30% cost advantage, but 450-mm wafers will only give us a 20% cost benefit over 300-mm wafers. Is that good enough? It might be, if all goes well. But the analysis above applies to a world that is quickly slipping away – the world of single-patterning lithography. If 450-mm wafer tools were here today, maybe this 20% cost savings could be had. But shrinking feature sizes are requiring the use of expensive double-patterning techniques, and as a result lithography costs are growing. They are growing on a per-chip basis, and as a fraction of the total costs. And as lithography costs go up, the benefits of a larger wafer size go down.
Consider a potential “worst-case” scenario: at the time of a transition to 450-mm wafers, lithography accounts for 75% of the cost of making a chip. Let’s also assume that switching to 450-mm wafers does not change the per-chip litho costs, but lowers the rest of the costs by 40%. The result? An overall 10% drop in the per-chip cost. Is the investment and effort involved in 450-mm development worth it for a 10% drop in manufacturing costs? And is that cost decrease enough to counter rising litho costs and keep Moore’s Law alive?
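The two scenarios above (lithography at 50% versus 75% of chip cost) reduce to a single line of arithmetic. As a rough sketch, using the non-litho savings figures given in the text:

```python
def overall_cost(litho_fraction, non_litho_ratio):
    """Per-chip cost after a transition, with the per-chip litho cost
    unchanged and all non-litho per-chip costs scaled by the ratio."""
    return litho_fraction + (1 - litho_fraction) * non_litho_ratio

# litho at 50% of chip cost, non-litho costs down 42% (the 0.58 ratio)
print(f"{overall_cost(0.50, 0.58):.2f}")  # 0.79, about a 20% drop
# worst case: litho at 75% of chip cost, non-litho costs down 40%
print(f"{overall_cost(0.75, 0.60):.2f}")  # 0.90, only a 10% drop
```

The pattern is clear: the larger the litho fraction going into the transition, the smaller the payoff from the bigger wafer.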
Maybe my worst-case scenario is too pessimistic. In five or six years, when a complete 450-mm tool set might be ready, what will lithography be like? In one scenario, we’ll be doing double patterning with EUV lithography. Does anyone really believe that this will cost the same as single-patterning 193-immersion? I don’t. And what if 193-immersion quadruple patterning is being used instead? Again, the only reasonable assumption will be that lithography accounts for much more than 50% of the cost of chip production.
So what can we conclude? A transition to 450-mm wafers, if all goes perfectly (and that’s a big if), will give us less than 20% cost improvement, and possibly as low as 10%. Still, the big guys (Intel, TSMC, IBM, etc.) keep saying that 450-mm wafers will deliver 30% cost improvements. Why? Next time, I’ll give my armchair-quarterback analysis as to what the big guys are up to.