
Solid State Technology




Are We At an Inflection Point with Silicon Scaling and Homogeneous ICs?

Wednesday, October 15th, 2014


By Bill Martin, President and VP of Engineering, E-System Design

In the late 1940s, three physicists (Bardeen, Brattain and Shockley) invented the first transistor and were later awarded the Nobel Prize in 1956 (Figure 1).  Texas Instruments commercialized the silicon transistor in 1954, revolutionizing consumer products.  The invention and commercialization of the transistor, and of the integrated circuit (IC) that followed, came at a perfect point in history1,2.

From 1950 through the 1970s, the US population grew by 33% (Figure 2), income grew 170%, and disposable spending increased 259% (Figure 3).  Disposable income was aided by rising incomes but also by significant changes to marginal tax rates (Figure 4).  Consumer demand for better IC-based products, and the dollars to spend on them, provided a perfect Petri dish for honing a new technology that required new processes (silicon, packaging and PCB), new supply chains, and high-tech marketing for future IC-based technologies3,4.

In this period, Moore’s Law was coined, and it quickly drove and guided silicon manufacturers to prove their processing prowess. It also drove product companies and their marketing staffs to harness the guaranteed 2x density, improved performance, and lower cost of each next-generation silicon technology in their products.  Like an atomic clock, the market expected, and received, new capabilities every 18-24 months.

The IC treadmill was at full speed, replacing older, larger, slower, higher-maintenance products with IC-based ones.  As ICs conquered existing products, new uses, from the significant (medical devices) to the trivial (musical greeting cards), were developed to capture the growing disposable income.  In the early days, it was cheap to create any type of product to test market acceptance.

Since the 1970s, the environment has improved even further:

The US population is over 330 million, annual income is over $50K, disposable spending is approaching 50%, and tax rates continue to fall.  In addition, the entire world, 7 billion strong and growing, includes many people who want the latest products.  Today’s benchmark for product success has risen to a million or more units purchased during a product’s life span.  Some very well designed and marketed products attain this volume on the initial day of sale (iPhone)!

Cracks in the foundation:  Inflection omen?

But cracks in the foundation were starting to appear: each new generation demanded more resources, more time, and analysis of additional physical effects.  Engineers are very good at solving the issues that arise with each new generation.  One aspect that has not been addressed, however, and is racing out of control, is a design’s silicon mask costs.

Masks allow silicon foundries to build up ICs one layer at a time, defining all the geometries required for an IC to work.  Each physical layer may require one or two masks.  Until the mid-1990s, mask costs were manageable, but as the industry continued to drive toward smaller geometries, 90nm mask costs passed $1M per design5.  Process engineers had accomplished their goal of producing smaller geometries, but this escalated the required number of masks per layer, and the finer geometries increased the cost to create and inspect each mask.  Both factors had a geometric impact on mask costs.  Once past $1M per mask set, prices quickly escalated to $3-4M for a 65nm set.  This is just for the masks and does not include other product development costs; wafer, assembly and test manufacturing costs; or marketing and sales costs.

Quick math:  a product with 1 million units of sales that contains one 65nm integrated circuit must attribute $3-4 per unit to pay back ONLY the mask set expense.  An FPGA, as a design platform, is one solution, but this assumes that your design can be implemented in an FPGA, and many high-volume parts still need a dedicated, non-FPGA solution due to per-unit costs.  Think what the end product’s sale price must be for a decent return on investment (ROI).  Economics used to be a friend of silicon linear scaling, but we may be at the economic inflection point for linear scaling.  A recent SemiWiki post by Paul McLellan highlights the complexity and change required to continue silicon scaling6:
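The quick math above is a simple amortization: mask-set cost divided by lifetime unit sales. A minimal sketch, using a hypothetical $3.5M 65nm mask set (the midpoint of the $3-4M range cited) spread over one million units:

```python
def mask_cost_per_unit(mask_set_cost: float, units_sold: int) -> float:
    """Amortized mask-set cost attributed to each unit sold."""
    return mask_set_cost / units_sold

# Hypothetical figures: a $3.5M 65nm mask set amortized over
# 1 million units of lifetime product sales.
per_unit = mask_cost_per_unit(3_500_000, 1_000_000)
print(f"${per_unit:.2f} per unit just to recover the mask set")  # $3.50 per unit
```

Every dollar of that amortization flows straight into the end product's required sale price before any other development, manufacturing, or marketing cost is recovered.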

“The problem with double patterning is that it is possible to design layouts that cannot be split into two masks…
To make things worse, this is not a local phenomenon…
The introduction of both multi-patterning and FinFETs has a huge impact … the entire place and route flow needs to be completely revamped.”

Economics drive the inflection

Will all these technology issues get resolved?  Scientists and engineers have conquered most of what they have focused on (flight, space, the oceans, medicine, etc.).  In time, all of the technical issues can be resolved, but what will it cost to use these ‘solutions’?  Economics on the product development side (development costs vs. revenues generated) will cause many product developers to search for alternative solutions, or to cancel projects whose ROI is infeasible.

Moore’s Law V1.0 was based upon manufacturing unit learning curves.  Each doubling of volume helped decrease the cost to produce the next unit by improving yields.  Improving yields allowed designers to create larger die with more transistors and functionality, but at a higher cost (at least they could get >0% yield).  That higher cost drove product companies to search for the next-generation silicon node, which shrank the die to improve costs: a perfect circular system reinforcing itself.
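The learning-curve dynamic described above is commonly modeled with Wright's law: each doubling of cumulative volume multiplies unit cost by a fixed fraction. A minimal sketch, with the 80% learning rate chosen purely for illustration:

```python
import math

def wright_unit_cost(first_unit_cost: float, cumulative_units: float,
                     learning_rate: float = 0.8) -> float:
    """Wright's-law learning curve: each doubling of cumulative volume
    multiplies unit cost by `learning_rate` (0.8 = 20% cheaper per doubling)."""
    b = math.log2(learning_rate)  # negative exponent of the power law
    return first_unit_cost * cumulative_units ** b

# Doubling cumulative volume from 1M to 2M units on an assumed 80% curve:
c1 = wright_unit_cost(10.0, 1_000_000)
c2 = wright_unit_cost(10.0, 2_000_000)
print(c2 / c1)  # ≈ 0.8: each doubling cuts unit cost by about 20%
```

This is the self-reinforcing loop the column describes: falling unit costs funded larger die, whose higher cost in turn pushed designs to the next node.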

Time for Moore’s Law 2.0 (Figure 6:  More than Moore modified)

Changing to another solution requires persistence, energy, and small successes to build momentum for Moore’s Law 2.0.  Packaging becomes the focus of Moore’s Law 2.0:  2.5D and 3D allow the mixing and matching of many building blocks into miniaturized systems.  These are blocks already designed and proven, with known histories, costs, and suppliers, significantly reducing risks and development costs.

Homogeneous silicon will never be able to integrate everything into a single piece of silicon, even though it will always be available.  Too many compromises in a homogeneous process reduce the effectiveness of a given function (i.e., AMS/RF, memory, MEMS, etc.), and the costs of tools, masks, and processing complexity will quickly kill any product looking for a positive ROI.

Maybe not a secret any longer….

Secrets are hard to keep when more and more people start to talk.  In recent months, a growing number of articles and press releases have discussed companies that are exploring and/or using 2.5D/3D packaging for impressive gains.  Many of these efforts had been hidden, either to keep a competitive edge or for fear of public failure.  But once a trend gains momentum, it is difficult to stop; Moore’s Law V1.0 is a perfect example.

If your company is on the Moore’s Law V2.0 bandwagon, continue to re-examine old thoughts with a fresh perspective.

If your company is not investigating Moore’s Law V2.0, you might want to ask:  why not?


1 First transistor picture.


3 “100 Years of U.S. Consumer Spending”, U.S. Departments of Labor and Statistics, May 2006.

4 Annenberg Learner website:

5 C.R. Helms, Past President & CEO, International SEMATECH, “Semiconductor Technology Research, Development, & Manufacturing:  Status, Challenges, & Solutions,” p. 16, 2003.

6 Paul McLellan, “Place & Route with FinFETs and Double Patterning,” Sept 29, 2014.

Design and Manufacturing Technology Development in Future IC Foundries

Tuesday, September 16th, 2014


By Ed Korczynski, Sr. Technical Editor

Virtual Roundtable provides perspective on the need for greater integration within the “fabless-foundry” ecosystem

Q1:  The fabless-foundry relationship in commercial IC manufacturing was established during an era of fab technology predictability, with clear litho roadmaps for smaller and cheaper devices, but the future of fab technology seems unpredictable. The complexity that must be managed by a fabless company has already increased to the point that leaders such as Apple and Qualcomm invest in technology R&D with foundries and with EDA and OEM companies. With manufacturing process technology integrating more materials at ever-smaller nodes, how do we manage such complexity?

ANSWER:  Gregg Bartlett, Senior Vice President, Product Management, GLOBALFOUNDRIES

The vast majority of Integrated Device Manufacturers (IDMs) have either gone completely fabless or partnered with foundries for their leading-edge technology needs instead of making the huge investments necessary to keep pace with technology. The foundry opportunity is increasingly concentrated at the leading edge; this segment is expected to drive 60 percent of the total foundry market by 2016, representing a total of $27.5 billion. Yet there are fewer high-volume manufacturers that have the capabilities to offer leading-edge technologies beyond 28nm, even as the major companies have accelerated their technology roadmaps at 20nm and 14nm and added new device architectures.

This has led to a global capacity challenge. Leading-edge fabs are more expensive, and fewer, than ever. At the 130nm node, the cost to build a fab was just over $1B. For a 28nm fab the cost is about $6B, and a 14nm fab is nearly $10B. Technology development costs are rising at a similar rate, growing from a few tens of millions of dollars at 130nm to several hundreds of millions at 28nm.

On top of these technology and manufacturing challenges, product life cycles are shrinking and end users are expecting more and more from their devices in terms of performance, power-efficiency, and features. Competing on manufacturing expertise alone is no longer a viable strategy in today’s semiconductor industry, and solutions developed in isolation are not adequate. The industry must work more closely across all levels of the supply chain to understand these dynamics and how they put demands on the silicon chip.

Fortunately, the fabless/foundry model is evolving to accommodate these changing dynamics. We have been promoting this idea for years with what we like to call “Foundry 2.0.” In the 1970s and 1980s, the industry was dominated by the IDM. Then the foundry model was invented and grew to prominence in the 1990s and early 2000s, but it was much more of a contract manufacturing model. A fabless company developed a design in isolation and then “threw it over the wall” to the foundry for manufacturing. There was not much need for interplay between the two companies. Of course, as technology complexity has increased in the past decade, this dynamic has changed dramatically. We have entered the era of collaborative device manufacturing. Collaboration is a buzzword that gets thrown around a lot, but today it really is critical, and it needs to happen across all vectors, including design flow development, manufacturing supply chain, and customer engagement.

Q2:  3D in packaging started with wire-bonded chip stacks and now includes silicon interposers (a.k.a. “2.5D”) and the memory cube using through-silicon vias (TSV). How do we handle the complexity of 3D products that require chip-package co-design, with many players in the ecosystem needed throughout design, ramp, and high-volume manufacturing (HVM)?

ANSWER:  Sesh Ramaswami, Managing Director, TSV and Advanced Packaging, Silicon Systems Group of Applied Materials

Enabling 3D requires the participation of the extended ecosystem. This includes contributions from CAD and design tools (die architecture, floor planning, and layout), circuit design and test structures, methodology, wafer-level process equipment and materials, wafer-level test, assembly and packaging of stacked die, and package-level testing.

Q3:  Given the challenges with lithographic scaling below 45nm half-pitch, how does the need to integrate new materials and device structures change the fabless-foundry relationship? How much of fully-depleted channels using SOI wafers and/or FinFETs, followed by alternate channels, can the industry afford without committed demand from IDMs and major fabless players for specific variants?

ANSWER:  Adam Brand, Director of Transistor Technology, Silicon Systems Group of Applied Materials

New materials and device structures are going to play a key role in advancing the technology to the next several nodes.

With EUV delayed, multi-patterning is growing in use, and new materials are enabling its sophisticated and precise extension to the 7nm node and beyond.  The multi-patterning schemes, however, bring specific layout restrictions that will affect the design process.

For device structures, epitaxy in particular is going to enable the next generation of complex device designs, improving mobility and supporting very thin, precisely defined channel structures that scale to smaller gate lengths and pitches. For these next-generation devices the R&D challenges will be high, but the industry cannot afford to skimp on R&D to find the winning solution for the low-power transistor technology required at 7nm, 5nm, and beyond.

Q4:  Mobile consumer devices now seem to drive the leading edge of demand for many ICs. However, the Internet of Things (IoT) is often described as needing just 65nm-node chips to keep costs as low as possible, and these designs are expected to run in high volume for many years. How will these different devices, which will continue to evolve in different ways, be integrated together?

ANSWER: Michael Buehler-Garcia, Senior Director of Marketing, Calibre Design Solutions of Mentor Graphics

IoT has become the new industry buzzword.  What it has done is spotlight the multiple elements of a complete solution that do not require emerging process technologies for their chip designs. Moreover, while a chip may use a well-established process node, the actual design may be very complex. For example, Mentor is participating in the German RESCAR program to increase the reliability of automotive electronics using our Calibre PERC solution. The initial reliability checks written are targeted at 180nm and older process nodes. Why? Because today’s 180nm and older-node designs are much more complex than when these nodes were mainstream digital nodes, and as such they require more advanced verification solutions. Bottom line:  rather than a strategy of only moving to the next process node, chip design companies today have multiple options.  It is up to the ecosystem to provide solutions that allow designers to make trade-offs without major changes to their design flows.

FIGURE: Reliability simulation as part of “RESCAR” program. (Source: Fraunhofer IZM)