
Posts Tagged ‘design for’

Blog Review: November 25, 2013

Monday, November 25th, 2013

Zvi Or-Bach, president and CEO of MonolithIC 3D, blogs about a recent announcement by Intel CEO Brian Krzanich on expanding the company's foundry business. Or-Bach argues that if Intel can maintain the traditional 30% cost reduction per node from 28nm to 10nm while the other foundries' cost per transistor stays flat, Intel could offer its foundry customers SoC products at roughly a third of the other foundries' cost, and accordingly should do very well in its foundry business.
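A back-of-the-envelope calculation makes the claim concrete. The sketch below assumes three full node steps between 28nm and 10nm, a 30% cost-per-transistor reduction at each step for Intel, and flat cost per transistor at the other foundries, which is how Or-Bach frames the comparison.

```python
# Rough arithmetic behind the "a third of the cost" claim (assumptions: three full
# node steps from 28nm to 10nm, 30% cost-per-transistor reduction at each step,
# and flat cost per transistor at the other foundries).
steps = ["28nm -> 20nm", "20nm -> 14nm", "14nm -> 10nm"]
intel_cost = 1.0                  # normalized cost per transistor at 28nm
for step in steps:
    intel_cost *= 0.7             # 30% reduction per node
    print(f"{step}: relative cost {intel_cost:.2f}")

# The other foundries stay at 1.0, so Intel ends near 0.34 -- roughly a third.
```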

Vivek Bakshi of EUV Litho, Inc. reports on work presented at the 2013 Source Workshop (November 3-7, 2013, Dublin, Ireland), including data on the readiness of 50 W EUV sources to support EUVL scanners. At the meeting, keynoter Vadim Banine of ASML said that 50 W EUV sources have now demonstrated good dose control and are available for deployment in the field. ASML also presented data on the feasibility of 175 W source power at the first focus (720 W at the source) and on new protective cap layers intended to give collectors six months of life.

At the GaTech Global Interposer Technology Workshop (GIT) in Atlanta, the pervasive theme appeared to be whether a change in substrate is required to lower overall costs and help drive HVM (high volume manufacturing) applications. Phil Garrou reports on the workshop, including presentations from Ron Huemoeller of Amkor and David McCann of GLOBALFOUNDRIES.

Pete Singer provides a preview of a special focus session at the upcoming IEEE International Electron Devices Meeting (IEDM), scheduled for December 9-11, 2013. The session covers many of today's hot topics: memory, LEDs, silicon photonics, interposers, SOI FinFETs and 450mm.

Dr. Lianfeng Yang of ProPlus Design Solutions, Inc. blogs that circuit designers are now contending with giga-scale circuit sizes. As semiconductor CMOS technology scales down to nanometer dimensions, design for yield (DFY) has become mandatory, compelling designers to re-evaluate how they design and verify their chips.

Design for Yield Trends

Tuesday, November 12th, 2013

By Sara Ver-Bruggen

Should foundries establish and share best practices to manage sub-nanometer effects to improve yield and also manufacturability?

Team effort

Design for yield (DFY) has been described previously on this site as the gap between what designers assume they need in order to guarantee a reliable design and what the manufacturer or foundry thinks it needs from the designer to be able to manufacture the product reliably. Achieving and managing this two-way flow of information becomes more challenging as devices in high-volume manufacturing reach 28nm dimensions and the focus shifts to even smaller next-generation technologies. So is the onus on the foundries to implement DFY and to establish and share best practices and techniques to manage sub-nanometer effects to improve yield and manufacturability?

Read more: Experts At The Table: Design For Yield Moves Closer to the Foundry/Manufacturing Side

‘Certainly it is in the vital interest of foundries to do what it takes to enable their customers to be successful,’ says Mentor Graphics’ Senior Marketing Director, Calibre Design Solutions, Michael Buehler, adding, ‘Since success requires addressing co-optimization issues during the design phase, they must reach out to all the ecosystem players that enable their customers.’

Mentor refers to the trend of DFY moving closer to the manufacturing/foundry side as ‘design-manufacturing co-optimization’, which entails improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process.

But foundries can’t do it alone. ‘The electronic design automation (EDA) providers, especially ones that enable the critical customer-to-foundry interface, have a vital part in transferring knowledge and automating the co-optimization process,’ says Buehler. IP suppliers must also have a greater appreciation for and involvement in co-optimization issues so their IP will implement the needed design enhancements required to achieve successful manufacturing in the context of a full chip design.

Because they own the framework of DFY solutions, foundries that work effectively with both fabless companies and equipment vendors will benefit from more tailored DFY solutions that can lead to shorter time-to-yield, says Amiad Conley, Applied Materials' Technical Marketing Manager, Process Diagnostics and Control. But according to Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, at Cadence, the onus and responsibility is on the entire ecosystem to establish and share best practices and techniques. 'We will only achieve advanced nodes through a partnership between foundries, EDA, and the design community,' says Ya-Chieh.

But whereas foundries still take the lead when it comes to design for manufacturability (DFM), in DFY the designer is intimately involved, weighing the trade-off between yield and PPA when choosing specific design parameters, including transistor widths and lengths.

For DFM, foundries are driving design database adjustments required to make a particular design manufacturable with good yield. ‘DFM modifications to a design database often happen at the end of a designer’s task. DFM takes the “ideal” design database and manipulates it to account for the manufacturing process,’ explains Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering at ProPlus Design Solutions.

The design database that a designer delivers must have DFY considerations built in if it is to yield. 'The practices and techniques used by different design teams based on heuristics related to their specific application are therefore less centralized. Foundries recommend DFY reference flows but these are only guidelines. DFY practices and techniques are often deeply ingrained within a design team and can be considered a core competence and, with time, a key requirement,' says McGaughy.

In the spirit of collaboration

Ultimately, as the industry progresses it requires manufacturing solutions that are increasingly tailored and device-specific, which demands earlier and deeper collaboration between equipment vendors and their foundry customers in defining and developing the tailored solutions that will maximize the performance of equipment in the fab. 'It will also potentially require more three-way collaboration between the designers from fabless companies, foundries, and equipment vendors with the appropriate IP protection,' says Conley.

A collaborative and open approach between the designer and the foundry is critical and beneficial for many reasons. ‘Designers are under tight pressures schedule-wise and any new steps in the design flow will be under intense scrutiny. The advantages of any additional steps must be very clear in terms of the improvement in yield and manufacturability and these additional steps must be in a form that designers can act on,’ says Ya-Chieh. The recent trend towards putting DFM/DFY directly into the design flow is a good example of this. ‘Instead of purely a sign-off step, DFM/DFY is accounted for in the router during place and route. The router is able to find and fix hotspots during design and, critically, to account for DFM/DFY issues during timing closure,’ he says. Similarly, Ya-Chieh refers to DFM/DFY flows that are now in place for custom design and library analysis. ‘Cases of poor transistor matching due to DFM/DFY issues can be flagged along with corresponding fixing guidelines. In terms of library analysis, standard cells that exhibit too much variability can be systematically identified and the cost associated with using such a cell can be explicitly accounted for (or that cell removed entirely).’
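Ya-Chieh's library-analysis point lends itself to a small illustration. The sketch below flags standard cells whose delay variability exceeds a threshold; the cell names, delay samples and the 5% cutoff are hypothetical, invented for illustration rather than taken from any real library or flow.

```python
# Minimal sketch: flag library cells whose delay variability is too high.
# Cell names, delay samples (ps) and the threshold are all hypothetical.
import statistics

cell_delay_samples_ps = {
    "NAND2_X1": [48.0, 49.1, 47.6, 50.2, 48.8],
    "DFF_X1":   [95.0, 104.3, 88.7, 112.9, 99.5],
}

MAX_REL_SIGMA = 0.05   # flag cells whose sigma/mean exceeds 5%

for name, delays in cell_delay_samples_ps.items():
    mu = statistics.mean(delays)
    sigma = statistics.stdev(delays)
    if sigma / mu > MAX_REL_SIGMA:
        print(f"{name}: sigma/mean = {sigma / mu:.1%} -> flag for fixing or removal")
```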

‘The ability to do “design-manufacturing co-optimization” is dependent on the quality of information available and an effective feedback loop that involves all the stakeholders in the entire supply chain: design customers, IP suppliers, foundries, EDA suppliers, test vendors, and so on,’ says Buehler. ‘This starts with test chips built during process development, but it must continue through risk manufacturing, early adopter experiences and volume production ramping. This means sharing design data, process data, test failure diagnosis data and field failure data,’ he adds.

A pioneer of this type of collaboration was the Common Platform Consortium initiated by IBM. Over time, foundries have assumed more of the load for enabling and coordinating the ecosystem. 'GLOBALFOUNDRIES has identified collaboration as a key factor in its overall success since its inception and has been particularly open about sharing foundry process data,' says Buehler.

TSMC has also been a leader in establishing a well-defined program among ecosystem players, starting with the design tool reference flows it established over a decade ago. Through its Open Innovation Platform program, TSMC is helping to drive compatibility among design tools and provides interfaces between its core analysis engines and third-party EDA providers' tools.

In terms of standards, Si2 organizes industry stakeholders to drive adoption of collaborative technology for silicon design integration and improved IC design capability. Buehler adds: 'Si2 working groups define and ratify standards related to design rule definitions, DFM specifications, design database facilities and process design kits.'

Open and trusting collaboration helps explain the thriving ecosystem programs that top-tier foundries have put together. McGaughy says: 'Foundry customers, EDA and IP partners closely align during early process development and integration of tools into workable flows. One clear example is the rollout of a new process technology. From early in the process lifecycle, foundries release 0.x versions of their PDK. Customers and partners expend significant amounts of time, effort and resources to ensure the design ecosystem is ready when the process is, so that design tapeouts can start as soon as possible.'

DFY is even more critically involved in this ramp-up phase, as only when there is confidence in hitting yield targets will a process volume ramp follow. ‘As DFY directly ties into the foundation SPICE models, every new update in PDK means a new characterization or validation step. Only a close and sustained relationship can make the development and release of DFY methodologies a success,’ he states.

Experts At The Table: Design For Yield (DFY) moves closer to the foundry/manufacturing side

Friday, November 8th, 2013

By Sara Ver-Bruggen

SemiMD discussed the trend for design for yield (DFY) moving closer to the foundry/manufacturing side with Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering, ProPlus Design Solutions, Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, Cadence and Michael Buehler, Senior Marketing Director, Calibre Design Solutions, Mentor Graphics, and Amiad Conley, Technical Marketing Manager, Process Diagnostics and Control, Applied Materials. What follows are excerpts of that conversation.

SemiMD: What are the main advantages for design for yield (DFY) moving closer to the manufacturing/foundry side, and is it a trend with further potential?

Buehler: Mentor refers to this trend as ‘design-manufacturing co-optimization’ because in the best scenario it involves improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process. Companies embrace this opportunity in different ways. At one end of the scale, some fabless IC companies do the minimum they have to do to pass the foundry sign-off requirements. However, some companies embrace co-optimization as a way to compete, both by decreasing their manufacturing cost (higher yield means lower wafer costs), and by increasing the performance of their products at a given process node compared to their competition. Having a strong DFY discipline also enables fabless companies to have more portability across foundries, giving them alternate sources and purchasing power.

Ya-Chieh: Broadly speaking there are three typical insertion points for design for manufacturability (DFM)/DFY techniques. The first is in the design flow as design is being done. The second is as part of design sign-off. The last is done by the foundry as part of chip finishing.

The obvious advantage of DFY/DFM moving closer to the manufacturing/foundry side is in terms of ‘access’ to real fab data. This information is closely guarded by the fab and access is still only in terms of either encrypted data or models that closely correlate to silicon data but that have been carefully scrubbed of too many details.

However, the complexity of modern designs requires that DFM/DFY techniques need to be as far upstream in the design flows as possible/practicable. Any DFM/DFY technique that requires a modification to the design must be comprehended by designers so that any design impact can be properly accounted for so as to prevent the possibility of design re-spins late in the design cycle.

What we are seeing is not that DFM/DFY is moving closer to the manufacturing, or foundry, side, but that different techniques have been needed over the years to address the designer's need for information as early as possible. Initially much of DFM/DFY took the form of complex rule-based extensions to DRC, but much of this has since moved to include model-based and, in many cases, pattern-based checks (or some combination thereof). More recently, the trend has been towards deeper integration with design tools and more automated fixing or optimization. DFM/DFY techniques that merely highlight a "hotspot" are insufficient. Designers need to know how to fix the problem, and when there are a large number of fixes, they need the fixes to be applied automatically. In other words, the trend is about progressing towards better techniques for providing this information upstream and in ways that designers can act on.
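To make the pattern-based idea concrete, here is a toy sketch of such a check: it scans a small layout bitmap for a forbidden 2x2 pattern and reports locations that a fixing step could act on. The bitmap and the "forbidden" pattern are invented for illustration; real foundry pattern decks are far larger and far more nuanced.

```python
# Toy pattern-based hotspot check on an invented layout bitmap (1 = metal, 0 = space).
import numpy as np

layout = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
])

forbidden = np.array([[1, 0],   # checkerboard corner assumed (for this toy) to print poorly
                      [0, 1]])

hotspots = [
    (r, c)
    for r in range(layout.shape[0] - 1)
    for c in range(layout.shape[1] - 1)
    if np.array_equal(layout[r:r + 2, c:c + 2], forbidden)
]
print("hotspot locations (row, col):", hotspots)   # handed back to the router/fixer
```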

Conley: The key benefit of the DFY approach is the ability to provide tailored solutions to the relevant manufacturing steps in a way that optimizes performance based on device-specific characteristics. This trend will definitely evolve further. We definitely see the trend in the defect inspection and review loops in foundries, which are targeted at generating Paretos of the representative killer defects at major process steps. Because defects are becoming smaller and detection tools face optical limitations, design information is used today to enable smarter sampling and defect classification in the foundries. To accelerate yield ramp going forward, robust infrastructure development is needed as an enabler to extract relevant information from the chip design for the defect inspection, defect review and metrology equipment.

McGaughy: The foundation information used by designers in DFY analysis comes from the fab/foundry. This information is encapsulated in the form of statistical device models provided to the design community as part of the process design kit (PDK). Statistical models and, more recently, layout-dependent effect information are used by designers to determine the margin their design has for a particular process. This allows designers to optimize their design to achieve the desired yield versus power, performance, area (PPA) trade-off. Without visibility into process variability via the foundry-provided Simulation Program with Integrated Circuit Emphasis (SPICE) models, DFY would not be viable. Hence, foundries are clearly at the epicenter of DFY. As process complexity increases and more detailed information about process variation effects is captured in SPICE models and made available to designers, the role of the foundry can be expected to become even more important over time.
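As a rough illustration of how statistical model data translates into a margin estimate, the sketch below samples a threshold voltage from an assumed distribution and checks a simple overdrive spec; the nominal Vth, its sigma, Vdd and the spec limit are all illustrative numbers, not values from any real PDK.

```python
# Minimal sketch: statistical device-model data -> margin -> parametric yield estimate.
# All numbers are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
VTH_NOM, SIGMA_VTH = 0.40, 0.02        # volts, assumed statistical model data
VDD = 0.80
MIN_OVERDRIVE = 0.33                   # require Vdd - Vth >= 0.33 V (hypothetical spec)

vth = rng.normal(VTH_NOM, SIGMA_VTH, 100_000)       # Monte Carlo over threshold voltage
parametric_yield = np.mean(VDD - vth >= MIN_OVERDRIVE)
print(f"estimated parametric yield: {parametric_yield:.2%}")
```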

SemiMD: So does this place a challenge on the EDA industry, and how are EDA companies, such as ProPlus, helping to enable this trend?

McGaughy: The DFY challenge that designers face creates an opportunity for the EDA industry. As process complexity increases, there is less ‘margin’. Tighter physical geometries, lower supply voltage (Vdd) and threshold voltage (Vth), new device structures, new process techniques and more complex designs all squeeze margins. Margin refers to the slack designers have to ensure they can create a robust design that works not only at nominal conditions but also under real-world variability.

Tighter margins mean a greater need to carefully assess the yield versus PPA trade-off, which creates the need for DFY tools. This is where companies such as ProPlus come in. ProPlus helps designers use the foundry-provided process variation information effectively; designers can validate and even customize foundry models for specific application needs with the industry's de facto golden modeling tool from ProPlus.

SemiMD: Is this trend for DFY moving closer to the foundry/manufacturing side the only way to improve yields, as the industry continues to push towards further scaling, and all of the challenges that this entails?

Ya-Chieh: Actually, we believe the trend is towards tighter integration with design, not less!

Conley: DFY solutions alone are not sufficient; they need to be developed in conjunction with wafer fabrication equipment enhancements. Looking at the wafer inspection and review (I&R) segment, the need to detect smaller defects and effectively separate yield-killer defects from false and nuisance defects leads to increased usage of SEM-based defect inspection tools that have higher sensitivity. At Applied Materials, we are very focused on improving core capabilities in imaging and classification. In our other technology segments there is also a lot of innovation in deposition and removal chamber architecture and process technologies focused on yield improvement. DFY schemes, as well as advances in wafer fabrication equipment, are needed to improve yields as the industry advances scaling.

Buehler: Strategies aside, the fact is that beyond about 40nm, IC designs must be optimized for the target manufacturing process. At each successive node, the design rules become more complex and yield becomes more specific to an individual design. For example, layouts now have to be checked to make sure they do not contain specific patterns that cannot be accurately fabricated by the process. This is mainly because we are imaging features that are much smaller than the wavelength of the light currently used in production steppers. But there are many other complexities at advanced nodes associated with etch characteristics, via structures, fill patterns, electrical checks, chemical-mechanical polishing, double patterning, FinFET transistor nuances, and many others.

These issues are too numerous and too complex to deal with after tapeout. The foundries simply cannot remove all yield limiters by adjusting their process. For one thing, some of the issues are simply beyond the control of the process engineers. For example some layout patterns simply cannot be imaged by state-of-the-art steppers, so they must be eliminated from the design. Another problem, or challenge, is that foundries need to run designs from many customers. In most cases, very large consumer designs aside, foundries cannot afford to optimize their process flow for one customer’s design. Bottom line, design-manufacturing co-optimization issues must be taken into consideration during the physical design process.

McGaughy: More and more, yield is a shared responsibility. At older nodes, when yield was limited primarily by defect density, the foundries took on most of the responsibility. At deep nanometer nodes, this is no longer the case. Now, the design yield must be optimized via trade-offs. Foundries are pushed to provide ever better performance at each new node and this means that they too have less process margin. Rather than guard band for process variation, foundries now provide the designer with detailed visibility into how the process variation will behave. Designers in turn can now make the choices they need to make, such as whether they need performance to be competitive or how best to achieve optimal performance with lowest yield risk. This shared responsibility for yield has pushed the DFY trend to the forefront. It serves to bridge the gap between design and manufacturing and will continue to do so as process technology scales.

Monte Carlo Analysis Has Become A Gamble

Monday, October 21st, 2013

Dr. Bruce McGaughy, CTO and SVP of Engineering at ProPlus Design Solutions, Inc. blogs about the wisdom of Monte Carlo analysis when high sigma methods are perhaps better suited to today’s designs.

Years ago, someone overheard a group of us talking about Monte Carlo analysis and thought we were referring to the gambling center of Monaco, not the computational algorithms that have become the gold standard for yield prediction. All of us standing by the company water cooler had a good laugh. That someone was forgiven because he was a new hire, a recent college graduate with a degree in Finance. As a fast learner, he quickly came to understand the benefits of Monte Carlo analysis.

I was recently reminded of this scene because the limitations of Monte Carlo analysis are becoming more acute as capacity demands grow. No circuit designer would mistake Monte Carlo analysis for a roulette wheel, though chip design may seem like a game of chance today. We continue to use the Monte Carlo approach for high-dimension integration and failure analysis even as new approaches emerge.

Emerging they are. For example, high sigma methods with proven techniques are becoming more prevalent for the design of airplanes, bridges, financial models, integrated circuits and more. Moreover, high sigma methods also are used for electronic design for various applications and are proving to be accurate by validation in hardware.

New technologies, such as 16nm FinFET, add extra design challenges that require sigma levels greater than six and closer to seven, making Monte Carlo simulation even less desirable.
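For reference, these sigma levels correspond to the following single-sided Gaussian tail probabilities (standard statistics, no design-specific assumptions):

```python
# Single-sided Gaussian tail probability at 6 and 7 sigma.
import math

def tail_prob(sigma: float) -> float:
    """Probability of falling beyond `sigma` standard deviations on one side."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for s in (6.0, 7.0):
    print(f"{s:.0f} sigma -> failure probability ~ {tail_prob(s):.1e}")
# Roughly 1e-9 at 6 sigma and 1e-12 at 7 sigma: naive Monte Carlo would need on the
# order of 1/p samples to observe even a handful of failures at these levels.
```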

Let’s explore a real-world scenario using a memory design as an example where process variations at advanced technologies become more severe, leading to a greater impact on SRAM yield.

The repetitive structure of an SRAM design means an extremely low cell failure rate is necessary to ensure high chip yield. Traditional Monte Carlo analysis is impractical in this application. In fact, it's nearly impossible to finish the needed sampling because it typically requires millions or even billions of runs.

Conversely, a high sigma method can cut Monte Carlo analysis sampling by orders of magnitude. A one-megabit SRAM would require the yield of a bit cell to reach as high as 99.999999% in order to achieve a chip yield of 99%. Monte Carlo analysis would need billions of samples. The high sigma method would need mere thousands of samples to achieve the same accuracy, shortening the statistical simulation time and making it possible for designers to do yield analysis for this kind of application.
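The arithmetic behind that example can be reproduced in a few lines, assuming a 1 Mbit array of 2^20 bit cells (the cell count is an assumption made for illustration):

```python
# Required per-cell yield and naive Monte Carlo sample count for a 1 Mbit SRAM
# targeting 99% chip yield (cell count is an illustrative assumption).
N_CELLS = 2**20
CHIP_YIELD_TARGET = 0.99

cell_yield = CHIP_YIELD_TARGET ** (1.0 / N_CELLS)   # per-cell yield needed
cell_fail = 1.0 - cell_yield
print(f"required cell yield      : {cell_yield:.8%}")   # ~99.999999%
print(f"allowed cell failure rate: {cell_fail:.1e}")    # ~1e-8

# Brute-force Monte Carlo needs roughly 100/p samples to resolve a failure rate p
# with any confidence -- about 1e10 SPICE runs here, i.e. billions of samples.
print(f"samples for ~100 observed failures: {100 / cell_fail:.1e}")
```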

High sigma methods are able to identify and filter sensitive parameters, and identify failure regions. Results are shared in various outputs and include sigma convergence data, failure rates, and yield data equivalent to Monte Carlo samples.
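One common ingredient of such methods is importance sampling: shift the sampling distribution into the failure region and reweight each sample by the likelihood ratio. The minimal sketch below estimates a roughly 5.6-sigma (about 1e-8) tail probability with a few thousand samples; the threshold and sample count are illustrative, and commercial high-sigma tools are considerably more sophisticated than this.

```python
# Minimal importance-sampling sketch for a rare-event (high-sigma) failure rate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
THRESHOLD = 5.6        # "failure" beyond ~5.6 sigma, roughly a 1e-8 event
N = 20_000             # a few thousand samples, as the text suggests

# Naive Monte Carlo: at p ~ 1e-8, 20k samples almost surely observe zero failures.
naive_est = np.mean(rng.standard_normal(N) > THRESHOLD)

# Importance sampling: draw from a normal centred on the failure boundary and
# weight each sample by pdf_true(x) / pdf_shifted(x).
x = rng.normal(loc=THRESHOLD, scale=1.0, size=N)
weights = norm.pdf(x) / norm.pdf(x, loc=THRESHOLD, scale=1.0)
is_est = np.mean((x > THRESHOLD) * weights)

print(f"exact tail probability : {norm.sf(THRESHOLD):.2e}")
print(f"naive Monte Carlo      : {naive_est:.2e}")
print(f"importance sampling    : {is_est:.2e}")
```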

Monte Carlo analysis has had a good, long run for yield prediction, but in many cases it has become impractical. Emerging high sigma methods improve designer confidence in yield, power, performance and area, shorten the process development cycle and have the potential to save cost. The ultimate validation, of course, is in hardware and production usage. High sigma methods are gaining extensive silicon validation through volume production.

Let's not gamble with yield prediction; instead, let's take a more careful look at high sigma methods.

About Bruce McGaughy

Bruce McGaughy, CTO and Senior VP of Engineering at ProPlus Solutions in San Jose, CA.

Dr. Bruce McGaughy is chief technology officer and senior vice president of engineering at ProPlus Design Solutions, Inc. He was most recently Chief Architect of the Simulation Division and a Distinguished Engineer at Cadence Design Systems Inc. Dr. McGaughy previously served as an R&D VP at BTA Technology Inc. and Celestry Design Technology Inc., and later as an Engineering Group Director at Cadence Design Systems Inc. He holds a Ph.D. in EECS from the University of California at Berkeley.

