Solid State Technology

Posts Tagged ‘Cadence’


An EDA view of semiconductor manufacturing

Thursday, July 24th, 2014

By Gabe Moretti, Contributing Editor

The concern that there is a significant break between the tools used by designers targeting leading-edge processes (32 nm and smaller, to be precise) and those used to target older processes was dispelled during the recent Design Automation Conference (DAC). In his June keynote address at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools made in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks: “There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products, where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes. There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

This is very important, since the number of companies that can afford to use leading-edge processes is diminishing due to the very high non-recurring investment required ($100 million and more). And of course the cost of each die is also greater than with previous processes. If the tools could only be used by those customers doing leading-edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that “when you think about design complexity there are a few axes that might be used to measure it. Certainly raw gate count or transistor count is one popular measure. From a recent article in Chip Design, a look at complexity on a log scale shows that the billion-transistor mark has been eclipsed.” Figure 1, courtesy of Cadence, shows the increase in transistors per die over the last 22 years.

Fig 1

Steve continued: “Another way to look at complexity is to look at the number of functional IP units being integrated together. The graph in Figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following. This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node. At the heart of the process complexity question are metrics such as the number of parasitic elements needed to adequately model a like structure in one process versus another.” It is important to note that the percentage of IP blocks provided by third parties is getting close to 50%.

Fig 2

Steve concludes: “Yet another way to look at complexity is through the lens of the design rules and the design rule decks. The graphs below show the upward trajectory of these measures in a very significant way.” Figure 3, also courtesy of Cadence, shows the increasing complexity of the design rules provided by each foundry. This trend makes second-sourcing a design impossible, since moving to a second-source foundry would amount to doing a different design.

Fig 3

Another problem designers have to deal with is the increasing complexity due to decreasing feature sizes. Anand Iyer, Calypto Director of Product Marketing, observed: “Complexity of design is increasing across many categories, such as variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is forcing design performance to be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor in design complexity because power, especially dynamic power, is a major issue at these process nodes. Voltage cannot scale, due to noise margin and process variation considerations, while capacitance is relatively unchanged or increasing.”
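Iyer’s point about voltage scaling can be illustrated with the standard first-order CMOS switching-power relation, P = α·C·V²·f. The sketch below uses purely illustrative numbers (none come from the article or any foundry) to show why a supply voltage that cannot scale keeps dynamic power high.

```python
# First-order CMOS dynamic power: P = alpha * C * V^2 * f
# All numbers below are illustrative assumptions, not process data.

def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Switching power of a CMOS node with activity factor alpha."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

# Same switched capacitance and clock frequency, two supply voltages:
p_1v0 = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hertz=1e9)
p_0v8 = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.8, f_hertz=1e9)

print(p_1v0)          # about 0.1 W
print(p_0v8 / p_1v0)  # about 0.64 -- power scales as V squared
```

Because power goes as V², even a modest reduction in supply voltage pays off quadratically; when the noise margin forbids that reduction, as Iyer notes, capacitance and frequency are the only remaining levers.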

Impact on Back-End Tools

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a design rules file that is becoming very complex. In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules. Building rules-specific Place and Route tools would thus directly lower the number of DR checks required.

Mary Ann White of Synopsys answered: “We do not believe so. Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects involved in printing the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different. The use of multi-patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”
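The “coloring and decomposition” White refers to can be pictured as graph coloring: any two features spaced closer than the single-exposure pitch must land on different masks. A much-simplified sketch of that idea follows; the feature names are hypothetical, and real decomposition engines handle geometry extraction, stitching and far more.

```python
# Double-patterning decomposition as graph 2-coloring (simplified sketch).
# Nodes are layout features; an edge joins two features spaced closer than
# the single-exposure pitch. A legal decomposition assigns adjacent
# features to different masks; an odd conflict cycle means no two-mask
# solution exists (the layout must change, or a third mask is needed).

from collections import deque

def decompose(features, conflicts):
    """Return {feature: mask 0 or 1}, or None if an odd cycle blocks it."""
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    mask = {}
    for start in features:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:          # breadth-first traversal of one component
            f = queue.popleft()
            for g in adj[f]:
                if g not in mask:
                    mask[g] = 1 - mask[f]  # neighbor goes on the other mask
                    queue.append(g)
                elif mask[g] == mask[f]:
                    return None            # odd conflict cycle detected
    return mask

colors = decompose(["A", "B", "C"], [("A", "B"), ("B", "C")])
print(colors)  # {'A': 0, 'B': 1, 'C': 0}
```

This also shows why the flow is foundry-independent in the way White describes: the coloring procedure is the same everywhere, and only the spacing rules that generate the conflict edges differ between foundries.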

Steve Carlson of Cadence shares that opinion: “There have been subtle differences between requirements at new process nodes for many generations. Customers do not want to have different tool strategies for a second-source foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration). In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers. It is doubtful that different tools will be spawned for different foundries. How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision. The use model users want is singular across all foundry options. How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy. Time will tell.”

This is clear for now. But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively. Changing foundry will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools. At the just-concluded DAC I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory at advanced technology nodes. Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development, and hence reducing time-to-market. He said: “It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for design and optimization of particular circuits, systems and corresponding products. One of the main challenges is to factor the device variability accurately into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to process-induced variability related predominantly to silicon channel thickness or shape variation.” He continued: “However, until now TCAD simulation, compact model extraction and circuit simulation have typically been handled by different groups of experts, and often by separate departments, in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help DTCO practices.”
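The statistical variability Asenov describes is commonly explored by Monte Carlo analysis over transistor parameters such as threshold voltage (Vth). The toy sketch below uses a simple normal model; the sigma values are illustrative assumptions, not foundry data, chosen only to contrast a highly doped bulk channel with a lightly doped FDSOI/FinFET channel.

```python
# Toy Monte Carlo of threshold-voltage (Vth) variability.
# Sigma values are illustrative assumptions, not measured data: random
# dopant fluctuation in a highly doped bulk channel produces a much
# wider Vth spread than a lightly doped FDSOI or FinFET channel.

import random
import statistics

def vth_samples(n, vth_nominal, sigma_vth):
    """Draw n transistor Vth values around the nominal (normal model)."""
    return [random.gauss(vth_nominal, sigma_vth) for _ in range(n)]

random.seed(0)  # reproducible runs
bulk   = vth_samples(10_000, vth_nominal=0.45, sigma_vth=0.050)  # high channel doping
finfet = vth_samples(10_000, vth_nominal=0.45, sigma_vth=0.015)  # low channel doping

print(round(statistics.stdev(bulk), 3))    # should be close to 0.050
print(round(statistics.stdev(finfet), 3))  # should be close to 0.015
```

In a real DTCO flow the samples would come from TCAD-calibrated compact models rather than a plain Gaussian, and the resulting distributions would feed statistical circuit simulation, which is exactly the cross-tool handoff Asenov says slows the DTCO cycle today.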

Ansys pointed out that at advanced FinFET process nodes the operating voltage for the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins for the devices. With several transient modes of operation in low-power ICs, having an accurate representation of the package model is mandatory for accurate noise coupling simulations. Distributed package models with bump-level resolution are required for performing chip-package-system simulations for accurate noise coupling analysis.

Further Exploration

The topic of semiconductor manufacturing has generated a large number of responses. As a result, the next monthly article will continue to cover the topic, with particular focus on the impact of leading-edge processes on EDA tools and practices.

This article was originally published on Systems Design Engineering.

The Week in Review: June 20, 2014

Friday, June 20th, 2014

GS Nanotech, a microelectronics product development and manufacturing center, plans to launch mass assembly of 3D stacked TSV (through-silicon via) microcircuits in the next few years.

Bookings and billings maintained a consistent pace in May 2014, as the North American semiconductor equipment industry posted a May 2014 book-to-bill ratio of 1.00.
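The book-to-bill ratio cited here is simply new orders booked divided by billings (shipments) over the same period; 1.00 means orders exactly kept pace with shipments. A minimal sketch, with illustrative dollar figures rather than SEMI's actual data:

```python
# Book-to-bill ratio = bookings (new orders) / billings (shipments)
# over the same period. Figures below are illustrative, not SEMI data.

def book_to_bill(bookings, billings):
    """Ratio > 1 means demand is outpacing shipments; < 1 the reverse."""
    return bookings / billings

print(round(book_to_bill(bookings=1_300.0, billings=1_300.0), 2))  # 1.0
print(round(book_to_bill(bookings=1_430.0, billings=1_300.0), 2))  # 1.1
```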

Inpria Corporation, a developer of high-resolution photoresists, announced that it has received additional equity investment and commitments totaling $1.45 million.

Entegris Inc. inaugurated its new i2M Center for Advanced Materials Science in Bedford, Massachusetts.

memsstar Limited, a provider of etch and deposition equipment and technology solutions to manufacturers of semiconductors and micro-electrical mechanical systems (MEMS), announced that it has relocated to a new, larger facility.

A UC Riverside-led research project is among the 32 named by U.S. Energy Secretary Ernest Moniz as Energy Frontier Research Centers (EFRCs), designed to accelerate the scientific breakthroughs needed to build a new 21st-century energy economy in the United States.

Cadence Design Systems, Inc. announced that it has completed the acquisition of Jasper Design Automation, Inc.

Analog Devices, Inc., a developer of high-performance semiconductors for signal processing applications, announced that Dr. Edward Frank has been elected as a Director of the Company.

The Week in Review: April 25, 2014

Friday, April 25th, 2014

GaN-on-Si is entering production. In this context, what is the patent situation? Yole Developpement and KnowMade investigate.

Spansion is adding three new Serial NOR and three new NAND memory densities specifically qualified to meet the extended temperature ranges and stringent quality requirements of the automotive industry.

The IEEE Photonics Conference 2014 (IPC-2014) has announced a Call for Papers seeking original technical presentations in lasers, optoelectronics, optical fiber networks and related topics for the industry’s premier fall photonics conference.

Zeta Instruments, Inc., an optical profiling and inspection company providing solutions for high-tech manufacturing, has announced that Jeff Donnelly has joined the company as its chief operating officer.

Intermolecular, Inc. announced that Epistar Corp. and Intermolecular have signed a multi-year extension of their existing collaborative development program and royalty-bearing IP licensing agreement to increase the efficiency and reduce the cost of Epistar’s LED devices.

GaN Systems, a developer of gallium nitride power switching semiconductors, has announced that Julian Styles, Director of Business Development USA, has been elected to the Board of the Power Electronics Industry Collaborative (PEIC).

Recognizing the changing dynamics of the microelectronics industry in Southeast Asia, SEMI announced the expanded scope of its industry-leading SEMICON regional exposition which will now rotate between Singapore and other locations within Southeast Asia. The new SEMICON Southeast Asia will be launched in 2015.

Cadence announced plans to acquire Jasper Design Automation for $170 million in cash. The transaction is expected to close in the second quarter of fiscal 2014.

Solid State Watch: April 18-25, 2014

Friday, April 25th, 2014

Big sell: IP Trends and Strategies

Monday, March 10th, 2014

By Sara Ver-Bruggen, SemiMD Editor

Experts at the table: Continued strong growth for semiconductor intellectual property (IP) through 2017 has been forecast by Semico Research. Semiconductor Manufacturing & Design invited Steve Roddy, Product Line Group Director, IP Group at Cadence, Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify and Grant Pierce, CEO at Sonics to discuss how the IP landscape is changing and provide some perspectives, as the industry moves to new device architectures.

SemiMD: How are existing SIP strategies adapting for the transition to the 20 nm generation of system-on-chips (SoCs)?

Roddy: The move to 22/16 nm process nodes has accelerated the trend towards the adoption of commercial interface and physical IP. The massive learning curve in dealing with new transistor structures (FinFET, fully depleted SOI, high-k) has raised the price of building in-house physical IP for internal consumption, thus compelling yet another wave of larger semiconductor IDMs and fabless semi vendors to leverage external IP for a greater share of their overall portfolio of physical IP needs.

Pierce: With 20 nm processes, the number of SIP cores and the size of memory accessed by those cores is seeing double digit growth. This growth translates into tremendous complexity that requires a solution for abstracting away the sheer volume of data generated by chip designs. The 20 nm processes will drive the need for SoC subsystems that abstract away the detailed interaction of step-by-step processing. For example, raising the abstraction of a video stream up to the level of a video subsystem; the collection of the various pieces of video processing into a single unit.
In this scenario, the big challenge becomes integration of subsystem units to create the final SoC. Meeting this challenge places a premium value on SIP that facilitates the efficient management of memory bandwidth to feed the growing number of SoC subsystems in the designs. Furthermore, 20 nm SoC designs will also place higher value on SIP that helps manage and control power in the context of applications running across these subsystems.

Smith: We are seeing many of the larger systems companies bypassing 20 nm entirely and moving from 28nm process technologies to the upcoming generation of 16 nm/14 nm FinFET technologies. FinFET offers the benefits of much lower power at equivalent performance or much higher performance at similar power to existing technologies. While 20 nm offers some gains, there are compelling competitive reasons to move quickly beyond 28/20 nm.
The demand for FinFET processes will naturally push the demand for the critical SIP blocks needed to support SoC designs at this node. SIP providers will need to migrate SIP blocks to the new technology and, for the most critical, will need to prove them out in silicon. The foundries will need to encourage this activity as SIP will typically make up more than 60-70% of the designs that will be slated for the new FinFET processes.

SemiMD: Within the semiconductor intellectual property (SIP) SoC subsystems market, which subsystem categories are likely to see most growth and how is the market evolving in the near term?

Pierce: Internet of Things (IoT) is causing an explosion in the number of sensors per device that are collecting huge amounts of data to be used locally or in the cloud. However, many of these sensors will need to operate at very low power levels, off of tiny batteries or scavenged energy. Sensor subsystems will need to carefully integrate the required processing and memory resources without support from the host processor. Some of the most interesting and challenging sensor subsystems will be imaging-related, where the processing loads can be highly dynamic, but the power requirements can be particularly challenging. Additionally, MEMS subsystems will grow in importance because this technology will often be used for power harvesting in IoT endpoint devices.

Smith: High-speed interfaces will see the most growth. DDR is at the top with DDR typically being the highest performance interface in the system and also the most critical. The DDR interface is at the heart of system operation and, if it does not operate reliably, the system won’t function. Other high-speed interfaces especially for video will also see tremendous growth, particularly in the mobile area.

Roddy: The emergence of a ‘subsystems’ IP market is to date over-hyped. That’s not to say that customers of IP are content with the status quo of 2008 where many IP blocks were purchased in isolation from a multitude of vendors. Customers do want a large portfolio of IP blocks that they can quickly stitch together, with known interoperability, provided with useful and usable verification IP. For that reason, we’ve seen a consolidation in the semiconductor IP business within the past five years, accelerating even further in 2012 and 2013. Larger providers such as Cadence can deliver a broad portfolio of IP while ensuring consistency, common support infrastructure, consistent best-in-class verification, and lowered transaction costs. But what customers don’t want is a pre-baked black-box that locks down system design issues that are best answered by the SOC designer in the context of the specific chip architecture. For that reason we expect to see slow growth in the class of ready-made, fully-integrated subsystems where the cost of development for the IP vendor far outweighs the added value delivered.

SemiMD: How will third-party SIP outsourcing models become more important as the industry embarks on the 20 nm generation of SoCs, and what are IP vendors doing to enable the industry’s transition to the 20 nm generation of SoCs?

Roddy: As the costs of physical IP development scale up with the increasing costs of advanced process node design, more consumers of IP are increasing the percentage of IP they outsource. Buyers of IP will always analyze the make versus buy equation by weighing several factors, including the degree of differentiation that a particular piece of IP can bring to their chips. Fully commoditized IP is easy to decide to outsource. Highly proprietary IP stays in house. But the lines are never black and white – there are always shades of grey. The IP vendors that can provide rapid means to customize pre-existing IP blocks are the vendors that will capture those incrementally outsourced blocks. The Cadence IP Factory concept of using automation to assemble and configure IP cores is one way that IP vendors can offer a blend of off-the-shelf cost savings with an appropriate touch of value-added differentiation.

Pierce: From a business perspective, SIP outsourcing is inevitable for all functions that are not proprietary to the end system or SoC. It will not be feasible to develop and maintain all the expertise necessary to design and build a 20 nm device. The demand to abstract up to a subsystem solution will drive a consolidation of SIP suppliers under a common method of integration, for example a platform-based approach built around on-chip networks. Platform integration will be a key requirement for SIP suppliers.

Smith: SIP vendors are looking to the foundries and/or large systems companies to become partners in the development of the critical IP blocks needed to support the move to FinFET.

SemiMD: Are there examples of the ‘perfect’ SIP strategy in the industry, in terms of leveraging internal and third party SIP?

Smith: Yes. Even the largest semiconductor companies go outside for certain SIP blocks. It is virtually impossible for any individual company to have the resources (both human and capital) to develop and support the wide variety of SIP needed in today’s most complex SoC designs.

Pierce: The perfect SIP strategy in the industry is one that readily enables use of any SIP in any chip at any time. Pliability of architecture over a broad range of applications is a winning strategy. Agile integration of SIP cores and subsystems will become a critical strategic advantage. No one company exemplifies perfect SIP strategy today, but the rewards will be great for those companies that get closest to perfection first.

Roddy: There is no one-size-fits-all IP strategy that is perfect for all SOC design teams. The teams have to carefully consider their unique business proposition before embarking on an IP procurement strategy. For example, the tier 1 market leader in a given segment is striving to define and exploit new markets. That Tier 1 vendor will need to push new standards; add new value-add software features; and innovate in hardware, software and business models. For the Tier 1, building key value-add IP in-house, or partnering with an IP vendor that can rapidly customize standards-based IP is the way to go. On the other end of the spectrum, the ‘fast follower’ company looking to exploit a rapidly expanding market will be best served by outsourcing as close to 100% as possible of the needed IP. For this type of company, speed is of the essence and critical is the need to partner with IP vendors with the broadest possible portfolio to get a chip done fast and done right.

SemiMD: What challenges and also what opportunities is China’s growing SIP subsystems market presenting for the semiconductor industry?

Roddy: China is one of the most dynamic markets today for semiconductor IP. The overall Chinese semiconductor market is growing rapidly and a growing number of Chinese system OEMs are increasing investment levels, including taking on SOC design challenges previously left to the semiconductor vendors. By partnering with the key foundries to enable a portfolio of IP in specific process technology nodes for mutual customers, the leading IP providers such as Cadence are setting the buffet table at which the Chinese SOC design teams will fill their plates with interoperable, compatible, tested and verified physical IP blocks that will ensure fast time to market success.

Pierce: China is a fast growing market for SIP solutions in general. It is also a market that highly values the time-to-market benefit that SIP delivers as the majority of China’s products are consumer-oriented with short design cycles. SIP subsystems will be the most palatable for consumption by the China market. However, because China has adopted a number of regional standards, there will be substantial pressure on subsystem providers to optimize for local standards.

Smith: We see tremendous opportunities in terms of new business for SIP from both established companies and many entrepreneurial startups. Challenges include pricing pressure and the concern over IP leakage or copying. While this has become less of an issue over the years, it is still a concern. The good news is that the market in China is very aggressive and willing to take risks to get ahead.

Experts At The Table: Exploring the relationship between board-level design and 3D, and stacked, dies

Tuesday, December 17th, 2013

By Sara Verbruggen

SemiMD discussed what board level design can tell us about chip-level (three-dimensional) 3D and stacked dies with Sesh Ramaswami, Applied Materials’ Managing Director, TSV and Advanced Packaging, Advanced Product Technology Development, and Kevin Rinebold, Cadence’s Senior Product Marketing Manager. What follows are excerpts of that conversation.

SemiMD: What key, or major, challenge does the transition to 3D and stacked dies – and increasingly ‘advanced packaging’ – present when it comes to board-level design?

Ramaswami: The three-layer system architecture comprising the printed circuit board (PCB) system board, organic packaging substrate and silicon die offers the greatest integration flexibility. From a design perspective, this configuration places the most intensive co-design challenges on the die and substrate layers. On the substrate, the primary challenges are dielectric material, copper (Cu) line spacing and via scaling. However, when the packaged die attaches to the PCB through the ball grid array (BGA), the surface-mount packaging used for integrated circuits such as microprocessors, the design challenges are more considerable. For example, they include limitations on chip size (I/O density), warpage, and concerns about coefficient of thermal expansion mismatch between the materials.

Rinebold: Any advanced ‘BGA style’ package, regardless of whether it is three-dimensional (3D) or flat, can have a significant impact on PCB layer count, route complexity, and cost. Efficient package ball pad net assignment and patterning of power and ground pins can make the difference between a four-layer and a six-layer PCB. Arriving at the optimal ball pad assignment necessitates coordinated planning across the entire interconnect chain, from chip-level macros to board-level components. This planning requires new tools and flows capable of delivering a multi-fabric view of the system hierarchy while providing access to domain-specific data like macro placement, I/O pad ring devices, bump patterns, ball pad assignments, and placement of critical PCB components and connectors.

SemiMD: 3D chip stacking and stacked die chip-scale packaging is favoured by the consumer electronics industry to enable better performing mobile electronics – in terms of faster performance, less power hungry devices, and so forth – but how do PCB design and testing tools need to adapt?

Rinebold: One benefit of these package formats is that they entail moving most of the high-performance interconnect and components off the PCB onto their own dedicated substrate. With increasing data rates and lower voltages there is little margin for error across the entire system placing a premium on signal quality and power delivery between the board and package.

In addition to high-speed constraints and checking, design tools must provide innovative functionality to assist the designer in implementing high-performance interconnect. In some situations complete automation (like auto-routing) cannot provide satisfactory results while still enforcing the number of diverse and sometimes ambiguous constraints. Designers will require auto-interactive tools that enable them to apply their experience and intuition, supported by semi-automatic route engines, for efficient implementation of constraints and interconnect. Examples of such tools include the ability to plan and implement break-out at the two ends of an interface connecting high pin count BGAs, to reduce route time and via counts. Without such tools the time to route high pin count BGAs can increase significantly.

Methodologies must adapt to incorporate electrical performance assessment (EPA) into the design process. EPA enables designers to evaluate electrical quality and performance throughout the design process, helping avoid the back-end analysis crunch that can jeopardize product delivery. It utilizes extraction technology in a manner that provides actionable feedback to the designer, helping identify and avoid issues related to impedance discontinuities, timing, coupling, or direct current (DC) density.
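The DC current density check mentioned here can be approximated at first order as J = I/(w·t) over the trace cross-section. A minimal sketch follows; the trace dimensions and the allowable limit are illustrative assumptions, not values from any design rule deck or IPC table.

```python
# First-order DC current-density check for a copper trace: J = I / (w * t).
# The dimensions and the allowable limit are illustrative assumptions.

def current_density_a_per_mm2(current_a, width_mm, thickness_mm):
    """Average DC current density over the trace cross-section."""
    return current_a / (width_mm * thickness_mm)

# 2 A through a 0.2 mm wide trace in 0.035 mm (1 oz) copper:
j = current_density_a_per_mm2(2.0, width_mm=0.2, thickness_mm=0.035)
print(round(j))  # 286 A/mm^2

J_LIMIT = 400.0  # assumed allowable A/mm^2 for this sketch only
print(j < J_LIMIT)  # True -- passes this simple screen
```

A real EPA flow works on extracted geometry and checks localized density at necks and via transitions rather than this single average, but the feedback loop to the designer is the same idea: flag the violating net while the layout can still be changed cheaply.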

SemiMD: More specifically, what impact will this trend towards greater compactness – i.e. smaller PCB footprint, but with more stacked dies and complex packaging – have on interconnection technologies?

Ramaswami: The trend towards better-quality, higher-component-density PCBs capable of supporting a wide range of die has significant implications for interconnect design. An additional challenge is attaching complex chips on both sides of a board. Furthermore, with PCBs becoming thinner to fit the thin form factor requirements of mobile devices, dimensional stability and warpage must be addressed.

Rinebold: In some regards stacked applications simplify board level layout by moving high-bandwidth interconnect off the PCB and consolidating it on smaller, high density advanced package substrates. However, decreasing package ball pad pitch and increased pin density will drive use of build-up substrate technology for the PCB. This high density interconnect (HDI) enables smaller feature sizes and manufacturing accuracy necessary to support the fan-out routing requirements of these advanced package formats. Design tools must support HDI constraints and rules to ensure manufacturability along with functionality to define and manipulate the associated structures like microvias.

SemiMD: How will PCB manufacturing processes, tools and materials need to change to address this challenge?

Ramaswami: To manufacture a more robust integrated 3D stack, I think several fundamental innovations are needed. These include improving defect density and developing new materials such as low warpage laminates and less hygroscopic dielectrics. Another essential requirement is supporting finer copper line/spacing. Important considerations here are maintaining good adhesion while watching out for corrosion. Finally, for creating the necessary smaller vias, the industry needs new etching techniques to replace mechanical drilling techniques.

SemiMD: So as 3D chip stacking and stacked dies become more mainstream technologies, how will board level design need to develop, in the years to come?

Rinebold: One challenge will be visibility and consideration of the PCB during chip-level floor-planning, and awareness of how decisions made early on impact downstream performance and cost. New tools that deliver a multi-fabric view of the system hierarchy while providing access to domain-specific data will facilitate the necessary visibility for coordinated decision making. However, these planning tools are just one component of an integrated flow encompassing logic definition, implementation, analysis, and sign-off for the chip, package, and PCB.

Design for Yield Trends

Tuesday, November 12th, 2013

By Sara Ver-Bruggen

Should foundries establish and share best practices to manage sub-nanometer effects to improve yield and also manufacturability?

Team effort

Design for yield (DFY) has been referred to previously on this site as the gap between what the designers assume they need in order to guarantee a reliable design and what the manufacturer or foundry thinks they need from the designer to be able to manufacture the product in a reliable fashion. Achieving and managing this two-way flow of information becomes more challenging as devices in high volume manufacturing have 28 nm dimensions and the focus is on even smaller dimension next-generation technologies. So is the onus on the foundries to implement DFY and establish and share best practices and techniques to manage sub-nanometer effects to improve yield and also manufacturability?

Read more: Experts At The Table: Design For Yield Moves Closer to the Foundry/Manufacturing Side

‘Certainly it is in the vital interest of foundries to do what it takes to enable their customers to be successful,’ says Mentor Graphics’ Senior Marketing Director, Calibre Design Solutions, Michael Buehler, adding, ‘Since success requires addressing co-optimization issues during the design phase, they must reach out to all the ecosystem players that enable their customers.’

Mentor refers to the trend of DFY moving closer to the manufacturing/foundry side as ‘design-manufacturing co-optimization’, which entails improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process.

But foundries can’t do it alone. ‘The electronic design automation (EDA) providers, especially ones that enable the critical customer-to-foundry interface, have a vital part in transferring knowledge and automating the co-optimization process,’ says Buehler. IP suppliers must also have a greater appreciation for, and involvement in, co-optimization issues so their IP will implement the design enhancements required for successful manufacturing in the context of a full chip design.

As they own the framework of DFY solutions, foundries that work effectively with both fabless companies and equipment vendors will benefit from more tailored DFY solutions that can lead to shorter time-to-yield, says Amiad Conley, Applied Materials’ Technical Marketing Manager, Process Diagnostics and Control. But according to Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, at Cadence, the onus and responsibility is on the entire ecosystem to establish and share best practices and techniques. ‘We will only achieve advanced nodes through a partnership between foundries, EDA, and the design community,’ says Ya-Chieh.

But whereas foundries still take the lead when it comes to design for manufacturability (DFM), for DFY the designer is intimately involved, accounting for the optimal trade-off between yield and PPA (power, performance, area) that results in choices for specific design parameters, including transistor widths and lengths.

For DFM, foundries are driving design database adjustments required to make a particular design manufacturable with good yield. ‘DFM modifications to a design database often happen at the end of a designer’s task. DFM takes the “ideal” design database and manipulates it to account for the manufacturing process,’ explains Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering at ProPlus Design Solutions.

The design database that a designer delivers must have DFY considerations to be able to yield. ‘The practices and techniques used by different design teams based on heuristics related to their specific application are therefore less centralized. Foundries recommend DFY reference flows but these are only guidelines. DFY practices and techniques are often deeply ingrained within a design team and can be considered a core competence and, with time, a key requirement,’ says McGaughy.

In the spirit of collaboration

Ultimately, as the industry continues to progress, requiring manufacturing solutions that are increasingly tailored and device specific, earlier and deeper collaboration is needed between equipment vendors and foundry customers in defining and developing the tailored solutions that will maximize the performance of equipment in the fab. ‘It will also potentially require more three-way collaboration between the designers from fabless companies, foundries, and equipment vendors, with appropriate IP protection,’ says Conley.

A collaborative and open approach between the designer and the foundry is critical and beneficial for many reasons. ‘Designers are under tight pressures schedule-wise and any new steps in the design flow will be under intense scrutiny. The advantages of any additional steps must be very clear in terms of the improvement in yield and manufacturability and these additional steps must be in a form that designers can act on,’ says Ya-Chieh. The recent trend towards putting DFM/DFY directly into the design flow is a good example of this. ‘Instead of purely a sign-off step, DFM/DFY is accounted for in the router during place and route. The router is able to find and fix hotspots during design and, critically, to account for DFM/DFY issues during timing closure,’ he says. Similarly, Ya-Chieh refers to DFM/DFY flows that are now in place for custom design and library analysis. ‘Cases of poor transistor matching due to DFM/DFY issues can be flagged along with corresponding fixing guidelines. In terms of library analysis, standard cells that exhibit too much variability can be systematically identified and the cost associated with using such a cell can be explicitly accounted for (or that cell removed entirely).’

‘The ability to do “design-manufacturing co-optimization” is dependent on the quality of information available and an effective feedback loop that involves all the stakeholders in the entire supply chain: design customers, IP suppliers, foundries, EDA suppliers, test vendors, and so on,’ says Buehler. ‘This starts with test chips built during process development, but it must continue through risk manufacturing, early adopter experiences and volume production ramping. This means sharing design data, process data, test failure diagnosis data and field failure data,’ he adds.

A pioneer of this type of collaboration was the Common Platform Consortium initiated by IBM. Over time, foundries have assumed more of the load for enabling and coordinating the ecosystem. ‘GLOBALFOUNDRIES has identified collaboration as a key factor in its overall success since its inception and has been particularly open about sharing foundry process data,’ says Buehler.

TSMC has also been a leader in establishing a well-defined program among ecosystem players, starting with the design tool reference flows it established over a decade ago. Through its Open Innovation Platform program, TSMC is helping to drive compatibility among design tools and provides interfaces from its core analysis engines to third-party EDA providers.

In terms of standards, Si2 organizes industry stakeholders to drive adoption of collaborative technology for silicon design integration and improved IC design capability. Buehler adds: ‘Si2 working groups define and ratify standards related to design rule definitions, DFM specifications, design database facilities and process design kits.’

Open and trusting collaboration underpins the thriving ecosystem programs that top-tier foundries have put together. McGaughy says: ‘Foundry customers, EDA and IP partners closely align during early process development and integration of tools into workable flows. One clear example is the rollout of a new process technology. From early in the process lifecycle, foundries release 0.x versions of their PDK. Customers and partners expend significant amounts of time, effort and resources to ensure the design ecosystem is ready when the process is, so that design tapeouts can start as soon as possible.’

DFY is even more critically involved in this ramp-up phase, as only when there is confidence in hitting yield targets will a process volume ramp follow. ‘As DFY directly ties into the foundation SPICE models, every new update in PDK means a new characterization or validation step. Only a close and sustained relationship can make the development and release of DFY methodologies a success,’ he states.

Experts At The Table: Design For Yield (DFY) moves closer to the foundry/manufacturing side

Friday, November 8th, 2013

By Sara Ver-Bruggen

SemiMD discussed the trend for design for yield (DFY) moving closer to the foundry/manufacturing side with Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering, ProPlus Design Solutions; Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, Cadence; Michael Buehler, Senior Marketing Director, Calibre Design Solutions, Mentor Graphics; and Amiad Conley, Technical Marketing Manager, Process Diagnostics and Control, Applied Materials. What follows are excerpts of that conversation.

SemiMD: What are the main advantages for design for yield (DFY) moving closer to the manufacturing/foundry side, and is it a trend with further potential?

Buehler: Mentor refers to this trend as ‘design-manufacturing co-optimization’ because in the best scenario it involves improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process. Companies embrace this opportunity in different ways. At one end of the scale, some fabless IC companies do the minimum they have to do to pass the foundry sign-off requirements. However, some companies embrace co-optimization as a way to compete, both by decreasing their manufacturing cost (higher yield means lower wafer costs), and by increasing the performance of their products at a given process node compared to their competition. Having a strong DFY discipline also enables fabless companies to have more portability across foundries, giving them alternate sources and purchasing power.

Ya-Chieh: Broadly speaking there are three typical insertion points for design for manufacturability (DFM)/DFY techniques. The first is in the design flow as design is being done. The second is as part of design sign-off. The last is done by the foundry as part of chip finishing.

The obvious advantage of DFY/DFM moving closer to the manufacturing/foundry side is in terms of ‘access’ to real fab data. This information is closely guarded by the fab, and access is still only in terms of either encrypted data or models that closely correlate to silicon data but have been carefully scrubbed of proprietary detail.

However, the complexity of modern designs requires that DFM/DFY techniques need to be as far upstream in the design flows as possible/practicable. Any DFM/DFY technique that requires a modification to the design must be comprehended by designers so that any design impact can be properly accounted for so as to prevent the possibility of design re-spins late in the design cycle.

What we are seeing is not that DFM/DFY is moving closer to the manufacturing, or foundry, side, but that different techniques have been needed over the years to address the designer’s need for information as early as possible. Initially much of DFM/DFY was in the form of complex rule-based extensions to DRC, but much of this has since moved to include model-based and, in many cases, pattern-based checks (or some combination thereof). More recently, the trend has been towards deeper integration with design tools and more automated fixing or optimization. DFM/DFY techniques that merely highlight a “hotspot” are insufficient. Designers need to know how to fix the problem and, when the number of fixes is large, how to apply the fixes automatically. In other words, the trend is about progressing towards better techniques for providing this information upstream, in ways that designers can act on.
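The pattern-based checks Ya-Chieh describes can be pictured as scanning a layout for known-bad geometries. The sketch below is a deliberately simplified illustration (a real DFM deck matches far richer patterns against full mask data); the grid model, the `find_hotspots` function and the example pattern are all hypothetical, not any vendor's API.

```python
# Illustrative pattern-based hotspot check: the layout is modeled as a
# binary grid of filled (1) / empty (0) cells, and we scan for every
# occurrence of a known-bad sub-pattern. Purely a toy model of the idea.

def find_hotspots(layout, pattern):
    """Return (row, col) positions where `pattern` occurs in `layout`."""
    lr, lc = len(layout), len(layout[0])
    pr, pc = len(pattern), len(pattern[0])
    hits = []
    for r in range(lr - pr + 1):
        for c in range(lc - pc + 1):
            if all(layout[r + i][c + j] == pattern[i][j]
                   for i in range(pr) for j in range(pc)):
                hits.append((r, c))
    return hits

# Hypothetical known-bad pattern: two features separated by one empty track.
bad_pattern = [[1, 0, 1]]

layout = [
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
]

print(find_hotspots(layout, bad_pattern))  # prints [(0, 0), (2, 1)]
```

A router-integrated flow, as described above, would go one step further and not just report these positions but perturb the geometry during place and route so the pattern never appears.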

Conley: The key benefit of the DFY approach is the ability to provide tailored solutions to the relevant manufacturing steps in a way that optimizes performance based on device-specific characteristics. This trend will definitely evolve further. We already see it in the defect inspection and review loops in foundries, which are targeted at generating Pareto charts of the representative killer defects at major process steps. Because defects are becoming smaller and the detection tools face optical limitations, design information is used today to enable smarter sampling and defect classification in the foundries. To accelerate yield ramp going forward, robust infrastructure development is needed to extract relevant information from chip design for the defect inspection, defect review and metrology equipment.

McGaughy: The foundation information used by designers in DFY analysis comes from the fab/foundry. This information is encapsulated in the form of statistical device models provided to the design community as part of the process design kit (PDK). Statistical models and, more recently, layout-dependent effect information is used by designers to determine the margin their design has for a particular process. This allows the designers to optimize their design to achieve the desired yield versus power, performance, area (PPA) trade-off. Without visibility into process variability via the foundry-provided Simulation Program with Integrated Circuit Emphasis (SPICE) models, DFY would not be viable. Hence, foundries are clearly at the epicenter of DFY. As process complexity increases and more detailed information of process variation effects are captured into SPICE models and made available to designers, it can be expected that the role of the foundry will continue to be more important in this respect over time.
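The DFY analysis McGaughy outlines, using foundry-provided statistical models to judge design margin, is often done by Monte Carlo sampling over SPICE parameters. The sketch below illustrates the idea only: the threshold-voltage statistics, the alpha-power-style delay model and the spec value are invented numbers, not any foundry's PDK data.

```python
# Toy Monte Carlo parametric-yield estimate in the spirit of DFY analysis:
# sample a device parameter (threshold voltage) from an assumed statistical
# distribution, evaluate a simplified delay model, and count the fraction
# of samples meeting the timing spec. All values are illustrative.
import random

random.seed(42)

VTH_NOM, VTH_SIGMA = 0.45, 0.03   # volts; hypothetical PDK statistics
VDD = 1.0                          # volts
DELAY_SPEC = 2.4                   # arbitrary units

def delay(vth, vdd=VDD):
    # Simplified alpha-power-law-style delay model (alpha = 1.3).
    return vdd / (vdd - vth) ** 1.3

samples = [random.gauss(VTH_NOM, VTH_SIGMA) for _ in range(100_000)]
yield_est = sum(delay(v) <= DELAY_SPEC for v in samples) / len(samples)
print(f"estimated parametric yield: {yield_est:.1%}")
```

Tightening `DELAY_SPEC` (demanding more performance) drops the estimated yield, which is exactly the yield-versus-PPA trade-off the designers are said to be weighing.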

SemiMD: So does this place a challenge on the EDA industry, or, how are EDA companies, such as ProPlus, helping to enable this trend?

McGaughy: The DFY challenge that designers face creates an opportunity for the EDA industry. As process complexity increases, there is less ‘margin’. Tighter physical geometries, lower supply voltage (Vdd) and threshold voltage (Vth), new device structures, new process techniques and more complex designs all push margins. Margins refer to the slack designers have to ensure they can create a robust design: one that works not only at nominal conditions but also under real-world variability.

Tighter margins mean a greater need to carefully assess the yield versus PPA trade-off, which creates the need for DFY tools. This is where companies such as ProPlus come in. ProPlus helps designers use the foundry-provided process variation information effectively; with the industry’s de facto golden modeling tool from ProPlus, designers can validate and even customize foundry models for specific application needs.

SemiMD: Is this trend for DFY moving closer to the foundry/manufacturing side the only way to improve yields, as the industry continues to push towards further scaling, and all of the challenges that this entails?

Ya-Chieh: Actually, we believe the trend is towards tighter integration with design, not less!

Conley: DFY solutions alone are not sufficient; they need to be developed in conjunction with wafer fabrication equipment enhancements. Looking at the wafer inspection and review (I&R) segment, the need to detect smaller defects and effectively separate yield-killer defects from false and nuisance defects leads to increased usage of SEM-based defect inspection tools that have higher sensitivity. At Applied Materials, we are very focused on improving core capabilities in imaging and classification. In our other technology segments there are also many innovations in deposition and removal chamber architecture and process technologies that are focused on yield improvement. DFY schemes, as well as advancements in wafer fabrication equipment, are needed to improve yields as the industry advances scaling.

Buehler: Strategies aside, the fact is that beyond about 40nm, IC designs must be optimized for the target manufacturing process. At each progressive node, the design rules become more complex and the yield becomes more specific to an individual design. For example, layouts now have to be checked to make sure they do not contain specific patterns that cannot be accurately fabricated by the process. This is mainly due to the fact that we are imaging features that are much smaller than the wavelength of the light currently used in production steppers. But there are many other complexities at advanced nodes associated with etch characteristics, via structures, fill patterns, electrical checks, chemical-mechanical polishing, double patterning, FinFET transistor nuances, and many others.

These issues are too numerous and too complex to deal with after tapeout. The foundries simply cannot remove all yield limiters by adjusting their process. For one thing, some of the issues are simply beyond the control of the process engineers. For example some layout patterns simply cannot be imaged by state-of-the-art steppers, so they must be eliminated from the design. Another problem, or challenge, is that foundries need to run designs from many customers. In most cases, very large consumer designs aside, foundries cannot afford to optimize their process flow for one customer’s design. Bottom line, design-manufacturing co-optimization issues must be taken into consideration during the physical design process.

McGaughy: More and more, yield is a shared responsibility. At older nodes, when defect density was the main yield limiter, the foundries took on most of the responsibility. At deep nanometer nodes, this is no longer the case. Now, the design yield must be optimized via trade-offs. Foundries are pushed to provide ever better performance at each new node, and this means that they too have less process margin. Rather than guard-band for process variation, foundries now provide the designer with detailed visibility into how the process variation will behave. Designers in turn can make the choices they need to make, such as whether they need performance to be competitive or how best to achieve optimal performance with lowest yield risk. This shared responsibility for yield has pushed the DFY trend to the forefront. It serves to bridge the gap between design and manufacturing and will continue to do so as process technology scales.

The Week in Review: Nov. 1, 2013

Friday, November 1st, 2013

Toshiba this week announced the launch of new embedded NAND flash memory modules integrating NAND chips fabricated with 19nm second-generation process technology. The company’s new 32-gigabyte (GB) embedded device integrates four 64Gbit (equal to 8GB) NAND chips fabricated with Toshiba’s 19nm second-generation process technology and a dedicated controller into a small package measuring only 11.5 x 13 x 1.0mm. Mass production will start from the end of November.

Mentor Graphics announced the Valor Information Highway and the Valor Warehouse Management products, two supply chain-focused tools designed to enhance Enterprise Resource Planning (ERP) effectiveness and assist electronics manufacturers in reducing material costs. Together, the two new products provide real-time material consumption and spoilage data, facilitating total materials management and traceability over the entire warehouse infrastructure, logistics, shop floor storage areas and direct points of use.

Senior executives from Semiconductor Industry Association (SIA) member companies and other multi-national semiconductor companies around the globe sent a letter to Chinese Vice Premiers Wang Yang and Ma Kai, encouraging China to support duty-free coverage for semiconductor products in an expanded Information Technology Agreement (ITA). The ITA promotes fair and open trade by providing for duty-free treatment of certain information technology products, including semiconductors, but the list of covered products has not been updated since the ITA’s inception in 1996.

Graphene may command the lion’s share of attention, but it is not the only material generating buzz in the electronics world. Vanadium dioxide is one of the few known materials that acts like an insulator at low temperatures but like a metal at warmer temperatures, starting around 67 degrees Celsius. This temperature-driven metal-insulator transition, the origin of which is still intensely debated, can in principle be induced by the application of an external electric field. That could yield faster and much more energy-efficient electronic devices. To determine the origin of the metal-insulator transition of vanadium dioxide, Aetukuri and a collaboration of researchers led by Stuart Parkin of SpinAps and the IBM Almaden Research Center, and Hermann Dürr of the SLAC National Accelerator Laboratory, studied thin films of the material at Berkeley Lab’s Advanced Light Source (ALS). Using ALS beamline 4.0.2, an undulator beamline that can provide soft X-rays with variable linear polarization, they performed a series of strain-, polarization- and temperature-dependent X-ray absorption spectroscopy tests, in conjunction with X-ray diffraction and electrical transport measurements.

Cadence announced the availability of Cadence Interconnect Workbench. A software solution providing cycle-accurate performance analysis of interconnects throughout the system-on-chip (SoC) design process, Interconnect Workbench quickly identifies design issues under critical traffic conditions and enables users to improve device performance and reduce time to market. Interconnect Workbench works in conjunction with Cadence Interconnect Validator for a complete functional verification and performance validation solution.

Peregrine Semiconductor and GLOBALFOUNDRIES, a provider of semiconductor manufacturing technology, are sampling the first RF Switches built on Peregrine’s new UltraCMOS 10 RF SOI technologies. In a joint development effort, GLOBALFOUNDRIES and Peregrine created a new fabrication flow for the versatile, new, 130 nm UltraCMOS 10 technology platform. This new technology delivers a more than 50-percent performance improvement over comparable solutions. UltraCMOS 10 technology gives smartphone manufacturers unparalleled flexibility and value without compromising quality for devices ranging from 3G through LTE networks.

SEMI announced its support of the Revitalize American Manufacturing and Innovation Act of 2013, along with the National Association of Manufacturers (NAM). The Revitalize American Manufacturing and Innovation Act of 2013 is modeled on the National Additive Manufacturing Innovation Institute (NAMII), a public-private manufacturing hub located in Youngstown, Ohio. The legislation is designed to bring together industry, universities and community colleges, federal agencies, and all levels of government to accelerate manufacturing innovation. It would establish public-private institutes to bridge the gap between basic research and product development.

The Week In Review: Oct. 25, 2013

Friday, October 25th, 2013

Semiconductor Research Corporation launched the Semiconductor Synthetic Biology (SSB) research program on hybrid bio-semiconductor systems to provide insights and opportunities for future information and communication technologies. The program will initially fund research at six universities: MIT, the University of Massachusetts at Amherst, Yale, Georgia Tech, Brigham Young and the University of Washington. Approximately $2.25M will be invested by SRC-GRC for Phase 1 research.

North America-based manufacturers of semiconductor equipment posted $975.3 million in orders worldwide in September 2013 (three-month average basis) and a book-to-bill ratio of 0.97, according to the September EMDS Book-to-Bill Report published today by SEMI. A book-to-bill of 0.97 means that $97 worth of orders were received for every $100 of product billed for the month. The bookings figure is 8.3 percent lower than the final August 2013 level of $1.06 billion, and is 6.8 percent higher than the September 2012 order level of $912.8 million. The three-month average of worldwide billings in September 2013 was $1.01 billion. The billings figure is 7.1 percent lower than the final August 2013 level of $1.08 billion, and is 13.6 percent lower than the September 2012 billings level of $1.16 billion.
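The book-to-bill arithmetic in the report above is easy to verify. The snippet below uses the rounded figures quoted in the text, so the month-over-month change comes out near, but not exactly at, the quoted 8.3 percent (SEMI computes from unrounded data).

```python
# Check SEMI's book-to-bill figures from the report above
# (three-month averages, in $ millions, as quoted in the text).
bookings = 975.3   # September 2013 bookings
billings = 1010.0  # ~$1.01 billion September 2013 billings

ratio = bookings / billings
print(f"book-to-bill: {ratio:.2f}")  # prints 0.97: $97 of orders per $100 billed

# Year-over-year change quoted in the report:
yoy = (975.3 / 912.8 - 1) * 100
print(f"bookings vs. Sep 2012: {yoy:+.1f}%")  # prints +6.8%

# Month-over-month from the rounded $1.06B August figure lands near
# the quoted -8.3% (rounding accounts for the small gap):
mom = (975.3 / 1060.0 - 1) * 100
print(f"bookings vs. Aug 2013: {mom:+.1f}%")
```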

Mentor Graphics Corporation announced its new Mentor Embedded Hypervisor product for in-vehicle infotainment (IVI) systems, telematics, advanced driver assistance systems (ADAS) and instrumentation. The Mentor Embedded Hypervisor is a small footprint Type 1 hypervisor developed specifically for embedded applications and intelligent connected devices. With the Mentor Embedded Hypervisor, developers can create high-performance systems that integrate and consolidate applications on multicore processors and make use of the ARM TrustZone. Development of new systems can be accelerated by reusing existing proprietary software and protecting intellectual property while incorporating Linux to leverage the open source ecosystem.

Cadence Design Systems, Inc. announced results for the third quarter of fiscal year 2013. Cadence reported third quarter 2013 revenue of $367 million, compared to revenue of $339 million reported for the same period in 2012. On a GAAP basis, Cadence recognized net income of $39 million, or $0.13 per share on a diluted basis, in the third quarter of 2013, compared to net income of $59 million, or $0.21 per share on a diluted basis, in the same period in 2012.

Researchers and physicians at Johns Hopkins University will collaborate with the nanoelectronics R&D center imec to advance silicon applications in healthcare, beginning with development of a device to enable a broad range of clinical tests. Imec and Johns Hopkins University hope to develop the next generation of “lab on a chip” concepts based on imec technology. The idea is that such a disposable chip could be loaded with a sample of blood, saliva or urine and then quickly analyzed using a smartphone, tablet or computer, making diagnostic testing faster and easier for applications such as disease monitoring and management, disease surveillance, rural health care and clinical trials.

A new Department of Energy grant will fund research to advance an additive manufacturing technique for fabricating three-dimensional (3D) nanoscale structures from a variety of materials. Using high-speed, thermally-energized jets to deliver both precursor materials and inert gas, the research will focus on dramatically accelerating growth, improving the purity and increasing the aspect ratio of the 3D structures. Known as focused electron beam induced deposition (FEBID), the technique delivers a tightly-focused beam of high energy electrons and an energetic jet of thermally excited precursor gases – both confined to the same spot on a substrate. Secondary electrons generated when the electron beam strikes the substrate cause decomposition of the precursor molecules, forming nanoscale 3D structures whose size, shape and location can be precisely controlled. This gas-jet assisted FEBID technique allows fabrication of high-purity nanoscale structures using a wide range of materials and combination of materials.

Despite a drop in global television unit demand in 2013, the semiconductor market for TVs is forecast to increase by an estimated seven percent to $13.1 billion, according to data presented in IC Insights’ upcoming IC Market Drivers 2014 report. Technologies such as wireless video connections, networking interfaces, multi-format decoders and LED backlighting have boosted the average semiconductor content in TV sets, even as global TV unit shipments are forecast to decline by an estimated three percent in 2013, according to the report. IC Insights projects that total global semiconductor revenue for televisions will grow 12 percent to $14.7 billion in 2014 due to an uptick in new TV sales in advance of the 2014 Winter Olympic Games and the 2014 FIFA World Cup. Between 2012 and 2017, the semiconductor market for DTVs is forecast to grow at a healthy pace of 10 percent annually, increasing to $19.8 billion at the end of the forecast period.
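The growth figures quoted above are internally consistent, which a quick compound-growth check shows. The 2012 base below is back-computed from the quoted 2013 value and growth rate; it is an inference, not a number from the IC Insights report.

```python
# Consistency check of the IC Insights TV-semiconductor figures quoted
# above (all values in $ billions; 2012 base is back-computed).
tv_2013 = 13.1              # 2013 market, quoted as +7% over 2012
tv_2012 = tv_2013 / 1.07    # implied 2012 base (~$12.2B, our inference)
tv_2014 = tv_2013 * 1.12    # forecast +12% in 2014

print(f"2014 forecast: ${tv_2014:.1f}B")  # prints $14.7B, matching the report

# DTV market: 10% annual growth 2012-2017 from the implied 2012 base
dtv_2017 = tv_2012 * 1.10 ** 5
print(f"2017 at 10%/yr: ${dtv_2017:.1f}B")  # lands close to the quoted $19.8B
```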

CEA-Leti, Fraunhofer IPMS-CNT and three European companies — IPDiA, Picosun and SENTECH Instruments — have launched a project to industrialize 3D integrated capacitors with world-record density. The two-year, EC-funded PICS project is designed to develop a disruptive technology through the development of innovative ALD materials and tools that results in a new world record for integrated capacitor densities (over 500nF/mm2) combined with higher breakdown voltages. It will strengthen the SME partners’ position in several markets, such as automotive, medical and lighting, by offering an even higher integration level and more miniaturization.

