Posts Tagged ‘DFM’

D2S Releases 4th-Gen IC Computational Design Platform

Friday, September 30th, 2016

By Ed Korczynski, Sr. Technical Editor

D2S (www.design2silicon.com) recently released the fourth generation of its computational design platform (CDP), which enables extremely fast (400 teraflops) and precise simulations for semiconductor design and manufacturing. The new CDP is based on NVIDIA Tesla K80 GPUs and Intel Haswell CPUs, and is architected for 24×7 cleanroom production environments. To date, 14 CDPs across four platform generations are in use by customers around the globe, including six of the latest fourth generation. In an exclusive interview with SemiMD, D2S CEO Aki Fujimura stated, “Now that GPUs and CPUs are fast enough, they can replace other hardware and thereby free up engineering resources to focus on adding value elsewhere.”

Mask data preparation (MDP) and other aspects of IC design and manufacturing require ever-increasing levels of speed and reliability as the data sets upon which they must operate grow larger and more complex with each device generation. The figure shows that a mask needed to print arrays of sub-wavelength features includes complex curvilinear shapes that must be precisely formed even though they do not print on the wafer. Such sub-resolution assist features (SRAF) increase in complexity and density as the half-pitch decreases, so the complexity of mask data grows far faster than the density of printed features.

Sub-wavelength lithography using a 193nm wavelength requires ever-more complex masks to repeatably print ever-smaller half-pitch (HP) features, as shown by (LEFT) a typical mask composed of complex nested curves and dots that do not print, and (RIGHT) the array of 32nm HP contacts/vias represented by the small red circles. (Source: D2S)

GPUs, which were first developed as processing engines for the complex graphical content of computer games, have since emerged as an attractive option for compute-intensive scientific applications due in part to their ability to run many more computing threads (up to 500x) compared to similar-generation CPUs. “Being able to process arbitrary shapes is something that mask shops will have to do,” explained Fujimura. “The world could go 193nm or EUV at any particular node, but either way there will be more features and higher complexity within the features, and all of that points to GPU acceleration.”

The D2S CDP is engineered for high reliability inside a cleanroom manufacturing environment. A few of the fab applications where CDPs are currently being used include:

  • model-based MDP for leading-edge designs that require increasingly complex mask shapes,
  • wafer plane analysis of SEM mask images to identify mask errors that print, and
  • inline thermal-effect correction of eBeam mask writers to lower write times.

“The amount of design data required to produce photomasks for leading-edge chip designs is increasing at an exponential rate, which puts more pressure on mask writing systems to maintain reasonable write times for these advanced masks. At the same time, writing these masks requires higher exposure doses and shot counts, which can cause resist proximity heating effects that lead to mask CD errors,” stated Noriaki Nakayamada, group manager at NuFlare Technology. “D2S GPU acceleration technology significantly reduces the calculation time required to correct these resist heating effects. By employing a resist heating correction that includes the use of the D2S CDP as an OEM option on our mask writers, NuFlare estimates that it can reduce CD errors by more than 60 percent, and reduce write times by more than 20 percent.”
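
To make that correction concrete, here is a minimal Python sketch (using numpy and scipy) of the general idea behind convolution-based resist heating compensation: estimate local heating as a blurred copy of the dose map, then attenuate the dose where predicted heating is highest. The Gaussian kernel and the sigma_px and alpha constants are illustrative assumptions, not NuFlare's or D2S's actual thermal model, which runs GPU-accelerated over full mask data sets.

    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_kernel(size, sigma):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return k / k.sum()

    def heating_corrected_dose(dose_map, sigma_px=8, alpha=0.3):
        # Toy model: local heating ~ dose blurred by a thermal kernel;
        # scale the dose down where predicted heating is highest.
        kernel = gaussian_kernel(6 * sigma_px + 1, sigma_px)
        heat = fftconvolve(dose_map, kernel, mode="same")
        heat /= heat.max() or 1.0                 # normalize to [0, 1]
        return dose_map * (1.0 - alpha * heat)    # attenuate hot regions

    # A dense block of shots heats more than its edges, so its dose is cut more.
    dose = np.zeros((256, 256))
    dose[64:192, 64:192] = 1.0
    corrected = heating_corrected_dose(dose)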

In the eBeam Initiative’s 2015 survey, the most advanced reported mask set contained more than 100 masks, of which roughly 20 percent could be considered ‘critical.’ The just-released 2016 survey disclosed that the most complex single-layer mask design written last year required 16 TB of data; however, platforms like the D2S CDP have been used to accelerate writing such that reported write times have decreased to a weighted average of 4 hours. Meanwhile, the longest reported mask write time decreased from 72 to 48 hours.

Closed-Loop DFM Solution Accelerates Yield Ramps

Monday, June 6th, 2016

Mentor Graphics Corp. announced that Samsung Foundry’s closed-loop design-for-manufacturing (DFM) solution uses production Mentor Calibre and Tessent platforms to accelerate customer yield ramps.

In the Closed-Loop DFM flows, Samsung integrates its DFM kits with its testing and manufacturing expertise to identify integrated circuit design patterns that are most likely to impact manufacturing yield, thereby helping customers improve design quality, yield, and ramp to production.

“We can detect the risks in customer products and prevent them,” said K.K. (Kuang-Kuo) Lin, Director, Foundry Marketing Ecosystem, Samsung Semiconductor. “We have seen yield gain of up to 8.5%. In terms of the post-manufacturing yield analysis, we have seen the benefits of around 2%. These numbers are not guaranteed because each product is different, but from our experience, these are the numbers we have seen.”

The Samsung solution extracts customer yield-averse design patterns, feeds that information forward to optimize manufacturing and testing, and closes the loop with feedback from silicon results for product design and yield improvement. This solution is not only useful to initial customer designs, but it also allows learning from current production designs to be applied to next-generation designs from that same customer across entire product families.

As shown in Figure 1, Samsung’s foundry offerings cover the needs of devices ranging from IoT and consumer to mobile computing, high-end computing, and automotive. The company, which first got into the foundry business in 2005, claims to be the first foundry to have high-k metal gates in production (in 2011), the first foundry to offer FinFET risk production (in 2013), and the first foundry to tape out a 10nm product. “We are also at the forefront of 7nm. We call it 7LPP, which will be based on EUV,” Lin added.

Figure 1

With the end goal of rapid yield ramp for new product introduction, Samsung turned to Mentor Graphics tools for its pre-production DFM system, called PRISM (pattern recognition and identity scoring methods), which runs on Mentor’s Calibre platform. For this pre-production phase, “we provide very comprehensive process-aware DFM sign-off kits and an optimization flow for the designers so they can double-check and verify, and prevent any DFM issues during the design phase,” Lin said.

The other component of closed-loop DFM is in post-manufacturing. Samsung has developed a set of tools called FLARE (Failure analysis And yield Rank Estimation with DFM hotspot database), which runs on Mentor’s Tessent platform.

Figure 2 shows how PRISM and FLARE work together in a closed-loop fashion for pre- and post-production DFM.

Figure 2

“Every design has its idiosyncrasies and its unique signatures because layout designers can be pretty creative,” Lin explained. “We use PRISM to do extensive pattern analysis and then do optimization during the data prep and also use the pattern analysis result to drive in-line inspection.”

Once the wafer is manufactured in the fab, FLARE involves mapping a yield-learning database with EDS (electrical die-sort) data. “We’ll combine them to do yield Pareto data analysis and also mapping analysis. From that deep learning, we are able to prioritize which part of the fab process we can improve. We can also feed back to the DFM kit which we use in the design phase, which gives the designer feedback on what they can further improve,” Lin said.
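
As a rough illustration of the kind of Pareto ranking such a flow produces, consider the following sketch; the die records, bin names and pattern IDs are hypothetical, and the real FLARE analysis works on full wafer maps rather than a five-element list.

    import collections

    # Hypothetical joined records: (die_id, EDS failing bin, suspected DFM hotspot pattern)
    eds_fails = [
        ("d001", "SRAM_fail", "pattern_A"),
        ("d002", "SRAM_fail", "pattern_A"),
        ("d003", "IDDQ_fail", "pattern_B"),
        ("d004", "SRAM_fail", "pattern_A"),
        ("d005", "open_fail", "pattern_C"),
    ]

    # Rank hotspot patterns by how many failing dice they explain.
    pareto = collections.Counter(p for _, _, p in eds_fails).most_common()
    for pattern, count in pareto:
        print(f"{pattern}: {count} failing dice")   # fix the top of the list first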

At the heart of PRISM is a defect database built from test vehicles and existing products (Figure 3). “We put all the patterns that we know into this defect database,” Lin explained. “We also couple it with some very novel things. We use a layout schematic generator from Mentor to increase the coverage, to enumerate all the possible patterns. And then we also have metadata and simulators to do yield prediction of those known defects from different sources.”

Figure 3

“Once a customer product comes into Samsung foundry, we will check against the known defect database. Then we will do prediction in terms of the process margin and feed-forward this data into the subsequent steps of data prep or retargeting, and in-line inspection so we can prioritize our resources to know what to inspect and what not to in the manufacturing steps,” Lin said (see Figure 4).
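
The feed-forward check can be pictured as a lookup of layout windows against a known-defect dictionary, as in this hedged sketch; the geometry, grid snapping and "bridge_risk" label are invented for illustration.

    def pattern_key(window):
        # Canonical key for a small layout window: (layer, x, y, w, h) tuples
        # snapped to a coarse grid so equivalent geometry hashes identically.
        return frozenset((l, round(x, 2), round(y, 2), round(w, 2), round(h, 2))
                         for (l, x, y, w, h) in window)

    # Hypothetical known-bad entry: two M1 lines at bridge-prone spacing.
    known_defects = {
        pattern_key([("M1", 0.0, 0.0, 0.05, 2.0),
                     ("M1", 0.12, 0.0, 0.05, 2.0)]): "bridge_risk",
    }

    def triage(windows):
        # Feed-forward: windows matching known-bad patterns get priority
        # at data prep / retargeting and during in-line inspection.
        return [(i, known_defects[pattern_key(w)])
                for i, w in enumerate(windows)
                if pattern_key(w) in known_defects]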

Figure 4

“FLARE accelerates the learning in the fab to bring up customer products in our foundry. It helps the customer achieve their time to market. It also saves on fab operation costs, so it’s a win-win situation for everyone,” Lin said.

The Closed-Loop DFM flows are in production use today for customers of Samsung Foundry services. While proven in 14 nm technology, the flows can be used for ICs manufactured with other Samsung process nodes.

At the 2016 Design Automation Conference, Mentor and Samsung are co-hosting a lunch seminar entitled “Accelerate Yield Ramps with Samsung Foundry Closed-Loop DFM and Mentor Tools.” The event is Monday, June 6, from 12:00 to 1:30 PM. Interested customers can register for the event using this registration link.

https://www.mentor.com/products/ic_nanometer_design/events/samsung-dac-lunch-seminar

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015

By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to be fed forward and back to minimize the risk of expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST logic resident on the logic die, connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”
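
The article does not publish the RITdb schema, but the core idea (structured, self-describing test records that carry provenance, in contrast to STDF’s fixed binary layout) can be sketched as follows; every field name here is a hypothetical illustration, not the actual format.

    import json, datetime

    # Purely illustrative record layout -- not the actual RITdb format.
    record = {
        "entity": "die",
        "id": {"lot": "LOT123", "wafer": 7, "x": 12, "y": 34},
        "test_step": "final_test",
        "site": "OSAT_A",
        "timestamp": datetime.datetime(2015, 8, 24, 10, 30).isoformat(),
        "measurements": {"fmax_ghz": 2.31, "iddq_ua": 18.4},
        "provenance": {"program": "FT_rev7", "tester": "T42"},
    }
    print(json.dumps(record, indent=2))   # self-describing, machine-parseable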

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.
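
A minimal sketch of the kind of production-test screen Nigh describes: flag dice whose FMAX or leakage deviates sharply from the population. A median/MAD statistic is used here so the outlier being hunted cannot hide by inflating the standard deviation; the threshold and the numbers are assumed values.

    import statistics

    def flag_outliers(values, n_mads=5.0):
        # Median-absolute-deviation screen: robust to the outliers it hunts.
        med = statistics.median(values)
        mad = statistics.median(abs(v - med) for v in values)
        return [i for i, v in enumerate(values) if abs(v - med) > n_mads * mad]

    fmax_ghz = [2.31, 2.29, 2.33, 2.30, 1.80, 2.32]   # one die slowed by stress?
    print(flag_outliers(fmax_ghz))                     # -> [4]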

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of TSV metrology tools worldwide. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bumps, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said that, “we’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for their Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.
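
The geometric core of such an interface check can be sketched as follows: place each die’s bump list into package coordinates using its x/y position and rotation, then verify every bump lands on a mating pad within tolerance. The function names and the tolerance are illustrative assumptions; Calibre 3DSTACK’s actual rule checks are far richer.

    import math

    def place(points, dx, dy, rot_deg):
        # Transform die-local bump coordinates into package coordinates.
        c, s = math.cos(math.radians(rot_deg)), math.sin(math.radians(rot_deg))
        return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

    def interface_errors(bumps_a, bumps_b, tol=0.005):
        # Every bump on die A must land on some pad of die B within tol (mm).
        return [p for p in bumps_a
                if min(math.dist(p, q) for q in bumps_b) > tol]

    top = place([(0.0, 0.0), (0.1, 0.0)], dx=1.0, dy=1.0, rot_deg=0.0)
    bottom = [(1.0, 1.0), (1.1, 1.0)]
    print(interface_errors(top, bottom))   # -> [] when the dice align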

—E.K.

Changes and Challenges Abound in Multi-patterning Lithography

Monday, January 26th, 2015

By Jeff Dorsch

Multi-patterning lithography is a fact of life for many chipmakers. Experts in the fields of electronic design automation and lithography address the issues associated with the technology. Providing responses are David Abercrombie, Design for Manufacturing Program Manager, Mentor Graphics; Gary Zhang, Vice President Marketing, ASML Brion; and Dr. Donis Flagello of Nikon Research Corporation of America.

1. What are the significant considerations in semiconductor manufacturing and design with multi-patterning lithography?

David Abercrombie: Like most process/design trade-offs in moving from one node to another, it comes down to cost vs. area and performance. Without multi-patterning or EUV you will struggle to design at 20nm or below, limiting the opportunity to take advantage of design area and performance scaling. Essentially, Moore’s Law slows to a crawl without it. Multi-patterning affects almost all aspects of design and manufacturing. For physical design it adds additional design rule constraints and constrains cell placement and routing, depending on cell architecture. For electrical design it adds additional parasitic variability to consider in timing analysis. For DFM it adds additional requirements for fill and lithographic checking. In manufacturing it adds additional masks and process steps and increases stepper utilization. All of these increase complexity and have an associated cost. It ultimately has to make business sense. Because of this you are seeing fewer companies moving to these advanced nodes as quickly as before, as they must have the volume and profit margins to justify the increased cost. Fortunately, there are products that do need the newest and most advanced process nodes, and because of those needs we continue to move forward into these new technology nodes on a regular schedule.

Gary Zhang: Multiple patterning (MPT) using immersion lithography is required for the semiconductor industry to continue device scaling until extreme ultraviolet (EUV) comes into full production (EUV is expected for a mid-node insertion in the 10nm logic node, and for 7nm node development and production in the 2015-2017 time frame). Multiple-patterning lithography brings the following new challenges from design to manufacturing. ASML has been collaborating with the chipmakers in a holistic lithography framework to tackle these challenges with innovative hardware and software solutions, including scanner systems, computational lithography, metrology and process control.

Integrated circuit designs have to be multiple-patterning compatible. The industry has been developing methods to enable MPT-compatible designs via layout decomposition (coloring) and conflict resolution using multiple patterning rules as constraints. This applies to standard-cell libraries, cell boundaries, and placement and route, to ensure full-chip layouts meet all manufacturing requirements and can be decomposed into separate masks without any post-coloring MPT conflicts. Structured layouts with highly restricted design rules seem to be a key enabler for MPT-compliant designs.

The rule-based approach to MPT-compatible designs tends to run the risk of pattern defects from design hot spots, especially when design rules are pushed aggressively for competitive die size. The lithography process window of these design hot spots can be enlarged using source-mask optimization (SMO). Brion’s Tachyon SMO has been routinely used to co-optimize scanner optics, such as illumination source and projection lens wavefront, and mask enhancements, including sub-resolution assist features (SRAF) and optical proximity correction (OPC), for any given design. Take triple patterning of a 10nm node metal layer as an example: Tachyon SMO enables a 23% larger process window for the selected SRAM and logic designs (Figure 1). By evaluating a range of design variations, SMO can help optimize design rules and MPT coloring rules to eliminate design hot spots in the technology development stage. For production mask data preparation, Brion’s multiple patterning OPC and LMC (Lithography Manufacturability Check) are widely used by the leading chipmakers to deliver the best full-chip process window in wafer manufacturing. A combination of SMO, OPC and LMC makes up ASML’s process window enhancement solutions to the design hot spot problem.

Figure 1. Source-mask optimization (SMO) of a 10 nm node metal layer in triple patterning lithography. Overlapping process window of all three splits (masks) is improved by 23% for selected SRAM and logic patterns imaged with the same illumination setup.

Multiple patterning drives tighter CD, focus and overlay requirements to account for more process variations from the additional processing steps. Overlay is used here as an example to show the increasing complexity in multiple patterning process control from single exposure at 28nm node, to double patterning at 14nm node, to triple patterning at 10nm node (Figure 2). Tighter overlay specification has to be met for the exponentially increasing number of critical masks and metrology steps at 14nm and 10nm nodes. To deliver the required overlay control on product wafers, scanner matching and process control have to include high order corrections (Figure 3). ASML’s latest generation of immersion scanners have a large number of flexible actuators and are capable of sub-3 nm matched-machine overlay, dynamic lens heating and reticle heating corrections, and high-order interfield and intrafield corrections for imaging, focus and overlay.
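
The correctable part of such overlay control can be pictured as an ordinary least-squares fit. The classic linear model per axis, dx ≈ Tx + Mx·x + R·y, recovers translation, magnification-like and rotation-like terms, and the high-order corrections mentioned above simply add x², x·y, y², … columns to the design matrix. The data below are synthetic; this is a sketch of the math, not ASML's correction software.

    import numpy as np

    def fit_overlay(x, y, d):
        # Fit d = a + b*x + c*y by least squares (translation, scale-like,
        # rotation-like terms). Add columns x**2, x*y, y**2, ... for the
        # high-order interfield/intrafield corrections.
        A = np.column_stack([np.ones_like(x), x, y])
        coef, *_ = np.linalg.lstsq(A, d, rcond=None)
        return coef

    rng = np.random.default_rng(0)
    x = rng.uniform(-150, 150, 200)        # measurement sites on wafer, mm
    y = rng.uniform(-150, 150, 200)
    dx = 2.0 + 1e-3 * x - 5e-4 * y + rng.normal(0, 0.3, 200)   # overlay, nm
    print(fit_overlay(x, y, dx))           # ~ [2.0, 1e-3, -5e-4]

    # Residuals after subtracting the fitted model are the non-correctable
    # overlay that must fit within the on-product overlay budget.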

Figure 2. A comparison of overlay metrology and control for single exposure at 28 nm node, double patterning at 14 nm node and triple patterning at 10 nm node, using the Metal 1 (M1) to Metal 2 (M2) process loop as an example.

Figure 3. On-product overlay roadmap showing the ever-tighter specification from 28 nm node to 14 and 10 nm nodes and the requirement of advanced scanner correction capabilities (such as dynamic and high-order). Two different production scenarios are considered, namely scanner/chuck dedication and mix-and-match of different scanners.

With the introduction of multiple patterning below 28 nm node, the increasing number of masks and metrology steps translates to lower wafer throughput per scanner and longer wafer cycle time from start to finish. This then leads to cost per wafer significantly higher than the historical cost scaling trend from the previous technology nodes. ASML has been continuously driving the scanner innovation to increase the throughput and improve productivity in terms of wafer output per day. ASML’s YieldStar integrated metrology is another innovative solution to reduce wafer cycle time and improve on-product performance for effective productivity gain and overall cost benefit.

In summary, a full suite of design and manufacturing solutions is required to address the new challenges in multiple-patterning lithography. ASML has taken a holistic approach and worked in close collaboration with the chipmakers to optimize design, scanner, mask and process control altogether for the best manufacturability and yield. Figure 4 gives an example of how holistic lithography enables the focus roadmap down to the 1x nm node. In the design phase, process window enhancement solutions such as SMO, OPC and LMC are used to eliminate the design hot spots and maximize the full-chip process window. In the wafer manufacturing phase, process window control solutions such as scanner matching and high-order corrections are implemented to optimize CD, overlay and focus control dynamically from tool to tool, field to field, wafer to wafer and lot to lot. A combination of the largest process window and the tightest process control delivers the most robust manufacturability and yield in volume production.

Figure 4. An example of how holistic lithography enables focus roadmap down to 1x nm node (DPT: double patterning; MPT: multiple patterning). A combination of process window enhancement and process window control solutions delivers robust manufacturability and yield in volume production.

Donis Flagello: Multiple patterning brings a host of issues due to the added complexity associated with imaging and processing multiple patterns within the same design layer. From the exposure tool point of view, we need to ensure that the overall cost of ownership is maintained and the tool can enable further scaling. We are concentrating on many aspects of the technology. One of the most critical is overlay. This must be as low as possible, such that the ensemble overlay of all the exposures within a layer is equal to or better than a single exposure. Simultaneously, we need to increase the throughput of the tools to ensure that cost per wafer per hour also continuously improves. Both of these aspects drive a huge amount of innovation and technology development.

2. How do you deal with color assignment?

Abercrombie: The answer to that depends on the foundry and layer being discussed. Colorless, partial-coloring and full-coloring flows exist. In colorless flows the designer does not assign colors. There are specialized checks (like odd-cycle checks in double patterning) that make sure the layout can be decomposed into multiple masks later, once the design is taped out to the foundry. In a partial-coloring flow most of the layout follows the colorless flow, but the designer can manually assign some parts of the layout to a particular color to manage subtle variation concerns. For instance, making sure matched circuitry also has matched coloring. In a fully colored flow the designer is responsible for producing the final mask assignments for all polygons in the layer. A GDS layer is dedicated to each mask. To assign a polygon to a given mask, a copy of it is placed on the appropriate mask color layer. EDA companies provide various automation capabilities to assist with color assignment in custom, P&R and batch full-chip applications.

It is best to use an EDA solution like Calibre that not only can address all different coloring flows but also provides the same checks/algorithms for all phases a design goes through from initial IP blocks to final full chip signoff.
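
For double patterning, the "does a valid coloring exist" question reduces to testing whether the conflict graph (polygons as nodes, too-close pairs as edges) is bipartite, i.e. free of odd cycles. A minimal sketch of that check, with a toy triangle of mutually close polygons:

    from collections import deque

    def two_color(n, conflicts):
        # BFS 2-coloring of the conflict graph. Returns one mask assignment,
        # or None when an odd cycle makes the layout undecomposable for DP.
        adj = [[] for _ in range(n)]
        for a, b in conflicts:
            adj[a].append(b)
            adj[b].append(a)
        color = [None] * n
        for start in range(n):
            if color[start] is not None:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if color[v] is None:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return None            # odd cycle: DP conflict
        return color

    print(two_color(2, [(0, 1)]))                    # -> [0, 1]
    print(two_color(3, [(0, 1), (1, 2), (2, 0)]))    # triangle -> None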

Zhang: Layout decomposition, or coloring, has to deliver split patterns on separate masks which are free of any process rule violations and can then be patterned in single exposure with sufficient process window. Double patterning (DPT) using a litho-etch-litho-etch process is shown as an example (Figure 5). In the DPT coloring step, any non-native color conflicts are resolved in a layer-aware implementation with stitches that are properly located away from the overlap region between layers (such as a metal line contacting a via) and have the least impact on device performance and manufacturing yield. Process-robust stitching must have sufficient overlap margin to tolerate misalignment between the exposures of the split masks. This is the concept of overlay-aware stitching.

Figure 5. An example of design to manufacturing work flow for a litho-etch-litho-etch double patterning (DPT) process, from layer aware coloring to overlay aware stitching, to model based OPC, to the final contour after litho and etch processes.

Color balancing is another critical care-about in layout decomposition. MPT coloring not only needs to deliver split layouts free of MPT conflicts but also has to ensure the pattern density is balanced between the split masks. Color balancing is beneficial for litho and etch process control so that robust and uniform patterning qualities can be achieved.
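
The balance check itself is simple to state: after decomposition, pattern area should split roughly 1/k per mask. A toy version with invented rectangle sizes:

    def color_balance(polys, colors):
        # polys: (width, height) per polygon; colors: mask index per polygon.
        area = {}
        for (w, h), c in zip(polys, colors):
            area[c] = area.get(c, 0.0) + w * h
        total = sum(area.values())
        return {c: a / total for c, a in area.items()}   # want ~1/k per mask

    print(color_balance([(1, 2), (1, 2), (2, 2)], [0, 1, 0]))
    # -> {0: 0.75, 1: 0.25}: mask 0 is overloaded; rebalance during coloring.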

Coloring can also be optimized for best process window using a model-based approach, as described above in the discussion of design hot spots. Model-based coloring is not suitable for full-chip application; it can either be used in source-mask optimization for MPT rule development or applied to local hot-spot fixes during mask data preparation.

3. How does design rule check change? How is it the same?

Abercrombie: In a fully colored flow the design rules change slightly. First, for every traditional spacing check there are essentially two checks for double patterning (DP): a minimum spacing for different-colored polygons, and a larger minimum spacing for same-colored polygons. In addition, there are usually additional density checks making sure the ratio between the colors is reasonably equal. In colorless flows, specialized new checks have been developed to verify whether a valid coloring exists for a given layout construct. In double patterning these specialized checks include odd-cycle checks. For triple patterning (TP) and quadruple patterning (QP), new types of checks are required.
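
The two-tier spacing rule is straightforward to sketch; the numeric limits below are invented placeholders, not any foundry's actual values:

    def spacing(r1, r2):
        # Edge-to-edge distance between axis-aligned rectangles (x1, y1, x2, y2).
        dx = max(r1[0] - r2[2], r2[0] - r1[2], 0.0)
        dy = max(r1[1] - r2[3], r2[1] - r1[3], 0.0)
        return (dx * dx + dy * dy) ** 0.5

    def dp_spacing_ok(p1, c1, p2, c2, s_diff=0.05, s_same=0.10):
        # Different-mask polygons may sit s_diff apart (um); same-mask
        # polygons need the larger single-exposure limit s_same.
        limit = s_same if c1 == c2 else s_diff
        return spacing(p1, p2) >= limit

    a = (0.00, 0.0, 0.10, 1.0)
    b = (0.17, 0.0, 0.27, 1.0)          # 0.07 um away from a
    print(dp_spacing_ok(a, 0, b, 1))    # True: different masks
    print(dp_spacing_ok(a, 0, b, 0))    # False: same mask needs 0.10 um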

Zhang: Triple patterning (TPT) coloring is a lot more difficult and complex than DPT coloring. It is extremely hard to determine whether a layout is TPT-compatible; the underlying graph-coloring problem is NP-complete. There is no efficient way to find a solution at the full-chip level, and there are no existing methods for determining the number of conflicts and their locations.

Stitches are color-dependent in TPT and candidate stitch locations can be determined only after or during coloring.

Therefore it is important to ensure TPT compliance by design construct.
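
The exponential worst case is visible in the exact algorithm itself. A minimal backtracking k-coloring sketch, where four mutually conflicting polygons (K4) already defeat three masks:

    def k_color(adj, k=3, color=None, v=0):
        # Exact backtracking k-coloring of a conflict graph; exponential in
        # the worst case, which is why full-chip TPT relies on heuristics
        # and complexity-constrained layout styles.
        if color is None:
            color = [None] * len(adj)
        if v == len(adj):
            return color
        for c in range(k):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if k_color(adj, k, color, v + 1):
                    return color
                color[v] = None
        return None

    k4_minus_edge = [[1, 2], [0, 2, 3], [0, 1, 3], [1, 2]]
    k4 = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
    print(k_color(k4_minus_edge))   # -> a valid 3-mask assignment
    print(k_color(k4))              # -> None: needs a 4th mask or a redesign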

4. What are the complexities and issues in transitioning from double-patterning to triple-patterning?

Abercrombie: Although checking and decomposing a layout for two colors is complex, the algorithmic processing scales reasonably with design size. However, the generalized solution for triple and quadruple patterning has exponentially increasing run time as the number of polygons processed increases. This, of course, is not a practical solution. So the problem must be constrained such that reasonable heuristic algorithmic approaches can be applied that provide reasonably scalable run times. Thus the complete set of design rules and the design methodology need to be properly tuned to constrain the graph complexity of the layouts produced, so these checking and decomposition heuristic tools can be utilized. In addition, specialized checks may be needed so that layout constructs that do not meet the complexity constraints can be diverted from processing (to keep run time from exploding) and flagged to the user for modification until they can be properly processed.

The other challenge in moving from DP to TP and QP is colorless error visualization. If you are doing a colorless flow and need to check if the design can legally be colored, you need a way to highlight constructs for which no valid coloring solution exists in a way that the designer can understand so he/she can make changes in the layout to fix it. For DP this was odd cycle error visualization. An even-numbered cycle of interacting polygons can be colored and an odd numbered cycle of interacting polygons cannot. For TP and QP this is not the case. Any simple even or odd cycle can be colored. The constructs which cannot be colored are much more complex than in DP. In addition, narrowing down the implicated constructs to the “root” of the problem is more difficult. To address these issues Mentor Calibre is developing a new array of error visualization layers to help inform and guide the user to appropriate and productive fixes.

Flagello: Years ago many industry observers did not believe that double patterning was viable. Today double and triple patterning are being done. However, there are some key differences between the two. Depending on the technology used, double exposure from a tool perspective is more or less straightforward. Mask alignment is usually based on the previous layer mark. However, moving to triple exposure often results in much more of an optimization problem to determine the best alignment strategy. Sometimes the previous layer alignment mark may have a poor signal, depending on the number of films involved in the multiple-patterning scheme. While increasing the number of patterning steps increases some of the complexity, the solutions become more of an optimization and controls challenge.

5. What issues in IC design and verification emerge with multi-patterning?

Abercrombie: The designer should expect to see new design rules, more parasitic variation, more complexity in design and methodology constraints, increased wafer cost, and the need for new EDA tools and additional CPU hardware to process their designs. This is really not new as this increased complexity and cost has existed between every node transition. The difference is that the delta may be more than between previous nodes. It is important that design teams educate themselves early on the impacts of moving to multi-patterned process nodes. That includes getting information from the foundry and EDA partners as well as reading available material on the subject. I have a whole series of articles covering much of the questions in this round table in significant detail: http://www.mentor.com/solutions/foundry/solutions/multi-patterning

Zhang: In addition to the power, performance and area metrics, designers now have to ensure their IC designs are MPT-compliant and free of design hot spots so that they can be manufactured cost-effectively with the best yield using multiple-patterning lithography. From a lithography point of view, design hot spots are the major yield detractor. Device performance such as RC timing delay, crosstalk, leakage (such as IDDQ), breakdown voltage and final yield is heavily influenced by MPT process variations. Brion’s LMC has been used to evaluate the impact of realistic dose, focus, mask and overlay variations on MPT hot spots, both intra-layer and inter-layer. Identification of such MPT hot spots helps drive design and OPC improvements so that they can be eliminated in wafer manufacturing.

An EDA view of semiconductor manufacturing

Thursday, July 24th, 2014

By Gabe Moretti, Contributing Editor

The concern that there is a significant break between tools used by designers targeting leading-edge processes (those at 32 nm and smaller, to be precise) and those used to target older processes was dispelled during the recent Design Automation Conference (DAC). In his address as a DAC keynote speaker in June at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks and stated: “There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

“However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes. There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

This is very important, since the number of companies that can afford to use leading-edge processes is diminishing due to the very high ($100 million and more) non-recurring investment required. And of course the cost of each die is also greater than with previous processes. If the tools could only be used by those customers doing leading-edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that “when you think about design complexity there are a few axes that might be used to measure it. Certainly raw gate count or transistor count is one popular measure. From a recent article in Chip Design, a look at complexity on a log scale shows the billion mark has been eclipsed.” Figure 1, courtesy of Cadence, shows the increase of transistors per die over the last 22 years.

Fig 1

Steve continued: “Another way to look at complexity is looking at the number of functional IP units being integrated together. The graph in Figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following. This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node. At the heart of the process complexity question are metrics such as the number of parasitic elements needed to adequately model a like structure in one process versus another.” It is important to note that the percentage of IP blocks provided by third parties is getting close to 50%.

Fig 2

Steve concluded: “Yet another way to look at complexity is through the lens of the design rules and the design rule decks. The graphs below show the upward trajectory for these measures in a very significant way.” Figure 3, also courtesy of Cadence, shows the increased complexity of the design rules provided by each foundry. This trend makes second-sourcing a design impossible, since having a second-source foundry would be similar to having a different design.

Fig 3

Another problem designers have to deal with is the increasing complexity due to decreasing feature sizes. Anand Iyer, Calypto Director of Product Marketing, observed: “Complexity of design is increasing across many categories such as variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is causing design performance to be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor in design complexity because power, especially dynamic power, is a major issue at these process nodes. Voltage cannot scale due to noise margin and process variation considerations, and capacitance is relatively unchanged or increasing.”

Impact on Back-End Tools

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a Design Rules file that is becoming very complex. In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules. Thus, building rules-specific Place and Route tools would directly lower the number of design rule checks required.

Mary Ann White of Synopsys answered: “We do not believe so. Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects required to handle the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different. The use of multi-patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”

Steve Carlson of Cadence offered his own view: “There have been subtle differences between requirements at new process nodes for many generations. Customers do not want to have different tool strategies for a second source of foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration). In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers. It is doubtful that different tools will be spawned for different foundries. How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision. The use model users want is singular across all foundry options. How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy. Time will tell.”

This is clear for now. But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively. Changing foundry will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools. At the just-finished DAC conference I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory in advanced technology nodes. Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development, and hence reducing time-to-market. He said: “It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for design and optimization of particular circuits, systems and corresponding products. One of the main challenges is to factor accurately the device variability into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to the process-induced variability related predominantly to silicon channel thickness or shape variation.” He continued: “However, until now TCAD simulations, compact model extraction and circuit simulations are typically handled by different groups of experts, and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help the DTCO practices.”
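
The statistical-variability half of that DTCO loop can be caricatured in a few lines of Monte Carlo; the threshold-voltage sigma, supply, delay model and timing spec below are all assumed values for illustration, not GSS results:

    import numpy as np

    rng = np.random.default_rng(1)
    n_mc = 100_000
    vth = rng.normal(0.35, 0.030, n_mc)      # V; sigma stands in for RDF/LER
    vdd = 0.8                                # V, assumed supply
    delay = 1.0 / (vdd - vth) ** 1.3         # alpha-power-law-style toy delay

    spec = 1.15 * np.median(delay)           # assumed timing budget
    print(f"parametric yield vs. this spec: {np.mean(delay <= spec):.1%}")
    # Re-running with a different process-induced sigma (e.g. from channel
    # thickness variation) is the technology knob side of the co-optimization.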

Ansys pointed out that in advanced FinFET process nodes, the operating voltage for the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins for the devices. With several transient modes of operation in low-power ICs, having an accurate representation of the package model is mandatory for accurate noise-coupling simulations. Distributed package models with bump-level resolution are required for performing Chip-Package-System simulations for accurate noise-coupling analysis.

Further Exploration

The topic of semiconductor manufacturing has generated a large number of responses. As a result, the next monthly article will continue to cover the topic, with particular focus on the impact of leading-edge processes on EDA tools and practices.

This article was originally published on Systems Design Engineering.

Calibre RealTime: Placing Signoff Verification into the Custom Designer’s Hands

Thursday, January 24th, 2013

How to reduce custom/AMS design cycle time while improving design quality with on-demand, in-design, signoff-quality verification from Calibre RealTime.

To download this white paper, click here.

Not Your Father’s DFM

Tuesday, February 15th, 2011

By John Blyler

Historically, the design-for-manufacturing (DFM) approach had two goals. One was to ensure that a given design actually could be manufactured. The second goal was to determine how much yield improvement could be achieved with a given tool. But the costs to quantify the improvements in yield with actual data were too high to justify the effort.

The problem is exacerbated below 40nm. Semiconductor fabs will not customize their manufacturing process to a customer-specific design style—save for a few large customers like Apple. Instead, fabs have normalized their process activities to serve the greatest number of chip customers and different markets.

But the fabless companies are not without resources. Many have a great deal of data that is returned with their test and production wafers after the first few tapeouts. While chip companies may have 30 to 40 tapeouts or more for a specific process node, the first several tapeouts typically provide data from tens of thousands of test and probe wafers per month. This is valuable data that could be fed back to designers to optimize yields in the next series of tapeouts.

“The EDA industry does not provide a mechanism or system to re-simulate a fix (adjusting line spacing, for example) to see its impact on future designs,” notes Michael Buehler, marketing director for the Design-to-Silicon Division at Mentor Graphics. “Today, we focus the problem design by design, as opposed to looking holistically at a family of designs coming up.”

In the past, there was far less sensitivity to individual design features. Any problems that arose were fixed in the manufacturing process. At today’s advanced nodes, specific designs can create systematic failures that cannot be efficiently fixed by tweaking the process. Further, few fabs will want to re-center their production operation to fix one customer’s problems when it means that the rest of the customer base must change its manufacturing rules.

This idea of a feedback mechanism within the manufacturing process for a given family of chips at a given node is not new. But neither does it fit into the traditional DFM mindset.

What should this optimized process based on incremental feedback be called? Some have suggested Design for Intelligent Manufacturing. While descriptive enough, the acronym is something of a problem—DIM. Other players tout the name Design Manufacturing Co-Optimization, which highlights the collaborative nature of the approach.

Collaboration is indeed critical. This process is affected by everybody who impacts the yield, which includes the entire ecosystem. What was a traditional DFM tool path now becomes an information flow between the tools in design, place and route, production and test. The results are fed back into the tools flow, simulated, optimized and used to tweak the next tapeout. The fix, say to a line spacing margin that turns out to be too tight, could happen a number of different ways—in the manufacturing fabrication, with a change to the test or the router, or even with the IP.
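
One turn of that loop might look like the following sketch, with invented numbers: choose the smallest spacing whose measured systematic fail rate from earlier tapeouts meets a target, and feed it into the next tapeout's rules.

    def next_tapeout_min_space(current, fail_rate_by_space, target=1e-9):
        # Feedback step: smallest spacing whose observed per-feature fail
        # rate (from probe/test data on prior tapeouts) meets the target.
        for space, rate in sorted(fail_rate_by_space.items()):
            if rate <= target:
                return max(space, current)
        raise ValueError("no measured spacing meets target; fix process or IP")

    # Hypothetical via-to-line spacings (um) -> measured fails per feature.
    observed = {0.050: 3e-6, 0.055: 4e-8, 0.060: 5e-10}
    print(next_tapeout_min_space(0.050, observed))   # -> 0.06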

This feedback and optimization process goes way beyond what was typically thought of as DFM. Few disagree on this point. About the only thing open for debate is what this new approach should be called.