Solid State Technology

Posts Tagged ‘DFT’

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D stacks, in which complex mechanical forces (primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages) influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip and package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data must feed forward and feed back through the supply chain to minimize the risk carried by expensive Work In Progress (WIP).

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today, working with Qualcomm, they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be done at the wafer level, but with SiP based on chips from multiple vendors the ‘final test’ must now happen at the package level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will receive an interconnect test and a memory Built-In Self-Test (BIST), applied from BIST logic resident on the logic die and connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)
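A memory BIST engine of the kind described above typically applies a March algorithm to the DRAM through the TSV interface. As an illustration only (not any vendor's actual BIST implementation), here is a software sketch of the classic March C- sequence over a memory exposed through read/write callbacks:

```python
def march_c_minus(mem_size, read, write):
    """Run the March C- algorithm over a memory exposed via read/write
    callables. Returns a list of (address, expected, got) failures."""
    fails = []

    def check(addr, expected):
        got = read(addr)
        if got != expected:
            fails.append((addr, expected, got))

    up = range(mem_size)
    down = range(mem_size - 1, -1, -1)

    for a in up:        # M0: any order, write 0
        write(a, 0)
    for a in up:        # M1: ascending, read 0 then write 1
        check(a, 0); write(a, 1)
    for a in up:        # M2: ascending, read 1 then write 0
        check(a, 1); write(a, 0)
    for a in down:      # M3: descending, read 0 then write 1
        check(a, 0); write(a, 1)
    for a in down:      # M4: descending, read 1 then write 0
        check(a, 1); write(a, 0)
    for a in up:        # M5: any order, read 0
        check(a, 0)
    return fails
```

A hardware BIST controller implements the same sequence as a small state machine, with a pass/fail flag or fail log in place of the Python list.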

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”
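The combination Edie describes can be pictured as a join between tester fail data and design data. Below is a minimal sketch with hypothetical record layouts (real flows work from tester datalogs and the flattened scan-chain netlist):

```python
# Hypothetical data: a tester fail log gives (chain, shift_cycle) for each
# mismatching scan output; the design database maps that position back to
# a named flip-flop and its physical location.
fail_log = [("chain0", 3), ("chain0", 7), ("chain1", 3)]

design_db = {
    ("chain0", 3): {"cell": "u_core/alu/q_reg[3]",  "xy": (120.4, 88.0)},
    ("chain0", 7): {"cell": "u_core/alu/q_reg[7]",  "xy": (121.1, 88.0)},
    ("chain1", 3): {"cell": "u_mem/ctrl/state_reg", "xy": (412.9, 15.5)},
}

def locate_fails(fail_log, design_db):
    """Map raw tester fails to named cells and coordinates, so physical
    failure analysis knows where on the die to look."""
    return [{**design_db[key], "key": key}
            for key in fail_log if key in design_db]
```

Diagnosis tools go much further (back-tracing through combinational logic to candidate defect sites), but the essential point is the same: neither data set is useful for yield learning without the other.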

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
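RITdb's actual schema is not reproduced here, but the idea of a structured, provenance-aware record chain moving through the supply chain can be sketched as follows (all field names are hypothetical):

```python
import hashlib
import json
import time

def make_test_record(entity, operation, payload, parent_digest=None):
    """Build a self-describing test-data record whose digest links it to
    the upstream record, giving a provenance chain across the supply
    chain (hypothetical structure, not the actual RITdb schema)."""
    record = {
        "entity": entity,          # e.g. "foundry", "OSAT", "fabless"
        "operation": operation,    # e.g. "wafer_sort", "final_test"
        "timestamp": time.time(),
        "payload": payload,        # measurements, bin counts, etc.
        "parent": parent_digest,   # None for the first record in the chain
    }
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest
```

Because every record carries its own structure and a link to its parent, a downstream OSAT or fabless consumer can interpret and verify data regardless of which company's test-program infrastructure produced it.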

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but cannot necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.
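The parametric screens Nigh mentions amount to comparing measurements against limits; a toy sketch of such a screen, with invented limit values:

```python
# Hypothetical production-test screen: compare parametric measurements
# against limits. Shifts in IO leakage, chip leakage, or Fmax can flag
# mechanical/thermal stress damage that a functional test would pass.
LIMITS = {
    "io_leakage_uA":   (0.0, 5.0),
    "chip_leakage_mA": (0.0, 12.0),
    "fmax_GHz":        (2.8, None),   # lower bound only
}

def screen(measurements, limits=LIMITS):
    """Return the list of parameters outside their (lo, hi) limits."""
    fails = []
    for name, (lo, hi) in limits.items():
        value = measurements[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            fails.append(name)
    return fails
```

In practice the limits themselves are set statistically from known-good populations, and outlier-detection methods refine the simple fixed bounds shown here.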

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of TSV metrology tools worldwide. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSVs, micro-bumps, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield loss mechanisms, even if electrical tests show only standard results such as a ‘partial open.’ Fiordalice said that, “we’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for their Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.
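One of the manufacturing constraints such a rule file encodes is minimum bump pitch. A minimal sketch of that single check follows (the limit value is invented; a real rule deck covers RDL spacing, connectivity, die order, and much more):

```python
import math

MIN_BUMP_PITCH_UM = 40.0   # hypothetical assembly-house constraint

def pitch_violations(bumps, min_pitch=MIN_BUMP_PITCH_UM):
    """Return pairs of bump names whose center-to-center distance is
    below the minimum pitch the assembly house can manufacture.
    `bumps` maps bump name -> (x, y) position in micrometers."""
    names = sorted(bumps)
    bad = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = bumps[a]
            bx, by = bumps[b]
            if math.hypot(bx - ax, by - ay) < min_pitch:
                bad.append((a, b))
    return bad
```

A production tool uses spatially indexed geometry rather than this all-pairs loop, but the rule itself is this simple: a distance compared against a process limit.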


3D EDA brings together proven 2D solutions

Friday, March 14th, 2014

By Ed Korczynski, Senior Technical Editor, SST/SemiMD

With anticipated economic limits to the continuation of Moore’s Law now on the horizon, it seems that moving into the 3rd dimension (3D) by stacking multiple layers of integrated circuits (IC) will be the ultimate expression of CMOS technology. Whether stacking heterogeneous chips using through-silicon vias (TSV), or monolithic approaches to forming multiple active IC layers on a single silicon substrate, 3D ICs should be both smaller and faster compared to functionally equivalent 2D chips and packages.

However, 3D ICs will likely always cost more than equivalent 2D implementations, due to the additional manufacturing steps needed. A recent variation on 2D IC packaging with some of the benefits of 3D is the use of silicon interposers containing TSV.

Current state-of-the-art electronic design automation (EDA) tools exist to handle complex IC systems, and can therefore handle complex 3D designs as long as the software has the proper inputs from a foundry’s Process-Design Kit. Figure 1 shows the verification flow for a multi-chip system using the “3DSTACK” capability within Mentor Graphics’ Calibre platform. Leading IC foundries GlobalFoundries and TSMC as well as 3D IC specialty foundry Tezzaron have all qualified 3DSTACK for their 2.5D and 3D design verifications.

FIGURE 1: “3DSTACK” functionality integrates with existing 2D Design Rule Check (DRC) modules within the Calibre platform. (Source: Mentor Graphics).

EDA tools have evolved in complexity such that Design-For-Test (DFT) methodologies and technologies now exist to tackle 3D ICs. Steve Pateras, product marketing director, DFT, Design to Silicon Division of Mentor Graphics, advised, “If you’re stacking multiple die together, you need to work with known good die. The ROI basically changes for stacking, such that you need to get into a different regime of test.” In a die stack we have to think about not just known good die, but also die known to be good after they are stacked. The latter condition mandates standards for DFT to allow test signals to flow between layers.

The standard under development by the IEEE 1838 working group on 3D interfaces is intended for heterogeneous integration, allowing for different IC process technologies, design set-ups, test approaches, and design-for-test approaches. The standard defines test access features that enable the transportation of test stimuli and responses for both a target die and its inter-die connections.

Figure 2 shows the extra die interfaces that must be physically verified within a 3D IC system stack. Die interfaces can be mis-aligned due to translation or rotation during assembly, and with die from different fabs built at different geometries it can be non-trivial to ensure that the right pins are connected.

FIGURE 2: Schematic cross-section of a 3D IC system showing the die interfaces that require new Physical Verification (PV) checks. (Source: Mentor Graphics).
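The translation/rotation concern described above can be sketched as a coordinate transform followed by a distance test (pin names and the bonding tolerance are illustrative):

```python
import math

def place(pins, dx, dy, theta_deg):
    """Apply the assembly transform (rotation then translation) to a
    die's pin coordinates, as recorded in the package description."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return {name: (c * x - s * y + dx, s * x + c * y + dy)
            for name, (x, y) in pins.items()}

def misaligned(top_pins, bottom_pins, tol_um=2.0):
    """Report nets whose mating pins land farther apart than the
    bonding tolerance after placement."""
    return [n for n in top_pins
            if n in bottom_pins and math.hypot(
                top_pins[n][0] - bottom_pins[n][0],
                top_pins[n][1] - bottom_pins[n][1]) > tol_um]
```

Physical-verification tools perform essentially this check over every bump and TSV landing pad in the stack, using the die order, position, rotation, and orientation from the package description.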

3D memory stacks are somewhat in their own category: they are primarily designed and manufactured by IDMs; though often built with a logic layer on the bottom, they are mostly homogeneous; and since memory usually runs cooler than logic, they generally have no cooling issues. For these reasons, 3D memory stacks using wire-bonds have been in volume production for years. Micron leads the development of the Hybrid Memory Cube using TSV, and Samsung leads in growing multiple memory layers on a single silicon chip.

Future Demand for 3D Logic

So far, the only known commercial logic chip shipping with TSV is the Xilinx Virtex-7, in which four 28nm-node FPGA die (as reported by Phil Garrou in 2011 in his IFTLE blog) are connected through a silicon interposer. Xilinx has shown that much of the motivation for using 2.5D packaging was to improve yield when working with the maximum number of logic gates in the smallest available process node; as foundry yields improve with learning at a given node, we would expect the FPGA to be made using a single-chip 2D solution.

It appears that 2.5D is not so much a stepping-stone to 3D as it is a clever variant on established 2D advanced-packaging options. Silicon interposers with TSV offer certain advantages for integration of high-speed logic in 2D, but due to their relatively greater cost compared with other WLP methods they will likely be used only for high-margin parts like the Virtex-7. Also, Out-Sourced Assembly and Test (OSAT) companies have been offering both “fan-out” and “fan-in” wafer-level packaging (WLP) options, and heterogeneous integration can certainly be done using these approaches. “We have customers planning on using interposers, but they’re planning on lower-cost substrates,” commented Michael Buehler-Garcia, senior marketing director for Calibre, Design to Silicon Division of Mentor Graphics.

If high volume CMOS logic will always be most cost-effectively integrated in a single 2D slice of silicon, and heterogeneous integration of CMOS can be done in 2D using FD-SOI substrates, then what remains as the demand driver for future 3D logic stacks? What logic products require heterogeneous integration for basic functionality, would be band-width-limited by 2D packages, and also are anticipated to be shipped in sufficiently high-volume to allow for amortization of the integration costs?

Several vendors have recently launched 100G C form-factor pluggable (CFP) modules to increase speeds while reducing costs in communication between data servers. ClariPhy produces a CFP SoC using a 28nm CMOS process that is packaged with laser diode chips from Sumitomo Electric Industries (SEI). “By combining ClariPhy’s SoC with SEI’s world class indium phosphide optics technology and deep experience in volume manufacturing of pluggable optical modules, we will deliver the benefits of coherent technology to metro and datacenter networks,” said Nobu Kuwata, general manager of the Technology and Marketing Department of Sumitomo Electric Device Innovations (SEDI). “We will provide first samples of our 100G coherent CFP next quarter.”

Even greater cost and power savings could derive from a revolution in the interconnections used not just between servers but inside the server farms that provide the ubiquitous “cloud computing” we are all coming to enjoy. “It’s still a couple of years out, but we’re doing research on DARPA projects now,” says Buehler-Garcia in reference to work Mentor Graphics is doing to bring the automation of its Calibre platform to this application space.

The EDA industry’s ability to handle system-on-chip (SoC) and system-in-package (SiP) layouts means that the differences between designing for 2D, 2.5D, and 3D logic should be minimal. “We don’t charge extra for 3D,” explained Buehler-Garcia, “it’s already part of the deal.” ‑E.K.

Safety critical devices drive fast adoption of advanced DFT

Monday, January 6th, 2014

By Ron Press, Mentor Graphics Corp

Devices used in safety-critical applications need to be known to work, and must be able to be verified regularly. Therefore, a very high-quality test is important, as is a method to perform a built-in self-test. Recently, there has been strong growth in the automotive market, and the number of processors within each car is steadily increasing. These devices are used for more and more functions, such as braking systems, engine control, heads-up displays, navigation systems, image sensors, and more. As a result, we see many companies designing devices for the automotive market or trying to enter it.

2011 saw the publication of the ISO 26262 standard, which specifies criteria for automotive electronics. Our experience is that two test requirements have recently been adopted, or at least evaluated, by most companies developing safety-critical devices. One requirement is to perform a very high-quality test such that virtually no defective parts escape the tests. The other is to perform a built-in self-test so that the part can be tested while in the safety-critical application.

There are various pattern types that help support the goal of zero DPM (defects per million) in shipped devices. In particular, Cell-Aware test is proven to uniquely detect defects that escape traditional tests. Cell-Aware test can find defects that would escape a 100 percent stuck-at, transition, and timing-aware test set. This is because it works by first modeling the actual defects that can occur in the physical layout of standard cells. Cell-Aware pattern size was recently improved and reduced, but a complete pattern set is still larger than a traditional pattern set, so embedded compression is used.
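The mechanism can be illustrated with a toy example: an internal cell defect that a complete stuck-at pattern set never exercises. The defect model below is invented for illustration; real Cell-Aware flows extract defect candidates from the cell's transistor-level layout:

```python
def good_nand(a, b):
    """Reference behavior of a 2-input NAND standard cell."""
    return int(not (a and b))

def defective_nand(a, b):
    # Hypothetical internal defect (e.g. an open in one pull-up
    # transistor) that only disturbs the output for the input pair (0, 0).
    if (a, b) == (0, 0):
        return 0          # should be 1
    return good_nand(a, b)

# These three patterns give 100% stuck-at coverage of a NAND2
# (all input and output stuck-at-0/1 faults), yet never apply (0, 0).
STUCK_AT_PATTERNS   = [(0, 1), (1, 0), (1, 1)]
CELL_AWARE_PATTERNS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # exhaustive

def detects(patterns, dut):
    """True if any pattern exposes a mismatch against the good cell."""
    return any(dut(a, b) != good_nand(a, b) for a, b in patterns)
```

The stuck-at set passes the defective cell, while the Cell-Aware set (which was generated from layout-extracted defect sites rather than abstract pin faults) catches it.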

At Mentor Graphics, we started seeing more and more customers implementing logic BIST and embedded compression for the same circuits. Therefore, it made sense to integrate both into common logic that can be shared, since both technologies interface to scan chains in a similar manner. The embedded compression decompressor could be configured into a linear feedback shift register (LFSR) to produce pseudo-random patterns for logic BIST. Both the logic BIST and embedded compression logic provide data to scan chains through a phase shifter so that logic is fully shared. The scan chain outputs are compacted together in embedded compression. This logic is mostly shared with logic BIST to reduce the number of scan chain outputs that enter a signature calculator.
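The LFSR reconfiguration described above relies on the property that a maximal-length feedback polynomial cycles through every nonzero state before repeating. A small Python sketch of a Fibonacci LFSR producing pseudo-random patterns (a 4-bit example; real BIST hardware uses much wider registers feeding scan chains through a phase shifter):

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random patterns from a Fibonacci LFSR.
    `taps` are the feedback bit positions (0 = LSB); with a
    maximal-length polynomial the register cycles through all
    2**width - 1 nonzero states before repeating."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:              # XOR of the tapped bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
```

Taps (3, 2) correspond to the maximal polynomial x^4 + x^3 + 1, so the 4-bit register visits all 15 nonzero states. In the hybrid scheme, the same shift-register flops serve as the compression decompressor in ATPG mode and as this pattern generator in logic BIST mode.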

The hybrid embedded compression/logic BIST circuit is useful for meeting the safety-critical device quality and self-test requirements. In addition, since logic is shared the controller is 20-30% smaller than implementing embedded compression and logic BIST separately. As previously mentioned, we have seen this logic being adopted or in evaluation very broadly by automotive device designers.

One side effect of using embedded compression and logic BIST is that each makes the other better. For example, embedded compression can supply an extremely high quality production test. So, fewer test points are necessary in logic BIST to make random pattern resistant logic more testable, which reduces the area of logic BIST test points. Conversely, the X-bounding and any test points that are added for logic BIST make the circuit more testable and improve the embedded compression coverage and pattern count results.

Ron Press is the technical marketing manager of the Silicon Test Solutions products at Mentor Graphics. The 25-year veteran of the test and DFT (design-for-test) industry has presented seminars on DFT and test throughout the world. He has published dozens of papers in the field of test, is a member of the International Test Conference (ITC) Steering Committee, is a Golden Core member of the IEEE Computer Society, and a Senior Member of IEEE. Press has patents on reduced-pin-count testing, glitch-free clock switching, and patents pending on 3D DFT.

Optimizing Test To Enable Diagnosis-Driven Yield Analysis

Thursday, February 21st, 2013

Using diagnosis-driven yield analysis, companies have decreased their time to yield, managed manufacturing excursions, and recovered yield lost to systematic defects. Dramatic time savings and yield gains have been proven using these methods. Companies must plan ahead to take advantage of diagnosis-driven yield analysis. The planning needs to cover how and what patterns to generate during ATPG/DFT, which design data to archive, how to optimize the test program, how much data to collect, and what and how much diagnosis to perform. This white paper addresses how to optimize the test environment in order to enable efficient diagnosis-driven yield analysis.

To download this white paper, click here.