
Posts Tagged ‘test’

Elusive Analog Fault Simulation Finally Grasped

Tuesday, September 27th, 2016


By Stephen Sunter, Mentor Graphics

The test time per logic gate in ICs has greatly decreased in the last 20 years, thanks to scan-based design-for-test (DFT), automatic test pattern generation (ATPG) tools, and scan compression. But for analog circuits, test time per transistor has not decreased at all. And to make matters worse, the test time for the analog portion of an IC can dominate total test time. A new approach is needed for analog tests to achieve higher coverage in less time, or to improve defect tolerance.

(Image source: ON Semiconductor)

Analog designers and test engineers lack DFT tools comparable to those used by their digital counterparts. It has been difficult to improve the rate of defective parts per million (DPPM) because measuring analog defect coverage has been too challenging; DPPM is typically measured by the rate of customer returns, which can occur months after the ICs are tested.

Analog fault simulation has been discussed only in academic papers and, more recently, in a few industrial papers that describe proprietary software. Why haven’t the analog fault simulation techniques described in all those papers led to commercially available fault simulators that are used in industry? Mostly because there is no industry-accepted analog fault model, and simulating all potential faults requires an impractically long time.

Potential Solutions for Reducing Simulation Time

Many methods for reducing simulation time have been proposed over the years in published papers, including:

  • Simulate only shorts and opens in the schematic netlist without variations;
  • Analyze a circuit’s layout to find the shorts and opens that can actually occur (and the likelihood of those defects occurring);
  • Simulate only in the AC domain;
  • Simulate the sensitivities of each tested performance to variations in each circuit element;
  • Use a simplified, time domain simulation to measure the impact of injected shorts and opens on output signals, only within a few clock cycles;
  • Measure analog toggle coverage.

Even if these techniques were very efficient and reduced simulation time dramatically, the large number of defects simulated would leave a large number of undetected defects to diagnose. For example, if there were 100,000 potential faults in a circuit and 90% were detected, there would be 10,000 undetected faults to investigate. Analyzing each undetected defect is a time-consuming task that requires detailed knowledge of the circuit and its tests. Therefore, reducing the number of defects simulated saves time in multiple ways. The methods to reduce the number of defects include:

  • Randomly select defects from a list of all potential defects;
  • Randomly select defects, after grouping them according to defect likelihoods;
  • Select only principal parameters of the circuit elements, such as voltage, gate length, width, and oxide thickness;
  • Select representative defects based on circuit analysis.

Potential Standard Analog Fault Models

Currently, there is no accepted analog fault model standard in the industry. Proposals such as simulating only short and open defects and simulating defective variations in circuit elements or in high-level models have been rejected. Because of the lack of a standard, a group of about a dozen companies (including Mentor Graphics) has been meeting regularly since mid-2014 to develop such a fault model. The group has reported their progress publicly several times, and hopes to develop an IEEE standard by 2018.

The Tessent DefectSim Solution

Tessent® DefectSim™ incorporates lessons learned from all previous approaches, combining the best aspects of each while avoiding their pitfalls. Simulation time is reduced using a variety of techniques that all together reduce total simulation time by many orders of magnitude compared to some of the previous approaches, without introducing a new simulator, reducing existing simulator accuracy, or restricting the types of tests. The analog defect models can be shorts and opens, just variations, or both. Or, users can substitute their own proprietary defect models. The defects can be injected at the schematic level, at the layout level, or a combination of both.

To be realistic, defects should be injected in a layout-extracted netlist. But higher-level netlist descriptions or hardware description language (HDL) models, such as Verilog-A or Verilog RTL, can reduce simulation time by one or two orders of magnitude. In practice, the highest level netlist of a subcircuit is often just its schematic; nevertheless, it typically simulates an order of magnitude faster than the layout-extracted netlist. DefectSim runs Eldo® when the circuit contains only SPICE and Verilog-A models, and Questa® ADMS™ when Verilog-AMS or RTL models are also used.

DefectSim introduces a new statistical technique called likelihood-weighted random sampling (LWRS) to minimize the number of defects to simulate. The technique uses stratified random sampling in which each stratum contains only one defect, and the likelihood of randomly selecting each defect is proportional to the likelihood of that defect occurring. Each likelihood of occurrence is computed from designer-provided global parameters and the parameters of each circuit element.

For example, shorts are the most common defects; in state-of-the-art production processes, shorts are 3–10X more likely than opens. When the range of defect likelihoods is large, as it is for mixed-signal circuits, LWRS requires up to 75% fewer samples than simple random sampling (SRS) for a given confidence interval (the variation in an estimate that would occur if the random sampling were repeated many times). In practice, when coverage is 90% or higher, it is usually sufficient to simulate a maximum of 250 defects, regardless of the circuit size or the number of potential defects, to estimate coverage within 2.5% at a 99% confidence level. Simulating as few as one hundred defects is sufficient for ±4% estimate precision. For small circuits, or when time permits, all defects can be simulated.
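
To make the statistics concrete, below is a minimal sketch of likelihood-proportional sampling, the core idea behind LWRS, using an invented defect list and a stand-in fault simulator; none of the names or numbers come from DefectSim itself. Because defects are drawn with probability proportional to their likelihood of occurring, the plain sample mean estimates likelihood-weighted coverage. The confidence interval computed is the simple unstratified binomial one; the stratified LWRS estimator is tighter, which is where the up-to-75% sample reduction quoted above comes from.

    import math
    import random

    # Invented defect list: (defect_id, likelihood of occurrence). In practice,
    # likelihoods are computed from designer-provided global parameters and
    # per-element parameters, as described above.
    defects = [(i, random.uniform(1.0, 10.0)) for i in range(100_000)]

    def test_detects(defect_id):
        """Stand-in for a real fault simulation of the production test;
        here we simply fake a ~90% detection rate for illustration."""
        return random.random() < 0.90

    N_SAMPLES = 250

    # Draw defects with probability proportional to their likelihood, so the
    # sample mean directly estimates likelihood-weighted defect coverage.
    weights = [w for _, w in defects]
    sample = random.choices(defects, weights=weights, k=N_SAMPLES)

    coverage = sum(test_detects(d) for d, _ in sample) / N_SAMPLES

    # Simple normal-approximation 99% confidence interval (z = 2.576).
    half_width = 2.576 * math.sqrt(coverage * (1 - coverage) / N_SAMPLES)
    print(f"likelihood-weighted coverage = {coverage:.1%} +/- {half_width:.1%}")

Note that the interval width depends only on the sample size and the estimated coverage, not on the total number of potential defects, which is why a few hundred simulations can suffice even for a circuit with 100,000 potential faults.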

DefectSim allows you to combine almost all of the previously published techniques for reducing simulation time, including random sampling, high-level modeling, stop-on-detection, AC mode, and parallel simulation. Taken together, these techniques can reduce simulation time by up to six orders of magnitude compared to simulating the production test of all potential defects in a flat, layout-extracted netlist. The same techniques can be applied to the measurement of defect tolerance.

For more information about Tessent DefectSim, read the whitepaper at:
https://www.mentor.com/products/silicon-yield/resources/overview/part-1-analog-fault-simulation-challenges-and-solutions-f9fd7248-3244-4bda-a7e5-5a19f81d7490?cmpid=10167

Test Protocols for the IoT

Tuesday, July 12th, 2016


By Ed Korczynski, Sr. Technical Editor

The Internet-of-Things (IoT) will require components that can sense the world, process and store data, and communicate autonomously within a secured environment. Consequently, IoT devices must incorporate sensors, wireless communication at Radio Frequencies (RF), logic, and embedded memory. Integrated circuit (IC) chips for IoT applications will have to be created at low cost in High Volume Manufacturing (HVM) lines, for which there are unique challenges with design and test. Presto Engineering’s founder and president, Michel Villemain, spoke with the Show Daily about how his company’s test services can accelerate the time-to-market and reduce risk in creating new IoT chip products.

“We started 10 years ago, and were differentiated on RF,” explained Villemain. “We now have a good view on what test costs are in production for different chip functionalities. We focus on specific segments of the industry that are not the traditional ‘drivers’ such as SoCs and large digital chips.” Since most IoT devices are expected to use Over The Air (OTA, a.k.a. “wireless”) links, Presto’s expertise in RF test helps create a low-cost solution for customers.

“We see some general trends in this area,” said Villemain. “The first, in IoT, is that there is a lot of activity in determining proper protocols for communications, as the industry moves from short-range private area networks to low-power wide-area networks with range beyond 300 feet. The second trend, which is not technical, is that more and more non-semiconductor companies, such as ‘system houses,’ will be designing chips to reduce costs and increase security.”

“The need for security has been reported as one of the main issues in people’s minds preventing deployment of the IoT. When security has to be hardware-related and implemented in the chip, the only easy way to enable it is with test,” confided Villemain. “Remember that security is not binary. There is a return-on-investment decision based on how easy it would be to break something and how much it would cost to prevent that breakage. There is somewhat of a consensus that hardware-based solutions provide more security for data traveling over a link, so what we are trying to do is lower the cost of adding security at the hardware level.”

For the test of a very large and complex digital device, all of the digital test instructions are generated by the design tools. For a primarily analog device, however, the digital portion is neither the core of the design nor the core expertise of the design team. The Figure shows the workflow used by Presto to methodically establish rigorous engineering and production flows for IoT ICs.

Figure: Presto Engineering’s workflow for establishing engineering and production test flows for IoT ICs.

“Provisioning” is defined as the use of embedded Non-Volatile Memory (NVM), such as Flash, within a chip to customize its functionality. If you need to test the Flash cells and bake, then program the Flash and bake again before final test, the flow calls for up to three probe insertions, so the type of NVM chosen can alter the test protocol needed.
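
As a rough sketch of that trade-off, the snippet below counts probe insertions per NVM choice; the flows, step names, and the assumption that a one-time-programmable (OTP) alternative needs no retention bake are illustrative only, not Presto’s actual protocols.

    # Hypothetical provisioning flows; steps and NVM options are assumptions.
    FLOWS = {
        "flash": [
            "probe: test flash cells",
            "bake",
            "probe: program flash (provision)",
            "bake",
            "probe: final test",
        ],
        "otp": ["probe: test, program, final test"],  # assumes no retention bake
    }

    for nvm, steps in FLOWS.items():
        insertions = sum(step.startswith("probe") for step in steps)
        print(f"{nvm}: {insertions} probe insertion(s)")  # flash: 3, otp: 1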

At the end of last month, Presto announced a multi-year supply agreement with NAGRA—a Kudelski Group company in secure digital TV access and management systems—to provide supply chain management and production services for several of NAGRA’s key products in the Pay TV market. “We are delighted that NAGRA has placed its trust in Presto to be its production partner for volume products,” said Michel Villemain, CEO, Presto Engineering. “Leveraging the team and expertise acquired from INSIDE Secure in 2015, this is a natural complement to our strategy of deploying an independent subcontract back-end manufacturing and supply chain service for the secure card industry and IoT markets.”

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data must be fed forward and back to minimize the risk of expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today, working with Qualcomm, they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric,’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs, the ‘final test’ can be done at the wafer level, but with SiP based on chips from multiple vendors the ‘final test’ must now happen at the package level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST circuitry resident on the logic die, which connects to the memory stack through Through-Silicon Vias (TSVs).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)
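
To illustrate the memory BIST mentioned above, here is a minimal sketch of a March C- test, the classic style of algorithm a BIST engine on the logic die might run across the TSVs; the 16-word memory and the injected stuck-at fault are purely illustrative and not tied to any vendor’s engine.

    # March C-: write 0 up; (r0,w1) up; (r1,w0) up; (r0,w1) down; (r1,w0) down; r0 down.
    def march_c_minus(size, write, read):
        up = range(size)
        down = range(size - 1, -1, -1)
        fails = set()

        def element(order, expect, value):
            # One March element: per address, read and compare (if expect
            # is given), then write (if value is given).
            for addr in order:
                if expect is not None and read(addr) != expect:
                    fails.add(addr)
                if value is not None:
                    write(addr, value)

        element(up, None, 0)
        element(up, 0, 1)
        element(up, 1, 0)
        element(down, 0, 1)
        element(down, 1, 0)
        element(down, 0, None)
        return sorted(fails)

    mem = [0] * 16
    STUCK_AT_0 = 5  # injected defect: this address never stores a 1

    def write(addr, bit):
        mem[addr] = 0 if addr == STUCK_AT_0 else bit

    def read(addr):
        return mem[addr]

    print(march_c_minus(len(mem), write, read))  # -> [5]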

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Eide, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than one would expect. So what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
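
RITdb’s actual schema is defined by the working group and is not reproduced here, but a hypothetical record, with every field name an assumption, illustrates the kind of structured, provenance-carrying data construct Carulli describes:

    from dataclasses import dataclass, field

    # Illustrative only; not RITdb's real schema. The point is a structured,
    # self-describing unit that any supply-chain party can parse, with
    # security/provenance metadata attached.
    @dataclass
    class TestRecord:
        device_id: str     # traceable across fabless, foundry, OSAT, and OEM
        insertion: str     # e.g. "wafer_sort" or "final_test"
        producer: str      # which supply-chain partner generated the data
        measurements: dict = field(default_factory=dict)
        provenance: str = ""

    rec = TestRecord(
        device_id="lot42-w07-x103-y88",
        insertion="final_test",
        producer="OSAT-A",
        measurements={"fmax_ghz": 2.41, "io_leakage_ua": 0.8},
        provenance="signed:OSAT-A",
    )
    print(rec.device_id, rec.measurements)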

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types, the industry has on-chip temperature measurement circuits that can monitor temperature at a given time, but they do not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.
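
A minimal sketch of such a parametric screen follows, assuming made-up IO-leakage data and a generic median/MAD outlier test rather than GLOBALFOUNDRIES’ actual methods:

    import statistics

    def flag_outliers(values, k=6.0):
        """Flag values far from the median, using the median absolute
        deviation (MAD) as a robust estimate of spread."""
        med = statistics.median(values)
        mad = statistics.median(abs(v - med) for v in values)
        robust_sigma = 1.4826 * mad  # MAD-to-sigma factor for normal data
        return [i for i, v in enumerate(values)
                if abs(v - med) > k * robust_sigma]

    # Made-up IO leakage readings (uA); die 4 hints at stress damage.
    io_leakage_ua = [0.80, 0.90, 0.85, 0.87, 5.20, 0.82, 0.88]
    print(flag_outliers(io_leakage_ua))  # -> [4]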

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, having delivered dozens of TSV metrology tools worldwide. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSVs, micro-bumps, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield-loss mechanisms, even when electrical tests show standard results such as ‘partial open.’ Fiordalice said, “we’ve had a lot of pull to take our TSV metrology tool and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for their Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.

—E.K.

Optimal+ Turns 10, Wins Accolades For Its Work

Friday, July 17th, 2015


By Jeff Dorsch, Contributing Editor

Optimal+ has been in business since 2005. The Israel-based company, which has offices around the world, has something else to celebrate this year: Frost & Sullivan gave Optimal+ its 2015 Global Semiconductor Test Visionary Innovation Leadership Award.

“Frost & Sullivan firmly believes that Optimal+ is the epitome of visionary innovation as it relates to Big Data analytics for the semiconductor industry,” the management consulting and market research firm said. It cited Optimal+’s ability to offer customers a high return on investment, along with improvements in quality and yield, among other attributes.

Optimal+ last year changed its name from OptimalTest, saying it had expanded beyond semiconductor test operations to supply chain management and visibility, product planning, and Big Data analytics.

“The name Optimal+ is more representative of our business and focus moving forward,” founder and CEO Dan Glotter said in a statement.

Kenneth Levy, the chairman emeritus of KLA-Tencor and founder of KLA Instruments, serves as chairman of the Optimal+ board. The company’s investors include Aviv Ventures, Carmel Ventures, Evergreen Venture Partners, and Pitango Venture Capital.

Optimal+ has Advanced Micro Devices and Broadcom among its customers, and it has worked with a wide variety of fabless semiconductor companies, integrated device manufacturers, and outsourced semiconductor assembly and test firms.

“Everything we do is predicated on ROI,” says David Park, the company’s vice president of worldwide marketing. Optimal+ has processed and approved more than 20 billion chips in its decade-long history, and it says its software and services are checking out 15 billion chips a year now.

Park says the company can deliver up to 2 percent greater yield and up to 3 percent improvement in product yield recovery based solely on test, while also improving operational efficiency and productivity by up to 20 percent. Optimal+ also emphasizes test-time reduction, he adds.

Optimal+ last month rolled out Release 6.0 of its Semiconductor Operations Platform with a new feature, Extreme Analytics and Characterization, or EXACT. It also announced the selection of the HP Vertica Analytics Platform to help its customers with business intelligence and analytics.

For Optimal+, the company’s product and services portfolio is all about providing “Manufacturing Intelligence,” Park notes.

3D-IC Testing With The Mentor Graphics Tessent Platform

Thursday, June 20th, 2013

Three-dimensional stacked integrated circuits (3D-ICs) are composed of multiple stacked die, and are viewed as critical in helping the semiconductor industry keep pace with Moore’s Law. Current integration and interconnect methods include wirebond and flip-chip and have been in production for some time.

3D chips connected via interposers are in production at Xilinx, Samsung, IBM, and Sematech [1]. Interposers are the logical first step toward industrialization of 3D integration based on through-silicon vias (TSVs). The next generation of 3D integration incorporates TSV technology as the primary method of interconnect between the die.

To download this white paper, click here.

Optimizing Test To Enable Diagnosis-Driven Yield Analysis

Thursday, February 21st, 2013

Using diagnosis-driven yield analysis, companies have decreased their time to yield, managed manufacturing excursions, and recovered yield lost to systematic defects. Dramatic time savings and yield gains have been proven using these methods. Companies must plan ahead to take advantage of diagnosis-driven yield analysis. The planning needs to include how and what patterns to generate during ATPG/DFT, what design data to archive, how to optimize the test program, how much data to collect, and what and how much diagnosis to perform. This white paper addresses how to optimize the test environment to enable efficient diagnosis-driven yield analysis.

To download this white paper, click here.