
Posts Tagged ‘SST News’


Picosun and Hitachi MECRALD Process

Friday, February 24th, 2017


By Ed Korczynski, Sr. Technical Editor

A new microwave electron cyclotron resonance (MECR) atomic layer deposition (ALD) process technology has been co-developed by Hitachi High-Technologies Corporation and Picosun Oy to provide commercial semiconductor IC fabs with the ability to form dielectric films at lower temperatures. Silicon oxide and silicon nitride, aluminum oxide and aluminum nitride films have been deposited in the temperature range of 150-200 degrees C in the new 300-mm single-wafer plasma-enhanced ALD (PEALD) processing chamber.

With the device features within both logic and memory chips having been scaled to atomic dimensions, ALD technology has been increasingly enabling cost-effective high volume manufacturing (HVM) of the most advanced ICs. While the deposition rate will always be an important process parameter for HVM, the quality of the material deposited is far more important in ALD. The MECR plasma source provides a means of tunable energy to alter the reactivity of ALD precursors, thereby allowing for new degrees of freedom in controlling final film properties.

The Figure shows the MECRALD chamber—Hitachi High-Tech’s ECR plasma generator integrated with Picosun’s digitally controlled ALD system—from an online video (https://youtu.be/SBmZxph-EE0) describing the process sequence:

1.  first precursor gas/vapor flows from a circumferential ring near the wafer chuck,

2.  first vacuum purge,

3.  second precursor gas/vapor is ionized as it flows down through the ECR zone above the circumferential ring, and

4.  second vacuum purge to complete one ALD cycle (which may be repeated).

Cross-sectional schematic of a new Microwave Electron Cyclotron Resonance (MECR) plasma source from Hitachi High-Technologies connected to a single-wafer Atomic Layer Deposition (ALD) processing chamber from Picosun. (Source: Picosun)
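Because each ALD cycle is self-limiting, film thickness in such a system is controlled digitally, by counting cycles, rather than by timing a deposition. The following Python sketch illustrates that control loop in outline only; the class and method names are invented placeholders rather than the actual Picosun/Hitachi control interface, and the growth-per-cycle value is a made-up example.

class MockChamber:
    """Stand-in for a tool-control interface; the real control API is not public."""
    def pulse_precursor(self, line): print(f"pulse precursor {line}")
    def purge(self):                 print("vacuum purge")
    def ecr_plasma_on(self):         print("ECR plasma on")
    def ecr_plasma_off(self):        print("ECR plasma off")

def peald_film(chamber, growth_per_cycle_nm, target_nm):
    """Repeat the four-step cycle until the target thickness is reached."""
    cycles = 0
    while cycles * growth_per_cycle_nm < target_nm:
        chamber.pulse_precursor(1)   # step 1: first precursor from ring near chuck
        chamber.purge()              # step 2: first vacuum purge
        chamber.ecr_plasma_on()      # step 3: second precursor ionized in ECR zone
        chamber.pulse_precursor(2)
        chamber.ecr_plasma_off()
        chamber.purge()              # step 4: second vacuum purge ends the cycle
        cycles += 1
    return cycles

print(peald_film(MockChamber(), growth_per_cycle_nm=0.05, target_nm=0.2))  # -> 4 cycles

Thickness falls out of the cycle count because each cycle deposits a near-constant increment, which is exactly why ALD trades deposition rate for material quality and step-coverage.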

The development team claims that MECRALD films are superior to other PEALD films in terms of higher density and lower carbon and oxygen contamination (in non-oxides), and also show the excellent step-coverage expected from a surface-driven ALD process. The relatively high density of these films has been confirmed by lower wet etch rates. The single-wafer process non-uniformity on 300mm wafers is claimed at ~1% (1 sigma). The team is now exploring processes and precursors to deposit additional films such as titanium nitride (TiN), tantalum nitride (TaN), and hafnium oxide (HfO2). In an interview with Solid State Technology, a spokesperson from Hitachi High-Technologies explained that, “We are now at the development stage, and the final specifications mainly depend on future achievements.”

The MECR source has been used in Hitachi High-Tech’s plasma chamber for IC conductor etch for many years, and is able to generate a stable high-density plasma at very low pressure (< 0.1 Pa). MECR plasmas provide wide process windows through accurate plasma parameter management, such as plasma distribution or plasma position control. The same plasma technology is also used to control ions and radicals in the company’s dry cleaning chambers.

“I’m really impressed by the continuous development of ALD technology, more than 40 years after its invention,” commented Dr. Tuomo Suntola, the inventor and patent-holder of the atomic layer deposition method in Finland in 1974, and a member of the Picosun board of directors. “Now combining Hitachi and Picosun technologies means (there is) again a major breakthrough in advanced semiconductor manufacturing.”

MECRALD chambers can be clustered on a Picosun platform that features a Brooks robot handler. This technology is still under development, so it’s too soon to discuss manufacturing parameters such as tool cost and wafer throughput.

—E.K.

Vital Control in Fab Materials Supply-Chains – Part 2

Thursday, February 16th, 2017

By Ed Korczynski, Sr. Technical Editor

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation between the following industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jeff Hemphill, Staff Materials R&D Engineer, Intel Corporation,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

FIGURE 1: 2016 CMC Conference expert panelists (from left to right) John Smythe, Jonas Sundqvist, Jeff Hemphill, and Jean-Marc Girard. (Source: TECHCET CA)

KORCZYNSKI:  We heard from David Thompson [EDITOR’S NOTE: Director of Process Chemistry, Applied Materials, presented on “Agony in New Material Introductions - Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD:  Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with them, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both; it’s not assay OR a specification of impurities. It’s a matter of specifying the trace components that really matter once the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL:  Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it and, in a sense, trying to over-specify that. If you identify through mass-spectrometry and other techniques that some fraction of a percent is primarily, say, five different species, then it’s a matter of finding out how to individually monitor, track, and control those as separate parameters. So from a specification point of view what we want is not necessarily the lowest possible numbers, but to expand how many things we’re looking at so that we’re capturing everything that’s there.

KORCZYNSKI:  Is that something that you’re starting to push out to your suppliers?

HEMPHILL:  Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.

KORCZYNSKI:  Is that something that has changed by node? Back when things were simpler say at 45nm and larger, were these aspects of processing that we could safely ignore as ‘noise’ but are now important ‘signals’?

HEMPHILL:  Yes, we certainly didn’t pay as close attention just a couple of generations ago.

KORCZYNSKI:  That seems to lead us to questions about single-sourcing versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that all of the challenges we’ve heard about over the last day-and-a-half of this conference create a greater burden on suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE:  I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So, for example, I need to ask whether a supplier has a plant in the US and a plant in France that make the same thing, so that if something bad happens in one location the material can still be sourced. Or do you have an alternate-supply agreement, so that if you can’t supply it, Company-X will under an agreement that keeps you in control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source but it’s ‘single flexible-source.’

HEMPHILL:  There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the concept of being ‘copy-exactly’ that we use in our factories, we really need production lines to do that, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same, then we have a high degree of confidence that they will be able to produce the same material.

FIGURE 2: Jean-Marc Girard, CTO and Director of R&D of Air Liquide Advanced Materials, provided the supplier perspective. (Source: SEMI)

GIRARD:  I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When the U.S. ports went on strike for a long time there was no way that material could ship out of the U.S. to customers. So, yes, there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines, and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Vital Control in Fab Materials Supply-Chains

Wednesday, January 25th, 2017

By Ed Korczynski, Sr. Technical Editor

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation between the following industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI:  Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD:  For determining the specifications, to some extent it doesn’t matter, because we are out of the world of specs; what matters is the control-limits. To Tim Hendry’s point in the keynote yesterday [EDITOR’S NOTE:  Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability, so that you know the process space in which you can play. That is the most important information for reaching the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.
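Girard’s split-lot idea lends itself to a brief worked illustration. The sketch below is hypothetical: the impurity levels, responses, and acceptance limit are all invented, and a real window exploration would use proper design-of-experiments and statistics, but it shows how deliberately varied containers turn into a data-driven control limit.

# Hypothetical split-lot experiment: containers from one batch are doped
# with deliberately varied levels of one impurity, and the process
# response is measured for each split. All numbers are invented.
splits = {
    0.5: 1.02,   # impurity level (ppm) -> normalized process response
    1.0: 1.01,
    2.0: 1.03,
    4.0: 1.05,
    8.0: 1.31,
    16.0: 2.40,
}

RESPONSE_LIMIT = 1.10  # worst acceptable response (an invented spec)

# The working window is bounded by the highest impurity level that still passes.
window_max = max(ppm for ppm, resp in splits.items() if resp <= RESPONSE_LIMIT)
print(f"process tolerates up to ~{window_max} ppm")

# Set the control limit with margin inside the demonstrated window,
# rather than at whatever purity level a marketing decision chose.
print(f"data-driven control limit: {window_max / 2} ppm")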

FIGURE 1: Jonas Sundqvist, Sr. Scientist of Fraunhofer IKTS, discusses collaboration with industry on application-specific ALD R&D. (Source: TECHCET CA)

SUNDQVIST:  Take the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and slight contamination in the precursors did not matter because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely and film thicknesses have already been scaled down, so over time the specification for high-k precursors has become more important.

SMYTHE:  I think it comes down to the premise that when you are doing vapor transport through a bubbler, some would argue that it’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management, not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires ‘fingerprinting’ to show the same results. The argument is that you do not accept “less-than” as part of a specification; you only accept what it is.

AUDIENCE QUESTION:  The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get absorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash-out a lot of value of having really pure chemicals moving through a delivery system into a chamber and picking up contaminants that you spent a whole lot of money taking out at the point of delivery. What do you think about that?

GIRARD:  Well, this is a ‘crisis!’ When something like this starts to happen in a fab, or even during the development cycles, you can’t prioritize resources and approaches; you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST:  We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts, good process stability, and an ideal thermal processing range, but if the growth rate goes down by 20% you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE:  What we’re doing a lot more these days is chamber finger-printing, where we’re putting a quad-filtered mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you look at your chemistry moving through a delivery line using something like the Schrödinger software, it’s not a big deal to use the mass spec to see some synthesis happening in the line. We joke and call it ‘point of use synthesis’ but it’s not very funny. We are used to having spare delivery lines built-in so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI:  John, since Micron has fabs in Lehi and fabs in Singapore and other places, while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE:  What we really need are a few good ocean-going hydrofoil ships! The most complete answer is that we first identify which things need ‘batch-qual,’ so that if we do a batch-qual in Virginia on material going to Taiwan, we have confidence it will pass batch-qual in Taiwan. There are certain materials for which we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch-qual.

I think most of you are familiar with the concept of ‘ship-to-stock’: when you have enough good statistical history and a good change-management process with the supplier, you can do ship-to-stock and reduce the batch-qual overhead. On a case-by-case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that in concentration above 5 ppm prevented the poly-styrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts, and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push to counting atoms.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Mentor Graphics Joins GLOBALFOUNDRIES FDXcelerator Partner Program

Thursday, December 22nd, 2016

Mentor Graphics Corp. (NASDAQ: MENT) today announced that it has joined GLOBALFOUNDRIES’ FDXcelerator Partner Program. FDXcelerator program partners support customers of GLOBALFOUNDRIES FDX™ technologies by providing a variety of design solutions, including approved design methodology, IP development expertise, hardware/software system integration expertise, and other critical software, services, and support. They participate in FDXcelerator Partner Program events, and receive early access to the GLOBALFOUNDRIES FDX roadmap and associated technology offerings.

“Mentor Graphics is proud to have expanded our long-term relationship with GLOBALFOUNDRIES to include the FDXcelerator Partner Program,” said Joe Sawicki, vice-president and general manager of the Design-to-Silicon division at Mentor Graphics. “We look forward to delivering an enhanced set of solutions to mutual customers in support of GLOBALFOUNDRIES FDX offerings that will enable the development of high quality low-power designs based upon FD-SOI technology.”

Mentor Graphics offerings participating in the FDXcelerator program include:

  • Multiple design implementation solutions from Digital IC Design, including the Oasys-RTL™ floorplanning and synthesis platform and Nitro-SoC™ next-generation place and route platform.
  • The Calibre® platform, including the Calibre DFM tool suite, the most comprehensive set of IC design verification tools in the EDA industry. Calibre tools will be designated as the sign-off tools for FDX across all GLOBALFOUNDRIES design creation flows.
  • The Analog FastSPICE (AFS)™ Platform, the fastest, most accurate, and highest capacity simulation for nanometer-scale circuits, and the Eldo® Platform, the most advanced circuit verification for analog-centric circuits. Collaboration with GLOBALFOUNDRIES includes device and circuit level certification for 22FDX, and support of reference flows for 22FDX.
  • The Tessent® product suite of comprehensive silicon test and yield analysis solutions includes a full design for test reference flow for 22FDX designs, and provides the industry’s highest test quality, lowest test cost, and fastest time to root cause of test failures.

“We are very pleased that Mentor Graphics has joined our FDXcelerator Partner Program,” said Alain Mutricy, senior vice president of product management at GLOBALFOUNDRIES. “The combination of Mentor’s EDA offerings and our FDX technologies provide customers with the solutions that will enable success in delivering products for today’s highly competitive IC markets.”

Linde Korea acquires Air Liquide Korea’s industrial merchant and electronics on-site and liquid bulk air gases business

Thursday, December 15th, 2016

Linde Korea, a member of The Linde Group, today announced that it has completed the takeover of Air Liquide Korea’s industrial merchant and electronics on-site and liquid bulk air gases business in South Korea. The ten sites under this agreement complement Linde’s existing presence and offerings in the country. In addition, the acquisition of the direct bulk business is a natural fit with Linde’s strategy of growing its local direct bulk supply network and customer base. The agreement underscores Linde’s focus on serving the demands for industrial air gas products in the electronics, chemicals and manufacturing industries.

Sanjiv Lamba, Chief Operating Officer for Asia Pacific and Member of the Executive Board of Linde AG, said “I am delighted that we have concluded the acquisition of Air Liquide’s industrial merchant and electronics on-site and liquid bulk air gases business in South Korea. The acquired industrial merchant and electronics on-site facilities will further strengthen our existing extensive network of sites and customer density in South Korea, and support the growth intentions of major markets, particularly in the electronics sector. The acquisition is part of our strategy of delivering long-term sustainable profits in key markets in the region, and complements the recent investments we made in enhancing our R&D capabilities in Asia.”

Steven Fang, Regional Business Unit Head, East Asia, The Linde Group, said “Our track record of investments in South Korea underscores our long-term commitment to expand our business in the region. Our investments also reaffirm our commitment to key customers, including Korean conglomerates such as Samsung, LG, Lotte Chemical and SK Hynix, to support their growth plans, in South Korea and worldwide.”

Under this agreement, Linde Korea has completed takeover of Air Liquide Korea’s industrial merchant and electronics on-site and liquid bulk air gases business in South Korea. It includes the transfer of the related operating sites for the on-site plants as well as tanks and related equipment for liquid storage. In addition, the associated customer contracts have been transferred to Linde Korea, together with Air Liquide Korea employees who will continue to operate the plants and service customers.

Linde Korea first established its operations in Pohang in 1988. Over the past 30 years, it has continuously expanded its product and services portfolio, and footprint across the country. In the last 10 years alone, Linde Korea has invested over EUR 300 million in industrial gases production facilities and equipment, contributing to the country’s industrial growth and economic success. It includes the production facilities in Seosan and Giheung to produce high purity industrial gases, and its investment in the joint venture PSG, a leading distributor of merchant and packaged industrial gases in South Korea.

Mentor Graphics Signs Agreement with ARM to Accelerate Early Hardware/Software Development

Wednesday, November 16th, 2016

Mentor Graphics Corporation (NASDAQ: MENT) has signed a multiyear license agreement with ARM to gain early access to a broad range of ARM Fast Models, Cycle Models, and related technologies. Mentor will have access to all ARM Fast Models for the ARMv7 and ARMv8 architectures across all ARM Cortex-A, Cortex-R, and Cortex-M cores, GPUs, and System IP, in addition to engineering collaboration on further optimizations. This builds on agreements already in place to ensure that the validation of ARM models is completed ahead of mutual customer demand.

“Our collaboration with Mentor has resulted in one of ARM’s broadest modeling partnerships,” said Javier Orensanz, general manager, development solutions group, ARM. “With this agreement, our mutual customers can utilize ARM’s entire model portfolio to speed system execution and debug issues with complete accuracy.”

As a result of this agreement, ARM Fast Models can be combined with the Veloce emulation platform, for example, to enable faster verification and earlier software development. Moving the modeling of the CPU and GPU out of the emulator and into the ARM Fast Models allows software execution performance orders of magnitude faster than a traditional approach that relies on a complete RTL description to be ready. This enables software tasks to be executed quickly, such as Android boots and application execution. Verification teams can now validate more than just boot code and drivers. They can also run complete software stacks to exercise the system in a realistic manner and flush out hard-to-find bugs, which would otherwise have gone undetected until physical prototypes were available.

“This second agreement with ARM clearly indicates our strategic alignment toward providing a complete HW/SW development platform,” said Brian Derrick, vice president of marketing, Mentor Graphics. “Our mutual customers benefit from early access and validation of state-of-the-art Mentor technology working with the most current ARM models.”

Elusive Analog Fault Simulation Finally Grasped

Tuesday, September 27th, 2016


By Stephen Sunter, Mentor Graphics

The test time per logic gate in ICs has greatly decreased in the last 20 years, thanks to scan-based design-for-test (DFT), automatic test pattern generation (ATPG) tools, and scan compression. But for analog circuits, test time per transistor has not decreased at all. And to make matters worse, the test time for the analog portion of an IC can dominate total test time. A new approach is needed for analog tests to achieve higher coverage in less time, or to improve defect tolerance.

FIGURE: (Source: ON Semiconductor)

Analog designers and test engineers do not have DFT tools comparable to those used by their digital counterparts. It has been difficult to improve the number of defective parts per million (DPPM) because it has been too challenging to measure defect coverage; DPPM is instead gauged by the rate of customer returns, which can occur months after the ICs are tested.

Analog fault simulation has only been discussed in academic papers and, recently, in a few industrial papers that describe proprietary software. Why haven’t the analog fault simulation techniques described in all those papers led to commercially-available fault simulators that are used in industry? Mostly because there is no industry-accepted analog fault model, and simulating all potential faults requires an impractically long time.

Potential Solutions for Reducing Simulation Time

Many methods for reducing simulation time have been proposed over the years in published papers, including:

  • Simulate only shorts and opens in the schematic netlist without variations;
  • Analyze a circuit’s layout to find the shorts and opens that can actually occur (and the likelihood of those defects occurring);
  • Simulate only in the AC domain;
  • Simulate the sensitivities of each tested performance to variations in each circuit element;
  • Use a simplified, time domain simulation to measure the impact of injected shorts and opens on output signals, only within a few clock cycles;
  • Measure analog toggle coverage.

Even if these techniques were very efficient and reduced simulation time dramatically, the large number of defects simulated would mean that the number of undetected defects to diagnose would be large. For example, if there were 100,000 potential faults in a circuit and 90% were detected, there would be 10,000 undetected faults to investigate. Analyzing each defect is a very time-consuming task that requires detailed knowledge of the circuit and tests. Therefore, reducing the number of defects simulated can save a lot of time, in multiple ways. The methods to reduce the number of defects include:

  • Randomly select defects from a list of all potential defects;
  • Randomly select defects, after grouping them according to defect likelihoods;
  • Select only principal parameters of the circuit elements, such as voltage, gate length, width, and oxide thickness;
  • Select representative defects based on circuit analysis.

Potential Standard Analog Fault Models

Currently, there is no accepted analog fault model standard in the industry. Proposals such as simulating only short and open defects and simulating defective variations in circuit elements or in high-level models have been rejected. Because of the lack of a standard, a group of about a dozen companies (including Mentor Graphics) has been meeting regularly since mid-2014 to develop such a fault model. The group has reported their progress publicly several times, and hopes to develop an IEEE standard by 2018.

The Tessent DefectSim Solution

Tessent® DefectSim™ incorporates lessons learned from all previous approaches, combining the best aspects of each while avoiding their pitfalls. A variety of techniques together reduce total simulation time by many orders of magnitude compared to some of the previous approaches, without introducing a new simulator, reducing existing simulator accuracy, or restricting the types of tests. The analog defect models can be shorts and opens, just variations, or both; users can also substitute their own proprietary defect models. The defects can be injected at the schematic level, at the layout level, or a combination of both.

To be realistic, defects should be injected in a layout-extracted netlist. But higher-level netlist descriptions or hardware description language (HDL) models, such as Verilog-A or Verilog RTL, can reduce simulation time by one or two orders of magnitude. In practice, the highest level netlist of a subcircuit is often just its schematic; nevertheless, it typically simulates an order of magnitude faster than the layout-extracted netlist. DefectSim runs Eldo® when the circuit contains only SPICE and Verilog-A models, and Questa® ADMS™ when Verilog-AMS or RTL models are also used.

DefectSim introduces a new statistical technique called likelihood-weighted random sampling (LWRS) to minimize the number of defects to simulate. This new technique uses stratified random sampling in which each stratum contains only one defect. The likelihood of randomly selecting each defect is proportional to the likelihood of the defect occurring. Each likelihood of occurrence is computed based on designer-provided global parameters, and parameters of each circuit element.

For example, shorts are the most common defect type: in state-of-the-art production processes, shorts are 3 to 10 times more likely than opens. When the range of defect likelihoods is large, as it is for mixed-signal circuits, LWRS requires up to 75% fewer samples than simple random sampling (SRS) for a given confidence interval (the variation in an estimate that would occur if the random sampling were done many times). In practice, when coverage is 90% or higher, this means that it is usually sufficient to simulate a maximum of 250 defects, regardless of the circuit size or the number of potential defects, to estimate coverage within 2.5% at a 99% confidence level. Simulating as few as one hundred defects is sufficient to get ±4% estimate precision. For small circuits, or when time permits, all defects can be simulated.
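The article does not disclose DefectSim’s estimator, so the following is only a minimal sketch of the underlying idea: defects are drawn with probability proportional to their likelihood of occurrence, each draw stands in for one fault simulation, and the detected fraction estimates likelihood-weighted coverage. For brevity this variant samples with replacement and uses a normal-approximation interval, so it omits the one-defect-per-stratum structure that gives LWRS its variance advantage over SRS; all defect data is invented.

import math
import random

# Invented population: (likelihood of occurrence, detected-by-tests?).
# In real use, likelihoods come from layout and element parameters, and
# the detected flag comes from simulating tests on a defect-injected netlist.
random.seed(1)
population = [(random.choice([10.0, 3.0, 1.0]),  # e.g. shorts weighted as more likely
               random.random() < 0.92)           # "true" detectability, unknown in practice
              for _ in range(100_000)]
likelihoods = [l for l, _ in population]
detected = [d for _, d in population]

# The quantity being estimated: likelihood-weighted defect coverage.
true_cov = sum(l for l, d in population if d) / sum(likelihoods)

def lwrs_estimate(n):
    """Draw n defects with probability proportional to likelihood."""
    draws = random.choices(range(len(population)), weights=likelihoods, k=n)
    p_hat = sum(detected[i] for i in draws) / n        # each draw = one simulation
    half = 2.576 * math.sqrt(p_hat * (1 - p_hat) / n)  # 99% normal-approx interval
    return p_hat, half

p_hat, half = lwrs_estimate(250)
print(f"true weighted coverage:    {true_cov:.3f}")
print(f"estimate from 250 samples: {p_hat:.3f} +/- {half:.3f}")

Because the draw probabilities already carry the likelihood weighting, the plain detected fraction is an unbiased estimate of weighted coverage, and the sample size needed depends on the desired interval rather than on the number of potential defects.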

DefectSim allows you to combine almost all of the previously-published techniques for reducing simulation time, including random sampling, high-level modeling, stop-on-detection, AC mode, and parallel simulation. All together, these techniques can reduce simulation time by up to six orders of magnitude compared to simulating the production test of all potential defects in a flat, layout-extracted netlist. The same techniques can be applied to the measurement of defect tolerance.

For more information about Tessent DefectSim, read the whitepaper at:
https://www.mentor.com/products/silicon-yield/resources/overview/part-1-analog-fault-simulation-challenges-and-solutions-f9fd7248-3244-4bda-a7e5-5a19f81d7490?cmpid=10167

Mentor Graphics Extends Offering to Support TSMC 7nm and 16FFC FinFET Process Technologies

Wednesday, September 21st, 2016

Mentor Graphics Corp. (NASDAQ: MENT) today announced further enhancements and optimizations for various products within the Calibre Platform and Analog FastSPICE (AFS) Platform, as well as the completion of further certifications and reference flows for Taiwan Semiconductor Manufacturing Company (TSMC) 16FFC FinFET and 7nm FinFET processes. Moreover, the Calibre offering has been extended on additional established TSMC processes in support of the growing Internet of Things (IoT) design market requirements.

The AFS Platform, including AFS Mega simulation, has been certified for the TSMC 16FFC FinFET and the TSMC 7nm FinFET process technologies through TSMC’s SPICE Simulation Tool Certification Program. The AFS Platform supports TSMC design platforms for mobile, HPC, automotive, and IoT/wearables. Analog, mixed-signal, and RF design teams at leading semiconductor companies worldwide will benefit from using Analog FastSPICE to efficiently verify their chips designed in 16FFC and 7nm FinFET technologies.

Mentor’s Calibre xACT™ extraction offering is now certified for the TSMC 16FFC FinFET and the TSMC 7nm FinFET process technologies. Calibre xACT extraction leverages its built-in deterministic fast field-solver engine to deliver needed accuracy around three-dimensional FinFET devices and local interconnect. Its scalable multiprocessing delivers sufficient punch for large leading-edge digital designs. In addition, both companies continue extraction collaboration in established process nodes, with additional corner variation test cases and tighter criteria to ensure tool readiness for IoT applications.

The Calibre PERC™ reliability platform has also been enhanced to enable TSMC 7nm customers to run point-to-point resistance checks at full chip. This greater capacity allows customers to quickly analyze interconnect robustness at all levels (IP, block, and full chip) while verifying lower resistance paths on critical electrostatic discharge (ESD) circuitry, helping ensure long-term chip reliability. Likewise, Calibre Multi-Patterning functionality has been enhanced for 7nm, including new analysis, graph reduction and visualization capabilities which are essential to customers designing and debugging this completely new multi-patterning technique.

The Calibre YieldEnhancer ECOFill solution, initially developed for 20nm, has now been extended to all TSMC process nodes from 7nm to 65nm. Designers at all process nodes will now be able to minimize fill runtimes, manage fill hierarchy, and minimize shape removal when implementing changes to the initial design.

Mentor’s Nitro-SoC P&R platform has also been enhanced to support advanced 7nm requirements, such as floorplan boundary cell insertion, stacking via routing, M1 routing and cut-metal methodology, tap cell insertion and swapping, and ECO flow methodology. Certification of the flow integration of these N7 features is ongoing. For 16FFC, the needed tool features have been validated by TSMC, and Mentor is optimizing its correlation with sign-off analysis.

“Today’s chip design teams are looking at different process nodes to implement their complete solution,” said Joe Sawicki, vice president and general manager of Mentor Graphics Design-to-Silicon Division. “By working with TSMC, Mentor is able to provide mutual customers with a single solution that is not only certified, but also includes the latest tool capabilities, for whichever TSMC process node they choose.”

“TSMC’s long-standing collaboration with Mentor Graphics enables both companies to work together effectively to identify new challenges and develop innovative solutions across all process nodes,” said Suk Lee, TSMC senior director, Design Infrastructure Marketing Division. “The Mentor Analog FastSPICE Platform, AFS Mega, and Calibre xACT tools have successfully met the accuracy and compatibility requirements for 16FFC and 7nm FinFET technologies. That certification, along with the Calibre Platform’s provision of fast, accurate physical verification, and extraction solutions critical to 7nm, ensures mutual customers they have access to EDA tools that are optimized for the newest process technologies.”

Mentor Graphics Veloce Emulation Platform Used by Starblaze for Verification of SSD Enterprise Storage Design

Wednesday, September 21st, 2016

Mentor Graphics Corporation (NASDAQ: MENT) today announced that the Veloce® emulation platform was successfully used by Starblaze Technology for a specialized high-speed, enterprise-based Solid State Drive (SSD) storage design.

Starblaze performed a detailed and lengthy analysis of the available solutions in the emulation market.  The Veloce emulation platform was selected and deployed because of its superior virtualization technology and memory protocol support, rich software debug capabilities and proven track record delivering innovative emulation technology.

“The enterprise SSD market is evolving rapidly, so the SoC (System on a Chip) verification technology we use has to be perfectly aligned with our needs, especially in terms of flexibility and high-performance protocol support,” said Sky Shen, CEO of Starblaze Technology. “After using the Veloce emulation platform on our latest high-performance, enterprise SSD controller project, we are convinced that a virtual solution with extensive software debug capability is the trend for the future of emulation technology.”

In the SSD storage space, it is extremely important for design teams to study the architecture and tune the performance while finding deep hardware bugs in the pre-silicon stage. Starblaze used VirtuaLAB PCIe to provide the host connection to their design on the Veloce emulation platform. VirtuaLAB PCIe delivers very high debug productivity, and Starblaze was able to use its Software Design Kit “as is” without any modification or adaptation. In addition to using Veloce VirtuaLAB, Starblaze used Mentor’s Codelink® software debug capability to support the requirements of their embedded core software debug. On the flash interface side, the Veloce platform provides both HW and SW sparse memory solutions, which permits the necessary tradeoffs in the storage application.

ICE and Virtual:  Complementary Technologies

With the Veloce Emulation platform, verification teams have access to the best of both worlds, whether using an ICE-based or virtual emulation environment.  In-circuit emulation (ICE), a foundational emulation use model, remains a ‘must have’ for SoC designs that need to connect to real devices or custom hosts where physical hardware is required. The Veloce iSolve™ library offers a full complement of hardware components to build a robust ICE-based flow.

As more verification teams move from an ICE-based flow to a virtual flow, the Veloce emulation platform provides a smooth transition.  The Veloce Deterministic ICE App complements ICE by eliminating the non-deterministic nature of ICE and enabling advanced verification techniques: debug, power analysis, coverage closure, and software debug.

Full virtualization is achieved with the Veloce VirtuaLAB environment, which delivers virtual ICE-equivalent, high-speed host protocols and memory devices, allowing for greater flexibility for hardware/software system-level debug, power analysis, and system performance analysis.

“The Veloce emulation platform continues to deliver a comprehensive and robust emulation platform to a broad set of markets that all have unique challenges,” said Eric Selosse, vice president and general manager of the Mentor Emulation Division. “With Starblaze’s expertise in Flash Controller and SoC design, they quickly recognized the benefits of our VirtuaLAB solution.  Our success in working with them is attributed to our in-depth knowledge of the power of a virtual solution, and our timely support in deploying the Veloce emulation platform to meet their specific needs.”

About the Veloce Emulation platform

The Veloce emulation platform uses innovative software, running on powerful, qualified hardware and an extensible operating system, to target design risks faster than hardware-centric strategies. Now considered among the most versatile and powerful of verification tools, emulation greatly expands the ability of project teams to do hardware debugging, hardware/software co-verification or integration, system-level prototyping, low-power verification and power estimation and performance characterization.

The Veloce emulation platform is a core technology in the Mentor® Enterprise Verification Platform™ (EVP) – a platform that boosts productivity in ASIC and SoC functional verification by combining advanced verification technologies in a comprehensive platform. The Mentor EVP combines Questa® advanced verification solutions, the Veloce emulation platform, and the Visualizer™ debug environment into a globally accessible, high-performance datacenter resource. The Mentor EVP features global resource management that supports project teams around the world, maximizing both user productivity and total verification return on investment.


About Mentor Graphics

Mentor Graphics Corporation is a world leader in electronic hardware and software design solutions, providing products, consulting services and award-winning support for the world’s most successful electronic, semiconductor and systems companies. Established in 1981, the company reported revenues in the last fiscal year of approximately $1.18 billion. Corporate headquarters are located at 8005 S.W. Boeckman Road, Wilsonville, Oregon 97070-7777. http://www.mentor.com.

Solid State Watch: August 19-25, 2016

Tuesday, August 30th, 2016