
Posts Tagged ‘SST News’


Mentor Graphics Team Receives the Harvey Rosten Award for Thermal Heatsink Optimization Methodology

Thursday, March 23rd, 2017

Mentor Graphics Corporation (NASDAQ: MENT) today announced that Dr. Robin Bornoff, Dr. John Parry and John Wilson, a team from Mentor Graphics Mechanical Analysis Division, received the Harvey Rosten Award for Excellence in thermal modeling and analysis of electronics. The team received the award for their paper, “Subtractive Design: A Novel Approach to Heatsink Improvement,” presented at the 33rd annual IEEE Thermal Measurement, Modeling and Management Symposium (SEMI-THERM) in San Jose, California.

The Mentor Graphics team received the 2017 Harvey Rosten Award for excellence in thermal modeling and analysis in electronics at the SEMI-THERM symposium in San Jose, CA. The recipients (from left to right), Robin Bornoff, John Parry and John Wilson, were honored for their technical paper on a unique heatsink optimization methodology.

The Mentor Graphics team created a unique methodology that sequentially removes underperforming portions of a heatsink to save weight and cost without compromising overall thermal performance. The method, which uses Mentor Graphics® FloTHERM® technology, provides a variety of automated optimization approaches that give deeper insight into a design’s thermal characteristics and help determine the best thermal design solution.
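The announcement describes the approach only at a high level. As a rough illustration of what a subtractive loop can look like, the sketch below greedily removes the heatsink cell whose absence hurts thermal performance least, stopping once any further removal would exceed the thermal budget. The toy `thermal_resistance` function is a hypothetical stand-in for a full CFD solve (one solver run per candidate geometry); none of this is Mentor's actual implementation.

```python
# Minimal sketch of a greedy "subtractive design" loop for a heatsink.
# thermal_resistance() is a toy stand-in for a CFD/thermal solve; in practice
# each call would be one solver run (e.g., a FloTHERM simulation).

def thermal_resistance(cells) -> float:
    """Toy model: more material (fin cells) gives lower junction-to-ambient resistance (K/W)."""
    return 0.5 + 10.0 / (1 + len(cells))

def subtractive_design(cells, r_budget):
    """Remove material cell by cell while the design stays within the thermal budget."""
    design = set(cells)
    while True:
        best_cell, best_r = None, None
        for cell in design:                              # try removing each remaining cell
            r = thermal_resistance(design - {cell})
            if r <= r_budget and (best_r is None or r < best_r):
                best_cell, best_r = cell, r              # least-harmful removal so far
        if best_cell is None:                            # nothing more can be removed
            return design
        design.remove(best_cell)                         # commit the removal and repeat

full_fin_array = {(i, j) for i in range(10) for j in range(10)}   # 100 candidate fin cells
trimmed = subtractive_design(full_fin_array, r_budget=0.75)
print(f"kept {len(trimmed)} of {len(full_fin_array)} cells")
```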

“We are extremely proud of our team for their commitment and dedication in discovering ways in which heatsinks may be optimized,” stated Roland Feldhinkel, general manager of Mentor Graphics Mechanical Analysis Division. “To be recognized by the selection committee comprised of highly esteemed thermal experts is a tremendous honor, and particularly personal since this award is named after the co-founder of Flomerics, which Mentor Graphics acquired in 2008.”

The Harvey Rosten Award for Excellence was established by the family and friends of Harvey Rosten, who led the development of PHOENICS, the world’s first commercial general-purpose computational fluid dynamics (CFD) software, while working at CHAM, and who co-founded Flomerics (now a division of Mentor Graphics Corporation). The award commemorates Rosten’s achievements in the thermal analysis of electronics equipment and the thermal modeling of electronic parts and packages, and aims to encourage innovation and excellence in these fields.

2017 Harvey Rosten Award Recipients

Dr. Robin Bornoff is a market development manager in Mentor Graphics Mechanical Analysis Division.  Robin was previously an application and support engineer, and a product marketing manager, specializing in the application of CFD to electronics cooling and the design of the built environment. He attained a mechanical engineering degree from Brunel University in 1992 followed by a PhD in 1995 for computational fluid dynamics (CFD) research.

John Wilson is currently the electronics product specialist for Mentor Graphics Mechanical Analysis Division.  John previously managed the engineering design services team, where he gained extensive experience in IC package-level test and analysis correlation, heatsink optimization and compact model development. He joined Mentor Graphics in 1999 after receiving his BS and MS in mechanical engineering from the University of Colorado at Denver.

Dr. John Parry is the electronics industry manager for Mentor Graphics Mechanical Analysis Division, which he joined when it was founded as Flomerics in 1989.  He attained a chemical engineering degree from Leeds University in 1982 and a PhD in 1988. His expertise includes compact modeling of fans, IC and LED packages, heatsinks, DoE and optimization methods, and thermal characterization, with over 75 published technical articles. John is a member of JC15 and past chair of SEMI-THERM.

Linde Invests Over EUR 110M in China to Strengthen Position as Supplier of Choice for Electronics Manufacturers

Wednesday, March 22nd, 2017

Gases and engineering company The Linde Group, through its electronics gases joint venture in China, Linde LienHwa, is expanding its commitment to China and the Asia Pacific region through investments of over EUR 110 million. The capital is being allocated for new on-site gas production facilities in major electronics manufacturing clusters in the eastern and central provinces of China. These investments with new and established customers will support multiple long-term contracts to provide electronics gases to leading-edge foundry, memory and flat panel display fabs.

Sanjiv Lamba, Member of the Executive Board of Linde AG and Chief Operating Officer for Asia Pacific, said, “These significant capital investments underscore Linde’s continued commitment to our business in Asia Pacific in general, and China, in particular, and build upon earlier investments and capabilities in the region, including the recent start-up of our state-of-the-art R&D center in Taichung, Taiwan. Asia will continue to be a growth driver for Linde and we will continue to invest in Asia.”

Stan Tang, President and General Manager of Linde LienHwa in China added, “Linde’s over EUR 110 million in new on-site plant investments demonstrates our commitment to the rapidly developing Chinese electronics manufacturing sector. The supply contracts that Linde has secured in China validate our customers’ confidence in the safety, quality and reliability of our gases supply and systems.”

SEMI (Semiconductor Equipment and Materials International), the global trade association that represents the electronics industry, estimates that more than 50 percent of new semiconductor fab investments in the next few years will be in China. China has made a large commitment to the electronics industry through the National IC Industry Investment Fund, more commonly known as The Big Fund, where it has pledged around EUR 20 billion from 2014 through 2017 to build the semiconductor industry in China. An additional EUR 82 billion is expected to be added from private equity funds and local governments.

Linde LienHwa, together with Linde’s Engineering Division, will design and construct these facilities. Linde SPECTRA-N® nitrogen generators offer the highest level of operational efficiency, enabling the lowest cost of ownership and a reduced environmental footprint. The projects include multiple gaseous nitrogen plants, with a combined capacity of over 110,000 Nm3/hr (normal cubic meters per hour), plus several other bulk gas supply systems. All the plants will be on stream by the end of 2017.

Linde and its joint venture partners in China currently deliver gases solutions and systems to more than a dozen electronic production facilities across the major segments of the electronics industry, including those in semiconductor, display, solar and LED. Linde is also committed to meeting the electronic special gas (ESG) needs of its growing Chinese customer base. For example, Linde produces bulk amounts of key ESGs like ammonia (NH3) and nitrous oxide (N2O) in China, South Korea and Taiwan to ensure local supply and regional supply chain security.

Linde Electronics, the global electronics business of The Linde Group, supplies the world’s largest semiconductor manufacturers in Taiwan, Korea and the US, and is securing a leading position in China with international and domestic manufacturers. Linde Electronics is committed to building an infrastructure of specialty gas capabilities and co-investment partnerships in China.

Mentor Graphics Launches Unique Xpedition PCB Systems Vibration and Acceleration Simulation Solution

Monday, March 6th, 2017

Mentor Graphics Corporation (NASDAQ: MENT) today announced its new Xpedition® vibration and acceleration simulation product for printed circuit board (PCB) systems reliability and failure prediction. Developed by the industry’s market-share leader in PCB design software, the Mentor Graphics® Xpedition product augments mechanical analysis and physical testing by introducing virtual accelerated lifecycle testing much earlier in the design process. This is the industry’s first PCB-design-specific vibration and acceleration simulation solution targeting products where harsh environments can compromise product performance and reliability, including the military, aerospace, automotive and industrial markets.

Traditional, physical HALT (highly accelerated lifecycle testing) is conducted just before volume manufacturing, and requires expert technicians, which can result in costly schedule delays. Bridging mechanical and electronic design disciplines, the Xpedition product provides vibration simulation significantly faster than any existing method. This results in increased test coverage and shortened design cycles to ensure product reliability and faster time to market.

The Xpedition component modeling library is the most extensive in the industry, comprising over 4,000 unique 3D solid models used to create highly defined parts for simulation. The 3D library allows users to easily match geometries to their 2D cell database. Designers can assemble the part models on the board and automatically mesh them for performance analysis, including stiffeners and mechanical parts. The system modeling tool is ultra-fast, modeling over 1,000 components per minute.

Based on years of experience, the Xpedition technology provides an easy-to-use, automated environment leveraging a finite-element engine developed for quick, accurate analyses. Unlike other tools, the technology is optimized for the PCB layout designer, enabling simulation and improved redesign at the desktop. The intuitive pre-processor and wizard allow users to set up simulations quickly and easily, for fast and accurate virtual prototyping. The easy-to-use, patented post-processor technology lets designers quickly see high-failure-probability components and analyze boundary conditions, material properties, and environment profiles.

“Tech-Clarity research shows that higher quality and reliability have become top ways companies are trying to differentiate their products. With products becoming increasingly complex, engineers need better ways to efficiently improve product quality, without adding cost,” stated Michelle Boucher, vice president of research, Tech-Clarity. “Extending the virtual prototyping capabilities can be an important way to mitigate risk associated with product performance and reliability. Simulation capabilities such as what is available in Mentor’s Xpedition, can help companies catch problems earlier to improve quality, while saving time and cost by reducing physical tests.”

The Xpedition product can also perform acceleration stress simulation for specialized applications. This feature provides safety factor simulation for constant acceleration conditions, pin-level von Mises stress, detailed stress and deformation plots, and three-axis user-defined force vector (X, Y, Z) simulation. These features are developed for safety-critical applications such as those targeting the mil-aero markets.
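For readers unfamiliar with the quantities involved, the short sketch below shows how a pin-level safety factor against a constant-acceleration load is typically reported: the von Mises equivalent stress is computed from the six stress-tensor components and divided into the material's yield strength. This is textbook mechanics with invented numbers, not the Xpedition implementation.

```python
# Illustrative only: safety factor = yield strength / von Mises equivalent stress.
from math import sqrt

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """von Mises equivalent stress from the six stress-tensor components (MPa)."""
    return sqrt(sx*sx + sy*sy + sz*sz
                - sx*sy - sy*sz - sz*sx
                + 3.0*(txy*txy + tyz*tyz + tzx*tzx))

def safety_factor(stress_components, yield_strength_mpa):
    """Safety factor at a pin: material yield strength over equivalent stress."""
    return yield_strength_mpa / von_mises(*stress_components)

# Hypothetical solder-joint stress components under a constant Z-axis acceleration load
print(f"safety factor = {safety_factor((12.0, 8.0, 3.0, 2.0, 1.0, 0.5), 35.0):.2f}")
```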

“As leaders in their respective fields, Mentor customers constantly endeavor to implement strategic initiatives around risk mitigation through design-for-reliability solutions,” stated AJ Incorvaia, vice president and general manager of Mentor Graphics Board Systems Division. “Our new and patented Xpedition technology serves the PCB systems design community with an automated, fast, easy-to-use solution that simulates vibration in accelerated life cycle conditions. This enables customers to quickly identify performance issues before products are committed to prototype and manufacture, thus saving time and cost while ensuring end-product reliability.”

Xpedition Vibration and Acceleration Solution – Key Features

  • Simulate during the design process to determine PCB reliability and reduce field failure rates.
  • Detect components on the threshold of failure that would be missed during physical testing.
  • Analyze pin-level von Mises stress and deformation to determine failure probability and safety factors.
  • Automated simulation setup, significantly faster than existing methods.

Picosun and Hitachi MECRALD Process

Friday, February 24th, 2017


By Ed Korczynski, Sr. Technical Editor

A new microwave electron cyclotron resonance (MECR) atomic layer deposition (ALD) process technology has been co-developed by Hitachi High-Technologies Corporation and Picosun Oy to provide commercial semiconductor IC fabs with the ability to form dielectric films at lower temperatures. Silicon oxide and silicon nitride, aluminum oxide and aluminum nitride films have been deposited in the temperature range of 150-200 degrees C in the new 300-mm single-wafer plasma-enhanced ALD (PEALD) processing chamber.

With the device features within both logic and memory chips having been scaled to atomic dimensions, ALD technology has been increasingly enabling cost-effective high volume manufacturing (HVM) of the most advanced ICs. While the deposition rate will always be an important process parameter for HVM, the quality of the material deposited is far more important in ALD. The MECR plasma source provides a means of tunable energy to alter the reactivity of ALD precursors, thereby allowing for new degrees of freedom in controlling final film properties.

The Figure shows the MECRALD chamber— Hitachi High-Tech’s ECR plasma generator is integrated with Picosun’s digitally controlled ALD system—from an online video (https://youtu.be/SBmZxph-EE0) describing the process sequence:

1.  first precursor gas/vapor flows from a circumferential ring near the wafer chuck,

2.  first vacuum purge,

3.  second precursor gas/vapor is ionized as it flows down through the ECR zone above the circumferential ring, and

4.  second vacuum purge to complete one ALD cycle (which may be repeated).

Cross-sectional schematic of a new Microwave Electron Cyclotron Resonance (MECR) plasma source from Hitachi High-Technologies connected to a single-wafer Atomic Layer Deposition (ALD) processing chamber from Picosun. (Source: Picosun)
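To make the four-step sequence above concrete, here is a minimal, generic sequencer in Python. The pulse, purge and plasma helpers are hypothetical placeholders that simply print and wait; they are not the Picosun or Hitachi High-Tech control software, and the step times and cycle count are invented.

```python
# Generic sketch of one MECRALD cycle as described in the article (illustrative only).
import time

def pulse(gas: str, seconds: float):
    print(f"flow {gas} for {seconds}s"); time.sleep(seconds)

def purge(seconds: float):
    print(f"vacuum purge for {seconds}s"); time.sleep(seconds)

def plasma_pulse(gas: str, seconds: float):
    print(f"ionize {gas} through the ECR zone for {seconds}s"); time.sleep(seconds)

def mecrald_deposition(cycles: int):
    for _ in range(cycles):                 # each cycle deposits roughly one monolayer
        pulse("precursor A", 0.1)           # 1. first precursor from the circumferential ring
        purge(0.5)                          # 2. first vacuum purge
        plasma_pulse("precursor B", 0.2)    # 3. second precursor ionized in the ECR zone
        purge(0.5)                          # 4. second vacuum purge completes the cycle

mecrald_deposition(cycles=3)                # in practice repeated until the target thickness
```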

The development team claims that MECRALD films are superior to other PEALD films in terms of higher density and lower contamination by carbon and oxygen (in non-oxides), and also show excellent step-coverage, as would be expected from a surface-driven ALD process. The relatively high density of these films has been confirmed by lower wet etch rates. The single-wafer process non-uniformity on 300mm wafers is claimed at ~1% (1 sigma). The team is now exploring processes and precursors to be able to deposit additional films such as titanium nitride (TiN), tantalum nitride (TaN), and hafnium oxide (HfO2). In an interview with Solid State Technology, a spokesperson from Hitachi High-Technologies explained that, “We are now at the development stage, and the final specifications mainly depend on future achievements.”
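A note on the ~1% (1 sigma) figure: within-wafer non-uniformity is conventionally reported as the standard deviation of a multi-point thickness map divided by its mean. The sketch below uses an invented nine-point map purely to show the arithmetic; it is not data from the article.

```python
# Conventional "1 sigma" within-wafer non-uniformity: stdev / mean, in percent.
from statistics import mean, stdev

def non_uniformity_1sigma(thicknesses_nm):
    return 100.0 * stdev(thicknesses_nm) / mean(thicknesses_nm)

site_map = [10.02, 9.98, 10.05, 9.95, 10.01, 9.99, 10.03, 9.97, 10.00]  # nm, hypothetical 9-point map
print(f"{non_uniformity_1sigma(site_map):.2f}% (1 sigma)")
```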

The MECR source has been used in Hitachi High-Tech’s plasma chamber for IC conductor etch for many years, and is able to generate a stable high-density plasma at very low pressure (< 0.1 Pa). MECR plasmas provide wide process windows through accurate plasma parameter management, such as plasma distribution or plasma position control. The same plasma technology is also used to control ions and radicals in the company’s dry cleaning chambers.

“I’m really impressed by the continuous development of ALD technology, after more than 40 years since the invention,” commented Dr. Tuomo Suntola, the inventor and patent-holder of the atomic layer deposition method (Finland, 1974) and a member of the Picosun board of directors. “Now combining Hitachi and Picosun technologies means (there is) again a major breakthrough in advanced semiconductor manufacturing.”

MECRALD chambers can be clustered on a Picosun platform that features a Brooks robot handler. This technology is still under development, so it’s too soon to discuss manufacturing parameters such as tool cost and wafer throughput.

—E.K.

Vital Control in Fab Materials Supply-Chains – Part 2

Thursday, February 16th, 2017

By Ed Korczynski, Sr. Technical Editor

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation among these industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jeff Hemphill, Staff Materials R&D Engineer, Intel Corporation,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

FIGURE 1: 2016 CMC Conference expert panelists (from left to right) John Smythe, Jonas Sundqvist, Jeff Hemphill, and Jean-Marc Girard. (Source: TECHCET CA)

KORCZYNSKI:  We heard from David Thompson [EDITOR’S NOTE:  Director of Process Chemistry, Applied Materials presented on “Agony in New Material Introductions -  Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD:  Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with it, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both, it’s not assay OR a specification of impurities. It’s a matter of specifying the trace components that really matter when you reach the point that the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL:  Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it and, in a sense, trying to over-specify that. If you identify through mass spectrometry and other techniques that some fraction of a percent is primarily, say, five different species, it’s a matter of finding out how to individually monitor, track and control those as separate parameters. So from a specification point of view what we want is not necessarily the lowest possible numbers, but to expand how many things we’re looking at so that we’re capturing everything that’s there.
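The specification style Hemphill and Girard describe, an assay window plus explicit limits on each individually tracked trace species, can be written down compactly. The sketch below is illustrative only, with hypothetical species names and limits rather than any company's actual spec.

```python
# Hypothetical incoming-material spec: an assay floor plus per-species trace limits.
SPEC = {
    "assay_min_pct": 95.0,
    "trace_limits_ppm": {"species_A": 2.0, "species_B": 10.0, "species_C": 50.0},
}

def lot_in_spec(assay_pct: float, traces_ppm: dict) -> bool:
    if assay_pct < SPEC["assay_min_pct"]:
        return False
    # every named trace species must be identified, reported, and under its own limit
    return all(name in traces_ppm and traces_ppm[name] <= limit
               for name, limit in SPEC["trace_limits_ppm"].items())

print(lot_in_spec(96.2, {"species_A": 1.1, "species_B": 4.0, "species_C": 12.0}))  # True
```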

KORCZYNSKI:  Is that something that you’re starting to push out to your suppliers?

HEMPHILL:  Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.

KORCZYNSKI:  Is that something that has changed by node? Back when things were simpler, say at 45nm and larger, were these aspects of processing something we could safely ignore as ‘noise’ but which are now important ‘signals’?

HEMPHILL:  Yes, we certainly didn’t pay as close attention just a couple of generations ago.

KORCZYNSKI:  That seems to lead us to questions about single-sourcing versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that because of all of the challenges we’ve heard about over the last day-and-a-half of this conference, there is a greater burden on the suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE:  I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So for example I  need to ask if a supplier has a place in the US and a place in France that makes the same thing, so that if something bad happens in one location it can still be sourced? Or do you have an alternate-supply agreement that if you can’t supply it you have an agreement with Company-X to supply it so that you still have control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source but it’s ‘single flexible-source.’

HEMPHILL:  There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the ‘copy-exactly’ concept that we use in our factories is something we really need from production lines, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory, then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same; then we have a high degree of confidence that they will be able to produce the same material.

FIGURE 2: Jean-Marc Girard, CTO and Director of R&D of Air Liquide Advanced Materials, provided the supplier perspective. (Source: SEMI)

GIRARD:  I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When the U.S. ports went on strike for a long time there was no way that material could ship out of the U.S. to customers. So, yes, there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA, the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Vital Control in Fab Materials Supply-Chains

Wednesday, January 25th, 2017

By Ed Korczynski, Sr. Technical Editor

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation among these industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI:  Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD:  For determining the specifications, to some extent it doesn’t matter, because we are out of the world of specs; what matters is the control limits. To Tim Hendry’s point in the keynote yesterday [EDITOR’S NOTE:  Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should instead try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability so that you know the process space in which you can play. That is the most important information for reaching the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.
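Girard's 'deliberate variability' idea translates naturally into a small split-lot analysis: vary the parameter of interest across containers, measure the process response, and set the control limit from where the response stays within its own spec. The sketch below uses entirely invented numbers to show the bookkeeping, nothing more.

```python
# Illustrative split-lot analysis of deliberately introduced variability.
# All values are invented; the point is the bookkeeping, not the data.
import numpy as np

impurity_ppm = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256])  # 10 deliberate splits
film_result  = np.array([1.00, 1.00, 1.01, 1.00, 1.02, 1.03,
                         1.06, 1.12, 1.25, 1.55])                 # normalized process response
result_spec_max = 1.05                                            # what the process can tolerate

# Highest deliberately introduced impurity level that still met the process spec:
# a data-driven basis for the incoming-material control limit.
window_edge = impurity_ppm[film_result <= result_spec_max].max()
print(f"observed process window extends to ~{window_edge:g} ppm of the impurity")
```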

FIGURE 1: Jonas Sundqvist, Sr. Scientist of Fraunhofer IKTS, discusses collaboration with industry on application-specific ALD R&D. (Source: TECHCET CA)

SUNDQVIST:  Take the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and it would not matter if there was slight contamination in the precursors because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely and film thicknesses have already been scaled down, so over time the specification for high-k precursors has become more important.

SMYTHE:  I think it comes down to the premise that when you are doing vapor transport through a bubbler, some would argue that it’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management, not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires ‘fingerprinting’ to show the same results. The argument is that you do not accept “less-than” as part of a specification; you only accept what it is.

AUDIENCE QUESTION:  The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get adsorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash out a lot of the value of having really pure chemicals, if material moving through a delivery system into a chamber picks up contaminants that you spent a whole lot of money taking out at the point of delivery. What do you think about that?

GIRARD:  Well, this is a ‘crisis!’ When something like this starts to happen in a fab, or even during the development cycles, you can’t prioritize resources and approaches, you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST:  We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts, good process stability and an ideal thermal processing range, but the growth rate goes down by 20%, so you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE:  What we’re doing a lot more these days is chamber fingerprinting, where we’re putting a quadrupole-filtered mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you look at your chemistry moving through a delivery line using something like the Schrödinger software, it’s not a big deal to see that some synthesis is happening in the line. We joke and call it ‘point of use synthesis’ but it’s not very funny. We are used to having spare delivery lines built in so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI:  John, since Micron has fabs in Lehi and fabs in Singapore and other places, while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE:  What we really need are a few good ocean-going hydrofoil ships! The most complete answer is that we first identify which things need ‘batch-qual,’ so that if we do a batch-qual in Virginia and know that material is going to Taiwan, we have confidence it will pass batch-qual in Taiwan. There are certain materials for which we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch-qual.

I think most of you are familiar with the concept of ‘ship-to-stock’: when you have enough good statistical history and a good change-management process with the supplier then you can do ship-to-stock, and that reduces the batch-qual overhead. On a case-by-case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that, in concentration above 5 ppm, prevented the poly-styrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push toward counting atoms.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA, the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Mentor Graphics Joins GLOBALFOUNDRIES FDXcelerator Partner Program

Thursday, December 22nd, 2016

Mentor Graphics Corp. (NASDAQ: MENT) today announced that it has joined GLOBALFOUNDRIES’ FDXcelerator Partner Program. FDXcelerator program partners support customers of GLOBALFOUNDRIES FDX™ technologies by providing a variety of design solutions, including approved design methodology, IP development expertise, hardware/software system integration expertise, and other critical software, services, and support. They participate in FDXcelerator Partner Program events, and receive early access to the GLOBALFOUNDRIES FDX roadmap and associated technology offerings.

“Mentor Graphics is proud to have expanded our long-term relationship with GLOBALFOUNDRIES to include the FDXcelerator Partner Program,” said Joe Sawicki, vice-president and general manager of the Design-to-Silicon division at Mentor Graphics. “We look forward to delivering an enhanced set of solutions to mutual customers in support of GLOBALFOUNDRIES FDX offerings that will enable the development of high quality low-power designs based upon FD-SOI technology.”

Mentor Graphics offerings participating in the FDXcelerator program include:

  • Multiple design implementation solutions from Digital IC Design, including the Oasys-RTL™ floorplanning and synthesis platform and Nitro-SoC™ next-generation place and route platform.
  • The Calibre® platform, including the Calibre DFM tool suite, the most comprehensive set of IC design verification tools in the EDA industry. Calibre tools will be designated as the sign-off tools for FDX across all GLOBALFOUNDRIES design creation flows.
  • The Analog FastSPICE (AFS)™ Platform, the fastest, most accurate, and highest capacity simulation for nanometer-scale circuits, and the Eldo® Platform, the most advanced circuit verification for analog-centric circuits. Collaboration with GLOBALFOUNDRIES includes device and circuit level certification for 22FDX, and support of reference flows for 22FDX.
  • The Tessent® product suite of comprehensive silicon test and yield analysis solutions includes a full design for test reference flow for 22FDX designs, and provides the industry’s highest test quality, lowest test cost, and fastest time to root cause of test failures.

“We are very pleased that Mentor Graphics has joined our FDXcelerator Partner Program,” said Alain Mutricy, senior vice president of product management at GLOBALFOUNDRIES. “The combination of Mentor’s EDA offerings and our FDX technologies provides customers with the solutions that will enable success in delivering products for today’s highly competitive IC markets.”

Linde Korea acquires Air Liquide Korea’s industrial merchant and electronics on-site and liquid bulk air gases business

Thursday, December 15th, 2016

Linde Korea, a member of The Linde Group, today announced that it has completed the takeover of Air Liquide Korea’s industrial merchant and electronics on-site and liquid bulk air gases business in South Korea. The ten sites under this agreement complement Linde’s existing presence and offerings in the country. In addition, the acquisition of the direct bulk business is a natural fit with Linde’s strategy of growing its local direct bulk supply network and customer base. The agreement underscores Linde’s focus on serving the demands for industrial air gas products in the electronics, chemicals and manufacturing industries.

Sanjiv Lamba, Chief Operating Officer for Asia Pacific and Member of the Executive Board of Linde AG, said “I am delighted that we have concluded the acquisition of Air Liquide’s industrial merchant and electronics on-site and liquid bulk air gases business in South Korea. The acquired industrial merchant and electronics on-site facilities will further strengthen our existing extensive network of sites and customer density in South Korea, and support the growth intentions of major markets, particularly in the electronics sector. The acquisition is part of our strategy of delivering long-term sustainable profits in key markets in the region, and complements the recent investments we made in enhancing our R&D capabilities in Asia.”

Steven Fang, Regional Business Unit Head, East Asia, The Linde Group, said “Our track record of investments in South Korea underscores our long-term commitment to expand our business in the region. Our investments also reaffirm our commitment to key customers, including Korean conglomerates such as Samsung, LG, Lotte Chemical and SK Hynix, to support their growth plans, in South Korea and worldwide.”

Under this agreement, Linde Korea has completed the takeover of Air Liquide Korea’s industrial merchant and electronics on-site and liquid bulk air gases business in South Korea. This includes the transfer of the related operating sites for the on-site plants, as well as tanks and related equipment for liquid storage. In addition, the associated customer contracts have been transferred to Linde Korea, together with the Air Liquide Korea employees who will continue to operate the plants and service customers.

Linde Korea first established its operations in Pohang in 1988. Over the past 30 years, it has continuously expanded its product and services portfolio, and its footprint across the country. In the last 10 years alone, Linde Korea has invested over EUR 300 million in industrial gases production facilities and equipment, contributing to the country’s industrial growth and economic success. This includes production facilities in Seosan and Giheung that produce high-purity industrial gases, and the investment in the joint venture PSG, a leading distributor of merchant and packaged industrial gases in South Korea.

Mentor Graphics Signs Agreement with ARM to Accelerate Early Hardware/Software Development

Wednesday, November 16th, 2016

Mentor Graphics Corporation (NASDAQ: MENT) has signed a multiyear license agreement with ARM to gain early access to a broad range of ARM Fast Models, Cycle Models and related technologies. Mentor will have access to all ARM Fast Models for the ARMv7 and ARMv8 architectures across all ARM Cortex-A, Cortex-R and Cortex-M cores, GPUs and System IP, in addition to engineering collaboration on further optimizations. This builds on agreements already in place to ensure that the validation of ARM models is completed ahead of mutual customer demand.

“Our collaboration with Mentor has resulted in one of ARM’s broadest modeling partnerships,” said Javier Orensanz, general manager, development solutions group, ARM. “With this agreement, our mutual customers can utilize ARM’s entire model portfolio to speed system execution and debug issues with complete accuracy.”

As a result of this agreement, ARM Fast Models can be combined with the Veloce emulation platform, for example, to enable faster verification and earlier software development. Moving the modeling of the CPU and GPU out of the emulator and into the ARM Fast Models allows software execution performance orders of magnitude faster than a traditional approach that relies on a complete RTL description being ready. This enables software tasks, such as Android boots and application execution, to be run quickly. Verification teams can now validate more than just boot code and drivers. They can also run complete software stacks to exercise the system in a realistic manner and flush out hard-to-find bugs which would otherwise have gone undetected until physical prototypes were available.

“This second agreement with ARM clearly indicates our strategic alignment toward providing a complete HW/SW development platform,” said Brian Derrick, vice president of marketing, Mentor Graphics. “Our mutual customers benefit from early access and validation of state-of-the-art Mentor technology working with the most current ARM models.”

Elusive Analog Fault Simulation Finally Grasped

Tuesday, September 27th, 2016


By Stephen Sunter, Mentor Graphics

The test time per logic gate in ICs has greatly decreased in the last 20 years, thanks to scan-based design-for-test (DFT), automatic test pattern generation (ATPG) tools, and scan compression. But for analog circuits, test time per transistor has not decreased at all. And to make matters worse, the test time for the analog portion of an IC can dominate total test time. A new approach is needed for analog tests to achieve higher coverage in less time, or to improve defect tolerance.

(Figure source: ON Semiconductor)

Analog designers and test engineers do not have DFT tools comparable to those used by their digital counterparts. It has been difficult to improve the defective parts per million (DPPM) rate because it has been too challenging to measure defect coverage; DPPM is typically measured by the rate of customer returns, which can occur months after the ICs are tested.

Analog fault simulation has been discussed only in academic papers and, recently, in a few industrial papers that describe proprietary software. Why haven’t the analog fault simulation techniques described in all those papers led to commercially available fault simulators that are used in industry? Mostly because there is no industry-accepted analog fault model, and simulating all potential faults requires an impractically long time.

Potential Solutions for Reducing Simulation Time

Many methods for reducing simulation time have been proposed over the years in published papers, including:

  • Simulate only shorts and opens in the schematic netlist without variations;
  • Analyze a circuit’s layout to find the shorts and opens that can actually occur (and the likelihood of those defects occurring);
  • Simulate only in the AC domain;
  • Simulate the sensitivities of each tested performance to variations in each circuit element;
  • Use a simplified, time domain simulation to measure the impact of injected shorts and opens on output signals, only within a few clock cycles;
  • Measure analog toggle coverage.

Even if these techniques were very efficient and reduced simulation time dramatically, the large number of defects simulated would mean that the number of undetected defects to diagnose would be large. For example, if there were 100,000 potential faults in a circuit and 90% were detected, there would be 10,000 undetected faults to investigate. Analyzing each defect is a very time-consuming task that requires detailed knowledge of the circuit and tests. Therefore, reducing the number of defects simulated can save a lot of time, in multiple ways. The methods to reduce the number of defects include:

  • Randomly select defects from a list of all potential defects;
  • Randomly select defects, after grouping them according to defect likelihoods;
  • Select only principal parameters of the circuit elements, such as voltage, gate length, width, and oxide thickness;
  • Select representative defects based on circuit analysis.

Potential Standard Analog Fault Models

Currently, there is no accepted analog fault model standard in the industry. Proposals such as simulating only short and open defects and simulating defective variations in circuit elements or in high-level models have been rejected. Because of the lack of a standard, a group of about a dozen companies (including Mentor Graphics) has been meeting regularly since mid-2014 to develop such a fault model. The group has reported their progress publicly several times, and hopes to develop an IEEE standard by 2018.

The Tessent DefectSim Solution

Tessent® DefectSim™ incorporates lessons learned from all previous approaches, combining the best aspects of each while avoiding their pitfalls. A variety of techniques together cut total simulation time by many orders of magnitude compared to some of the previous approaches, without introducing a new simulator, reducing existing simulator accuracy, or restricting the types of tests. The analog defect models can be shorts and opens, just variations, or both. Or, users can substitute their own proprietary defect models. The defects can be injected at the schematic level, at the layout level, or a combination of both.

To be realistic, defects should be injected in a layout-extracted netlist. But higher-level netlist descriptions or hardware description language (HDL) models, such as Verilog-A or Verilog RTL, can reduce simulation time by one or two orders of magnitude. In practice, the highest level netlist of a subcircuit is often just its schematic; nevertheless, it typically simulates an order of magnitude faster than the layout-extracted netlist. DefectSim runs Eldo® when the circuit contains only SPICE and Verilog-A models, and Questa® ADMS™ when Verilog-AMS or RTL models are also used.

DefectSim introduces a new statistical technique called likelihood-weighted random sampling (LWRS) to minimize the number of defects to simulate. This new technique uses stratified random sampling in which each stratum contains only one defect. The likelihood of randomly selecting each defect is proportional to the likelihood of the defect occurring. Each likelihood of occurrence is computed based on designer-provided global parameters, and parameters of each circuit element.

For example, shorts are the most common defects; in state-of-the-art production processes, shorts are 3-10X more likely than opens. When the range of defect likelihoods is large, as it is for mixed-signal circuits, LWRS requires up to 75% fewer samples than simple random sampling (SRS) for a given confidence interval (the variation in an estimate that would occur if the random sampling were done many times). In practice, when coverage is 90% or higher, this means that it is usually sufficient to simulate a maximum of 250 defects, regardless of the circuit size or the number of potential defects, to estimate coverage within 2.5% at a 99% confidence level. Simulating as few as one hundred defects is sufficient to get ±4% estimate precision. For small circuits, or when time permits, all defects can be simulated.
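To make the sampling idea concrete, here is a generic likelihood-weighted sampling sketch: defects are drawn with probability proportional to their likelihood of occurring, so the plain detected fraction of the sample estimates likelihood-weighted coverage. The 5:1 short-to-open likelihood ratio, the detection stand-in, and the defect counts are all assumptions for illustration, and the simple binomial confidence interval shown does not reproduce the stratified variance reduction behind DefectSim's ±2.5%-at-250-samples figure.

```python
# Generic likelihood-weighted defect sampling for coverage estimation (illustrative only).
import random
from math import sqrt

def estimate_coverage(defects, simulate_detected, n_samples=250, z=2.576):
    """defects: list of (defect_id, likelihood) pairs; z=2.576 gives ~99% confidence."""
    ids, likelihoods = zip(*defects)
    sample = random.choices(ids, weights=likelihoods, k=n_samples)  # likelihood-weighted draw
    detected = sum(1 for d in sample if simulate_detected(d))       # one fault simulation each
    p = detected / n_samples
    half_width = z * sqrt(p * (1 - p) / n_samples)                  # plain binomial interval
    return p, half_width

# 100,000 hypothetical defects; shorts assumed 5x more likely to occur than opens
defects = [(f"short_{i}", 5.0) for i in range(80_000)] + \
          [(f"open_{i}", 1.0) for i in range(20_000)]
# Stand-in for a real fault simulation: pretend the tests detect ~90% of defects
cov, hw = estimate_coverage(defects, simulate_detected=lambda d: random.random() < 0.9)
print(f"estimated coverage {cov:.1%} ± {hw:.1%}")
```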

DefectSim allows you to combine almost all of the previously-published techniques for reducing simulation time, including random sampling, high-level modeling, stop-on-detection, AC mode, and parallel simulation. All together, these techniques can reduce simulation time by up to six orders of magnitude compared to simulating the production test of all potential defects in a flat, layout-extracted netlist. The same techniques can be applied to the measurement of defect tolerance.

For more information about Tessent DefectSim, read the whitepaper at:
https://www.mentor.com/products/silicon-yield/resources/overview/part-1-analog-fault-simulation-challenges-and-solutions-f9fd7248-3244-4bda-a7e5-5a19f81d7490?cmpid=10167
