
Posts Tagged ‘yield’


Vital Control in Fab Materials Supply-Chains – Part 2

Thursday, February 16th, 2017

By Ed Korczynski, Sr. Technical Editor

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation among these industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jeff Hemphill, Staff Materials R&D Engineer, Intel Corporation,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

FIGURE 1: 2016 CMC Conference expert panelists (from left to right) John Smythe, Jonas Sundqvist, Jeff Hemphill, and Jean-Marc Girard. (Source: TECHCET CA)

KORCZYNSKI:  We heard from David Thompson [EDITOR’S NOTE: Director of Process Chemistry, Applied Materials, presented on “Agony in New Material Introductions - Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD:  Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with them, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both; it’s not assay alone or an impurity specification alone. It’s a matter of specifying the trace components that really matter once the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL:  Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it, and in a sense trying to over-specify that. If you identify through mass-spectrometry and other techniques that some fraction of a percent is primarily, say, five different species, then it’s a matter of finding out how to individually monitor, track, and control those as separate parameters. So from a specification point of view, what we want is not necessarily the lowest possible numbers, but to expand how many things we’re looking at so that we’re capturing everything that’s there.

KORCZYNSKI:  Is that something that you’re starting to push out to your suppliers?

HEMPHILL:  Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.

KORCZYNSKI:  Is that something that has changed by node? Back when things were simpler say at 45nm and larger, were these aspects of processing that we could safely ignore as ‘noise’ but are now important ‘signals’?

HEMPHILL:  Yes, we certainly didn’t pay as close attention just a couple of generations ago.

KORCZYNSKI:  That seems to lead us to questions about single-sourcing versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that because of all of the challenges we’ve heard about over the last day-and-a-half of this conference, a greater burden is created for the suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE:  I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So for example I need to ask if a supplier has a place in the US and a place in France that makes the same thing, so that if something bad happens in one location it can still be sourced. Or do you have an alternate-supply agreement, such that if you can’t supply it you have an agreement with Company-X to supply it, so that you still have control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source but it’s ‘single flexible-source.’

HEMPHILL:  There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the concept of being ‘copy-exactly’ that we use in our factories, we really need production lines to do that, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same, then we have a high degree of confidence that they will be able to produce the same material.

FIGURE 2: Jean-Marc Girard, CTO and Director of R&D of Air Liquide Advanced Materials, provided the supplier perspective. (Source: SEMI)

GIRARD:  I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When the U.S. ports went on strike for a long time there was no way that material could ship out of the U.S. to customers. So, yes, there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines, and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA, the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Vital Control in Fab Materials Supply-Chains

Wednesday, January 25th, 2017

By Ed Korczynski, Sr. Technical Editor

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation among these industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI:  Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD:  For determining the specifications, to some extent it doesn’t matter, because we are out of the world of specs and into the world where what matters is the control-limits. To Tim Hendry’s point in the Keynote yesterday [EDITOR’S NOTE: Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should instead try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability, so that you know the process space in which you can play. That is the most important information for reaching the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.
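What Girard describes is essentially a split-lot experiment followed by a window calculation. Below is a minimal sketch of that logic in Python; the data, the 95% yield floor, and the guard-band fraction are invented for illustration, not taken from the panel.

    # Sketch: derive data-driven control limits from deliberately varied splits.
    def find_process_window(splits, yield_floor=0.95):
        """splits: list of (impurity_ppm, normalized_yield) from a split-lot DOE."""
        passing = sorted(ppm for ppm, y in splits if y >= yield_floor)
        if not passing:
            raise ValueError("no split met the yield floor; widen the DOE range")
        return passing[0], passing[-1]  # edges of the observed working window

    def control_limits(window, guard_band=0.2):
        """Set limits inside the window, leaving a fractional guard band."""
        lo, hi = window
        margin = (hi - lo) * guard_band
        return lo + margin, hi - margin

    # Ten containers from one batch, with one impurity deliberately varied:
    splits = [(0.5, 0.99), (1.0, 0.99), (2.0, 0.98), (5.0, 0.96), (10.0, 0.90)]
    window = find_process_window(splits)   # -> (0.5, 5.0) ppm
    print("working window:", window)
    print("control limits:", control_limits(window))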

FIGURE 1: Jonas Sundqvist, Sr. Scientist of Fraunhofer IKTS, discusses collaboration with industry on application-specific ALD R&D. (Source: TECHCET CA)

SUNDQVIST:  Take the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and it would not matter if there was slight contamination in the precursors because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely, and you have already scaled film thickness, so over time the specification for high-k precursors has become more important.

SMYTHE:  I think it comes down to the premise that when you are doing vapor transport through a bubbler, some would argue that it’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management, not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires that ‘fingerprinting’ be done to show the same results. The argument is that you do not accept “less-than” as part of a specification; you only accept what it is.

AUDIENCE QUESTION:  The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get adsorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash out a lot of the value of having really pure chemicals, if material moving through a delivery system into a chamber picks up contaminants that you spent a whole lot of money taking out at the point of delivery. What do you think about that?

GIRARD:  Well, this is a ‘crisis!’ When something like this starts to happen in a fab or even during the development cycles, you can’t prioritize resources and approaches you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST:  We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts, good process stability, and an ideal thermal processing range, but if the growth rate goes down by 20% then you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE:  What we’re doing a lot more these days is chamber finger-printing, where we’re putting a quadrupole mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you look at your chemistry moving through a delivery line using something like the Schrödinger software, it’s not a big deal to see that you can use the mass-spec to detect some synthesis happening in the line. We joke and call it ‘point-of-use synthesis’ but it’s not very funny. We are used to having spare delivery lines built in, so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI:  John, since Micron has fabs in Lehi and fabs in Singapore and other places, while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE:  What we really need are a few good ocean-going hydrofoil ships! The most complete answer is we first identify which things need ‘batch-qual’ so if we do a batch-qual in Virginia and know that material is going to Taiwan that we have confidence it will pass batch-qual in Taiwan. There are certain materials that we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch qual.

I think most of you are familiar with the concept of ‘ship-to-stock’: when you have enough good statistical history and a good change-management process with the supplier, then you can do ship-to-stock and that reduces the batch-qual overhead. On a case-by-case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that in concentration above 5 ppm prevented the polystyrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push to counting atoms.
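Smythe’s BCP example shows why a lumped ‘total trace metals’ limit can pass a batch that will fail in the fab. The hypothetical Python sketch below makes the point concrete; the element names and limits are invented, with only the 5 ppm figure taken from his story.

    # Sketch: disposition a batch against per-element limits, not a lumped total.
    PER_ELEMENT_LIMITS_PPM = {
        "Fe": 500.0,   # tolerant element: ~100x higher with no process effect
        "Na": 500.0,
        "K": 5.0,      # stand-in for the one element that blocks self-assembly
    }

    def lot_disposition(assay_ppm):
        """assay_ppm: dict of element -> measured concentration for one batch."""
        failures = {el: ppm for el, ppm in assay_ppm.items()
                    if ppm > PER_ELEMENT_LIMITS_PPM.get(el, float("inf"))}
        return ("REJECT", failures) if failures else ("ACCEPT", {})

    # Total metals here are low, but the one critical element is out of spec,
    # which a single lumped specification would not catch:
    print(lot_disposition({"Fe": 120.0, "Na": 80.0, "K": 6.3}))
    # -> ('REJECT', {'K': 6.3})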

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA, the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Fab Facilities Data and Defectivity

Monday, August 1st, 2016


By Ed Korczynski, Sr. Technical Editor

In-the-know attendees at a Thursday morning working breakfast at SEMICON West heard executives representing the world’s leading memory fabs discuss manufacturing challenges at the 4th annual Entegris Yield Forum. Among the excellent presenters was Norm Armour, managing director of worldwide facilities and corporate EHSS at Micron. Armour has been responsible for some of the most famous fabs in the world, including the Malta, New York logic fab of GlobalFoundries, and AMD’s Fab25 in Austin, Texas. He discussed how facilities systems affect yield and parametric control in the fab.

Just recently, his organization within Micron broke records working with M&W on the new flagship Fab 10X in Singapore—now running 3D-NAND—by going from ground-breaking to first-tool-in in less than 12 months, followed by over 400 tools installed in 3 months. “The devil is in the details across the board, especially for 20nm and below,” declared Armour. “Fabs are delicate ecosystems. I’ll give a few examples from a high-volume fab of things that you would never expect to see, of component-level failures that caused major yield crashes.”

Ultra-Pure Water (UPW)

Ultra-Pure Water (UPW) is critical for IC fab processes including cleaning, etching, CMP, and immersion lithography, and contamination specs are now at the part-per-billion (ppb) or part-per-trillion (ppt) levels. Use of online monitoring is mandatory to mitigate the risk of contamination. International Technology Roadmap for Semiconductors (ITRS) guidelines for UPW quality (minimum acceptable standard) include the following critical parameters, illustrated in the short sketch after this list:

  • Resistivity @ 25C >18.0 Mohm-cm,
  • TOC <1.0 ppb,
  • Particles/ml < 0.3 @ 0.05 um, and
  • Bacteria by culture 1000 ml <1.
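A minimal sketch of what such online monitoring amounts to in software, using the ITRS limits from the list above (the parameter names and the sample readings are invented):

    # Sketch: flag UPW monitor readings that violate the ITRS guideline limits.
    UPW_LIMITS = {
        "resistivity_Mohm_cm":  ("min", 18.0),  # @ 25C
        "toc_ppb":              ("max", 1.0),
        "particles_per_ml":     ("max", 0.3),   # @ 0.05 um
        "bacteria_cfu_1000ml":  ("max", 1.0),   # <1 by culture per 1000 ml
    }

    def check_upw(sample):
        """sample: dict of parameter -> measured value from online monitors."""
        alarms = []
        for param, (kind, limit) in UPW_LIMITS.items():
            value = sample[param]
            if (kind == "min" and value < limit) or (kind == "max" and value > limit):
                alarms.append((param, value, limit))
        return alarms

    sample = {"resistivity_Mohm_cm": 18.2, "toc_ppb": 0.6,
              "particles_per_ml": 0.4, "bacteria_cfu_1000ml": 0.0}
    print(check_upw(sample))  # -> [('particles_per_ml', 0.4, 0.3)]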

In one case associated with a gate-cleaning tool, elevated levels of zinc were detected on lots that had passed through one particular tool running a variation on a classic SC1 wet clean. High-purity chemistries were eliminated as sources based on analytical testing, so the root-cause analysis shifted to the UPW system as a possible source. Statistical analysis then showed a positive correlation between UPW supply lines equipped with pressure regulators and the zinc exposure. The pressure-regulator vendor confirmed use of zinc-oxide and zinc-stearate as part of the assembly process of the pressure regulator. “It was really a curing agent for an elastomer diaphragm that caused the contamination of multiple lots,” confided Armour.
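The commonality analysis behind that finding can be illustrated with a toy Python sketch; all readings and line names below are invented, and a real analysis would use proper hypothesis testing over many more lots:

    # Sketch: compare zinc levels for lots grouped by UPW supply-line hardware.
    from statistics import mean

    lots = [
        {"line": "A", "regulator": True,  "zn_ppt": 310},
        {"line": "A", "regulator": True,  "zn_ppt": 280},
        {"line": "B", "regulator": False, "zn_ppt": 12},
        {"line": "B", "regulator": False, "zn_ppt": 9},
        {"line": "C", "regulator": True,  "zn_ppt": 295},
    ]

    with_reg = [l["zn_ppt"] for l in lots if l["regulator"]]
    without_reg = [l["zn_ppt"] for l in lots if not l["regulator"]]

    # A large gap flags the regulator-equipped lines for physical inspection,
    # which is where the zinc-stearate curing agent was found.
    print(f"mean Zn with regulators:    {mean(with_reg):.0f} ppt")
    print(f"mean Zn without regulators: {mean(without_reg):.0f} ppt")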

UPW pressure regulators are just one of many components used in facilities builds that can significantly degrade fab yield. It is critical to implement a rigorous component testing and qualification process prior to component installation and widespread use. “Don’t take anything for granted,” advised Armour. “Things like UPW regulators have a first-order impact upon yield and they need to be characterized carefully, especially during new fab construction and fit up.”

Photoresist filtration

Photoresist filtration has always been important for ensuring high yield in manufacturing, but it has become ultra-critical for lithography at the 20nm node and below. Dependable filtration is particularly important because the industry lacks in-line monitoring technology capable of detecting particles below ~40nm.

Micron tried using filters with 50nm pore diameters for a 20nm node process…and saw excessive yield losses along with extreme yield variability. “We characterized pressure-drop as a function of flow-rate, and looked at various filter performances for both 20nm and 40nm particles,” explained Armour. “We implemented a new filter, and lo and behold saw a step function increase in our yields. Defect densities dropped dramatically.” Tracking the yields over time showed that the variability was significantly reduced around the higher yield-entitlement level.

Airborne Molecular Contamination (AMC)

Airborne Molecular Contamination (AMC) is ‘public enemy number one’ in 20nm-node and below fabs around the world. “In one case there were forest fires in Sumatra and the smoke was going into the atmosphere and actually went into our air intakes in a high-volume fab in Taiwan thousands of miles away, and we saw a spike in hydrogen-sulfide,” confided Armour. “It increased our copper CMP defects, due to copper migration. After we installed higher-quality AMC filters for the make-up air units we saw dramatic improvement in copper defects. So what is most important is that you have real-time on-line monitoring of AMC levels.”

Building collaborative relationships with vendors is critical for troubleshooting component issues and improving component quality. “Partnering with suppliers like Entegris is absolutely essential,” continued Armour. “On AMCs for example, we have had a very close partnership that developed out of a team working together at our Inotera fab in Taiwan. There are thousands of important technologies that we need to leverage now to guarantee high yields in leading-node fabs.” The Figure shows just some of the AMCs that must be monitored in real-time.

Big Data

The only way to manage all of this complexity is with “Big Data,” and in addition to the primary process parameters that must be tracked there are many essential facilities inputs to analytics:

  • Environmental Parameters – temperature, humidity, pressure, particle count, AMCs, etc.
  • Equipment Parameters – run state, motor current, vibration, valve position, etc.
  • Effluent Parameters – cooling water, vacuum, UPW, chemicals, slurries, gases, etc.

“Conventional wisdom is that process tools create 90% of your defect density loss, but that’s changing toward facilities now,” said Armour. “So why not apply the same methodologies within facilities that we do in the fab?” SPC is after-the-fact reactive, while APC is real-time fault detection on input variables, including such parameters as vibration or flow-rate of a pump.

“Never enough data,” enthused Armour. “In terms of monitoring input variables, we do this through the PLCs and basically use SCADA to do the fault-detection interdiction on the critical input variables. This has been proven to be highly effective, providing a lot of protection, and letting me sleep better at night.”
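The distinction Armour draws, reactive SPC on outputs versus real-time fault detection on inputs, can be sketched in a few lines of Python. The limits, readings, and messages below are invented; a production system would live in the PLC/SCADA layer he describes:

    # Sketch: APC-style real-time checks on a facilities input variable.
    from collections import deque

    class InputMonitor:
        def __init__(self, low, high, window=5):
            self.low, self.high = low, high
            self.recent = deque(maxlen=window)  # short history for trend checks

        def ingest(self, value):
            self.recent.append(value)
            if not (self.low <= value <= self.high):
                return f"FAULT: {value} outside [{self.low}, {self.high}] -> interdict"
            vals = list(self.recent)
            if len(vals) == self.recent.maxlen and all(
                    b > a for a, b in zip(vals, vals[1:])):
                return "WARN: monotonic drift -> schedule maintenance"
            return "OK"

    mon = InputMonitor(low=0.1, high=2.0)  # e.g. pump vibration, arbitrary units
    for reading in [0.5, 0.6, 0.8, 1.1, 1.5, 2.3]:
        print(reading, mon.ingest(reading))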

Micron also uses these data to provide site-to-site comparisons. “We basically drive our laggard sites to meet our world-class sites in terms of reducing variation on facility input variables,” explained Armour. “We’re improving our forecasting as a result of this capability, and ultimately protecting our fab yields. Again, the last thing a fab manager wants to see is facilities causing yield loss and variation.”

—E.K.

Samsung’s Closed-Loop DFM Solution Accelerates Yield Ramps

Sunday, June 5th, 2016


Mentor Graphics Corp. announced that Samsung Foundry’s closed-loop design-for-manufacturing (DFM) solution uses production Mentor Calibre and Tessent platforms to accelerate customer yield ramps.

In the Closed-Loop DFM flows, Samsung integrates its DFM kits with its testing and manufacturing expertise to identify integrated circuit design patterns that are most likely to impact manufacturing yield, thereby helping customers improve design quality, yield, and ramp to production.

“We can detect the risks in customer products and prevent them,” said K.K. (Kuang-Kuo) Lin, Director, Foundry Marketing Ecosystem, Samsung Semiconductor. “We have seen yield gain of up to 8.5%. In terms of the post-manufacturing yield analysis, we have seen the benefits of around 2%. These numbers are not guaranteed because each product is different, but from our experience, these are the numbers we have seen.”

The Samsung solution extracts customer yield-averse design patterns, feeds that information forward to optimize manufacturing and testing, and closes the loop with feedback from silicon results for product design and yield improvement. This solution is not only useful to initial customer designs, but it also allows learning from current production designs to be applied to next-generation designs from that same customer across entire product families.

As shown in Figure 1, Samsung’s foundry offerings cover the needs of devices ranging from the IoT to consumer, mobile computing, high-end computing, and automotive. The company, which first got into the foundry business in 2005, claims to be the first foundry to have high-k metal gates in production (in 2011), the first foundry to offer FinFET risk production (in 2013), and the first foundry to tape out a 10nm product. “We are also at the forefront of 7nm. We call it 7LPP, which will be based on EUV,” Lin added.

Figure 1

With the end goal of rapid yield ramp for new product introduction, Samsung turned to Mentor Graphics tools for its pre-production DFM flow, which it calls PRISM (pattern recognition and identity scoring methods) and which runs on Mentor’s Calibre platform. For this pre-production phase, “we provide very comprehensive process-aware DFM sign-off kits and optimization flow for the designers so they can double-check and verify, prevent any DFM issues during the design phase,” Lin said.

The other component of closed-loop DFM is in post-manufacturing. Samsung has developed a set of tools called FLARE (Failure analysis And yield Rank Estimation with DFM hotspot database), which runs on Mentor’s Tessent platform.

Figure 2 shows how PRISM and FLARE work together in a closed-loop fashion for pre- and post-production DFM.

Figure 2

“Every design has its idiosyncrasies and its unique signatures because layout designers can be pretty creative,” Lin explained. “We use PRISM to do extensive pattern analysis and then do optimization during the data prep and also use the pattern analysis result to drive in-line inspection.”

Once the wafer is manufactured in the fab, FLARE involves mapping a yield-learning database with EDS (electrical die-sort) data. “We’ll combine them to do yield pareto data analysis and also mapping analysis. From that deep learning, we are able to prioritize which part of the fab process we can improve. We can also feed back to the DFM kit which we use in the design phase, which gives the designer feedback on what they can further improve,” Lin said.

At the heart of PRISM is a defect database built from test vehicles and existing products (Figure 3). “We put all the patterns that we know into this defect database,” Lin explained. “We also couple it with some very novel things. We use a layout schematic generator from Mentor to increase the coverage, to enumerate all the possible patterns. And then we also have metadata and simulators to do yield prediction of those known defects from different sources.”

Figure 3

“Once a customer product comes into Samsung foundry, we will check against the known defect database. Then we will do prediction in terms of the process margin and feed-forward this data into the subsequent steps of data prep or retargeting, and in-line inspection so we can prioritize our resources to know what to inspect and what not to in the manufacturing steps,” Lin said (see Figure 4).

Figure 4
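The “check against the known defect database” step can be caricatured as a pattern lookup. The Python sketch below is purely illustrative: real PRISM pattern analysis involves fuzzy matching and simulation-based scoring, and the geometry encoding and risk scores here are invented.

    # Sketch: look up small layout windows in a database of known risky patterns.
    def pattern_key(window):
        """Canonicalize a layout window (here: an iterable of rectangle tuples)."""
        return tuple(sorted(window))

    # Known-bad patterns scored from test vehicles and prior products (invented):
    defect_db = {
        pattern_key([(0, 0, 10, 2), (0, 4, 10, 2)]): 0.82,  # risk score
    }

    def score_design(windows):
        hits = []
        for w in windows:
            risk = defect_db.get(pattern_key(w))
            if risk is not None:
                hits.append((w, risk))  # feed forward: prioritize inspection here
        return hits

    design = [[(0, 4, 10, 2), (0, 0, 10, 2)], [(0, 0, 5, 5)]]
    print(score_design(design))  # the first window matches a known risky pattern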

“FLARE accelerates the learning in the fab to bring up customer products in our foundry. It helps the customer achieve their time to market. It also saves on fab operation costs, so it’s a win-win situation for everyone,” Lin said.

The Closed-Loop DFM flows are in production use today for customers of Samsung Foundry services. While proven in 14 nm technology, the flows can be used for ICs manufactured with other Samsung process nodes.

At the 2016 Design Automation Conference, Mentor and Samsung are co-hosting a lunch seminar entitled “Accelerate Yield Ramps with Samsung Foundry Closed-Loop DFM and Mentor Tools.” The event is Monday, June 6, from 12:00 to 1:30 PM. Interested customers can register for the event using this registration link.

https://www.mentor.com/products/ic_nanometer_design/events/samsung-dac-lunch-seminar

Molecular Modeling of Materials Defects for Yield Recovery

Monday, March 21st, 2016


By Ed Korczynski, Sr. Technical Editor

New materials are being integrated into High Volume Manufacturing (HVM) of semiconductor ICs, while old materials are being extended with more stringent specifications. Defects within materials cause yield losses in HVM fabs, and engineers must identify the specific source of an observed defect before corrective steps can be taken. Honeywell Electronic Materials has been using molecular modeling software provided by Scienomics to both develop new materials and to modify old materials. Modeling allowed Honeywell to uncover the origin of subtle solvation-based film defects within Bottom Anti-Reflective Coatings (BARC) which were degrading yield in a customer’s lithographic process module.

Scienomics sponsored a Materials Modeling and Simulations online seminar on February 26th of this year, featuring Dr. Nancy Iwamoto of Honeywell discussing how Scienomics software was used to accelerate response to a customer’s manufacturing yield loss. “This was a product running at a customer line,” explained Iwamoto, “and we needed to find the solution.” The product was a Bottom Anti-Reflective Coating (BARC) organo-silicate polymer delivered in solution form and then spun on wafers to a precise thickness.

Originally observed by fab engineers during optical inspection as faint 1-2 micron spots in the BARC, the new defect type was difficult to see yet could be correlated to lithographic yield loss. The defects appeared to be discrete within the film instead of on the top surface, so the source was likely some manner of particle, yet filters did not capture these particles.

The filter captured some particles rich in silicon, as well as other particles rich in carbon. Sequential filtration showed that particles were passing through impossibly small pores, which suggested that the particles were built of deformable gel-like phases. The challenge was to find the material-handling or processing situation that resulted in thermodynamically possible and kinetically probable conditions for forming such gels.

Fig: Materials Processes and Simulations (MAPS) gives researchers access to visualization and analysis tools in a single user interface together with access to multiple simulation engines. (Source: Scienomics)

Molecular modeling and simulation is a powerful technique that can be used for materials design, functional upgrades, process optimization, and manufacturing. The Figure shows a dashboard for Scienomics’ modeling platform. Best practices in molecular modeling to find out-of-control parameters in HVM include a sequential workflow:

  • Build correct models based on experimental observables,
  • Simulate potential molecular structures based on known chemicals and hierarchical models,
  • Analyze manufacturing variabilities to identify excursion sources, and
  • Propose remedy for failure elimination.

Honeywell Electronic Materials researchers had very few experimental observables from which to start: the phenomenon is rare (yet affects yield) and not filterable, yet from thermodynamic hydrolysis parameters it must be quasi-stable. Re-testing of product and re-examination of Outgoing Quality Control (OQC) data at the Honeywell production site showed that the molecular weight of the product was consistent with the desired distribution. There was also an observed BARC thickness increase of ~1nm on the wafer associated with the presence of these defects.

Using the modeling platform, Honeywell looked at the solubility parameters for different small molecular chains off of known-branched back-bone centers. Gel-like agglomerations could certainly be formed under the wrong conditions. Once the agglomerations form, they are not very stable so they can probably dis-aggregate when being forced through a filter and then re-aggregate on the other side.

What conditions could induce gel formation? After a few weeks of modeling, it was determined that temperature variations had the greatest influence on the agglomeration, and that variability was strongest at the ~250K recommended for storage. Storage at 230K resulted in measurably worse agglomeration, and any extreme in heating/cooling ramp rate tended to reduce solubility.
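In the spirit of that result, the sketch below scans candidate storage temperatures against a temperature-dependent solubility model and flags gel risk. The solubility function is a made-up placeholder; the actual work used thermodynamic models of specific oligomers in the MAPS platform.

    # Sketch: screen storage temperatures for agglomeration (gel) risk.
    import math

    def relative_solubility(temp_K, t_ref=270.0, width=25.0):
        """Placeholder model: solubility falls off as temperature drops below t_ref."""
        return 1.0 / (1.0 + math.exp(-4 * (temp_K - t_ref) / width))

    def gel_risk(temp_K, threshold=0.25):
        return relative_solubility(temp_K) < threshold

    for t in (230, 250, 270, 290):
        print(f"{t} K: solubility {relative_solubility(t):.3f}, gel risk {gel_risk(t)}")
    # Lower storage temperatures show lower solubility and higher risk,
    # consistent with 230K being measurably worse than 250K.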

Molecular modeling was used in a forensic manner to find that the root cause of gel-like defects was related to thermal history:

  • Thermodynamics determined the most likely oligomers that could agglomerate, and
  • Temperature-dependent solubility models determined which particles would reach wafers.

Because of the on-wafer BARC thickness increase of ~1nm, fab engineers could use all of the molecular modeling information to trace the temperature variation to bottles installed in the lithographic track tool. The fab was able to change specifications for the storage and handling of the BARC bottles to bring the process back into control.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D-stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to feed forward and back to minimize the risk to expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST resident on the logic die connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
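To make the idea of a structured, provenance-carrying test record concrete, here is a purely illustrative Python sketch. The field names are invented and do not reflect the actual RITdb schema; the point is only that one record can carry its results and its audit trail as it moves between companies.

    # Sketch: a test record that accumulates provenance across the supply chain.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class TestRecord:
        device_id: str
        test_step: str                  # e.g. wafer sort, final test
        site: str                       # which company/factory produced the data
        results: dict                   # parametric and bin data
        provenance: list = field(default_factory=list)  # audit trail

        def hand_off(self, next_site):
            self.provenance.append(self.site)
            self.site = next_site

    rec = TestRecord("lot42-w07-d113", "wafer_sort", "foundry_fab8",
                     {"bin": 1, "idd_ua": 31.7})
    rec.hand_off("osat_site3")  # the record keeps its history moving downstream
    print(json.dumps(asdict(rec), indent=2))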

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature-measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of TSV metrology tools worldwide. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bumps, Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield-loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said that, “we’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for its Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation, and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process that both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.

—E.K.

3DIC Technology Drivers and Roadmaps

Monday, June 22nd, 2015


By Ed Korczynski, Sr. Technical Editor

After 15 years of targeted R&D, through-silicon via (TSV) formation technology has been established for various applications. Figure 1 shows that there are now detailed roadmaps for different types of 3-dimensional (3D) ICs well established in industry—first-order segmentation based on the wiring-level/partitioning—with all of the unit-processes and integration needed for reliable functionality shown. Using block-to-block integration with 5 micron lines at leading international IC foundries such as GlobalFoundries, systems stacking logic and memory such as the Hybrid Memory Cube (HMC) are now in production.

Fig. 1: Today’s 3D technology landscape segmented by wiring-level, showing cross-sections of typical 2-tier circuit stacks, and indicating planned reductions in contact pitches. (Source: imec)

“There are interposers for high-end complex SOC design with good yield,” informed Eric Beyne, Scientific Director of Advanced Packaging & Interconnect at imec, in an exclusive interview with Solid State Technology. “For a systems company, once you’ve made the decision to go 3D there’s no way back,” said Beyne. “If you need high-bandwidth memory, for example, then you’re committed to some sort of 3D. The process is happening today.” Beyne is scheduled to talk about 3D technology driven by 3D application requirements at the imec Technology Forum to be held July 13 in San Francisco.

Adaptation of TSV for stacking of components into a complete functional system is key to high-volume demand. Phil Garrou, packaging technologist and SemiMD blogger, reported from the recent ConFab that Hynix is readying a second generation of high-bandwidth memory (HBM 2) for use in high performance computing (HPC) such as graphics, with products already announced like Pascal from Nvidia and Greenland from AMD.

For a normalized 1 cm2 of silicon area, wide-IO memory needs 1600 signal pins (not counting additional power and ground pins), so several thousand TSV are needed for high-performance stacked DRAM today, while in more advanced memory architectures the count could go up by another factor of 10. For second-generation wide-IO memory (Wide-IO 2), the silicon consumed by IO circuitry is maybe 6 mm2 today, such that a 3D stack with shorter vertical connections would eliminate many of the drivers on the chip and would allow scaling of the micro-bumps to perhaps save a total of 4 mm2 in silicon area. 3D stacks provide such trade-offs between design and performance, so the best results are predicted for 3DICs where the partitioning can be re-done at the gate or transistor level. For example, a modern 8-core microprocessor could have over 50% of its silicon area consumed by L3-cache-memory and IO circuitry, and moving from 2D to 3D would reduce total wire-lengths and interconnect power consumption by >50%.

There are inherent thresholds, based on the height:width aspect ratio (H:W), that determine the costs and challenges of TSV process integration (encoded in the short sketch after this list):

-    10:1 ratio is the limit for the use of relatively inexpensive physical vapor deposition (PVD) for the Cu barrier/seed (B/S),

-    20:1 ratio is the limit for the use of atomic-layer deposition (ALD) for B/S and electroless deposition (ELD) for Cu fill with 1.5 x 30 micron vias on the roadmap for the far future,

-    30:1 ratio and greater is unproven as manufacturable, though novel deposition technologies continue to be explored.
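A minimal sketch encoding the thresholds above; the thresholds come from the list, and the function is just a convenience wrapper around them:

    # Sketch: classify TSV barrier/seed and fill options by aspect ratio.
    def tsv_integration_options(width_um, height_um):
        ratio = height_um / width_um
        if ratio <= 10:
            return ratio, "PVD barrier/seed (relatively inexpensive)"
        if ratio <= 20:
            return ratio, "ALD barrier/seed + electroless (ELD) Cu fill"
        return ratio, "unproven as manufacturable; novel deposition required"

    for w, h in [(5, 50), (1.5, 30), (1, 35)]:
        ratio, option = tsv_integration_options(w, h)
        print(f"{w} x {h} um via -> {ratio:.0f}:1 -> {option}")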

TSV Processing Results

The researchers at imec have evaluated different ways of connecting TSV to underlying silicon, and have determined that direct connections to micro-bumps are inherently superior to the use of any re-distribution layer (RDL) metal. Consequently, there is renewed effort on scaling micro-bump pitches to be able to match up with TSV. The standard minimum micro-bump pitch today of 40 micron has been shrunk to 20 micron, and imec is now working on 10 micron with plans to go to 5 micron. While it may not help with TSV connections, an RDL layer may still be needed in the final stack, and the Cu metal over-burden from TSV filling has been shown by imec to be sufficiently reproducible to be used as the RDL metal. The silicon surface area covered by TSV today is a few percent, not tens of percent, since the wiring level is global or semi-global.

Regarding the trade-offs between die-to-wafer (D2W) and wafer-to-wafer (W2W) stacking, D2W seems advantageous for most near-term solutions because of easier design and superior yield. D2W design is easier because the top die can be arbitrarily smaller silicon, instead of the identically sized chips needed in W2W stacks. Assuming the same defectivity levels in stacking, D2W yield will almost always be superior to W2W because of the ability to use strictly known-good-die. Still, there are high-density integration concepts out on the horizon that call for W2W stacking. Monolithic 3D (M3D) integration using re-grown active silicon instead of TSV may still be used in the future, but design and yield issues will be at least comparable to those of W2W stacking.

Beyne mentioned that during the recent ECTC 2015, EV Group showed impressive 250nm overlay accuracy on 450mm wafers, proving that W2W alignment at the next wafer size will be sufficient for 3D stacking. Beyne is also excited by the fact that at this year’s ECTC there was “strong interest in thermo-compression bonding, with 18 papers from leading companies. It’s something that we’ve been working on for many years for die-to-wafer stacking, while people had mistakenly thought that it might be too slow or too expensive.”

Thermal issues for high-performance circuitry remain a potential issue for 3D stacking, particularly when working with finFETs. In 2D transistors the excellent thermal conductivity of the underlying silicon crystal acts like a built-in heat-sink to diffuse heat away from active regions. However, when 3D finFETs protrude from the silicon surface the main path for thermal dissipation is through the metal lines of the local interconnect stack, and so finFETs in general and stacks of finFETs in particular tend to induce more electro-migration (EM) failures in copper interconnects compared to 2D devices built on bulk silicon.

3D Designs and Cost Modeling

At a recent Northern California Chapter of the American Vacuum Society (NCCAVS) PAG-CMPUG-TFUG Joint Users Group Meeting discussing 3D chip technology, held at SEMI Global Headquarters in San Jose, Jun-Ho Choy of Mentor Graphics Corp. presented on “Electromigration Simulation Flow For Chip-Scale Parametric Failure Analysis.” Figure 2 shows the results from use of a physics-based model for temperature- and residual-stress-aware void nucleation and growth. Mentor has identified new failure mechanisms in TSV that are based on coefficient of thermal expansion (CTE) mismatch stresses. Large stresses can develop in lines near TSV during subsequent thermal processing, and the stress levels are layout-dependent. In the worst cases the combined total stress can exceed the critical level required for void nucleation before any electrical stressing is applied. During electrical stress, EM voids were observed to initially nucleate under the TSV centers at the landing-pad interfaces, even though these are the locations of minimal current-crowding; explaining this requires proper modeling of CTE-mismatch-induced stresses.

Fig. 2: Calibration of an Electronic Design Automation (EDA) tool allows for accurate prediction of transistor performance depending on distance from a TSV. (Source: Mentor Graphics)

Planned for July 16, 2015 at SEMICON West in San Francisco, a presentation on “3DIC Technology Past, Present and Future” will be part of one of the side Semiconductor Technology Sessions (STS). Ramakanth Alapati, Director of Packaging Strategy and Marketing, GLOBALFOUNDRIES, will discuss the underlying economic, supply chain and technology factors that will drive productization of 3DIC technology as we know it today. Key to understanding the dynamic of technology adaptation is using performance/$ as a metric.

Process Watch: Sampling matters

Monday, September 15th, 2014

By David W. Price and Douglas G. Sutherland

Author’s Note: Sampling Matters is the second in a series of 10 installments that explore the fundamental truths about process control—defect inspection and metrology—for the semiconductor industry. Each article introduces one of the 10 fundamental truths and highlights its implications.

In the previous installment we discussed the importance of a process control strategy being capable (i.e. able to identify the problems that are limiting a factory’s baseline yield). After a capable strategy is in place, a factory can turn to optimizing the strategy to make it cost-effective—ensuring that the factory achieves maximum return on its investment. In most cases, this optimization is made through the introduction of sampling.

In general, process control sampling takes into account:

  • the number of measurement sites per wafer for metrology (or wafer area for defect inspection)
  • the number of wafers per lot that are measured
  • the percentage of measured lots

In this article, it is assumed that the first two sampling components—sites per wafer and wafers per lot—are part of your capability strategy (addressed in the previous article), and that the word sampling refers simply to the percentage of lots measured.

Sampling is a concept unique to process control—you can’t sub-sample wafers to be etched, for example. The degree to which a factory will sample is based on the probability, projected from historical data, that an excursion will occur, and on the potential impact of that excursion. Determining an optimum sampling strategy comes down to weighing the cost of process control against the benefit of capturing the defect or other excursion in a timely manner.

The second fundamental truth of process control for the semiconductor IC industry is:

It is always more cost-effective to over-inspect than to under-inspect

For simplicity, let’s assume that the probability that the process control tool will detect the excursion is fixed. That means that we’re neglecting the fact that some types of excursions are easier to detect than others. We will cover this topic in a later series installment.

The economic impact of an excursion is a function of the number of exposed lots per excursion, the number of excursions per year, the yield loss per excursion, and the cost of the twenty-five wafers in each exposed lot. These quantities can be estimated and plotted as a function of the sampling rate (Figure 1a). Lower sampling leads to more yield loss when an excursion occurs, simply because there are more lots between inspections. The excursion cost is the reciprocal of the sampling rate—very high at low sampling and decreasing as 1/x as the sampling rate, x, increases.

Figure 1a

The capital cost of sampling is a function of the process control tool price, the resources used to operate it and the tool’s throughput and sensitivity. These quantities are estimated and plotted as a function of a sampling rate in Figure 1b. The capital costs increase linearly with the sampling rate because more tool capacity is required as the sampling rate increases.

Figure 1b

The total cost is the sum of the excursion and the capital cost, and the shape of the resulting curve is depicted in Figure 1c. A minimum in this curve indicates an optimum sampling rate—the point of lowest total cost. When there is no minimum, the optimum sampling rate is 100 percent, a scenario that frequently occurs for metrology.

Figure 1c

At the optimum sampling rate, any reduction in sampling would result in more yield loss than capital savings, and any increase in sampling would result in more capital cost than the prevented yield loss (see Figure 1c). While the overall shape of the curve is governed mainly by the lot sampling rate, with other input assumptions shifting the minimum left or right, the curve consistently delivers the message that under-sampling is riskier (i.e. more costly) than over-sampling. Measuring too few sites per wafer, wafers per lot, or lots per run is a high-cost way of running a manufacturing line.

In this model, the optimum sampling rate is proportional to the square root of the excursion cost and inversely proportional to the square root of the capital cost. In other words, a stable process does not need to be measured as frequently as a process having frequent excursions, but the relationship is not linear. If you were to decrease the excursion frequency by a factor of four, then you would be able to cut the sampling rate by a factor of two, to retain the minimum total cost.

Because of the mathematical nature of this curve, the total cost to the right of the optimum (higher sampling) is always less risky than being off by the same amount to the left of the optimum (lower sampling). In other words:

Total cost(x0 + Δx) < Total cost(x0 − Δx)

where x0 is the sampling rate with the lowest total cost.
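The article’s cost model can be written down directly: excursion cost falls as A/x, capital cost rises as B·x, so total cost C(x) = A/x + B·x has its minimum at x0 = sqrt(A/B), which also gives the square-root scaling and the asymmetry described above. The dollar values in this Python sketch are invented:

    # Sketch: the total-cost curve and its optimum sampling rate.
    import math

    A = 4.0e6   # excursion-cost parameter (invented)
    B = 1.0e6   # capital-cost parameter (invented)

    def total_cost(x):
        return A / x + B * x

    x0 = math.sqrt(A / B)                      # optimum sampling rate
    print("optimum sampling rate:", x0)        # 2.0 in these arbitrary units

    # Asymmetry: over-sampling by dx costs less than under-sampling by dx.
    dx = 1.0
    print(total_cost(x0 + dx) < total_cost(x0 - dx))  # True

    # Square-root scaling: 4x fewer excursions (A/4) halves the optimum rate.
    print(math.sqrt((A / 4) / B))              # 1.0, half of x0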

There are other reasons to err on the side of over-sampling. Data is usually cheaper than the experienced engineering expertise required to draw conclusions. In a time-to-market driven environment such as the semiconductor industry, you cannot allow your decision-makers to be data-starved; the data needs to be readily available to drive appropriate corrective actions. Indeed, of all the inputs required for baseline yield learning—engineering talent, inspection data, processed wafers—the inspection data is the least expensive.

About the authors:

Dr. David W. Price and Dr. Douglas Sutherland are senior director and principal scientist, respectively, at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with more than 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements. For further exploration of how sampling theory can be used to optimize inspection or metrology process control for your specific situation, please contact the authors of this article.

Blog review August 4, 2014

Monday, August 4th, 2014

Innovation is alive and well in the semiconductor industry. That was a key takeaway from the strategic investor panel at the second annual Silicon Innovation Forum at SEMICON West, and one I can’t reinforce enough within the venture capital (VC) community. Eileen Tanghal of Applied Materials reports.

At SEMICON West this year, in Thursday morning’s Yield Breakfast sponsored by Entegris, top executives from Qualcomm, GlobalFoundries, and Applied Materials discussed the challenges to achieving profitable fab yield for atomic-scale devices. In his blog, Ed Korczynski reports on what was discussed.

Phil Garrou blogs that Apple has acquired 24 tech companies in the last 18 months. Recently, Apple acquired LuxVue, a start-up focused on low-power micro-LED displays. Although Apple has not disclosed any details of the acquisition, not even the purchase price, one can easily envision where micro-LED displays could play a big part in Apple’s thrust into wearable electronics such as the i-watch, Phil says.

Adele Hars continued her report on the SOI papers at the VLSI Symposia in this Part 2 installment. The VLSI Symposia—one on technology and one on circuits—are among the most influential conferences in the semiconductor industry.

Vivek Bakshi created an EUV stir, blogging about IBM’s NXE:3300B scanner at the EUV Center of Excellence in Albany, which recently completed a “40W” EUV light source upgrade. The upgrade resulted in better-than-projected performance, with 44W of EUV light measured at intermediate focus and confirmed in resist at the wafer level.

Blog review February 24, 2014

Monday, February 24th, 2014

Paul Farrar, general manager of the G450C consortium, said early work has demonstrated good results and that he sees no real barriers to implementing 450mm wafers from a technical standpoint. But as Pete Singer blogs, Farrar also said: “In the end, if this isn’t cheaper, no one is going to do it.”

Adele Hars of Advanced Substrate News reports that body-biasing design techniques, uniquely available in FD-SOI, have allowed STMicroelectronics and CEA-Leti to demonstrate a DSP that runs 10x faster than anything the industry’s seen before at ultra-low voltages.

Dr. Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering, ProPlus Design Solutions, Inc., says the move to state-of-the-art 28nm/20nm planar CMOS and 16nm FinFET technologies presents greater challenges to yield than any previous generation. This is putting more emphasis on high-sigma yield.

Jamie Girard, senior director, North America Public Policy, SEMI, writes that President Obama touched on many different policy areas during his State of the Union talk, and specifically mentioned a number of issues that are of top concern in the industry and with SEMI member companies. Among these are funding for federal R&D, including public-private partnerships, trade, high-skilled immigration reform, and solar energy.

Phil Garrou finishes his look at the IEEE 3DIC meeting, with an analysis of presentations from Tohoku University, Fujitsu’s wafer-on-wafer (WOW), ASE/Chiao Tung University and RTI. In another blog, Phil continues his review of the Georgia Tech Interposer conference, highlighting presentations from Corning, Schott Glass, Asahi Glass, Shinko, Altera, Zeon and Ushio.

Pete Singer recommends taking the new survey by the National Center for Manufacturing Sciences (NCMS) but you may first want to give some thought as to what is and what isn’t “nanotechnology.”
