Posts Tagged ‘HVM’


EUVL Materials Readiness for HVM

Friday, June 2nd, 2017


By Ed Korczynski, Sr. Technology Editor

Extreme-Ultra-Violet Lithography (EUVL)—based on ~13.5nm wavelength EM waves bouncing off mirrors in a vacuum—will finally be used in commercial IC fabrication by Intel, Samsung, and TSMC starting in 2018. In a recent quarterly earnings call, ASML reported a backlog of orders for 21 EUVL tools. At the 2017 SPIE Advanced Lithography conference, presentations detailed how the source, mask, and resist are all near targets for next year, while the mask pellicle still needs work. Actinic metrology for mask inspection remains a known and expensive issue to solve.

Figure 1 shows minimum-pitch line/space grids and contact-hole arrays patterned with EUVL at global R&D hub IMEC in Belgium, as presented at the recent 2017 IMEC Technology Forum. While there is no way with photolithography to escape the trade-offs of the Resolution/Line-Width-Roughness/Sensitivity (RLS) triangle, patterning at the leading edge of possible pitches requires application-specific etch integration. The SEMs in the bottom row of this figure show dramatic improvements in LWR from atomic-scale etch and deposition treatments of patterned sidewalls.

Fig.1: SEM plan-view images of minimum pitch Resolution and Line-Width-Roughness and Sensitivity (RLS) for both Chemically-Amplified Resist (CAR) and Non-Chemically-Amplified Resist (NCAR, meaning metal-oxide solution from Inpria) formulations, showing that excessive LWR can be smoothed by various post-lithography deposition/etch treatments. (Source: IMEC)

As an indication of continued maturity, ASML recently claimed that its NXE:33x0 steppers have collectively surpassed one million processed wafers to date, counting only correctly exposed wafers. During the company’s 1Q17 earnings call, it reported that three additional orders for NXE:3400B steppers were received in Q1, bringing the backlog to a total of 21 systems, worth nearly US$2.5B.

At $117M per NXE:3400B, a 10-year useful life implies depreciation of roughly $32,000 per day; assuming 18 productive hours/day at 80 wafers/hour, that works out to ~$22 per wafer-pass for tool depreciation alone. In comparison, a $40M argon-fluoride immersion (ArFi) stepper over ten years, at 21 available hours/day and 240 wafers/hour, costs ~$2.20 per wafer-pass in depreciation. EUVL will always be an expensive high-value-add technology, even though a single EUVL exposure can replace 4-5 ArFi exposures.
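The arithmetic behind those per-wafer depreciation figures is easy to reproduce; here is a minimal sketch using only the prices, lifetimes, and throughputs quoted above (straight-line depreciation assumed, as in the article’s estimate):

```python
# Straight-line tool depreciation per wafer-pass, using the figures
# quoted in the article; a sanity-check sketch, not ASML/fab data.

def cost_per_wafer_pass(tool_price_usd, life_years, hours_per_day, wafers_per_hour):
    cost_per_day = tool_price_usd / (life_years * 365)
    wafers_per_day = hours_per_day * wafers_per_hour
    return cost_per_day / wafers_per_day

euvl = cost_per_wafer_pass(117e6, 10, 18, 80)   # NXE:3400B assumptions
arfi = cost_per_wafer_pass(40e6, 10, 21, 240)   # ArFi stepper assumptions

print(f"EUVL depreciation: ${euvl:,.2f} per wafer-pass")  # ~$22
print(f"ArFi depreciation: ${arfi:,.2f} per wafer-pass")  # ~$2.2
```

Note the roughly 10:1 ratio: even when one EUVL exposure replaces 4-5 ArFi exposures, depreciation alone keeps EUVL a premium-priced step.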

Fabs that delay use of EUVL at the leading edge of device scaling will instead have to buy and facilitize many more ArFi tools, demanding more fab space and more optical-lithography gases. SemiMD spoke with Paul Stockman, Linde Electronics’ Head of Market Development, about the global supply of specialty neon and xenon gas blends: “Xenon is only a ppm-level component of the neon blend for Kr and Ar lasers, so there should be no concerns with xenon supply for the industry. In our modeling we’ve realized the impact of multi-patterning on gas demand, and we’ve assumed that the industry would need multi-patterning in our forecasts,” said Stockman.

“From the Linde perspective, we manage supply carefully to meet anticipated customer demand,” reminded Stockman. “We recently added 40 million liters of neon capacity in the US, and continue to add significant supply with partners so that we can serve our customers regardless of the EUV scenario.” (Editor’s note: reported by SemiMD here.)

At SPIE Advanced Lithography 2017, SemiMD discussed multi-patterning process flows with Uday Mitra and Regina Freed of Applied Materials. “We need a lot of materials engineering now,” explained Freed. “We need new gap-fills and hard-masks, and we may need new materials for selective deposition. Regarding the etch, we need extreme selectivity with no damage, and ability to get into the smallest features to take out just one atomic layer at a time.”

Reminding us that IC fabs must be risk-averse when considering technology options, Mitra (formerly with Intel) commented, “You don’t do a technology change and a wafer size change at the same time. That’s how you risk manage, and you can imagine with something like EUVL that customers will first use it for limited patterning and check it out.”

Figure 2 lists the major issues in pattern-transfer using plasma etch tools, along with the process variables that must be controlled to ensure proper pattern fidelity. Applied Materials’ Sym3 etch chamber features hardware that provides pulsed energy at dual frequencies along with low residence time of reactant byproducts to allow for precise tuning of process parameters no matter what chemistry is needed.

Fig.2: Patterning issues and associated etch process variables which can be used for control thereof. (Source: Applied Materials)

Andrew Grenville, CEO of resist supplier Inpria, in an exclusive interview with SemiMD, commented on the infrastructure readiness for EUVL volume production. “We are building up our pilot line facility in Corvallis, Oregon. The timing for that is next year, and we are putting in place plans to continue to scale up the new materials at the same time as the quality-control systems, such as functional QC.” End-users are asking for quality-control checks of more parameters, putting a burden on suppliers to invest in more metrology tools and even to develop new measurement techniques. Inpria’s resist is based on SnOx nanoparticles, which provide excellent etch resistance even in layers as thin as 20nm, but which required the development of a new technique to measure ppb levels of trace metals in the presence of high tin signals.

“We believe that there is continued opportunity for improvement in the overall patterning performance based on the ancillaries, particularly in simplifying the under-layers. One of the core principles of our material is that we’re putting the ‘resist’ back in the resist,” enthused Grenville. “We can show the etch contrast of our material can really improve the Line-Width Roughness of the patterns because of what you can do in etch, and it’s not merely smoothing the resist. We can substantially improve the outcome by engineering the stack and the etch recipe using completely different chemistry than could be used with chemically-amplified resist.”

The 2017 EUVL Workshop (2017 International Workshop on EUV Lithography) will be held June 12-15 at The Center for X-ray Optics (CXRO) at Lawrence Berkeley National Laboratory in Berkeley, CA. This workshop, now in its tenth year, is focused on the fundamental science of EUV Lithography (EUVL). Travel and hotel information, as well as on-line registration, are available at https://euvlitho.com/.

[DISCLOSURE:  Ed Korczynski is also Sr. Analyst for TECHCET responsible for the Critical Materials Report (CMR) on Photoresists, Extensions & Ancillaries.]

—E.K.

Vital Control in Fab Materials Supply-Chains – Part 2

Thursday, February 16th, 2017

By Ed Korczynski, Sr. Technical Editor

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6, 2016 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12, 2017 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation among these industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jeff Hemphill, Staff Materials R&D Engineer, Intel Corporation,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

FIGURE 1: 2016 CMC Conference expert panelists (from left to right) John Smythe, Jonas Sundqvist, Jeff Hemphill, and Jean-Marc Girard. (Source: TECHCET CA)

KORCZYNSKI:  We heard from David Thompson [EDITOR’S NOTE:  Director of Process Chemistry, Applied Materials presented on “Agony in New Material Introductions -  Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD:  Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with it, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both, it’s not assay OR a specification of impurities. It’s a matter of specifying the trace components that really matter when you reach the point that the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL:  Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it, and in a sense trying to over-specify that. If you identify through mass-spectrometry and other techniques that some fraction of a percent is primarily, say, five different species, then it’s a matter of finding out how to individually monitor, track, and control those as separate parameters. So from a specification point of view, what we want is not necessarily the lowest possible numbers, but to expand how many things we’re looking at so that we’re capturing everything that’s there.

KORCZYNSKI:  Is that something that you’re starting to push out to your suppliers?

HEMPHILL:  Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.

KORCZYNSKI:  Is that something that has changed by node? Back when things were simpler say at 45nm and larger, were these aspects of processing that we could safely ignore as ‘noise’ but are now important ‘signals’?

HEMPHILL:  Yes, we certainly didn’t pay as close attention just a couple of generations ago.

KORCZYNSKI:  That seems to lead us to questions about single-sourcing versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that because of all of the challenges we’ve heard about over the last day-and-a-half of this conference, there is a greater burden on the suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE:  I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So, for example, I need to ask whether a supplier has a place in the US and a place in France that makes the same thing, so that if something bad happens in one location the material can still be sourced. Or do you have an alternate-supply agreement, so that if you can’t supply it you have an agreement with Company-X to supply it and you still have control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source, but it’s ‘single flexible-source.’

HEMPHILL:  There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the concept of being ‘copy-exactly’ that we use in our factories, we really need production lines to do that, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same, then we have a high degree of confidence that they will be able to produce the same material.

FIGURE 2: Jean-Marc Girard, CTO and Director of R&D of Air Liquide Advanced Materials, provided the supplier perspective. (Source: SEMI)

GIRARD:  I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When the U.S. ports went on strike for a long time, there was no way that material could ship out of the U.S. to customers. So, yes, there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines, and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Vital Control in Fab Materials Supply-Chains

Wednesday, January 25th, 2017

By Ed Korczynski, Sr. Technical Editor

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6, 2016 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12, 2017 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation between the following industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI:  Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD:  For determining the specifications, to some extent it doesn’t matter, because we are out of the world of specs: what matters is the control-limits. To Tim Hendry’s point in the keynote yesterday [EDITOR’S NOTE: Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should instead try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability, so that you know the process space in which you can play. That is the most important information for reaching the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.

FIGURE 1: Jonas Sundqvist, Sr. Scientist of Fraunhofer IKTS, discusses collaboration with industry on application-specific ALD R&D. (Source: TECHCET CA)

SUNDQVIST:  Take the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and it would not matter if there was slight contamination in the precursors, because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely, and film thicknesses have already been scaled down, so over time the specification for high-k precursors has become more important.

SMYTHE:  I think it comes down to the premise that when you are doing vapor transport through a bubbler, some would argue that it’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management, not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires that ‘fingerprinting’ be done to show the same results. The argument is that you do not accept “less-than” as part of a specification; you only accept what it is.

AUDIENCE QUESTION:  The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get adsorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash out a lot of the value of really pure chemicals, which move through a delivery system into a chamber and pick up contaminants that you spent a whole lot of money taking out at the point of delivery. What do you think about that?

GIRARD:  Well, this is a ‘crisis!’ When something like this starts to happen in a fab, or even during the development cycles, you can’t prioritize resources and approaches; you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST:  We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts, good process stability, and an ideal thermal processing range, but if the growth rate goes down by 20% then you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE:  What we’re doing a lot more these days is chamber fingerprinting, where we’re putting a quadrupole-filtered mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you model your chemistry moving through a delivery line using something like the Schrödinger software, it’s not a big deal to see that you can use the mass-spec to detect some synthesis happening in the line. We joke and call it ‘point-of-use synthesis’ but it’s not very funny. We are used to having spare delivery lines built in, so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI:  John, since Micron has fabs in Lehi and fabs in Singapore and other places, and while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE:  What we really need are a few good ocean-going hydrofoil ships! The most complete answer is that we first identify which things need ‘batch-qual,’ so that if we do a batch-qual in Virginia and know that material is going to Taiwan, we have confidence it will pass batch-qual in Taiwan. There are certain materials for which we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch-qual.

I think most of you are familiar with the concept of ‘ship-to-stock’: when you have enough good statistical history and a good change-management process with the supplier, then you can do ship-to-stock, and that reduces the batch-qual overhead. On a case-by-case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that, in concentrations above 5 ppm, prevented the poly-styrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push toward counting atoms.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

High-NA EUV Lithography Investment

Monday, November 28th, 2016


By Ed Korczynski, Sr. Technical Editor

As covered in a recent press release, leading lithography OEM ASML invested EUR 1 billion in cash to buy 24.9% of ZEISS subsidiary Carl Zeiss SMT, and committed to spend EUR ~760 million over the next 6 years on capital expenditures and R&D for an entirely new high numerical aperture (NA) extreme ultra-violet (EUV) lithography tool. Targeting NA >0.5 to print 8 nm half-pitch features, the planned tool will use anamorphic mirrors to reduce shadowing effects from nanometer-scale mask patterns. Clever design and engineering of the mirrors could allow this new NA >0.5 tool to achieve wafer throughputs similar to ASML’s current generation of 0.33 NA tools for the same source power and resist speed.

The Numerical Aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. Higher-NA systems can resolve finer features by collecting light over a wider range of angles. Mirror surfaces that reflect EUV “light” are made from over 50 atomic-scale bi-layers of molybdenum (Mo) and silicon (Si), and increasing the width of mirrors to reach higher NA increases the angular spread of the light, which results in shadows within patterns.
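For reference, the resolution benefit follows the standard Rayleigh scaling relation (a textbook lithography formula, not quoted in the article), in which half-pitch $HP$ depends on wavelength $\lambda$, numerical aperture $NA$, and a process-dependent factor $k_1$:

$$ HP = k_1\,\frac{\lambda}{NA} \qquad\Rightarrow\qquad HP \approx 0.33 \times \frac{13.5\,\mathrm{nm}}{0.55} \approx 8\,\mathrm{nm} $$

Here $NA = 0.55$ and $k_1 \approx 0.33$ are illustrative values consistent with the planned NA >0.5 tool and single-exposure EUV processing.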

In the proceedings of last year’s European Mask and Lithography Conference, Zeiss researchers reported on  “Anamorphic high NA optics enabling EUV lithography with sub 8 nm resolution” (doi:10.1117/12.2196393). The abstract summarizes the inherent challenges of establishing high NA EUVL technology:

For such a high-NA optics a configuration of 4x magnification, full field size of 26 x 33 mm² and 6’’ mask is not feasible anymore. The increased chief ray angle and higher NA at reticle lead to non-acceptable mask shadowing effects. These shadowing effects can only be controlled by increasing the magnification, hence reducing the system productivity or demanding larger mask sizes. We demonstrate that the best compromise in imaging, productivity and field split is a so-called anamorphic magnification and a half field of 26 x 16.5 mm² but utilizing existing 6’’ mask infrastructure.

Figure 1 shows that ASML plans to introduce such a system after the year 2020, with a throughput of 185 wafers-per-hour (wph) and with overlay of <2 nm. Hans Meiling, ASML vice president of product management EUV, in an exclusive interview with Solid State Technology explained why >0.5 NA capability will not be upgradable on 0.33 NA tools, “the >0.5NA optical path is larger and will require a new platform. The anamorphic imaging will also require stage architectural changes.”

Fig.1: EUVL stepper product plans for wafers per hour (WPH) and overlay accuracy include change from 0.33 NA to a new >0.5 NA platform. (Source: ASML)

Overlay of <2 nm will be critical when patterning 8nm half-pitch features, particularly when stitching lines together between half-fields patterned by single-exposures of EUV. Minimal overlay is also needed for EUV to be used to cut grid lines that are initially formed by pitch-splitting ArFi. In addition to the high NA set of mirrors, engineers will have to improve many parts of the stepper to be able to improve on the 3 nm overlay capability promised for the NXE:3400B 0.33 NA tool ASML plans to ship next year.

“Achieving better overlay requires improvements in wafer and reticle stages regardless of NA,” explained Meiling. “The optics are one of the many components that contribute to overlay. Compare to ArF immersion lithography, where the optics NA has been at 1.35 for several generations but platform improvements have provided significant overlay improvements.”

Manufacturing Capability Plans

Figure 2 shows that anamorphic systems require anamorphic masks, so moving from 0.33 to >0.5 NA requires re-designed masks. For relatively large chips, two adjacent exposures with two different anamorphic masks will be needed to pattern the same field area which could be imaged with lower resolution by a single 0.33 NA exposure. Obviously, such adjacent exposures of one layer must be properly “stitched” together by design, which is another constraint on electronic design automation (EDA) software.

Fig.2: Anamorphic >0.5 NA EUVL system planned by ASML and Zeiss will magnify mask images by 4x in the x-direction and 8x in the y-direction. (Source: Carl Zeiss SMT)
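The field-size arithmetic behind the anamorphic choice can be checked from the numbers in the Zeiss abstract quoted above, assuming a standard 6-inch (152mm) mask with a usable image area of roughly $104 \times 132\,\mathrm{mm}^2$ (an infrastructure figure assumed here, not stated in the article):

$$ \text{uniform } 8\times:\quad (26 \times 8) \times (33 \times 8) = 208 \times 264\,\mathrm{mm}^2 \quad\text{(far exceeds a 6-inch mask)} $$

$$ \text{anamorphic } 4\times/8\times:\quad (26 \times 4) \times (16.5 \times 8) = 104 \times 132\,\mathrm{mm}^2 \quad\text{(fits a 6-inch mask)} $$

Halving the field in the 8x direction is exactly what lets the existing mask infrastructure be reused.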

Though large chips will require twice as many half-field masks, use of anamorphic imaging somewhat reduces the challenges of mask-making. Meiling reminds us that, “With the anamorphic imaging, the 8X direction conditions will actually relax, while the 4X direction will require incremental improvements such as have always been required node-on-node.”

ASML and Zeiss report that holes which “obscure” the centers of the mirrors can surprisingly allow for increased transmission of EUV by each mirror, up to twice that of the “unobscured” mirrors in the 0.33 NA tool. The holes let the optical path pass through the mirrors themselves, so they can be aligned to reflect more efficiently. Theoretically, each >0.5 NA half-field could then be exposed twice as fast as a 0.33 NA full-field, though some system throughput loss seems inevitable: stepping twice as many fields across the wafer must slow throughput by some percentage.

While stitching two side-by-side >0.5 NA EUVL exposures will be challenging, the generally known alternatives seem likely to provide only lower throughputs and lower yields:

*   Double-exposure of full-field using 0.33 NA EUVL,

*   Octuple-exposure of full-field using ArFi, or

*   Quadruple-exposure of full-field using ArFi complemented by e-beam direct-writing (EbDW) or by directed self-assembly (DSA).

One ASML EUVL system for HVM is expected to cost ~US$100 million. As presented at the company’s October 31st Investor Day this year, ASML’s modeling indicates that a leading-edge logic fab running ~45k wafer starts per month (WSPM) would need to purchase 7-12 EUV systems to handle an anticipated 6-10 EUV layers within “7nm-node” designs. Assuming that each tool will cost >US$100 million, a leading logic fab would have to invest ~US$1 billion to be able to use EUV for critical lithography layers.
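ASML’s 7-12 tool range can be loosely sanity-checked with a simple capacity model; every parameter below (effective throughput, availability, operating hours) is an illustrative assumption, not ASML’s published modeling:

```python
# Rough EUV tool-count estimate: EUV exposures needed per month
# divided by exposures one tool can deliver per month.
# All parameter values are illustrative assumptions.
import math

def euv_tools_needed(wspm, euv_layers, wph, availability=0.70,
                     hours_per_day=24, days_per_month=30):
    exposures_needed = wspm * euv_layers
    exposures_per_tool = wph * hours_per_day * availability * days_per_month
    return math.ceil(exposures_needed / exposures_per_tool)

# 45k wafer-starts/month; 6-10 EUV layers; assume ~85 wph effective
for layers in (6, 10):
    print(layers, "EUV layers ->", euv_tools_needed(45_000, layers, wph=85), "tools")
# -> 7 tools at 6 layers, 11 tools at 10 layers: in line with 7-12
```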

With near US$1 billion in capital investments needed to begin using EUVL, HVM fabs want to be able to get productive value out of the tools over more than a single IC product generation. If a logic fab invests US$1 billion to use 0.33 NA EUVL for the “7nm-node” there is risk that those tools will be unproductive for “5nm-node” designs expected a few years later. Some fabs may choose to push ArFi multi-patterning complemented by another lithography technology for a few years, and delay investment in EUVL until >0.5 NA tools become available.

—E.K.

Multibeam Patents Direct Deposition & Direct Etch

Monday, November 14th, 2016


By Ed Korczynski, Sr. Technical Editor

Multibeam Corporation of Santa Clara, California recently announced that its e-beam patent portfolio—36 filed and 25 issued—now includes two innovations that leverage the precision placement of electrons on the wafer to activate chemical processes such as deposition and etch. As per the company’s name, multi-column parallel processing chambers will be used to target throughputs usable for commercial high-volume manufacturing (HVM) though the company does not yet have a released product. These new patents add to the company’s work in developing Complementary E-Beam Lithography (CEBL) to reduce litho cost, Direct Electron Writing (DEW) to enhance device security, and E-Beam Inspection (EBI) to speed defect detection and yield ramp.

The IC fab industry’s quest to miniaturize circuit features has already reached atomic scales, and the temperature and pressure ranges found on the surface of our planet make atoms want to move around. We are rapidly leaving the known era of deterministic manufacturing, and entering an era of stochastic manufacturing in which nothing is completely determined, because atomic placements and transistor characteristics vary within distributions. In this new era, we will not be able to guarantee that two adjacent transistors will function the same, which can lead to circuit failures. Something new is needed: either we will have to use new circuit-design approaches that require more chip area, such as “self-healing” or extreme redundancy, or the world will have to inspect and repair transistors among the billions on every HVM chip.

In an exclusive interview with Solid State Technology, David K. Lam, Multibeam Chairman, said, “We provide a high-throughput platform that uses electron beams as an activation mechanism. Each electron-beam column integrates gas injectors, as well as sensors, which enable highly localized control of material removal and deposition. We can etch material in a precise location to a precise depth. Same with deposition.” Lam (Sc.D. MIT) was the founder and first CEO of Lam Research where he led development and market penetration of the IC fab industry’s first fully automated plasma etch system, and was inducted into the Silicon Valley Engineering Hall of Fame in 2013.

“Precision deposition using miniature-column charged particle beam arrays” (Patent #9,453,281) describes patterning of IC layers by either creating a pattern specified by the design layout database in its entirety or in a complementary fashion with other patterning processes. Reducing the total number of process steps and eliminating lithography steps in localized material addition has the dual benefit of reducing manufacturing cycle time and increasing yield by lowering the probability of defect introduction. Furthermore, highly localized, precision material deposition allows for controlled variation of deposition rate and enables creation of 3D structures such as finFETs and NanoWire (NW) arrays.

Deposition can be performed using one or more multi-column charged particle beam systems using chemical vapor deposition (CVD) alone or in concert with other deposition techniques. Direct deposition can be performed either sequentially or simultaneously by multiple columns in an array, and different columns can be configured and/or optimized to perform the same or different material depositions, or other processes such as inspection and metrology.

“Precision substrate material removal using miniature-column charged particle beam arrays” (Patent #9,466,464) describes localized etch using activation electrons directed according to the design layout database so that etch masks are no longer needed. Figure 1 shows that costs are reduced and edge placement accuracy is improved by eliminating or reducing errors associated with photomasks, litho steps, and hard masks. With highly localized process control, etch depths can vary to accommodate advanced 3D device structures.

Fig.1: Comparison of (LEFT) the many steps needed to etch ICs using conventional wafer processing and (RIGHT) the two simple steps needed to do direct etching. (Source: Multibeam)

“We aren’t inventing new etch chemistries, precursors or reactants,” explained Lam. “In direct etch, we leverage developments in reactive ion etching and atomic layer etch. In direct deposition, we leverage work in atomic layer deposition. Several research groups are also developing processes specifically for e-beam assisted etch and deposition.”

The company continues to invent new hardware, and the latest critical components are “kinetic lenses,” which are arrangements of smooth and rigid surfaces configured to reflect gas particles. When a kinetic lens is fixed in position with respect to a gas-injector outflow opening, gas particles directed at it are collimated or redirected (e.g., “focused”) towards a wafer surface or a gas detector. Generally, the surfaces of a kinetic lens can be thought of as similar to optical mirrors, but for gas particles. A kinetic lens can be used to improve localization on a wafer surface so as to increase the partial pressure of an injected gas in a target area. A kinetic lens can also be used to increase specificity and collection rate for a gas detector within a target frame.

Complementary Lithography

Complementary lithography is a cost-effective variant of multi-patterning where some other patterning technology is used with 193nm ArF immersion (ArFi) to extend the resolution limit of the latter. The company’s Pilot™ CEBL Systems work in coordination with ArFi lithography to pattern cuts (of lines in a “1D lines-and-cuts” layout) and holes (i.e., contacts and vias) with no masks. These CEBL systems can seamlessly incorporate multicolumn EBI to accelerate HVM yield ramps, using feedback and feedforward as well as die-to-database comparison.

Figure 2 shows that “1D” refers to a 1D gridded design rule. In a 1D layout, optical pattern design is restricted to lines running in a single direction, with features perpendicular to the 1D optical design formed in a complementary lithography step known as “cutting”. The complementary step can be performed using a charged-particle-beam lithography tool such as Multibeam’s array of electrostatically-controlled miniature electron-beam columns. Use of electron-beam lithography for this complementary process is also called complementary e-beam lithography, or CEBL. The company claims that for low pattern-density layers such as cuts, one multi-column chamber can provide 5 wafers-per-hour (wph) throughput.

Fig.2: Complementary E-Beam Lithography (CEBL) can be used to “cut” the lines within a 1D grid array previously formed using ArF-immersion (ArFi) optical steppers. (Source: Multibeam)

Direct deposition can be used to locally interconnect 1D lines produced by optical lithography. This is similar in design principle to complementary lithography, but without using a resist layer during the charged particle beam phase, and without many of the steps required when using a resist layer. In some applications, such as restoring interconnect continuity, the activation electrons are directed to repair defects that are detected during EBI.

—E.K.

Applied Materials Releases Selective Etch Tool

Wednesday, June 29th, 2016


By Ed Korczynski, Sr. Technical Editor

Applied Materials has disclosed commercial availability of new Selectra(TM) selective-etch twin-chamber hardware for the company’s high-volume manufacturing (HVM) Producer® platform. Using standard fluorine and chlorine gases already used in traditional Reactive Ion Etch (RIE) chambers, this new tool provides atomic-level precision in the selective removal of materials in the 3D device structures increasingly used for the most advanced silicon ICs. The tool is already in use at three customer fabs for finFET logic HVM, and at two memory-fab customers, with a total of >350 chambers planned to be shipped to many customers by the end of 2016.

Figure 1 shows a simplified cross-sectional schematic of the Selectra chamber, where the dashed white line indicates some manner of screening functionality so that “Ions are blocked, chemistry passes through,” according to the company. In an exclusive interview with Solid State Technology, a company representative declined to disclose hardware details. “We are using typical chemistries that are used in the industry,” explained Ajay Bhatnagar, managing director of Selective Removal Products for Applied Materials. “If there are specific new applications needed then we can use new chemistry. We have a lot of IP on how we filter ions and how we allow radicals to combine on the wafer to create selectivity.”

FIG 1: Simplified cross-sectional schematic of a silicon wafer being etched by the neutral radicals downstream of the plasma in the Selectra chamber. (Source: Applied Materials)

From first principles we can assume that the ion filtering is accomplished with some manner of electrically-grounded metal screen. This etch technology accomplishes similar process results to Atomic Layer Etch (ALE) systems sold by Lam, while avoiding the need for specialized self-limiting chemistries and the accompanying chamber throughput reductions associated with pulse-purge process recipes.

“What we are doing is being able to control the amount of radicals coming to the wafer surface and controlling the removal rates very uniformly across the wafer surface,” asserted Bhatnagar. “If you have this level of atomic control then you don’t need the self-limiting capability. Most of our customers are controlling process with time, so we don’t need to use self-limiting chemistry.” Applied Materials claims that this allows the Selectra tool to have higher relative productivity compared to an ALE tool.

Due to the intrinsic 2D resolutions limits of optical lithography, leading IC fabs now use multi-patterning (MP) litho flows where sacrificial thin-films must be removed to create the final desired layout. Due to litho limits and CMOS device scaling limits, 2D logic transistors are being replaced by 3D finFETs and eventually Gate-All-Around (GAA) horizontal nanowires (NW). Due to dielectric leakage at the atomic scale, 2D NAND memory is being replaced by 3D-NAND stacks. All of these advanced IC fab processes require the removal of atomic-scale materials with extreme selectivity to remaining materials, so the Selectra chamber is expected to be a future work-horse for the industry.

When the industry moves to GAA-NW transistors, alternating layers of Si and SiGe will be grown on the wafer surface, 2D patterned into fins, and then the sacrificial SiGe must be selectively etched to form 3D arrays of NW. Figure 2 shows the SiGe etched from alternating Si/SiGe stacks using a Selectra tool, with sharp Si corners after etch indicating excellent selectivity.

FIG 2: SEM cross-section showing excellent etch of SiGe within alternating Si/SiGe layers, as will be needed for Gate-All-Around (GAA) horizontal NanoWire (NW) transistor formation. (Source: Applied Materials)

“One of the fundamental differences between this system and old downstream plasma ashers, is that it was designed to provide extreme selectivity to different materials,” said Matt Cogorno, global product manager of Selective Removal Products for Applied Materials. “With this system we can provide silicon to titanium-nitride selectivity at 5000:1, or silicon to silicon-nitride selectivity at 2000:1. This is accomplished with the unique hardware architecture in the chamber combined with how we mix the chemistries. Also, there is no polymer formation in the etch process, so after etching there are no additional processing issues with the need for ashing and/or a wet-etch step to remove polymers.”
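The quoted selectivity ratios translate directly into how little of an exposed stop material is consumed during an etch; a minimal sketch of that arithmetic follows (the 20nm etch depth is a hypothetical example, not from the article):

```python
# Stop-layer loss for a given etch depth and selectivity
# (target:stop removal-rate ratio), using the quoted selectivities.

def stop_layer_loss_nm(etch_depth_nm, selectivity):
    return etch_depth_nm / selectivity

print(stop_layer_loss_nm(20, 5000))  # Si vs TiN at 5000:1 -> 0.004 nm
print(stop_layer_loss_nm(20, 2000))  # Si vs SiN at 2000:1 -> 0.01 nm
```

At these ratios the stop-layer loss for a 20nm etch is far below one atomic layer (roughly 0.25nm for silicon), which is what makes the atomic-level precision claim plausible.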

Systems can also be used to provide dry cleaning and surface-preparation due to the extreme selectivity and damage-free material removal.  “You can control the removal rates,” explained Cogorno. “You don’t have ions on the wafer, but you can modulate the number of radicals coming down.” For HVM of ICs with atomic-scale device structures, this new tool can widen process windows and reduce costs compared to both dry RIE and wet etching.

—E.K.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that the testability has also been designed to be functional, structurally, in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening uses very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold digital becomes analog; so while the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed keeping three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and the marshaling of the data they generate in ways that apply to multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st-generation of IoT devices likely include wide varieties of solution for different market-segments such as industrial vs. retail vs. consumer, or will most device use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also the churn-cycle in Consumer is higher / faster compared to Enterprise. Today’s wearables or smart-phones are good reference examples. This will however vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, where consumer home use has longer cycles closer to industrial market requirements. We believe that the lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across the practically infinite tradeoffs to be made.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) communications toward broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.
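Those figures compound quickly; a short sketch projecting the quoted IC Insights numbers forward at the stated growth rates (the 2020 horizon is chosen purely for illustration):

```python
# Compound-annual-growth projection of the IoT revenue figures
# quoted above (2015 base year; horizon is illustrative).

def project(base_billions, cagr, years):
    return base_billions * (1 + cagr) ** years

print(f"IoT systems in 2020: ${project(63.4, 0.20, 5):.0f}B")          # ~ $158B
print(f"IoT semiconductors in 2020: ${project(15.0, 0.211, 5):.0f}B")  # ~ $39B
```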

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low-cost and small-size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use 45nm/65nm nodes, which are “Node -3” (N-3) compared to the sub-20nm-node chips in HVM. Five years from now, when the bleeding edge will use 10nm-node technology, will IoT chips still use N-3 at the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end-nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP (ultra-low-power) and ULL (ultra-low-leakage) for lowest system power and low cost. These applications will typically be served by MCUs <50DMIPs. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecast to be a high-volume IoT application; the LED drivers will use BCD extensions of 130nm to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000DMIPs) running high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX, 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on high-resolution displays. Connectivity will include BLE, WiFi and cellular, with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with adequate digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and a RF-connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATE is generally focused on a few purposes and doesn't necessarily cover all elements in a system. We think IoT devices are likely to require complex test flows using multiple ATE platforms to assure adequate coverage. This is likely to prevail for some time, since the short-run volumes characteristic of IoT demand are unlikely to drive ATE suppliers to invest R&D dollars in new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as mechanical and electro-mechanical sensors become more deeply integrated, there is growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate device behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State-of-the-art MEMS modeling requires automatic generation of behavioral models from Finite Element Analysis (FEA) results using reduced-order modeling (ROM), a numerical methodology that distills the FEA results into Verilog-A models for analog/mixed-signal (AMS) co-simulation of the MEMS device in the context of the full IoT system.
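To make the ROM step concrete, here is a minimal, hypothetical sketch (not Mentor's actual tooling, and all numbers invented): FEA-style frequency-response samples of a MEMS proof mass are reduced to the three coefficients of a mass-spring-damper model, which map directly onto a Verilog-A behavioral description.

```python
# Minimal reduced-order-modeling (ROM) sketch: recover mass-spring-damper
# coefficients from FEA-style frequency-response samples of a MEMS proof
# mass. All numerical values are invented for illustration.
import numpy as np

f_hz = np.logspace(2, 5, 200)                   # 100 Hz .. 100 kHz sweep
w = 2.0 * np.pi * f_hz                          # rad/s

# Pretend these samples came from a finite-element harmonic analysis
# (displacement magnitude per unit force, |X/F|).
m0, c0, k0 = 1.0e-9, 2.0e-5, 40.0               # kg, N*s/m, N/m
h_fea = 1.0 / np.sqrt((k0 - m0 * w**2) ** 2 + (c0 * w) ** 2)

# |X/F|^-2 = k^2 + (c^2 - 2*k*m)*w^2 + m^2*w^4 is linear in three unknowns,
# so the reduced-order coefficients follow from one least-squares solve.
s = w / 1.0e5                                   # scaled frequency (conditioning)
basis = np.column_stack([np.ones_like(s), s**2, s**4])
coeff, *_ = np.linalg.lstsq(basis, h_fea**-2, rcond=None)

k = np.sqrt(coeff[0])                           # N/m
m = np.sqrt(coeff[2]) / 1.0e10                  # kg (undo frequency scaling)
c = np.sqrt(coeff[1] / 1.0e10 + 2.0 * k * m)    # N*s/m
print(f"fitted m={m:.2e} kg, c={c:.2e} N*s/m, k={k:.2e} N/m")
# These coefficients define m*x'' + c*x' + k*x = F(t), which transcribes
# directly into a Verilog-A 'ddt' expression for AMS co-simulation.
```

In a real flow the same fit would run on exported FEA data, and higher-order mechanical modes would simply add more second-order sections of this form.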

Molecular Modeling of Materials Defects for Yield Recovery

Monday, March 21st, 2016


By Ed Korczynski, Sr. Technical Editor

New materials are being integrated into High Volume Manufacturing (HVM) of semiconductor ICs, while old materials are being extended with more stringent specifications. Defects within materials cause yield losses in HVM fabs, and engineers must identify the specific source of an observed defect before corrective steps can be taken. Honeywell Electronic Materials has been using molecular modeling software from Scienomics both to develop new materials and to modify old ones. Modeling allowed Honeywell to uncover the origin of subtle solvation-based film defects within Bottom Anti-Reflective Coatings (BARC) that were degrading yield in a customer's lithographic process module.

Scienomics sponsored a Materials Modeling and Simulations online seminar on February 26th of this year, featuring Dr. Nancy Iwamoto of Honeywell, who discussed how Scienomics software was used to accelerate the response to a customer's manufacturing yield loss. "This was a product running at a customer line," explained Iwamoto, "and we needed to find the solution." The product was a Bottom Anti-Reflective Coating (BARC) organo-silicate polymer, delivered in solution form and spun onto wafers to a precise thickness.

Originally observed by fab engineers during optical inspection as faint 1-2 micron spots in the BARC, the new defect type was difficult to see yet correlated with lithographic yield loss. The defects appeared to be discrete within the film rather than on the top surface, so the source was likely some manner of particle, yet filters did not capture these particles.

The filters did capture some particles rich in silicon, as well as others rich in carbon. Sequential filtration showed that the defect-causing particles were passing through seemingly impossibly small pores, which suggested that they were built of deformable, gel-like phases. The challenge was to find the material-handling or processing situation that produced thermodynamically possible and kinetically probable conditions for forming such gels.

Fig: Materials Processes and Simulations (MAPS) gives researchers visualization and analysis tools in a single user interface, together with access to multiple simulation engines. (Source: Scienomics)

Molecular modeling and simulation is a powerful technique for materials design, functional upgrades, process optimization, and manufacturing support. The Figure shows a dashboard from Scienomics' modeling platform. Best practice in using molecular modeling to find out-of-control parameters in HVM follows a sequential workflow:

  • Build correct models based on experimental observables,
  • Simulate potential molecular structures based on known chemicals and hierarchical models,
  • Analyze manufacturing variabilities to identify excursion sources, and
  • Propose remedy for failure elimination.

Honeywell Electronic Materials researchers had very few experimental observables from which to start: the phenomenon was rare (yet affected yield), the particles were not filterable, and thermodynamic hydrolysis parameters indicated that the species had to be quasi-stable. Re-testing of product and re-examination of Outgoing Quality Control (OQC) data at the Honeywell production site showed that the molecular weight of the product was consistent with the desired distribution. There was also an observed on-wafer BARC thickness increase of ~1nm associated with the presence of these defects.

Using the modeling platform, Honeywell examined the solubility parameters of different small molecular chains branching off known backbone centers. Gel-like agglomerations could certainly form under the wrong conditions. Once formed, the agglomerations are not very stable, so they can plausibly dis-aggregate when forced through a filter and then re-aggregate on the other side.

What conditions could induce gel formation? After a few weeks of modeling, it was determined that temperature variations had the greatest influence on agglomeration, and that variability was strongest near the ~250 K recommended for storage. Storage at 230 K resulted in measurably worse agglomeration, and any extreme heating/cooling ramp rate tended to reduce solubility.
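The seminar did not publish the underlying models, but a toy sketch can illustrate the style of analysis: scan storage temperature, compute a Flory-Huggins-style interaction parameter from assumed solubility-parameter curves, and flag where agglomeration becomes favorable. Every number below is invented and is not Honeywell's chemistry.

```python
# Toy illustration only (invented values, not Honeywell's chemistry): scan
# storage temperature and flag where a solubility-parameter mismatch
# between oligomer and solvent favors gel-like agglomeration.
import numpy as np

T = np.linspace(220.0, 300.0, 81)                # storage temperature, K

# Assumed linear temperature dependence of Hildebrand parameters (MPa^0.5).
d_solvent = 18.0 + 0.020 * (T - 250.0)
d_oligomer = 19.6 - 0.015 * (T - 250.0)

# Flory-Huggins-style interaction parameter chi = Vm*(d1 - d2)^2/(R*T);
# chi > ~0.5 is the classical threshold for polymer-solvent demixing.
# (The 1e3 factor converts MPa^0.5 to Pa^0.5, so (d1-d2)^2 is in J/m^3.)
Vm, R = 2.5e-4, 8.314                            # m^3/mol (toy), J/(mol*K)
chi = Vm * ((d_solvent - d_oligomer) * 1.0e3) ** 2 / (R * T)

for Ti, ci in zip(T[::10], chi[::10]):
    status = "agglomeration risk" if ci > 0.5 else "stable"
    print(f"T = {Ti:5.1f} K   chi = {ci:4.2f}   {status}")
```

Consistent with the finding above, this toy model flags the cold end of the range (220-230 K) while remaining stable at and above the recommended ~250 K.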

Molecular modeling was used in a forensic manner to find that the root cause of the gel-like defects was thermal history:

  • Thermodynamics determined the most likely oligomers that could agglomerate, and
  • Temperature-dependent solubility models determined which particles would reach wafers.

Because of the ~1nm on-wafer BARC thickness increase, fab engineers could use the molecular modeling information to trace the temperature variation to bottles installed in the lithographic track tool. The fab then changed its specifications for storage and handling of the BARC bottles to bring the process back into control.

EUV Resists and Stochastic Processes

Friday, March 4th, 2016


By Ed Korczynski, Sr. Technical Editor

In an exclusive interview with Solid State Technology during SPIE-AL this year, imec Advanced Patterning Department Director Greg McIntyre said, “The big encouraging thing at the conference is the progress on EUV.” The event included a plenary presentation by TSMC Nanopatterning Technology Infrastructure Division Director and SPIE Fellow Anthony Yen on “EUV Lithography: From the Very Beginning to the Eve of Manufacturing.” TSMC is currently learning about EUVL using 10nm- and 7nm-node device test structures, with plans to deploy it for high volume manufacturing (HVM) of contact holes at the 5nm node. Intel researchers confirm that they plan to use EUVL in HVM for the 7nm node.

Recent improvements in EUV source technology (80W source power demonstrated by the end of 2014, 185W by the end of 2015, and 200W now shown by ASML) have been enabled by multiple laser pulses tuned to best produce plasma from tin droplets. TSMC reports that its ASML EUV stepper processed 518 wafers per day, with the tool available ~70% of the time. TSMC also showed that a single EUVL exposure can create 46nm-pitch lines/spaces using a complex 2D mask, as is needed for patterning the metal2 layer within multilevel on-chip interconnects.

To improve throughput in HVM, resist sensitivity to the 13.5nm EUV radiation must be improved while the line-width roughness (LWR) specification is held to low single-digit nanometers. With a 250W source and a 25 mJ/cm2 resist, an EUV stepper should be able to process ~100 wafers per hour (wph), which should allow for affordable use when matched with other lithography technologies.
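A back-of-envelope model shows how those numbers relate; the optical-train transmission and per-wafer overhead below are assumptions chosen only to reproduce the ~100 wph figure, not ASML specifications.

```python
# Back-of-envelope EUV throughput estimate. Dose and source power come
# from the text; transmission and overhead are illustrative assumptions.
import math

dose_mj_cm2 = 25.0        # resist sensitivity (from the text)
source_w = 250.0          # source power (from the text)
transmission = 0.004      # assumed fraction of source power reaching resist
overhead_s = 18.0         # assumed wafer-exchange/alignment time, s/wafer

wafer_area_cm2 = math.pi * 15.0**2                  # 300mm wafer
energy_j = dose_mj_cm2 * 1e-3 * wafer_area_cm2      # J to expose one wafer
expose_s = energy_j / (source_w * transmission)     # exposure-limited time
wph = 3600.0 / (expose_s + overhead_s)
print(f"{expose_s:.1f}s exposure + {overhead_s:.0f}s overhead -> ~{wph:.0f} wph")
```

The exposure-limited term scales directly with dose and inversely with delivered power, which is why source power and resist sensitivity dominate the EUVL throughput discussion.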

Researchers from Inpria, the company developing metal-oxide-based EUVL resists, looked at the absorption efficiencies of different resists and found that the absorption of the metal-oxide resists was ~4-5 times higher than that of Chemically-Amplified Resists (CAR). The Figure shows that higher absorption allows for the use of proportionally thinner resist, which mitigates line collapse (a rough Beer-Lambert illustration follows the figure below). Resist as thin as 18nm has been patterned over a 70nm Spin-On Carbon (SOC) layer without the need for an additional Bottom Anti-Reflective Coating (BARC). Inpria today can supply a 26 mJ/cm2 resist that achieves 4.6nm LWR over 140nm Depth of Focus (DoF).

To prevent pattern collapse, the thickness of resist is reduced proportionally to the minimum half-pitch (HP) of lines/spaces. (Source: JSR Micro)
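Though the presentations did not give absorption coefficients, a rough Beer-Lambert calculation with assumed literature-magnitude values shows why ~4-5x higher absorption supports much thinner films.

```python
# Beer-Lambert sketch of the absorption/thickness trade-off. The two
# absorption coefficients are assumed magnitudes (metal-oxide ~4.5x the
# CAR value, per the text), not measured Inpria or CAR data.
import math

for label, alpha_per_um, t_um in (("55nm CAR", 4.0, 0.055),
                                  ("18nm metal-oxide", 18.0, 0.018)):
    absorbed = 1.0 - math.exp(-alpha_per_um * t_um)
    print(f"{label}: absorbs {100.0 * absorbed:.0f}% of incident EUV")
```

Under these assumptions the 18nm metal-oxide film absorbs more of the incident EUV (~28%) than a 55nm CAR film (~20%), so thickness can drop to collapse-safe aspect ratios without starving the chemistry of photons.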

JEIDEC researchers presented their summary of the trade-off between sensitivity and LWR for metal-oxide-based EUV resists: an ultra-high sensitivity of 7 mJ/cm2 patterns 17nm lines with 5.6nm LWR, while a lower sensitivity of 33 mJ/cm2 patterns 23nm lines with 3.8nm LWR.

In a keynote presentation, Seong-Sue Kim of Samsung Electronics stated that, “Resist pattern defectivity remains the biggest issue. Metal-oxide resist development needs to be expedited.” The challenge is that defectivity at the nanometer-scale derives from “stochastics,” which means random processes that are not fully predictable.

Stochastics of Nanopatterning

Anna Lio, from Intel's Portland Technology Development group, stated that the challenges of controlling resist stochastics "could be the deal breaker." Intel ran a 7-month test of vias made using EUVL and found that via critical dimensions (CD), edge-placement error (EPE), and chain resistances all compared well to 193i results. However, there are inherent control issues due to the random nature of the phenomena involved in resist patterning: incident "photons", absorption, freed electrons, acid generation, acid quenching, protection groups, development processes, etc.

Stochastics for novel chemistries can only be controlled by understanding the sources of variability in detail. From first principles, EUV resist reactions are not photon-chemistry but radiation-chemistry, with many different radiation paths and generated electrons. If every via in an advanced logic IC must work, then the failure rate must be on the order of 1 part-per-trillion (ppt), and stochastic variability from non-homogeneous chemistries must be eliminated.

Consider that for a CAR designed for 15 mJ/cm2 sensitivity, there will be just:

  • 145 photons/nm2 at 193nm, and
  • 10 photons/nm2 at EUV.

(Both figures follow from the photon energy, as the sketch below verifies.)
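A few lines of Python reproduce the numbers from E = hc/λ and add the Poisson shot-noise consequence; the 10nm x 10nm feature area is an illustrative assumption, not from the talk.

```python
# Verify the photons/nm^2 figures from E = h*c/lambda, then illustrate
# Poisson shot noise over an assumed 10nm x 10nm feature area.
import math

h, c = 6.626e-34, 2.998e8                  # J*s, m/s
dose_j_nm2 = 15e-3 / 1e14                  # 15 mJ/cm^2 expressed as J/nm^2

for label, lam_nm in (("193nm ArF", 193.0), ("13.5nm EUV", 13.5)):
    n_per_nm2 = dose_j_nm2 / (h * c / (lam_nm * 1e-9))
    mean = n_per_nm2 * 100.0               # photons landing on 100 nm^2
    noise_pct = 100.0 / math.sqrt(mean)    # Poisson: sigma/mean = 1/sqrt(N)
    print(f"{label}: {n_per_nm2:5.1f} photons/nm^2, "
          f"dose noise over (10nm)^2 = {noise_pct:.1f}%")
```

The roughly 4x larger relative dose fluctuation at EUV, before any resist-chemistry variability is added, is the photon shot-noise floor that the following remarks address.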

To improve sensitivity and suppress failures from photon shot-noise, we need to increase resist absorption, and also re-consider chemical amplification mechanisms. “The requirements will be the same for any resist and any chemistry,” reminded Lio. “We need to evaluate all resists at the same exposure levels and at the same rules, and look at different features to show stochastics like in the tails of distributions. Resolution is important but stochastics will rule our world at the dimensions we’re dealing with.”

—E.K.
