
Posts Tagged ‘process’


Picosun and Hitachi MECRALD Process

Friday, February 24th, 2017


By Ed Korczynski, Sr. Technical Editor

A new microwave electron cyclotron resonance (MECR) atomic layer deposition (ALD) process technology has been co-developed by Hitachi High-Technologies Corporation and Picosun Oy to provide commercial semiconductor IC fabs with the ability to form dielectric films at lower temperatures. Silicon oxide, silicon nitride, aluminum oxide, and aluminum nitride films have been deposited in the temperature range of 150-200 degrees C in the new 300-mm single-wafer plasma-enhanced ALD (PEALD) processing chamber.

With device features in both logic and memory chips now scaled to near-atomic dimensions, ALD technology increasingly enables cost-effective high volume manufacturing (HVM) of the most advanced ICs. While the deposition rate will always be an important process parameter for HVM, the quality of the deposited material is far more important in ALD. The MECR plasma source provides tunable energy to alter the reactivity of ALD precursors, thereby allowing new degrees of freedom in controlling final film properties.

The Figure shows the MECRALD chamber—Hitachi High-Tech’s ECR plasma generator integrated with Picosun’s digitally controlled ALD system—from an online video (https://youtu.be/SBmZxph-EE0) describing the process sequence (a control-loop sketch follows the list):

1.  first precursor gas/vapor flows from a circumferential ring near the wafer chuck,

2.  first vacuum purge,

3.  second precursor gas/vapor is ionized as it flows down through the ECR zone above the circumferential ring, and

4.  second vacuum purge to complete one ALD cycle (which may be repeated).
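For readers who think in control code, the cycle reduces to a timed loop over these four steps, repeated until the target thickness is reached. The following Python sketch is purely illustrative; the step names, timings, and growth-per-cycle value are assumptions, not Picosun or Hitachi High-Tech specifications.

    # Illustrative PEALD cycle loop; all names and timings are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        duration_s: float   # dwell time for this step

    CYCLE = [
        Step("precursor_A_dose", 0.5),         # flows from circumferential ring
        Step("vacuum_purge_1", 1.0),
        Step("precursor_B_plasma_dose", 0.8),  # ionized through the ECR zone
        Step("vacuum_purge_2", 1.0),
    ]

    def run_ald(cycles: int, growth_per_cycle_nm: float = 0.1) -> float:
        """Repeat the 4-step cycle; thickness scales linearly with cycles."""
        for _ in range(cycles):
            for step in CYCLE:
                pass  # a real controller would actuate valves/plasma here
        return cycles * growth_per_cycle_nm

    print(f"{run_ald(100):.1f} nm")  # ~10 nm at an assumed 0.1 nm/cycle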

Cross-sectional schematic of a new Microwave Electron Cyclotron Resonance (MECR) plasma source from Hitachi High-Technologies connected to a single-wafer Atomic Layer Deposition (ALD) processing chamber from Picosun. (Source: Picosun)

The development team claims that MECRALD films are superior to other PEALD films in terms of higher density and lower contamination by carbon and oxygen (in non-oxides), and also show the excellent step-coverage expected from a surface-driven ALD process. The relatively high density of these films has been confirmed by lower wet etch rates. The single-wafer process non-uniformity on 300mm wafers is claimed at ~1% (1 sigma). The team is now exploring processes and precursors to deposit additional films such as titanium nitride (TiN), tantalum nitride (TaN), and hafnium oxide (HfO). In an interview with Solid State Technology, a spokesperson from Hitachi High-Technologies explained that, “We are now at the development stage, and the final specifications mainly depend on future achievements.”

The MECR source has been used in Hitachi High-Tech’s plasma chamber for IC conductor etch for many years, and is able to generate a stable high-density plasma at very low pressure (< 0.1 Pa). MECR plasmas provide wide process windows through accurate plasma parameter management, such as plasma distribution or plasma position control. The same plasma technology is also used to control ions and radicals in the company’s dry cleaning chambers.

“I’m really impressed by the continuous development of ALD technology, after more than 40 years since the invention,” commented Dr. Tuomo Suntola, who invented and patented the atomic layer deposition method in Finland in 1974 and is a member of the Picosun board of directors. “Now combining Hitachi and Picosun technologies means (there is) again a major breakthrough in advanced semiconductor manufacturing.”

MECRALD chambers can be clustered on a Picosun platform that features a Brooks robot handler. This technology is still under development, so it’s too soon to discuss manufacturing parameters such as tool cost and wafer throughput.

—E.K.

Vital Control in Fab Materials Supply-Chains – Part 2

Thursday, February 16th, 2017

By Ed Korczynski, Sr. Technical Editor

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation among these industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jeff Hemphill, Staff Materials R&D Engineer, Intel Corporation,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

FIGURE 1: 2016 CMC Conference expert panelists (from left to right) John Smythe, Jonas Sundqvist, Jeff Hemphill, and Jean-Marc Girard. (Source: TECHCET CA)

KORCZYNSKI:  We heard from David Thompson [EDITOR’S NOTE:  Director of Process Chemistry, Applied Materials presented on “Agony in New Material Introductions -  Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD:  Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with it, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both, it’s not assay OR a specification of impurities. It’s a matter of specifying the trace components that really matter when you reach the point that the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL:  Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it and, in a sense, trying to over-specify that. You identify through mass-spectrometry and other techniques that some fraction of a percent is primarily, say, five different species, and then find out how to individually monitor and track and control those as separate parameters. So from a specification point of view what we want is not necessarily the lowest possible numbers, but expanding how many things we’re looking at so that we’re capturing everything that’s there.

KORCZYNSKI:  Is that something that you’re starting to push out to your suppliers?

HEMPHILL:  Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.

KORCZYNSKI:  Is that something that has changed by node? Back when things were simpler say at 45nm and larger, were these aspects of processing that we could safely ignore as ‘noise’ but are now important ‘signals’?

HEMPHILL:  Yes, we certainly didn’t pay as close attention just a couple of generations ago.
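[EDITOR’S NOTE: In data terms, the approach Girard and Hemphill describe amounts to releasing a batch only when the assay sits within its control limits and every individually tracked trace species sits under its own limit, rather than gating on assay alone. A minimal Python sketch, with all species names and limit values invented for illustration:]

    # Hypothetical batch-release check: assay within control limits AND
    # each individually tracked trace species under its own limit.
    ASSAY_LIMITS = (95.0, 99.5)            # percent; illustrative values
    TRACE_LIMITS_PPM = {"species_A": 2.0,  # e.g. the one impurity that matters
                        "species_B": 50.0,
                        "species_C": 100.0}

    def release_batch(assay_pct: float, traces_ppm: dict) -> bool:
        lo, hi = ASSAY_LIMITS
        if not (lo <= assay_pct <= hi):
            return False
        # An unreported species fails by default: the point is to know
        # everything that makes up the remaining fraction of a percent.
        return all(traces_ppm.get(s, float("inf")) <= limit
                   for s, limit in TRACE_LIMITS_PPM.items())

    print(release_batch(97.2, {"species_A": 1.5, "species_B": 20.0,
                               "species_C": 80.0}))   # True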

KORCZYNSKI:  That seems to lead us to questions about single-sources versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that because of all of the challenges we’ve heard about over the last day-and-a-half of this conference it creates a greater burden on the suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE:  I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So for example I need to ask if a supplier has a place in the US and a place in France that makes the same thing, so that if something bad happens in one location it can still be sourced. Or do you have an alternate-supply agreement, so that if you can’t supply it you have an agreement with Company-X to supply it and you still have control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source but it’s ‘single flexible-source.’

HEMPHILL:  There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the concept of being ‘copy-exactly’ that we use in our factories, we really need production lines to do that, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same, then we have a high degree of confidence that they will be able to produce the same material.

FIGURE 2: Jean-Marc Girard, CTO and Director of R&D of Air Liquide Advanced Materials, provided the supplier perspective. (Source: SEMI)

GIRARD:  I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When the U.S. harbors went on strike for a long time there was no way that material could ship out of the U.S. to those customers. So, yes there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Vital Control in Fab Materials Supply-Chains

Wednesday, January 25th, 2017

By Ed Korczynski, Sr. Technical Editor

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation between the following industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI:  Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD:  For determining the specifications, to some extent it doesn’t matter because we are out of the world of specs; what matters is the control-limits. To Tim Hendry’s point in the Keynote yesterday [EDITOR’S NOTE:  Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should instead try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability so that you know the process space in which you can play. That is the most important information for reaching the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.
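[EDITOR’S NOTE: The experiment Girard describes, deliberately doctoring containers from a single batch to map the working window, reduces to finding the widest passing interval in the data and backing off by a guard band. A toy Python illustration with invented numbers:]

    # Toy version of 'introduce deliberate variability, find the window':
    # containers are doctored to known impurity levels, the process
    # response is recorded, and control limits come from data, not marketing.
    samples = [(0.5, True), (1.0, True), (2.0, True), (4.0, True),
               (8.0, False), (16.0, False)]   # (ppm level, process passed?)

    passing = [level for level, ok in samples if ok]
    window_hi = max(passing)        # highest doctored level that still passed
    guard = 0.8                     # 20% guard band; arbitrary choice
    upper_control_limit = window_hi * guard
    print(f"data-driven upper control limit: {upper_control_limit:.1f} ppm")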

FIGURE 1: Jonas Sundqvist, Sr. Scientist of Fraunhofer IKTS, discusses collaboration with industry on application-specific ALD R&D. (Source: TECHCET CA)

SUNDQVIST:  Like the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and it would not matter if there was slight contamination in the precursors because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely, and the films have already been scaled so thin, that over time the specification for high-k precursors has become more important.

SMYTHE:  I think it comes down to the premise that when you are doing vapor transport through a bubbler that some would argue that that’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires that ‘fingerprinting’ must be done to show the same results. The argument is that you do not accept “less-than” as part of a specification, you only accept what it is.

AUDIENCE QUESTION:  The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get absorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash-out a lot of value of having really pure chemicals moving through a delivery system into a chamber and picking up contaminants that you spent a whole lot of money taking out at the point of delivery. What do you think about that?

GIRARD:  Well, this is a ‘crisis!’ When something like this starts to happen in a fab or even during the development cycles, you can’t prioritize resources and approaches you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST:  We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts and good process stability and an ideal thermal processing range, but the growth rate goes down by 20% so you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE:  What we’re doing a lot more these days is chamber finger-printing, where we’re putting a quadrupole mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you look at your chemistry moving through a delivery line using something like the Schrödinger software, it’s not a big deal to see that you can use the mass spec to see some synthesis happening in the line. We joke and call it ‘point of use synthesis’ but it’s not very funny. We are used to having spare delivery lines built-in so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI:  John, since Micron has fabs in Lehi and fabs in Singapore and other places, while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE:  What we really need are a few good ocean-going hydrofoil ships! The most complete answer is we first identify which things need ‘batch-qual’ so if we do a batch-qual in Virginia and know that material is going to Taiwan that we have confidence it will pass batch-qual in Taiwan. There are certain materials that we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch qual.

I think most of you are familiar with the concept of ‘ship-to-stock’: when you have enough good statistical history and a good change management process with the supplier then you can do ship-to-stock, and that reduces the batch-qual overhead. On a case by case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that in concentration above 5 ppm prevented the poly-styrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push to counting atoms.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Air-Gaps for FinFETs Shown at IEDM

Friday, October 28th, 2016


By Ed Korczynski, Sr. Technical Editor

Researchers from IBM and Globalfoundries will report on the first use of “air-gaps” as part of the dielectric insulation around active gates of “10nm-node” finFETs at the upcoming International Electron Devices Meeting (IEDM) of the IEEE (ieee-iedm.org). Happening in San Francisco in early December, IEDM 2016 will again provide a forum for the world’s leading R&D teams to show off their latest-greatest devices, including 7nm-node finFETs by IBM/Globalfoundries/Samsung and by TSMC. Air-gaps reduce the dielectric capacitance that slows down ICs, so their integration into transistor structures leads to faster logic chips.

History of Airgaps – ILD and IPD

As this editor recently covered at SemiMD, in 1998, Ben Shieh—then a researcher at Stanford University and now a foundry interface for Apple Corp.—first published (Shieh, Saraswat & McVittie. IEEE Electron Dev. Lett., January 1998) on the use of controlled pitch design combined with CVD dielectrics to form “pinched-off keyholes” in cross-sections of inter-layer dielectrics (ILD).

In 2007, IBM researchers showed a way to use sacrificial dielectric layers as part of a subtractive process that allows air-gaps to be integrated into any existing dielectric structure. In an interview with this editor at that time, IBM Fellow Dan Edelstein explained, “we use lithography to etch a narrow channel down so it will cap off, then deliberately damage the dielectric and etch so it looks like a balloon. We get a big gap with a drop in capacitance and then a small slot that gets pinched off.”

Intel presented on their integration of air-gaps into on-chip interconnects at IITC in 2010 but delayed use until the company’s 14nm-node reached production in 2014. 2D-NAND fabs have been using air-gaps as part of the inter-poly dielectric (IPD) for many years, so there is precedent for integration near the gate-stack.

Airgaps for finFETs

Now researchers from IBM and Globalfoundries will report (in IEDM Paper #17.1, “Air Spacer for 10nm FinFET CMOS and Beyond,” K. Cheng et al.) on the first air-gaps used at the transistor level in logic. Figure 1 shows that for these “10nm-node” finFETs the dielectric spacing—including the air-gap and both sides of the dielectric liner—is about 10 nm. The liner needs to be ~2nm thin so that ~1nm of ultra-low-k sacrificial dielectric remains on either side of the ~5nm air-gap.
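Treating the spacer as dielectrics in series shows why the air-gap helps: replacing the middle ~5nm with air (k=1) pulls the effective k of the local stack far below that of a solid spacer, even though the reported transistor-level gain is ~25%, since other capacitance paths around the gate dilute the local benefit. A quick Python estimate, with layer thicknesses following the description above and the k values assumed for illustration:

    # Series-capacitor estimate of effective k for the air-gap spacer stack.
    # Thicknesses follow the paper's description; k values are assumptions.
    layers = [(2.0, 4.5),   # dielectric liner, ~2nm, k assumed ~4.5
              (1.0, 2.5),   # remaining ultra-low-k, ~1nm, k assumed ~2.5
              (5.0, 1.0),   # air-gap, ~5nm, k = 1
              (1.0, 2.5),
              (2.0, 4.5)]

    total_t = sum(t for t, _ in layers)
    k_eff = total_t / sum(t / k for t, k in layers)
    k_solid = 4.5           # same spacing filled with liner-like dielectric
    print(f"k_eff ~ {k_eff:.1f}, vs {k_solid} for a solid spacer")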

Fig.1: Schematic of partial air-gaps only above fin tops using dielectric liners to protect gate stacks during air-gap formation for 10nm finFET CMOS and beyond. (source: IEDM 2016, Paper#17.1, Fig.12)

These air-gaps reduced capacitance at the transistor level by as much as 25%, and in a ring oscillator test circuit by as much as 15%. The researchers say a partial integration scheme—where the air-gaps are formed only above the fin tops—minimizes damage to the finFET, as does the high-selectivity etching process used to fabricate them.

Figure 2 shows a cross-section transmission electron micrograph (TEM) of what can go wrong with etch-back air-gaps when all of the processes are not properly controlled. Because there are inherent process:design interactions needed to form repeatable air-gaps of desired shapes, this integration scheme should be extendable “beyond” the “10-nm node” to finFETs formed at tighter pitches. However, it seems likely that “5nm-node” logic FETs will use arrays of horizontal silicon nano-wires (NW), for which more complex air-gap integration schemes would seem to be needed.

Fig.2: TEM image of FinFET transistor damage—specifically, erosion of the fin and source-drain epitaxy—by improper etch-back of the air-gaps at 10nm dimensions. (source: IEDM 2016, Paper#17.1, Fig.10)

—E.K.

D2S Releases 4th-Gen IC Computational Design Platform

Friday, September 30th, 2016


By Ed Korczynski, Sr. Technical Editor

D2S (www.design2silicon.com) recently released the fourth generation of its computational design platform (CDP), which enables extremely fast (400 Teraflops) and precise simulations for semiconductor design and manufacturing. The new CDP is based on NVIDIA Tesla K80 GPUs and Intel Haswell CPUs, and is architected for 24×7 cleanroom production environments. To date, 14 CDPs across four platform generations are in use by customers around the globe, including six of the latest fourth generation. In an exclusive interview with SemiMD, D2S CEO Aki Fujimura stated, “Now that GPUs and CPUs are fast-enough, they can replace other hardware and thereby free up engineering resources to focus on adding value elsewhere.”

Mask data preparation (MDP) and other aspects of IC design and manufacturing require ever-increasing levels of speed and reliability as the data sets upon which they must operate grow larger and more complex with each device generation. The Figure shows that a mask needed to print arrays of sub-wavelength features includes complex curvilinear shapes which must be precisely formed even though they do not print on the wafer. Such sub-resolution assist features (SRAF) increase in complexity and density as the half-pitch decreases, so the complexity of mask data increases far more than the density of printed features.

Sub-wavelength lithography using 193nm wavelength requires ever-more complex masks to repeatably print ever smaller half-pitch (HP) features, as shown by (LEFT) a typical mask composed of complex nested curves and dots which do not print (RIGHT) in the array of 32nm HP contacts/vias represented by the small red circles. (Source: D2S)

GPUs, which were first developed as processing engines for the complex graphical content of computer games, have since emerged as an attractive option for compute-intensive scientific applications due in part to their ability to run many more computing threads (up to 500x) compared to similar-generation CPUs. “Being able to process arbitrary shapes is something that mask shops will have to do,” explained Fujimura. “The world could go 193nm or EUV at any particular node, but either way there will be more features and higher complexity within the features, and all of that points to GPU acceleration.”

The D2S CDP is engineered for high reliability inside a cleanroom manufacturing environment. A few of the fab applications where CDPs are currently being used include:

  • model-based MDP for leading-edge designs that require increasingly complex mask shapes,
  • wafer plane analysis of SEM mask images to identify mask errors that print, and
  • inline thermal-effect correction of eBeam mask writers to lower write times.

“The amount of design data required to produce photomasks for leading-edge chip designs is increasing at an exponential rate, which puts more pressure on mask writing systems to maintain reasonable write times for these advanced masks. At the same time, writing these masks requires higher exposure doses and shot counts, which can cause resist proximity heating effects that lead to mask CD errors,” stated Noriaki Nakayamada, group manager at NuFlare Technology. “D2S GPU acceleration technology significantly reduces the calculation time required to correct these resist heating effects. By employing a resist heating correction that includes the use of the D2S CDP as an OEM option on our mask writers, NuFlare estimates that it can reduce CD errors by more than 60 percent, and reduce write times by more than 20 percent.”

In the eBeam Initiative 2015 survey, the most advanced reported mask-set contained >100 masks, of which ~20% could be considered ‘critical’. The just-released 2016 survey disclosed that the most complex single-layer mask design written last year required 16 TB of data; however, platforms like D2S’ CDP have been used to accelerate writing such that reported write times have decreased to a weighted average of 4 hours. Meanwhile, the longest reported mask write time decreased from 72 to 48 hours.

3D-NAND Deposition and Etch Integration

Thursday, September 1st, 2016


By Ed Korczynski, Sr. Technical Editor

3D-NAND chips are in production or pilot-line manufacturing at all major memory manufacturers, and they are expected to rapidly replace most 2D-NAND chips in most applications due to lower costs and greater reliability. Unlike 2D-NAND which was enabled by lithography, 3D-NAND is deposition and etch enabled. “With 3D-NAND you’re talking about 40nm devices, while the most advanced 2D-NAND is running out of steam due to the limited countable number of stored electrons-per-cell, and in terms of the repeatability due to parasitics between adjacent cells,” reminded Harmeet Singh, corporate vice president of Lam Research in an exclusive interview with SemiMD to discuss the company’s presentation at the Flash Memory Summit 2016.

“We’re in an era where deposition and etch uniquely define the customer roadmap,” said Singh, “and we are the leading supplier in 3D-NAND deposition and etch.” Though each NAND manufacturer has different terminology for their unique 3D variant, from a manufacturing process integration perspective they all share similar challenges in the following simplified process sequences:

1)    Deposition of 32-64 pairs of blanket “mold stack” thin-films,

2)    Word-line hole etch through all layers and selective fill of NAND cell materials, and

3)    Formation of “staircase” contacts to each cell layer.

Each of these unique process modules is needed to form the 3D arrays of NVM cells.

For the “mold stack” deposition of blanket alternating layers, it is vital for the blanket PECVD to be defect-free since any defects are mirrored and magnified in upper-layers. All layers must also be stress-free since the stress in each deposited layer accumulates as strain in the underlying silicon wafer, and with over 32 layers the additive strain can easily warp wafers so much that lithographic overlay mismatch induces significant yield loss. Controlled-stress backside thin-film depositions can also be used to balance the stress of front-side films.
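The warpage risk can be quantified with the standard Stoney relation, which converts accumulated film stress into substrate curvature. The Python sketch below gives a rough order-of-magnitude estimate for a 300mm wafer; the stress and layer-thickness numbers are assumptions for illustration, not Lam data:

    # Stoney estimate of wafer bow from accumulated mold-stack film stress.
    E_s, nu_s = 130e9, 0.28      # silicon stiffness (Pa) and Poisson ratio
    t_s = 775e-6                 # standard 300mm wafer thickness (m)
    R = 0.150                    # wafer radius (m)

    pairs = 64                   # alternating-layer pairs in the mold stack
    t_pair = 55e-9               # assumed thickness per pair (m)
    sigma_f = 100e6              # assumed net film stress (Pa)

    t_f = pairs * t_pair
    curvature = 6 * sigma_f * t_f * (1 - nu_s) / (E_s * t_s**2)  # Stoney
    bow = curvature * R**2 / 2   # center deflection of the wafer
    print(f"center bow ~ {bow * 1e6:.0f} um")

With these assumed values the bow comes out near 200 microns, which illustrates why lithographic overlay suffers and why stress-balancing backside depositions are used.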

Hole Etch

“The hole etch is difficult, and the materials are different so the challenges are different,” commented Singh about the different types of 3D-NAND now being manufactured by leading fabs. “During this conference, one of our customers presented that they do not see the hole diameters shrinking, so at this point it appears to us that shrinking hole diameters will not happen until after the stacking in z-dimension reaches some limit.”

Tri-Layer Resist (TLR) stacks for the hole patterning allow the amorphous carbon hardmask material to be tuned for maximum etch resistance without compromising the resolution of the photo-active layer needed for patterning. The carbon hardmask is over 3 microns thick and carbon-etching is usually responsive to temperature, so Lam’s latest wafer-chuck for etching features >100 temperature control zones. “This is an example of where Lam is using its process expertise to optimize both the hardmask etch as well as the actual hole etch,” explained Singh.

Staircase Etch

The Figure shows a simplified cross-sectional schematic of how the unique “staircase” wordline contacts are cost-effectively manufactured. The established process of record (POR) for forming the “stairs” uses a single mask exposure of thick KrF photoresist—at 248nm wavelength—to etch 8 sets of stairs controlled by a precise resist trim. The trimming step controls the location of the steps such that they align with the contact mask, and so must be tightly controlled to minimize any misalignment yield loss.

A) Simplified cross-sectional schematic of the staircase etch for 3D-NAND contacts using thick photoresist, B) which allows for controlled resist trimming to expose the next “stair” such that C) successive trimming creates 8-16 steps from a single initial photomask exposure. (Source: Ed Korczynski)

Lam is working on ways to tighten the trimming etch uniformity such that 16 sets of stairs can be repeatably etched from a single KrF mask exposure. Halving the relative rate of vertical etch to lateral etch of the KrF resist allows for the same resist thickness to be used for double the number of etches, saving lithography cost. “We see an amazing future ahead because we are just at the beginning of this technology,” commented Singh.
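The resist budget behind that claim is simple arithmetic: each trim consumes resist vertically in proportion to the lateral trim distance, so halving the vertical-to-lateral etch-rate ratio doubles the number of steps available from a fixed resist thickness. A short Python sketch, with the thickness and trim values assumed for illustration:

    # Stairs achievable from one thick-KrF resist coat, vs trim selectivity.
    resist_nm = 3200.0           # assumed initial KrF resist thickness
    lateral_per_step_nm = 400.0  # assumed lateral trim to expose one stair

    def steps(vert_to_lat_ratio: float) -> int:
        vertical_loss = lateral_per_step_nm * vert_to_lat_ratio
        return int(resist_nm // vertical_loss)

    print(steps(1.0))   # 8 steps at a 1:1 vertical:lateral trim ratio
    print(steps(0.5))   # halving the ratio doubles the count to 16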

—E.K.

Applied Materials Releases Selective Etch Tool

Wednesday, June 29th, 2016


By Ed Korczynski, Sr. Technical Editor

Applied Materials has disclosed commercial availability of new Selectra(TM) selective etch twin-chamber hardware for the company’s high-volume manufacturing (HVM) Producer® platform. Using standard fluorine and chlorine gases already used in traditional Reactive Ion Etch (RIE) chambers, this new tool provides atomic-level precision in the selective removal of materials in the 3D device structures increasingly used for the most advanced silicon ICs. The tool is already in use at three customer fabs for finFET logic HVM and at two memory fab customers, and a total of >350 chambers are planned to have shipped to customers by the end of 2016.

Figure 1 shows a simplified cross-sectional schematic of the Selectra chamber, where the dashed white line indicates some manner of screening functionality so that “Ions are blocked, chemistry passes through” according to the company. In an exclusive interview with Solid State Technology, company representatives declined to disclose any hardware details. “We are using typical chemistries that are used in the industry,” explained Ajay Bhatnagar, managing director of Selective Removal Products for Applied Materials. “If there are specific new applications needed then we can use new chemistry. We have a lot of IP on how we filter ions and how we allow radicals to combine on the wafer to create selectivity.”

FIG 1: Simplified cross-sectional schematic of a silicon wafer being etched by the neutral radicals downstream of the plasma in the Selectra chamber. (Source: Applied Materials)

From first principles we can assume that the ion filtering is accomplished with some manner of electrically-grounded metal screen. This etch technology accomplishes similar process results to Atomic Layer Etch (ALE) systems sold by Lam, while avoiding the need for specialized self-limiting chemistries and the accompanying chamber throughput reductions associated with pulse-purge process recipes.

“What we are doing is being able to control the amount of radicals coming to the wafer surface and controlling the removal rates very uniformly across the wafer surface,” asserted Bhatnagar. “If you have this level of atomic control then you don’t need the self-limiting capability. Most of our customers are controlling process with time, so we don’t need to use self-limiting chemistry.” Applied Materials claims that this allows the Selectra tool to have higher relative productivity compared to an ALE tool.

Due to the intrinsic 2D resolutions limits of optical lithography, leading IC fabs now use multi-patterning (MP) litho flows where sacrificial thin-films must be removed to create the final desired layout. Due to litho limits and CMOS device scaling limits, 2D logic transistors are being replaced by 3D finFETs and eventually Gate-All-Around (GAA) horizontal nanowires (NW). Due to dielectric leakage at the atomic scale, 2D NAND memory is being replaced by 3D-NAND stacks. All of these advanced IC fab processes require the removal of atomic-scale materials with extreme selectivity to remaining materials, so the Selectra chamber is expected to be a future work-horse for the industry.

When the industry moves to GAA-NW transistors, alternating layers of Si and SiGe will be grown on the wafer surface, 2D patterned into fins, and then the sacrificial SiGe must be selectively etched to form 3D arrays of NW. Figure 2 shows the SiGe etched from alternating Si/SiGe stacks using a Selectra tool, with sharp Si corners after etch indicating excellent selectivity.

FIG 2: SEM cross-section showing excellent etch of SiGe within alternating Si/SiGe layers, as will be needed for Gate-All-Around (GAA) horizontal NanoWire (NW) transistor formation. (Source: Applied Materials)

“One of the fundamental differences between this system and old downstream plasma ashers is that it was designed to provide extreme selectivity to different materials,” said Matt Cogorno, global product manager of Selective Removal Products for Applied Materials. “With this system we can provide silicon to titanium-nitride selectivity at 5000:1, or silicon to silicon-nitride selectivity at 2000:1. This is accomplished with the unique hardware architecture in the chamber combined with how we mix the chemistries. Also, there is no polymer formation in the etch process, so after etching there are no additional processing issues with the need for ashing and/or a wet-etch step to remove polymers.”
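Selectivity ratios like these translate directly into loss budgets for the films that must survive the etch: the protected material loses roughly the etched depth divided by the selectivity. A one-function Python check using the quoted ratios (the 30nm etch depth is an arbitrary example):

    # Collateral loss of the protected film, per the quoted selectivities.
    def collateral_loss_nm(etch_depth_nm: float, selectivity: float) -> float:
        return etch_depth_nm / selectivity

    print(collateral_loss_nm(30.0, 5000.0))  # Si:TiN -> 0.006 nm TiN lost
    print(collateral_loss_nm(30.0, 2000.0))  # Si:SiN -> 0.015 nm SiN lost

At 5000:1, removing 30nm of silicon costs only ~0.006nm of exposed titanium nitride, far less than a single atomic layer.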

Systems can also be used to provide dry cleaning and surface-preparation due to the extreme selectivity and damage-free material removal.  “You can control the removal rates,” explained Cogorno. “You don’t have ions on the wafer, but you can modulate the number of radicals coming down.” For HVM of ICs with atomic-scale device structures, this new tool can widen process windows and reduce costs compared to both dry RIE and wet etching.

—E.K.

Many Mixes to Match Litho Apps

Thursday, March 3rd, 2016


By Ed Korczynski, Sr. Technical Editor

“Mix and Match” has long been a mantra for lithographers in the deep-sub-wavelength era of IC device manufacturing. In general, forming patterns with resolution at minimum pitch as small as 1/4 the wavelength of light can be done using off-axis illumination (OAI) through reticle enhancement techniques (RET) on masks, using optical proximity correction (OPC) perhaps derived from inverse lithography technology (ILT). Lithographers can form 40-45nm wide lines and spaces at the same half-pitch using 193nm light (from ArF lasers) in a single exposure.
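That quarter-wavelength figure follows from the Rayleigh criterion, half-pitch = k1 x lambda / NA. With 193nm light, a water-immersion NA of 1.35, and k1 pushed from a practical ~0.28 toward its theoretical floor of 0.25, the single-exposure limit lands right around the 36-40nm half-pitch quoted above. A one-loop Python check:

    # Rayleigh resolution check for single-exposure 193nm immersion litho.
    wavelength_nm, NA = 193.0, 1.35        # ArF laser, water immersion
    for k1 in (0.28, 0.25):                # practical value, theoretical floor
        print(f"k1={k1}: half-pitch ~ {k1 * wavelength_nm / NA:.0f} nm")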

Figure 1 shows that application-specific tri-layer photoresists are used to reach the minimum resolution of 193nm-immersion (193i) steppers in a single exposure. Tighter half-pitch features can be created using all manner of multi-patterning processes, including Litho-Etch-Litho-Etch (LELE or LE2) using two masks for a single layer or Self-Aligned Double Patterning (SADP) using sidewall spacers to accomplish pitch-splitting. SADP has been used in high volume manufacturing (HVM) of logic and memory ICs for many years now, and Self-Aligned Quadruple Patterning (SAQP) has been used in HVM by at least one leading memory fab.

Fig.1: Basic tri-layer resist (TLR) technology uses thin Photoresist over silicon-containing Hard-Mask over Spin-On Carbon (SOC), for patterning critical layers of advanced ICs. (Source: Brewer Science)

Next-Generation Lithography (NGL) generally refers to any post-optical technology with at least some unique niche patterning capability of interest to IC fabs:  Extreme Ultra-Violet (EUV), Directed Self-Assembly (DSA), and Nano-Imprint Lithography (NIL). Though proponents of each NGL have dutifully shown capabilities for targeted mask layers for logic or memory, the capabilities of ArF dry and immersion (ArFi) scanners to process >250 wafers/hour with high uptime dominates the economics of HVM lithography.

The world’s leading lithographers gather each year in San Jose, California at SPIE’s Advanced Lithography conference to discuss how to extend optical lithography. So of all the NGL technologies, which will win out in the end?

It is looking most likely that the answer is “all of the above.” EUV and NIL could be used for single layers. For other unique patterning application, ArF/ArFi steppers will be used to create a basic grid/template which will be cut/trimmed using one of the available NGL. Each mask layer in an advanced fab will need application-specific patterning integration, and one of the rare commonalities between all integrated litho modules is the overwhelming need to improve pattern overlay performance.

Naga Chandrasekaran, Micron Corp. vice president of Process R&D, provided a fantastic overview of the patterning requirements for advanced memory chips in a presentation during Nikon’s LithoVision technical symposium held February 21st in San Jose, California prior to the start of SPIE-AL. While resolution improvements are always desired, in the mix-and-match era the greatest challenges involve pattern overlay issues. “In high volume manufacturing, every nanometer variation translates into yield loss, so what is the best overlay that we can deliver as a holistic solution not just considering stepper resolution?” asks Chandrasekaran. “We should talk about cost per nanometer overlay improvement.”

Extreme Ultra-Violet (EUV)

As touted by ASML at SPIE-AL, the brightness, stability, and availability of tin-plasma EUV sources continues to improve, reaching 200W in the lab “for one hour, with full dose control,” according to Michael Lercel, ASML’s director of strategic marketing. ASML’s new TWINSCAN NXE:3350B EUVL scanners are now being shipped with 125W power sources, and Intel and Samsung Electronics reportedly run their EUV power sources at 80W over extended periods.

During Nikon’s LithoVision event, Mark Phillips, Intel Fellow and Director of Lithography Technology Development for Logic, summarized recent progress of EUVL technology:  ~500 wafers-per-day is now standard, and ~1000 wafer-per-day can sometimes happen. However, since grids can be made with ArFi for 1/3 the cost of EUVL even assuming best productivity for the latter, ArFi multi-patterning will continue to be used for most layers. “Resolution is not the only challenge,” reminded Phillips. “Total edge-placement-error in patterning is the biggest challenge to device scaling, and this limit comes before the device physics limit.”

Directed Self-Assembly (DSA)

DSA seems most suited for patterning the periodic 2D arrays used in memory chips such as DRAMs. “Virtual fabrication using directed self-assembly for process optimization in a 14nm DRAM node” was the title of a presentation at SPIE-AL by researchers from Coventor, in which DSA compared favorably to SAQP.

Imec presented electrical results of DSA-formed vias, providing insight on DSA processing variations altering device results. In an exclusive interview with Solid State Technology and SemiMD, imec’s Advanced Patterning Department Director Greg McIntyre reminds us that DSA could save one mask in the patterning of vias which can all be combined into doublets/triplets, since two masks would otherwise be needed to use 193i to do LELE for such a via array. “There have been a lot of patterning tricks developed over the last few years to be able to reduce variability another few nanometers. So all sorts of self-alignments.”

While DSA can be used for shrinking vias that are not doubled/tripled, there are commercially proven spin-on shrink materials that cost much less to use as shown by Kaveri Jain and Scott Light from Micron in their SPIE-AL presentation, “Fundamental characterization of shrink techniques on negative-tone development based dense contact holes.” Chemical shrink processes primarily require control over times, temperatures, and ambients inside a litho track tool to be able repeatably shrink contact hole diameters by 15-25 nm.

Nano-Imprint Litho (NIL)

For advanced IC fab applications, the many different options for NIL technology have been narrowed to just one for IC HVM. The step-and-pattern technology that had been developed and trademarked as “Jet and Flash Imprint Lithography” or “J-FIL” has been commercialized for HVM by Canon NanoTechnologies, formerly known as Molecular Imprints. Canon shows improvements in the NIL mask-replication process, since each production mask will need to be replicated from a written master. To use NIL in HVM, mask image placement errors from replication will have to be reduced to ~1nm, while the currently available replication tool is reportedly capable of 2-3nm (3 sigma).

Figure 2 shows normalized costs modeled to produce 15nm half-pitch lines/spaces for different lithography technologies, assuming 125 wph for a single EUV stepper and 60 wph for a cluster of 4 NIL tools. Key to throughput is fast filling of the 26mmx33mm mold nano-cavities by the liquid resist, and proper jetting of resist drops over a thin adhesion layer enables filling times less than 1 second.

Fig.2: Relative estimated costs to pattern 15nm half-pitch lines/spaces for different lithography technologies, assuming 125 wph for a single EUV stepper and 60 wph for a cluster of 4 NIL tools. (Source: Canon)

Researchers from Toshiba and SK Hynix described evaluation results of a long-run defect test of NIL using the Canon FPA-1100 NZ2 pilot production tool, capable of 10 wafers per hour and 8nm overlay, in a presentation at SPIE-AL titled, “NIL defect performance toward high-volume mass production.” The team categorized defects that must be minimized into fundamentally different categories—template, non-filling, separation-related, and pattern collapse—and determined parallel paths to defect reduction to allow for using NIL in HVM of memory chips with <20nm half-pitch features.

—E.K.

Memristor Variants and Models from Knowm

Friday, January 22nd, 2016


By Ed Korczynski, Sr. Technical Editor

Knowm Inc. (www.knowm.org), a start-up pioneering next-generation advanced computing architectures and technology, recently announced the availability of two new variations of memristors targeting different neuromorphic applications. The company also announced raw device data available for purchase to help researchers develop and improve memristor models. These new Knowm offerings enable the next step in the R&D of radically new chips for pattern-recognition, machine-learning, and artificial intelligence (AI) in general.

There is general consensus among industry, academia, and government that future improvements in computing are now severely limited by the amount of energy it takes to use Von Neumann architectures. Consequently, the US White House has issued a grand challenge with the Energy-Efficient Computing: from Devices to Architectures (E2CDA) program (http://www.nsf.gov/pubs/2016/nsf16526/nsf16526.htm) actively soliciting proposals through March 28, 2016.

The Figure shows a schematic cross-section of Knowm’s memristor devices—with Tin (Sn) and Chromium (Cr) metal layers as the new options alongside tungsten (W)—along with the device I/V curves for each. “They differ in their activation threshold,” explained Knowm CEO and co-founder Alex Nugent in an exclusive interview with Solid State Technology. “As the activation thresholds become smaller you get reduced data retention, but higher cycle endurance. As that threshold increases you have to dissipate more energy per event, and the more energy you dissipate the faster it will burn-out.” Knowm’s two new memristors, as well as the company’s previously announced device, are now available as unpackaged raw dice with masks designed for research probe stations.

Figure: Schematic cross-section of Knowm’s memristor devices using Tin (Sn) or Chromium (Cr) or tungsten (W) metal layers, along with the device I/V curves for each. (Source: Knowm)

Knowm is working on the simultaneous co-optimization of the entire “stack” from memristors to circuit architectures to application-specific algorithms. “The potential of memristors is so huge that we are seeing exponential growth in the literature, a sort of gold rush as engineers race to design new circuits and re-envision old circuits,” commented Nugent. “The problem is that in the race to publish, circuit designers are adopting models that do not adequately describe real devices.” Knowm’s raw data includes AC, DC, pulse response, and retention for different memristors.
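As a flavor of what such raw data is used to calibrate, the Python sketch below implements a generic threshold-switching memristor update, in which the internal state moves only when the applied voltage exceeds the activation threshold. It is purely illustrative and is not Knowm’s published device model; all parameter values are invented:

    # Generic threshold memristor: state w in [0,1] moves only when |V| > Vth.
    # Illustrative only; not Knowm's published device model.
    def step(w: float, v: float, vth: float, rate: float, dt: float) -> float:
        if abs(v) > vth:
            w += rate * (abs(v) - vth) * (1 if v > 0 else -1) * dt
        return min(1.0, max(0.0, w))

    def conductance(w: float, g_off=1e-6, g_on=1e-3) -> float:
        return g_off + w * (g_on - g_off)   # linear mix of limit states

    w = 0.0
    for _ in range(100):                    # 100 write pulses above threshold
        w = step(w, v=0.6, vth=0.4, rate=50.0, dt=1e-3)
    print(w, conductance(w))                # state saturates at w = 1.0

In this toy model a higher vth means fewer spurious state changes (better retention) but more energy dissipated per deliberate switching event, mirroring the trade-off Nugent describes.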

Additional memristors are being developed by Knowm’s R&D lab partner Dr. Kris Campbell of Boise State University (http://coen.boisestate.edu/kriscampbell/), using different metal layers to achieve different activation thresholds beyond the three shown to date. “She has discovered an algorithm for creating memristors along this dimension,” said Nugent. “From a physics perspective it makes sense that there would be devices with high cycle endurance but reduced data retention.”

“In the future what I imagine is a single chip with multiple memristors on it. Some will be volatile and very fast, while others will be slow,” continued Nugent. “Just like analog design today uses different capacitors, future neuromorphic chips would likely use memristors optimized for different changes in adaptation threshold. If you think about memristors as fundamental elements—as per Leon Chua (https://en.wikipedia.org/wiki/Leon_O._Chua)—then it makes sense that we’ll need different memristors.”

The applications spaces for these devices have intrinsically different requirements for speed and retention. For example, to exploit these devices for pattern recognition and/or anomaly detection (keeping track of confidence in making temporal predictions) it seems best to choose relatively high activation thresholds because the number of operations is unlikely to burn-out devices. Conversely, for circuits that constantly solve optimization problems the best memristors would require low burn-out and thus low activation thresholds. However, analog applications are generally problematic because the existing memristors leak current, such that stored values degrade over time.

Knowm is shipping devices today, mostly to university researchers, and has tested thousands of devices itself. The Knowm memristors can be fabricated at <500°C using industry-standard unit-process steps, allowing for eventual integration with silicon CMOS “back-end” metallization layers. While still in early R&D, this technology could provide much of the foundation for post-Moore’s-Law silicon ICs.

—E.K.

Low-Cost Manufacturing of Flexible Functionalities

Wednesday, July 15th, 2015


By Ed Korczynski, Sr. Technical Editor

SEMICON West includes many business and technology workshops and forums for attendees.  On Wednesday morning July 15, attendees packed the TechXPOT in the South Hall of Moscone Center to hear updates on the status of flexible hybrid electronics manufacturing.

M-H. Huang of Corning showed the surprising properties of “Corning Willow Glass: Substrates for flexible electronic devices.” Willow Glass is created in a fusion-forming process similar to that used to create Gorilla Glass, though with thickness <=200 microns to allow for flexibility. “A key advantage is hermeticity compared to plastic substrates,” reminded Huang. Thin bare glass without any edge or surface coatings can be repeatably bent and twisted without cracking. The minimum bending radius for roll-to-roll (R2R) processing is limited by coating-layer delamination: 12.5mm for bare glass, 25mm for AZO-coated glass, and 50mm for CZTS cells on glass, all passing 500 bending cycles at 60 cycles per minute. Working with the State University of New York at Binghamton Center for Advanced Microelectronic Manufacturing (CAMM), Corning has demonstrated R2R sputtering of Al, Cr/Cu, ITO, SiO2, and IGZO films. Collaborating with ITRI in Taiwan using tools designed specifically for processing flexible glass, Corning demonstrated R2R gravure-offset printing of metal mesh structures using silver ink that can be used for 7” touch-panels. Working with both CAMM and ITRI has led to R&D fabrication of a touch sensor with 90% device yield.
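Those bending radii map to surface strain through the thin-substrate relation strain = t / (2R), which shows why the coated stacks, which fail first by delamination, need progressively larger radii. A quick Python check, assuming a 100-micron glass thickness (the actual sheets may be anywhere up to 200 microns):

    # Peak surface strain of a thin substrate bent to radius R: eps = t / (2R).
    t_mm = 0.100                           # assumed Willow Glass thickness (mm)
    for label, R_mm in [("bare glass", 12.5), ("AZO-coated", 25.0),
                        ("CZTS cells", 50.0)]:
        strain_pct = 100 * t_mm / (2 * R_mm)
        print(f"{label}: {strain_pct:.2f}% strain at R = {R_mm} mm")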

Thomas Lantzer, of DuPont Electronic Materials, discussed the “Materials Supplier Perspective on Flexible Hybrid Electronics.” Since the overarching goal of flexible electronics is not just mass and volume reduction but a huge reduction in manufacturing cost, it is axiomatic that fabrication must evolve toward the use of traditional printing methods and flexible substrates.

“There are many printing techniques,” explained Lantzer, “So there are building blocks out there today that we feel will lead to an explosion of fabrication capabilities in the future.” DuPont has been actively involved in flexible materials and electronics for decades, supplying screen-printed conductive pastes, resistor pastes for automotive defoggers, flexible films, and flexible materials for copper circuitry.

Mark Poliks, Professor at the State University of New York at Binghamton and Director of the Center for Advanced Microelectronic Manufacturing (CAMM), provided a comprehensive overview of “Materials, Processes & Tools for Fabrication of Flexible Hybrid Electronics.” Working with partners in the Nano-Bio Manufacturing Consortium since 2013, CAMM researchers are developing a wearable disposable sensor system with a target price of $2 to measure human performance parameters. The device, including sensor, processor, battery, and wireless-communications blocks, will be built with copper (Cu) connections on flexible substrates such as polyimide. Initial functionalities will include biometric parameters such as electro-cardio-gram (ECG) signals and skin temperature. First prototypes of ECG sensors on 12.5 micron thin polyimide have been completed, which demonstrate output wave forms with equal or better signal extraction compared to industry-standard silver/silver-chloride (Ag/AgCl) electrodes. This new printed sensor and breadboard electronics can be flexed over 200 times and retain the same signal quality and heart-beat extraction. The flexible substrate can accommodate assembly processes for flip-chip (FC) ASIC dice having micro-bumps on a 70 micron pitch, using die-placement accuracy of 9 microns (3 sigma). For flexible hybrid applications, dual-sided placement of components along with printed circuitry reduces the real estate of the final packaged device.
