
Posts Tagged ‘Semiconductor’


Lithographic Stochastic Limits on Resolution

Monday, April 3rd, 2017


By Ed Korczynski, Sr. Technical Editor

The physical and economic limits of Moore’s Law are being approached as the commercial IC fab industry continues reducing device features to the atomic-scale. Early signs of such limits are seen when attempting to pattern the smallest possible features using lithography. Stochastic variation in the composition of the photoresist as well as in the number of incident photons combine to destroy determinism for the smallest devices in R&D. The most advanced Extreme Ultra-Violet (EUV) exposure tools from ASML cannot avoid this problem without reducing throughputs, and thereby increasing the cost of manufacturing.

Since the beginning of IC manufacturing over 50 years ago, chip production has been based on deterministic control of fabrication (fab) processes. Variations within process parameters could be controlled with statistics to ensure that all transistors on a chip performed nearly identically. Design rules could be set based on assumed in-fab distributions of critical dimension (CD) and misalignment between layers to determine the final performance of transistors.

As the IC fab industry has evolved from micron-scale to nanometer-scale device production, the control of lithographic patterning has evolved to bend light at 193nm wavelength using Off-Axis Illumination (OAI) of Optical-Proximity Correction (OPC) mask features as part of Reticle Enhancement Technology (RET), so as to print <40nm half-pitch (HP) line arrays with good definition. The most advanced masks and 193nm-immersion (193i) steppers today are able to focus more photons into each cubic-nanometer of photoresist to improve the contrast between exposed and non-exposed regions in the aerial image. To avoid the escalating cost and complexity of multi-patterning with 193i, the industry needs Extreme Ultra-Violet Lithography (EUVL) technology.

Figure 1 shows Dr. Britt Turkot, who has been leading Intel’s integration of EUVL since 1996, reassuring a standing-room-only crowd during a 2017 SPIE Advanced Lithography (http://spie.org/conferences-and-exhibitions/advanced-lithography) keynote address that the availability for manufacturing of EUVL steppers has been steadily improving. The new tools are close to 80% available for manufacturing, but they may need to process fewer wafers per hour to ensure high yielding final chips.

Figure 1. Britt Turkot (Intel Corp.) gave a keynote presentation on “EUVL Readiness for High-Volume Manufacturing” during the 2017 SPIE Advanced Lithography conference. (Source: SPIE)

The KLA-Tencor Lithography Users Forum was held in San Jose on February 26 before the start of SPIE-AL; there, Turkot also provided a keynote address that mentioned the inherent stochastic issues associated with patterning 7nm-node device features. Chipmakers must ensure zero defects among the 10 billion contacts needed in the most advanced ICs. Given 10 billion contacts it is statistically certain that some will be subject to 7-sigma fluctuations, and this leads to problems in controlling the limited number of EUV photons reaching the target area of a resist feature. The volume of resist material available to absorb EUV in a given area is reduced by the need to avoid pattern-collapse when aspect-ratios increase over 2:1, so 15nm half-pitch lines will generally be limited to just 30nm-thick resist. “The current state of materials will not gate EUV,” said Turkot, “but we need better stochastics and control of shot-noise so that photoresist will not be a long-term limiter.”

TABLE:  EUVL stochastics due to scaled contact hole size. (Source: Intel Corp.)

CONTACT HOLE DIAMETER               24nm      16nm
INCIDENT EUV PHOTONS                4610      2050
PHOTONS ABSORBED IN AERIAL IMAGE     700       215
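
The photon counts in the table can be cross-checked with a quick back-of-the-envelope calculation: at 13.5nm wavelength each EUV photon carries about 92 eV, so a given exposure dose delivers a fixed number of photons per unit area. The short Python sketch below assumes circular contact holes and infers the implied dose; that dose is a derived estimate for illustration, not a figure published by Intel.

    import math

    # Back-of-the-envelope check of the Intel table above (illustrative only).
    # Assumes circular contact holes and 13.5nm EUV photons (~92 eV each).
    PHOTON_ENERGY_J = 6.626e-34 * 2.998e8 / 13.5e-9   # ~1.47e-17 J per EUV photon

    for diameter_nm, incident_photons in [(24, 4610), (16, 2050)]:
        area_nm2 = math.pi * (diameter_nm / 2) ** 2
        area_cm2 = area_nm2 * 1e-14                   # 1 nm^2 = 1e-14 cm^2
        photons_per_nm2 = incident_photons / area_nm2
        implied_dose_mj_cm2 = incident_photons * PHOTON_ENERGY_J / area_cm2 * 1e3
        print(f"{diameter_nm}nm hole: {photons_per_nm2:.1f} photons/nm^2, "
              f"implied dose ~{implied_dose_mj_cm2:.0f} mJ/cm^2")

Both contact sizes imply roughly the same exposure dose (about 15 mJ/cm2 under these assumptions), which is what fixed-dose scaling of the contact area would predict.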

From the LithoGuru blog of gentleman scientist Chris Mack (http://www.lithoguru.com/scientist/essays/Tennants_Law.html):

One reason why smaller pixels are harder to control is the stochastic effects of exposure:  as you decrease the number of electrons (or photons) per pixel, the statistical uncertainty in the number of electrons or photons actually used goes up. The uncertainty produces line-width errors, most readily observed as line-width roughness (LWR). To combat the growing uncertainty in smaller pixels, a higher dose is required.
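
Mack's point can be made concrete with Poisson counting statistics: the relative uncertainty in a photon count N is 1/sqrt(N). A minimal Python sketch using the absorbed-photon counts from the Intel table above:

    import math

    # Poisson shot noise: relative 1-sigma uncertainty of a photon count N is 1/sqrt(N).
    # Counts taken from the absorbed-photon row of the Intel table above.
    for label, n_absorbed in [("24nm contact", 700), ("16nm contact", 215)]:
        sigma_rel = 1.0 / math.sqrt(n_absorbed)
        print(f"{label}: 1-sigma = {100 * sigma_rel:.1f}%, "
              f"7-sigma = {100 * 7 * sigma_rel:.1f}% of the mean dose")

With ten billion contacts per chip, a handful of 7-sigma outliers are statistically expected, and at 215 absorbed photons such an outlier amounts to nearly half the mean dose.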

We define a “stochastic” or random process as a collection of random variables (https://en.wikipedia.org/wiki/Stochastic_process), and a Wiener process (https://en.wikipedia.org/wiki/Wiener_process) as a continuous-time stochastic process in honor of Norbert Wiener. Brownian motion and the thermally-driven diffusion of molecules exhibit such “random-walk” behavior. Stochastic phenomena in lithography include the following:

  • Photon count,
  • Photo-acid generator positions,
  • Photon absorption,
  • Photo-acid generation,
  • Polymer position and chain length,
  • Diffusion during post-exposure bake,
  • Dissolution/neutralization, and
  • Etching hard-mask.

Figure 2 shows the stochastics within EUVL start with direct photolysis and include ionization and scattering within a given discrete photoresist volume, as reported by Solid State Technology in 2010.

Figure 2. Discrete acid generation in an EUV resist is based on photolysis as well as ionization and electron scattering; stochastic variations of each must be considered in minimally scaled aerial images. (Source: Solid State Technology)

Resist R&D

During SPIE-AL this year, ASML provided an overview of the state of the craft in EUV resist R&D. There has been steady resolution improvement over 10 years with Photo-sensitive Chemically-Amplified Resists (PCAR) from 45nm to 13nm HP; however, 13nm HP needed 58 mJ/cm2, and provided DoF of 99nm with 4.4nm LWR. The recent non-PCAR Metal-Oxide Resist (MOR) from Inpria has been shown to resolve 12nm HP with 4.7nm LWR using 38 mJ/cm2, and increasing exposure to 70 mJ/cm2 has produced 10nm HP L/S patterns.

In the EUVL tool with variable pupil control, reducing the pupil fill increases the contrast such that 20nm diameter contact holes with 3nm Local Critical-Dimension Uniformity (LCDU) can be done. The challenge is to get LCDU to <2nm to meet the specification for future chips. ASML’s announced next-generation N.A. >0.5 EUVL stepper will use anamorphic mirrors and masks which will double the illumination intensity per cm2 compared to today’s 0.33 N.A. tools. This will inherently improve the stochastics, when eventually ready after 2020.

The newest-generation EUVL steppers use a membrane between the wafer and the optics so that any resist out-gassing cannot contaminate the mirrors, and this allows a much wider range of materials to be used as resists. Regarding MOR, there are 3.5 times more absorbed photons and 8 times more electrons generated per photon compared to PCAR. Metal hard-masks (HM) and other under-layers create reflections that have a significant effect on the LWR, requiring tuning of the materials in resist stacks.

Imec, the industry's default R&D hub, has been testing EUV resists from five different suppliers, targeting 20 mJ/cm2 sensitivity with 30nm thickness for PCAR and 18nm thickness for MOR. All suppliers were able to deliver the requested resolution of 16nm HP line/space (L/S) patterns, yet all resists showed LWR >5nm. In another experiment, the dose to size for imec's "7nm-node" metal-2 (M2) vias with nominal pitch of 53nm was ~60 mJ/cm2. All else equal, three times slower lithography costs three times as much per wafer pass.
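
That cost claim follows from exposure time scaling roughly linearly with dose once the scanner is limited by available source power. The Python sketch below makes the scaling explicit; the source power, optics transmission, field size, and per-wafer overhead are rough placeholder assumptions, not ASML or imec figures.

    # Illustrative EUV scanner throughput vs. dose. All parameter values are
    # placeholder assumptions for illustration only.
    def wafers_per_hour(dose_mj_cm2, source_power_w=250.0, transmission=0.005,
                        field_area_cm2=8.58, fields_per_wafer=96, overhead_s=10.0):
        power_at_wafer_w = source_power_w * transmission           # W reaching the resist
        energy_per_wafer_j = dose_mj_cm2 * 1e-3 * field_area_cm2 * fields_per_wafer
        expose_s = energy_per_wafer_j / power_at_wafer_w
        return 3600.0 / (expose_s + overhead_s)

    for dose in (20, 60):                                          # mJ/cm^2
        print(f"{dose} mJ/cm^2 -> ~{wafers_per_hour(dose):.0f} wafers per hour")
    # Exposure time itself triples with a 3x dose; the fixed overhead softens the
    # wafer-level hit somewhat, but cost per wafer pass still rises steeply.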

—E.K.

Picosun and Hitachi MECRALD Process

Friday, February 24th, 2017


By Ed Korczynski, Sr. Technical Editor

A new microwave electron cyclotron resonance (MECR) atomic layer deposition (ALD) process technology has been co-developed by Hitachi High-Technologies Corporation and Picosun Oy to provide commercial semiconductor IC fabs with the ability to form dielectric films at lower temperatures. Silicon oxide and silicon nitride, aluminum oxide and aluminum nitride films have been deposited in the temperature range of 150-200 degrees C in the new 300-mm single-wafer plasma-enhanced ALD (PEALD) processing chamber.

With the device features within both logic and memory chips having been scaled to atomic dimensions, ALD technology has been increasingly enabling cost-effective high volume manufacturing (HVM) of the most advanced ICs. While the deposition rate will always be an important process parameter for HVM, the quality of the material deposited is far more important in ALD. The MECR plasma source provides a means of tunable energy to alter the reactivity of ALD precursors, thereby allowing for new degrees of freedom in controlling final film properties.

The Figure shows the MECRALD chamber— Hitachi High-Tech’s ECR plasma generator is integrated with Picosun’s digitally controlled ALD system—from an online video (https://youtu.be/SBmZxph-EE0) describing the process sequence:

1.  first precursor gas/vapor flows from a circumferential ring near the wafer chuck,

2.  first vacuum purge,

3.  second precursor gas/vapor is ionized as it flows down through the ECR zone above the circumferential ring, and

4.  second vacuum purge to complete one ALD cycle (which may be repeated).

Cross-sectional schematic of a new Microwave Electron Cyclotron Resonance (MECR) plasma source from Hitachi High-Technologies connected to a single-wafer Atomic Layer Deposition (ALD) processing chamber from Picosun. (Source: Picosun)
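
The four-step sequence above maps naturally onto simple recipe automation. Below is a minimal, hypothetical Python sketch of how such a cycle loop might be sequenced; the step names, timings, and hardware calls are invented placeholders, not Picosun or Hitachi control software.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Step:
        name: str
        action: Callable[[], None]   # e.g. open a precursor valve, strike the ECR plasma
        duration_s: float

    def run_ald_cycles(steps: List[Step], cycles: int) -> None:
        """Repeat the four-step sequence; each completed cycle adds roughly one monolayer."""
        for n in range(1, cycles + 1):
            for step in steps:
                step.action()
                # a real tool would also wait on step.duration_s plus pressure/RF interlocks
            print(f"cycle {n}/{cycles} complete")

    # Hypothetical placeholders standing in for hardware calls:
    recipe = [
        Step("precursor A dose from circumferential ring", lambda: None, 0.1),
        Step("vacuum purge 1",                             lambda: None, 0.5),
        Step("precursor B dose ionized through ECR zone",  lambda: None, 0.2),
        Step("vacuum purge 2",                             lambda: None, 0.5),
    ]
    run_ald_cycles(recipe, cycles=3)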

The development team claims that MECRALD films are superior to other PEALD films in terms of higher density, lower contamination of carbon and oxygen (in non-oxides), and also show excellent step-coverage as would be expected from a surface-driven ALD process. The relatively high density of these films has been confirmed by lower wet etch rates. The single-wafer process non-uniformity on 300mm wafers is claimed at ~1% (1 sigma). The team is now exploring processes and precursors to be able to deposit additional films such as titanium nitride (TiN), tantalum nitride (TaN), and hafnium oxide (HfO2). In an interview with Solid State Technology, a spokesperson from Hitachi High-Technologies explained that, “We are now at the development stage, and the final specifications mainly depend on future achievements.”

The MECR source has been used in Hitachi High-Tech’s plasma chamber for IC conductor etch for many years, and is able to generate a stable high-density plasma at very low pressure (< 0.1 Pa). MECR plasmas provide wide process windows through accurate plasma parameter management, such as plasma distribution or plasma position control. The same plasma technology is also used to control ions and radicals in the company’s dry cleaning chambers.

“I’m really impressed by the continuous development of ALD technology, after more than 40 years since the invention,” commented Dr. Tuomo Suntola, the inventor and original patent-holder of the Atomic Layer Deposition method in Finland in 1974 and a member of the Picosun board of directors. “Now combining Hitachi and Picosun technologies means (there is) again a major breakthrough in advanced semiconductor manufacturing.”

MECRALD chambers can be clustered on a Picosun platform that features a Brooks robot handler. This technology is still under development, so it’s too soon to discuss manufacturing parameters such as tool cost and wafer throughput.

—E.K.

Vital Control in Fab Materials Supply-Chains – Part 2

Thursday, February 16th, 2017

By Ed Korczynski, Sr. Technical Editor

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation between the following industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jeff Hemphill, Staff Materials R&D Engineer, Intel Corporation,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

FIGURE 1: 2016 CMC Conference expert panelists (from left to right) John Smythe, Jonas Sundqvist, Jeff Hemphill, and Jean-Marc Girard. (Source: TECHCET CA)

KORCZYNSKI:  We heard from David Thompson [EDITOR’S NOTE:  Director of Process Chemistry, Applied Materials presented on “Agony in New Material Introductions -  Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD:  Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with it, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both, it’s not assay OR a specification of impurities. It’s a matter of specifying the trace components that really matter when you reach the point that the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL:  Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it, and in a sense trying to over-specify that. If you identify through mass-spectrometry and other techniques that some fraction of a percent is made up primarily of, say, five different species, it’s a matter of finding out how to individually monitor, track, and control those as separate parameters. So from a specification point of view what we want is not necessarily the lowest possible numbers, but expanding how many things we’re looking at so that we’re capturing everything that’s there.
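
Hemphill's point, that individual trace species need their own limits rather than being lumped into an assay remainder, can be pictured with a minimal lot-release check in Python; the species names and limits below are invented for illustration only.

    # Minimal sketch of a lot-release check that goes beyond assay alone.
    # Species names and limits are invented placeholders for illustration.
    SPEC = {
        "assay_pct_min": 95.0,
        "trace_ppm_max": {       # individually tracked species, not a lumped remainder
            "species_A": 2.0,
            "species_B": 10.0,
            "species_C": 50.0,
        },
    }

    def release_lot(assay_pct: float, traces_ppm: dict) -> list:
        """Return a list of violations; an empty list means the lot can ship."""
        violations = []
        if assay_pct < SPEC["assay_pct_min"]:
            violations.append(f"assay {assay_pct}% below {SPEC['assay_pct_min']}%")
        for species, limit in SPEC["trace_ppm_max"].items():
            measured = traces_ppm.get(species)
            if measured is None:
                violations.append(f"{species} not reported")   # unknowns are not acceptable
            elif measured > limit:
                violations.append(f"{species} at {measured} ppm exceeds {limit} ppm limit")
        return violations

    print(release_lot(96.2, {"species_A": 1.4, "species_B": 3.0, "species_C": 75.0}))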

KORCZYNSKI:  Is that something that you’re starting to push out to your suppliers?

HEMPHILL:  Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.

KORCZYNSKI:  Is that something that has changed by node? Back when things were simpler say at 45nm and larger, were these aspects of processing that we could safely ignore as ‘noise’ but are now important ‘signals’?

HEMPHILL:  Yes, we certainly didn’t pay as close attention just a couple of generations ago.

KORCZYNSKI:  That seems to lead us to questions about single-sourcing versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that because of all of the challenges we’ve heard about over the last day-and-a-half of this conference it creates a greater burden on the suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE:  I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So for example I  need to ask if a supplier has a place in the US and a place in France that makes the same thing, so that if something bad happens in one location it can still be sourced? Or do you have an alternate-supply agreement that if you can’t supply it you have an agreement with Company-X to supply it so that you still have control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source but it’s ‘single flexible-source.’

HEMPHILL:  There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the concept of being ‘copy-exactly’ that we use in our factories, we really need production lines to do that, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same, then we have a high degree of confidence that they will be able to produce the same material.

FIGURE 2: Jean-Marc Girard, CTO and Director of R&D of Air Liquide Advanced Materials, provided the supplier perspective. (Source: SEMI)

GIRARD:  I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When U.S. port workers went on strike for a long time, there was no way that material could ship out of the U.S. to customers. So, yes, there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

Vital Control in Fab Materials Supply-Chains

Wednesday, January 25th, 2017

By Ed Korczynski, Sr. Technical Editor

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation between the following industry experts:

  • Jean-Marc Girard, CTO and Director of R&D, Air Liquide Advanced Materials,
  • Jonas Sundqvist, Sr. Scientist, Fraunhofer IKTS; and co-chair of ALD Conference, and
  • John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI:  Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD:  For determining the specifications, to some extent it doesn’t matter because we are out of the world of specs; what matters is the control-limits. To Tim Hendry’s point in the keynote yesterday [EDITOR’S NOTE:  Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability so that you know the process space in which you can play. That is the most important information to be able to reach the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.
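
The approach Girard describes is easy to sketch: deliberately spread one parameter across containers from the same batch, measure the on-wafer response, and let the data define the usable window. The Python example below uses invented numbers purely for illustration.

    import numpy as np

    # Hypothetical experiment in the spirit of Girard's comment: containers from one
    # batch are deliberately varied in a single parameter (here an impurity, in ppm),
    # the on-wafer response is measured, and the usable window sets the control limit.
    impurity_ppm = np.array([0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16])
    response_nm  = np.array([50.1, 50.0, 50.2, 50.1, 50.3, 50.6, 51.0, 51.8, 52.9, 54.5])

    target_nm, tolerance_nm = 50.0, 1.0              # response must stay within +/-1.0nm
    fit = np.polyfit(impurity_ppm, response_nm, 2)   # simple empirical response model
    grid = np.linspace(impurity_ppm.min(), impurity_ppm.max(), 500)
    in_window = grid[np.abs(np.polyval(fit, grid) - target_nm) <= tolerance_nm]

    print(f"data-driven upper control limit: ~{in_window.max():.1f} ppm")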

FIGURE 1: Jonas Sundqvist, Sr. Scientist of Fraunhofer IKTS, discusses collaboration with industry on application-specific ALD R&D. (Source: TECHCET CA)

SUNDQVIST:  Take the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and it would not matter if there was slight contamination in the precursors because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely and the films have already been scaled so thin that, over time, the specification for high-k precursors has become more important.

SMYTHE:  I think it comes down to the premise that when you are doing vapor transport through a bubbler, some would argue that it’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management, not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires ‘fingerprinting’ to show the same results. The argument is that you do not accept “less-than” as part of a specification, you only accept what it is.

AUDIENCE QUESTION:  The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get adsorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash out a lot of the value of having really pure chemicals, if material moving through a delivery system into a chamber picks up contaminants that you spent a whole lot of money taking out at the point of delivery. What do you think about that?

GIRARD:  Well, this is a ‘crisis!’ When something like this starts to happen in a fab or even during the development cycles, you can’t prioritize resources and approaches you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST:  We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts, good process stability, and an ideal thermal processing range, but the growth rate goes down by 20% so you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE:  What we’re doing a lot more these days is chamber finger-printing, where we’re putting a quad-filtered mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you model your chemistry moving through a delivery line using something like the Schrödinger software, it’s not a big deal to see that you can use the mass spec to detect some synthesis happening in the line. We joke and call it ‘point of use synthesis’ but it’s not very funny. We are used to having spare delivery lines built-in so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI:  John, since Micron has fabs in Lehi and fabs in Singapore and other places, while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE:  What we really need are a few good ocean-going hydrofoil ships! The most complete answer is we first identify which things need ‘batch-qual’ so if we do a batch-qual in Virginia and know that material is going to Taiwan that we have confidence it will pass batch-qual in Taiwan. There are certain materials that we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch qual.

I think most of you are familiar with the concept of ‘ship-to-stock,’ when you have enough good statistical history and a good change management process with the supplier then you can do ship-to-stock and that reduces the batch-qual overhead. On a case by case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that in concentration above 5 ppm prevented the poly-styrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push to counting atoms.

[DISCLOSURE:  Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

—E.K.

2D Materials May Be Brittle

Tuesday, November 15th, 2016


By Ed Korczynski, Sr. Technical Editor

International researchers using a novel in situ quantitative tensile testing platform have tested the uniform in-plane loading of freestanding membranes of 2D materials inside a scanning electron microscope (SEM). Led by materials researchers at Rice University, the in situ tensile testing reveals the brittle fracture of large-area molybdenum diselenide (MoSe2) crystals and measures their fracture strength for the first time. Borophene monolayers with a wavy topography are more flexible.

A communication to Advanced Materials online (DOI: 10.1002/adma.201604201) titled “Brittle Fracture of 2D MoSe2” by Yinchao Yang et al. disclosed work by researchers from the USA and China led by Department of Materials Science and NanoEngineering Professor Jun Lou at Rice University, Houston, Texas. His team found that MoSe2 is more brittle than expected, and that flaws as small as one missing atom can initiate catastrophic cracking under strain.

“It turns out not all 2D crystals are equal. Graphene is a lot more robust compared with some of the others we’re dealing with right now, like this molybdenum diselenide,” says Lou. “We think it has something to do with defects inherent to these materials. It’s very hard to detect them. Even if a cluster of vacancies makes a bigger hole, it’s difficult to find using any technique.” The team has posted a short animation online showing crack propagation.

2D Materials in a 3D World

While all real physical things in our world are inherently built as three-dimensional (3D) structures, a single layer of flat atoms approximates a two-dimensional (2D) structure. Except for special superconducting crystals cooled below their critical temperature, when electrons flow through 3D materials there are always collisions which increase resistance and heat. However, certain single layers of crystals have atoms aligned such that electron transport is essentially confined within the 2D plane, and those electrons may move “ballistically” without being slowed by collisions.

MoSe2 is a dichalcogenide, a 2D semiconducting material that appears as a graphene-like hexagonal array from above but is actually a sandwich of Mo atoms between two layers of Se chalcogen atoms. MoSe2 is being considered for use in transistors and in next-generation solar cells, photodetectors, and catalysts, as well as other electronic and optical devices.

The Figure shows the micron-scale sample holder inside a SEM, where natural van der Waals forces held the sample in place on springy cantilever arms that measured the applied stress. Lead-author Yang is a postdoctoral researcher at Rice who developed a new dry-transfer process to exfoliate MoSe2 from the surface upon which it had been grown by chemical vapor deposition (CVD).

Custom built micron-scale mechanical jig used to test mechanical properties of nano-scale materials. (Source: Lou Group/Rice University)

The team measured the elastic modulus of MoSe2—the ratio of stress to strain while the material still returns to its initial state—at 177.2 (plus or minus 9.3) gigapascals (GPa); graphene’s modulus is more than five times higher. The fracture strength—the stress a material can handle before breaking—was measured at 4.8 (plus or minus 2.9) GPa. Graphene is nearly 25 times stronger.

“The important message of this work is the brittle nature of these materials,” Lou says. “A lot of people are thinking about using 2D crystals because they’re inherently thin. They’re thinking about flexible electronics because they are semiconductors and their theoretical elastic strength should be very high. According to our calculations, they can be stretched up to 10 percent. The samples we have tested so far broke at 2 to 3 percent (of the theoretical maximum) at most.”
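
Lou's 2-to-3-percent figure is consistent with the measured values above: to first order (assuming linear elasticity up to failure), fracture strain is simply fracture strength divided by elastic modulus, as the short Python check below shows.

    # First-order check: fracture strain ~ fracture strength / elastic modulus.
    modulus_gpa = 177.2     # measured elastic modulus of MoSe2 (from the article)
    strength_gpa = 4.8      # measured fracture strength (from the article)
    strain_pct = 100.0 * strength_gpa / modulus_gpa
    print(f"implied fracture strain ~{strain_pct:.1f}%")   # ~2.7%, matching the 2-3% quoted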

Borophene

“Wavy” borophene might be better, according to findings of other Rice University scientists. The Rice lab of theoretical physicist Boris Yakobson and experimental collaborators observed examples of naturally undulating metallic borophene—an atom-thick layer of boron—and suggested that transferring it onto an elastic surface would preserve the material’s stretchability along with its useful electronic properties.

Highly conductive graphene has promise for flexible electronics, but it is too stiff for devices that must repeatably bend, stretch, compress, or even twist. The Rice researchers found that borophene deposited on a silver substrate develops nanoscale corrugations, and due to weak binding to the silver can be exfoliated for transfer to a flexible surface. The research appeared recently in the American Chemical Society journal Nano Letters.

Rice University has been one of the world’s leading locations for 1D and 2D materials research ever since it was lucky enough to have a visionary genius like Richard Smalley show up in 1976, so we should expect excellent work from its Department of Materials Science and NanoEngineering. Still, this ground-breaking work is being done in labs using tools capable of handling micron-scale substrates, so even after a metaphorical “path” has been found it will take a lot of work to build up a manufacturing roadway capable of fabricating meter-scale substrates.

—E.K.

Air-Gaps for FinFETs Shown at IEDM

Friday, October 28th, 2016


By Ed Korczynski, Sr. Technical Editor

Researchers from IBM and Globalfoundries will report on the first use of “air-gaps” as part of the dielectric insulation around active gates of “10nm-node” finFETs at the upcoming International Electron Devices Meeting (IEDM) of the IEEE (ieee-iedm.org). Happening in San Francisco in early December, IEDM 2016 will again provide a forum for the world’s leading R&D teams to show off their latest-greatest devices, including 7nm-node finFETs by IBM/Globalfoundries/Samsung and by TSMC. Air-gaps reduce the dielectric capacitance that slows down ICs, so their integration into transistor structures leads to faster logic chips.

History of Airgaps – ILD and IPD

As this editor recently covered at SemiMD, in 1998, Ben Shieh—then a researcher at Stanford University and now a foundry interface for Apple Corp.—first published (Shieh, Saraswat & McVittie. IEEE Electron Dev. Lett., January 1998) on the use of controlled pitch design combined with CVD dielectrics to form “pinched-off keyholes” in cross-sections of inter-layer dielectrics (ILD).

In 2007, IBM researchers showed a way to use sacrificial dielectric layers as part of a subtractive process that allows air-gaps to be integrated into any existing dielectric structure. In an interview with this editor at that time, IBM Fellow Dan Edelstein explained, “We use lithography to etch a narrow channel down so it will cap off, then deliberately damage the dielectric and etch so it looks like a balloon. We get a big gap with a drop in capacitance and then a small slot that gets pinched off.”

Intel presented on their integration of air-gaps into on-chip interconnects at IITC in 2010 but delayed use until the company’s 14nm-node reached production in 2014. 2D-NAND fabs have been using air-gaps as part of the inter-poly dielectric (IPD) for many years, so there is precedent for integration near the gate-stack.

Airgaps for finFETs

Now researchers from IBM and Globalfoundries will report (IEDM Paper #17.1, “Air Spacer for 10nm FinFET CMOS and Beyond,” K. Cheng et al.) on the first air-gaps used at the transistor level in logic. Figure 1 shows that for these “10nm-node” finFETs the dielectric spacing—including the air-gap and both sides of the dielectric liner—is about 10nm. The liner needs to be ~2nm thin so that ~1nm of ultra-low-k sacrificial dielectric remains on either side of the ~5nm air-gap.

Fig.1: Schematic of partial air-gaps only above fin tops using dielectric liners to protect gate stacks during air-gap formation for 10nm finFET CMOS and beyond. (source: IEDM 2016, Paper#17.1, Fig.12)

These air-gaps reduced capacitance at the transistor level by as much as 25%, and in a ring oscillator test circuit by as much as 15%. The researchers say a partial integration scheme—where the air-gaps are formed only above the tops of the fins—minimizes damage to the finFET, as does the high-selectivity etching process used to fabricate them.
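
A rough way to see why the air-gap helps is to treat the spacer as dielectric layers in series, so that capacitance per unit area is epsilon_0 divided by the sum of thickness-over-k terms. In the Python sketch below the k values are assumptions for illustration; the 25% figure quoted above is for total transistor capacitance, of which the spacer is only one contributor, so the spacer-only reduction comes out much larger.

    EPS0 = 8.854e-12  # F/m

    def series_cap_per_area(layers):
        """Capacitance per unit area (F/m^2) of dielectric layers in series: (t_nm, k)."""
        return EPS0 / sum(t_nm * 1e-9 / k for t_nm, k in layers)

    # Assumed k values (illustrative only): liner ~5, sacrificial ultra-low-k ~2.4, air = 1.
    solid_spacer   = [(10, 5.0)]                                     # no air-gap
    air_gap_spacer = [(2, 5.0), (1, 2.4), (5, 1.0), (1, 2.4), (2, 5.0)]

    c_solid = series_cap_per_area(solid_spacer)
    c_air   = series_cap_per_area(air_gap_spacer)
    print(f"spacer-only capacitance reduced by ~{100 * (1 - c_air / c_solid):.0f}%")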

Figure 2 shows a cross-section transmission electron micrograph (TEM) of what can go wrong with etch-back air-gaps when all of the processes are not properly controlled. Because there are inherent process-design interactions needed to form repeatable air-gaps of desired shapes, this integration scheme should be extendable “beyond” the “10nm-node” to finFETs formed at tighter pitches. However, it seems likely that “5nm-node” logic FETs will use arrays of horizontal silicon nano-wires (NW), for which more complex air-gap integration schemes would seem to be needed.

Fig.2: TEM image of FinFET transistor damage—specifically, erosion of the fin and source-drain epitaxy—by improper etch-back of the air-gaps at 10nm dimensions. (source: IEDM 2016, Paper#17.1, Fig.10)

—E.K.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that the testability has also been designed to be functional structurally in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening has used very-low-voltage (VLV) operation to detect latent defects, but at very low voltage close to the transistor threshold, digital becomes analog; so while the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed keeping three classes of applications in mind:  Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and marshaling of the data they generate in ways that apply to multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st-generation of IoT devices likely include wide varieties of solution for different market-segments such as industrial vs. retail vs. consumer, or will most device use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common. ​

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn-cycle in Consumer is higher / faster compared to Enterprise. Today’s wearables or smart-phones are good reference examples. This will however vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, where consumer home use has longer cycles closer to industrial market requirements. We believe that the lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across a practically infinite set of tradeoffs.
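
Carlson's two levers, current density and temperature, map directly onto the standard electromigration lifetime model known as Black's equation. The Python sketch below uses generic textbook-style parameter values, not data for any particular process; only the ratios between the cases are meaningful.

    import math

    K_BOLTZMANN_EV = 8.617e-5  # eV/K

    def black_mttf(j_ma_um2, temp_c, a=1.0, n=2.0, ea_ev=0.9):
        """Relative electromigration MTTF from Black's equation: A * J^-n * exp(Ea/kT).
        a, n, and ea_ev are generic placeholder values for illustration."""
        t_kelvin = temp_c + 273.15
        return a * j_ma_um2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * t_kelvin))

    baseline   = black_mttf(j_ma_um2=1.0, temp_c=105)
    wider_wire = black_mttf(j_ma_um2=0.5, temp_c=105)   # half the current density
    cooler     = black_mttf(j_ma_um2=1.0, temp_c=85)    # 20 C cooler junction

    print(f"halving current density: ~{wider_wire / baseline:.1f}x lifetime")
    print(f"running 20 C cooler:     ~{cooler / baseline:.1f}x lifetime")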

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at the IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) going towards broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low-cost and small-size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use 45nm/65nm-node processes, which are “Node -3” (N-3) compared to sub-20nm-node chips in HVM. Five years from now, when the bleeding-edge will use 10nm-node technology, will IoT chips still use N-3 at the 28nm-node (considered a “long-lived node”) or will the 45nm-node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50 DMIPS. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecast to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm down to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000 DMIPS) and run high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on high-resolution displays. Connectivity will include BLE, WiFi and cellular with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm-centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and a RF-connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time as short run volumes characteristic of IoT demands are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State of the art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations for co-simulation of the MEMS device in the context of the IoT system.
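
The reduced-order modeling step Williams describes can be pictured with a toy example: a full finite-element-style system is projected onto its few dominant modes, and the small reduced matrices are what get exported as a compact behavioral model. The Python/NumPy sketch below uses a generic mass-spring chain rather than a real MEMS mesh, and is not the Mentor tool flow.

    import numpy as np

    # Toy reduced-order modeling: keep only the lowest-frequency modes of a mass-spring
    # chain, the same idea used to shrink MEMS FEA results into compact behavioral models.
    n = 50                               # degrees of freedom in the full model
    k_spring, m_mass = 1.0, 1.0
    K = 2 * k_spring * np.eye(n) - k_spring * np.eye(n, k=1) - k_spring * np.eye(n, k=-1)
    M = m_mass * np.eye(n)

    eigvals, eigvecs = np.linalg.eigh(np.linalg.solve(M, K))  # modes, lowest frequency first
    keep = 3                             # retain only the dominant low-frequency modes
    Phi = eigvecs[:, :keep]              # reduction basis

    K_reduced = Phi.T @ K @ Phi          # 3x3 reduced stiffness
    M_reduced = Phi.T @ M @ Phi          # 3x3 reduced mass
    print("full model:", K.shape, "-> reduced model:", K_reduced.shape)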

Molecular Modeling of Materials Defects for Yield Recovery

Monday, March 21st, 2016


By Ed Korczynski, Sr. Technical Editor

New materials are being integrated into High Volume Manufacturing (HVM) of semiconductor ICs, while old materials are being extended with more stringent specifications. Defects within materials cause yield losses in HVM fabs, and engineers must identify the specific source of an observed defect before corrective steps can be taken. Honeywell Electronic Materials has been using molecular modeling software provided by Scienomics to both develop new materials and to modify old materials. Modeling allowed Honeywell to uncover the origin of subtle solvation-based film defects within Bottom Anti-Reflective Coatings (BARC) which were degrading yield in a customer’s lithographic process module.

Scienomics sponsored a Materials Modeling and Simulations online seminar on February 26th of this year, featuring Dr. Nancy Iwamoto of Honeywell discussing how Scienomics software was used to accelerate response to a customer’s manufacturing yield loss. “This was a product running at a customer line,” explained Iwamoto, “and we needed to find the solution.” The product was a Bottom Anti-Reflective Coating (BARC) organo-silicate polymer delivered in solution form and then spun on wafers to a precise thickness.

Originally observed during optical inspection by fab engineers as faint 1-2 micron spots in the BARC, the new defect type was difficult to see yet could be correlated to lithographic yield loss. The defects appeared to be discrete within the film instead of on the top surface, so the source was likely some manner of particle, yet filters did not capture these particles.

The filter captured some particles rich in silicon, as well as other particles rich in carbon. Sequential filtration showed that particles were passing through impossibly small pores, which suggested that the particles were built of deformable gel-like phases. The challenge was to find the material-handling or processing situation that created thermodynamically possible and kinetically probable conditions for forming such gels.

Fig: Materials Processes and Simulations (MAPS) gives researchers access to visualization and analysis tools in a single user interface together with access to multiple simulation engines. (Source: Scienomics)

Molecular modeling and simulation is a powerful technique that can be used for materials design, functional upgrades, process optimization, and manufacturing. The Figure shows a dashboard for Scienomics’ modeling platform. Best practices in molecular modeling to find out-of-control parameters in HVM include a sequential workflow:

  • Build correct models based on experimental observables,
  • Simulate potential molecular structures based on known chemicals and hierarchical models,
  • Analyze manufacturing variabilities to identify excursion sources, and
  • Propose remedy for failure elimination.

Honeywell Electronic Materials researchers had very few experimental observables from which to start:  the phenomenon was rare (yet affected yield), the particles were not filterable, and thermodynamic hydrolysis parameters indicated the species must be quasi-stable. Re-testing of product and re-examination of Outgoing Quality Control (OQC) data at the Honeywell production site showed that the molecular weight of the product was consistent with the desired distribution. There was also an observed BARC thickness increase of ~1nm on the wafer associated with the presence of these defects.

Using the modeling platform, Honeywell looked at the solubility parameters for different small molecular chains off of known-branched back-bone centers. Gel-like agglomerations could certainly be formed under the wrong conditions. Once the agglomerations form, they are not very stable so they can probably dis-aggregate when being forced through a filter and then re-aggregate on the other side.

What conditions could induce gel formation? After a few weeks of modeling, it was determined that temperature variations had the greatest influence on the agglomeration, and that variability was strongest at the ~250K recommended for storage. Storage at 230K resulted in measurably worse agglomeration, and any extreme in heating/cooling ramp rate tended to reduce solubility.
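
A hedged sketch of the kind of screen such a model enables: evaluate a temperature-dependent solubility estimate over candidate storage temperatures and flag where gel-like agglomeration becomes likely. The functional form and every number below are invented placeholders, not Honeywell's or Scienomics' model; the only point is the workflow of scanning storage conditions against a solubility criterion.

    import math

    # Hypothetical screen over storage temperatures: flag conditions where an assumed
    # temperature-dependent solubility estimate drops below a gel-risk threshold.
    # The functional form and all numbers are invented placeholders for illustration.
    def relative_solubility(temp_k, t_mid=245.0, width=8.0):
        """Toy logistic solubility curve: solubility falls as storage gets colder."""
        return 1.0 / (1.0 + math.exp(-(temp_k - t_mid) / width))

    GEL_RISK_THRESHOLD = 0.5

    for temp_k in (230, 240, 250, 260):
        s = relative_solubility(temp_k)
        flag = "gel risk" if s < GEL_RISK_THRESHOLD else "ok"
        print(f"{temp_k} K: relative solubility {s:.2f} -> {flag}")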

Molecular modeling was used in a forensic manner to find that the root cause of gel-like defects was related to thermal history:

*   Thermodynamics determined the most likely oligomers that could agglomerate,

*   Temperature-dependent solubility models determined which particles would reach wafers.

Because of the on-wafer BARC thickness increase of ~1nm, fab engineers could use all of the molecular modeling information to trace the temperature variation to bottles installed in the lithographic track tool. The fab was able to change specifications for the storage and handling of the BARC bottles to bring the process back into control.

Solid State Watch: August 21-27, 2015

Monday, August 31st, 2015