
Posts Tagged ‘Applied Materials’


Enabling the A.I. Era

Monday, October 23rd, 2017


By Pete Singer, Editor-in-Chief

There’s a strongly held belief now that the way in which semiconductors will be designed and manufactured in the future will be largely determined by a variety of rapidly growing applications, including artificial intelligence/deep learning, virtual and augmented reality, 5G, automotive, the IoT and many other uses, such as bioelectronics and drones.

The key question for most semiconductor manufacturers is how they can benefit from these trends. One goal of a recent panel assembled by Applied Materials for an investor day in New York was to answer that question.

Jay Kerley, Praful Krishna, Mukesh Khare, Matt Johnson and Christos Georgiopoulos (left to right)

The panel, focused on “enabling the A.I. era,” was moderated by Sundeep Bajikar (former Sellside Analyst, ASIC Design Engineer). The panelists were: Christos Georgiopoulos (former Intel VP, professor), Matt Johnson (SVP in Automotive at NXP), Jay Kerley (CIO of Applied Materials), Mukesh Khare (VP of IBM Research) and Praful Krishna (CEO of Coseer). The panel discussion included three debates: the first one was “Data: Use or Discard”; the second was “Cloud versus Edge”; and the third was “Logic versus Memory.”

“There’s a consensus view that there will be an explosion of data generation across multiple new categories of devices,” said Bajikar, noting that the most important one is the self-driving car. NXP’s Johnson responded that “when it comes to data generation, automotive is seeing amazing growth.” He noted the megatrends in this space: autonomy, connectivity, the driver experience, and electrification of the vehicle. “These are changing automotive in huge ways. But if you look underneath that, AI is tied to all of these,” he said.

He said that estimates of data generation range from 25 gigabytes per hour on the low end up to 250 gigabytes or more per hour on the high end. “It’s going to be, by the second, the largest data generator that we’ve seen ever, and it’s really going to have a huge impact on all of us.”
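Those per-hour estimates translate into striking sustained rates. A quick back-of-the-envelope conversion (the 25 and 250 GB/hour figures come from the estimates above; the fleet size is a purely illustrative assumption):

```python
# Back-of-the-envelope conversion of the per-hour data-rate estimates
# quoted above. The 25 and 250 GB/hour figures are from the article;
# the one-million-vehicle fleet is an illustrative assumption.

def gb_per_hour_to_mb_per_second(gb_per_hour: float) -> float:
    """Convert a GB/hour rate to MB/second (1 GB = 1000 MB)."""
    return gb_per_hour * 1000 / 3600

low, high = 25, 250  # GB per hour, low and high estimates cited
print(f"low estimate:  {gb_per_hour_to_mb_per_second(low):.1f} MB/s")
print(f"high estimate: {gb_per_hour_to_mb_per_second(high):.1f} MB/s")

# One million such vehicles at the high estimate (illustrative):
fleet_tb_per_hour = 1_000_000 * high / 1000
print(f"1M vehicles: {fleet_tb_per_hour:,.0f} TB/hour")
```

Even the low estimate is a sustained stream of roughly 7 MB every second per vehicle, which makes clear why not all of it can be shipped to the cloud.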

Georgiopoulos agreed that an enormous amount of infrastructure is being built right now. “That infrastructure consists of both the ability to generate the data and the ability to process it, on the edge as well as in the cloud,” he said. The good news is that sorting that data may be getting a little easier. “One of the more important things over the last four or five years has been the quality of the data that’s getting generated, which diminishes the need for extreme algorithmic development,” he said. “The better the data we get, the more reasonable and the simpler the AI neural networks can be for us to extract the information we need and turn that data into dollars.”

Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to the sources of that information. Connectivity and latency challenges, bandwidth constraints and greater functionality embedded at the edge all favor distributed models. Jay Kerley, CIO of Applied Materials, addressed the cloud-versus-edge debate, framing it as a progression from data to actual value and finally to intelligence. “There’s no doubt that with the pervasiveness of the edge and billions of devices, data is going to be generated exponentially. But the true power comes in harnessing that data in the core, taking it and turning it into actual intelligence. I believe that it’s going to happen in both places, and as a result of that, the edge is not going to only generate data, it’s going to have to consume data, and it’s going to have to make decisions. When you’re talking about problems around latency, maybe problems around security, problems around privacy, that can’t be overcome, the edge is going to have to be able to make decisions,” he said.

Kerley said there used to be a massive push to build data centers, but that’s changed. “You want to shorten the latency to the edge, so that data centers are being deployed in a very pervasive way,” he said. What’s also changing is that cloud providers have a huge opportunity to invest in the edge, to make the edge possible. “If they don’t, they are going to get cut out,” he added. “They’ve got to continue to invest to make access into the cloud as easy, and as frictionless as possible. At the end of the day, with all that data coming into these cloud data centers, the processing of that information, turning it into actual intelligence, turning it into value, is absolutely critical.”

Mukesh Khare (VP of IBM Research) also addressed the value of data. “We all believe that data is our next natural resource. We’re not going to discard it. You’re going to go and figure out how to generate value out of it,” he said.

Khare said that today, most artificial intelligence is too complex. It requires training, building models and then doing inferencing using those models. “The reason there is growth in artificial intelligence is the exponential increase in data, and cheap compute. But keep in mind that the compute we are using right now is the old compute. That compute was built to do spreadsheets, databases, the traditional compute.

“Since that compute is cheap and available, we are making use of it. Even with the cheap and available compute in the cloud, it takes months to generate those models. So right now, most of the training is still being done in the cloud, whereas inferencing, making use of that model, is done at the edge. Going forward, however, that will not be possible, because the devices at the edge are continuously generating so much data that you cannot send it all back to the cloud, generate models, and come back to the edge.”

“Eventually, a lot of training needs to move to the edge as well,” Khare said. This will require innovation so that the compute now done in the cloud can be transferred to the edge on low-power, inexpensive devices. Applied Materials’ Kerley added that innovation has to happen not only at the edge, but in the data center and at the network layer, as well as in the software frameworks. “Not only the AI frameworks, but what’s driving compression, de-duplication at the storage layer is absolutely critical as well,” he said.

NXP’s Johnson also weighed in on the edge vs cloud debate with the opinion that both will be required for automotive. “For automotive to do what it needs to, both need to evolve,” he said. “In the classic sense of automotive, the vehicle would be the edge, which needs access to the cloud frequently, or non-stop. I think it’s important to remember that the edge values efficiency. So, efficiency, power, performance and cost are all very important to make this happen,” he added.

Automotive security adds another degree of complexity. “If you think of something that’s always connected, and has the ability to make decisions and control itself, the security risk is very high. And it’s not just to the consumer of the vehicle, but also to the company itself that’s providing these vehicles. It’s actually foundational that the level of safety, security, reliability, that we put into these things is as good as it can be,” Johnson said.

Georgiopoulos said a new compute model is required for A.I. “It’s important to understand that the traditional workloads that we all knew and loved for the last forty years don’t apply with A.I. They are completely new workloads that require very different types of capabilities from the machines that you build,” he said. “With these new kinds of workloads, you’re going to require not only new architectures, you’re going to require new system-level design. And you’re going to require new capabilities like frameworks.” He said TensorFlow, an open-source software library for machine intelligence originally developed by researchers and engineers on the Google Brain team, seems to be the biggest framework right now. “Google made it public for only one very good reason. The TPU that they have created runs TensorFlow better than any other hardware around. Well, guess what? If you write something on TensorFlow, you want to go to the Google backend to run it, because you know you’re going to get great results. These kinds of architectures are being created right now, and we’re going to see a lot more of them,” he said.

Georgiopoulos said this “architecture war” is by no means over. “There are no standardized ways by which you’re going to do things. There is no one language that everybody’s going to use for these things. It’s going to develop, and it’s going to develop over the next five years. Then we’ll figure out which architecture may be prevalent or not. But right now, it’s an open space,” he said.

IBM’s Khare weighed in on how transistors and memory will need to evolve to meet the demands of new AI computer architectures. “For artificial intelligence in our world, we have to think very differently. This is an inflection, but this is the kind of inflection that the world has not seen for the last 60 years.” He said the world has moved from the tabulating system era (1900 to 1940) to the programmable system era of the 1950s, which we are still in. “We are entering the era of what we call cognitive computing, which we believe started in 2011, when IBM first demonstrated artificial intelligence through our Watson system, which played Jeopardy,” he said.

Khare said “we are still using the technology of programmable systems, such as logic, memory, the traditional way of thinking, and applying it to AI, because that’s the best we’ve got.”

AI needs more innovation at all levels, Khare said. “You have to think about systems level optimization, chip design level optimization, device level optimization, and eventually materials level optimization,” he said.  “The artificial workloads that are coming out are very different. They do not require the traditional way of thinking — they require the way the brain thinks. These are the brain inspired systems that will start to evolve.”

Khare believes analog compute might hold the answer. “Analog compute is where compute started many, many years ago. It was never adopted because the precision was not high enough, so there were a lot of errors. But the brain doesn’t think in 32 bits, our brain thinks analog, right? So we have to bring those technologies to the forefront,” he said. “In research at IBM we can see that there could be several orders of magnitude reduction in power, or improvement in efficiency that’s possible by introducing some of those concepts, which are more brain inspired.”
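Khare’s point about precision can be made concrete with a toy experiment: quantize weights to a handful of levels, as a low-precision or analog device effectively would, and compare a dot product against the full-precision result. The sketch below is purely illustrative, with made-up numbers; it is not IBM’s method:

```python
# Toy illustration of reduced-precision compute: snap weights to a small
# number of uniform levels (as a low-bit or analog device effectively
# would) and compare the dot product with the full-precision result.
# All values are made up for illustration.

def quantize(x: float, bits: int, max_abs: float = 1.0) -> float:
    """Snap x to the nearest of 2**bits - 1 uniform levels in [-max_abs, max_abs]."""
    step = 2 * max_abs / (2 ** bits - 1)
    return round(x / step) * step

weights = [0.21, -0.77, 0.53, 0.10, -0.35]
inputs = [1.0, 0.5, -0.25, 0.8, 0.3]

exact = sum(w * x for w, x in zip(weights, inputs))
low_bit = sum(quantize(w, 4) * x for w, x in zip(weights, inputs))

# The 4-bit result lands close to the full-precision one, hinting at why
# low-precision hardware can be acceptable for inference workloads.
print(f"full precision: {exact:.4f}, 4-bit weights: {low_bit:.4f}")
```

Whether such errors are tolerable depends entirely on the workload, which is why, as Khare notes, analog approaches were historically set aside and are only now being revisited for brain-inspired computing.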

Innovations at 7nm to Keep Moore’s Law Alive

Thursday, January 19th, 2017


By Dave Lammers, Contributing Editor

Despite fears that Moore’s Law improvements are imperiled, the innovations set to come in at the 7nm node this year and next may disprove the naysayers. EUV lithography is likely to gain a toehold at the 7nm node, competing with multi-patterning and, if all goes well, shortening manufacturing cycles. Cobalt may replace tungsten in an effort to reduce resistance-induced delays at the contacts, a major challenge with finFET transistors, experts said.

While the industry did see a slowdown in Moore’s Law cost reductions when double patterning became necessary several years ago, Scotten Jones, who runs a semiconductor consultancy focused on cost analysis, said Intel and the leading foundries are back on track in terms of node-to-node cost improvements.

Speaking at the recent SEMI Industry Strategy Symposium (ISS), Jones said his cost modeling backs up claims made by Intel, GlobalFoundries, and others that their leading-edge processes deliver on die costs. Cost improvements stalled at TSMC at the 16nm node due to multi-patterning, Jones said. “That pause at TSMC fooled a lot of people. The reality now may surprise those people who said Moore’s Law was dead. I don’t believe that, and many technologists don’t believe that either,” he said.

As Intel has adopted a roughly 2.5-year cadence for its more-aggressive node scaling, Jones said “the foundries are now neck and neck with Intel on density.” Intel has reached best-ever yield levels with its finFET-based process nodes, and the foundries also report reaching similar yield levels for their FinFET processes. “It is hard, working up the learning curve, but these companies have shown we can get there,” he said.

IC Knowledge cost models show the chip industry is succeeding in scaling density and costs. (Source: Scotten Jones presentation at 2017 SEMI ISS)

TSMC, spurred by its contract with Apple to supply the main iPhone processors, is expected to be first to ship its 7nm products late this year, though its design rules (contacted poly pitch and minimum metal pitch) are somewhat close to Intel’s 10nm node.

While TSMC and GlobalFoundries are expected to start 7nm production using double and quadruple patterning, they may bring in EUV lithography later. TSMC has said publicly it plans to exercise EUV in parallel with 193i manufacturing for the 7nm node. Samsung has put its stake in the ground to use EUV rather than quadruple patterning in 2018 for critical layers of its 7nm process. Jones, president of IC Knowledge LLC, said Intel will have the most aggressive CPP and MPP pitches for its 7nm technology, and is likely to use EUV in 2019-2020 to push its metal pitches to the minimum possible with EUV scanners.

EUV progress at imec

In an interview at the 62nd International Electron Devices Meeting (IEDM) in San Francisco in early December, An Steegen, senior vice president of process technology at Imec (Leuven, Belgium), said Imec researchers are using an ASML NXE:3300B scanner with 0.33 NA optics and an 80-Watt light source to pattern about 50 wafers per hour.

“The stability on the tool, the up time, has improved quite a lot, to 55 percent. In the best weeks we go well above 70 percent. That is where we are at today. The next step is a 125-Watt power supply, which should start rolling out in the field, and then 250 Watts.”

Steegen said progress is being made in metal-containing EUV resists, and in development of pellicles “which can withstand hydrogen in the chamber.”

If those challenges can be met, EUV would enable single patterning for vias and several metal layers in the middle of the line (MOL), using cut masks to print the metal line ends. “For six or seven thin wires and vias, at the full (7nm node) 32nm pitch, you can do it with a single exposure by going to EUV. The capability is there,” Steegen said.

TSMC’s 7nm development manager, S.Y. Wu, speaking at IEDM, said quadruple patterning and etch (4P4E) will be required for critical layers until EUV reaches sufficient maturity. “EUV is under development (at TSMC), and we will use 7nm as the test vehicle.”

Huiming Bu was peppered with questions following a presentation of the IBM Alliance 7nm technology at IEDM.

Huiming Bu, who presented the IBM Alliance 7nm paper at IEDM, said “EUV delivers significant depth of field (DoF) improvement” compared with the self-aligned quadruple (SAQP) required for the metal lines with immersion scanners.

A main advantage for EUV compared with multi-patterning is that designs would spend fewer days in the fabs. Speaking at ISS, Gary Patton, the chief technology officer at GlobalFoundries, said EUV could result in 30-day reductions in fab cycle times, compared with multiple patterning with 193nm immersion scanners, based on 1.5 days of cycle time per mask layer.
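Patton’s 30-day estimate follows directly from the per-layer figure he cited: at 1.5 days of fab cycle time per mask layer, a 30-day saving corresponds to eliminating 20 mask layers. A quick sketch of that arithmetic (the replacement scenario at the end is an illustrative assumption, not a GlobalFoundries disclosure):

```python
# Cycle-time arithmetic based on Patton's figure of 1.5 days of fab
# cycle time per mask layer.

DAYS_PER_MASK_LAYER = 1.5

def days_saved(mask_layers_removed: int) -> float:
    """Fab cycle time saved by eliminating mask layers."""
    return mask_layers_removed * DAYS_PER_MASK_LAYER

def layers_implied(days: float) -> float:
    """Mask layers that must disappear to save a given number of days."""
    return days / DAYS_PER_MASK_LAYER

# The 30-day reduction cited at ISS implies 20 fewer mask layers:
print(layers_implied(30.0))  # 20.0

# For example, EUV single exposure replacing 4-mask quadruple
# patterning (a net of 3 masks per layer) on about seven critical
# layers would get there (illustrative scenario):
print(days_saved(7 * 3))  # 31.5
```

The same arithmetic explains why the benefit grows with the number of critical layers converted to single-exposure EUV.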

Moreover, EUV patterns would produce less variation in electrical performance and enable tighter process parameters, Patton said.

Since designers became accustomed to using several colors to identify multi-patterning layers at the 14nm node, the use of double and quadruple patterning at 7nm should not present extraordinary design challenges, and the later move from multi-patterning to EUV for critical layers will be largely transparent to design teams.

Interconnect resistance challenges

As interconnects scale and become narrower, signals slow down as electrons scatter at the metal grain boundaries. Jones estimates that as much as 85 percent of parasitic resistance is in the contacts.
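The geometry works against designers here: for a wire, R = ρL/A, and at small dimensions the effective resistivity itself rises as grain-boundary and surface scattering kick in. A rough sketch (bulk copper resistivity is a standard textbook value; the 2x scattering multiplier at 20nm is an illustrative assumption, not measured data):

```python
# Why narrow interconnects hurt: wire resistance R = rho * L / A grows
# as the cross-section shrinks, and the effective resistivity itself
# rises once dimensions approach the electron mean free path because of
# grain-boundary and surface scattering. Bulk copper resistivity is a
# standard value; the 2x scattering multiplier is an illustrative
# assumption.

RHO_CU_BULK = 17.0e-9  # ohm-meter, bulk copper near room temperature

def wire_resistance(rho_ohm_m: float, length_nm: float,
                    width_nm: float, height_nm: float) -> float:
    """Resistance in ohms of a rectangular wire of the given dimensions."""
    area_m2 = (width_nm * 1e-9) * (height_nm * 1e-9)
    return rho_ohm_m * (length_nm * 1e-9) / area_m2

# The same 1-micron run of wire at two widths (square cross-section),
# with resistivity assumed to double at 20nm due to scattering:
r_wide = wire_resistance(RHO_CU_BULK, 1000, 80, 80)
r_narrow = wire_resistance(2 * RHO_CU_BULK, 1000, 20, 20)
print(f"80nm wire: {r_wide:.1f} ohm; 20nm wire: {r_narrow:.1f} ohm")
```

Shrinking the width by 4x raises resistance by 16x from the area alone, and the scattering penalty multiplies on top of that, which is why the contact and MOL layers have become the pinch point described below.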

For the main interconnects, nearly two decades ago, the industry began a switch from aluminum to copper. Tungsten has been used for the contacts, vias, and other metal lines near the transistor, partly out of concerns that copper atoms would “poison” the nearby transistors.

Tungsten worked well, partly because the bi-level liner – tantalum nitride at the interface with the inter-level dielectric (ILD) and tantalum at the metal lines – was successful at protecting against electromigration. The TaN-Ta liner is needed because the fluorine-based CVD processes can attack the silicon. For tungsten contacts, Ti serves to getter oxygen, and TiN – which has high resistance — serves as an oxygen and fluorine barrier.

However, as contacts and MOL lines shrank, the thickness of the liner began to approach that of the tungsten fill itself.

Dan Edelstein, an IBM fellow who led development of IBM’s industry-leading copper interconnect process, said a “pinch point” has developed for FinFETs at the point where contacts meet the middle-of-the-line (MOL) interconnects.

“With cobalt, there is no fluorine in the deposition process. There is a little bit of barrier, which can be either electroplated or deposited by CVD, and which can be polished by CMP. Cobalt is fairly inert; it is a known fab-friendly metal,” Edelstein said, due to its longstanding use as a silicide material.

As the industry evaluated cobalt, Edelstein said researchers have found that cobalt “doesn’t present a risk to the device. People have been dropping it in, and while there are still some bugs that need to be worked out, it is not that hard to do. And it gives a big change in performance,” he said.

Annealing advantages to Cobalt

Contacts are a “pinch point” and the industry may switch to cobalt (Source: Applied Materials)

An Applied Materials senior director, Mike Chudzik, writing on the company’s blog, said the annealing step during contact formation also favors cobalt: “It’s not just the deposition step for the bulk fill involved – there is annealing as well. Co has a higher thermal budget making it possible to anneal, which provides a superior, less granular fill with no seams and thus lowers overall resistance and improves yield,” Chudzik explained.

Increasing the volume of material in the contact and getting more current through is critical at the 7nm node. “Pretty much every chipmaker is working aggressively to alleviate this issue. They understand if it’s not resolved then it won’t matter what else is done with the device to try and boost performance,” Chudzik said.

Prof. Koike strikes again

Innovations underway at a Japanese university aim to provide a liner between the cobalt contact fill material and the adjacent materials. At a Sunday short course preceding the IEDM, Reza Arghavani of Lam Research said that by creating an alloy of cobalt and approximately 10 percent titanium, “magical things happen” at the interfaces for the contact, M0 and M1 layers.

The idea for adding titanium arose from Prof. Junichi Koike at Tohoku University, the materials scientist who earlier developed a manganese-copper solution for improved copper interconnects. For contacts and MOL, the Co-Ti liner prevents diffusion into the spacer oxide, Arghavani said. “There is no (resistance) penalty for the liner, and it is thermally stable, up to 400 to 500 degrees C. It is a very promising material, and we are working on it. W (tungsten) is being pushed as far as it can go, but cobalt is being actively pursued,” he said.

Stressor changes ahead

Presentations at the 2016 IEDM by the IBM Alliance (IBM, GlobalFoundries, and Samsung) described the use of a stress relaxed buffer (SRB) layer to induce stress, but that technique requires solutions for the defects introduced in the silicon layer above it. As a result of that learning process, SRB stress techniques may not come into the industry until the 5 nm node, or a second-generation 7nm node.

Technology analyst Dick James, based in Ottawa, said over the past decade companies have pushed silicon-germanium stressors for the PFET transistors about as far as practical.

“The stress mechanisms have changed since Intel started using SiGe at the 90nm node. Now, companies are a bit mysterious, and nobody is saying what they are doing. They can’t do tensile nitride anymore at the NFET; there is precious little room to put linear stress into the channel,” he said.

The SRB technique, James said, is “viable, but it depends on controlling the defects.” He noted that Samsung researchers presented work on defects at the IEDM in December. “That was clearly a research paper, and adding an SRB in production volumes is different than doing it in an R&D lab.”

James noted that scaling by itself helps maintain stress levels, even as the space for the stressor atoms becomes smaller. “If companies shorten the gate length and keep the same stress as before, the stress per nanometer at least maintains itself.”

Huiming Bu, the IBM researcher, was optimistic, saying that the IBM Alliance work succeeded at adding both compressive and tensile strain. The SRB/SSRW approach used by the IBM Alliance was “able to preserve a majority – 75 percent – of the stress on the substrate.”

Jones, the IC Knowledge analyst, said another area of intense interest in research is high-mobility channels, including the use of SiGe channel materials in the PMOS FinFETS.

He also noted that for the NMOS finFETs, “introducing tensile stress in fins is very challenging, with lots of integration issues.” Jones said using an SRB layer is a promising path, but added: “My point here is: Will it be implemented at 7 nm? My guess is no.”

Putting it in a package

Steegen said innovation is increasingly being done by the system vendors, as they figure out how to combine different ICs in new types of packages that improve overall performance.

System companies, faced with rising costs for leading-edge silicon, are figuring out “how to add functionality, by using packaging, SOC partitioning and then putting them together in the package to deliver the logic, cache, and IOs with the right tradeoffs,” she said.

MRAM Takes Center Stage at IEDM 2016

Monday, December 12th, 2016


By Dave Lammers, Contributing Editor

The IEDM 2016 conference, held in early December in San Francisco, was somewhat of a coming-out party for magneto-resistive memory (MRAM). The MRAM presentations at IEDM were complemented by a special MRAM-focused poster session – organized by the IEEE Magnetics Society in cooperation with the IEEE Electron Devices Society (EDS) – with 33 posters and a lively crowd.

And in the opening keynote speech of the 62nd International Electron Devices Meeting, Seok-hee Lee, executive vice president at SK Hynix (Seoul), set the stage by saying that the race is on between DRAM and emerging memories such as MRAM. “Originally, people thought that DRAM scaling would stop. Then engineers in the DRAM and NAND worlds worked hard and pushed out the end further in the future,” he said.

While cautioning that MRAM bit cells are larger than DRAM cells and thus more costly, Lee said MRAM has “very strong potential in embedded memory.”

SK Hynix is not the only company with a full-blown MRAM development effort underway. Samsung, which earlier bought MRAM startup Grandis and which has a materials-related research relationship with IBM, attracted a standing-room-only crowd to its MRAM paper at IEDM. TSMC is working with TDK on its program, and Sony is using 300mm wafers to build high-performance MRAMs for startup Avalanche Technology.

And one knowledgeable source said “the biggest processor company also has purchased a lot of equipment” for its MRAM development effort.

Dave Eggleston, vice president of emerging memory at GlobalFoundries, said he believes GlobalFoundries is the furthest along on the MRAM optimization curve, partly due to its technology and manufacturing partnership with Everspin Technologies (Chandler, Ariz.). Everspin has been working on MRAM for more than 20 years, and has shipped nearly 60 million discrete MRAMs, largely to the cache buffering and industrial markets.

GlobalFoundries has announced plans to use embedded STT-MRAM in its 22FDX platform, which uses fully-depleted SOI technology, as early as 2018.

Future versions of MRAM – such as spin orbit torque (SOT) MRAM and voltage-controlled MRAM – could compete with SRAM and DRAM. Analysts said today’s spin-transfer torque (STT) MRAM – referring to the torque that arises from the transfer of electron spins to the free magnetic layer – is vying for commercial adoption as ever-faster processors need higher-performance memory subsystems.

STT-MRAM is fast enough to fit in as a new memory layer below the processor and the SRAM-based L1/L2 cache layers, and above DRAM and storage-level NAND flash layers, said Gary Bronner, vice president of research at Rambus Inc.

With good data retention and speed, and medium density, MRAM “may have advantages in the lower-level caches” of systems which have large amounts of on-chip SRAM, Bronner said, due in part to MRAM’s smaller cell size compared with six-transistor SRAM. While DRAM in the sub-20nm nodes faces cost issues as it moves to more complex capacitor structures, Bronner said that “thus far (STT-MRAM) is not cheaper than DRAM.”

IBM researchers, who pioneered the spin-transfer torque approach to MRAM, are working on a high-performance MRAM technology that could be used in servers.

As of now, MRAM density is limited largely by the size of the transistors required to drive sufficient current to the magnetic tunnel junction (MTJ) to flip its magnetic orientation. Dan Edelstein, an IBM fellow working on MRAM development at IBM Research, said “it is a tall order for MRAM to replace DRAM. But MRAM could be used in system-level memory architectures and as an embedded memory technology.”

PVD and etch challenges

Edelstein, who was a key figure in developing copper interconnects at IBM some twenty years ago, said MRAM only requires a few extra mask layers to be integrated into the BEOL in logic. But there remain major challenges in improving the throughput of the PVD deposition steps required to deposit the complex material stack and to control the interfacial layers.

The PVD steps must deposit approximately 30 layers and control them to Angstrom-level precision. Deposition must occur under very low base pressure, and in oxygen- and water-vapor free environments. While tool vendors are working on productization of 300mm MRAM deposition tools, Edelstein said keeping particles under control and minimizing the maintenance and chamber cleaning are all challenging.

Etching the complex materials stack is even harder. Chemical RIE is not practical for MRAMs at this point, and using ion beam etching (IBE) presents challenges in terms of avoiding re-deposition of material sputtered off during the IBE etch steps for the high-aspect-ratio MTJs.

For the tool vendors, MRAMs present challenges as companies go from R&D to high-volume manufacturing, Edelstein said.

A Samsung MRAM researcher, Y.J. Song, briefly described IBE challenges during an IEDM presentation on an embedded STT-MRAM with a respectable 8-Mbit density and a cell size of 0.0364 square microns. “We worked to optimize the contact etching,” using IBE during the patterning steps, he said. The short-fail rate was reduced while keeping the processing temperature below 350°C, Song said.

Samsung embedded an STT-MRAM module in the copper back end of the line (BEOL) of a 28nm logic process. (Source: Samsung presentation at IEDM 2016).

Many of the presentations at IEDM described improvements in key parameters, such as the tunnel magnetic resistance (TMR), cell size, data retention, and read error rates at high temperatures or low operating voltages.

An SK Hynix presentation described a 4-Gbit STT-MRAM optimized as a stand-alone, high-density memory. “There still are reliability issues for high-density MRAM memory,” said SK Hynix’s S.-W. Chung. The industry needs to boost the TMR “as high as possible” and work on improving the “not sufficiently long” retention times.

At high temperatures, error rates tend to rise, a concern in certain applications. And since devices are subjected to brief periods of high temperature during reflow soldering, that issue must be dealt with as well, as a Bosch presentation at IEDM detailed.

Cleans and encapsulation important

Gouri Sankar Kar, who is coordinating the MRAM research program at the Imec consortium (Leuven, Belgium), said one challenge is to reduce the cell size and pitch without damaging the magnetic properties of the magnetic tunnel junction. For the 28nm logic node, embedded MRAM would be in the range of a 200nm pitch and 45nm critical dimensions (CDs). At the IEDM poster session, Imec presented an 8nm cell size STT-MRAM that could intersect the 10nm logic node, with the MRAM pitch in the 100nm range. GlobalFoundries, Micron, Qualcomm, Sony and TSMC are among the participants in the Imec MRAM effort.

Kar said in addition to the etch challenges, post-patterning treatment and the encapsulation liner can have a major impact on MTJ materials selection. “Some metals can be cleaned immediately, and some not. For the materials stack, patterning (litho and etch) and clean optimization are crucial.”

“Chemical etch (RIE) is not really possible at this stage. All the tool vendors are working on physical sputter etch (IBE) where they can limit damage. But I would say all the major tool vendors at this point have good tools,” Kar said.

To reach volume manufacturing, tool vendors need to improve the tool up-time and reduce the maintenance cycles. There is a “tail bits” relationship between the rate of bit failures and the health of the chambers that still needs improvement. “The cleanup steps after etching are very, very critical” to the overall effort to improving the cost effectiveness of MRAM, Kar said, adding that he is “very positive” about the future of MRAM technology.

A complete flow at AMAT

Applied Materials is among the equipment companies participating in the Imec program, with TEL and Canon-Anelva also heavily involved. Beyond that, Applied has developed a complete MRAM manufacturing flow at the company’s Dan Maydan Center in Santa Clara, and presented its cooperative work with Qualcomm on MRAM development at IEDM.

In an interview, Er-Xuan Ping, the Applied Materials managing director in charge of memory and materials technologies, said about 20 different layers, including about ten different materials, must be deposited to create the magnetic tunnel junctions. As recently as a few years ago, throughput of this materials stack was “extremely slow,” he said. But now Applied’s multi-cathode PVD tool, specially developed for MRAM deposition, can deposit 5 Angstrom films in just a few seconds. Throughput is approaching 20 wafers per hour.
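Those figures give a feel for the per-layer budget. A rough sketch of the throughput arithmetic (the roughly 20 layers and 20 wafers per hour come from the interview above; the per-wafer overhead split is an illustrative assumption):

```python
# Rough throughput budget implied by the figures above: roughly 20
# layers per magnetic tunnel junction stack and about 20 wafers per
# hour. The per-wafer overhead figure is an illustrative assumption,
# not an Applied Materials number.

SECONDS_PER_HOUR = 3600

def seconds_per_layer(wafers_per_hour: float, layers: int,
                      overhead_s: float = 0.0) -> float:
    """Average deposition budget per layer after per-wafer overhead."""
    wafer_time_s = SECONDS_PER_HOUR / wafers_per_hour
    return (wafer_time_s - overhead_s) / layers

print(seconds_per_layer(20, 20))                 # 9.0 s per layer
print(seconds_per_layer(20, 20, overhead_s=60))  # 6.0 s per layer
```

At 20 wafers per hour, each wafer gets 180 seconds in total, leaving only a handful of seconds per layer, consistent with depositing a 5 Angstrom film in just a few seconds, and explaining why the tool avoids moving the wafer between materials.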

Applied Materials “basically created a brand-new PVD chamber” for STT-MRAM, Ping said; the tool has a new e-chuck, optimized chamber walls and a multi-cathode design.

The MRAM-optimized PVD tool does not have an official name yet, and Ping said he refers to it as multi-cathode PVD. With MRAM requiring deposition of so many different metals and other materials, the Applied tool does not require the wafer to be moved in and out, increasing efficiency. The shape and structure of the chamber wall, Ping said, allow absorption of downstream plasma material so that it doesn’t come back as particles.

For etch, Applied has worked to create etching processes that result in very low bit failure rates, but at relatively relaxed pitches in the 130-200nm range. “We have developed new etch technologies so we don’t think etch will be a limiting factor. But etch is still challenging, especially for cells with 50nm and smaller cell sizes. We are still in unknown territory there,” said Ping.

Jürgen Langer, R&D manager at Singulus Technology (Frankfurt, Germany), said Singulus has developed a production-optimized PVD tool which can deposit “30 material layers in the Angstrom range. We can get 20 wafers per hour throughputs, so I would say this is not a beta tool, it is for production.”

Jürgen Langer, R&D manager, presented a poster on MRAM deposition from Singulus Technology (Frankfurt, Germany).

Where does it fit?

Once the production challenges of making MRAM are ironed out, the question remains: Where will MRAM fit in the systems of tomorrow?

Tom Coughlin, a data storage consultant based in Atascadero, Calif., said embedded MRAM “could have a very important effect for industrial and consumer devices. MRAM could be part of the memory cache layers, providing power advantages over other non-volatile devices.” And with its ability to power on and off without expending energy, MRAM could reduce overall power consumption in smart phones, cutting into the SRAM and NOR sectors.

“MRAM definitely has a niche, replacing some DRAM and SRAM. It may replace NOR. Flash will continue for mass storage, and then there is the 3D Crosspoint from Intel. I do believe MRAM has a solid basis for being part of that menagerie. We are almost in a Cambrian explosion in memory these days,” Coughlin said.

Process Control Deals with Big Data, Busy Engineers

Tuesday, November 22nd, 2016

thumbnail

By Dave Lammers, Contributing Editor

Turning data into insights that will improve fab productivity is one of the semiconductor industry’s biggest opportunities, one that experts say requires a delicate mix between automation and human expertise.

A year ago, after the 2015 Advanced Process Control (APC) conference in Austin, attendees said one of their challenges was that it takes too long to create the fault detection and classification (FDC) models that alert engineers when something is amiss in a process step.

“The industry listened,” said Brad van Eck, APC conference co-chairman. Participants at the 2016 APC in Phoenix heard progress reports from device makers as diverse as Intel, Qorvo, Seagate, and TSMC, as well as from key APC software vendors including Applied Materials, BISTel, and others.

Steve Chadwick, principal engineer for manufacturing IT at Intel, described the challenge in a keynote address: IC manufacturers, having spent billions of dollars on semiconductor equipment, are seeking new ways to maximize their investments.

Steve Chadwick

“We all want to increase our quality, make the product in the best time, get the most good die out, and all of that. Time to market can be a game changer. That is universal to the manufacturing space,” Chadwick said.

“Every time we have a new generation of processor, we double the data size. Roughly a gigabyte of information is collected on every wafer, and we sort thousands of wafers a day,” Chadwick said. The result is petabytes of data which needs to be stored, analyzed, and turned into actionable “wisdom.”

Intel has invested in data centers located close to their factories, making sure they have the processing power to handle data coming in from roughly 5 billion sensor data points collected each day at a single Intel factory.
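Those figures are easy to sanity-check with quick arithmetic. In the sketch below, the daily wafer count is an illustrative assumption (Chadwick said only “thousands of wafers a day”); the per-wafer and sensor figures are the ones he quoted:

```python
# Back-of-envelope estimate of fab data volumes from Chadwick's figures.
GB_PER_WAFER = 1              # "roughly a gigabyte ... on every wafer"
WAFERS_PER_DAY = 5_000        # illustrative assumption, not an Intel number
SENSOR_POINTS_PER_DAY = 5e9   # sensor data points per factory per day

daily_gb = GB_PER_WAFER * WAFERS_PER_DAY
yearly_pb = daily_gb * 365 / 1_000_000    # 1 PB = 1,000,000 GB
print(f"~{daily_gb:,} GB/day -> ~{yearly_pb:.1f} PB/year per factory")
```

Even at these conservative assumptions the volume lands in the petabyte-per-year range per factory, which is consistent with the “petabytes of data” Chadwick describes.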

“We have to take all of this raw data that we have in a data store and apply some kind of business logic to it. We boil it down to ‘wisdom,’ telling someone something they didn’t know beforehand.”

In a sense, technology is catching up, as Hadoop and several other data search engines are adapted to big data. Also, faster processors allow servers to analyze problems in 15 seconds or less, compared to several hours a few years ago.

Where all of this gets interesting is in figuring out how to relate to busy engineers who don’t want to be bothered with problems that don’t directly concern them. Chadwick detailed the notification problem at Intel fabs, particularly as engineers use smart phones and tablets to receive alarms. “Engineers are busy, and so you only tell them something they need to know. Sometimes engineers will say, ‘Hey, Steve, you just notified my phone of 500 things that I can’t do anything about. Can you cut it out?’”

Notification must be prioritized, and the best option in many cases is to avoid notifying a person at all, instead sending the notification to an expert system. If that is not an option, the notification has to be tailored to the device the engineer is using. Intel is moving quickly to HTML 5-based data, due largely to its portability across multiple devices, he added.

With more than half a million ad hoc jobs per week, Intel’s approach is to keep data and analysis close to the factory, processing whenever possible in the local geography. Instead of shipping data to a distant data center for analysis, the normal procedure is to ship the small analysis code to a very large data set.

False positives decried

Fault detection and classification (FDC) models are difficult to create and oftentimes overly sensitive, resulting in false alarms. These widely used, manually created FDC models can take two weeks or longer to set up. While they take advantage of subject-matter-expert (SME) knowledge and are easy to understand, tool limits tend to be costly to set up and manage, with a high level of false positives and missed alarms.

An Applied Materials presentation by Parris Hawkins, James Moyne, Jimmy Iskandar, Brad Schulze, and Mike Armacost detailed work that Applied is doing in cooperation with process control researchers at the University of Cincinnati. The goal is to develop next-generation FDC that leverages Big Data, predictive analytics, and expert engineers to combine automated model development with inputs from human experts.

Fully automated solutions are plagued with significant false positives/negatives, and are “generally not very useful,” said Hawkins. By incorporating metrology and equipment health data, a form of “supervised” model creation can result in more accurate process controls, he said.

The model creation effort first determines which sensors and trace features are relevant, and then optimizes the tool limits and other parameters. The goal is to find the optimum between too-wide limits that fail to alert when faults exist and overly tight limits that set off false alarms too often.
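That trade-off can be made concrete with a toy limit-width optimization. This is an illustrative sketch of the idea, not Applied's actual method: synthetic “good” and “faulty” sensor summaries are generated, and the limit width k (in units of sigma) is swept to minimize false positives plus missed alarms.

```python
import numpy as np

rng = np.random.default_rng(0)
good = rng.normal(100.0, 2.0, 500)    # in-control runs of a sensor summary
faulty = rng.normal(110.0, 2.0, 20)   # shifted runs that should alarm

mu, sigma = good.mean(), good.std()

def alarm_counts(k):
    """False positives (good runs flagged) and misses (faults passed)
    for control limits mu +/- k*sigma."""
    lo, hi = mu - k * sigma, mu + k * sigma
    fp = int(np.sum((good < lo) | (good > hi)))
    miss = int(np.sum((faulty >= lo) & (faulty <= hi)))
    return fp, miss

# Sweep the limit width and keep the k with the fewest bad outcomes.
best_k = min(np.arange(1.0, 6.0, 0.25), key=lambda k: sum(alarm_counts(k)))
print(best_k, alarm_counts(best_k))
```

In production the same search is far harder: the “faulty” labels come from scarce metrology and excursion records, which is why the presenters argue for supervised model creation guided by subject-matter experts.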

Next-generation FDC would leverage Big Data and human expertise. (Source: Applied Materials presentation at APC 2016).

Full-trace FDC

BISTel has developed an approach called Dynamic Full Trace FDC. Tom Ho, president of BISTel USA, presented the work in conjunction with Qorvo engineers, where a beta version of the software is being used.

Tom Ho

Ho said Dynamic Full Trace FDC starts with the notion that the key to manufacturing is repeatability, and in a stable manufacturing environment “anything that differs isn’t routine; it is an indication of a mis-process and should not be repeatable. Taking that concept, then why not compare a wafer to everything that is supposed to repeat? Based on that, in an individual wafer process, the neighboring wafer becomes the model.”

The full-trace FDC approach has a limited objective: to make an assessment whether the process is good or bad. It doesn’t recommend adjustments, as a run-to-run tool might.

The amount of data involved is small, because it is confined to that unique process recipe. And because the neighboring trace is the model, there is no need for the time-consuming model creation mentioned so often at APC 2016. Compute power can be limited to a personal computer for an individual tool.

Ho took the example of an etch process that might have five recipe steps, starting with pumping down the chamber to the end point where the plasma is turned off. Dynamic full-trace FDC assumes that most wafers will receive a good etch process, and it monitors the full trace to cover the entire process.

“There is no need for a model, because the model is your neighboring trace,” he said. “It definitely saves money in multiple ways. With the rollout of traditional FDC, each tool type can take a few weeks to set up the model and make sure it is running correctly. For multiple tool types that can take a few months. And model maintenance is another big job,” he said.

For the most part, the dynamic full-trace software runs on top of the BISTel FDC platform, though it could be used with another FDC vendor “if the customer has access to the raw trace data,” he said.
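The neighbor-as-model concept is simple enough to sketch in a few lines of code. This toy version is not BISTel's implementation; it uses the pointwise median and spread of neighboring wafers' traces as the reference and alarms when a wafer deviates strongly anywhere along its trace:

```python
import numpy as np

def neighbor_trace_alarm(trace, neighbor_traces, threshold=3.0):
    """Flag a wafer whose sensor trace deviates from its neighbors.

    Instead of a pre-built FDC model, the pointwise median and spread
    of neighboring wafers' traces (same recipe, same tool) serve as the
    reference, following the idea Ho describes.
    """
    ref = np.median(neighbor_traces, axis=0)
    spread = np.std(neighbor_traces, axis=0) + 1e-9  # avoid divide-by-zero
    z = np.abs(trace - ref) / spread
    return bool(np.any(z > threshold)), float(z.max())

# Toy example: 10 neighboring wafers, one candidate with a step anomaly.
rng = np.random.default_rng(1)
neighbors = rng.normal(1.0, 0.02, size=(10, 200))
wafer = rng.normal(1.0, 0.02, size=200)
wafer[120:140] += 0.5    # injected mis-process in one recipe step
alarm, score = neighbor_trace_alarm(wafer, neighbors)
print(alarm, round(score, 1))
```

Because the reference is just the neighboring traces for one recipe, there is nothing to train or maintain, which is the cost advantage Ho emphasizes.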

Applied Materials Intros High Res E-Beam Inspection System

Monday, July 11th, 2016

thumbnail

Applied Materials, Inc. introduced its next-generation e-beam inspection system, which offers resolution down to 1nm. This allows users to detect the most challenging “killer” defects that other technologies cannot find, and to monitor process marginality to rapidly resolve ramp issues and achieve higher yields. Called PROVision™, the system offers 3x faster throughput than existing e-beam hotspot inspection tools.

Ram Peltinov, senior director, strategic marketing for the Process Diagnostics and Control Group at Applied Materials, said the development of the new system was driven by a number of new challenges: Structures and defects are now too small for optical resolution; multi-patterning triggers a need for massive measurements; and 3D architectures limit the ability to detect and measure.

“FinFETs are becoming increasingly complex, the multi-patterning creates multiple steps, the DRAM aspect ratios are getting very high and the VNAND is going vertical,” he said. “All these changes are happening in parallel and this creates great opportunity for metrology and inspection,” he said. According to Gartner, the market for e-beam inspection systems has tripled in the last five years, from $81M in 2010 to $241M in 2015.

The system’s high current density (beam current per sampling area) eliminates the sampling/throughput tradeoff of previous systems, allowing the fastest sampling throughput at its 1nm resolution. Imaging capabilities encompass techniques such as see-through, high aspect ratio, 360° topography, and back-scattered electron detection.

“It allows them to capture defects they couldn’t see before,” Peltinov said. The system can detect, for example, epi-overgrowth in FinFETs. “While the epi overgrowth is clearly visible on the PROVision, it’s almost impossible to see in conventional EBI. Without the resolution and the special imaging, it’s very difficult to catch that.”

“They can also increase their sampling with the faster throughput on the most challenging layers. This also helps them reveal process signatures of even the most subtle process variations,” Peltinov added. Massive sampling reveals hidden process trends and “signatures” that help identify sources of abnormalities, shortening the time to root cause from days to minutes.

Applied Materials Releases Selective Etch Tool

Wednesday, June 29th, 2016

thumbnail

By Ed Korczynski, Sr. Technical Editor

Applied Materials has disclosed commercial availability of new Selectra™ selective etch twin-chamber hardware for the company’s high-volume manufacturing (HVM) Producer® platform. Using standard fluorine and chlorine gases already employed in traditional Reactive Ion Etch (RIE) chambers, the new tool provides atomic-level precision in the selective removal of materials in the 3D device structures increasingly used for the most advanced silicon ICs. The tool is already in use at three customer fabs for finFET logic HVM and at two memory fabs, with more than 350 chambers expected to be shipped to many customers by the end of 2016.

Figure 1 shows a simplified cross-sectional schematic of the Selectra chamber, where the dashed white line indicates some manner of screening functionality so that “Ions are blocked, chemistry passes through,” according to the company. In an exclusive interview with Solid State Technology, a company representative declined to disclose hardware details. “We are using typical chemistries that are used in the industry,” explained Ajay Bhatnagar, managing director of Selective Removal Products for Applied Materials. “If there are specific new applications needed, then we can use new chemistry. We have a lot of IP on how we filter ions and how we allow radicals to combine on the wafer to create selectivity.”

FIG 1: Simplified cross-sectional schematic of a silicon wafer being etched by the neutral radicals downstream of the plasma in the Selectra chamber. (Source: Applied Materials)

From first principles we can assume that the ion filtering is accomplished with some manner of electrically-grounded metal screen. This etch technology accomplishes similar process results to Atomic Layer Etch (ALE) systems sold by Lam, while avoiding the need for specialized self-limiting chemistries and the accompanying chamber throughput reductions associated with pulse-purge process recipes.

“What we are doing is being able to control the amount of radicals coming to the wafer surface and controlling the removal rates very uniformly across the wafer surface,” asserted Bhatnagar. “If you have this level of atomic control then you don’t need the self-limiting capability. Most of our customers are controlling process with time, so we don’t need to use self-limiting chemistry.” Applied Materials claims that this allows the Selectra tool to have higher relative productivity compared to an ALE tool.

Due to the intrinsic 2D resolution limits of optical lithography, leading IC fabs now use multi-patterning (MP) litho flows in which sacrificial thin films must be removed to create the final desired layout. Due to litho limits and CMOS device scaling limits, 2D logic transistors are being replaced by 3D finFETs and eventually Gate-All-Around (GAA) horizontal nanowires (NW). Due to dielectric leakage at the atomic scale, 2D NAND memory is being replaced by 3D-NAND stacks. All of these advanced IC fab processes require the removal of atomic-scale materials with extreme selectivity to the remaining materials, so the Selectra chamber is expected to be a future workhorse for the industry.

When the industry moves to GAA-NW transistors, alternating layers of Si and SiGe will be grown on the wafer surface, 2D patterned into fins, and then the sacrificial SiGe must be selectively etched to form 3D arrays of NW. Figure 2 shows the SiGe etched from alternating Si/SiGe stacks using a Selectra tool, with sharp Si corners after etch indicating excellent selectivity.

FIG 2: SEM cross-section showing excellent etch of SiGe within alternating Si/SiGe layers, as will be needed for Gate-All-Around (GAA) horizontal NanoWire (NW) transistor formation. (Source: Applied Materials)

“One of the fundamental differences between this system and old downstream plasma ashers, is that it was designed to provide extreme selectivity to different materials,” said Matt Cogorno, global product manager of Selective Removal Products for Applied Materials. “With this system we can provide silicon to titanium-nitride selectivity at 5000:1, or silicon to silicon-nitride selectivity at 2000:1. This is accomplished with the unique hardware architecture in the chamber combined with how we mix the chemistries. Also, there is no polymer formation in the etch process, so after etching there are no additional processing issues with the need for ashing and/or a wet-etch step to remove polymers.”
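Those selectivity ratios translate directly into how little of an underlying stop layer is consumed during over-etch. A quick illustrative calculation (the 500Å of removed silicon is an assumed example, not a figure from Applied):

```python
def stop_layer_loss(removed_thickness_a, selectivity):
    """Thickness (Angstroms) of the stop layer consumed while etching
    the target material, given a target:stop selectivity ratio."""
    return removed_thickness_a / selectivity

# Removing 500 Angstroms of silicon at the quoted selectivities:
print(stop_layer_loss(500, 5000))  # Si:TiN at 5000:1 -> 0.1 A lost
print(stop_layer_loss(500, 2000))  # Si:SiN at 2000:1 -> 0.25 A lost
```

At these ratios the stop-layer loss is well below a single atomic layer, which is what makes such etches effectively damage-free to adjacent materials.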

Systems can also be used to provide dry cleaning and surface-preparation due to the extreme selectivity and damage-free material removal.  “You can control the removal rates,” explained Cogorno. “You don’t have ions on the wafer, but you can modulate the number of radicals coming down.” For HVM of ICs with atomic-scale device structures, this new tool can widen process windows and reduce costs compared to both dry RIE and wet etching.

—E.K.

New Tungsten Barrier/Liner, Fill Processes Reduce Resistance and Increase Yield

Friday, June 3rd, 2016

thumbnail

By Pete Singer, Editor-in-Chief

Today’s most advanced chips pack two billion transistors on a die size of 100 mm2. Considering transistors are three-terminal devices, that equates to six billion contacts to those transistors, which connect to 10-15 layers of stacked wiring. Although the wiring is copper, the contacts at the transistor level and the so-called local interconnect level just above it are made of tungsten (Figure 1). Tungsten has slightly higher resistance than copper, but the danger of copper contamination killing the transistor is such that tungsten is still used.

Figure 1. The contact (black area) is the first, smallest, most critical connection between the transistor and interconnect wiring. Source: TECHINSIGHTS

Two problems loom. First, contact resistance is going up, to the point where it will soon be higher than that of the transistor itself (Figure 2). Second, yield is at risk, since just one bad contact can cause entire portions of the chip to fail. “Not only are there a lot of these contacts, they’re very challenging to make because they are so small and getting even smaller with each node,” said Jonathan Bakke, Global Product Manager, Transistor and Interconnect Group, Applied Materials.

Figure 2. At the 10nm node and beyond, contact and plug resistance is expected to rise exponentially and dominate.

Applied Materials recently launched two new products aimed at reducing contact resistance and improving yield in tungsten contacts. The Applied Endura® Volta™ CVD W product results in a new tungsten-based material that serves as both a barrier and a liner, enabling the lower resistance W fill to be three times wider than in traditional process flows. The end result is a reduction of up to 90% in contact resistance. The Applied Centura® iSprint™ ALD/CVD SSW (seam-suppressed tungsten) product achieves bottom-up gap fill in tungsten contact CVD processes, reducing seams and voids, which increases yield.

The traditional process flow to form a contact has been to deposit a layer of titanium, which forms a silicide layer by reacting with the silicon, followed by a TiN barrier. This barrier film prevents the diffusion of fluorine into the silicon of the transistor from the tungsten hexafluoride (WF6) used to deposit the subsequent tungsten contact fill. Because tungsten doesn’t grow directly on TiN, a seed layer of W is typically deposited by ALD before the WF6 CVD bulk fill.

The first of two challenges associated with this approach is that the barrier and liner have not scaled: they have been made as thin as possible, but they’ve reached a limit. The TiN barrier is typically around 30-40Å and the liner film another 20Å. As a result, the volume of the overall plug made of the more desirable, lower resistance W is reduced. “The TiN and tungsten-based liner are both high resistance layers. The more volume they occupy, the more they contribute to resistance,” Bakke said.

The second challenge is that, because the W CVD process results in a conformal fill, where all sides grow at the same rate, a seam is often formed in the middle of the contact. Or, even worse, the top closes before the W completely fills the contact hole, resulting in a void. Both seams and voids can be exposed or breached during the subsequent chemical mechanical planarization (CMP) step. “The contacts or local interconnects are becoming much smaller with each node and they’re getting more challenging to fill with low resistance material and without seams or voids,” Bakke said. Figure 3 shows common problems with resistance and yield.

Figure 3: Barriers and liners don't scale, leaving less room for low resistance W fill. Seams and voids can cause yield problems.

Seams and voids can lead to yield problems such as overly high contact resistance or even open contacts. If even a few of the six billion contacts on a chip fail, there can be a big impact on yield. One study (Figure 4) shows that even at the 20nm node, one defect in a billion can lead to a yield loss of 15% or more. “This tells you that you really have to have perfect gap fill. If one contact goes, it can knock out an entire portion of the device and make it inoperable,” Bakke said.
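That sensitivity follows from a standard Poisson yield model, Y = exp(-n·p). The sketch below uses illustrative per-contact failure probabilities, not figures from the study; note that with six billion contacts per die, a yield loss in the ~15% range corresponds to a failure probability of only a few times 10^-11 per contact:

```python
import math

def contact_limited_yield(n_contacts, fail_prob):
    """Die yield when every contact must work (Poisson approximation:
    Y = exp(-n * p), valid for small per-contact failure probability)."""
    return math.exp(-n_contacts * fail_prob)

# Six billion contacts per die at a few illustrative failure rates:
for p in (1e-11, 2.5e-11, 1e-10):
    print(f"p={p:g}: yield {contact_limited_yield(6e9, p):.1%}")
```

The model makes Bakke's point quantitative: because n is so large, even vanishingly rare fill defects dominate die yield.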

Figure 4. Source: Nvidia

Enter the Applied Endura® Volta™ CVD W and the Applied Centura® iSprint™ ALD/CVD SSW (seam-suppressed tungsten).

A process has been developed for the Endura – Applied’s platform for metal deposition, including PVD and CVD – to deposit a tungsten-based CVD film that serves as both the barrier layer and the liner layer. At around the 30Å thickness that would be typical of the barrier alone, it’s as effective a barrier as TiN. “We’re doing materials engineering to create the first new liner for tungsten plug in 10 years,” Bakke said. This means more of the volume of the contact consists of the lower resistivity W fill (Figure 5). “You can actually triple the tungsten fill width at the 15nm node. You get a lot more low-resistance material in there. Beyond that, it’s a simpler process flow, by removing the one layer, the liner,” Bakke added.
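The “triple the fill width” claim is consistent with the thicknesses quoted earlier, as a simple geometric check shows. The sketch assumes a 15nm contact CD with a 40Å TiN barrier plus 20Å liner per side versus a single ~30Å barrier/liner film; these exact values are illustrative:

```python
def fill_width_nm(contact_cd_nm, liner_per_side_nm):
    """Width remaining for low-resistance W fill after the liner/barrier
    is deposited on both sidewalls of the contact."""
    return contact_cd_nm - 2 * liner_per_side_nm

CD = 15.0  # nm, contact critical dimension
traditional = fill_width_nm(CD, 4.0 + 2.0)  # 40A TiN barrier + 20A W liner
volta = fill_width_nm(CD, 3.0)              # single ~30A barrier/liner film
print(traditional, volta, volta / traditional)  # 3.0 9.0 3.0
```

With fixed-thickness liners, the fill fraction collapses as the CD shrinks, which is why halving the liner budget triples the usable fill width at this node.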

Figure 5

Figure 6 shows how the new W-based barrier/liner compares to the standard flow. The tungsten-based film is 75% lower in resistivity than the TiN (left). At thicknesses relevant for the 10nm node, an 80% reduction in total stack resistivity is seen (right).

Figure 6

Perhaps even more important is the contact resistance, as shown in Figure 7, which charts contact resistance vs critical dimension. “By the time you’re getting to the 10 and 7nm node thicknesses, you actually have a big drop in resistivity up to about 90% reduction in resistance at the 7nm node thicknesses,” Bakke explained.

Figure 7

One reason why plug resistance is becoming more important is indicated by the orange line in Fig. 7, which shows silicide contact resistance. “For a long time, the silicide was the big contributor to the transistor contact total resistance. Manufacturers spent a lot of time trying to decrease that resistance as they scaled. There’s a cross-over point (blue line) where the plug starts to become higher in resistance than the contact. We need to focus on bringing the plug resistance back down so it’s not the major contributor to the total resistance,” Bakke said.

Figure 8 shows the end result, with a clean interface between the tungsten fill and the underlying Volta W layer. “The Volta W adheres very well to dielectric sidewalls. And the W fill is able to deposit on the Volta W and give good gap fill performance,” said Bakke. “It’s also able to survive all the post-processing steps, such as CMP and deposition of copper.”

Figure 8. Degas, clean and Volta W are integrated in the Endura platform.

The Applied Centura® iSprint™ ALD/CVD SSW process uses a “special treatment” after the liner (or barrier/liner in the case of Volta W) to suppress growth on the field and induce growth in a bottom-up fashion (Figure 9). This bottom-up growth eliminates seams and voids. “Because you have a more robust fill, you get an improved yield because you don’t breach the contact or local interconnect during the CMP step,” Bakke said. “This is the first bottom-up tungsten CVD in high volume manufacturing,” he added.

Figure 9. Bottom-up fill is shown in a diagram (top) and in an actual structure.

Bakke wouldn’t say what the special treatment was, but a patent search revealed a possible approach involving activated nitrogen that is deposited preferentially on the field regions.

Roll-to-Roll Coating Technology: It’s a Different Ball of Wax

Monday, April 18th, 2016

Compiled and edited by Jeff Dorsch, Contributing Editor

Manufacturing flexible electronics and coatings for a variety of products has some similarities to semiconductor manufacturing and some substantial differences, principally roll-to-roll fabrication, as opposed to making chips on silicon wafers and other rigid substrates. This interview is with Neil Morrison, senior manager, Roll-to-Roll Coating Products Division, Applied Materials.

1. What are the leading market trends in roll-to-roll coating systems?

Neil Morrison: Several market trends are driving innovations in roll-to-roll technology and barrier films.  One is the flexible electronics market where we see the increasing use of film-based components within displays for portable electronic devices such as smartwatches, smartphones, tablets and laptops.

The majority of these passive applications are for anti-reflection films, optical polarizers and hard coat protected cover glass films.

Examples of active device applications include touch sensors. Roll-to-roll vacuum processing dominates this segment through the use of low-temperature deposited, optically matched layer stacks based on indium tin oxide (ITO). Roll-to-roll deposition of barrier film is also increasing with the emergence of quantum dot-enhanced LCD displays and the utilization of barrier films in organic light-emitting diode (OLED) lighting.

In addition to the electronics industry, roll-to-roll technology is used for food packaging and industrial coatings. What’s new today in food packaging is that consumers want to be able to view the freshness of the food inside the packaging. Given this, both aluminum foil and traditional roll-to-roll evaporated aluminum layers are slowly giving way to transparent, vacuum-deposited aluminum oxide (AlOx) coated packaging.

Within the industrial coatings market segment, significant growth is being driven by the use of Fabry-Perot color shift systems for “holographic” security applications, such as those used to protect printed currency from counterfeiting. This requires the use of electron-beam evaporation tooling to deposit highly uniform, optical quality dielectric materials sandwiched between two metallic reflector layers.

2. What are the leading technology trends in roll-to-roll coating systems?

Neil Morrison: Roll-to-roll coating is being extended to the display industry through the use of higher optical performance substrates with enhanced transmission, optical clarity and color neutrality. These materials are typically more difficult to handle than traditional polyethylene terephthalate (PET) substrates due to inherent properties and the properties of the primer and/or hard coat layers used to treat or protect their surface.

The majority of displays used in mobile applications are moving to thinner substrates, to reduce the “real estate” within the display and enable thinner form factor products and more space for larger batteries.

At the technology level, roll-to-roll sputter tooling dominates the touch panel industry with continual improvements in substrate handling, pre-treatment and inline process monitoring and control. Roll-to-roll chemical vapor deposition (CVD) equipment has also entered the marketplace to address high barrier requirements and to reduce cost compared with traditional sputter-based solutions. Roll-to-roll CVD technology is still in its infancy but is expected to become more prevalent in the near future within the barrier and hard coat market segments.

In the display industry, defect requirements are becoming more and more stringent and are moving towards metrics previously unseen in the roll-to-roll industry.

3. How would you best and briefly describe the Applied SmartWeb, Applied TopBeam, and Applied TopMet systems?

Neil Morrison: The Applied SmartWeb roll-to-roll modular sputtering or physical vapor deposition tool is used to deposit metals, dielectrics and transparent conductive oxides on polymeric substrates for the touch panel and optical coating industry. Its high-precision substrate conveyance system permits winding of polymeric substrates down to thickness levels of ~23 microns at speeds of up to 20 meters/minute depending upon the application. Up to six process compartments with separate gas flow control and pumping allow the deposition of complex layer stacks within a single pass.

Our Applied TopBeam system is a roll-to-roll e-beam evaporation tool used to deposit dielectrics on substrate thicknesses as low as 12 microns and at speeds up to approximately 10 meters/second. Key to the tool is Applied’s unique electron-beam steering and control system, which provides excellent layer deposition and uniformity at exceptionally high processing speeds by permitting uniform and stable heating of the evaporant material over the entire width of the substrate.

The Applied TopMet is a high-productivity roll-to-roll thermal evaporation platform available for depositing Al and AlOx layers on substrates down to 12 microns in thickness and is used primarily for food and industrial packaging.

Applied SmartWeb (Source: Applied Materials)

4. Who are Applied’s leading competitors in this market?

Neil Morrison: Other companies in the roll-to-roll market include Von Ardenne, Leybold Optics (Buehler), Schmid, Ulvac and Kobelco.

5. How big is the worldwide market on annual basis?

Neil Morrison: It is difficult to accurately size the entire roll-to-roll market because of the wide variety of applications across multiple industries from flexible electronics to food packaging. Just estimating the size of the market within the flexible electronics category alone is tough because there are three areas that combine to make up the current flexible electronics market – OLEDs for flexible displays, flexible printed circuit boards, and flexible touch panels for phones and tablets. And with applications continuing to grow, it is difficult to provide a specific market size.

Controlling Variabilities When Integrating IC Fab Materials

Friday, April 15th, 2016

thumbnail

By Ed Korczynski, Senior Technical Editor, SemiMD/Solid State Technology

Semiconductor integrated circuit (IC) manufacturing has always relied upon the supply of critical materials from a global supply chain. Now that shrinks of IC feature sizes have begun to reach economic limits, future functionality improvements in ICs are increasingly derived from the use of new materials. The Critical Materials Conference 2016—to be held May 5-6 in Hillsboro, Oregon (cmcfabs.org)—will explore best practices in the integration of novel materials into manufacturing. Dr. David Thompson, Senior Director, Center of Excellence in Chemistry, Applied Materials will present on “Agony in New Material Introductions – minimizing and correlating variabilities,” which he was willing to discuss in advance with SemiMD.

Korczynski: With more and more materials being considered for use in high-volume manufacturing (HVM) of advanced ICs, how do you begin to selectively screen out materials that will not work for one reason or another to be able to reach the best new material for a target application?

Thompson: While there’s no ‘one size fits all’ solution to this, it typically starts with a review of what’s available and known about the current offerings. With respect to the talk at the CMC, we’ll review the challenges we run into after the materials system and chemistries are set and have been proven generally viable, but still require significant optimization in order to get acceptable yields for manufacturing. It’s a very long road from device proof of concept on a new materials system to a viable manufacturing process.

Korczynski: Since new materials are being considered for use on the atomic-scale in advanced devices, doesn’t all of this have to be done with control at the atomic scale?

Thompson: For the material on the chip, many mainstream analytical techniques are used to achieve atomic-level control, including TEMs and AFMs with atomic resolution during film development for many applications. Unfortunately, this resolution is not available for the chemicals we’re relying on to deposit these materials. For a typical precursor that weighs in the 200 Dalton range, a gram of precursor may have 5 × 10^20 molecules. That’s a lot of molecules. Even with ppb (10^9) resolutions on analytical, you’re still dealing with invisible populations of >10^10 molecules. It gets worse. While trace metals analysis can hit ppb levels, molecular analysis techniques are typically limited to 0.1 to 0.01 percent resolutions for most semiconductor precursors, and there may be impurities which are invisible to routine analytical techniques.
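Thompson's counts are round, order-of-magnitude figures whose arithmetic is just Avogadro's number. The sketch below (illustrative values) makes his point that even ppb-level analysis leaves enormous invisible molecular populations:

```python
AVOGADRO = 6.022e23   # molecules per mole

def molecules_per_gram(molar_mass_daltons):
    """Number of molecules in one gram of a compound."""
    return AVOGADRO / molar_mass_daltons

n = molecules_per_gram(200.0)   # ~3e21 for a 200 Dalton precursor
below_ppb = n * 1e-9            # population hidden below 1 ppb resolution
print(f"{n:.1e} molecules/g; >{below_ppb:.0e} invisible at ppb detection")
```

Trillions of impurity molecules per gram can therefore sit below the best analytical detection limits, which is why disciplined supplier process control matters as much as incoming analysis.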

Ultimately, we rely on analytical techniques to control the gross parameters and disciplined process controls to verify suppliers produce the same compositions the same way, and to manage impurities. On the process and hardware side, it’s like threading the needle trying to get the right film at the right throughput, in a process space that’s as tolerant as possible to the inevitable variability in these chemistries.

Korczynski: With all of this investment in developing one specialty material supplier for advanced IC manufacturing, what is the cost to develop and qualify a second source?

Thompson: Generally, it's not sustainable to release a product with dual specialty-material sources. The problem with dual-sourcing is that chemical suppliers protect their knowledge: not just formal IP, but also their sub-supply-chains and proprietary methods of production, transport and delivery. And given how trace elements in the formulation can change depending on the conditions the molecules experience over time, the customer in many cases needs to develop two separate sub-recipes, one for each vendor's chemistry. So redundancy in the supply chain is prudent, as is making sure the vendor can produce the material in different locations.

There are countless examples over the last 20 years of what I like to call ‘the agony of the supply chain’: a process gets locked into using a material when the only supply is a Ph.D. chemist making it in small batches in a lab. In most cases the initial batch of any new molecule is made at a scale that would fit in a coffee mug. Sometimes, though, scaling up to the first industrial-scale batch can alter impurity profiles in ways that change yields on the wafer, even with improved purification. So while a customer might like to keep using small-batch production, that isn't sustainable; yet qualifying a second vendor in this environment presents significant challenges.

Korczynski: Can you share an example with us of how your team brought a source of subtle variation under control?

Thompson: We had a process using a new metal film, and in early development everything looked great. Eventually we observed a drift of process results that was more pronounced with some ampoules and less so with others. The root cause initially eluded us. Then a bright Ph.D. on our team noted that the supplier did not report a particular contaminant that would tend to be present as a byproduct of the reaction. The supplier confirmed it was present at variable concentrations of 100-300 ppm in the blend. This contaminant was more volatile than the main component due to its higher vapor pressure, and much more reactive with the substrate/wafer. We found that this variability in the chemistry induced the process variation on the wafer (as shown in Figure 1).

FIGURE 1. RESOLUTION OF SEQUENTIAL WAFER DRIFT VIA IMPURITY MANAGEMENT

Chasing impurities and understanding their impact requires rigor and a lot of data collection. There’s no Star Trek analyzer we can use to give us knowledge of all impurities present and the role of those impurities on the process. Many impurities are invisible to routine analytical techniques, so we work very closely with vendors to establish a chemistry analytical protocol for each precursor that may consist of 5-10 different techniques. For the impurities we can’t detect we rely on excellent manufacturing process control and sub-supply sourcing management.
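The correlation work described above can be illustrated with a minimal sketch. The per-ampoule data below is entirely hypothetical, invented for the example; the point is only the mechanics of tying an impurity assay to a wafer-level result.

```python
# Hypothetical sketch of correlating a per-ampoule impurity concentration
# (ppm) with a wafer-level process result to flag a chemistry-driven drift.
# All numbers are illustrative, not real process data.
import statistics

# (impurity ppm reported per ampoule, observed process drift in %)
samples = [
    (100, 0.2), (150, 0.5), (180, 0.7), (220, 1.1),
    (250, 1.4), (300, 1.9),
]

def pearson_r(pairs):
    """Pearson correlation coefficient between the two columns of `pairs`."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(samples)
print(f"impurity-vs-drift correlation: r = {r:.3f}")
if r > 0.8:
    print("strong positive correlation: impurity is a likely root cause")
```

In practice this step sits at the end of a long chain of analytical work (the 5-10 techniques per precursor mentioned above); the statistics are the easy part once the impurity can actually be measured.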

Korczynski: Is the supply-chain for advanced precursors for deposition and etch supplying everything we need in early R&D?

Thompson: New precursor ideation—the science that leads to new classes of compounds with new reactivity, as Roy Gordon and, more recently, Chuck Winter have been doing in academia—is critically important, and while there are a few academics doing excellent work in this space, in general there's not enough focus on this topic. While we see many IP-protected molecules, too often they are obvious, simple modifications to one skilled in the art, consisting of merely adding a functional group off of a ring, or mixing and matching known ligand systems. We don't see a lot of disruptive chemistries. The industry is hunting for differentiated reactivity, and evolutionary precursor-development approaches generally aren't sufficiently disruptive. While this research is useful in terms of tuning a vapor pressure or a thermal stability, it only very rarely produces a differentiated reactivity.

Korczynski: Do we need new methodologies to more efficiently manage all of this?

Thompson: Applied has made significant investments over the last 5 years to help accelerate the readiness of new materials across the board. One of the best things about working at Applied is the rate at which we can learn and build an ecosystem around a new material. With our strength in chemistry, deposition, CMP, etch, metrology and a host of other technologies, we get a fast, strong feedback loop going to accelerate issue discovery, resolution and general learning around new materials.

On the chemical supply-chain front, the need is to make sure that chemical vendors accelerate their analytical chemistry development on new materials. Correlating the variability of the chemistry to process results, and ultimately to yield, is the real battle. The more knowledge we have of a chemistry moving into development, the faster learning can occur. I explain to my team that we can't respond proactively to things we didn't anticipate: having to develop, after the fact, the analytical technique needed to see the impurity responsible for causing (or resolving) a variability means starting out at a significant disadvantage. That said, we've seen a good response from suppliers on new materials and significant improvement in the early learnings necessary to minimize the agony of new material introductions.

3D Chips, New Packaging Challenge Metrology and Inspection Gear

Monday, March 21st, 2016

Compiled and edited by Jeff Dorsch, Contributing Editor

Metrology and inspection technology is growing more complicated as device dimensions continue to shrink. Discussing crucial trends in the field are Lior Engel, vice president of the Imaging and Process Control Group at Applied Materials, and Elvino da Silveira, vice president of marketing, Rudolph Technologies.

1. What are the latest market trends in metrology and inspection?

Lior Engel, Applied Materials: The market trends we are witnessing today are influenced by the memory mix growth in wafer fab equipment and the emergence of technology inflections as the industry progresses to advanced nodes and 3D device architectures. The optical inspection market is growing along with wafer fab equipment. We have seen the memory mix of wafer fab equipment grow from 23 percent in 2012 to almost 50 percent in 2015. The memory growth trend, along with the transition from planar to 3D NAND, changes the dynamics, as 3D NAND in general requires more metrology solutions while the foundries maintain high demand for optical wafer inspection. Demand for electron-beam products is increasing for all device types.

Shrinking design rules and shrinking process windows translate to systematic defects becoming a critical issue. These can hinder time to yield and affect production yields. The interaction between the design and the process can fail under certain process conditions, and the resulting small defects are extremely difficult to find. Challenges such as these are fueling the need for both optical and e-beam inspection solutions in the fab. These different solutions complement each other and help the fab throughout the entire chip lifecycle. From a market perspective, the e-beam inspection market continues to grow and outperform WFE. E-beam inspection is currently focused on R&D but is beginning to shift to high-volume manufacturing.

The metrology market is also growing due to multi-patterning requirements, the need for more measurement points and tighter process-window control. The advent of e-beam massive metrology tools provides a solution for process monitoring and uniformity control. Also driving the market are ever-higher aspect ratio (HAR) 3D NAND devices in memory and the increasing complexity of 3D FinFET metrology in foundry.

The workhorse metrology solutions include CD-SEM, for multi-patterning control and HAR memory, and optical critical-dimension (OCD) metrology, addressing spacer-profile reconstruction in multi-patterning and full device characterization in FinFET.

Elvino da Silveira, Rudolph Technologies: In our experience, fan-out wafer level packaging (FOWLP) is a big trend for our customers. FOWLP does not require a substrate, so the lower cost makes it an attractive packaging technique over 2.5D or embedded interposers. There is a wide range of low- to high-end FOWLP applications, such as MCP/SiP, PoP, and 2.5D FOWLP, each requiring specific inspection/metrology techniques.

Further, we see submicron inspection as a big trend fueled by shrink. More than Moore is driving creative packaging that requires inspection of shrinking redistribution layer (RDL) lines. Miniaturization and multi-function packaging, driven by the wearables and Internet of Things markets, create more emphasis on microelectromechanical system (MEMS) devices and sensors. Also, shrinking nodes in the front end have shifted macro-inspection needs to the submicron level.

With regards to front-end metrology trends, 3D is the driver. Second- and third-generation FinFET and 3D memory (both DRAM and NAND) are the key market drivers for front-end logic and memory.  We are also seeing radio-frequency (RF), MEMS, and CMOS image sensors (CIS) move to adopt the latest generation of metrology as they compete to improve their processes and gain market share.

2. What are the latest technology trends in metrology/inspection?

Lior Engel, Applied Materials: Inflection challenges that are affecting M&I technology trends include:

  •  Shrinking design rules, causing denser feature imaging and the advent of smaller killer defects
  •  3D transistors having more complex geometries, trenches and sidewalls; there is no line of sight to the killer defects, and new materials are being introduced
  •  HAR structures introducing buried defects and new metrology challenges
  •  Process marginality resulting in critical systematic defects that require metrology coverage

In optical inspection, the technology trends addressing these growing challenges include sensitivity improvement by enhancing the signal from key defects of interest. This can be achieved through enhanced imaging techniques and nuisance-separation capabilities. In addition, design information (CAD) can be combined with optical information to optimize nuisance filtering.

For e-beam applications, which include SEM review, CD-SEM metrology and e-beam inspection, trends include:

  •  On-tool automatic classification and analysis capabilities, which result in more meaningful statistical process control (SPC) and yield control; automation produces faster, enhanced results, reducing human error and speeding up the process
  •  1-nanometer e-beam resolution and the availability of new imaging techniques, which are being utilized to find smaller defects in complex structures
  •  e-beam massive critical dimension (CD) measurements, which are being used for uniformity control
  •  e-beam voltage contrast inspection, which is increasingly required for embedded defects in 3D structures
  •  In-die, on-device 3D and overlay measurements, which are challenging current optical metrology techniques and trending towards new in-line solutions such as e-beam

Elvino da Silveira, Rudolph Technologies: Increasingly complex front-end processes paired with “More than Moore” advanced packaging techniques are resulting in die-level stress. Product loss at assembly is extremely expensive since it occurs at one of the last steps in the process. Singulation excursions can manifest as a yield problem, but most often result in a reliability problem, making them harder to detect and control. Traditional automated optical inspection (AOI) has focused on active die areas rather than the total chip area, and is somewhat difficult and prone to overkill. Rudolph has developed a method to detect and monitor wafer chipping without extra investment or added tool process time.

We solve these AOI inspection challenges by specifically monitoring the die seal ring while simultaneously inspecting both the active area and the remaining kerf area, avoiding any throughput penalty. With our high-sensitivity/low-noise pattern-based inspection, customers can decide how close chip-outs may occur relative to the seal ring. Judgments can be made about die quality based on certain characteristics (distance between die, die rotation, etc.). Lastly, customers can review captured images in both visible and infrared (IR).
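As a purely illustrative sketch (not Rudolph's actual software), a seal-ring keep-out rule of the kind described above might look like the following; the class name, field names, and the 5 µm threshold are all assumptions for the example.

```python
# Illustrative sketch of a die-quality rule: flag any die where a detected
# chip-out encroaches within a keep-out distance of the seal ring.
# Names and thresholds are hypothetical, chosen only for the example.
from dataclasses import dataclass

@dataclass
class ChipDefect:
    die_id: str
    distance_to_seal_ring_um: float  # gap between chip-out and seal ring

KEEP_OUT_UM = 5.0  # hypothetical minimum allowed clearance

def classify(defects):
    """Return the die IDs whose chipping violates the keep-out rule."""
    return [d.die_id for d in defects
            if d.distance_to_seal_ring_um < KEEP_OUT_UM]

defects = [
    ChipDefect("D01", 12.0),  # well clear of the seal ring
    ChipDefect("D02", 3.5),   # inside the keep-out zone -> reject
    ChipDefect("D03", 4.9),   # inside the keep-out zone -> reject
]
print(classify(defects))  # → ['D02', 'D03']
```

A real system would add further characteristics (die-to-die distance, die rotation) as additional predicates on the same per-die record; the keep-out distance itself is the customer-tunable parameter mentioned above.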

Another inspection technology trend we see is the need for detection of non-visible/low-contrast killer defects in 3DIC flows. A 3D stacked IC flow may require a combination of through-silicon via (TSV) formation followed by die-stacking and molding. TSV interconnect formation flow will require processes such as via etch, via fill, nail reveal, copper pillar, wafer bonding, and debonding. A comprehensive process control strategy for such a complex flow requires multiple inspection and metrology approaches. Bright-field and dark-field detection is the baseline inspection technology for random and systematic defects.

As the processes for TSV take on a more fab-like look, and are implemented in what is now being called the middle end, attention is turning to defects that are normally not visible. Examples of non-visible defects range from voids in TSVs to faint organic residues and incomplete etch on the bump pad. Voids can be detected using laser acoustic metrology. Laser acoustics also offer a unique solution for measuring the individual layers in a pillar bump stack to ensure tight process control and device yield. Organic residue-based defects have been tedious to detect using manual fluorescent microscopes. Now a more reliable approach to detecting organic defects is possible using automated high-speed fluorescent imaging based inspection.

The strategy of combining bright-field and dark-field inspection with automated fluorescent imaging inspection, laser acoustics and software to analyze defect and metrology data has proved to be a cost-effective approach to managing visible and non-visible defects in advanced assembly flows.

Advanced patterning of three-dimensional gate structures and memory cells is driving the need for advanced metrology techniques, some of which have not yet been developed. Optical CD, X-ray, and acoustic metrologies are all at the leading edge. Optical wavelength ranges now extend upwards of 20 microns to deal with thick multilayer memory stacks. Missing-layer detection and the ability to measure ultrathin metal stacks with complicated interface characteristics are further challenges faced by our customers.

3. How are equipment vendors helping find defects in the nanoscale era?

Lior Engel, Applied Materials: Vendors must combine enhanced resolution, advanced imaging, and smarter applications into their offerings to meet the increasingly complex requirements from chipmakers as they transition to advanced nodes and 3D devices. E-beam and optical inspection solutions must become faster and more sensitive.

Metrology solutions are being used beyond traditional systematic process control, generating massive high-sensitivity data that is leveraged for predictive analysis.

In addition, as challenges grow, advanced applications leveraging design data and machine learning capabilities improve the overall results that the tools can deliver.

Elvino da Silveira, Rudolph Technologies: The suppliers who stand out are those that can not only provide the required technology, but also take multiple points of data from across the fab, analyze that data, and make it actionable. True end-to-end process control that reduces time-to-ramp and improves ramp-to-yield—this is the value proposition that Rudolph offers its customers.

4. How is the 2016 market shaping up?

Lior Engel, Applied Materials: As was stated in Applied’s latest earnings call, our market outlook, taking into account the global economic climate, is that wafer fab equipment spending levels in 2016 will be similar to 2015. Driving industry investment are the technology inflections around 10-nanometer and the shift to 3D NAND, as well as increased spending in China.

Elvino da Silveira, Rudolph Technologies: Although Gartner is forecasting a flat 2016, Rudolph is uniquely positioned in both the front-end and true back-end semiconductor processes in a number of growth markets. Additionally, our new product pipeline is strong.

We see an opportunity to outperform our peers in 2016.

5. Is business improving, declining, or staying flat this year?

Lior Engel, Applied Materials: While the overall spending trend for WFE this year is flat, we are maintaining a positive outlook for Applied in 2016 because our customers are making strategic, inflection-driven investments that play to our strengths. We are optimistic about wafer inspection for 2016. Our latest UVision Brightfield tool has a good position in foundry and logic. We're the leader in e-beam review and are now taking that technology into inspection, where we have significant pull from customers. So overall in 2016, we're pretty optimistic about that business.
