Posts Tagged ‘Imec’


Lithographic Stochastic Limits on Resolution

Monday, April 3rd, 2017


By Ed Korczynski, Sr. Technical Editor

The physical and economic limits of Moore’s Law are being approached as the commercial IC fab industry continues reducing device features to the atomic scale. Early signs of those limits appear when attempting to pattern the smallest possible features using lithography. Stochastic variations in the composition of the photoresist, as well as in the number of incident photons, combine to destroy determinism for the smallest devices in R&D. The most advanced Extreme Ultra-Violet (EUV) exposure tools from ASML cannot avoid this problem without reducing throughputs, and thereby increasing the cost of manufacturing.

Since the beginning of IC manufacturing over 50 years ago, chip production has been based on deterministic control of fabrication (fab) processes. Variations within process parameters could be controlled with statistics to ensure that all transistors on a chip performed nearly identically. Design rules could be set based on assumed in-fab distributions of critical dimension (CD) and misalignment between layers to determine the final performance of transistors.

As the IC fab industry has evolved from micron-scale to nanometer-scale device production, lithographic patterning has evolved to bend 193nm-wavelength light using Off-Axis Illumination (OAI) of Optical-Proximity Correction (OPC) mask features as part of Reticle Enhancement Technology (RET), making it possible to print <40nm half-pitch (HP) line arrays with good definition. The most advanced masks and 193nm-immersion (193i) steppers today are able to focus more photons into each cubic nanometer of photoresist to improve the contrast between exposed and non-exposed regions in the aerial image. To avoid the escalating cost and complexity of multi-patterning with 193i, the industry needs Extreme Ultra-Violet Lithography (EUVL) technology.

Figure 1 shows Dr. Britt Turkot, who has been leading Intel’s integration of EUVL since 1996, reassuring a standing-room-only crowd during a 2017 SPIE Advanced Lithography (http://spie.org/conferences-and-exhibitions/advanced-lithography) keynote address that the manufacturing availability of EUVL steppers has been steadily improving. The new tools are close to 80% available for manufacturing, but they may need to process fewer wafers per hour to ensure high-yielding final chips.

Figure 1. Britt Turkot (Intel Corp.) gave a keynote presentation on "EUVL Readiness for High-Volume Manufacturing” during the 2017 SPIE Advanced Lithography conference. (Source: SPIE)

The KLA-Tencor Lithography Users Forum was held in San Jose on February 26 before the start of SPIE-AL; there, Turkot also provided a keynote address that mentioned the inherent stochastic issues associated with patterning 7nm-node device features. Intel must ensure zero defects within the 10 billion contacts needed in the most advanced ICs. Given 10 billion contacts, it is statistically certain that some will be subject to 7-sigma fluctuations, and this leads to problems in controlling the limited number of EUV photons reaching the target area of a resist feature. The volume of resist material available to absorb EUV in a given area is reduced by the need to avoid pattern collapse when aspect-ratios increase over 2:1; so 15nm half-pitch lines will generally be limited to just 30nm-thick resist. “The current state of materials will not gate EUV,” said Turkot, “but we need better stochastics and control of shot-noise so that photoresist will not be a long-term limiter.”

TABLE: EUVL stochastics due to scaled contact hole size. (Source: Intel Corp.)

  CONTACT HOLE DIAMETER          24nm    16nm
  INCIDENT EUV PHOTONS           4610    2050
  # ABSORBED IN AERIAL IMAGE      700     215

From the LithoGuru blog of gentleman scientist Chris Mack (http://www.lithoguru.com/scientist/essays/Tennants_Law.html):

One reason why smaller pixels are harder to control is the stochastic effects of exposure:  as you decrease the number of electrons (or photons) per pixel, the statistical uncertainty in the number of electrons or photons actually used goes up. The uncertainty produces line-width errors, most readily observed as line-width roughness (LWR). To combat the growing uncertainty in smaller pixels, a higher dose is required.
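To put rough numbers on that shot noise, here is a minimal Python sketch. The 15 mJ/cm2 incident dose and ~92 eV EUV photon energy are assumptions, not values stated in the Intel table above, chosen because they approximately reproduce its incident-photon counts:

    import math

    EUV_PHOTON_ENERGY_J = 92.0 * 1.602e-19   # ~92 eV per 13.5nm EUV photon
    DOSE_J_PER_CM2 = 15e-3                   # assumed nominal incident dose (15 mJ/cm2)

    def photons_in_contact_hole(diameter_nm, dose_j_per_cm2=DOSE_J_PER_CM2):
        """Mean number of EUV photons incident on a circular contact hole."""
        area_cm2 = math.pi * (diameter_nm / 2.0) ** 2 * 1e-14   # nm^2 -> cm^2
        return dose_j_per_cm2 * area_cm2 / EUV_PHOTON_ENERGY_J

    for d in (24, 16):
        n = photons_in_contact_hole(d)
        sigma = math.sqrt(n)   # Poisson (shot-noise) standard deviation
        print(f"{d}nm hole: ~{n:.0f} incident photons, "
              f"3-sigma dose fluctuation ~{3 * sigma / n:.1%}")

Even before absorption and resist chemistry are considered, the incident dose on a 16nm hole fluctuates by several percent at 3 sigma, and the ~215 absorbed photons in the aerial image are considerably noisier still.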

We define a “stochastic” or random process as a collection of random variables (https://en.wikipedia.org/wiki/Stochastic_process), and a Wiener process (https://en.wikipedia.org/wiki/Wiener_process) as a continuous-time stochastic process in honor of Norbert Wiener. Brownian motion and the thermally-driven diffusion of molecules exhibit such “random-walk” behavior. Stochastic phenomena in lithography include the following:

  • Photon count,
  • Photo-acid generator positions,
  • Photon absorption,
  • Photo-acid generation,
  • Polymer position and chain length,
  • Diffusion during post-exposure bake,
  • Dissolution/neutralization, and
  • Etching hard-mask.

Figure 2 shows that the stochastics within EUVL start with direct photolysis and include ionization and scattering within a given discrete photoresist volume, as reported by Solid State Technology in 2010.

Figure 2. Discrete acid generation in an EUV resist is based on photolysis as well as ionization and electron scattering; stochastic variations of each must be considered in minimally scaled aerial images. (Source: Solid State Technology)

Resist R&D

During SPIE-AL this year, ASML provided an overview of the state of the craft in EUV resist R&D. There has been steady resolution improvement over 10 years with Photo-sensitive Chemically-Amplified Resists (PCAR) from 45nm to 13nm HP; however, 13nm HP needed 58 mJ/cm2, and provided DoF of 99nm with 4.4nm LWR. The recent non-PCAR Metal-Oxide Resist (MOR) from Inpria has been shown to resolve 12nm HP with 4.7nm LWR using 38 mJ/cm2, and increasing exposure to 70 mJ/cm2 has produced 10nm HP L/S patterns.

In the EUVL tool with variable pupil control, reducing the pupil fill increases the contrast such that 20nm diameter contact holes with 3nm Local Critical-Dimension Uniformity (LCDU) can be printed. The challenge is to get LCDU to <2nm to meet the specification for future chips. ASML’s announced next-generation N.A. >0.5 EUVL stepper will use anamorphic mirrors and masks, which will double the illumination intensity per cm2 compared to today’s 0.33 N.A. tools. This will inherently improve the stochastics when the tools eventually become ready, after 2020.

The newest-generation EUVL steppers use a membrane between the wafer and the optics so that any resist out-gassing cannot contaminate the mirrors, and this allows a much wider range of materials to be used as resists. Regarding MOR, there are 3.5 times more absorbed photons and 8 times more electrons generated per photon compared to PCAR. Metal hard-masks (HM) and other under-layers create reflections that have a significant effect on the LWR, requiring tuning of the materials in resist stacks.

Imec, the industry’s default R&D hub, has been testing EUV resists from five different suppliers, targeting 20 mJ/cm2 sensitivity with 30nm thickness for PCAR and 18nm thickness for MOR. All suppliers were able to deliver the requested resolution of 16nm HP line/space (L/S) patterns, yet all resists showed LWR >5nm. In another experiment, the dose to size for imec’s “7nm-node” metal-2 (M2) vias with a nominal pitch of 53nm was ~60mJ/cm2. All else being equal, three-times-slower lithography costs three times as much per wafer pass.

—E.K.

Deep Learning Could Boost Yields, Increase Revenues

Thursday, March 23rd, 2017


By Dave Lammers, Contributing Editor

While it is still early days for deep-learning techniques, the semiconductor industry may benefit from the advances in neural networks, according to analysts and industry executives.

First, the design and manufacturing of advanced ICs can become more efficient by deploying neural networks trained to analyze data, though labelling and classifying that data remains a major challenge. Also, demand will be spurred by the inference engines used in smartphones, autos, drones, robots and other systems, while the processors needed to train neural networks will re-energize demand for high-performance systems.

Abel Brown, senior systems architect at Nvidia, said until the 2010-2012 time frame, neural networks “didn’t have enough data.” Then, a “big bang” occurred when computing power multiplied and very large labelled data sets grew at Amazon, Google, and elsewhere. The trifecta was complete with advances in neural network techniques for image, video, and real-time voice recognition, among others.

During the training process, Brown noted, neural networks “figure out the important parts of the data” and then “converge to a set of significant features and parameters.”

Chris Rowen, who recently started Cognite Ventures to advise deep-learning startups, said he is “becoming aware of a lot more interest from the EDA industry” in deep learning techniques, adding that “problems in manufacturing also are very suitable” to the approach.

Chris Rowen, Cognite Ventures

For the semiconductor industry, Rowen said, deep-learning techniques are akin to “a shiny new hammer” that companies are still trying to figure out how to put to good use. But since yield questions are so important, and the causes of defects are often so hard to pinpoint, deep learning is an attractive approach to semiconductor companies.

“When you have masses of data, and you know what the outcome is but have no clear idea of what the causality is, (deep learning) can bring a complex model of causality that is very hard to do with manual methods,” said Rowen, an IEEE fellow who earlier was the CEO of Tensilica Inc.

The magic of deep learning, Rowen said, is that the learning process is highly automated and “doesn’t require a fab expert to look at the particular defect patterns.”

“It really is a rather brute force, naïve method. You don’t really know what the constituent patterns are that lead to these particular failures. But if you have enough examples that relate inputs to outputs, to defects or to failures, then you can use deep learning.”
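As a purely hypothetical illustration of the inputs-to-outputs mapping Rowen describes, the Python sketch below trains a small neural-network classifier on synthetic, stand-in “wafer map” data using scikit-learn; the data, feature layout, and network size are all invented for illustration and are not from any fab.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Hypothetical data: each row stands in for a flattened 32x32 wafer defect map,
    # each label for whether that wafer ultimately failed final test.
    rng = np.random.default_rng(0)
    wafer_maps = rng.random((2000, 32 * 32))
    labels = (wafer_maps[:, :64].mean(axis=1) > 0.52).astype(int)  # synthetic "cause"

    X_train, X_test, y_train, y_test = train_test_split(
        wafer_maps, labels, test_size=0.25, random_state=0)

    # A small fully-connected network; real defect-pattern work would more likely
    # use a convolutional network operating on the 2-D maps directly.
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

The point, as Rowen notes, is that nothing in this flow requires a fab expert to specify which defect patterns matter; the network infers the input-output relationship from labelled examples.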

Juan Rey, senior director of engineering at Mentor Graphics, said Mentor engineers have started investigating deep-learning techniques which could improve models of the lithography process steps, a complex issue that Rey said “is an area where deep neural networks and machine learning seem to be able to help.”

Juan Rey, Mentor Graphics

In the lithography process “we need to create an approximate model of what needs to be analyzed. For example, for photolithography specifically, there is the transition between dark and clear areas, where the slope of intensity for that transition zone plays a very clear role in the physics of the problem being solved. The problem tends to be that the design, the exact formulation, cannot be used in every space, and we are limited by the computational resources. We need to rely on a few discrete measurements, perhaps a few tens of thousands, maybe more, but it still is a discrete data set, and we don’t know if that is enough to cover all the cases when we model the full chip,” he said.

“Where we see an opportunity for deep learning is to try to do an interpretation for that problem, given that an exhaustive analysis is impossible. Using these new types of algorithms, we may be able to move from a problem that is continuous to a problem with a discrete data set.”

Mentor seeks to cooperate with academia and with research consortia such as IMEC. “We want to find the right research projects to sponsor between our research teams and academic teams. We hope that we can get better results with these new types of algorithms, and in the longer term with the new hardware that is being developed,” Rey said.

Many companies are developing specialized processors to run machine-learning algorithms, including non-Von Neumann, asynchronous architectures, which could offer several orders of magnitude less power consumption. “We are paying a lot of attention to the research, and would like to use some of these chips to solve some of the problems that the industry has, problems that are not very well served right now,” Rey said.

While power savings can still be gained with synchronous architectures, Rey said brain-inspired projects such as Qualcomm’s Zeroth processor, or the use of memristors being developed at HP Labs, may be able to deliver significant power savings. “These are all worth paying attention to. It is my feeling that different architectures may be needed to deal with unstructured data. Otherwise, total power consumption is going through the roof. For unstructured data, these types of problems can be dealt with much better with neuromorphic computers.”

The use of deep learning techniques is moving beyond the biggest players, such as Google, Amazon, and the like. Just as various system integrators package the open-source modules of the Hadoop database technology into more-secure offerings, several system integrators are offering workstations packaged with the appropriate deep-learning tools.

Deep learning has evolved to play a role in speech recognition used in Amazon’s Echo. Source: Amazon

Robert Stober, director of systems engineering at Bright Computing, bundles AI software and tools with hardware based on Nvidia or Intel processors. “Our mission statement is to deploy deep learning packages, infrastructure, and clusters, so there is no more digging around for weeks and weeks by your expensive data scientists,” Stober said.

Deep learning is driving the need for new types of processors as well as high-speed interconnects. Tim Miller, senior vice president at One Stop Systems, said that training the neural networks used in deep learning is an ideal task for GPUs because they can perform parallel calculations, sharply reducing the training time. However, GPUs often are large and require cooling, which most systems are not equipped to handle.

David Kanter, principal consultant at Real World Technologies, said “as I look at what’s driving the industry, it’s about convolutional neural networks, and using general-purpose hardware to do this is not the most efficient thing.”

However, research efforts focused on using new materials or futuristic architectures may over-complicate the situation for data scientists outside of the research arena. At the most recent International Electron Devices Meeting (IEDM), several research managers discussed using spin-transfer torque magnetic RAM (STT-MRAM) technology, or resistive RAMs (ReRAM), to create dense, power-efficient networks of artificial neurons.

While those efforts are worthwhile from a research standpoint, Kanter said “when proving a new technology, you want to minimize the situation, and if you change the software architecture of neural networks, that is asking a lot of programmers, to adopt a different programming method.”

While Nvidia, Intel, and others battle it out at the high end for the processors used in training the neural network, the inference engines which use the results of that training must be less expensive and consume far less power.

Kanter said “today, most inference processing is done on general-purpose CPUs. It does not require a GPU. Most people I know at Google do not use a GPU. Since the (inference processing) workload looks like the processing of DSP algorithms, it can be done with special-purpose cores from Tensilica (now part of Cadence) or ARC (now part of Synopsys). That is way better than any GPU.”

Rowen was asked if the end-node inference engine will blossom into large volumes. “I would emphatically say, yes, powerful inference engines will be widely deployed” in markets such as imaging, voice processing, language recognition, and modeling.

“There will be some opportunity for stand-alone inference engines, but most IEs will be part of a larger system. Inference doesn’t necessarily need hundreds of square millimeters of silicon. But it will be a major sub-system, widely deployed in a range of SoC platforms,” Rowen said.

Kanter noted that Nvidia has a powerful inference engine processor that has gained traction in the early self-driving cars, and Google has developed an ASIC to process its Tensor deep learning software language.

In many other applications, what is needed are very-low-power IEs that can be used in security cameras, voice processors, drones, and more. Nvidia CEO Jen-Hsun Huang, in a blog post early this year, said that deep learning will spur demand for billions of devices deployed in drones, portable instruments, intelligent cameras, and autonomous vehicles.

“Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks,” Huang wrote. He envisions a future in which drones will autonomously find an item in a warehouse, for example, while portable medical instruments will use artificial intelligence to diagnose blood samples on-site.

In the long run, that “billions” vision may be correct, Kanter said, adding that the Nvidia CEO, an adept promoter as well as an astute company leader, may be wearing his salesman hat a bit.

“Ten years from now, inference processing will be widespread, and many SoCs will have an inference accelerator on board,” Kanter said.

MRAM Takes Center Stage at IEDM 2016

Monday, December 12th, 2016


By Dave Lammers, Contributing Editor

The IEDM 2016 conference, held in early December in San Francisco, was somewhat of a coming-out party for magneto-resistive memory (MRAM). The MRAM presentations at IEDM were complemented by a special MRAM-focused poster session – organized by the IEEE Magnetics Society in cooperation with the IEEE Electron Devices Society (EDS) – with 33 posters and a lively crowd.

And in the opening keynote speech of the 62nd International Electron Devices Meeting, Seok-hee Lee, executive vice president at SK Hynix (Seoul), set the stage by saying that the race is on between DRAM and emerging memories such as MRAM. “Originally, people thought that DRAM scaling would stop. Then engineers in the DRAM and NAND worlds worked hard and pushed out the end further in the future,” he said.

While cautioning that MRAM bit cells are larger than DRAM cells and thus more costly, Lee said MRAM has “very strong potential in embedded memory.”

SK Hynix is not the only company with a full-blown MRAM development effort underway. Samsung, which earlier bought MRAM startup Grandis and which has a materials-related research relationship with IBM, attracted a standing-room-only crowd to its MRAM paper at IEDM. TSMC is working with TDK on its program, and Sony is using 300mm wafers to build high-performance MRAMs for startup Avalanche Technology.

And one knowledgeable source said “the biggest processor company also has purchased a lot of equipment” for its MRAM development effort.

Dave Eggleston, vice president of emerging memory at GlobalFoundries, said he believes GlobalFoundries is the furthest along on the MRAM optimization curve, partly due to its technology and manufacturing partnership with Everspin Technologies (Chandler, Ariz.). Everspin has been working on MRAM for more than 20 years, and has shipped nearly 60 million discrete MRAMs, largely to the cache buffering and industrial markets.

GlobalFoundries has announced plans to use embedded STT-MRAM in its 22FDX platform, which uses fully-depleted SOI technology, as early as 2018.

Future versions of MRAM, such as spin-orbit torque (SOT) MRAM and voltage-controlled MRAM, could compete with SRAM and DRAM. Analysts said today’s spin-transfer torque STT-MRAM, referring to the torque that arises from the transfer of electron spins to the free magnetic layer, is vying for commercial adoption as ever-faster processors need higher-performance memory subsystems.

STT-MRAM is fast enough to fit in as a new memory layer below the processor and the SRAM-based L1/L2 cache layers, and above DRAM and storage-level NAND flash layers, said Gary Bronner, vice president of research at Rambus Inc.

With good data retention and speed, and medium density, MRAM “may have advantages in the lower-level caches” of systems which have large amounts of on-chip SRAM, Bronner said, due in part to MRAM’s smaller cell size compared to six-transistor SRAM. While DRAM in the sub-20nm nodes faces cost issues as it moves to more complex capacitor structures, Bronner said that “thus far STT-MRAM is not cheaper than DRAM.”

IBM researchers, who pioneered the spin-transfer torque approach to MRAM, are working on a high-performance MRAM technology which could be used in servers.

As of now, MRAM density is limited largely by the size of the transistors required to drive sufficient current to the magnetic tunnel junction (MTJ) to flip its magnetic orientation. Dan Edelstein, an IBM fellow working on MRAM development at IBM Research, said “it is a tall order for MRAM to replace DRAM. But MRAM could be used in system-level memory architectures and as an embedded memory technology.”

PVD and etch challenges

Edelstein, who was a key figure in developing copper interconnects at IBM some twenty years ago, said MRAM only requires a few extra mask layers to be integrated into the BEOL in logic. But there remain major challenges in improving the throughput of the PVD deposition steps required to deposit the complex material stack and to control the interfacial layers.

The PVD steps must deposit approximately 30 layers and control them to Angstrom-level precision. Deposition must occur under very low base pressure, and in oxygen- and water-vapor free environments. While tool vendors are working on productization of 300mm MRAM deposition tools, Edelstein said keeping particles under control and minimizing the maintenance and chamber cleaning are all challenging.

Etching the complex materials stack is even harder. Chemical RIE is not practical for MRAMs at this point, and using ion beam etching (IBE) presents challenges in terms of avoiding re-deposition of material sputtered off during the IBE etch steps for the high-aspect-ratio MTJs.

For the tool vendors, MRAMs present challenges as companies go from R&D to high-volume manufacturing, Edelstein said.

A Samsung MRAM researcher, Y.J. Song, briefly described IBE challenges during an IEDM presentation on an embedded STT-MRAM with a respectable 8-Mbit density and a cell size of 0.0364 square microns. “We worked to optimize the contact etching,” using IBE etch during the patterning steps, he said. The short-fail rate was reduced while keeping the processing temperature at less than 350°C, Song said.

Samsung embedded an STT-MRAM module in the copper back end of the line (BEOL) of a 28nm logic process. (Source: Samsung presentation at IEDM 2016).

Many of the presentations at IEDM described improvements in key parameters, such as the tunnel magnetic resistance (TMR), cell size, data retention, and read error rates at high temperatures or low operating voltages.

An SK Hynix presentation described a 4-Gbit STT-MRAM optimized as a stand-alone, high-density memory. “There still are reliability issues for high-density MRAM memory,” said SK Hynix’s S.-W. Chung. The industry needs to boost the TMR “as high as possible” and work on improving the “not sufficiently long” retention times.

At high temperatures, error rates tend to rise, a concern in certain applications. And since devices are subjected to brief periods of high temperatures during reflow soldering, that issue must be dealt with as well, as detailed in a Bosch presentation at IEDM.

Cleans and encapsulation important

Gouri Sankar Kar, who is coordinating the MRAM research program at the Imec consortium (Leuven, Belgium), said one challenge is to reduce the cell size and pitch without damaging the magnetic properties of the magnetic tunnel junction. For the 28nm logic node, embedded MRAM would be in the range of a 200nm pitch and 45nm critical dimensions (CDs). At the IEDM poster session, Imec presented an 8nm cell size STT-MRAM that could intersect the 10nm logic node, with the MRAM pitch in the 100nm range. GlobalFoundries, Micron, Qualcomm, Sony and TSMC are among the participants in the Imec MRAM effort.
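As a rough back-of-envelope sketch of what those pitches imply, the raw cell density of a square-cell array is simply 1/pitch². The Python snippet below ignores array overhead, periphery, and redundancy, so it gives an upper bound rather than a product density:

    def raw_density_mbit_per_mm2(pitch_nm):
        """Upper-bound bit density for a square memory cell at the given pitch."""
        cells_per_mm = 1e6 / pitch_nm      # 1 mm = 1e6 nm
        return cells_per_mm ** 2 / 1e6     # bits/mm^2 -> Mbit/mm^2

    for label, pitch_nm in (("28nm-node embedded MRAM (~200nm pitch)", 200),
                            ("10nm-node target (~100nm pitch)", 100)):
        print(f"{label}: ~{raw_density_mbit_per_mm2(pitch_nm):.0f} Mbit/mm^2 raw")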

Kar said in addition to the etch challenges, post-patterning treatment and the encapsulation liner can have a major impact on MTJ materials selection. “Some metals can be cleaned immediately, and some not. For the materials stack, patterning (litho and etch) and clean optimization are crucial.”

“Chemical etch (RIE) is not really possible at this stage. All the tool vendors are working on physical sputter etch (IBE) where they can limit damage. But I would say all the major tool vendors at this point have good tools,” Kar said.

To reach volume manufacturing, tool vendors need to improve the tool up-time and reduce the maintenance cycles. There is a “tail bits” relationship between the rate of bit failures and the health of the chambers that still needs improvement. “The cleanup steps after etching are very, very critical” to the overall effort to improving the cost effectiveness of MRAM, Kar said, adding that he is “very positive” about the future of MRAM technology.

A complete flow at AMAT

Applied Materials is among the equipment companies participating in the Imec program, with TEL and Canon-Anelva also heavily involved. Beyond that, Applied has developed a complete MRAM manufacturing flow at the company’s Dan Maydan Center in Santa Clara, and presented its cooperative work with Qualcomm on MRAM development at IEDM.

In an interview, Er-Xuan Ping, the Applied Materials managing director in charge of memory and materials technologies, said about 20 different layers, including about ten different materials, must be deposited to create the magnetic tunnel junctions. As recently as a few years ago, throughput of this materials stack was “extremely slow,” he said. But now Applied’s multi-cathode PVD tool, specially developed for MRAM deposition, can deposit 5 Angstrom films in just a few seconds. Throughput is approaching 20 wafers per hour.

Applied Materials “basically created a brand-new PVD chamber” for STT-MRAM, and he said the tool has a new e-chuck, optimized chamber walls and a multi-cathode design.

The MRAM-optimized PVD tool does not have an official name yet, and Ping said he refers to it as multi-cathode PVD. With MRAM requiring deposition of so many different metals and other materials, the Applied tool does not require the wafer to be moved in and out, increasing efficiency. The shape and structure of the chamber wall, Ping said, allow absorption of downstream plasma material so that it doesn’t come back as particles.

For etch, Applied has worked to create etching processes that result in very low bit failure rates, but at relatively relaxed pitches in the 130-200nm range. “We have developed new etch technologies so we don’t think etch will be a limiting factor. But etch is still challenging, especially for cells with 50nm and smaller cell sizes. We are still in unknown territory there,” said Ping.

Jürgen Langer, R&D manager at Singulus Technology (Frankfurt, Germany), said Singulus has developed a production-optimized PVD tool which can deposit “30 material layers in the Angstrom range. We can get 20 wafers per hour throughputs, so I would say this is not a beta tool, it is for production.”

Jürgen Langer, R&D manager, presented a poster on MRAM deposition from Singulus Technology (Frankfurt, Germany).

Where does it fit?

Once the production challenges of making MRAM are ironed out, the question remains: Where will MRAM fit in the systems of tomorrow?

Tom Coughlin, a data storage consultant based in Atascadero, Calif., said embedded MRAM “could have a very important effect for industrial and consumer devices. MRAM could be part of the memory cache layers, providing power advantages over other non-volatile devices.” And with its ability to power on and power off without expending energy, MRAM could reduce overall power consumption in smart phones, cutting into the SRAM and NOR sectors.

“MRAM definitely has a niche, replacing some DRAM and SRAM. It may replace NOR. Flash will continue for mass storage, and then there is the 3D Crosspoint from Intel. I do believe MRAM has a solid basis for being part of that menagerie. We are almost in a Cambrian explosion in memory these days,” Coughlin said.

CMOS-Photonics Technology Challenges

Friday, July 8th, 2016


By Ed Korczynski, Sr. Technical Editor


While it is very easy to talk about the potential advantages of CMOS-photonic integration, the design and manufacturing of commercially competitive products has been extraordinarily difficult. It has been well known that the cost efficiencies of silicon wafers and CMOS fab processes could theoretically be leveraged to create low-cost photonic circuitry. However, the physics of optics is quite different from the physics of electronics, and so there have been unexpected challenges in moving R&D experiments to HVM products. During the Imec Technology Forum in Brussels held this May, Joris Van Campenhout, imec program director for Optical I/O (Fig. 1), sat down with Solid State Technology to discuss recent progress and future plans.

Data centers—also known as “The Cloud”—continue to grow along with associated power-consumptions, so there are strong motivations to find cost-effective ways to replace more of the electrical switches with lower-power optical circuits. Optical connections in modern data centers do not all have the same specifications, with a clear hierarchy based on the 3D grid-like layout of rows of rack-mounted Printed Circuit Boards (PCB). The table shows the basic differences in physical scale and switching speeds required at different levels within the hierarchy.

ESTIMATED DATA CENTER REQUIREMENTS FOR OPTICAL I/O  (Source: imec)

  OPTICAL CONNECTION   RACK      BACKPLANE   PCB       CHIP
  DISTANCE             5-500m    0.5-3m      5-50cm    1-50mm
  RELATIVE COST        $$$$      $$$         $$        $
  POWER/Gbps           5mW       1mW         0.5mW     0.1mW
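To see what those power-per-bandwidth figures mean in practice, here is a minimal Python sketch that applies them to a hypothetical 10 Tb/s of aggregate traffic at each level of the hierarchy; the traffic figure is an assumption for illustration, not a number from imec:

    # Power to move an assumed 10 Tb/s of aggregate traffic at each hierarchy level,
    # using the mW-per-Gbps figures from the imec table above.
    AGGREGATE_GBPS = 10_000   # hypothetical 10 Tb/s

    power_per_gbps_mw = {"rack": 5.0, "backplane": 1.0, "PCB": 0.5, "chip": 0.1}

    for level, mw_per_gbps in power_per_gbps_mw.items():
        watts = mw_per_gbps * AGGREGATE_GBPS / 1000.0
        print(f"{level:9s}: {watts:5.1f} W for {AGGREGATE_GBPS} Gb/s")

The 50x spread between rack-level and chip-level links is one way to read the economic pressure to push optics ever closer to the chip.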

The fiber-optic lines connecting the rows of rack-mounted printed-circuit boards (PCB) in data centers represent a major portion of the total capital investment, so there is a roadmap to keep the same fibers in place while upgrading the speeds of the photonic transmit and receive components over time:

  • 40GHz was standard through 2015,
  • 100GHz upgrades in 2016,
  • 400GHz planned by 2019, and
  • 1THz estimated by 2022.

Some companies have tried to develop multi-mode fiber solutions, but imec is working on single-mode. The telecommunications standard for single-mode optical fiber diameter is 9 microns, while multimode today can be up to 50 microns in diameter. “Fundamentally single-mode will be the most integrate-able way to try to get that fiber on to a chip,” explained Van Campenhout. “It is difficult enough to get nine micron diameter fibers to couple to sub-micron waveguides on chip.”

The backplane is the PCB-to-PCB connection within one rack, which today uses copper connections running at up to 50 GHz. Imec sees backplane applications as a possible insertion point for CMOS-Photonics, because there are approximately 10X the number of connections compared to rack applications and because the relative cost target calls for new technologies. Imec’s approach uses 56G silicon ring-modulators to shift wavelengths by 0.1% at very low power, knowingly taking on control issues with non-linearity and high temperature sensitivity. “We’re confident that it can be done,” stated Van Campenhout, “but the question remains if the overhead can be reduced so that the costs are competitive.” The overhead includes the possible need for on-chip thin-film heaters/coolers to be able to control the temperature.

PCB level connections are being pushed by the Consortium for On-Board Optics (COBO), an industry group working to develop a series of specifications to permit the use of board-mounted optical modules in the manufacturing of networked equipment (i.e. switches, servers, etc.). The organization plans to reference industry specifications where possible and develop specifications where required with attention to electrical interfaces, pin-outs, connectors, thermals, etc. for the development of interchangeable and interoperable optical modules that can be mounted onto motherboards and daughtercards.

Luxtera is the commercial market leader for CMOS-Photonic chips used at the rack level today, and uses ‘active alignment,’ meaning that the fiber has to be lit with the laser and then aligned to the waveguides during test and during assembly. Luxtera is fabless and uses Freescale as its foundry to build the chip in an established CMOS SOI process flow originally developed for high-performance microprocessors. The company produces 10G chips today for advanced Ethernet connections, and through a partnership with Molex ships 40G Active Optical Cables.

Chip level optical connections require breakthrough technologies such as indium-phosphide epitaxy on silicon to be able to grow the most efficient electrically-controlled optical switches, instead of having to pick-and-place discrete components aligned with waveguides. Alignment of components is a huge issue for manufacturing and test that adds inherent costs. “The main issue is getting the coupling from the chip to the fiber with low losses, since sub-micron alignment is needed to avoid a 1 dB loss,” summarized Van Campenhout.

Figure 2 shows a simplified functional schematic of a high-capacity optical communications link employing Dense Wavelength Division Multiplexing (DWDM) to combine modulated laser beams of different colors on a single-mode fiber. Luxtera is working on DWDM for increased bandwidth, as is imec.

FIGURE 2: Dense Wavelength Division Multiplexing (DWDM) scheme allows multiplication of the total single-mode fiber (SMF) bandwidth by the number of laser colors used. (Source: imec)

Difficult Design

“If you have just a 1 nm variation in the waveguide width, that device’s spectral response will be proportional as a rule of thumb,” explained Van Campenhout. “We can tune for that with a heating element, but then we lose the low-power advantage.” This results in a need for different design-for-manufacturing approaches.

“When we do photonics design we have to have round features or the light will scatter. So when we do mask making we have to use different rules, and we need to educate all of our partners that we are doing photonics,” reminded Van Campenhout. “However, there are EDA companies that are becoming aware of these aspects, so things are developing nicely to create a whole ecosystem to be able to build these. We have the first version of a PDK that we use for multi-product-wafer runs, so we can deliver custom chips to partners.”

Mentor Graphics is an imec partner, and the company’s Tom Daspit, marketing manager for Pyxis Design Tools, spoke with Solid State Technology about the special challenges of EDA for photonics. “You’ve now jumped off the cliff of the orthogonal design environment. Light doesn’t bend at 45° let alone 90°. On an IC it’s all orthogonal, while if it’s photonic we have to modify the interconnect so that the final design is a nice curved one.” To produce a smooth curve the EDA tools must fracture it into a small grid for the photomask, so a seemingly simple set of curves can require gigabytes in a final GDSII file.
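A rough Python sketch of why those curves blow up mask files: approximating an arc with chords on a fine manufacturing grid needs roughly one vertex each time the chord sags by one grid unit, so vertex counts (and GDSII bytes) grow quickly as the allowed error shrinks. The bend radius and tolerance values below are illustrative assumptions, not figures from Mentor.

    import math

    def vertices_for_quarter_arc(radius_um, sag_tol_nm):
        """Chords needed so each chord deviates from the arc by at most sag_tol (sagitta)."""
        radius_nm = radius_um * 1000.0
        theta = 2.0 * math.acos(1.0 - sag_tol_nm / radius_nm)   # max arc angle per chord
        return math.ceil((math.pi / 2.0) / theta) + 1

    for tol_nm in (10.0, 1.0, 0.1):   # allowed deviation from the ideal curve
        n = vertices_for_quarter_arc(radius_um=10.0, sag_tol_nm=tol_nm)
        print(f"{tol_nm:4.1f}nm tolerance: ~{n} vertices per 90-degree bend")

Multiplied over the millions of bends and rings on a full photonic layout, this is how a seemingly simple set of curves becomes gigabytes of GDSII.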

It was about 4 years ago that some customers began asking Mentor to modify tools to be able to support photonics, and today there are customers large and small, some of whom are in full volume production for communications applications. “Remember when they were building the old Cray supercomputers and they had to account for all wire lengths to handle signal delays? Well, now with photonics we need to account for waveguide lengths,” commented Daspit.

The products in full-volume production today are likely communications chips, though customers do not typically share product plans, so the exact application spaces are not certain. Everybody wants to get rid of the copper in the backplane to eliminate power consumption, but:

“The big application is photonics for sensor integration, with universities leading the way. Medical is a huge new market,” explained Daspit. “The CMOS die could be 130nm down to 65nm, or maybe 28nm for some digital.” So there are a wide variety of future applications for CMOS-Photonics, and despite the known manufacturing challenges there are already commercial applications in communications.

—E.K.

Solid State Watch: May 20-26, 2016

Tuesday, May 31st, 2016

79 GHz CMOS RADAR Chips for Cars from Imec and Infineon

Tuesday, May 24th, 2016


By Ed Korczynski, Sr. Technical Editor

As unveiled at the annual Imec Technology Forum in Brussels (itf2016.be), Infineon Technologies AG (infineon.com) and imec (imec.be) are working on highly integrated CMOS-based 79 GHz sensor chips for automotive radar applications. Imec provides expertise in high-frequency system, circuit, and antenna design for radar applications, complementing the experience Infineon has gained by holding the world’s top market share in commercial radar sensor chips. Infineon and imec expect functional CMOS sensor chip samples in the third quarter of 2016. A complete radar system demonstrator is scheduled for the beginning of 2017.

Whether or not fully automated cars and trucks will be traveling on roads soon, today’s drivers want more sensors to be able to safely avoid accidents in conditions of limited visibility. Typically, there are up to three radar systems in today’s vehicles equipped with driver-assistance functions. In a future with fully automated cars, up to ten radar systems and ten more sensor systems using cameras or lidar (https://en.wikipedia.org/wiki/Lidar) could be needed. Short-range radar (SRR) would look for side objects, medium-range radar (MRR) would scan widely for objects up to 50m in front and in back, and long-range radar (LRR) would focus up to 250m in front and in back for high-speed collision avoidance.

“Infineon enables the radar-based safety cocoon of the partly and fully automated car,” said Ralf Bornefeld, Vice President & General Manager, Sense & Control, Infineon Technologies AG. “In the future, we will manufacture radar sensor chips as a single-chip solution in a classic CMOS process for applications like automated parking. Infineon will continue to set industry standards in radar technology and quality.”

The Figure shows the evolution of radar technology over the last decades, leading to the current miniaturization using solid-state silicon CMOS. Key to the successful development of this 79 GHz demonstrator was choosing to use 28 nm CMOS technology. Imec has been refining this technology as shown at ISSCC (isscc.org) for many years, first showing a 28nm transmitter chip in 2013, then showing a 28nm transmit and receive (a.k.a. “transceiver”) chip in 2014, and finally showing a single-chip with a transceiver and analog-digital converters (ADC) and phase-lock loops (PLL) and digital components in 2015. Long-term supply of eventual commercial chips should be ensured by using 28nm technology, which is known as a “long lived” node.

“We are excited to work with Infineon as a valuable partner in our R&D program on advanced CMOS-based 77 GHz and 79 GHz radar technology,” stated Wim Van Thillo, program director perceptive systems at imec. “Compared to the mainstream 24 GHz band, the 77 GHz and 79 GHz bands enable a finer range, Doppler and angular resolution. With these advantages, we aim to realize radar prototypes with integrated multiple-input, multiple-output (MIMO) antennas that not only detect large objects, but also pedestrians and bikers and thus contribute to a safer environment for all.”
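The finer range resolution Van Thillo mentions follows from the wider bandwidth available at the higher carrier frequencies; for an FMCW radar the theoretical range resolution is c/(2B). The bandwidth values in the sketch below (roughly 4 GHz available around 77-81 GHz versus about 200 MHz at 24 GHz) are typical regulatory allocations assumed for illustration, not figures from imec or Infineon.

    C_M_PER_S = 3.0e8   # speed of light

    def range_resolution_m(bandwidth_hz):
        """Theoretical FMCW radar range resolution: delta_R = c / (2 * B)."""
        return C_M_PER_S / (2.0 * bandwidth_hz)

    for band, bandwidth_hz in (("24 GHz (assumed ~200 MHz)", 200e6),
                               ("77-81 GHz (assumed ~4 GHz)", 4e9)):
        print(f"{band}: ~{range_resolution_m(bandwidth_hz) * 100:.0f} cm range resolution")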

Since the aesthetics are always important for buyers, automobile companies have been challenged to integrate all of the desired sensors into vehicles in an invisible manner. “The designers hate what they call the ‘warts’ on car bumpers that are the small holes needed for the ultrasonic sensors currently used,” explained Van Thillo in a press conference during ITF2016.

In an ITF2016 presentation, Infineon CEO Reinhard Ploss discussed how Infineon works with industrial partners to create competitive commercial products. “When we first developed RADAR, there was a collaboration between the Tier-1 car companies and ourselves,” explained Ploss. “The key lies in the algorithms needed to process the data, since the raw data stream is essentially useless. The next generation of differentiation for semiconductors will be how to integrate algorithms. In effect, how do you translate ‘pixels’ into ‘optics’ without an expensive microprocessor?”

Evolution of radar technology over time has reached the miniaturization of 79 GHz using 28nm silicon CMOS technology. Imec is now also working on 140 GHz radar chips. (Source: imec)

—E.K.

EUV Resists and Stochastic Processes

Friday, March 4th, 2016


By Ed Korczynski, Sr. Technical Editor

In an exclusive interview with Solid State Technology during SPIE-AL this year, imec Advanced Patterning Department Director Greg McIntyre said, “The big encouraging thing at the conference is the progress on EUV.” The event included a plenary presentation by TSMC Nanopatterning Technology Infrastructure Division Director and SPIE Fellow Anthony Yen on “EUV Lithography: From the Very Beginning to the Eve of Manufacturing.” TSMC is currently learning about EUVL using 10nm- and 7nm-node device test structures, with plans to deploy it for high volume manufacturing (HVM) of contact holes at the 5nm node. Intel researchers confirm that they plan to use EUVL in HVM for the 7nm node.

Recent improvements in EUV source technology—80W source power had been shown by the end of 2014, 185W by the end of 2015, and 200W has now been shown by ASML—have been enabled by multiple laser pulses tuned to best produce plasma from tin droplets. TSMC reports that 518 wafers per day were processed by their ASML EUV stepper, and the tool was available ~70% of the time. TSMC shows that a single EUVL process can create 46nm pitch lines/spaces using a complex 2D mask, as is needed for patterning the metal2 layer within multilevel on-chip interconnects.

To improve throughput in HVM, the resist sensitivity to the 13.5nm wavelength radiation of EUV needs to be improved, while the line-width roughness (LWR) specification must be held to low single-digit nm. With a 250W source and 25 mJ/cm2 resist sensitivity, an EUV stepper should be able to process ~100 wafers per hour (wph), which should allow for affordable use when matched with other lithography technologies.
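As a rough scaling sketch, exposure-limited throughput is approximately proportional to source power divided by resist dose. The Python snippet below anchors that proportionality to the 250W / 25 mJ/cm2 / ~100 wph point quoted above; it ignores stage-move and wafer-exchange overhead, which in practice caps the benefit of very low doses, so treat the outputs as upper bounds.

    def wafers_per_hour(source_w, dose_mj_cm2,
                        ref_w=250.0, ref_dose=25.0, ref_wph=100.0):
        """Exposure-limited throughput, scaled from the 250W / 25 mJ/cm2 / 100 wph point."""
        return ref_wph * (source_w / ref_w) * (ref_dose / dose_mj_cm2)

    # Doses mentioned in this article: 25, 33, and 7 mJ/cm2
    for power_w, dose in ((250, 25), (250, 33), (250, 7), (125, 25)):
        print(f"{power_w}W source, {dose} mJ/cm2 dose -> "
              f"~{wafers_per_hour(power_w, dose):.0f} wph")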

Researchers from Inpria—the company working on metal-oxide-based EUVL resists—looked at the absorption efficiencies of different resists, and found that the absorption of the metal oxide based resists was ≈ 4 to 5 times higher than that of the Chemically-Amplified Resist (CAR). The Figure shows that higher absorption allows for the use of proportionally thinner resist, which mitigates the issue of line collapse. Resist as thin as 18nm has been patterned over a 70nm thin Spin-On Carbon (SOC) layer without the need for another Bottom Anti-Reflective Coating (BARC). Inpria today can supply 26 mJ/cm2 resist that creates 4.6nm LWR over 140nm Depth of Focus (DoF).

To prevent pattern collapse, the thickness of resist is reduced proportionally to the minimum half-pitch (HP) of lines/spaces. (Source: JSR Micro)

JEIDEC researchers presented their summary of the trade-off between sensitivity and LWR for metal-oxide-based EUV resists:  ultra high sensitivity of 7 mJ/cm2 to pattern 17nm lines with 5.6nm LWR, or low sensitivity of 33 mJ/cm2 to pattern 23nm lines with 3.8nm LWR.

In a keynote presentation, Seong-Sue Kim of Samsung Electronics stated that, “Resist pattern defectivity remains the biggest issue. Metal-oxide resist development needs to be expedited.” The challenge is that defectivity at the nanometer-scale derives from “stochastics,” which means random processes that are not fully predictable.

Stochastics of Nanopatterning

Anna Lio, from Intel’s Portland Technology Development group, stated that the challenges of controlling resist stochastics, “could be the deal breaker.” Intel ran a 7-month test of vias made using EUVL, and found that via critical dimensions (CD), edge-placement-error (EPE), and chain resistances all showed good results compared to 193i. However, there are inherent control issues due to the random nature of phenomena involved in resist patterning:  incident “photons”, absorption, freed electrons, acid generation, acid quenching, protection groups, development processes, etc.

Stochastics for novel chemistries can only be controlled by understanding in detail the sources of variability. From first-principles, EUV resist reactions are not photon-chemistry, but are really radiation-chemistry with many different radiation paths and electrons which can be generated. If every via in an advanced logic IC must work then the failure rate must be on the order of 1 part-per-trillion (ppt), and stochastic variability from non-homogeneous chemistries must be eliminated.

Consider that for a CAR designed for 15mJ/cm2 sensitivity, there will be just (a quick numerical check follows below):

  • 145 photons/nm2 for 193nm light, and
  • 10 photons/nm2 for EUV.
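Those photon densities follow from dividing the dose by the per-photon energy E = hc/λ at each wavelength; a minimal check in Python:

    H_PLANCK = 6.626e-34            # J*s
    C_LIGHT = 2.998e8               # m/s
    DOSE_J_PER_NM2 = 15e-3 / 1e14   # 15 mJ/cm2 expressed per nm^2

    for name, wavelength_nm in (("193nm ArF", 193.0), ("13.5nm EUV", 13.5)):
        photon_energy_j = H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9)   # E = h*c/lambda
        photons_per_nm2 = DOSE_J_PER_NM2 / photon_energy_j
        print(f"{name}: ~{photons_per_nm2:.0f} photons/nm^2 at 15 mJ/cm2")

The outputs (~146 and ~10 photons/nm2) match the figures above to within rounding.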

To improve sensitivity and suppress failures from photon shot-noise, we need to increase resist absorption, and also re-consider chemical amplification mechanisms. “The requirements will be the same for any resist and any chemistry,” reminded Lio. “We need to evaluate all resists at the same exposure levels and at the same rules, and look at different features to show stochastics like in the tails of distributions. Resolution is important but stochastics will rule our world at the dimensions we’re dealing with.”

—E.K.

Many Mixes to Match Litho Apps

Thursday, March 3rd, 2016


By Ed Korczynski, Sr. Technical Editor

“Mix and Match” has long been a mantra for lithographers in the deep-sub-wavelength era of IC device manufacturing. In general, forming patterns with resolution at minimum pitch as small as 1/4 the wavelength of light can be done using off-axis illumination (OAI) through reticle enhancement techniques (RET) on masks, using optical proximity correction (OPC) perhaps derived from inverse lithography technology (ILT). Lithographers can form 40-45nm wide lines and spaces at the same half-pitch using 193nm light (from ArF lasers) in a single exposure.
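That single-exposure limit can be sanity-checked with the Rayleigh resolution criterion, HP = k1 · λ / NA. The values used in the sketch below are typical published numbers assumed for illustration (NA = 1.35 for water-immersion 193nm optics, k1 near its practical floor of roughly 0.28, and 0.33 NA for current EUV tools), not figures quoted in this article:

    def min_half_pitch_nm(wavelength_nm, numerical_aperture, k1):
        """Rayleigh criterion for the minimum resolvable half-pitch."""
        return k1 * wavelength_nm / numerical_aperture

    # 193nm immersion with aggressive RET/OPC/OAI (k1 ~ 0.28, NA = 1.35)
    print(f"193i single exposure: ~{min_half_pitch_nm(193.0, 1.35, 0.28):.0f}nm half-pitch")
    # 0.33-NA EUV at a more relaxed k1 of 0.4, for comparison
    print(f"EUV single exposure:  ~{min_half_pitch_nm(13.5, 0.33, 0.40):.0f}nm half-pitch")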

Figure 1 shows that application-specific tri-layer photoresists are used to reach the minimum resolution of 193nm-immersion (193i) steppers in a single exposure. Tighter half-pitch features can be created using all manner of multi-patterning processes, including Litho-Etch-Litho-Etch (LELE or LE2) using two masks for a single layer or Self-Aligned Double Patterning (SADP) using sidewall spacers to accomplish pitch-splitting. SADP has been used in high volume manufacturing (HVM) of logic and memory ICs for many years now, and Self-Aligned Quadruple Patterning (SAQP) has been used in HVM by at least one leading memory fab.

Fig.1: Basic tri-layer resist (TLR) technology uses thin Photoresist over silicon-containing Hard-Mask over Spin-On Carbon (SOC), for patterning critical layers of advanced ICs. (Source: Brewer Science)

Next-Generation Lithography (NGL) generally refers to any post-optical technology with at least some unique niche patterning capability of interest to IC fabs:  Extreme Ultra-Violet (EUV), Directed Self-Assembly (DSA), and Nano-Imprint Lithography (NIL). Though proponents of each NGL have dutifully shown capabilities for targeted mask layers for logic or memory, the capabilities of ArF dry and immersion (ArFi) scanners to process >250 wafers/hour with high uptime dominates the economics of HVM lithography.

The world’s leading lithographers gather each year in San Jose, California at SPIE’s Advanced Lithography conference to discuss how to extend optical lithography. So of all the NGL technologies, which will win out in the end?

It is looking most likely that the answer is “all of the above.” EUV and NIL could be used for single layers. For other unique patterning applications, ArF/ArFi steppers will be used to create a basic grid/template, which will then be cut/trimmed using one of the available NGLs. Each mask layer in an advanced fab will need application-specific patterning integration, and one of the rare commonalities between all integrated litho modules is the overwhelming need to improve pattern overlay performance.

Naga Chandrasekaran, Micron Corp. vice president of Process R&D, provided a fantastic overview of the patterning requirements for advanced memory chips in a presentation during Nikon’s LithoVision technical symposium held February 21st in San Jose, California prior to the start of SPIE-AL. While resolution improvements are always desired, in the mix-and-match era the greatest challenges involve pattern overlay issues. “In high volume manufacturing, every nanometer variation translates into yield loss, so what is the best overlay that we can deliver as a holistic solution not just considering stepper resolution?” asks Chandrasekaran. “We should talk about cost per nanometer overlay improvement.”

Extreme Ultra-Violet (EUV)

As touted by ASML at SPIE-AL, the brightness, stability, and availability of tin-plasma EUV sources continue to improve, having reached 200W in the lab “for one hour, with full dose control,” according to Michael Lercel, ASML’s director of strategic marketing. ASML’s new TWINSCAN NXE:3350B EUVL scanners are now being shipped with 125W power sources, and Intel and Samsung Electronics reported running their EUV power sources at 80W over extended periods.

During Nikon’s LithoVision event, Mark Phillips, Intel Fellow and Director of Lithography Technology Development for Logic, summarized recent progress of EUVL technology: ~500 wafers per day is now standard, and ~1000 wafers per day can sometimes happen. However, since grids can be made with ArFi for 1/3 the cost of EUVL even assuming best productivity for the latter, ArFi multi-patterning will continue to be used for most layers. “Resolution is not the only challenge,” reminded Phillips. “Total edge-placement-error in patterning is the biggest challenge to device scaling, and this limit comes before the device physics limit.”

Directed Self-Assembly (DSA)

DSA seems most suited for patterning the periodic 2D arrays used in memory chips such as DRAMs. “Virtual fabrication using directed self-assembly for process optimization in a 14nm DRAM node” was the title of a presentation at SPIE-AL by researchers from Coventor, in which DSA compared favorably to SAQP.

Imec presented electrical results of DSA-formed vias, providing insight on DSA processing variations altering device results. In an exclusive interview with Solid State Technology and SemiMD, imec’s Advanced Patterning Department Director Greg McIntyre reminds us that DSA could save one mask in the patterning of vias which can all be combined into doublets/triplets, since two masks would otherwise be needed to use 193i to do LELE for such a via array. “There have been a lot of patterning tricks developed over the last few years to be able to reduce variability another few nanometers. So all sorts of self-alignments.”

While DSA can be used for shrinking vias that are not doubled/tripled, there are commercially proven spin-on shrink materials that cost much less to use, as shown by Kaveri Jain and Scott Light from Micron in their SPIE-AL presentation, “Fundamental characterization of shrink techniques on negative-tone development based dense contact holes.” Chemical shrink processes primarily require control over times, temperatures, and ambients inside a litho track tool to be able to repeatably shrink contact hole diameters by 15-25 nm.

Nano-Imprint Litho (NIL)

For advanced IC fab applications, the many different options for NIL technology have been narrowed to just one for IC HVM. The step-and-pattern technology that had been developed and trademarked as “Jet and Flash Imprint Lithography” (J-FIL) has been commercialized for HVM by Canon NanoTechnologies, formerly known as Molecular Imprints. Canon shows improvements in the NIL mask-replication process, since each production mask will need to be replicated from a written master. To use NIL in HVM, mask image placement errors from replication will have to be reduced to ~1nm, while the currently available replication tool is reportedly capable of 2-3nm (3 sigma).

Figure 2 shows normalized costs modeled to produce 15nm half-pitch lines/spaces for different lithography technologies, assuming 125 wph for a single EUV stepper and 60 wph for a cluster of 4 NIL tools. Key to throughput is fast filling of the 26mmx33mm mold nano-cavities by the liquid resist, and proper jetting of resist drops over a thin adhesion layer enables filling times less than 1 second.

Fig.2: Relative estimated costs to pattern 15nm half-pitch lines/spaces for different lithography technologies, assuming 125 wph for a single EUV stepper and 60 wph for a cluster of 4 NIL tools. (Source: Canon)

Researchers from Toshiba and SK Hynix described evaluation results of a long-run defect test of NIL using the Canon FPA-1100 NZ2 pilot production tool, capable of 10 wafers per hour and 8nm overlay, in a presentation at SPIE-AL titled, “NIL defect performance toward high-volume mass production.” The team categorized defects that must be minimized into fundamentally different categories—template, non-filling, separation-related, and pattern collapse—and determined parallel paths to defect reduction to allow for using NIL in HVM of memory chips with <20nm half-pitch features.

—E.K.

Comfortable Consumer EEG Headset Shown by Imec and Holst Centre

Thursday, August 27th, 2015


By Ed Korczynski, Sr. Technical Editor

A new wireless electroencephalogram (EEG) headset that is comfortable while providing medical-grade data acquisition has been shown by the partnership of imec, the Holst Centre, and the Industrial Design Engineering (IDE) department of TU Delft. The 3D-printed low-volume product enables early research and self-monitoring of emotions and mood in daily life situations using a smartphone application. Consumer applications include games that monitor relaxation and/or concentration, and medical applications that help with sleep disorders and treatment of Attention Deficit Hyperactivity Disorder (ADHD).

Figure 1 shows the new headset with novel elastic electrode arrays in an elegant uni-body assembly to optimize both comfort and signal quality. The electronics package in the middle of the headset fits on the back of the user’s neck. Each electrode is a small array of elastic polymer fingers which allow for dry contact—without needing a conductive liquid or gel—to skin for long-term comfortable use.

Figure 1: Comfortable EEG headset developed by imec and Holst Centre and TU Delft in 2015, providing medical-quality data tracing of emotions and mood in daily life situations using a smartphone application. (Source: imec)

“Leveraging imec’s strong background in EEG sensing, dry polymer and active electrodes, miniaturized and low-power data acquisition, and low-power wireless interfaces to smartphones, we were able to focus on the ergonomics of this project. In doing so, we have successfully realized this unique combination of comfort and effectiveness at the lowest possible cost to the future user,” stated Bernard Grundlehner, EEG system architect at imec.

In 2011, imec and Holst Centre created an 8-channel ultra-low-power analog readout application-specific integrated circuit (ASIC) that consumes only 200µW and features a high common-mode rejection ratio (CMRR) of 120dB and a signal-to-noise ratio of 25dB on real EEG signals. This ASIC is tuned to high input impedance (1GΩ) for compatibility with the use of dry electrodes. That system—including ASIC, radio, and controller chips—could be integrated in a package of 25mm x 35mm x 5mm for ease of integration in headsets, helmets, or other accessories. The system consumes only 3.3mW for continuous recording and wireless transmission of 1 channel—9.2mW for 8 channels—allowing for 1.5 to 4 days of functionality when powered by a 100mAh Li-ion battery.
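Those battery-life figures can be sanity-checked from the quoted power numbers; the Python sketch below assumes a 3.7V nominal Li-ion cell voltage and lossless power conversion, neither of which is stated in the article:

    BATTERY_MAH = 100.0
    NOMINAL_CELL_V = 3.7   # assumed Li-ion nominal voltage (not stated in the article)
    battery_energy_j = BATTERY_MAH / 1000.0 * 3600.0 * NOMINAL_CELL_V   # ~1330 J

    for channels, power_mw in ((1, 3.3), (8, 9.2)):
        runtime_days = battery_energy_j / (power_mw / 1000.0) / 3600.0 / 24.0
        print(f"{channels}-channel recording at {power_mw} mW: ~{runtime_days:.1f} days")

The ideal-case results (~4.7 and ~1.7 days) are consistent with the quoted 1.5 to 4 days once real-world conversion losses and system overhead are taken into account.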

In 2009, imec and Holst Centre showed a rough mobile EEG prototype to partners and journalists at the yearly imec Technology Forum. Figure 2 shows that the prototype was bulky and a bit awkward to wear; what the figure cannot show is that the sintered silver/silver-chloride electrodes are very hard, so that dry contact to the human scalp tends to be uncomfortable.

Figure 2: Ed Korczynski tests an imec EEG headset rough prototype, using uncomfortable hard silver/silver-chloride electrodes, at the 2009 Imec Technology Forum. (Source: Ed Korczynski)

The 2015 model uses new flexible electrodes arrays which are inherently more comfortable than hard silver/silver-chloride electrodes. A team of six master students from IDE of TU Delft led the design optimization of the 3D unibody for the new headset using 3D printing for short-loop prototyping and testing of different shapes for stability and comfort. Iterative tests with users for multiple applications led to this design which is intended for long-term comfortable use by consumers outside of a controlled research environment.

The new EEG headset is manufactured in one piece using 3-D printing, after which the electronic components are placed, connected, and covered by a 3-D-printed rubber inlay. The EEG electrodes are situated at the front of the headset for optimal acquisition of signals related to emotion and mood variations. A mobile app can then tie the user’s emotional state to environmental information such as location, time, agenda, and social context to track possible unconscious effects.

—E.K.

Solid State Technology: August 7-14, 2015

Monday, August 24th, 2015