Posts Tagged ‘SST Top Story Right’

Waddle-room for Black Swans: EUV Stochastics

Friday, April 13th, 2018

By Ed Korczynski, Sr. Technical Editor

Long-delayed and extremely complex, extreme ultra-violet (EUV) lithography technology is now being readied for the high-volume manufacturing (HVM) of commercial semiconductor integrated circuits (IC). The International Society for Optics and Photonics (SPIE) Advanced Lithography conferences, held yearly in San Jose, California, gather the world’s top lithographers focused on patterning of ICs, and the 2018 gathering included several days of in-depth presentations on the progress of EUV sources, steppers, masks, and photoresists.

With a nod to Taleb’s “black swan theory” (https://en.wikipedia.org/wiki/Black_swan_theory), stochastic defects in advanced lithography have been called the “black swans” of yield loss. They hide in the long tail on the short side of the seven-sigma distribution of a billion contact-holes. They cause missing contacts and cannot be controlled with source-mask optimization (SMO). They breed in etch chambers.

Many yield losses in lithography are classified as “systematic” due to predictable interactions of photons and masks and photoresists, and modeling can show how to constrain them. White swan “random” defects—such as those caused by particles or molecular contaminants—can be penned up and controlled with proper materials-engineering and filtration of photoresist blends. In contrast, “stochastic” black swans appear due to atomic-scale inhomogeneities in resists and the wiggliness of atoms.

The wavelength of EUV is ~13.5nm, which the IC fab industry would like to use to pattern half-pitches (HP) in the range of 16-20nm in a single-exposure. At these dimensions, we find black swans hiding in lithography and in etch results.

An ongoing issue is the inability to model the multi-dimensional complexity of plasma etching of multi-layer resist stacks. In his 2016 SPIE keynote titled “Patterning Challenges in the sub-10 nm Era,” Moshe Preil wrote:

It is certainly not surprising that etch simulation is not as predictive as lithography. The plasma environment is significantly more chaotic than the relatively well behaved world of photons and photosensitive molecules. Even the evolving need for stochastic simulation in the lithography domain is significantly simpler than the three dimensional controlled chaos of an etch chamber. The number of different chemical pathways available for reaction within an etcher can also present a daunting challenge. The etch process actually needs these multiple pathways to passivate sidewalls while etching vertically in order to carefully balance lateral vs. vertical etch rates and provide the desired material selectivity.

Etch faces additional challenges due to the resist pattern itself. Over the years, resist films have been reduced in thickness to such an extent that the resist itself is no longer adequate to act as the transfer mask for the entire etch process. Etch stacks are now a complex layer cake of optical materials (anti-reflection coatings) and multiple hard masks. While this simplifies the resist patterning process, it has shifted the burden to etch, making the stack more complex and difficult to model. Etch recipe optimization remains largely the domain of highly talented and diligent engineers whose work is often more an art than a science.

Today’s Tri-Layer-Resist (TLR) stacks of photoresist over silicon-based hard-mask over carbon-based anti-reflective coating continue to evolve in complexity. Quadruple-Layer Resist (QLR) stacks add an adhesion/buffer layer of material between the photoresist and the hard-mask. Even without considering multi-patterning process integration, just transferring the pattern from a single-exposure now entails extreme etch complexity.

Figure 1 from “Line-edge roughness performance targets of EUV lithography” presented at 2017 SPIE by Brunner et al. (Proc. of SPIE Vol. 10143, 10143E-2) shows simulated stochastic variation in 18nm HP line grids. The authors explain why such black swan events cannot be ignored.

Fig. 1: Stochastic image simulation in PROLITH™ software of a single exposure of EUV to form long trenches at 36nm pitch: (LEFT) aerial image first calculated as a smooth profile, (CENTER) stochastic calculation of photo-acid concentration before post-exposure bake (PEB) as “latent image”, and (RIGHT) final calculated image after development, based on stochastic de-blocking reactions during PEB. (Source: Brunner et al., Proc. of SPIE Vol. 10143, 10143E-2)

Such stochastic noise is present for all lithographic processes but is more worrisome for EUV lithography for several reasons:

  • fewer photons per unit dose, since each EUV photon carries 14X more energy than a 193nm photon,
  • limited EUV power – only a fraction (~1%) of the source power at intermediate focus makes it to the wafer,
  • only a fraction of EUV photons are actually absorbed within the resist, typically <20% for polymer materials, and
  • smaller features as we progress to more advanced nodes, and so less area to collect EUV photons. Ideally, as the lithographic pixel size shrinks, the number of photons per pixel would stay the same; at a fixed dose, however, the photon count falls with the pixel area, making shot noise worse.

Stochastic phenomena – photon shot noise, resist molecular inhomogeneities, electron scattering events, etc. – now contribute to dimensional variation in EUV resist patterns at levels comparable to or greater than customary sources of variation, such as defocus. These stochastic effects help to limit k1 to higher values (worse resolution) than traditional optical lithography, and will counteract the benefits of high NA EUV optics. The quest to improve EUV lithography pattern quality will increasingly focus on overcoming stochastic barriers. Higher power EUV light sources are urgently needed as features shrink. Photoresist materials with higher EUV absorption will also help with stochastic issues. Alternative non-polymeric resist materials and post-develop smoothing processes may also play a future role.
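To put rough numbers on the photon-count argument, the short sketch below estimates photons per exposure pixel and the resulting shot-noise fraction. The 30 mJ/cm2 dose and the 16nm "pixel" size are illustrative assumptions, not values taken from the papers cited here.

# Back-of-the-envelope photon-count and shot-noise estimate for EUV vs. ArF.
# The 30 mJ/cm2 dose and the 16nm "pixel" are illustrative assumptions.
import math

H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s

def photons_per_pixel(wavelength_nm, dose_mj_cm2, pixel_nm):
    """Mean number of photons landing on a square pixel of side pixel_nm."""
    photon_energy_j = H * C / (wavelength_nm * 1e-9)
    dose_j_m2 = dose_mj_cm2 * 1e-3 * 1e4            # mJ/cm^2 -> J/m^2
    pixel_area_m2 = (pixel_nm * 1e-9) ** 2
    return dose_j_m2 * pixel_area_m2 / photon_energy_j

for label, wavelength in [("EUV 13.5nm", 13.5), ("ArF 193nm", 193.0)]:
    n = photons_per_pixel(wavelength, dose_mj_cm2=30.0, pixel_nm=16.0)
    print(f"{label}: ~{n:,.0f} photons/pixel, "
          f"1-sigma shot noise ~{100.0 / math.sqrt(n):.1f}%")

With roughly 14X fewer photons per pixel at the same dose, the EUV case starts with several times the relative shot noise before resist chemistry adds its own randomness.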

In “Stochastic effects in EUV lithography: random, local CD variability, and printing failures” by Peter De Bisschop of IMEC (J. Micro/Nanolith. MEMS MOEMS, Oct-Dec 2017, Vol. 16/4) data are shown in support of the need for new stochastic control metrics in addition to the established “process window” metrics. A dose experiment using a family of chemically-amplified resists (CAR) to produce 18nm HP line/space (L/S) grids showed that increasing dose in the range from 30 to 60 mJ/cm2 reduced line-width roughness (LWR) from 4.6 to 3.9nm, with no further improvement when increasing dose to 70 and 80 mJ/cm2. However, micro-bridging across spaces continued to drop by orders of magnitude over the entire range of doses, proving that stochastic defects are different “animals.”

In general, we can categorize sources of stochastic variation in advanced lithography as follows (the first two categories are illustrated in a short sketch after the list):

1)    Number and spatial distribution of photons absorbed (a.k.a. “shot noise”),

2)    Quantum efficiency of photo-acid generation (PAG)/diffusion, along with quencher distribution,

3)    Develop and rinse solution inhomogeneities,

4)    Underlayer (hardmask/anti-reflective coating/adhesion layer) optical and chemical interactions,

5)    Smoothing techniques including deposition, etch, and infusion, and

6)    Design layout and OPC and SMO.
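As a minimal illustration of how the first two categories combine, the sketch below draws a Poisson-distributed photon count for each resist pixel and then thins it with an assumed acid-generation probability. The photon count, acid yield, and pixel count are arbitrary illustration values, not parameters of any real resist.

# Minimal Monte Carlo of categories (1) and (2) above: photon shot noise
# followed by the quantum efficiency of acid generation, over many pixels.
# All parameter values are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(0)

MEAN_PHOTONS_PER_PIXEL = 500    # assumed absorbed EUV photons per pixel
ACID_YIELD_PER_PHOTON = 0.3     # assumed probability a photon yields an acid
N_PIXELS = 1_000_000

# (1) Photon shot noise: the absorbed photon count is Poisson-distributed.
photons = rng.poisson(MEAN_PHOTONS_PER_PIXEL, size=N_PIXELS)

# (2) PAG quantum efficiency: each absorbed photon generates an acid with
#     some probability, so the acid count is a binomial thinning of (1).
acids = rng.binomial(photons, ACID_YIELD_PER_PHOTON)

mean, sigma = acids.mean(), acids.std()
print(f"mean acids/pixel = {mean:.1f}, relative sigma = {sigma / mean:.1%}")
# The rare pixels far below the mean are the article's black swans: invisible
# in the average, but present somewhere among a billion contact holes.
print("pixels more than 4 sigma below the mean:",
      int((acids < mean - 4 * sigma).sum()))

Poisson arrival followed by binomial thinning is the textbook way to chain these two noise sources; real stochastic resist models layer acid diffusion, quencher statistics, and development behavior on top.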

While we cannot eliminate stochastics by design, we can start to design around them using sophisticated process simulation models. At 2018 SPIE, Ping-Jui Wu et al. from National Taiwan University and TSMC used molecular dynamics simulations to model “Nanoscale inhomogeneity and photoacid generation dynamics in extreme ultraviolet resist materials.” Figure 2 shows that ion-pair interactions in CAR create different nano-scale domains of poly(4-hydroxystyrene) (PHS) base polymers and triphenylsulfonium (TPS) based PAGs, depending upon the concentration of tert-butyl acrylate (TBA) copolymers in the blend.

Fig. 2: Molecular dynamics simulation of nano-scale domain separation within 8nm edge-length cubes of CAR composed of phenol groups (grey), TBA (red), and TPS (yellow) for (a) phenol-rich blend, and (b) TBA-rich blend. (Source: Wu et al., Proc. of SPIE Vol. 10586, 10586-10)

Table 1 shows modeled (Brainard, Trefonas & Gallatin, Proc. of SPIE Vol. 10583/10583-40) contributions to stochastic LWR from PS-CAR exposed with 0.33NA EUV to form 16nm HP L/S grids. For this PS-CAR blend the quencher variability contributes nearly as much LWR as the photon shot-noise, indicating room for improvement by fine-tuning the PS-CAR formulation.

One way to scare away black swans hiding in the resist is with bright light, as shown at 2018 SPIE by a team led by researchers from TEL in “EUV resist sensitization and roughness improvement by PSCAR™ with in-line UV flood exposure system.” Photo-Sensitized (PS) CAR contains a precursor molecule that converts to a photosensitizer when exposed to EUV light, in addition to PS-PAG and “photo decomposable base (quencher) which can be photosensitized” (PS-PDB) molecules. UV flood exposure after EUV pattern exposure but before development generates extra acid, allowing for higher quencher loading, such that higher image contrast with reduced LWR can be obtained. Increasing the concentrations of PAG and quencher in the resist blend reduces stochastic variation at any target dose.

DUV Ducks:  ArFi multi-patterning

As shown by Nikon Precision at the company’s 2018 workshop in San Jose pre-SPIE, deep ultra-violet (DUV) steppers using 248nm KrF or 193nm ArF sources continue to improve in IC fabrication capability. ASML also continues to improve its DUV steppers, including integrating the advanced metrology technology acquired from Hermes Microvision as part of the company’s Holistic Lithography offering.

“The Challenge of Multi-Patterning Lithography for Contact Layer in 7nm and Beyond” by Wan-Hsiang Liang et al. of GlobalFoundries at 2018 SPIE showed how multiple ArF-immersion (ArFi) exposures can replace one EUV step. They characterized the process window (PW) for patterning as limited by two types of defects:  1) single-layer bridging or missing contacts driven by lithography, and 2) multi-layer bridging or unlanded contacts or extra patterns driven by both lithography and hard-mask open (HMO) etch. They found that a patterning PW can only be obtained by co-optimizing lithography and etch.

DUV versus EUV cost estimates

The target metal HP for IMEC node 7nm (iN7) on-chip IC interconnects is 16nm, dropping to 10nm for the next iN5. A single exposure of 0.33NA EUV can create the former half-pitch, but 10nm will require double exposure of EUV.

The capital expenditure (CapEx) for 8 EUV or 16 ArFi steppers is ~US$1B. We know that EUV could improve fab yields, but we also know that black swans will cause new yield losses. The least risk for first use of EUV is for blocks/cuts to ArFi self-aligned quadruple patterning (SAQP) lines, so that multi-color ArFi masks could be substituted in an EUV yield-loss emergency without having to change the design.

In my ongoing role as an analyst with TECHCET, at 2018 SPIE I presented a poster on “Cost modeling 22nm pitch patterning approaches” in HVM using either EUV or ArFi DUV steppers in complex multi-patterning process flows. In this model, all yield losses, including those from stochastic black swans, are assumed to be zero to create a Cost Per Wafer Pass (CPWP) metric. Real Cost of Ownership (CoO) calculations can start with these relative CPWP numbers and then factor in systematic yield losses dependent upon design, as well as random yield losses dependent upon particles and wafer breakage. CPWP includes only fab costs, excluding EDA, masks, and final test.

Figure 3 shows that EUV-based process flows could save money over strict use of ArFi in multi-patterning, assuming 1 EUV exposure can replace 3 ArFi exposures with similar yield. EDA for EUV should cost less than doing multi-color ArFi layouts, and design- and process-induced systematic yield losses should be reduced. By reducing the number of deposition and etch steps needed in the full flow, use of EUV should significantly reduce the turn-around-time (TAT) through the fab. GlobalFoundries’ Gary Patton has said that such TAT savings for advanced logic chips could be a month or more.

Fig. 3: Cost Per Wafer Pass (CPWP)—with all yield losses including those from stochastics set to zero—modeled for different process flows to achieve 22nm pitch patterns, showing that flows using EUV could reduce HVM costs if yields can be managed. (Source: Korczynski, Proc. of SPIE Vol. 10589, 10589-25)

EUV resist materials have additional stochastic constraints compared to ArFi resists, and as more highly engineered materials they are expected to cost more. Nonetheless, the cost of stepper CapEx depreciation per wafer is ~10x the cost of all lithography materials for both ArFi and EUV in this model. More details of the CPWP model, including materials assumptions, will be presented at the 2018 Critical Materials Council (CMC) Conference, April 26-27 in Chandler, Arizona [DISCLOSURE: Ed Korczynski is co-chair of this public conference].
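As a toy version of the litho-only slice of such a model, the sketch below compares depreciation per wafer pass for one EUV exposure against three ArFi exposures. Every number is a placeholder assumption; none come from the TECHCET model or the SPIE poster, and the deposition and etch steps that the EUV flow eliminates, which drive much of the modeled savings, are deliberately left out.

# Toy, litho-only slice of a cost-per-wafer-pass comparison.  Every number
# below is a placeholder assumption, and the non-litho steps saved by EUV
# in the full multi-patterning flow are ignored.
YEARS_DEPRECIATION = 5
TOOL_HOURS_PER_YEAR = 24 * 365 * 0.80       # assumed 80% utilization

def depreciation_per_pass(tool_price_usd, wafers_per_hour):
    """Stepper depreciation charged to one wafer for one exposure pass."""
    hourly_cost = tool_price_usd / (YEARS_DEPRECIATION * TOOL_HOURS_PER_YEAR)
    return hourly_cost / wafers_per_hour

# Hypothetical tool prices and throughputs (round numbers for illustration).
euv_pass = depreciation_per_pass(tool_price_usd=120e6, wafers_per_hour=100)
arfi_pass = depreciation_per_pass(tool_price_usd=60e6, wafers_per_hour=250)

print(f"1 EUV exposure:   ${1 * euv_pass:6.2f} per wafer (litho depreciation only)")
print(f"3 ArFi exposures: ${3 * arfi_pass:6.2f} per wafer (litho depreciation only)")
# In the article's full-flow model, the savings from dropping the extra
# deposition and etch steps of multi-patterning are what tip the comparison
# in EUV's favor, not the exposure-only depreciation compared here.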

Conclusions

As the commercial IC fab industry begins ramping EUV lithography into HVM, engineers now must anticipate new stochastic failures. Perfect dose and focus cannot prevent them. A new constraint is added to the myriad challenges of engineering photoresist blends.

At the level of atoms we find plenty of kinetic energy to make things wiggle…or waddle. Waddling black swans have always been with us, but we used to be able to ignore them. While we can control random white swans, black swans cannot be controlled; we can only give them room to waddle around.

—E.K.

Companies Ready Cobalt for MOL, Gate Fill

Thursday, December 21st, 2017

By Dave Lammers

Cobalt for middle-of-the-line and trench contacts emerged as a key topic at the International Electron Devices Meeting, as Intel, GlobalFoundries, and Applied Materials discussed how best to take advantage of cobalt’s properties.

For its forthcoming 10nm logic process, Intel Corp. used cobalt for several of the lower metal levels, including a cobalt fill at the trench contacts and cobalt M0 and M1 wiring levels. The result was much-improved resistivity and reliability, compared with the traditional metallization at those levels.

Cobalt was used for the local interconnects of the Intel 10nm process, improving line resistance by 60 percent. (Source: Intel)

Chris Auth, director of logic technology at Intel’s Portland Technology Center, said the contacted line resistance “provides a good indicator of the benefits of cobalt versus tungsten,” with a 60 percent reduction in line resistance and a 1.5X reduction in contact resistance.

While cobalt was used for the local interconnects, the upper 10 metal layers were copper, with a cobalt cap used for layers M2-M5 to provide a 50X improvement in electro-migration. Intel continued to use tungsten for the gate fill.

John Pellerin, a vice president at GlobalFoundries who directs global research and development, said GlobalFoundries decided that for its 7nm logic technology, ramping in mid-2018, it would replace tungsten with cobalt at the trench contact level, which is considered the first level of the middle-of-the-line (MOL).

“We are evaluating it for implementation into the next level of contact above that. Cobalt trench level contacts are process of record (POR) for the 7nm technology,” Pellerin said in an interview at the 2017 IEDM, held Dec. 2-6 in San Francisco.

High performance logic often involves four-fin logic cells to drive the maximum amount of current from the largest transistor width. “You have to get that current out of the transistor. That is where the MOL comes into play. Junction and MOL resistance optimization is key to taking advantage of a four-fin footprint, and it takes a multi-front optimization to take advantage of that equation.”

Pellerin said the biggest challenge with tungsten trench contacts is that the CVD process tends to leave a seam void. “We are always fighting seam voids. With cobalt deposition we get an intrinsic resistance improvement, and don’t get seam voids by pushing tungsten down in there,” Pellerin said.

Tighter Metal Pitches

Scotten Jones, president of consultancy IC Knowledge (Boston), said semiconductor vendors will introduce cobalt when it makes sense. Because it is a new material, requiring considerable costs prior to insertion, companies will use it when they need it.

“Global has trench contacts, while Intel uses cobalt at three levels. But the reason is that Intel has a 36nm minimum metal pitch with its 10nm process, while Global is at 40nm with its 7nm process. It is only at the point where the line gets narrow enough that cobalt starts to make sense.”

Applied Cobalt Solutions

As cobalt begins to replace tungsten at the smaller-dimension interconnect layers, Applied Materials is readying process flows and E-beam inspection solutions optimized for cobalt.

Namsung Kim, senior director of engineering management at Applied Materials, said cobalt has a bulk resistivity that is similar to tungsten, but the barrier thickness required for tungsten at leading-edge transistors is swinging the advantage to cobalt as dimensions shrink.

Line resistance probability plot of cobalt versus tungsten at 12nm critical dimensions. (Source: Applied Materials)

“Compared with tungsten, cobalt has a very thin barrier thickness, so you can fill up with more material. At our Maydan Technology Center, we’ve developed a reflow process for cobalt that is unique,” Kim said. The cobalt reflow process uses an annealing step to create larger cobalt grain sizes, reducing the resistance. And because there is no source of fluorine in the cobalt deposition steps, a thin barrier layer can suffice.
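A rough way to see why liner thickness dominates at these dimensions is to compute the resistance of the conducting core left inside a barrier-lined trench. The 12nm-wide geometry echoes the critical dimension in the plot above, but the barrier thicknesses and bulk-like resistivities below are illustrative assumptions (thin-film resistivities are considerably higher for both metals), not Applied Materials or Intel data.

# Sketch of the barrier-thickness argument: the liner eats a fixed shell of
# the trench, so the conducting core shrinks fast at small CDs.  Geometry,
# liner thicknesses, and resistivities are rough illustrative values only.
def line_resistance_ohm_per_um(cd_nm, height_nm, barrier_nm, rho_uohm_cm):
    """Treat the barrier as non-conducting; return ohms per micron of line."""
    core_w_nm = cd_nm - 2 * barrier_nm       # liner on both sidewalls
    core_h_nm = height_nm - barrier_nm       # liner on the trench bottom
    if core_w_nm <= 0 or core_h_nm <= 0:
        return float("inf")                  # no room left for the fill metal
    area_cm2 = (core_w_nm * 1e-7) * (core_h_nm * 1e-7)
    return (rho_uohm_cm * 1e-6) * 1e-4 / area_cm2    # 1 um of length = 1e-4 cm

# Assumed 12nm-wide, 24nm-tall trench; near-identical bulk resistivities.
w_line = line_resistance_ohm_per_um(12, 24, barrier_nm=3, rho_uohm_cm=5.6)
co_line = line_resistance_ohm_per_um(12, 24, barrier_nm=1, rho_uohm_cm=6.2)
print(f"W fill,  3nm liner: {w_line:6.0f} ohm/um")
print(f"Co fill, 1nm liner: {co_line:6.0f} ohm/um")

Even with nearly identical bulk resistivities, the thinner liner leaves a much larger conducting cross-section, which is the effect Kim describes; at wider pitches the same liner is a small fraction of the trench and the advantage largely disappears, consistent with Jones’ point about when cobalt starts to make sense.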

At IEDM, Naomi Yoshida, a distinguished member of the technical staff at Applied, presented a paper describing Applied’s research using cobalt to fill a 5nm-logic-generation replacement metal gate (RMG). The fill is deposited above the high-k dielectric and work function metals, and at the 5nm node and beyond there is precious little room for the gap fill metal.

Yoshida said modern transistors use multiple layers of work-function metals to control threshold voltages, with high-performance logic requiring low Vt’s and IoT devices requiring relatively high Vt’s. After the different work function layers are deposited, the fill material is deposited.

At the 5nm node, Applied Materials estimates that the contacted poly pitch (CPP) will shrink to about 42nm, while the gate length (Lg) will be less than 12nm. “There is very limited space for the fill materials, so customers need a more conductive metal in a limited space. That is the major challenge,” Yoshida said in an interview at the IEDM.

Work Function Maintained

Naomi Yoshida: room for gate fill disappearing

The Applied R&D work showed that if the barrier layer for a tungsten fill is reduced too much, to a 2nm or 3nm TiN layer for example, the effective work function (eWF) degrades by as much as 500mV and the total gate conductance suffers. With the CVD process used to deposit a tungsten RMG fill, there was “significant fluorine diffusion” into the work function metal layer in the case of a 2nm TiN barrier.

By contrast, the cobalt fill maintained the NMOS band-edge eWF with the same 2nm TiN barrier.

Gradually, cobalt will be adopted more widely for the contacts, interconnects, and RMG gate fill steps. “It is time to think about how to achieve more conductance in the gate material. Previously, people said there was a negligible contribution from the gate material, but now with the smaller gates at 5nm, gate fill metal makes a huge contribution to resistance, and barrier thickness reduction is important as well,” Yoshida said.

E-beam Inspection

Nicolas Breil: E-beam void inspection useful for cobalt contacts

Applied Materials also has developed an e-beam inspection solution, ProVision, first introduced in mid-2016, and has optimized it for inspecting cobalt voids. Nicolas Breil, a director in the company’s contact module division, said semiconductor R&D organizations are busy developing cobalt contact solutions, optimizing the deposition, CMP, and other steps. “For such a dense and critical level as the contact, it always needs very careful engineering. The key is to get results as fast as possible, but being fast can be very expensive.”

Amir Wachs, business development manager at Applied’s process diagnostics and control business unit in Rehovot, Israel, said the ProVision e-beam inspection system has a resolution of 1nm, at 10,000-20,000 locations per hour, taking a few hundred measurements on each field of view.

“Voids form when there are adhesion issues between the cobalt and TiN. One of the key issues is the correct engineering of the Ti nitride and PVD cobalt and CVD cobalt. To detect embedded voids requires a TEM inspection, but then customers get very limited statistics. There might be a billion contacts per chip, and with conventional TEM you might get to inspect two.”
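Some rough sampling arithmetic shows why that statistical gap matters. Using the locations-per-hour and measurements-per-field figures quoted above, plus an assumed eight-hour inspection window, the sketch below estimates the smallest failure rate such a sample could plausibly resolve; the shift length and the rule-of-thumb threshold are assumptions for illustration.

# Rough sampling arithmetic for contact-void inspection, using the e-beam
# throughput figures quoted above; shift length and the "expect a few hits"
# threshold are assumptions for illustration.
LOCATIONS_PER_HOUR = 15_000         # mid-range of the 10,000-20,000 figure
MEASUREMENTS_PER_LOCATION = 300     # "a few hundred" per field of view
HOURS = 8                           # assumed inspection window

contacts_measured = LOCATIONS_PER_HOUR * MEASUREMENTS_PER_LOCATION * HOURS
print(f"contacts measured in {HOURS} h: {contacts_measured:,}")

# To expect to see at least a few failures, the failure rate must be on the
# order of a few divided by the sample size.
print(f"roughly resolvable failure rate: ~{3 / contacts_measured:.1e} per contact")
# A TEM cross-section samples only a handful of contacts, so failure rates in
# the parts-per-billion range are statistically invisible to it.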

The ProVision system speeds up the feedback loop between inspection and co-optimization. “Customers can assess the validity of the optimization. With other inspection methods, co-optimization might take five days to three weeks. With this type of analysis, using ProVision, customers can do tests early in the flow and validate their co-optimization within a few hours,” Wachs said.

Enabling the A.I. Era

Monday, October 23rd, 2017

By Pete Singer, Editor-in-Chief

There’s a strongly held belief now that the way in which semiconductors will be designed and manufactured in the future will be largely determined by a variety of rapidly growing applications, including artificial intelligence/deep learning, virtual and augmented reality, 5G, automotive, the IoT and many other uses, such as bioelectronics and drones.

The key question for most semiconductor manufacturers is how they can benefit from these trends. One of the goals of a recent panel assembled by Applied Materials for an investor day in New York was to answer that question.

Jay Kerley, Praful Krishna, Mukesh Khare, Matt Johnson and Christos Georgiopoulos (left to right)

The panel, focused on “enabling the A.I. era,” was moderated by Sundeep Bajikar (former Sellside Analyst, ASIC Design Engineer). The panelists were: Christos Georgiopoulos (former Intel VP, professor), Matt Johnson (SVP in Automotive at NXP), Jay Kerley (CIO of Applied Materials), Mukesh Khare (VP of IBM Research) and Praful Krishna (CEO of Coseer). The panel discussion included three debates: the first one was “Data: Use or Discard”; the second was “Cloud versus Edge”; and the third was “Logic versus Memory.”

“There’s a consensus view that there will be an explosion of data generation across multiple new categories of devices,” said Bajikar, noting that the most important one is the self-driving car.  NXP’s Johnson responded that “when it comes to data generation, automotive is seeing amazing growth.” He noted the megatrends in this space: the autonomy, connectivity, the driver experience, and electrification of the vehicle. “These are changing automotive in huge ways. But if you look underneath that, AI is tied to all of these,” he said.

He said that estimates of data generation range from 25 gigabytes per hour on the low end up to 250 gigabytes or more per hour on the high end. “It’s going to be, by the second, the largest data generator that we’ve seen ever, and it’s really going to have a huge impact on all of us.”

Georgiopoulos agreed that there’s an enormous amount of infrastructure that’s getting built right now. “That infrastructure is consisting of both the ability to generate the data, but also the ability to process the data both on the edge as well as on the cloud,” he said. The good news is that sorting that data may be getting a little easier. “One of the more important things over the last four or five years has been the quality of the data that’s getting generated, which diminishes the need for extreme algorithmic development,” he said. “The better data we get, the more reasonable the AI neural networks can be and the simpler the AI networks can be for us to extract information that we need and turn the data information into dollars.”

Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to the sources of this information. Connectivity and latency challenges, bandwidth constraints and greater functionality embedded at the edge favor distributed models. Jay Kerley (CIO of Applied Materials) addressed the debate of cloud vs edge computing, noting it was a factor of data, then actual value and finally intelligence. “There’s no doubt that with the pervasiveness of the edge and billions of devices, data is going to be generated exponentially. But the true power comes in harnessing that data in the core. Taking it and turning it into actual intelligence. I believe that it’s going to happen in both places, and as a result of that, the edge is not going to only generate data, it’s going to have to consume data, and it’s going to have to make decisions. When you’re talking about problems around latency, maybe problems around security, problems around privacy, that can’t be overcome, the edge is going to have to be able to make decisions,” he said.

Kerley said there used to be a massive push to build data centers, but that’s changed. “You want to shorten the latency to the edge, so that data centers are being deployed in a very pervasive way,” he said. What’s also changing is that cloud providers have a huge opportunity to invest in the edge, to make the edge possible. “If they don’t, they are going to get cut out,” he added. “They’ve got to continue to invest to make access into the cloud as easy, and as frictionless as possible. At the end of the day, with all that data coming into these cloud data centers, the processing of that information, turning it into actual intelligence, turning it into value, is absolutely critical.”

Mukesh Khare (VP of IBM Research) also addressed the value of data. “We all believe that data is our next natural resource. We’re not going to discard it. You’re going to go and figure out how to generate value out of it,” he said.

Khare said that today, most artificial intelligence is too complex. It requires training, building models and then doing inferencing using those models. “The reason there is good in artificial intelligence is because of the exponential increase in data, and cheap compute. But, keep in mind that, the compute that we are using right now is the old compute. That compute was built to do spreadsheet, databases, the traditional compute.

“Since that compute is cheap and available, we are making use of it. Even with the cheap and available compute in cloud, it takes months to generate those models. So right now, most of the training is still being done in cloud. Whereas, inferencing, making use from that model is done at the edge. However, going forward, it is not possible because the devices at the edge are continuously generating so much data that you cannot send all the data back to the cloud, generate models, and come back on the edge.”

“Eventually, a lot of training needs to move to the edge as well,” Khare said. This will require some innovation so that the compute, which is being done right now in cloud, can be transferred over to edge with low-power devices, cheap devices. Applied Materials’ Kerley added that innovation has to happen not only at the edge, but in the data center and at the network layer, as well as in the software frameworks. “Not only the AI frameworks, but what’s driving compression, de-duplication at the storage layer is absolutely critical as well,” he said.

NXP’s Johnson also weighed in on the edge vs cloud debate with the opinion that both will be required for automotive. “For automotive to do what it needs to, both need to evolve,” he said. “In the classic sense of automotive, the vehicle would be the edge, which needs access to the cloud frequently, or non-stop. I think it’s important to remember that the edge values efficiency. So, efficiency, power, performance and cost are all very important to make this happen,” he added.

Automotive security adds another degree of complexity. “If you think of something that’s always connected, and has the ability to make decisions and control itself, the security risk is very high. And it’s not just to the consumer of the vehicle, but also to the company itself that’s providing these vehicles. It’s actually foundational that the level of safety, security, reliability, that we put into these things is as good as it can be,” Johnson said.

Georgiopoulos said a new compute model is required for A.I. “It’s important to understand that the traditional workloads that we all knew and loved for the last forty years, don’t apply with A.I. They are completely new workloads that require very different type of capabilities from the machines that you build,” he said.  “With these new kind of workloads, you’re going to require not only new architectures, you’re going to require new system level design. And you’re going to require new capabilities like frameworks.” He said TensorFlow, which is an open-source software library for machine intelligence originally developed by researchers and engineers working on the Google Brain Team, seems to be the biggest framework right now. “Google made it public for only one very good reason. The TPU that they have created runs TensorFlow better than any other hardware around. Well, guess what? If you write something on TensorFlow, you want to go to the Google backend to run it, because you know you’re going to get great results. These kind of architectures are getting created right now that we’re going to see a lot more of,” he said.

Georgiopoulos said this “architecture war” is by no means over. “There are no standardized ways by which you’re going to do things. There is no one language that everybody’s going to use for these things. It’s going to develop, and it’s going to develop over the next five years. Then we’ll figure out which architecture may be prevalent or not. But right now, it’s an open space,” he said.

IBM’s Khare weighed in on how transistors and memory will need to evolve to meet the demands of new AI computer architectures: “For artificial intelligence in our world, we have to think very differently. This is an inflection, but this is the kind of inflection that world has not seen for last 60 years.” He said the world has gone from the tabulating system era (1900 to 1940) to the programmable system era of the 1950s, which we are still using. “We are entering the era of what we call cognitive computing, which we believe started in 2011, when IBM first demonstrated artificial intelligence through our Watson System, which played Jeopardy,” he said.

Khare said “we are still using the technology of programmable systems, such as logic, memory, the traditional way of thinking, and applying it to AI, because that’s the best we’ve got.”

AI needs more innovation at all levels, Khare said. “You have to think about systems level optimization, chip design level optimization, device level optimization, and eventually materials level optimization,” he said.  “The artificial workloads that are coming out are very different. They do not require the traditional way of thinking — they require the way the brain thinks. These are the brain inspired systems that will start to evolve.”

Khare believes analog compute might hold the answer. “Analog compute is where compute started many, many years ago. It was never adopted because the precision was not high enough, so there were a lot of errors. But the brain doesn’t think in 32 bits, our brain thinks analog, right? So we have to bring those technologies to the forefront,” he said. “In research at IBM we can see that there could be several orders of magnitude reduction in power, or improvement in efficiency that’s possible by introducing some of those concepts, which are more brain inspired.”

EUV Leads the Next Generation Litho Race

Friday, October 20th, 2017

As previously reported by Solid State Technology, the eBeam Initiative recently reported the results of its lithography perceptions and mask-makers’ surveys. After the survey results were presented at the 2017 Photomask Technology Symposium, Aki Fujimura, CEO of D2S, the managing company sponsor of the eBeam Initiative, spoke with Solid State Technology about the survey results and current challenges in advanced lithography.

The Figure shows the consensus opinions of 75 luminaries from 40 companies who provided inputs to the perceptions survey regarding which Next-Generation Lithography (NGL) technologies will be used in volume manufacturing over the next few years. “We don’t want to interpret these data too much, but at the same time the information should be representative because people will be making business decisions based on this,” said Fujimura.

Figure 1

Confidence in Extreme Ultra-Violet (EUV) lithography is now strong, with 79 percent of respondents predicting it will be used in HVM by the end of 2021, a huge increase from 33 percent just three years ago. Another indication of aggregate confidence in EUVL technology readiness is that only 7 percent of respondents thought that “actinic mask inspection” would never be used in manufacturing, significantly reduced from 22 percent just last year.

“Asking luminaries is very meaningful, and obviously the answers are highly correlated with where the industry will be spending on technologies,” explained Fujimura. “The predictability of these sorts of things is very high. In particular, in an industry with confidentiality issues, what people ‘think’ is going to happen typically reflects what they know but cannot say.”

Fujimura sees EUVL technology receiving most of the investment for next-generation lithography (NGL), “Because EUV is a universal technology. Whether you’re a memory or logic maker it’s useful for all applications. Whereas nano-imprint is only useful for defect-resistant designs like memory.”

Vivek Bakshi’s recent blog post details the current status of EUVL technology evolution. With practical limits on the source power, many organizations are looking at ways to increase the sensitivity of photoresist so as to increase the throughput of EUVL processes. Unfortunately, the physics and chemistry of photoresists means that there are inherent trade-offs between the best Resolution and Line-width-roughness (LWR) and Sensitivity, termed the “RLS triangle”.

The Critical Gases and Materials Group (CGMG) of SEMI held a recent webinar in which Greg MacIntyre, Imec’s director of patterning, discussed the inherent tradeoffs within the RLS triangle when attempting to create the smallest possible features with a single lithographic exposure. Since the resist sensitivity directly correlates to the maximum throughput of the lithographic exposure tool, there are various tricks used to improve the resolution and roughness at a given sensitivity:  optimized underlayer reflections for exposures, smoothing materials for post-develop, and hard-masks for etch integration.
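One figure of merit often quoted in the resist literature for this tradeoff is the so-called Z-factor, the product of resolution cubed, LWR squared, and dose-to-size; the short sketch below applies it to two hypothetical resists. Both the choice of this particular metric and the resist numbers are illustrative, not taken from the webinar or the surveys.

# Illustrative RLS-tradeoff comparison using the commonly cited "Z-factor":
# Z = resolution^3 * LWR^2 * dose-to-size (lower is better).  The two resist
# examples are hypothetical.
def z_factor(resolution_nm, lwr_nm, dose_mj_cm2):
    return resolution_nm ** 3 * lwr_nm ** 2 * dose_mj_cm2

fast_rough = z_factor(resolution_nm=16, lwr_nm=5.0, dose_mj_cm2=20)
slow_smooth = z_factor(resolution_nm=16, lwr_nm=3.5, dose_mj_cm2=45)
print(f"fast but rough resist : Z = {fast_rough:,.0f} nm^5*mJ/cm^2")
print(f"slow but smooth resist: Z = {slow_smooth:,.0f} nm^5*mJ/cm^2")
# Nearly equal Z values are the triangle in action: the throughput (dose)
# penalty of the slower resist buys roughly the LWR improvement it predicts.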

Mask-Making Metrics

The business dynamics of making photomasks provides leading indicators of the IC fab industry’s technology directions. A lot of work has been devoted to keeping mask write times consistent compared with last year, while the average complexity of masks continues to increase with Reticle Enhancement Technologies (RET) to extend the resolution of optical lithography. Even with write times equal, the average mask turn-around time (TAT) is significantly greater for more critical layers, approaching 12 days for 7nm- to 10nm-node masks.

“A lot of the increase in mask TAT is coming from the data-preparation time,” explained Fujimura. “This is important for the economics and the logistics of mask shops.” The weighted average of mask data preparation time reported in the survey is significantly greater for finer masks, exceeding 21 hours for 7nm- to 10nm-node masks. Data per mask continues to increase; the most dense mask now averages 0.94 TB, and the single most dense mask takes 2.2 TB.

—E.K.

Embedded FPGAs Offer SoC Flexibility

Wednesday, October 4th, 2017

By Dave Lammers, Contributing Editor

It was back in 1985 that Ross Freeman invented the FPGA, gaining a fundamental patent (#4,870,302) that promised engineers the ability to use “open gates” that could be “programmed to add new functionality, adapt to changing standards or specifications, and make last-minute design changes.”

Freeman, a co-founder of Xilinx, died in 1989, too soon to see the emerging development of embedded field-programmable gate arrays (eFPGAs). The IP cores offer system-on-chip (SoC) designers an ability to create hardware accelerators and to support changing algorithms. Proponents claim the approach provides advantages to artificial intelligence (AI) processors, automotive ICs, and the SoCs used in data centers, software-defined networks, 5G wireless, encryption, and other emerging applications.

With mask costs escalating rapidly, eFPGAs offer a way to customize SoCs without spinning new silicon. While eFPGAs cannot compete with custom silicon in terms of die area, the flexibility, speed, and power consumption are proving attractive.

Semico Research analyst Rich Wawrzyniak, who tracks the SoC market, said he considers eFPGAs to be “a very profound development in the industry, a capability that is going to get used in lots of places that we haven’t even imagined yet.”

While Altera (now owned by Intel) and Xilinx have not ventured publicly into the embedded space, Wawrzyniak noted that a lively bunch of competitors are moving to offer eFPGA intellectual property (IP) cores.

Multiple competitors enter eFPGA field

Achronix Semiconductor (Santa Clara, Calif.) has branched out from its early base in stand-alone FPGAs, using Intel’s 22nm process, to an IP model. It is emphasizing its embeddable Speedcore eFPGAs that can be added to SoCs using TSMC’s 16FF foundry process. 7nm IP cores are under development.

Efinix Inc. (Santa Clara, Calif.) recently rolled out its Efinix Programmable Accelerator (EPA) technology.

Efinix (efinixinc.com) claims that its programmable arrays can either compete with established stand-alone FPGAs on performance at half the power, or be added as IP cores to SoCs. The Efinix Programmable Accelerator technology can provide a look-up table (LUT)-based logic cell or a routing switch, among other functions, the company said.
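For readers less familiar with programmable logic, the sketch below illustrates the generic look-up-table idea behind any FPGA logic cell; it is a conceptual toy, not a model of Efinix’s cell or routing switch.

# Generic illustration of the LUT concept in any FPGA logic cell (not a model
# of Efinix's specific cell): a k-input lookup table stores 2**k bits and can
# therefore realize any k-input Boolean function.
def make_lut(truth_table_bits):
    """truth_table_bits[i] is the output when the inputs, read as a binary
    number, equal i.  Reprogramming the cell = rewriting this table."""
    def lut(*inputs):
        index = 0
        for bit in inputs:
            index = (index << 1) | (bit & 1)
        return truth_table_bits[index]
    return lut

# "Program" a 2-input cell as XOR, then as NAND, without touching any wiring.
xor_cell = make_lut([0, 1, 1, 0])
nand_cell = make_lut([1, 1, 1, 0])
print([xor_cell(a, b) for a in (0, 1) for b in (0, 1)])   # -> [0, 1, 1, 0]
print([nand_cell(a, b) for a in (0, 1) for b in (0, 1)])  # -> [1, 1, 1, 0]

In silicon the truth-table bits sit in configuration memory, so changing the function is just a matter of reloading a bitstream, which is the flexibility eFPGA IP brings to an otherwise fixed SoC.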

Efinix was founded by several managers with engineering experience at Altera Corp. at various times in their careers — Sammy Cheung, Tony Ngai, Jay Schleicher, and Kar Keng Chua — and has financial backing from two Malaysia-based investment funds.

Flex Logix Technologies (Mountain View, Calif.; www.flex-logix.com), an eFPGA startup founded in 2014, recently gained formal admittance to TSMC’s IP Alliance program. It supports a wide array of foundry processes, providing embedded FPGA IP and software tools for TSMC’s 16FFC/FF+, 28HPM/HPC, and 40ULP/LP.

Flex Logix supports several process generations at foundry TSMC. The 16nm test chip is being evaluated. (Source: Flex Logix)

QuickLogic adds SMIC to foundry roster

Menta  (http://www.menta-efpga.com/) is another competitor in the FPGA space. Based in Montpellier, France, Menta is a privately held company founded a decade ago that offers programmable logic IP targeted to both GLOBALFOUNDRIES (14LPP) and TSMC (28HPM and 28HPC+) processes.

Menta offers either pre-configured IP blocks, or custom IPs for SoCs or ASICs. The French company supports its IP with a tool set, called Origami, which generates a bitstream from RTL, including synthesis. Menta said it has fielded four generations of products that are in use by customers now “for meeting the sometimes conflicting requirements of changing standards, security updates and shrinking time-to-market windows of mobile and consumer products, IoT devices, networking and automotive ICs.”

QuickLogic, a Silicon Valley stalwart founded in 1988, also is expanding its eFPGA capability. In mid-September, QuickLogic (Sunnyvale, Calif.) (quicklogic.com) announced that its eFPGA IP can now be used with the 40nm low-leakage process at Shanghai-based Semiconductor Manufacturing International Corp. (SMIC). QuickLogic also offers its eFPGA technology on several of the mature GLOBALFOUNDRIES processes, and is participating in the foundry’s 22FDX IP program.

Wawrzyniak, who tracks the SoC market for Semico Research, said an important market is artificial intelligence, using eFPGA gates to add a flexible convolutional neural network (CNN) capability. Indeed, Flex Logix said one of its earliest adopters is an AI research group at Harvard University that is developing a programmable AI processor.

A seminal capability

The U.S. government’s Defense Advanced Research Projects Agency (DARPA) also has supported Flex Logix by taking a license, endorsing an eFPGA capability for defense and aerospace ICs used by the U.S. military.

With security being such a concern for the Internet of Things edge devices market, Wawrzyniak said eFPGA gates could be used to secure IoT devices against hackers, a potentially large market.

“The major use is in apps and instances where people need some programmability. This is a seminal, basic capability. How many times have you heard someone say, ‘I wish I could put a little bit of programmability into my SoC.’ People are going to take this and run with it in ways we can’t imagine,” he said.

Bob Wheeler, networking analyst at The Linley Group, said the intellectual property (IP) model makes sense for startups. Achronix, during the dozen years it developed and then fielded its standalone FPGAs, “was on a very ambitious road, competing with Altera and Xilinx. Achronix went down the road of developing parts, and that is a tall order.”

While the cost of running an IP company is less than fielding stand-alone parts, Wheeler said “People don’t appreciate the cost of developing the software tools, to program the FPGA and configure the IP.” The compiler, in particular, is a key challenge facing any FPGA vendor.

Wheeler said Achronix (https://www.achronix.com/) has gained credibility for its tools, including its compiler, after fielding its high-performance discrete FPGAs in 2016, made on Intel’s 22nm process.

Achronix offers Speedcore eFPGAs, based on the same architecture as its standalone FPGAs. (Source: Achronix Semiconductor)

And Wheeler cautioned that IP companies face the business challenge of getting a fair return on their development efforts, especially for low-cost IoT solutions where companies maintain tight budgets for the IP that they license.

Achronix earlier this year announced that its 2017 revenues will exceed $100 million, based on a seven-times increase in sales of its Speedster 22i FPGA family, as well as licensing of its Speedcore embedded IP products, targeted to TSMC’s leading-edge 16 nm node, with 7nm process technology for design starts beginning in the second half of this year. Achronix revenues “began to significantly ramp in 2016 and the company reached profitability in Q1 2017,” said CEO Robert Blake.

Escalating mask costs

Flex Logix CEO Geoff Tate

Geoff Tate, now the CEO of Flex Logix Technologies, earlier headed up Rambus for 15 years. Tate said Flex Logix (www.flex-logix.com) uses a hierarchical interconnect, developed by co-founder Cheng Wang and others while he earned his doctorate at UCLA. The innovative interconnect approach garnered the Lewis Outstanding Paper award for Wang and three co-authors at the 2014 International Solid-State Circuits Conference (ISSCC), and attracted attention from venture capitalists at Lux Ventures and Eclipse Ventures.

Tate said one of those VCs came to him one day and asked for an evaluation of Wang & Co.’s technology. Tate met with Wang, a native of Shanghai, and found him to be anything but a prima donna with a great idea. “He seemed very motivated, not just an R&D guy.”

While most FPGAs use a mesh interconnect in an X-Y grid of wires, Wang had come up with a hierarchical interconnect that provided high density without sacrificing performance, and proved its potential with prototype chips at UCLA.

“Chips need to be more flexible and adaptable. FPGAs give you another level of programmability,” Tate noted.

Meanwhile, potential customers in networking, data centers, and other markets were looking for ways to make their designs more flexible. An embedded FPGA block could help customers adapt a design to new wireless and networking protocols. Since mask costs were escalating, to an estimated $5 million for 16nm designs and more than double that for 7nm SoCs, customers had another reason to risk working with a startup.

TSMC has supported Flex Logix, in mid-September awarding the company the TSMC Open Innovation Platform’s Partner of the Year Award for 2017 in the category of New IP.

“Our lead customer has a working chip, with embedded FPGA on it. They are in the process of debugging the rest of their chip. Overall, we are still in the early stages of market development,” Tate said, explaining that semiconductor companies are understandably risk-averse when it comes to their IP choices.

Asked about the status of its 16nm test chip, Tate said “the silicon is out of the fab. The next step is packaging, then evaluation board assembly.  We should be doing validation testing starting in late September.”

Potential customers are in the process of sending engineers to Flex Logix to look at metrics of the largest 16nm arrays, such as IR drop, test vectors, switching simulations, and the like. “They are making sure we are testing in a thorough fashion. If we screw them over, they’ll tell everybody, so we have got to get it right the first time,” Tate said.

GlobalFoundries Turns the Corner

Friday, September 29th, 2017

By David Lammers

Claiming that GlobalFoundries “is a different company than two years ago,” executives said the foundry’s strategies are starting to pay off in emerging markets such as 5G wireless, automotive, and high-performance processors.

CEO Sanjay Jha, speaking at the GlobalFoundries Technology Conference, held in Santa Clara, Calif. recently, said that to succeed in the foundry segment requires that customers “have confidence that they are going to get their wafers at the right time and with the right quality. That has taken time, but we are there.”

CEO Sanjay Jha: “differentiated” processes are key.

Innovation is another essential requirement for success, Jha said, arguing that R&D dollars must include spending on “differentiated” approaches. Alain Mutricy, senior vice president of product development, acknowledged that only recently have customers turned to GlobalFoundries as more than just a second-source to TSMC. For the first few years, “most companies used us to keep (wafer) prices down,” he said, while noting GlobalFoundries bears some responsibility for that by not investing nearly enough, early on, in IP libraries and EDA tool development.

Founded in March 2009 as a spinout of the manufacturing arm of Advanced Micro Devices, GlobalFoundries soon expanded when its Abu Dhabi-based owner acquired Singapore’s Chartered Semiconductor in January 2010, and grew further through the July 2015 acquisition of IBM Microelectronics. It is now engaged in building what Jha said will be the largest wafer fab in China, in Chengdu, capable of processing a million wafers a year. The Chengdu fab, operated by GlobalFoundries but with investments from the local government, will begin with 180nm and 130nm products now fabbed in Singapore, and then add 22FDX IC production to meet demand from Chinese customers.

While the road to profitability has been a hard one, Len Jelinek, chief technology analyst at IHS Markit, said GlobalFoundries is now “cash flow positive,” with the flagship Malta, N.Y. fab “essentially full” at an estimated 40,000 wafer starts per month. That is a big turnaround from four years ago, he said.

Malta fab’s capacity doubling

Nathan Brookwood, longtime microprocessor watcher at Insight64, said while AMD no longer has an ownership stake in GlobalFoundries, it does have wafer supply agreements with the foundry. The fact that AMD’s Zen-based microprocessors and newest graphics chips are all made on the 14nm FinFET process at Fab 8 “means that AMD is now actually using the wafer supply it is committed to taking. That helps both companies.”

Andrea Lati, director of market research at VLSI Research, said while TSMC “is clearly a very well-run company that is marching ahead,” GlobalFoundries also is making progress. Again, AMD’s success is a large part of that, Lati said, noting that “AMD is definitely doing very well for the last couple of years, and has good prospects, along with Nvidia, in the graphics side.”

In a telephone interview, Tom Caulfield, senior vice president and general manager of the GlobalFoundries’ Malta fab, said “we are continually adding capacity in 14nm as we get a window on to the demand from our customers. In 2016 and 2017 we made additional investments.”

While not putting a specific number on Malta’s capacity, Caulfield said that if the beginning of 2016 is taken as a baseline, by the end of 2018 the wafer capacity at Malta’s Fab 8 will have more than doubled.

“AMD refreshed its entire portfolio with 14nm, exclusively made here at Malta, and we are chasing more demand than we planned on. AMD’s success is a proxy for our success. We are in this hand in hand,” Caulfield said.

Asked if a new fab was being considered at Malta, Caulfield said “At some point we will need more brick and mortar. Eventually we will run out of space, but we still have some time in front of us.”

FDX in the wings

Scotten Jones, who runs a semiconductor cost modeling consultancy, IC Knowledge LLC, said competition is also heating up at the 28nm node, once controlled almost exclusively by TSMC. As GlobalFoundries, Samsung — and more recently, SMIC and UMC — have ironed out their own 28nm processes, the profitability of TSMC’s 28nm business has tightened, Jones said.

The competitive spotlight is now on the 22FDX SOI-based process developed by GlobalFoundries, buttressed by an embedded 22nm eMRAM capability developed along with MRAM pioneer Everspin Technologies.

Gary Patton, chief technology officer at GlobalFoundries, said the SOI-based 22nm node supports forward biasing, while the 12nm FDX technology will support both forward and back-biasing, to either boost performance or conserve power. Patton said the 12FDX process will provide 26 percent more performance and 47 percent less power consumption than the 22FDX process, with prototypes expected in the second half of 2018 and volume production beginning in 2019.

CTO Gary Patton: Technology development boosted by IBM engineers.

Patton said “maybe we haven’t done enough” to explain the differences between the 14nm FinFET technology and the SOI-based FDX technologies. The FinFET transistors have enough drive current to drive signals across fairly large die sizes, while the FDX technology is best suited to die sizes of 150 sq. mm and smaller, he said.

Jones said his cost analysis shows that the design costs for the planar FDX chips are much lower than for FinFETs, which require “some fairly expensive EDA tools.” That combines with a much smaller mask count, due to reduced multi-patterning.

Patton said the 22FDX designs require 40 percent fewer masks than comparable 14nm FinFET-based designs. “With the SOI technology customers have the option of using body biasing, which has been used in the industry for the past three or four years. We can operate at .4 Volts, and customers are putting RF on the same chip as digital.”
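The appeal of 0.4V operation follows from the standard switching-power relation, in which dynamic power scales as C x V^2 x f. The sketch below compares the quoted 0.4V operating point against an assumed 0.8V baseline; the capacitance, frequency, and activity-factor values are placeholders.

# Dynamic (switching) power scales as alpha * C * V^2 * f.  Only the 0.4V
# operating point comes from the article; the 0.8V baseline, capacitance,
# frequency, and activity factor are placeholder assumptions.
def dynamic_power_mw(c_switched_pf, vdd_v, freq_mhz, activity=0.1):
    watts = activity * (c_switched_pf * 1e-12) * vdd_v ** 2 * (freq_mhz * 1e6)
    return watts * 1e3

baseline = dynamic_power_mw(c_switched_pf=500, vdd_v=0.8, freq_mhz=500)
low_vdd = dynamic_power_mw(c_switched_pf=500, vdd_v=0.4, freq_mhz=500)
print(f"0.8V: {baseline:.1f} mW   0.4V: {low_vdd:.1f} mW   "
      f"({baseline / low_vdd:.0f}x lower switching power at equal frequency)")
# Lower Vdd also slows the transistors; the forward/back body biasing described
# above is the FDX knob for trading that speed loss against leakage and power.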

Asked if he thought the FDX processes would gain traction in the marketplace, Jones answered in the affirmative. “I think it will find its place. It is still early. These kinds of new technologies take time to get established,” Jones said.

Jha said two companies have developed products based on 22FDX: Dream Chip Technologies, an advanced driver assistance system (ADAS) supplier, which last February said it has completed a computer vision SoC based on the 22FDX process, and Ineda Systems, which seeks to integrate RF and digital capabilities on its 22FDX-based processors, targeted at the Internet of Things market.

Mutricy said 70 companies purchased the 22FDX foundation IP provided by Invecas for the 22FDX process, with 18 tapeouts on track for production next year.

Patton said the addition of 500 technologists from IBM’s microelectronics division has aided the technology development operation. “GlobalFoundries is absolutely a different company than it was just two years ago,” Patton said at the GTC event.

Silicon Photonics Technology Developments

Thursday, April 6th, 2017

By Ed Korczynski, Sr. Technical Editor

With rapidly increasing use of “Cloud” client/server computing, there is motivation to find cost savings in the Cloud hardware, which leads to R&D of improved photonics chips. Silicon photonics chips could reduce hardware costs compared to existing solutions based on indium-phosphide (InP) compound semiconductors, but only with improved devices and integration schemes. Now MIT researchers working within the US AIM Photonics program have shown important new silicon photonics properties. Meanwhile, GlobalFoundries has found a way to allow for automated passive alignment of optical fibers to silicon chips, and makes chips on 300mm silicon wafers for improved performance at lower cost.

In a recent issue of Nature Photonics, MIT researchers present “Electric field-induced second-order nonlinear optical effects in silicon waveguides.” They also report prototypes of two different silicon devices that exploit those nonlinearities: a modulator, which encodes data onto an optical beam, and a frequency doubler, a component vital to the development of lasers that can be precisely tuned to a range of different frequencies.

This work happened within the American Institute for Manufacturing Integrated Photonics (AIM Photonics) program, which brought government, industry, and academia together in R&D of photonics to better position the U.S. relative to global competition. Federal funding of $110 million was combined with some $500 million from AIM Photonics’ consortium of state and local governments, manufacturing firms, universities, community colleges, and nonprofit organizations across the country. Michael Watts, an associate professor of electrical engineering and computer science at MIT, has led the technological innovation in silicon photonics.

“Now you can build a phase modulator that is not dependent on the free-carrier effect in silicon,” says Michael Watts in an online interview. “The benefit there is that the free-carrier effect in silicon always has a phase and amplitude coupling. So whenever you change the carrier concentration, you’re changing both the phase and the amplitude of the wave that’s passing through it. With second-order nonlinearity, you break that coupling, so you can have a pure phase modulator. That’s important for a lot of applications.”

The first author on the new paper is Erman Timurdogan, who completed his PhD at MIT last year and is now at the silicon-photonics company Analog Photonics. The frequency doubler uses regions of p- and n-doped silicon arranged in regularly spaced bands perpendicular to an undoped silicon waveguide. The spacing between bands is tuned to a specific wavelength of light, such that a voltage across them doubles the frequency of the optical signal passing through. Frequency doublers can be used as precise on-chip optical clocks and amplifiers, and as terahertz radiation sources for security applications.

GlobalFoundries’ Packaging Prowess

At the start of the AIM Photonics program in 2015, MIT researchers had demonstrated light detectors built from efficient ring resonators that could reduce the energy cost of transmitting a bit of information down to about a picojoule, or one-tenth of what all-electronic chips require. Jagdeep Shah, a researcher at the U.S. Department of Defense’s Institute for Defense Analyses who initiated the program that sponsored the work, said, “I think that the GlobalFoundries process was an industry-standard 45-nanometer design-rule process.”
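To translate that per-bit energy into link power, the sketch below simply multiplies energy per bit by an assumed 25 Gb/s lane rate; only the roughly 1 pJ/bit figure and the ten-to-one comparison come from the paragraph above.

# Link power = energy per bit x bit rate.  The 25 Gb/s lane rate is an
# assumed example; the ~1 pJ/bit and ~10x figures echo the text above.
def link_power_mw(energy_pj_per_bit, rate_gbps):
    return energy_pj_per_bit * 1e-12 * rate_gbps * 1e9 * 1e3   # W -> mW

print(f"photonic link (~1 pJ/bit):    {link_power_mw(1.0, 25):.0f} mW per 25 Gb/s lane")
print(f"electrical link (~10 pJ/bit): {link_power_mw(10.0, 25):.0f} mW per 25 Gb/s lane")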

The Figure shows that researchers at IBM developed an automated method to assemble twelve optical fibers to a silicon chip while the fibers are dark, and GlobalFoundries chips can now be paired with this assembly technology. Because the micron-scale fibers must be aligned with nanometer precision, the default industry practice has been the expensive active alignment of lit fibers. Leveraging the company’s work for Micro-Electro-Mechanical Systems (MEMS) customers, GlobalFoundries uses an automated pick-and-place tool to push ribbons of multiple fibers into MEMS grooves for alignment. Ted Letavic, GlobalFoundries’ senior fellow, said the edge-coupling process is in production for a telecommunications application. According to Letavic, silicon photonics may find its first applications in very-high-bandwidth, mid- to long-distance transmission (30 meters to 80 kilometers), where spectral efficiency is the key driver.

FIGURE: GlobalFoundries chips can be combined with IBM’s automated method to assemble 12 optical fibers to a silicon photonics chip. (Source: IBM, Tymon Barwicz et al.)

GlobalFoundries has now transferred its monolithic process from 200mm to 300mm-diameter silicon wafers, to achieve both cost reduction and improved device performance. The 300mm fab lines feature higher-NA immersion lithography tools, which provide better overlay and line-width roughness (LWR). Because of the extreme sensitivity of optical coupling to the physical geometry of light-guides, improving the patterning fidelity by nanometers can reduce transmission losses by 3X.
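
One hedged way to see why a few nanometers of fidelity matters so much: sidewall-scattering loss in a waveguide scales roughly with the square of the roughness amplitude (Payne–Lacey-type models), so a 3X loss reduction is consistent with cutting roughness by about the square root of three. The roughness values below are illustrative assumptions, not GlobalFoundries data:

\alpha_{\mathrm{scatter}} \propto \sigma^{2} \quad\Rightarrow\quad \left(\tfrac{3\,\mathrm{nm}}{1.7\,\mathrm{nm}}\right)^{2} \approx 3.1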

—E.K.

SiPs Simplify Wireless IoT Design

Thursday, February 16th, 2017


By Dave Lammers, Contributing Editor

It takes a range of skills to create a successful business in the Internet of Things space, where chips sell for a few dollars and competition is intense. Circuit design and software support for multiple wireless standards must combine with manufacturing capabilities.

Daniel Cooley, senior vice president of IoT products at Silicon Labs

Daniel Cooley, senior vice president and general manager of IoT products at Silicon Labs (Austin, Tx.), said three trends are impacting the manufacture of IoT end-node devices, which usually combine an MCU, an RF transceiver, and embedded flash memory.

“There is an explosion in the amount of memory on embedded SoCs, both RAM and non-volatile memory,” said Cooley. Today’s multi-protocol wireless software stacks, graphics processing, and security requirements routinely double or quadruple the memory sizes of the past.

Secondly, while IoT edge devices continue to use trailing-edge technologies, they are nonetheless moving to more advanced nodes. However, that movement is partially gated by the availability of embedded flash.

Thirdly, pre-certified system-in-package (SiP) solutions, running a proven software stack, “are becoming much more important,” Cooley said. These SiPs typically encapsulate an MCU, an integrated antenna and shielding, power management, crystal oscillators, and inductors and capacitors. While Silicon Labs has been shipping multi-chip modules for many years, SiPs are gaining favor in part because they can be quickly deployed by engineers with relatively little expertise in wireless development, he said.

“Personally, I believe that very advanced SiPs increasingly will be standard products, not anything exotic. They are a complete solution, like a PCB module, but encased with a molding compound. The SiP manufacturers are becoming very sophisticated, and we are ready to take that technology and apply it more broadly,” he said.

For example, Silicon Labs recently introduced a Bluetooth SiP module measuring 6.5 by 6.5 mm, designed for use in sports and fitness wearables, smartwatches, personal medical devices, wireless sensor nodes, and other space-constrained connected devices.

“We have built multi-chip packages – those go back to the first products of the company – but we haven’t done a fully certified module with a built-in antenna until now. A SiP module simplifies the go-to-market process. Customers can just put it down on a PCB and connect power and ground. Of course, they can attach other chips with the built-in interfaces, but they don’t need anything else to make the Bluetooth system work,” Cooley said.

“Designing with a certified SiP module supports better data throughput, and improves reliability as well,” Cooley said. The SiP approach is especially beneficial for end-node customers that “haven’t gone through the process of launching a wireless product in the market,” he said.

System-in-package (SiP) solutions ease the design cycle for engineers using Bluetooth and other low-energy wireless networks. (Source: Silicon Laboratories)

The SiP packages a wireless SoC with an antenna and multiple other components in a small footprint.

Control by voice

The BGM12x Blue Gecko SiP is aimed at Bluetooth-enabled applications, a genre that is rapidly expanding as ecosystems like the Amazon Echo, Apple HomeKit, and Google Home proliferate.

The BGM12x Blue Gecko SiP is aimed at Bluetooth-enabled applications

Matt Maupin is Silicon Labs’ product marketing manager for mesh networking products, which include SoCs and modules for low-power Zigbee and Thread wireless connectivity. Asked how a home lighting system, for example, might be connected to one of the home “ecosystems” now being sold by Amazon, Apple, Google, Nest, and others, Maupin said the major lighting suppliers, such as OSRAM, Philips, and others, often use Zigbee for lighting, rather than Bluetooth, because of Zigbee’s mesh networking capability. (Some manufacturers use Bluetooth low energy (BLE) for point-to-point control from a phone.)

“The ability for a device to connect directly relies on the same protocols being used. Google and Amazon products do not support Zigbee or Thread connectivity at this time,” Maupin explained.

Normally, these lighting devices are connected to a hub. For example, Amazon’s Echo and Google’s Home “both control the Philips lights through the Philips hub. Communication happens over the Ethernet network (wireless or wired depending on the hub).  The Philips hub also supports HomeKit so that will work as well,” he said.

Maupin’s home configuration is set up so the Philips lights connect via Zigbee to the Philips hub, which connects to an Ethernet network. An Amazon Echo is connected to the same home network by Wi-Fi.

“I have the Philips devices at home configured via their app. For example, I have lights in my bedroom configured differently for me and my wife. With voice commands, I can control these lamps with different commands such as ‘Alexa, turn off Matt’s lamp,’ or ‘Alexa, turn off the bedroom lamps.’”

Alexa communicates wirelessly to the home network, which reaches the Philips hub (sold under the brand name Philips Hue Bridge) via Ethernet, and the hub then converts the command to Zigbee to control the actual lamps. While that sounds complicated, Maupin said, “to consumers, it is just magic.”
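
For readers curious about the last hop in that chain, below is a minimal Python sketch of driving the Hue Bridge’s local REST interface directly; the bridge address, API username, and lamp ID are placeholder assumptions, and the voice assistants reach the bridge through Philips’ cloud service rather than this local path.

import requests

# Minimal illustrative sketch of the Hue Bridge's local REST API (v1-style paths).
# BRIDGE_IP, API_USERNAME, and LIGHT_ID are placeholder assumptions.
BRIDGE_IP = "192.168.1.10"
API_USERNAME = "example-authorized-user"
LIGHT_ID = 1

def set_lamp(on):
    """Send an on/off command; the bridge translates it to Zigbee for the lamp."""
    url = "http://%s/api/%s/lights/%d/state" % (BRIDGE_IP, API_USERNAME, LIGHT_ID)
    resp = requests.put(url, json={"on": on}, timeout=5)
    resp.raise_for_status()
    return resp.json()  # the bridge echoes back the state changes it applied

print(set_lamp(False))  # roughly what "Alexa, turn off Matt's lamp" resolves to at the bridge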

A divided IoT market

Sandeep Kumar, senior vice president of worldwide operations

IoT systems can be divided into the high-performance number crunchers that deal with massive amounts of data, and the “end-node” products that drive a much different set of requirements. Sandeep Kumar, senior vice president of worldwide operations at Silicon Labs, said RF, ultra-low-power processes, and embedded NVM are essential for many end-node applications, and it can take foundries several years to develop them after the base technology becomes available.

“40nm is an old technology node for the big digital companies. For IoT end nodes where we need a cost-effective RF process with ultra-low leakage and embedded NVM, the state of the art is 55nm; 40 nm is just getting ready,” Kumar said.

Embedded flash, or any NVM, takes as long as it does because, most often, it is developed not by the foundries themselves but by independent companies such as Silicon Storage Technology; the foundry implements this IP after it has developed the base process. (SST has been part of Microchip Technology since 2010.) Typically, the eFlash capability lags by a few years for high-volume uses, and Kumar notes that “the 40nm eFlash is still not in high-volume production for end-node devices.”

Similarly, the ultra-low-leakage versions of a technology node take time and equipment investments, as well as cooperation from IP partners. Foundry customers and the fabless design houses must requalify for the low-leakage processes. “All the models change and simulations have to be redone,” Kumar said.

“We need low-leakage for the end applications that run on a button cell (battery), so that a security door or motion sensor, for example, can run for five to seven years. After the base technology is developed, it typically takes at least three years. If 40nm was available several years ago, the ultra-low-leakage process is just becoming available now.

“And some foundries may decide not to do ultra-low-leakage on certain technology nodes. It is a big capital and R&D investment to do ultra-low-leakage. Foundries have to make choices, and we have to manage that,” Kumar said.
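
A rough current budget shows why microamp-level leakage dominates these designs; the coin-cell capacity below is a generic assumption rather than a Silicon Labs figure:

# Average-current budget for a coin-cell sensor node (illustrative numbers only).
CELL_CAPACITY_MAH = 225.0      # typical CR2032 coin cell, assumed
TARGET_YEARS = 7.0             # upper end of the five-to-seven-year target quoted above
HOURS = TARGET_YEARS * 365 * 24

average_current_ua = CELL_CAPACITY_MAH * 1000.0 / HOURS   # convert mAh to an average-uA budget
print("Average current budget: %.1f uA" % average_current_ua)   # ~3.7 uA

# Sleep-mode leakage must sit well below this budget to leave headroom for
# periodic radio bursts, which is why ultra-low-leakage processes matter here.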

The majority of Silicon Labs’ IoT product volume is in 180nm, while other non-IoT products use a 55nm process. The line of Blue Gecko wireless SoCs currently is on 90nm, made in 300mm fabs, while new designs are headed toward more advanced process nodes.

Because 180nm fabs are being used for MEMS, sensors and other analog-intensive, high-volume products, there is still “somewhat of a shortage” of 180nm wafers, Kumar said, though the situation is improving. “It has gotten better because TSMC and other foundries have added capacity, having heard from several customers that the 180nm node is where they are going to stay, or at least stay longer than they expected. While the foundries have added equipment and capital, it is still quite tight. I am sure the big MEMS and sensor companies are perfectly happy with 180nm,” Kumar said.

A testing advantage

IoT is a broad-based market with thousands of customers and a lot of small-volume customizations. Over the past decade Silicon Labs has deployed a proprietary ultra-low-cost tester, developed in-house and used in internal back-end operations in Austin and Singapore, at assembly and test subcontractors, and at a few outside module makers as well. The Silicon Labs tester is much more cost-effective than commercially available testers, an important advantage in a market where a wireless MCU can sell in small volumes to a large number of customers for just a few dollars.

“Testing adds costs, and it is a critical part of our strategy. We use our internally developed tester for our broad-based products, and it is effective at managing costs,” Kumar said.

High-NA EUV Lithography Investment

Monday, November 28th, 2016


By Ed Korczynski, Sr. Technical Editor

As covered in a recent press release, leading lithography OEM ASML invested EUR 1 billion in cash to buy 24.9% of ZEISS subsidiary Carl Zeiss SMT, and committed to spend EUR ~760 million over the next 6 years on capital expenditures and R&D of an entirely new high numerical aperture (NA) extreme ultra-violet (EUV) lithography tool. Targeting NA >0.5 to be able to print 8 nm half-pitch features, the planned tool will use anamorphic mirrors to reduce shadowing effects from nanometer-scale mask patterns. Clever design and engineering of the mirrors could allow this new NA >0.5 tool to be able to achieve wafer throughputs similar to ASML’s current generation of 0.33 NA tools for the same source power and resist speed.

The Numerical Aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. Higher-NA systems can resolve finer features by collecting light from a wider range of angles. Mirror surfaces to reflect EUV “light” are made from over 50 atomic-scale bi-layers of molybdenum (Mo) and silicon (Si), and increasing the width of the mirrors to reach higher NA increases the angular spread of the light, which results in shadowing within the mask patterns.
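
The familiar Rayleigh scaling makes the resolution argument concrete; the k1 values below are typical planning assumptions, and 0.55 is used only as a representative “NA >0.5” value, not an ASML specification:

HP = k_{1}\,\frac{\lambda}{\mathrm{NA}}: \quad \mathrm{NA}=0.33,\ k_{1}\approx 0.4 \Rightarrow HP \approx \frac{0.4 \times 13.5\,\mathrm{nm}}{0.33} \approx 16\,\mathrm{nm}; \qquad \mathrm{NA}=0.55,\ k_{1}\approx 0.33 \Rightarrow HP \approx 8\,\mathrm{nm}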

In the proceedings of last year’s European Mask and Lithography Conference, Zeiss researchers reported on  “Anamorphic high NA optics enabling EUV lithography with sub 8 nm resolution” (doi:10.1117/12.2196393). The abstract summarizes the inherent challenges of establishing high NA EUVL technology:

For such a high-NA optics a configuration of 4x magnification, full field size of 26 x 33 mm² and 6’’ mask is not feasible anymore. The increased chief ray angle and higher NA at reticle lead to non-acceptable mask shadowing effects. These shadowing effects can only be controlled by increasing the magnification, hence reducing the system productivity or demanding larger mask sizes. We demonstrate that the best compromise in imaging, productivity and field split is a so-called anamorphic magnification and a half field of 26 x 16.5 mm² but utilizing existing 6’’ mask infrastructure.

Figure 1 shows that ASML plans to introduce such a system after the year 2020, with a throughput of 185 wafers-per-hour (wph) and with overlay of <2 nm. Hans Meiling, ASML vice president of product management for EUV, explained in an exclusive interview with Solid State Technology why >0.5 NA capability will not be upgradable on 0.33 NA tools: “the >0.5NA optical path is larger and will require a new platform. The anamorphic imaging will also require stage architectural changes.”

Fig.1: EUVL stepper product plans for wafers per hour (WPH) and overlay accuracy include change from 0.33 NA to a new >0.5 NA platform. (Source: ASML)

Overlay of <2 nm will be critical when patterning 8nm half-pitch features, particularly when stitching lines together between half-fields patterned by single exposures of EUV. Minimal overlay error is also needed if EUV is to be used to cut grid lines that are initially formed by pitch-splitting with argon-fluoride immersion (ArFi) lithography. In addition to the high-NA set of mirrors, engineers will have to improve many parts of the stepper to be able to improve on the 3 nm overlay capability promised for the NXE:3400B 0.33 NA tool that ASML plans to ship next year.

“Achieving better overlay requires improvements in wafer and reticle stages regardless of NA,” explained Meiling. “The optics are one of the many components that contribute to overlay. Compare to ArF immersion lithography, where the optics NA has been at 1.35 for several generations but platform improvements have provided significant overlay improvements.”

Manufacturing Capability Plans

Figure 2 shows that anamorphic systems require anamorphic masks, so moving from 0.33 to >0.5 NA requires re-designed masks. For relatively large chips, two adjacent exposures with two different anamorphic masks will be needed to pattern the same field area which could be imaged with lower resolution by a single 0.33 NA exposure. Obviously, such adjacent exposures of one layer must be properly “stitched” together by design, which is another constraint on electronic design automation (EDA) software.

Fig.2: Anamorphic >0.5 NA EUVL system planned by ASML and Zeiss will magnify mask images by 4x in the x-direction and 8x in the y-direction. (Source: Carl Zeiss SMT)
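
A quick check of the field arithmetic, using the dimensions from the Zeiss abstract quoted above, shows why the anamorphic half field preserves the existing 6-inch mask infrastructure:

26\,\mathrm{mm} \times 4 = 104\,\mathrm{mm}, \qquad 16.5\,\mathrm{mm} \times 8 = 132\,\mathrm{mm}

so the 26 x 16.5 mm² wafer half field maps to the same 104 x 132 mm² reticle area that a conventional 4x system uses for its full 26 x 33 mm² field, and it still fits on a standard 6-inch (152 mm) mask blank.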

Though large chips will require twice as many half-field masks, use of anamorphic imaging somewhat reduces the challenges of mask-making. Meiling reminds us that, “With the anamorphic imaging, the 8X direction conditions will actually relax, while the 4X direction will require incremental improvements such as have always been required node-on-node.”

ASML and Zeiss report that holes which “obscure” the centers of the mirrors can, surprisingly, allow for increased transmission of EUV by each mirror, up to twice that of the “unobscured” mirrors in the 0.33 NA tool. The holes let light pass through one mirror on its way to and from another, so the mirrors can be arranged for more efficient reflection. Theoretically, each >0.5 NA half-field could then be exposed twice as fast as a 0.33 NA full-field, though some system throughput loss seems inevitable: stepping across twice as many fields per wafer will cost some percentage of throughput.

While stitching two side-by-side >0.5 NA EUVL exposures will be challenging, the generally known alternatives seem likely to provide only lower throughputs and lower yields:

*   Double-exposure of full-field using 0.33 NA EUVL,

*   Octuple-exposure of full-field using ArFi, or

*   Quadruple-exposure of full-field using ArFi complemented by e-beam direct-writing (EbDW) or by directed self-assembly (DSA).

One ASML EUVL system for HVM is expected to cost ~US$100 million. As presented at the company’s October 31st Investor Day this year, ASML’s modeling indicates that a leading-edge logic fab running ~45k wafer starts per month (WSPM) would need to purchase 7-12 EUV systems to handle an anticipated 6-10 EUV layers within “7nm-node” designs. Assuming that each tool will cost >US$100 million, a leading logic fab would have to invest ~US$1 billion to be able to use EUV for critical lithography layers.

With nearly US$1 billion in capital investment needed to begin using EUVL, HVM fabs want to be able to get productive value out of the tools over more than a single IC product generation. If a logic fab invests US$1 billion to use 0.33 NA EUVL for the “7nm-node,” there is risk that those tools will be unproductive for “5nm-node” designs expected a few years later. Some fabs may choose to push ArFi multi-patterning complemented by another lithography technology for a few years, and delay investment in EUVL until >0.5 NA tools become available.

—E.K.

Air-Gaps for FinFETs Shown at IEDM

Friday, October 28th, 2016


By Ed Korczynski, Sr. Technical Editor

Researchers from IBM and GlobalFoundries will report on the first use of “air-gaps” as part of the dielectric insulation around active gates of “10nm-node” finFETs at the upcoming International Electron Devices Meeting (IEDM) of the IEEE (ieee-iedm.org). Happening in San Francisco in early December, IEDM 2016 will again provide a forum for the world’s leading R&D teams to show off their latest-greatest devices, including 7nm-node finFETs by IBM/GlobalFoundries/Samsung and by TSMC. Air-gaps reduce the dielectric capacitance that slows down ICs, so their integration into transistor structures leads to faster logic chips.

History of Airgaps – ILD and IPD

As this editor recently covered at SemiMD, in 1998 Ben Shieh—then a researcher at Stanford University and now a foundry interface for Apple Inc.—first published (Shieh, Saraswat & McVittie, IEEE Electron Dev. Lett., January 1998) on the use of controlled-pitch design combined with CVD dielectrics to form “pinched-off keyholes” in cross-sections of inter-layer dielectrics (ILD).

In 2007, IBM researchers showed a way to use sacrificial dielectric layers as part of a subtractive process that allows air-gaps to be integrated into any existing dielectric structure. In an interview with this editor at that time, IBM Fellow Dan Edelstein explained, “we use lithography to etch a narrow channel down so it will cap off, then deliberately damage the dielectric and etch it so it looks like a balloon. We get a big gap with a drop in capacitance and then a small slot that gets pinched off.”

Intel presented its integration of air-gaps into on-chip interconnects at IITC in 2010, but delayed using them until the company’s 14nm-node reached production in 2014. 2D-NAND fabs have been using air-gaps as part of the inter-poly dielectric (IPD) for many years, so there is precedent for integration near the gate-stack.

Airgaps for finFETs

Now researchers from IBM and GlobalFoundries will report (in IEDM Paper #17.1, “Air Spacer for 10nm FinFET CMOS and Beyond,” K. Cheng et al.) on the first air-gaps used at the transistor level in logic. Figure 1 shows that for these “10nm-node” finFETs the dielectric spacing—including the air-gap and both sides of the dielectric liner—is about 10 nm. The liner needs to be ~2nm thin so that ~1nm of ultra-low-k sacrificial dielectric remains on either side of the ~5nm air-gap.

Fig.1: Schematic of partial air-gaps only above fin tops using dielectric liners to protect gate stacks during air-gap formation for 10nm finFET CMOS and beyond. (source: IEDM 2016, Paper#17.1, Fig.12)

These air-gaps reduced capacitance at the transistor level by as much as 25%, and in a ring-oscillator test circuit by as much as 15%. The researchers say a partial integration scheme—where the air-gaps are formed only above the tops of the fins—minimizes damage to the finFET, as does the high-selectivity etching process used to fabricate them.
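
A simple series-capacitor estimate shows why replacing most of the spacer with air cuts the local capacitance so sharply, even though the transistor-level reduction is a more modest ~25% once other parasitics are included. The layer thicknesses below follow the description above, while the material k values are generic assumptions rather than numbers from the paper:

# Series-capacitor estimate of the effective k of the spacer stack described above.
# Thicknesses (nm) follow the article; the k values are generic assumptions.
layers = [
    (2.0, 7.0),   # dielectric liner, ~2 nm, nitride-like k assumed
    (1.0, 2.4),   # remaining ultra-low-k sacrificial dielectric, ~1 nm
    (5.0, 1.0),   # air gap, ~5 nm, k = 1
    (1.0, 2.4),   # ultra-low-k on the other side of the gap
    (2.0, 7.0),   # second liner
]

total_thickness = sum(t for t, _ in layers)               # ~11 nm, i.e. "about 10 nm"
k_eff = total_thickness / sum(t / k for t, k in layers)   # series combination of the layers
print("Effective spacer k ~ %.2f over %.0f nm" % (k_eff, total_thickness))   # ~1.7

# A solid spacer of comparable materials would sit around k ~ 4 to 5, so locally the
# spacer capacitance falls by more than half; diluted by other parasitics, that is
# consistent with the ~25% reduction reported at the transistor level.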

Figure 2 shows a cross-section transmission electron micrograph (TEM) of what can go wrong with etch-back air-gaps when all of the processes are not properly controlled. Because there are inherent process/design interactions needed to form repeatable air-gaps of desired shapes, this integration scheme should be extendable “beyond” the “10nm-node” to finFETs formed at tighter pitches. However, it seems likely that “5nm-node” logic FETs will use arrays of horizontal silicon nano-wires (NW), for which more complex air-gap integration schemes would seem to be needed.

Fig.2: TEM image of FinFET transistor damage—specifically, erosion of the fin and source-drain epitaxy—by improper etch-back of the air-gaps at 10nm dimensions. (source: IEDM 2016, Paper#17.1, Fig.10)

—E.K.
