Solid State Technology
The Confab
Posts Tagged ‘IC’


D2S Releases 4th-Gen IC Computational Design Platform

Friday, September 30th, 2016


By Ed Korczynski, Sr. Technical Editor

D2S recently released the fourth generation of its computational design platform (CDP), which enables extremely fast (400 Teraflops) and precise simulations for semiconductor design and manufacturing. The new CDP is based on NVIDIA Tesla K80 GPUs and Intel Haswell CPUs, and is architected for 24×7 cleanroom production environments. To date, 14 CDPs across four platform generations are in use by customers around the globe, including six of the latest fourth generation. In an exclusive interview with SemiMD, D2S CEO Aki Fujimura stated, “Now that GPUs and CPUs are fast enough, they can replace other hardware and thereby free up engineering resources to focus on adding value elsewhere.”

Mask data preparation (MDP) and other aspects of IC design and manufacturing require ever-increasing levels of speed and reliability as the data sets upon which they must operate grow larger and more complex with each device generation. The Figure shows that a mask needed to print arrays of sub-wavelength features includes complex curvilinear shapes which must be precisely formed even though they do not print on the wafer. Such sub-resolution assist features (SRAF) increase in complexity and density as the half-pitch decreases, so the complexity of mask data increases far more than the density of printed features.

Sub-wavelength lithography using 193nm wavelength requires ever-more complex masks to repeatably print ever smaller half-pitch (HP) features, as shown by (LEFT) a typical mask composed of complex nested curves and dots which do not print (RIGHT) in the array of 32nm HP contacts/vias represented by the small red circles. (Source: D2S)

GPUs, which were first developed as processing engines for the complex graphical content of computer games, have since emerged as an attractive option for compute-intensive scientific applications due in part to their ability to run many more computing threads (up to 500x) compared to similar-generation CPUs. “Being able to process arbitrary shapes is something that mask shops will have to do,” explained Fujimura. “The world could go 193nm or EUV at any particular node, but either way there will be more features and higher complexity within the features, and all of that points to GPU acceleration.”

The D2S CDP is engineered for high reliability inside a cleanroom manufacturing environment. A few of the fab applications where CDPs are currently being used include:

  • model-based MDP for leading-edge designs that require increasingly complex mask shapes,
  • wafer plane analysis of SEM mask images to identify mask errors that print, and
  • inline thermal-effect correction of eBeam mask writers to lower write times.

“The amount of design data required to produce photomasks for leading-edge chip designs is increasing at an exponential rate, which puts more pressure on mask writing systems to maintain reasonable write times for these advanced masks. At the same time, writing these masks requires higher exposure doses and shot counts, which can cause resist proximity heating effects that lead to mask CD errors,” stated Noriaki Nakayamada, group manager at NuFlare Technology. “D2S GPU acceleration technology significantly reduces the calculation time required to correct these resist heating effects. By employing a resist heating correction that includes the use of the D2S CDP as an OEM option on our mask writers, NuFlare estimates that it can reduce CD errors by more than 60 percent, and reduce write times by more than 20 percent.”

In the eBeam Initiative’s 2015 survey, the most advanced reported mask-set contained >100 masks, of which ~20% could be considered ‘critical’. The just-released 2016 survey disclosed that the most complex single-layer mask design written last year required 16 TB of data. However, platforms like D2S’s CDP have been used to accelerate writing such that the weighted-average reported write time has decreased to 4 hours. Meanwhile, the longest reported mask write time decreased from 72 to 48 hours.

3D-NAND Deposition and Etch Integration

Thursday, September 1st, 2016


By Ed Korczynski, Sr. Technical Editor

3D-NAND chips are in production or pilot-line manufacturing at all major memory manufacturers, and they are expected to rapidly replace 2D-NAND chips in most applications due to lower costs and greater reliability. Unlike 2D-NAND, which was enabled by lithography, 3D-NAND is deposition- and etch-enabled. “With 3D-NAND you’re talking about 40nm devices, while the most advanced 2D-NAND is running out of steam due to the limited countable number of stored electrons per cell, and in terms of repeatability due to parasitics between adjacent cells,” reminded Harmeet Singh, corporate vice president of Lam Research, in an exclusive interview with SemiMD to discuss the company’s presentation at the Flash Memory Summit 2016.

“We’re in an era where deposition and etch uniquely define the customer roadmap,” said Singh, “and we are the leading supplier in 3D-NAND deposition and etch.” Though each NAND manufacturer has different terminology for its unique 3D variant, from a manufacturing process-integration perspective they all share similar challenges in the following simplified process sequence:

1)    Deposition of 32-64 pairs of blanket “mold stack” thin-films,

2)    Word-line hole etch through all layers and selective fill of NAND cell materials, and

3)    Formation of “staircase” contacts to each cell layer.

Each of these unique process modules is needed to form the 3D arrays of NVM cells.

For the “mold stack” deposition of blanket alternating layers, it is vital for the blanket PECVD to be defect-free, since any defects are mirrored and magnified in upper layers. All layers must also be stress-free, since the stress in each deposited layer accumulates as strain in the underlying silicon wafer; with over 32 layers, the additive strain can easily warp wafers so much that lithographic overlay mismatch induces significant yield loss. Controlled-stress backside thin-film depositions can also be used to balance the stress of front-side films.
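The scale of the problem can be illustrated with the classic Stoney relation, which links the accumulated stress-thickness of a film stack to wafer curvature (a textbook first-order approximation, not a Lam model):

    \kappa = \frac{6\,(1-\nu_s)}{E_s\, t_s^{2}} \sum_{i=1}^{N} \sigma_i\, t_i

Here κ is the wafer curvature, E_s, ν_s, and t_s are the substrate’s Young’s modulus, Poisson ratio, and thickness, and σ_i t_i is the stress-thickness product of deposited layer i. Because the contributions simply add, doubling the number of layer pairs at constant per-layer stress doubles the wafer bow, which is why per-layer stress control and compensating backside films matter so much.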

Hole Etch

“The difficult etch of the hole: the materials are different, so the challenges are different,” commented Singh about the different types of 3D-NAND now being manufactured by leading fabs. “During this conference, one of our customers presented that they do not see the hole diameters shrinking, so at this point it appears to us that shrinking hole diameters will not happen until after the stacking in the z-dimension reaches some limit.”

Tri-Layer Resist (TLR) stacks for the hole patterning allow the amorphous-carbon hardmask material to be tuned for maximum etch resistance without compromising the resolution of the photo-active layer needed for patterning. The carbon hardmask is over 3 microns thick, and carbon etching is usually sensitive to temperature, so Lam’s latest wafer-chuck for etching features more than 100 temperature-control zones. “This is an example of where Lam is using its process expertise to optimize both the hardmask etch as well as the actual hole etch,” explained Singh.

Staircase Etch

The Figure shows a simplified cross-sectional schematic of how the unique “staircase” wordline contacts are cost-effectively manufactured. The established process of record (POR) for forming the “stairs” uses a single mask exposure of thick KrF photoresist—at 248nm wavelength—to etch 8 sets of stairs controlled by a precise resist trim. The trimming step controls the location of the steps such that they align with the contact mask, and so must be tightly controlled to minimize any misalignment yield loss.

A) Simplified cross-sectional schematic of the staircase etch for 3D-NAND contacts using thick photoresist, B) which allows for controlled resist trimming to expose the next “stair” such that C) successive trimming creates 8-16 steps from a single initial photomask exposure. (Source: Ed Korczynski)

Lam is working on ways to tighten the trimming etch uniformity such that 16 sets of stairs can be repeatably etched from a single KrF mask exposure. Halving the relative rate of vertical etch to lateral etch of the KrF resist allows for the same resist thickness to be used for double the number of etches, saving lithography cost. “We see an amazing future ahead because we are just at the beginning of this technology,” commented Singh.
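The arithmetic behind that claim is simple enough to sketch: the number of stairs per exposure is limited by how much resist each trim consumes vertically, so halving the vertical-to-lateral etch-rate ratio doubles the step count. A minimal illustration with hypothetical numbers, not Lam process data:

    def stairs_per_exposure(resist_thickness_nm, lateral_trim_per_step_nm,
                            vertical_to_lateral_ratio):
        """Estimate how many staircase steps one resist coat supports.

        Each trim recedes the resist laterally by one step width while
        consuming (ratio * lateral) nm of resist vertically.
        """
        vertical_loss_per_step = vertical_to_lateral_ratio * lateral_trim_per_step_nm
        return int(resist_thickness_nm // vertical_loss_per_step)

    # Halving the vertical:lateral etch-rate ratio doubles the step count:
    print(stairs_per_exposure(3000, 500, 0.75))   # -> 8 steps
    print(stairs_per_exposure(3000, 500, 0.375))  # -> 16 steps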


Fab Facilities Data and Defectivity

Monday, August 1st, 2016


By Ed Korczynski, Sr. Technical Editor

In-the-know attendees at a Thursday morning working breakfast at SEMICON West heard executives representing the world’s leading memory fabs discuss manufacturing challenges at the 4th annual Entegris Yield Forum. Among the excellent presenters was Norm Armour, managing director of worldwide facilities and corporate EHSS at Micron. Armour has been responsible for some of the most famous fabs in the world, including GlobalFoundries’ logic fab in Malta, New York, and AMD’s Fab25 in Austin, Texas. He discussed how facilities systems affect yield and parametric control in the fab.

Just recently, his organization within Micron broke records working with M&W on the new flagship Fab 10X in Singapore—now running 3D-NAND—by going from ground-breaking to first-tool-in in less than 12 months, followed by over 400 tools installed in 3 months. “The devil is in the details across the board, especially for 20nm and below,” declared Armour. “Fabs are delicate ecosystems. I’ll give a few examples from a high-volume fab of things that you would never expect to see, of component-level failures that caused major yield crashes.”

Ultra-Pure Water (UPW)

Ultra-Pure Water (UPW) is critical for IC fab processes including cleaning, etching, CMP, and immersion lithography, and contamination specs are now at the part-per-billion (ppb) or part-per-trillion (ppt) levels. Use of online monitoring is mandatory to mitigate the risk of contamination. International Technology Roadmap for Semiconductors (ITRS) guidelines for UPW quality (minimum acceptable standard) include the following critical parameters, which feed the limit-check sketch after the list:

  • Resistivity @ 25°C: >18.0 MΩ-cm,
  • TOC: <1.0 ppb,
  • Particles: <0.3/ml @ 0.05 µm, and
  • Bacteria by culture: <1 per 1000 ml.
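As a minimal sketch of what such online monitoring looks like in software, the following hypothetical limit-check flags any reading that violates the guidelines above (parameter names and sample values are illustrative only):

    # Hypothetical online check of UPW quality against ITRS-style limits.
    UPW_LIMITS = {
        "resistivity_Mohm_cm": (">=", 18.0),  # @ 25 deg C
        "toc_ppb":             ("<",  1.0),
        "particles_per_ml":    ("<",  0.3),   # counted at 0.05 um
        "bacteria_per_1000ml": ("<",  1.0),
    }

    def out_of_spec(reading):
        """Return the parameters in `reading` that violate UPW limits."""
        failures = []
        for name, (op, limit) in UPW_LIMITS.items():
            value = reading[name]
            ok = value >= limit if op == ">=" else value < limit
            if not ok:
                failures.append(name)
        return failures

    sample = {"resistivity_Mohm_cm": 18.2, "toc_ppb": 0.4,
              "particles_per_ml": 0.5, "bacteria_per_1000ml": 0.0}
    print(out_of_spec(sample))  # -> ['particles_per_ml']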

In one case associated with a gate-cleaning tool, elevated levels of zinc were detected in lots that had passed through one particular tool for a variation on a classic SC1 wet clean. High-purity chemistries were eliminated as sources based on analytical testing, so the root-cause analysis shifted to the UPW system as a possible source. Statistical analysis then showed a positive correlation between UPW supply lines equipped with pressure regulators and the zinc exposure. The pressure-regulator vendor confirmed use of zinc oxide and zinc stearate in the assembly process of the pressure regulator. “It was really a curing agent for an elastomer diaphragm that caused the contamination of multiple lots,” confided Armour.

UPW pressure regulators are just one of many components used in facilities builds that can significantly degrade fab yield. It is critical to implement a rigorous component testing and qualification process prior to component installation and widespread use. “Don’t take anything for granted,” advised Armour. “Things like UPW regulators have a first-order impact upon yield and they need to be characterized carefully, especially during new fab construction and fit up.”

Photoresist filtration

Photoresist filtration has always been important for ensuring high yield in manufacturing, but it has become ultra-critical for lithography at the 20nm node and below. Dependable filtration is particularly important because the industry lacks in-line monitoring technology capable of detecting particles below ~40nm.

Micron tried using filters with 50nm pore diameters for a 20nm node process…and saw excessive yield losses along with extreme yield variability. “We characterized pressure-drop as a function of flow-rate, and looked at various filter performances for both 20nm and 40nm particles,” explained Armour. “We implemented a new filter, and lo and behold saw a step function increase in our yields. Defect densities dropped dramatically.” Tracking the yields over time showed that the variability was significantly reduced around the higher yield-entitlement level.
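Characterizing a filter the way Armour describes is easy to automate; a toy sketch of fitting pressure drop against flow rate follows (the data are invented, and the fitted slope simply serves as a figure of merit for comparing candidate filters):

    import numpy as np

    # Hypothetical filter characterization: pressure drop vs. flow rate.
    # For a clean membrane in the Darcy regime, dP is roughly linear in Q.
    flow_lpm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])     # liters per minute
    dp_kpa   = np.array([4.1, 8.3, 12.2, 16.6, 20.4])  # measured drop, kPa

    slope, intercept = np.polyfit(flow_lpm, dp_kpa, 1)
    print(f"flow resistance ~ {slope:.1f} kPa per L/min")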

Airborne Molecular Contamination (AMC)

Airborne Molecular Contamination (AMC) is ‘public enemy number one’ in 20nm-node and below fabs around the world. “In one case there were forest fires in Sumatra and the smoke was going into the atmosphere and actually went into our air intakes in a high-volume fab in Taiwan thousands of miles away, and we saw a spike in hydrogen-sulfide,” confided Armour. “It increased our copper CMP defects, due to copper migration. After we installed higher-quality AMC filters for the make-up air units we saw dramatic improvement in copper defects. So what is most important is that you have real-time on-line monitoring of AMC levels.”

Building collaborative relationships with vendors is critical for troubleshooting component issues and improving component quality. “Partnering with suppliers like Entegris is absolutely essential,” continued Armour. “On AMCs for example, we have had a very close partnership that developed out of a team working together at our Inotera fab in Taiwan. There are thousands of important technologies that we need to leverage now to guarantee high yields in leading-node fabs.” The Figure shows just some of the AMCs that must be monitored in real-time.

Big Data

The only way to manage all of this complexity is with “Big Data.” In addition to the primary process parameters that must be tracked, there are many essential facilities inputs to analytics:

  • Environmental Parameters – temperature, humidity, pressure, particle count, AMCs, etc.
  • Equipment Parameters – run state, motor current, vibration, valve position, etc.
  • Effluent Parameters – cooling water, vacuum, UPW, chemicals, slurries, gases, etc.

“Conventional wisdom is that process tools create 90% of your defect-density loss, but that’s changing toward facilities now,” said Armour. “So why not apply the same methodologies within facilities that we do in the fab?” Statistical process control (SPC) is after-the-fact and reactive, while advanced process control (APC) provides real-time fault detection on input variables, including such parameters as the vibration or flow-rate of a pump.
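The distinction can be made concrete in a few lines: an after-the-fact SPC check flags excursions in hindsight, while an APC-style monitor trips as soon as a running window of an input variable (say, pump vibration) drifts. A hypothetical sketch; the rules and thresholds are illustrative, not Micron’s:

    import statistics

    def spc_violations(samples, mean, sigma):
        """After-the-fact SPC: flag points outside 3-sigma control limits."""
        return [i for i, x in enumerate(samples) if abs(x - mean) > 3 * sigma]

    def apc_interdict(stream, mean, sigma, window=5):
        """APC-style real-time fault detection on an input variable:
        trip as soon as a running window drifts beyond 2 sigma."""
        recent = []
        for i, x in enumerate(stream):
            recent.append(x)
            if len(recent) > window:
                recent.pop(0)
            if len(recent) == window and abs(statistics.mean(recent) - mean) > 2 * sigma:
                return i  # sample index at which to interdict the tool
        return None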

“Never enough data,” enthused Armour. “In terms of monitoring input variables, we do this through the PLCs and basically use SCADA to do the fault-detection interdiction on the critical input variables. This has been proven to be highly effective, providing a lot of protection, and letting me sleep better at night.”

Micron also uses these data to provide site-to-site comparisons. “We basically drive our laggard sites to meet our world-class sites in terms of reducing variation on facility input variables,” explained Armour. “We’re improving our forecasting as a result of this capability, and ultimately protecting our fab yields. Again, the last thing a fab manager wants to see is facilities causing yield loss and variation.”


Applied Materials Releases Selective Etch Tool

Wednesday, June 29th, 2016


By Ed Korczynski, Sr. Technical Editor

Applied Materials has disclosed commercial availability of its new Selectra(TM) selective-etch twin-chamber hardware for the company’s high-volume manufacturing (HVM) Producer® platform. Using the standard fluorine and chlorine gases already used in traditional Reactive Ion Etch (RIE) chambers, this new tool provides atomic-level precision in the selective removal of materials in the 3D device structures increasingly used for the most advanced silicon ICs. The tool is already in use at three customer fabs for finFET logic HVM and at two memory fabs, and more than 350 chambers are planned to have shipped to customers by the end of 2016.

Figure 1 shows a simplified cross-sectional schematic of the Selectra chamber, where the dashed white line indicates some manner of screening functionality so that “ions are blocked, chemistry passes through,” according to the company. In an exclusive interview with Solid State Technology, company representatives declined to disclose any hardware details. “We are using typical chemistries that are used in the industry,” explained Ajay Bhatnagar, managing director of Selective Removal Products for Applied Materials. “If there are specific new applications needed, then we can use new chemistry. We have a lot of IP on how we filter ions and how we allow radicals to combine on the wafer to create selectivity.”

FIG 1: Simplified cross-sectional schematic of a silicon wafer being etched by the neutral radicals downstream of the plasma in the Selectra chamber. (Source: Applied Materials)

From first principles we can assume that the ion filtering is accomplished with some manner of electrically-grounded metal screen. This etch technology accomplishes similar process results to Atomic Layer Etch (ALE) systems sold by Lam, while avoiding the need for specialized self-limiting chemistries and the accompanying chamber throughput reductions associated with pulse-purge process recipes.

“What we are doing is being able to control the amount of radicals coming to the wafer surface and controlling the removal rates very uniformly across the wafer surface,” asserted Bhatnagar. “If you have this level of atomic control then you don’t need the self-limiting capability. Most of our customers are controlling process with time, so we don’t need to use self-limiting chemistry.” Applied Materials claims that this allows the Selectra tool to have higher relative productivity compared to an ALE tool.

Due to the intrinsic 2D resolution limits of optical lithography, leading IC fabs now use multi-patterning (MP) litho flows in which sacrificial thin-films must be removed to create the final desired layout. Due to litho limits and CMOS device scaling limits, 2D logic transistors are being replaced by 3D finFETs and eventually Gate-All-Around (GAA) horizontal nanowires (NW). Due to dielectric leakage at the atomic scale, 2D NAND memory is being replaced by 3D-NAND stacks. All of these advanced IC fab processes require the removal of atomic-scale materials with extreme selectivity to remaining materials, so the Selectra chamber is expected to be a future work-horse for the industry.

When the industry moves to GAA-NW transistors, alternating layers of Si and SiGe will be grown on the wafer surface, 2D patterned into fins, and then the sacrificial SiGe must be selectively etched to form 3D arrays of NW. Figure 2 shows the SiGe etched from alternating Si/SiGe stacks using a Selectra tool, with sharp Si corners after etch indicating excellent selectivity.

FIG 2: SEM cross-section showing excellent etch of SiGe within alternating Si/SiGe layers, as will be needed for Gate-All-Around (GAA) horizontal NanoWire (NW) transistor formation. (Source: Applied Materials)

“One of the fundamental differences between this system and old downstream plasma ashers is that it was designed to provide extreme selectivity to different materials,” said Matt Cogorno, global product manager of Selective Removal Products for Applied Materials. “With this system we can provide silicon to titanium-nitride selectivity at 5000:1, or silicon to silicon-nitride selectivity at 2000:1. This is accomplished with the unique hardware architecture in the chamber combined with how we mix the chemistries. Also, there is no polymer formation in the etch process, so after etching there are no additional processing issues with the need for ashing and/or a wet-etch step to remove polymers.”
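Those ratios translate directly into collateral film loss, which a quick back-of-the-envelope check makes plain (the target etch depth is an assumed example, not an Applied Materials figure):

    def collateral_loss_nm(target_etch_nm, selectivity):
        """Material lost from the non-target film while etching the target."""
        return target_etch_nm / selectivity

    # Using the ratios quoted above, for an assumed 50nm target etch:
    print(collateral_loss_nm(50, 5000))  # Si:TiN 5000:1 -> 0.01 nm TiN lost
    print(collateral_loss_nm(50, 2000))  # Si:SiN 2000:1 -> 0.025 nm SiN lost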

The system can also be used for dry cleaning and surface preparation, due to the extreme selectivity and damage-free material removal. “You can control the removal rates,” explained Cogorno. “You don’t have ions on the wafer, but you can modulate the number of radicals coming down.” For HVM of ICs with atomic-scale device structures, this new tool can widen process windows and reduce costs compared to both dry RIE and wet etching.


79 GHz CMOS RADAR Chips for Cars from Imec and Infineon

Tuesday, May 24th, 2016


By Ed Korczynski, Sr. Technical Editor

As unveiled at the annual Imec Technology Forum in Brussels, Infineon Technologies AG and imec are working on highly integrated CMOS-based 79 GHz sensor chips for automotive radar applications. Imec provides expertise in high-frequency system, circuit, and antenna design for radar applications, complementing the knowledge Infineon has accumulated in earning the world’s top market share in commercial radar sensor chips. Infineon and imec expect functional CMOS sensor chip samples in the third quarter of 2016. A complete radar system demonstrator is scheduled for the beginning of 2017.

Whether or not fully automated cars and trucks will be traveling on roads soon, today’s drivers want more sensors to be able to safely avoid accidents in conditions of limited visibility. A vehicle equipped with driver-assistance functions today typically carries up to three radar systems. In a future with fully automated cars, up to ten radar systems and ten more sensor systems using cameras or lidar could be needed. Short-range radar (SRR) would look for side objects, medium-range radar (MRR) would scan widely for objects up to 50m in front and in back, and long-range radar (LRR) would focus up to 250m in front and in back for high-speed collision avoidance.

“Infineon enables the radar-based safety cocoon of the partly and fully automated car,” said Ralf Bornefeld, Vice President & General Manager, Sense & Control, Infineon Technologies AG. “In the future, we will manufacture radar sensor chips as a single-chip solution in a classic CMOS process for applications like automated parking. Infineon will continue to set industry standards in radar technology and quality.”

The Figure shows the evolution of radar technology over the last decades, leading to the current miniaturization using solid-state silicon CMOS. Key to the successful development of this 79 GHz demonstrator was the choice of 28 nm CMOS technology. Imec has been refining this technology, as shown at ISSCC over many years: first a 28nm transmitter chip in 2013, then a 28nm transmit-and-receive (a.k.a. “transceiver”) chip in 2014, and finally a single chip in 2015 combining a transceiver with analog-digital converters (ADC), phase-lock loops (PLL), and digital components. Long-term supply of eventual commercial chips should be ensured by using 28nm technology, which is known as a “long-lived” node.

“We are excited to work with Infineon as a valuable partner in our R&D program on advanced CMOS-based 77 GHz and 79 GHz radar technology,” stated Wim Van Thillo, program director perceptive systems at imec. “Compared to the mainstream 24 GHz band, the 77 GHz and 79 GHz bands enable a finer range, Doppler and angular resolution. With these advantages, we aim to realize radar prototypes with integrated multiple-input, multiple-output (MIMO) antennas that not only detect large objects, but also pedestrians and bikers and thus contribute to a safer environment for all.”
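The resolution advantage Van Thillo cites follows from the wider sweep bandwidth available at the higher carrier frequency: for an FMCW radar, range resolution is ΔR = c/2B. A small sketch with assumed bandwidths (roughly 250 MHz usable in the 24 GHz ISM band versus up to 4 GHz across 77-81 GHz; these figures are assumptions, not from the article):

    C = 299_792_458.0  # speed of light, m/s

    def range_resolution_m(bandwidth_hz):
        """Ideal FMCW radar range resolution: delta_R = c / (2 * B)."""
        return C / (2 * bandwidth_hz)

    print(f"24 GHz band: {range_resolution_m(250e6) * 100:.0f} cm")  # ~60 cm
    print(f"79 GHz band: {range_resolution_m(4e9) * 100:.1f} cm")    # ~3.7 cm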

Since the aesthetics are always important for buyers, automobile companies have been challenged to integrate all of the desired sensors into vehicles in an invisible manner. “The designers hate what they call the ‘warts’ on car bumpers that are the small holes needed for the ultrasonic sensors currently used,” explained Van Thillo in a press conference during ITF2016.

In an ITF2016 presentation, CEO Reinhard Ploss discussed how Infineon works with industrial partners to create competitive commercial products. “When we first developed RADAR, there was a collaboration between the Tier-1 car companies and ourselves,” explained Ploss. “The key lies in the algorithms needed to process the data, since the raw data stream is essentially useless. The next generation of differentiation for semiconductors will be how to integrate algorithms. In effect, how do you translate ‘pixels’ into ‘optics’ without an expensive microprocessor?”

Evolution of radar technology over time has reached the miniaturization of 79 GHz using 28nm silicon CMOS technology. Imec is now also working on 140 GHz radar chips. (Source: imec)


Leti’s CoolCube 3D Transistor Stacking Improves with Qualcomm Help

Wednesday, April 27th, 2016

By Ed Korczynski, Sr. Technical Editor

As previously covered by Solid State Technology, CEA-Leti in France has been developing monolithic transistor stacking based on laser re-crystallization of active silicon in upper layers, called “CoolCube” (TM). Leading mobile chip supplier Qualcomm has been working with Leti on CoolCube R&D since late 2013 and, based on preliminary results, has opted to continue collaborating with the goal of building a complete ecosystem that takes the technology from design to fabrication.

“The Qualcomm Technologies and Leti teams have demonstrated the potential of this technology for designing and fabricating high-density and high-performance chips for mobile devices,” said Karim Arabi, vice president of engineering, Qualcomm Technologies, Inc. “We are optimistic that this technology could address some of the technology scaling issues and this is why we are extending our collaboration with Leti.” As part of the collaboration, Qualcomm Technologies and Leti are sharing the technology through flexible, multi-party collaboration programs to accelerate adoption.

Olivier Faynot, micro-electronic component section manager of CEA-Leti, explained in an exclusive interview with Solid State Technology and SemiMD, “Today we have a strong focus on CMOS over CMOS integration, and this is the primary integration that we are pushing. What we see today is that the integration of NMOS over PMOS is interesting and suitable for new material incorporation such as III-V and germanium.”

Table: Critical thermal budget steps summary in a planar FDSOI integration and CoolCube process for top FET in 3DVLSI. (Source: VLSI Symposium 2015)

The Table shows that CMOS over CMOS integration has met transistor performance goals with low-temperature processes, such that the top transistors have at least 90% of the performance compared to the bottom. Faynot says that recent results for transistors are meeting specification, while there is still work to be done on inter-tier metal connections. For advanced ICs there is a lot of interconnect routing congestion around the contacts and the metal-1 level, so inter-tier connection (formerly termed the more generic “local interconnect”) levels are needed to route some gates at the bottom level for connection to the top level.

“The main focus now is on the thermal budget for the integration of the inter-tier level,” explained Faynot. “To do this, we are not just working on the processing but also working closely with the designers. For example, depending on the material chosen for the metal inter-tier there will be different limits on the metal link lengths.” Tungsten is relatively more stable than copper, but its higher electrical resistance imposes inherently lower limits on line lengths. Additional details on such process-design co-dependencies will be disclosed during the 2016 VLSI Technology Symposium, chaired by Raj Jammy.
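The trade-off is easy to quantify to first order with R = ρL/A. A sketch using bulk resistivities and hypothetical link dimensions (real nanoscale lines are considerably more resistive than bulk, so this only illustrates the ratio):

    # Bulk resistivities in ohm-m; nanoscale lines are markedly worse.
    RHO = {"Cu": 1.7e-8, "W": 5.6e-8}

    def line_resistance_ohm(metal, length_um, width_nm, height_nm):
        """R = rho * L / A for a rectangular inter-tier link."""
        area_m2 = (width_nm * 1e-9) * (height_nm * 1e-9)
        return RHO[metal] * (length_um * 1e-6) / area_m2

    # Roughly 3x higher resistance implies ~3x shorter tungsten links
    # for the same delay budget:
    for metal in ("Cu", "W"):
        print(metal, round(line_resistance_ohm(metal, 10, 20, 40), 1), "ohm")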

When the industry decides to integrate III-V and Ge alternate-channel materials in CMOS, the different processing conditions for each should make NMOS over PMOS CoolCube a relatively easy performance extension. “Three-fives and germanium are basically materials with low thermal budgets, so they would be most compatible with CoolCube processing,” reminded Faynot. “To me, this kind of technology would be very interesting for mobile applications, because it would achieve a circuit where the length of the wires would be shortened. We would expect to save in area, and have less of a trade-off between power-consumption and speed.”

“This is a new wave that CoolCube is creating and it has been possible thanks to the interest and support of Qualcomm Technologies, which is pushing the technological development in a good direction and sending a strong signal to the microelectronics community,” said Leti CEO Marie Semeria. “Together, we aim to build a complete ecosystem with foundries, equipment suppliers, and EDA and design houses to assemble all the pieces of the puzzle and move the technology into the product-qualification phase.”


Controlling Variabilities When Integrating IC Fab Materials

Friday, April 15th, 2016


By Ed Korczynski, Senior Technical Editor, SemiMD/Solid State Technology

Semiconductor integrated circuit (IC) manufacturing has always relied upon the supply of critical materials from a global supply chain. Now that shrinks of IC feature sizes have begun to reach economic limits, future functionality improvements in ICs are increasingly derived from the use of new materials. The Critical Materials Conference 2016—to be held May 5-6 in Hillsboro, Oregon—will explore best practices in the integration of novel materials into manufacturing. Dr. David Thompson, Senior Director, Center of Excellence in Chemistry, Applied Materials, will present on “Agony in New Material Introductions – minimizing and correlating variabilities,” which he was willing to discuss in advance with SemiMD.

Korczynski: With more and more materials being considered for use in high-volume manufacturing (HVM) of advanced ICs, how do you begin to selectively screen out materials that will not work for one reason or another to be able to reach the best new material for a target application?

Thompson: While there’s ‘no one size fits all’ solution to this, it typically starts with a review of what’s available and known about the current offerings. With respect to the talk at the CMC, we’ll review the challenges we run into after the materials system and chemistries are set and have been proven generally viable, but still require significant optimization in order to get acceptable yields for manufacturing. It’s a very long road from device proof of concept on a new materials system to a viable manufacturing process.

Korczynski: Since new materials are being considered for use on the atomic-scale in advanced devices, doesn’t all of this have to be done with control at the atomic scale?

Thompson: For the material on the chip, many mainstream analytical techniques are used to achieve atomic-level control, including TEMs and AFMs with atomic resolution during film development for many applications. Unfortunately, this resolution is not available for the chemicals we’re relying on to deposit these materials. For a typical precursor that weighs in the 200 Dalton range, a gram of precursor may have 5 × 10²⁰ molecules. That’s a lot of molecules. Even with ppb (1 in 10⁹) resolutions on analytical, you’re still dealing with invisible populations of >10¹⁰ molecules. It gets worse. While trace-metals analysis can hit ppb levels, molecular analysis techniques are typically limited to resolutions of 0.1 to 0.01 percent for most semiconductor precursors, and there may be impurities which are invisible to routine analytical techniques.
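Using the figures quoted here, the arithmetic works out as follows (a worked restatement of Thompson’s point, not new data):

    N_{\mathrm{invisible}} \approx N_{\mathrm{total}} \times 10^{-9}
        = \left(5\times 10^{20}\right) \times 10^{-9}
        = 5\times 10^{11}\ \text{molecules}

and at molecular-analysis resolutions of only 0.1 to 0.01 percent (10⁻³ to 10⁻⁴), the population that can hide below the detection limit grows to 5 × 10¹⁶ to 5 × 10¹⁷ molecules.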

Ultimately, we rely on analytical techniques to control the gross parameters and disciplined process controls to verify suppliers produce the same compositions the same way, and to manage impurities. On the process and hardware side, it’s like threading the needle trying to get the right film at the right throughput, in a process space that’s as tolerant as possible to the inevitable variability in these chemistries.

Korczynski: With all of this investment in developing one specialty material supplier for advanced IC manufacturing, what is the cost to develop and qualify a second source?

Thompson: Generally, it’s not sustainable to release a product with dual specialty-material sources. The problem with dual-sourcing is that chemical suppliers protect not just their IP but also their sub-supply-chains and proprietary methods of production, transport, and delivery. However, given how trace elements in the formulation can change depending on the conditions the molecules experience over time, the customer in many cases needs to develop two separate sub-recipes based on the specific vendor’s chemistry they are using. So redundancy in the supply chain is prudent, as is making sure the vendor can produce the material in different locations.

There are countless examples over the last 20 years of what I like to call ‘the agony of the supply-chain,’ when a process got locked into using a material whose only supply was from a Ph.D. chemist making it in small batches in a lab. In most cases the initial batch of any new molecule is made at a scale that would fit in a coffee mug. Sometimes, though, scaling up to the first industrial-scale batch can alter impurity factors that change yields on the wafer, even with improved purification. So while a customer would like to keep using small-batch production, that is not sustainable; yet qualifying a second vendor in this environment presents significant challenges.

Korczynski: Can you share an example with us of how your team brought a source of subtle variation under control?

Thompson: We had a process using a new metal film, and in the early development everything looked great. Eventually we observed a drift of process results that was more pronounced with some ampoules and less so with others. The root cause initially eluded us. Then a bright Ph.D. on our team noted that it was interesting the supplier did not report a particular contaminant that would tend to be present as a byproduct of the reaction. The supplier confirmed it was present and variable at 100-300 ppm concentrations in the blend. This contaminant was relatively more volatile than the main component due to vapor-pressure differences, and much more reactive with the substrate/wafer. It was found that this variability in the chemistry induced the process variation on the wafer (as shown in Figure 1).


Chasing impurities and understanding their impact requires rigor and a lot of data collection. There’s no Star Trek analyzer we can use to give us knowledge of all impurities present and the role of those impurities on the process. Many impurities are invisible to routine analytical techniques, so we work very closely with vendors to establish a chemistry analytical protocol for each precursor that may consist of 5-10 different techniques. For the impurities we can’t detect we rely on excellent manufacturing process control and sub-supply sourcing management.

Korczynski: Is the supply-chain for advanced precursors for deposition and etch supplying everything we need in early R&D?

Thompson: New precursor ideation—the science that leads to new classes of compounds with new reactivity, as Roy Gordon or more recently Chuck Winter have been doing in academia—is critically important, and while there are a few academics doing excellent work in this space, in general there’s not enough focus on this topic. We see many IP-protected molecules, but too often they are obvious, simple modifications to one skilled in the art, consisting of merely adding a functional group off of a ring, or mixing and matching known ligand systems. We don’t see a lot of disruptive chemistries. The industry is hunting for differentiated reactivity, and evolutionary precursor-development approaches generally aren’t sufficiently disruptive. While this research is useful in terms of tuning a vapor pressure or thermal stability, it only very rarely produces a differentiated reactivity.

Korczynski: Do we need new methodologies to more efficiently manage all of this?

Thompson: Applied has made significant investments over the last 5 years to help accelerate the readiness of new materials across the board. One of the best things about working at Applied is the rate at which we can learn and build an ecosystem around a new material. With our strength in chemistry, deposition, CMP, etch, metrology and a host of other technologies, we get a fast, strong feedback loop going to accelerate issue discovery, resolution and general learning around new materials.

On the chemical supply-chain front, the need is making sure that chemical vendors accelerate their analytical-chemistry development on new materials. Correlating the variability of chemistry to process results and ultimately yield is the real battle. The more knowledge we have of a chemistry moving into development, the faster learning can occur. I explain to my team that we can’t be proactive in response to things we didn’t anticipate: having to develop the analytical technique to see the impurity responsible for causing (or resolving) a variability only after the fact means starting out at a significant disadvantage. However, we’ve seen a good response from suppliers on new materials and significant improvement on the early learnings necessary to minimize the agony of new material introductions.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that testability has also been designed in, so that structural test remains functional in this mode. In addition, sub-threshold-voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening has used very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold digital becomes analog; so while the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed with three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and marshaling the data they generate so that it can apply to multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st-generation of IoT devices likely include wide varieties of solution for different market-segments such as industrial vs. retail vs. consumer, or will most device use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels, while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn-cycle in Consumer is higher / faster compared to Enterprise. Today’s wearables or smart-phones are good reference examples. This will, however, vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, where consumer home use has longer cycles closer to industrial market requirements. We believe that the lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable: we can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across the practically infinite tradeoffs to be made.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) toward broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low-cost and small-size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:


*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC fabrication – GLOBALFOUNDRIES, and

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use the 45nm/65nm nodes, which are “Node -3” (N-3) compared to the sub-20nm-node chips in HVM. Five years from now, when the bleeding edge will use 10nm-node technology, will IoT chips still use N-3 at the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end-nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50 DMIPS. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecasted to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm—40nm—that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart meters and fitness/medical monitoring will need systems with more processing power (<300 DMIPS). These products will be implemented in 40nm, 28nm, and GLOBALFOUNDRIES’ new 22nm FDSOI technology, which uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired (Ethernet, PLC) connectivity protocols with security will be integrated for gateway products.

High-end products like smart watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000 DMIPS) and run high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on a high-resolution display. Connectivity will include BLE, WiFi, and cellular with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and an RF connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on such a product need to master the analog, digital, MEMS, and RF domains. Often these four domains require different experience and knowledge, and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time as short run volumes characteristic of IoT demands are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State of the art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations for co-simulation of the MEMS device in the context of the IoT system.
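To make the reduced-order-modeling idea concrete in miniature, the toy sketch below projects a large mass-spring system (standing in for FEA output) onto its few lowest vibration modes; the small reduced matrices are what a compact behavioral model would be built from. This is a generic illustration of ROM, not Mentor’s flow, and the Verilog-A export step is omitted:

    import numpy as np

    # Toy ROM: reduce a 200-DOF mass-spring chain (a stand-in for FEA
    # output) to its 3 lowest vibration modes.
    n = 200                       # full-order degrees of freedom
    kspring, mass = 1.0, 1.0
    K = 2 * kspring * np.eye(n) - kspring * (np.eye(n, k=1) + np.eye(n, k=-1))
    M = mass * np.eye(n)

    # Eigenmodes of K v = w^2 M v (M is the identity here).
    w2, V = np.linalg.eigh(K)
    modes = V[:, :3]              # keep the 3 lowest modes

    # Reduced 3x3 matrices: cheap enough for system-level co-simulation.
    K_r = modes.T @ K @ modes
    M_r = modes.T @ M @ modes
    print("reduced freqs:", np.sqrt(np.diag(K_r) / np.diag(M_r)))
    print("full-order   :", np.sqrt(w2[:3]))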

Cadence Adds New Tools for Analog Design, Enhances Layout

Wednesday, April 6th, 2016


By Jeff Dorsch, Contributing Editor

Cadence Design Systems today is introducing new tools within its Virtuoso Analog Design Environment (ADE), along with enhancements to the Virtuoso Layout Suite.

New to Virtuoso ADE are the Virtuoso ADE Explorer, Virtuoso ADE Assembler, and Virtuoso ADE Verifier.

“The new Virtuoso ADE Verifier technology and the Virtuoso ADE Assembler technology run plan capability make our design teams more productive,” said Yanqiu Diao, deputy general manager of the Turing Processor business unit at HiSilicon Technologies Co., Ltd. “Through our early use of the new Cadence Virtuoso ADE product suite, we’ve found that we can improve analog IP verification productivity by approximately 30 percent and reduce verification issues by one-half. Our smartphone and network chip projects should benefit from these latest capabilities.”

Steve Lewis, product marketing director for Cadence’s Custom IC & PCB Group, said the electronic design automation company’s Virtuoso ADE L, XL, and GXL tools “will be kept, will be maintained,” with Cadence “taking that technology to the next level.”

Virtuoso ADE Verifier is “the brand-new kid on the block,” Lewis said in an interview. The tool advances analog verification technology, according to Cadence, and offers an integrated dashboard for engineers to employ.

Under international standards for automotive vehicles, medical equipment, military/aerospace systems, and other products, suppliers “have to trace every aspect of your design,” he noted. “All has to be documented.”

The digital side of chip design addressed those issues about a decade ago, according to Lewis. Such recordkeeping and documentation are “far less common on the analog,” he said. “It’s no longer okay to say the analog takes care of itself.”

Changes in analog design projects were typically tracked in spreadsheet programs, which don’t connect to the Virtuoso suite, Lewis noted, adding, “Now, I know who’s working on what.”

The new analog design tools “add a little bit more granularity” with real-number models, Lewis said. “It’s not quite SPICE,” he admitted.

Regarding Virtuoso ADE Assembler, “we made it look like ADE XL,” Lewis said, so users should have a shorter learning curve with the new tool. Virtuoso ADE Explorer provides what Cadence calls a complete corners and Monte Carlo environment for finding and correcting variation problems.

Cadence is also offering a Virtuoso Variation Option, providing fast Monte Carlo analysis for FinFET chips with 16-nanometer or smaller dimensions.

The enhancements in the Virtuoso Layout Suite are a 10x to 100x improvement in graphics-rendering performance; real-time customization of Module Generators with a simpler and more visual approach; and new structured device-level routing capabilities that are said to enhance routing productivity by up to 50 percent.

“We actually made significant changes in layout for L, XL,” addressing “current techniques, current designs,” Lewis commented.

