Posts Tagged ‘IC’


Fab Facilities Data and Defectivity

Monday, August 1st, 2016


By Ed Korczynski, Sr. Technical Editor

In-the-know attendees at a Thursday-morning working breakfast at SEMICON West heard executives representing the world’s leading memory fabs discuss manufacturing challenges at the 4th annual Entegris Yield Forum. Among the excellent presenters was Norm Armour, managing director of worldwide facilities and corporate EHSS at Micron. Armour has been responsible for some of the most famous fabs in the world, including GlobalFoundries’ logic fab in Malta, New York, and AMD’s Fab25 in Austin, Texas. He discussed how facilities systems affect yield and parametric control in the fab.

Just recently, his organization within Micron broke records working with M&W on the new flagship Fab 10X in Singapore—now running 3D-NAND—by going from ground-breaking to first-tool-in in less than 12 months, followed by over 400 tools installed in 3 months. “The devil is in the details across the board, especially for 20nm and below,” declared Armour. “Fabs are delicate ecosystems. I’ll give a few examples from a high-volume fab of things that you would never expect to see, of component-level failures that caused major yield crashes.”

Ultra-Pure Water (UPW)

Ultra-Pure Water (UPW) is critical for IC fab processes including cleaning, etching, CMP, and immersion lithography, and contamination specs are now at the part-per-billion (ppb) or part-per-trillion (ppt) levels. Use of online monitoring is mandatory to mitigate risk of contamination. International Technology Roadmap for Semiconductors (ITRS) guidelines for UPW quality (minimum acceptable standard) include the following critical parameters:

  • Resistivity at 25°C: >18.0 MΩ-cm,
  • TOC: <1.0 ppb,
  • Particles: <0.3/ml at 0.05 µm, and
  • Bacteria by culture: <1 per 1000 ml.
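
Encoded as online-monitor limit checks, those guideline values might look like the following minimal sketch (the parameter names and sample data are illustrative assumptions, not ITRS code):

    # Sketch: ITRS UPW guidelines expressed as online-monitor limit checks.
    UPW_LIMITS = {
        "resistivity_mohm_cm": ("min", 18.0),   # at 25°C, must exceed limit
        "toc_ppb":             ("max", 1.0),    # must stay below limit
        "particles_per_ml":    ("max", 0.3),    # counted at 0.05 µm
        "bacteria_per_1000ml": ("max", 1.0),
    }

    def check_upw(sample):
        """Return the out-of-spec parameters for one UPW sample."""
        violations = []
        for name, (kind, limit) in UPW_LIMITS.items():
            value = sample[name]
            out_of_spec = value <= limit if kind == "min" else value >= limit
            if out_of_spec:
                violations.append((name, value, limit))
        return violations

    # Example: a sample with elevated TOC trips exactly one violation.
    print(check_upw({"resistivity_mohm_cm": 18.2, "toc_ppb": 1.4,
                     "particles_per_ml": 0.1, "bacteria_per_1000ml": 0.0}))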

In one case associated with a gate cleaning tool, elevated levels of zinc were detected on lots that had passed through one particular tool running a variation on a classic SC1 wet clean. High-purity chemistries were eliminated as sources based on analytical testing, so the root-cause analysis shifted to the UPW system as a possible source. Statistical analysis then showed a positive correlation between UPW supply lines equipped with pressure regulators and the zinc exposure. The pressure-regulator vendor confirmed use of zinc-oxide and zinc-stearate in the assembly process of the regulator. “It was really a curing agent for an elastomer diaphragm that caused the contamination of multiple lots,” confided Armour.
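
The correlation step in that root-cause hunt can be sketched as a simple two-sample comparison; the data below are invented for illustration and are not Micron’s:

    # Sketch: do lots fed by regulator-equipped UPW lines show higher zinc?
    import pandas as pd
    from scipy.stats import mannwhitneyu

    lots = pd.DataFrame({
        "upw_line_has_regulator": [True, True, True, False, False, False],
        "zinc_ppb":               [0.80, 0.65, 0.90, 0.05, 0.04, 0.06],
    })

    with_reg = lots.loc[lots.upw_line_has_regulator, "zinc_ppb"]
    without  = lots.loc[~lots.upw_line_has_regulator, "zinc_ppb"]

    # One-sided test: regulator-fed lines hypothesized to run higher.
    stat, p = mannwhitneyu(with_reg, without, alternative="greater")
    print(f"median with regulator {with_reg.median():.2f} ppb, "
          f"without {without.median():.2f} ppb, p={p:.3f}")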

UPW pressure regulators are just one of many components used in facilities builds that can significantly degrade fab yield. It is critical to implement a rigorous component testing and qualification process prior to component installation and widespread use. “Don’t take anything for granted,” advised Armour. “Things like UPW regulators have a first-order impact upon yield and they need to be characterized carefully, especially during new fab construction and fit up.”

Photoresist filtration

Photoresist filtration has always been important for high yield in manufacturing, but it has become ultra-critical for lithography at the 20nm node and below. Dependable filtration is particularly important because the industry lacks in-line monitoring technology capable of detecting particles below ~40nm.

Micron tried using filters with 50nm pore diameters for a 20nm node process…and saw excessive yield losses along with extreme yield variability. “We characterized pressure-drop as a function of flow-rate, and looked at various filter performances for both 20nm and 40nm particles,” explained Armour. “We implemented a new filter, and lo and behold saw a step function increase in our yields. Defect densities dropped dramatically.” Tracking the yields over time showed that the variability was significantly reduced around the higher yield-entitlement level.
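
In its simplest form, the characterization Armour describes reduces to fitting pressure-drop vs. flow-rate curves for each candidate filter and comparing the flow-resistance slopes; a sketch with invented numbers follows:

    # Sketch: compare two photoresist filters by their (assumed linear,
    # Darcy-flow) pressure-drop vs. flow-rate behavior. Data are illustrative.
    import numpy as np

    flow_ml_min = np.array([2.0, 4.0, 6.0, 8.0])      # dispense flow rates
    dp_filter_a = np.array([3.1, 6.0, 9.2, 12.1])     # psi, 50nm-pore filter
    dp_filter_b = np.array([5.0, 10.2, 15.1, 20.3])   # psi, tighter-pore filter

    # Slope (psi per ml/min) is a flow-resistance figure of merit: a tighter
    # filter pays a pressure penalty the dispense hardware must tolerate.
    k_a = np.polyfit(flow_ml_min, dp_filter_a, 1)[0]
    k_b = np.polyfit(flow_ml_min, dp_filter_b, 1)[0]
    print(f"filter A: {k_a:.2f} psi/(ml/min), filter B: {k_b:.2f} psi/(ml/min)")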

Airborne Molecular Contamination (AMC)

Airborne Molecular Contamination (AMC) is ‘public enemy number one’ in 20nm-node and below fabs around the world. “In one case there were forest fires in Sumatra and the smoke was going into the atmosphere and actually went into our air intakes in a high volume fab in Taiwan thousands of miles away, and we saw a spike in hydrogen-sulfide,” confided Armour. “It increased our copper CMP defects, due to copper migration. After we installed higher-quality AMC filters for the make-up air units we saw dramatic improvement in copper defects. So what is most important is that you have real-time on-line monitoring of AMC levels.”

Building collaborative relationships with vendors is critical for troubleshooting component issues and improving component quality. “Partnering with suppliers like Entegris is absolutely essential,” continued Armour. “On AMCs for example, we have had a very close partnership that developed out of a team working together at our Inotera fab in Taiwan. There are thousands of important technologies that we need to leverage now to guarantee high yields in leading-node fabs.” The Figure shows just some of the AMCs that must be monitored in real-time.

Big Data

The only way to manage all of this complexity is with “Big Data,” and in addition to the primary process parameters that must be tracked, there are many essential facilities inputs to analytics:

  • Environmental Parameters – temperature, humidity, pressure, particle count, AMCs, etc.
  • Equipment Parameters – run state, motor current, vibration, valve position, etc.
  • Effluent Parameters – cooling water, vacuum, UPW, chemicals, slurries, gases, etc.

“Conventional wisdom is that process tools create 90% of your defect density loss, but that’s changing toward facilities now,” said Armour. “So why not apply the same methodologies within facilities that we do in the fab?” SPC is after-the-fact reactive, while APC is real-time fault detection on input variables, including such parameters as vibration or flow-rate of a pump.
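
The distinction Armour draws can be sketched in a few lines; the thresholds and readings below are invented for illustration:

    # Sketch: SPC flags output drift after a lot has already run, while
    # APC-style fault detection interdicts on an input variable in real time.
    def spc_violations(measurements, mean, sigma):
        """After-the-fact 3-sigma check on a completed lot's output data."""
        return [x for x in measurements if abs(x - mean) > 3 * sigma]

    def apc_interdict(vibration_mm_s, limit=4.0):
        """Real-time check on an input variable; trip before product is hit."""
        return vibration_mm_s > limit

    # SPC: the excursion is only seen once the lot's data are reviewed.
    print(spc_violations([24.9, 25.1, 25.0, 26.2], mean=25.0, sigma=0.3))

    # APC: streaming pump-vibration readings trip the interdiction first.
    for reading in [1.2, 1.3, 4.8]:
        if apc_interdict(reading):
            print(f"fault at {reading} mm/s: hold tool, page facilities")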

“Never enough data,” enthused Armour. “In terms of monitoring input variables, we do this through the PLCs and basically use SCADA to do the fault-detection interdiction on the critical input variables. This has been proven to be highly effective, providing a lot of protection, and letting me sleep better at night.”

Micron also uses these data to provide site-to-site comparisons. “We basically drive our laggard sites to meet our world-class sites in terms of reducing variation on facility input variables,” explained Armour. “We’re improving our forecasting as a result of this capability, and ultimately protecting our fab yields. Again, the last thing a fab manager wants to see is facilities causing yield loss and variation.”

—E.K.

Applied Materials Releases Selective Etch Tool

Wednesday, June 29th, 2016


By Ed Korczynski, Sr. Technical Editor

Applied Materials has disclosed commercial availability of its new Selectra™ selective-etch twin-chamber hardware for the company’s high-volume manufacturing (HVM) Producer® platform. Using standard fluorine and chlorine gases already employed in traditional Reactive Ion Etch (RIE) chambers, the new tool provides atomic-level precision in the selective removal of materials in the 3D device structures increasingly used for the most advanced silicon ICs. The tool is already in use at three customer fabs for finFET logic HVM and at two memory fabs, with >350 chambers expected to have shipped to many customers by the end of 2016.

Figure 1 shows a simplified cross-sectional schematic of the Selectra chamber, where the dashed white line indicates some manner of screening functionality so that “ions are blocked, chemistry passes through,” according to the company. In an exclusive interview with Solid State Technology, company representatives declined to disclose hardware details. “We are using typical chemistries that are used in the industry,” explained Ajay Bhatnagar, managing director of Selective Removal Products for Applied Materials. “If there are specific new applications needed then we can use new chemistry. We have a lot of IP on how we filter ions and how we allow radicals to combine on the wafer to create selectivity.”

FIG 1: Simplified cross-sectional schematic of a silicon wafer being etched by the neutral radicals downstream of the plasma in the Selectra chamber. (Source: Applied Materials)

From first principles we can assume that the ion filtering is accomplished with some manner of electrically-grounded metal screen. This etch technology accomplishes similar process results to Atomic Layer Etch (ALE) systems sold by Lam, while avoiding the need for specialized self-limiting chemistries and the accompanying chamber throughput reductions associated with pulse-purge process recipes.

“What we are doing is being able to control the amount of radicals coming to the wafer surface and controlling the removal rates very uniformly across the wafer surface,” asserted Bhatnagar. “If you have this level of atomic control then you don’t need the self-limiting capability. Most of our customers are controlling process with time, so we don’t need to use self-limiting chemistry.” Applied Materials claims that this allows the Selectra tool to have higher relative productivity compared to an ALE tool.

Due to the intrinsic 2D resolutions limits of optical lithography, leading IC fabs now use multi-patterning (MP) litho flows where sacrificial thin-films must be removed to create the final desired layout. Due to litho limits and CMOS device scaling limits, 2D logic transistors are being replaced by 3D finFETs and eventually Gate-All-Around (GAA) horizontal nanowires (NW). Due to dielectric leakage at the atomic scale, 2D NAND memory is being replaced by 3D-NAND stacks. All of these advanced IC fab processes require the removal of atomic-scale materials with extreme selectivity to remaining materials, so the Selectra chamber is expected to be a future work-horse for the industry.

When the industry moves to GAA-NW transistors, alternating layers of Si and SiGe will be grown on the wafer surface, 2D patterned into fins, and then the sacrificial SiGe must be selectively etched to form 3D arrays of NW. Figure 2 shows the SiGe etched from alternating Si/SiGe stacks using a Selectra tool, with sharp Si corners after etch indicating excellent selectivity.

FIG 2: SEM cross-section showing excellent etch of SiGe within alternating Si/SiGe layers, as will be needed for Gate-All-Around (GAA) horizontal NanoWire (NW) transistor formation. (Source: Applied Materials)

“One of the fundamental differences between this system and old downstream plasma ashers is that it was designed to provide extreme selectivity to different materials,” said Matt Cogorno, global product manager of Selective Removal Products for Applied Materials. “With this system we can provide silicon to titanium-nitride selectivity at 5000:1, or silicon to silicon-nitride selectivity at 2000:1. This is accomplished with the unique hardware architecture in the chamber combined with how we mix the chemistries. Also, there is no polymer formation in the etch process, so after etching there are no additional processing issues with the need for ashing and/or a wet-etch step to remove polymers.”
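
A back-of-envelope reading of those selectivity ratios (illustrative arithmetic only, not Applied’s data): the collateral loss from the non-target film is simply the etched depth divided by the selectivity.

    # Sketch: collateral loss of the non-target film = etched depth / selectivity.
    def nontarget_loss_nm(etched_nm, selectivity):
        return etched_nm / selectivity

    # Removing 20nm of silicon at the quoted ratios costs well under an
    # atomic monolayer (~0.25nm) of the adjacent film:
    print(nontarget_loss_nm(20.0, 2000))   # vs. silicon-nitride  -> 0.01nm
    print(nontarget_loss_nm(20.0, 5000))   # vs. titanium-nitride -> 0.004nm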

The system can also be used for dry cleaning and surface preparation, due to its extreme selectivity and damage-free material removal. “You can control the removal rates,” explained Cogorno. “You don’t have ions on the wafer, but you can modulate the number of radicals coming down.” For HVM of ICs with atomic-scale device structures, this new tool can widen process windows and reduce costs compared to both dry RIE and wet etching.

—E.K.

79 GHz CMOS RADAR Chips for Cars from Imec and Infineon

Tuesday, May 24th, 2016


By Ed Korczynski, Sr. Technical Editor

As unveiled at the annual Imec Technology Forum in Brussels (itf2016.be), Infineon Technologies AG (infineon.com) and imec (imec.be) are working on highly integrated CMOS-based 79 GHz sensor chips for automotive radar applications. Imec provides expertise in high-frequency system, circuit, and antenna design for radar applications, complementing Infineon’s experience as holder of the world’s top market share in commercial radar sensor chips. Infineon and imec expect functional CMOS sensor chip samples in the third quarter of 2016. A complete radar system demonstrator is scheduled for the beginning of 2017.

Whether or not fully automated cars and trucks will be traveling on roads soon, today’s drivers want more sensors to safely avoid accidents in conditions of limited visibility. Today’s vehicles equipped with driver-assistance functions typically carry up to three radar systems. In a future with fully automated cars, up to ten radar systems and ten more sensor systems using cameras or lidar (https://en.wikipedia.org/wiki/Lidar) could be needed. Short-range radar (SRR) would look for side objects, medium-range radar (MRR) would scan widely for objects up to 50m in front and in back, and long-range radar (LRR) would focus up to 250m in front and in back for high-speed collision avoidance.

“Infineon enables the radar-based safety cocoon of the partly and fully automated car,” said Ralf Bornefeld, Vice President & General Manager, Sense & Control, Infineon Technologies AG. “In the future, we will manufacture radar sensor chips as a single-chip solution in a classic CMOS process for applications like automated parking. Infineon will continue to set industry standards in radar technology and quality.”

The Figure shows the evolution of radar technology over recent decades, leading to the current miniaturization using solid-state silicon CMOS. Key to the successful development of this 79 GHz demonstrator was the choice of 28nm CMOS technology. Imec has been refining that technology for years, as shown at ISSCC (isscc.org): a 28nm transmitter chip in 2013, a 28nm transmit-and-receive (“transceiver”) chip in 2014, and finally a single chip combining the transceiver with analog-digital converters (ADC), phase-locked loops (PLL), and digital components in 2015. Long-term supply of eventual commercial chips should be ensured by using 28nm technology, which is known as a “long-lived” node.

“We are excited to work with Infineon as a valuable partner in our R&D program on advanced CMOS-based 77 GHz and 79 GHz radar technology,” stated Wim Van Thillo, program director perceptive systems at imec. “Compared to the mainstream 24 GHz band, the 77 GHz and 79 GHz bands enable a finer range, Doppler and angular resolution. With these advantages, we aim to realize radar prototypes with integrated multiple-input, multiple-output (MIMO) antennas that not only detect large objects, but also pedestrians and bikers and thus contribute to a safer environment for all.”
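
The resolution advantage Van Thillo cites can be made concrete with the standard radar relation ΔR = c/2B; the band allocations below are typical figures assumed for illustration, not imec’s numbers:

    # Sketch: range resolution of an FMCW radar is c/(2*B), where B is the
    # chirp bandwidth. Assumed allocations: ~250 MHz usable at 24 GHz,
    # up to 4 GHz in the 77-81 GHz band.
    C_M_S = 3.0e8   # speed of light, m/s

    def range_resolution_cm(bandwidth_hz):
        return 100.0 * C_M_S / (2.0 * bandwidth_hz)

    print(f"24 GHz band: {range_resolution_cm(250e6):.0f} cm")   # ~60 cm
    print(f"79 GHz band: {range_resolution_cm(4e9):.1f} cm")     # ~3.8 cm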

Since the aesthetics are always important for buyers, automobile companies have been challenged to integrate all of the desired sensors into vehicles in an invisible manner. “The designers hate what they call the ‘warts’ on car bumpers that are the small holes needed for the ultrasonic sensors currently used,” explained Van Thillo in a press conference during ITF2016.

In an ITF2016 presentation, Infineon CEO Reinhard Ploss discussed how the company works with industrial partners to create competitive commercial products. “When we first developed RADAR, there was a collaboration between the Tier-1 car companies and ourselves,” explained Ploss. “The key lies in the algorithms needed to process the data, since the raw data stream is essentially useless. The next generation of differentiation for semiconductors will be how to integrate algorithms. In effect, how do you translate ‘pixels’ into ‘optics’ without an expensive microprocessor?”

Evolution of radar technology over time has reached the miniaturization of 79 GHz using 28nm silicon CMOS technology. Imec is now also working on 140 GHz radar chips. (Source: imec)

—E.K.

Leti’s CoolCube 3D Transistor Stacking Improves with Qualcomm Help

Wednesday, April 27th, 2016

By Ed Korczynski, Sr. Technical Editor

As previously covered by Solid State Technology, CEA-Leti in France has been developing monolithic transistor stacking based on laser re-crystallization of active silicon in upper layers, called “CoolCube” (TM). Leading mobile chip supplier Qualcomm has been working with Leti on CoolCube R&D since late 2013 and, based on preliminary results, has opted to continue collaborating with the goal of building a complete ecosystem that takes the technology from design to fabrication.

“The Qualcomm Technologies and Leti teams have demonstrated the potential of this technology for designing and fabricating high-density and high-performance chips for mobile devices,” said Karim Arabi, vice president of engineering, Qualcomm Technologies, Inc. “We are optimistic that this technology could address some of the technology scaling issues and this is why we are extending our collaboration with Leti.” As part of the collaboration, Qualcomm Technologies and Leti are sharing the technology through flexible, multi-party collaboration programs to accelerate adoption.

Olivier Faynot, micro-electronic component section manager of CEA-Leti, explained in an exclusive interview with Solid State Technology and SemiMD: “Today we have a strong focus on CMOS over CMOS integration, and this is the primary integration that we are pushing. What we see today is that the integration of NMOS over PMOS is interesting and suitable for new material incorporation such as III-V and germanium.”

Table: Critical thermal budget steps summary in a planar FDSOI integration and CoolCube process for top FET in 3DVLSI. (Source: VLSI Symposium 2015)

The Table shows that CMOS over CMOS integration has met transistor performance goals with low-temperature processes, such that the top transistors have at least 90% of the performance of the bottom ones. Faynot says that recent results for transistors are meeting specification, while there is still work to be done on inter-tier metal connections. For advanced ICs there is a lot of interconnect routing congestion around the contacts and the metal-1 level, so inter-tier connection (formerly termed the more generic “local interconnect”) levels are needed to route some gates at the bottom level for connection to the top level.

“The main focus now is on the thermal budget for the integration of the inter-tier level,” explained Faynot. “To do this, we are not just working on the processing but also working closely with the designers. For example, depending on the material chosen for the metal inter-tier there will be different limits on the metal link lengths.” Tungsten is relatively more stable than copper, but with higher electrical resistance for inherently lower limits on line lengths. Additional details on such process-design co-dependencies will be disclosed during the 2016 VLSI Technology Symposium, chaired by Raj Jammy.

When the industry decides to integrate III-V and Ge alternate-channel materials in CMOS, the different processing conditions for each should make NMOS over PMOS CoolCube a relatively easy performance extension. “Three-fives and germanium are basically materials with low thermal budgets, so they would be most compatible with CoolCube processing,” reminded Faynot. “To me, this kind of technology would be very interesting for mobile applications, because it would achieve a circuit where the length of the wires would be shortened. We would expect to save in area, and have less of a trade-off between power-consumption and speed.”

“This is a new wave that CoolCube is creating and it has been possible thanks to the interest and support of Qualcomm Technologies, which is pushing the technological development in a good direction and sending a strong signal to the microelectronics community,” said Leti CEO Marie Semeria. “Together, we aim to build a complete ecosystem with foundries, equipment suppliers, and EDA and design houses to assemble all the pieces of the puzzle and move the technology into the product-qualification phase.”

—E.K.

Controlling Variabilities When Integrating IC Fab Materials

Friday, April 15th, 2016


By Ed Korczynski, Senior Technical Editor, SemiMD/Solid State Technology

Semiconductor integrated circuit (IC) manufacturing has always relied upon the supply of critical materials from a global supply chain. Now that shrinks of IC feature sizes have begun to reach economic limits, future functionality improvements in ICs are increasingly derived from the use of new materials. The Critical Materials Conference 2016—to be held May 5-6 in Hillsboro, Oregon (cmcfabs.org)—will explore best practices in the integration of novel materials into manufacturing. Dr. David Thompson, Senior Director, Center of Excellence in Chemistry, Applied Materials will present on “Agony in New Material Introductions – minimizing and correlating variabilities,” which he was willing to discuss in advance with SemiMD.

Korczynski: With more and more materials being considered for use in high-volume manufacturing (HVM) of advanced ICs, how do you begin to selectively screen out materials that will not work for one reason or another to be able to reach the best new material for a target application?

Thompson: While there’s ‘no one size fits all’ solution to this, it typically starts with a review of what’s available and known about the current offerings. With respect to the talk at the CMC, we’ll review the challenges we run into after the materials system and chemistries are set and have been proven generally viable, but still require significant optimization in order to get acceptable yields for manufacturing. It’s a very long road from device proof of concept on a new materials system to a viable manufacturing process.

Korczynski: Since new materials are being considered for use on the atomic-scale in advanced devices, doesn’t all of this have to be done with control at the atomic scale?

Thompson: For the material on the chip, many mainstream analytical techniques are used to achieve atomic-level control, including TEMs and AFMs with atomic resolution during film development for many applications. Unfortunately, this resolution is not available for the chemicals we’re relying on to deposit these materials. For a typical precursor that weighs in the 200 Dalton range, a gram of precursor may have 5 × 10^20 molecules. That’s a lot of molecules. Even with ppb (10^-9) resolutions on analytical, you’re still dealing with invisible populations of >10^10 molecules. It gets worse. While trace-metals analysis can hit ppb levels, molecular analysis techniques are typically limited to 0.1 to 0.01 percent resolution for most semiconductor precursors, and there may be impurities which are invisible to routine analytical techniques.

Ultimately, we rely on analytical techniques to control the gross parameters and disciplined process controls to verify suppliers produce the same compositions the same way, and to manage impurities. On the process and hardware side, it’s like threading the needle trying to get the right film at the right throughput, in a process space that’s as tolerant as possible to the inevitable variability in these chemistries.
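
The orders of magnitude in Thompson’s answer are easy to reproduce; a quick sketch using his own round numbers:

    # Sketch: even at a best-case 1 ppb analytical resolution, the population
    # of molecules that can hide below the detection floor is enormous.
    molecules_per_gram = 5e20   # Thompson's figure for a ~200 Dalton precursor
    analytical_floor = 1e-9     # 1 ppb
    invisible = molecules_per_gram * analytical_floor
    print(f"{invisible:.0e} molecules can hide below a 1 ppb floor")  # 5e+11, i.e. >10^10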

Korczynski: With all of this investment in developing one specialty material supplier for advanced IC manufacturing, what is the cost to develop and qualify a second source?

Thompson: Generally, it’s not sustainable to release a product with dual specialty-material sources. The problem with dual-sourcing is that chemical suppliers protect their knowledge: not just simple IP, but also their sub-supply-chains and proprietary methods of production, transport, and delivery. However, given how trace elements in the formulation can change depending on the conditions the molecules experience over time, the customer in many cases needs to develop two separate sub-recipes based on the specific vendor’s chemistry being used. So redundancy in the supply chain is prudent, as is making sure the vendor can produce the material in different locations.

There are countless examples over the last 20 years of what I like to call ‘the agony of the supply-chain,’ when a process got locked into using a material whose only supply was a Ph.D. chemist making it in small batches in a lab. In most cases the initial batch of any new molecule is made at a scale that would fit in a coffee mug. Sometimes, though, scaling up to the first industrial-scale batch can alter impurity factors that change yields on the wafer, even with improved purification. So while a customer would like to keep using small-batch production, that isn’t sustainable, yet trying to qualify a second vendor in this environment presents significant challenges.

Korczynski: Can you share an example with us of how your team brought a source of subtle variation under control?

Thompson: We had a process using a new metal film, and in early development everything looked great. Eventually we observed a drift of process results that was more pronounced with some ampoules and less so with others. The root cause initially eluded us. Then a bright Ph.D. on our team noted that the supplier did not report a particular contaminant that would tend to be present as a byproduct of the reaction. The supplier confirmed it was present, and variable at 100-300 ppm concentrations in the blend. This contaminant was more volatile than the main component, due to vapor-pressure differences, and much more reactive with the substrate/wafer. It was found that this variability in the chemistry induced the process variation on the wafer (as shown in Figure 1).

FIGURE 1: Resolution of sequential wafer drift via impurity management.

Chasing impurities and understanding their impact requires rigor and a lot of data collection. There’s no Star Trek analyzer we can use to give us knowledge of all impurities present and the role of those impurities on the process. Many impurities are invisible to routine analytical techniques, so we work very closely with vendors to establish a chemistry analytical protocol for each precursor that may consist of 5-10 different techniques. For the impurities we can’t detect we rely on excellent manufacturing process control and sub-supply sourcing management.
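
A toy model of the mechanism Thompson describes can show why early and late draws from an ampoule deliver different chemistry; every number below is invented for illustration:

    # Rayleigh-style depletion sketch: each vapor draw removes material
    # enriched in the volatile impurity by an assumed relative volatility.
    alpha = 5.0                   # assumed impurity/main-component volatility
    x = 200e-6                    # impurity mole fraction at start (200 ppm)
    charge = 1.0                  # relative ampoule charge remaining
    for draw in range(10):
        dn = 0.02 * charge        # assume 2% of the charge vaporized per draw
        y = min(alpha * x, 1.0)   # impurity fraction in the withdrawn vapor
        x = (x * charge - y * dn) / (charge - dn)
        charge -= dn
    print(f"impurity left in liquid after 10 draws: {x * 1e6:.0f} ppm")  # ~85 ppm

The blend drifts with every draw, so wafers processed from a fresh ampoule and a nearly empty one see measurably different impurity delivery, which is the ampoule-to-ampoule signature the team observed.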

Korczynski: Is the supply-chain for advanced precursors for deposition and etch supplying everything we need in early R&D?

Thompson: New precursor ideation, the science that leads to new classes of compounds with new reactivity that Roy Gordon, or more recently Chuck Winter, have been doing in academia, is critically important; while there are a few academics doing excellent work in this space, in general there’s not enough focus on this topic. While we see many IP-protected molecules, too often they are obvious, simple modifications to one skilled in the art, consisting of merely adding a functional group off of a ring, or mixing and matching known ligand systems. We don’t see a lot of disruptive chemistries. The industry is hunting for differentiated reactivity, and evolutionary precursor-development approaches generally aren’t sufficiently disruptive. While this research is useful in terms of tuning a vapor pressure or thermal stability, it only very rarely produces a differentiated reactivity.

Korczynski: Do we need new methodologies to more efficiently manage all of this?

Thompson: Applied has made significant investments over the last 5 years to help accelerate the readiness of new materials across the board. One of the best things about working at Applied is the rate at which we can learn and build an ecosystem around a new material. With our strength in chemistry, deposition, CMP, etch, metrology and a host of other technologies, we get a fast, strong feedback loop going to accelerate issue discovery, resolution and general learning around new materials.

On the chemical supply-chain front, the need is to make sure that chemical vendors accelerate their analytical chemistry development on new materials. Correlating the variability of chemistry to process results and ultimately yield is the real battle. The more knowledge we have of a chemistry moving into development, the faster learning can occur. I explain to my team that we can’t be proactive about things we didn’t anticipate. Having to develop the analytical technique to see the impurity responsible for causing (or resolving) a variability means starting out at a significant disadvantage. However, we’ve seen a good response from suppliers on new materials, and significant improvement in the early learnings necessary to minimize the agony of new material introductions.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that testability has also been designed in, so the device can be exercised both functionally and structurally in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening uses very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold digital becomes analog; so while the usual concepts still work for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed with three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and marshaling of the data they generate across multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st generation of IoT devices likely include wide varieties of solutions for different market segments such as industrial vs. retail vs. consumer, or will most devices use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels, while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn cycle in Consumer is faster than in Enterprise. Today’s wearables or smart-phones are good reference examples. This will however vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, where consumer home use has longer cycles closer to industrial market requirements. We believe that the lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across what are, for practical purposes, infinite tradeoffs to be made.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) toward broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low-cost and small-size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use 45nm/65nm nodes, which are “Node -3” (N-3) relative to the sub-20nm-node chips in HVM. Five years from now, when the bleeding edge will use 10nm-node technology, will IoT chips still use N-3 28nm-node technology (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50 DMIPS. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecast to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm down to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart meters and fitness/medical monitoring will need systems with more processing power (<300 DMIPS). These products will be implemented in 40nm, 28nm, and GLOBALFOUNDRIES’ new 22nm FDSOI technology, which uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired (Ethernet, PLC) connectivity protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000 DMIPS) running high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on a high-resolution display. Connectivity will include BLE, WiFi, and cellular, with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with sufficient digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer is that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and a RF-connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time, as the short-run volumes characteristic of IoT demand are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State of the art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations for co-simulation of the MEMS device in the context of the IoT system.

Cadence Adds New Tools for Analog Design, Enhances Layout

Wednesday, April 6th, 2016


By Jeff Dorsch, Contributing Editor

Cadence Design Systems today is introducing new tools within its Virtuoso Analog Design Environment (ADE), along with enhancements to the Virtuoso Layout Suite.

New to Virtuoso ADE are the Virtuoso ADE Explorer, Virtuoso ADE Assembler, and Virtuoso ADE Verifier.

“The new Virtuoso ADE Verifier technology and the Virtuoso ADE Assembler technology run plan capability make our design teams more productive,” said Yanqiu Diao, deputy general manager of the Turing Processor business unit at HiSilicon Technologies Co., Ltd. “Through our early use of the new Cadence Virtuoso ADE product suite, we’ve found that we can improve analog IP verification productivity by approximately 30 percent and reduce verification issues by one-half. Our smartphone and network chip projects should benefit from these latest capabilities.”

Steve Lewis, product marketing director for Cadence’s Custom IC & PCB Group, said the electronic design automation company’s Virtuoso ADE L, XL, and GXL tools “will be kept, will be maintained,” with Cadence “taking that technology to the next level.”

Virtuoso ADE Verifier is “the brand-new kid on the block,” Lewis said in an interview. The tool advances analog verification technology, according to Cadence, and offers an integrated dashboard for engineers to employ.

Under international standards for automotive vehicles, medical equipment, military/aerospace systems, and other products, suppliers “have to trace every aspect of your design,” he noted. “All has to be documented.”

The digital side of chip design addressed those issues about a decade ago, according to Lewis. Such recordkeeping and documentation are “far less common on the analog,” he said. “It’s no longer okay to say the analog takes care of itself.”

Changes in analog design projects were typically tracked in spreadsheet programs, which don’t connect to the Virtuoso suite, Lewis noted, adding, “Now, I know who’s working on what.”

The new analog design tools “add a little bit more granularity” with real-number models, Lewis said. “It’s not quite SPICE,” he admitted.

Regarding Virtuoso ADE Assembler, “we made it look like ADE XL,” Lewis said, so users should have a shorter learning curve with the new tool. Virtuoso ADE Explorer provides what Cadence calls a complete corners and Monte Carlo environment for finding and correcting variation problems.

Cadence is also offering a Virtuoso Variation Option, providing fast Monte Carlo analysis for FinFET chips with 16-nanometer or smaller dimensions.

The enhancements in the Virtuoso Layout Suite are a 10x to 100x improvement in graphics rendering performance; real-time customization of Module Generators with a simpler and more visual approach; and new structured device-level routing capabilities that are said to enhance routing productivity by up to 50 percent.

“We actually made significant changes in layout for L, XL,” addressing “current techniques, current designs,” Lewis commented.


Molecular Modeling of Materials Defects for Yield Recovery

Monday, March 21st, 2016


By Ed Korczynski, Sr. Technical Editor

New materials are being integrated into High Volume Manufacturing (HVM) of semiconductor ICs, while old materials are being extended with more stringent specifications. Defects within materials cause yield losses in HVM fabs, and engineers must identify the specific source of an observed defect before corrective steps can be taken. Honeywell Electronic Materials has been using molecular modeling software provided by Scienomics to both develop new materials and to modify old materials. Modeling allowed Honeywell to uncover the origin of subtle solvation-based film defects within Bottom Anti-Reflective Coatings (BARC) which were degrading yield in a customer’s lithographic process module.

Scienomics sponsored a Materials Modeling and Simulations online seminar on February 26th of this year, featuring Dr. Nancy Iwamoto of Honeywell discussing how Scienomics software was used to accelerate response to a customer’s manufacturing yield loss. “This was a product running at a customer line,” explained Iwamoto, “and we needed to find the solution.” The product was a Bottom Anti-Reflective Coating (BARC) organo-silicate polymer delivered in solution form and then spun on wafers to a precise thickness.

Originally observed by fab engineers during optical inspection as faint spots 1-2 microns in size within the BARC, the new defect type was difficult to see, yet could be correlated to lithographic yield loss. The defects appeared to be discrete within the film instead of on the top surface, so the source was likely some manner of particle, yet filters did not capture these particles.

The filter captured some particles rich in silicon, as well as other particles rich in carbon. Sequential filtration showed that particles were passing through impossibly small pores, which suggested that the particles were deformable, gel-like phases. The challenge was to find the material-handling or processing situation that produced thermodynamically possible and kinetically probable conditions for forming such gels.

Fig: Materials Processes and Simulations (MAPS) gives researchers access to visualization and analysis tools in a single user interface together with access to multiple simulation engines. (Source: Scienomics)

Molecular modeling and simulation is a powerful technique that can be used for materials design, functional upgrades, process optimization, and manufacturing. The Figure shows a dashboard for Scienomics’ modeling platform. Best practices in molecular modeling to find out-of-control parameters in HVM include a sequential workflow:

  • Build correct models based on experimental observables,
  • Simulate potential molecular structures based on known chemicals and hierarchical models,
  • Analyze manufacturing variabilities to identify excursion sources, and
  • Propose remedy for failure elimination.

Honeywell Electronic Materials researchers had very few experimental observables from which to start: the phenomenon was rare (yet affected yield), the particles were not filterable, and from thermodynamic hydrolysis parameters the phase had to be quasi-stable. Re-testing of product and re-examination of Outgoing Quality Control (OQC) data at the Honeywell production site showed that the molecular weight of the product was consistent with the desired distribution. There was also an observed BARC thickness increase of ~1nm on the wafer associated with the presence of these defects.

Using the modeling platform, Honeywell looked at the solubility parameters for different small molecular chains off of known-branched back-bone centers. Gel-like agglomerations could certainly be formed under the wrong conditions. Once the agglomerations form, they are not very stable so they can probably dis-aggregate when being forced through a filter and then re-aggregate on the other side.

What conditions could induce gel formation? After a few weeks of modeling, it was determined that temperature variations had the greatest influence on agglomeration, and that the variability was strongest near the ~250K recommended for storage. Storage at 230K resulted in measurably worse agglomeration, and any extreme in heating/cooling ramp rate tended to reduce solubility.

Molecular modeling was used in a forensic manner to find that the root cause of gel-like defects was related to thermal history:

*   Thermodynamics determined the most likely oligomers that could agglomerate,

*   Temperature-dependent solubility models determined which particles would reach wafers.

Because of the on-wafer BARC thickness increase of ~1nm, fab engineers could use all of the molecular modeling information to trace the temperature variation to bottles installed in the lithographic track tool. The fab was able to change specifications for the storage and handling of the BARC bottles to bring the process back into control.

EUV Resists and Stochastic Processes

Friday, March 4th, 2016


By Ed Korczynski, Sr. Technical Editor

In an exclusive interview with Solid State Technology during SPIE-AL this year, imec Advanced Patterning Department Director Greg McIntyre said, “The big encouraging thing at the conference is the progress on EUV.” The event included a plenary presentation by TSMC Nanopatterning Technology Infrastructure Division Director and SPIE Fellow Anthony Yen on “EUV Lithography: From the Very Beginning to the Eve of Manufacturing.” TSMC is currently learning about EUVL using 10nm- and 7nm-node device test structures, with plans to deploy it for high volume manufacturing (HVM) of contact holes at the 5nm node. Intel researchers confirm that they plan to use EUVL in HVM for the 7nm node.

Recent improvements in EUV source technology (80W source power shown by the end of 2014, 185W by the end of 2015, and 200W now shown by ASML) have been enabled by multiple laser pulses tuned to best produce plasma from tin droplets. TSMC reports that 518 wafers per day were processed by its ASML EUV stepper, and that the tool was available ~70% of the time. TSMC showed that a single EUVL exposure can create 46nm-pitch lines/spaces using a complex 2D mask, as is needed for patterning the metal2 layer within multilevel on-chip interconnects.

To improve throughput in HVM, resist sensitivity to the 13.5nm-wavelength radiation of EUV needs to be improved, while the line-width roughness (LWR) specification must be held to low single-digit nm. With a 250W source and 25 mJ/cm2 resist sensitivity, an EUV stepper should be able to process ~100 wafers per hour (wph), which should allow for affordable use when matched with other lithography technologies.
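
That ~100 wph figure can be sanity-checked with rough arithmetic; the optical-train transmission and per-wafer overhead below are placeholder assumptions for the sketch, not ASML numbers:

    # Rough scanner-throughput arithmetic (illustrative assumptions only).
    import math
    source_w = 250.0               # EUV source power at intermediate focus
    transmission = 0.01            # assumed source-to-wafer optical efficiency
    dose_j_cm2 = 25e-3             # 25 mJ/cm2 resist sensitivity
    wafer_cm2 = math.pi * 15.0**2  # 300mm wafer area

    exposure_s = dose_j_cm2 * wafer_cm2 / (source_w * transmission)
    overhead_s = 25.0              # assumed stage/wafer-handling time per wafer
    wph = 3600.0 / (exposure_s + overhead_s)
    print(f"~{exposure_s:.0f}s exposure/wafer -> ~{wph:.0f} wph")  # order of 100 wph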

Researchers from Inpria—the company working on metal-oxide-based EUVL resists—looked at the absorption efficiencies of different resists, and found that the absorption of the metal oxide based resists was ≈ 4 to 5 times higher than that of the Chemically-Amplified Resist (CAR). The Figure shows that higher absorption allows for the use of proportionally thinner resist, which mitigates the issue of line collapse. Resist as thin as 18nm has been patterned over a 70nm thin Spin-On Carbon (SOC) layer without the need for another Bottom Anti-Reflective Coating (BARC). Inpria today can supply 26 mJ/cm2 resist that creates 4.6nm LWR over 140nm Depth of Focus (DoF).

To prevent pattern collapse, the thickness of resist is reduced proportionally to the minimum half-pitch (HP) of lines/spaces. (Source: JSR Micro)

JEIDEC researchers presented their summary of the trade-off between sensitivity and LWR for metal-oxide-based EUV resists:  ultra high sensitivity of 7 mJ/cm2 to pattern 17nm lines with 5.6nm LWR, or low sensitivity of 33 mJ/cm2 to pattern 23nm lines with 3.8nm LWR.

In a keynote presentation, Seong-Sue Kim of Samsung Electronics stated that, “Resist pattern defectivity remains the biggest issue. Metal-oxide resist development needs to be expedited.” The challenge is that defectivity at the nanometer-scale derives from “stochastics,” which means random processes that are not fully predictable.

Stochastics of Nanopatterning

Anna Lio, from Intel’s Portland Technology Development group, stated that the challenges of controlling resist stochastics, “could be the deal breaker.” Intel ran a 7-month test of vias made using EUVL, and found that via critical dimensions (CD), edge-placement-error (EPE), and chain resistances all showed good results compared to 193i. However, there are inherent control issues due to the random nature of phenomena involved in resist patterning:  incident “photons”, absorption, freed electrons, acid generation, acid quenching, protection groups, development processes, etc.

Stochastics for novel chemistries can only be controlled by understanding in detail the sources of variability. From first principles, EUV resist reactions are not photon-chemistry but really radiation-chemistry, with many different radiation paths and generated electrons. If every via in an advanced logic IC must work, then the failure rate must be on the order of 1 part-per-trillion (ppt), and stochastic variability from non-homogeneous chemistries must be eliminated.
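
The ppt requirement follows from simple yield statistics; the via count below is an assumed round number for an advanced logic die:

    # Sketch: chip yield limited by N independent vias, each failing with
    # probability p, is (1-p)^N ~ exp(-p*N).
    import math
    n_vias = 1e10                  # assumed vias per die
    for p in (1e-12, 1e-11, 1e-10):
        print(f"p = {p:.0e}: via-limited yield ~ {math.exp(-p * n_vias):.1%}")
    # 1e-12 -> 99.0%, 1e-11 -> 90.5%, 1e-10 -> 36.8%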

Consider that for a CAR designed for 15 mJ/cm2 sensitivity, there will be just:

  • 145 photons/nm2 for 193nm, and
  • 10 photons/nm2 for EUV.
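
Those photon densities follow directly from dose divided by photon energy, E = hc/λ; a quick check:

    # photons per nm^2 = dose / (h*c/lambda); 15 mJ/cm2 = 1.5e-16 J/nm2.
    H, C = 6.626e-34, 2.998e8
    dose_j_nm2 = 15e-3 * 1e-14
    for name, lam in (("193nm ArF", 193e-9), ("13.5nm EUV", 13.5e-9)):
        photons = dose_j_nm2 / (H * C / lam)
        print(f"{name}: ~{photons:.0f} photons/nm2")  # ~146 and ~10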

To improve sensitivity and suppress failures from photon shot-noise, we need to increase resist absorption, and also re-consider chemical amplification mechanisms. “The requirements will be the same for any resist and any chemistry,” reminded Lio. “We need to evaluate all resists at the same exposure levels and at the same rules, and look at different features to show stochastics like in the tails of distributions. Resolution is important but stochastics will rule our world at the dimensions we’re dealing with.”

—E.K.
