
Posts Tagged ‘IC’


79 GHz CMOS RADAR Chips for Cars from Imec and Infineon

Tuesday, May 24th, 2016


By Ed Korczynski, Sr. Technical Editor

As unveiled at the annual Imec Technology Forum in Brussels (itf2016.be), Infineon Technologies AG (infineon.com) and imec (imec.be) are working on highly integrated CMOS-based 79 GHz sensor chips for automotive radar applications. Imec provides expertise in high-frequency system, circuit, and antenna design for radar applications, complementing the know-how Infineon has accumulated as the holder of the world’s top market share in commercial radar sensor chips. Infineon and imec expect functional CMOS sensor chip samples in the third quarter of 2016. A complete radar system demonstrator is scheduled for the beginning of 2017.

Whether or not fully automated cars and trucks will be traveling on roads soon, today’s drivers already want more sensors so they can safely avoid accidents in conditions of limited visibility. A vehicle equipped with driver-assistance functions today typically carries up to three radar systems. In a future with fully automated cars, up to ten radar systems and ten more sensor systems using cameras or lidar (https://en.wikipedia.org/wiki/Lidar) could be needed. Short-range radar (SRR) would look for objects to the sides, medium-range radar (MRR) would scan widely for objects up to 50m in front and behind, and long-range radar (LRR) would focus up to 250m in front and behind for high-speed collision avoidance.

“Infineon enables the radar-based safety cocoon of the partly and fully automated car,” said Ralf Bornefeld, Vice President & General Manager, Sense & Control, Infineon Technologies AG. “In the future, we will manufacture radar sensor chips as a single-chip solution in a classic CMOS process for applications like automated parking. Infineon will continue to set industry standards in radar technology and quality.”

The Figure shows the evolution of radar technology over recent decades, leading to the current miniaturization using solid-state silicon CMOS. Key to the successful development of this 79 GHz demonstrator was the choice of 28nm CMOS technology. Imec has been refining this technology for years, as shown at ISSCC (isscc.org): a 28nm transmitter chip in 2013, a 28nm transmit-and-receive (“transceiver”) chip in 2014, and in 2015 a single chip combining a transceiver with analog-digital converters (ADC), phase-locked loops (PLL), and digital components. Long-term supply of eventual commercial chips should be ensured by using 28nm technology, which is considered a “long-lived” node.

“We are excited to work with Infineon as a valuable partner in our R&D program on advanced CMOS-based 77 GHz and 79 GHz radar technology,” stated Wim Van Thillo, program director perceptive systems at imec. “Compared to the mainstream 24 GHz band, the 77 GHz and 79 GHz bands enable a finer range, Doppler and angular resolution. With these advantages, we aim to realize radar prototypes with integrated multiple-input, multiple-output (MIMO) antennas that not only detect large objects, but also pedestrians and bikers and thus contribute to a safer environment for all.”
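The resolution advantage Van Thillo describes follows from basic FMCW radar arithmetic: range resolution is set by the swept bandwidth, and the 77/79 GHz bands allow far wider sweeps than the 24 GHz band. The sketch below illustrates the relationship; the bandwidth figures (roughly 4 GHz usable around 79 GHz versus roughly 250 MHz at 24 GHz) are typical regulatory allocations, not numbers from imec or Infineon.

C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """FMCW radar range resolution: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

for band, bw in [("24 GHz band, ~250 MHz sweep", 250e6), ("79 GHz band, ~4 GHz sweep", 4e9)]:
    print(f"{band}: range resolution ~ {range_resolution(bw) * 100:.1f} cm")
# -> roughly 60 cm vs. ~4 cm, which is why the wider band can separate
#    pedestrians and bikes from nearby objects and clutter.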

Since the aesthetics are always important for buyers, automobile companies have been challenged to integrate all of the desired sensors into vehicles in an invisible manner. “The designers hate what they call the ‘warts’ on car bumpers that are the small holes needed for the ultrasonic sensors currently used,” explained Van Thillo in a press conference during ITF2016.

In an ITF2016 presentation, Infineon CEO Reinhard Ploss discussed how Infineon works with industrial partners to create competitive commercial products. “When we first developed RADAR, there was a collaboration between the Tier-1 car companies and ourselves,” explained Ploss. “The key lies in the algorithms needed to process the data, since the raw data stream is essentially useless. The next generation of differentiation for semiconductors will be how to integrate algorithms. In effect, how do you translate ‘pixels’ into ‘optics’ without an expensive microprocessor?”

Evolution of radar technology over time has reached the miniaturization of 79 GHz using 28nm silicon CMOS technology. Imec is now also working on 140 GHz radar chips. (Source: imec)

—E.K.

Leti’s CoolCube 3D Transistor Stacking Improves with Qualcomm Help

Wednesday, April 27th, 2016

By Ed Korczynski, Sr. Technical Editor

As previously covered by Solid State Technology, CEA-Leti in France has been developing monolithic transistor stacking based on laser re-crystallization of active silicon in upper layers, a technology called “CoolCube” (TM). Leading mobile chip supplier Qualcomm has been working with Leti on CoolCube R&D since late 2013 and, based on preliminary results, has opted to continue collaborating with the goal of building a complete ecosystem that takes the technology from design to fabrication.

“The Qualcomm Technologies and Leti teams have demonstrated the potential of this technology for designing and fabricating high-density and high-performance chips for mobile devices,” said Karim Arabi, vice president of engineering, Qualcomm Technologies, Inc. “We are optimistic that this technology could address some of the technology scaling issues and this is why we are extending our collaboration with Leti.” As part of the collaboration, Qualcomm Technologies and Leti are sharing the technology through flexible, multi-party collaboration programs to accelerate adoption.

Olivier Faynot, micro-electronic component section manager of CEA-Leti, explained in an exclusive interview with Solid State Technology and SemiMD, “Today we have a strong focus on CMOS over CMOS integration, and this is the primary integration that we are pushing. What we see today is that the integration of NMOS over PMOS is interesting and suitable for new material incorporation such as III-V and germanium.”

Table: Critical thermal budget steps summary in a planar FDSOI integration and CoolCube process for top FET in 3DVLSI. (Source: VLSI Symposium 2015)

The Table shows that CMOS over CMOS integration has met transistor performance goals with low-temperature processes, such that the top transistors have at least 90% of the performance compared to the bottom. Faynot says that recent results for transistors are meeting specification, while there is still work to be done on inter-tier metal connections. For advanced ICs there is a lot of interconnect routing congestion around the contacts and the metal-1 level, so inter-tier connection (formerly termed the more generic “local interconnect”) levels are needed to route some gates at the bottom level for connection to the top level.

“The main focus now is on the thermal budget for the integration of the inter-tier level,” explained Faynot. “To do this, we are not just working on the processing but also working closely with the designers. For example, depending on the material chosen for the metal inter-tier there will be different limits on the metal link lengths.” Tungsten is more thermally stable than copper, but its higher electrical resistance inherently imposes lower limits on line lengths. Additional details on such process-design co-dependencies will be disclosed during the 2016 VLSI Technology Symposium, chaired by Raj Jammy.

When the industry decides to integrate III-V and Ge alternate-channel materials in CMOS, the different processing conditions for each should make NMOS over PMOS CoolCube a relatively easy performance extension. “Three-fives and germanium are basically materials with low thermal budgets, so they would be most compatible with CoolCube processing,” reminded Faynot. “To me, this kind of technology would be very interesting for mobile applications, because it would achieve a circuit where the length of the wires would be shortened. We would expect to save in area, and have less of a trade-off between power-consumption and speed.”

“This is a new wave that CoolCube is creating and it has been possible thanks to the interest and support of Qualcomm Technologies, which is pushing the technological development in a good direction and sending a strong signal to the microelectronics community,” said Leti CEO Marie Semeria. “Together, we aim to build a complete ecosystem with foundries, equipment suppliers, and EDA and design houses to assemble all the pieces of the puzzle and move the technology into the product-qualification phase.”

—E.K.

Controlling Variabilities When Integrating IC Fab Materials

Friday, April 15th, 2016


By Ed Korczynski, Senior Technical Editor, SemiMD/Solid State Technology

Semiconductor integrated circuit (IC) manufacturing has always relied upon the supply of critical materials from a global supply chain. Now that shrinks of IC feature sizes have begun to reach economic limits, future functionality improvements in ICs are increasingly derived from the use of new materials. The Critical Materials Conference 2016—to be held May 5-6 in Hillsboro, Oregon (cmcfabs.org)—will explore best practices in the integration of novel materials into manufacturing. Dr. David Thompson, Senior Director, Center of Excellence in Chemistry, Applied Materials, will present on “Agony in New Material Introductions – minimizing and correlating variabilities,” which he was willing to discuss in advance with SemiMD.

Korczynski: With more and more materials being considered for use in high-volume manufacturing (HVM) of advanced ICs, how do you begin to selectively screen out materials that will not work for one reason or another to be able to reach the best new material for a target application?

Thompson: While there’s ‘no one size fits all’ solution to this, it typically starts with a review of what’s available and known about the current offerings. With respect to the talk at the CMC, we’ll review the challenges we run into after the materials system and chemistries are set and have been proven generally viable, but still require significant optimization in order to get acceptable yields for manufacturing. It’s a very long road from device proof of concept on a new materials system to a viable manufacturing process.

Korczynski: Since new materials are being considered for use on the atomic-scale in advanced devices, doesn’t all of this have to be done with control at the atomic scale?

Thompson: For the material on the chip, many mainstream analytical techniques are used to achieve atomic-level control, including TEMs and AFMs with atomic resolution during film development for many applications. Unfortunately, this resolution is not available for the chemicals we’re relying on to deposit these materials. For a typical precursor that weighs in the 200 Dalton range, a gram of precursor may have 5 × 10²⁰ molecules. That’s a lot of molecules. Even with ppb (1 part in 10⁹) resolutions on analytical, you’re still dealing with invisible populations of >10¹⁰ molecules. It gets worse. While trace metals analysis can hit ppb levels, molecular analysis techniques are typically limited to 0.1 to 0.01 percent resolutions for most semiconductor precursors, and there may be impurities which are invisible to routine analytical techniques.

Ultimately, we rely on analytical techniques to control the gross parameters and disciplined process controls to verify suppliers produce the same compositions the same way, and to manage impurities. On the process and hardware side, it’s like threading the needle trying to get the right film at the right throughput, in a process space that’s as tolerant as possible to the inevitable variability in these chemistries.
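To put Thompson’s scale argument in perspective, the back-of-envelope arithmetic below estimates how many molecules sit in a gram of a ~200 Dalton precursor and how many remain invisible below a parts-per-billion detection floor. The numbers are order-of-magnitude illustrations only, using Avogadro’s number rather than any specific precursor data.

AVOGADRO = 6.022e23   # molecules per mole
molar_mass = 200.0    # g/mol, the "typical precursor" size cited above
mass_g = 1.0          # one gram of precursor

molecules = mass_g / molar_mass * AVOGADRO   # on the order of 10**21
invisible_at_ppb = molecules * 1e-9          # population hiding below a ppb detection floor

print(f"molecules per gram: ~{molecules:.1e}")
print(f"undetected at 1 ppb resolution: ~{invisible_at_ppb:.1e} molecules")
# Consistent with the interview's figures: roughly 10^20-10^21 molecules per
# gram, and an invisible population well above the 10^10 cited above.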

Korczynski: With all of this investment in developing one specialty material supplier for advanced IC manufacturing, what is the cost to develop and qualify a second source?

Thompson: Generally, it’s not sustainable to release a product with dual specialty material sources. The problem with dual-sourcing is that chemical suppliers protect not just their IP, but also their sub-supply-chains and proprietary methods of production, transport and delivery. However, given how trace elements in the formulation can change depending on the conditions the molecules experience over time, the customer in many cases needs to develop two separate sub-recipes based on the specific vendor’s chemistry they are using. So redundancy in the supply chain is prudent, as is making sure the vendor can produce the material in different locations.

There are countless examples over the last 20 years of what I like to call ‘the agony of the supply-chain,’ when a process got locked into using a material whose only supply was from a Ph.D. chemist making it in small batches in a lab. In most cases the initial batch of any new molecule is made at a scale that would fit in a coffee mug. Sometimes, though, scaling up to the first industrial-scale batch can alter impurity factors that change yields on the wafer, even with improved purification. So while a customer would like to keep using small-batch production, it’s not sustainable, yet trying to qualify a second vendor in this environment presents significant challenges.

Korczynski: Can you share an example with us of how your team brought a source of subtle variation under control?

Thompson: We had a process using a new metal film, and in the early development everything looked great. Eventually we observed a drift of process results that was more pronounced with some ampoules and less so with others. The root cause initially eluded us. Then a bright Ph.D. on our team noted that it was interesting that the supplier did not report a particular contaminant that would tend to be present as a byproduct of the reaction. The supplier confirmed it was present, and variable at concentrations of 100-300 ppm in the blend. This contaminant was relatively more volatile than the main component due to vapor pressure differences, and much more reactive with the substrate/wafer. It was found that this variability in the chemistry induced the process variation on the wafer (as shown in Figure 1).

Figure 1: Resolution of sequential wafer drift via impurity management.

Chasing impurities and understanding their impact requires rigor and a lot of data collection. There’s no Star Trek analyzer we can use to give us knowledge of all impurities present and the role of those impurities on the process. Many impurities are invisible to routine analytical techniques, so we work very closely with vendors to establish a chemistry analytical protocol for each precursor that may consist of 5-10 different techniques. For the impurities we can’t detect we rely on excellent manufacturing process control and sub-supply sourcing management.

Korczynski: Is the supply-chain for advanced precursors for deposition and etch supplying everything we need in early R&D?

Thompson: New precursor ideation—the science that leads to new classes of compounds with new reactivity, which Roy Gordon, or more recently Chuck Winter, have been doing in academia—is critically important, and while there are a few academics doing excellent work in this space, in general there’s not enough focus on this topic. While we see many IP-protected molecules, too often they are obvious, simple modifications to one skilled in the art, consisting of merely adding a functional group off of a ring, or mixing and matching known ligand systems. We don’t see a lot of disruptive chemistries. The industry is hunting for differentiated reactivity, and evolutionary precursor development approaches generally aren’t sufficiently disruptive. While this research is useful in terms of tuning a vapor pressure or thermal stability, it only very rarely produces a differentiated reactivity.

Korczynski: Do we need new methodologies to more efficiently manage all of this?

Thompson: Applied has made significant investments over the last 5 years to help accelerate the readiness of new materials across the board. One of the best things about working at Applied is the rate at which we can learn and build an ecosystem around a new material. With our strength in chemistry, deposition, CMP, etch, metrology and a host of other technologies, we get a fast, strong feedback loop going to accelerate issue discovery, resolution and general learning around new materials.

On the chemical supply-chain front, the need is to make sure that chemical vendors accelerate their analytical chemistry development on new materials. Correlating the variability of chemistry to process results and ultimately yield is the real battle. The more knowledge we have of a chemistry moving into development, the faster learning can occur. I explain to my team that we can’t be proactive in response to things we didn’t anticipate. Having to develop the analytical technique to see the impurity responsible for causing (or resolving) a variability means starting out at a significant disadvantage. However, we’ve seen a good response from suppliers on new materials and significant improvement on the early learnings necessary to minimize the agony of new material introductions.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that the testability has also been designed to be structurally functional in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening uses very-low-voltage (VLV) operation to detect latent defects, but at very low voltage close to the transistor threshold, digital becomes analog; therefore, while the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed keeping three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and the marshaling of the data they generate in ways that apply to multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st-generation of IoT devices likely include a wide variety of solutions for different market-segments such as industrial vs. retail vs. consumer, or will most devices use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels, while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn-cycle in Consumer is higher / faster compared to Enterprise. Today’s wearables or smart-phones are good reference examples. This will however vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, while consumer home use has longer cycles, closer to industrial market requirements. We believe that the familiar lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across what are, for practical purposes, infinite tradeoffs.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at the IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) communication towards broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.
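For context, the compound-growth arithmetic implied by those figures is sketched below; the projection simply extends the quoted 21.1% CAGR from the 2015 base and is illustrative only, not taken from the IC Insights report.

# Simple compound-growth illustration using the figures quoted above
# ($15B of IoT semiconductor sales in 2015, 21.1% CAGR).
base_year, base_revenue_b = 2015, 15.0
cagr = 0.211

for year in range(2016, 2021):
    projected = base_revenue_b * (1 + cagr) ** (year - base_year)
    print(f"{year}: ~${projected:.1f}B")
# -> roughly $18B in 2016 growing to ~$39B by 2020 if the quoted CAGR holds.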

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low-cost and small-size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use the 45nm/65nm nodes, which are “Node -3” (N-3) compared to sub-20nm-node chips in HVM. Five years from now, when the bleeding-edge will use 10nm-node technology, will IoT chips still use an N-3 of the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50 DMIPS. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecasted to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm down to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000 DMIPS) and run high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on high-resolution displays. Connectivity will include BLE, WiFi and cellular with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that the IoT ASIC will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and an RF connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge, and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time as short run volumes characteristic of IoT demands are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State of the art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations for co-simulation of the MEMS device in the context of the IoT system.
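As a rough illustration of the reduced-order modeling idea Williams describes, the sketch below takes a large second-order mechanical system (a stand-in for FEA output), keeps only its lowest-frequency eigenmodes, and produces the small matrices a behavioral model would encode. This is a generic modal-truncation example in Python, not Mentor’s actual MEMS flow, and the Verilog-A generation step is only noted in the comments.

# Minimal sketch of reduced-order modeling (ROM): project a large mechanical
# system onto a few dominant eigenmodes for fast system-level simulation.
import numpy as np

n = 200                                   # full-order degrees of freedom
k, m = 1.0, 1.0                           # toy spring constant and mass

# Stiffness matrix of a clamped mass-spring chain (tridiagonal); mass matrix is identity
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
M = m * np.eye(n)

# Solve the eigenproblem K x = w^2 M x (M is identity here)
eigvals, eigvecs = np.linalg.eigh(K)

# Keep the r lowest-frequency modes; Phi projects between full and reduced space
r = 4
Phi = eigvecs[:, :r]
K_r = Phi.T @ K @ Phi                     # r x r reduced stiffness
M_r = Phi.T @ M @ Phi                     # r x r reduced mass

print("full DOF:", n, "-> reduced DOF:", r)
print("kept mode frequencies (rad/s):", np.sqrt(eigvals[:r]).round(4))
# The small reduced matrices (plus input/output projections) are what a
# behavioral model, e.g. in Verilog-A, would encode for co-simulation with the IC.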

Cadence Adds New Tools for Analog Design, Enhances Layout

Wednesday, April 6th, 2016


By Jeff Dorsch, Contributing Editor

Cadence Design Systems today is introducing new tools within its Virtuoso Analog Design Environment (ADE), along with enhancements to the Virtuoso Layout Suite.

New to Virtuoso ADE are the Virtuoso ADE Explorer, Virtuoso ADE Assembler, and Virtuoso ADE Verifier.

“The new Virtuoso ADE Verifier technology and the Virtuoso ADE Assembler technology run plan capability make our design teams more productive,” said Yanqiu Diao, deputy general manager of the Turing Processor business unit at HiSilicon Technologies Co., Ltd. “Through our early use of the new Cadence Virtuoso ADE product suite, we’ve found that we can improve analog IP verification productivity by approximately 30 percent and reduce verification issues by one-half. Our smartphone and network chip projects should benefit from these latest capabilities.”

Steve Lewis, product marketing director for Cadence’s Custom IC & PCB Group, said the electronic design automation company’s Virtuoso ADE L, XL, and GXL tools “will be kept, will be maintained, and taking that technology to the next level.”

Virtuoso ADE Verifier is “the brand-new kid on the block,” Lewis said in an interview. The tool advances analog verification technology, according to Cadence, and offers an integrated dashboard for engineers to employ.

Under international standards for automotive vehicles, medical equipment, military/aerospace systems, and other products, suppliers “have to trace every aspect of your design,” he noted. “All has to be documented.”

The digital side of chip design addressed those issues about a decade ago, according to Lewis. Such recordkeeping and documentation are “far less common on the analog,” he said. “It’s no longer okay to say the analog takes care of itself.”

Changes in analog design projects were typically tracked in spreadsheet programs, which don’t connect to the Virtuoso suite, Lewis noted, adding, “Now, I know who’s working on what.”

The new analog design tools “add a little bit more granularity” with real-number models, Lewis said. “It’s not quite SPICE,” he admitted.

Regarding Virtuoso ADE Assembler, “we made it look like ADE XL,” Lewis said, so users should have a shorter learning curve with the new tool. Virtuoso ADE Explorer provides what Cadence calls a complete corners and Monte Carlo environment for finding and correcting variation problems.

Cadence is also offering a Virtuoso Variation Option, providing fast Monte Carlo analysis for FinFET chips with 16-nanometer or smaller dimensions.

The enhancements in the Virtuoso Layout Suite include a 10x to 100x improvement in graphics rendering performance; real-time customization of Module Generators with a simpler and more visual approach; and new structured device-level routing capabilities that are said to improve routing productivity by up to 50 percent.

“We actually made significant changes in layout for L, XL,” addressing “current techniques, current designs,” Lewis commented.

Cadence Virtuoso Analog Design Environment (ADE): Reimagining analog design with emphasis on usability, performance, and innovation

Molecular Modeling of Materials Defects for Yield Recovery

Monday, March 21st, 2016


By Ed Korczynski, Sr. Technical Editor

New materials are being integrated into High Volume Manufacturing (HVM) of semiconductor ICs, while old materials are being extended with more stringent specifications. Defects within materials cause yield losses in HVM fabs, and engineers must identify the specific source of an observed defect before corrective steps can be taken. Honeywell Electronic Materials has been using molecular modeling software provided by Scienomics to both develop new materials and to modify old materials. Modeling allowed Honeywell to uncover the origin of subtle solvation-based film defects within Bottom Anti-Reflective Coatings (BARC) which were degrading yield in a customer’s lithographic process module.

Scienomics sponsored a Materials Modeling and Simulations online seminar on February 26th of this year, featuring Dr. Nancy Iwamoto of Honeywell discussing how Scienomics software was used to accelerate response to a customer’s manufacturing yield loss. “This was a product running at a customer line,” explained Iwamoto, “and we needed to find the solution.” The product was a Bottom Anti-Reflective Coating (BARC) organo-silicate polymer delivered in solution form and then spun on wafers to a precise thickness.

Originally observed during optical inspection by fab engineers as vague spots 1-2 microns in size within the BARC, the new defect type was difficult to see, yet could be correlated to lithographic yield loss. The defects appeared to be discrete within the film rather than on the top surface, so the source was likely some manner of particle, yet filters did not capture these particles.

The filter captured some particles rich in silicon, as well as other particles rich in carbon. Sequential filtration showed that particles were passing through impossibly small pores, which suggested that the particles were built of deformable gel-like phases. The challenge was to find the material-handling or processing situation that resulted in thermodynamically possible and kinetically probable conditions for forming such gels.

Fig: Materials Processes and Simulations (MAPS) gives researchers access to visualization and analysis tools in a single user interface together with access to multiple simulation engines. (Source: Scienomics)

Molecular modeling and simulation is a powerful technique that can be used for materials design, functional upgrades, process optimization, and manufacturing. The Figure shows a dashboard for Scienomics’ modeling platform. Best practices in molecular modeling to find out-of-control parameters in HVM include a sequential workflow:

  • Build correct models based on experimental observables,
  • Simulate potential molecular structures based on known chemicals and hierarchical models,
  • Analyze manufacturing variabilities to identify excursion sources, and
  • Propose remedy for failure elimination.

Honeywell Electronic Materials researchers had very few experimental observables from which to start: the phenomenon is rare (yet affects yield), the particles are not filterable, and from thermodynamic hydrolysis parameters the phase must be quasi-stable. Re-testing of product and re-examination of Outgoing Quality Control (OQC) data at the Honeywell production site showed that the molecular weight of the product was consistent with the desired distribution. There was also an observed BARC thickness increase of ~1nm on the wafer associated with the presence of these defects.

Using the modeling platform, Honeywell looked at the solubility parameters for different small molecular chains off of known-branched back-bone centers. Gel-like agglomerations could certainly be formed under the wrong conditions. Once the agglomerations form, they are not very stable so they can probably dis-aggregate when being forced through a filter and then re-aggregate on the other side.

What conditions could induce gel formation? After a few weeks of modeling, it was determined that temperature variations had the greatest influence on the agglomeration, and that variability was strongest at the ~250K recommended for storage. Storage at 230K resulted in measurably worse agglomeration, and any extreme in heating/cooling ramp rate tended to reduce solubility.

Molecular modeling was used in a forensic manner to find that the root cause of gel-like defects was related to thermal history:

*   Thermodynamics determined the most likely oligomers that could agglomerate,

*   Temperature-dependent solubility models determined which particles would reach wafers.

Because of the on-wafer BARC thickness increase of ~1nm, fab engineers could use all of the molecular modeling information to trace the temperature variation to bottles installed in the lithographic track tool. The fab was able to change specifications for the storage and handling of the BARC bottles to bring the process back into control.

EUV Resists and Stochastic Processes

Friday, March 4th, 2016


By Ed Korczynski, Sr. Technical Editor

In an exclusive interview with Solid State Technology during SPIE-AL this year, imec Advanced Patterning Department Director Greg McIntyre said, “The big encouraging thing at the conference is the progress on EUV.” The event included a plenary presentation by TSMC Nanopatterning Technology Infrastructure Division Director and SPIE Fellow Anthony Yen on “EUV Lithography: From the Very Beginning to the Eve of Manufacturing.” TSMC is currently learning about EUVL using 10nm- and 7nm-node device test structures, with plans to deploy it for high volume manufacturing (HVM) of contact holes at the 5nm node. Intel researchers confirm that they plan to use EUVL in HVM for the 7nm node.

Recent improvements in EUV source technology—80W source power had been shown by the end of 2014, 185W by the end of 2015, and 200W has now been shown by ASML—have been enabled by multiple laser pulses tuned to best produce plasma from tin droplets. TSMC reports that 518 wafers per day were processed by their ASML EUV stepper, and the tool was available ~70% of the time. TSMC shows that a single EUVL process can create 46nm-pitch lines/spaces using a complex 2D mask, as is needed for patterning the metal2 layer within multilevel on-chip interconnects.

To improve throughput in HVM, the resist sensitivity to the 13.5nm-wavelength radiation of EUV needs to be improved, while the line-width roughness (LWR) specification must be held to low single-digit nm. With a 250W source and 25 mJ/cm² resist sensitivity, an EUV stepper should be able to process ~100 wafers per hour (wph), which should allow for affordable use when matched with other lithography technologies.
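To first order, scanner throughput scales with source power divided by resist dose, since the source must deliver that dose over every exposed field. The sketch below anchors that scaling to the ~100 wph at 250W and 25 mJ/cm² quoted above; it ignores wafer-exchange and alignment overhead, so the outputs are illustrative only.

# Rough EUV throughput scaling: wph ~ (source power / resist dose), anchored
# to the article's data point of ~100 wph at 250 W and 25 mJ/cm^2.
ANCHOR_WPH, ANCHOR_POWER_W, ANCHOR_DOSE = 100.0, 250.0, 25.0  # dose in mJ/cm^2

def wph_estimate(source_power_w, dose_mj_cm2):
    return ANCHOR_WPH * (source_power_w / ANCHOR_POWER_W) * (ANCHOR_DOSE / dose_mj_cm2)

for power, dose in [(80, 25), (125, 25), (200, 25), (250, 35)]:
    print(f"{power:>3} W, {dose} mJ/cm^2 -> ~{wph_estimate(power, dose):.0f} wph")
# -> ~32, ~50, ~80, and ~71 wph respectively: both higher source power and more
#    sensitive resists are needed to make EUV throughput affordable.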

Researchers from Inpria—the company working on metal-oxide-based EUVL resists—looked at the absorption efficiencies of different resists, and found that the absorption of the metal-oxide-based resists was ≈4 to 5 times higher than that of Chemically-Amplified Resists (CAR). The Figure shows that higher absorption allows for the use of proportionally thinner resist, which mitigates the issue of line collapse. Resist as thin as 18nm has been patterned over a 70nm-thin Spin-On Carbon (SOC) layer without the need for another Bottom Anti-Reflective Coating (BARC). Inpria today can supply 26 mJ/cm² resist that creates 4.6nm LWR over 140nm Depth of Focus (DoF).

To prevent pattern collapse, the thickness of resist is reduced proportionally to the minimum half-pitch (HP) of lines/spaces. (Source: JSR Micro)

JEIDEC researchers presented their summary of the trade-off between sensitivity and LWR for metal-oxide-based EUV resists: ultra-high sensitivity of 7 mJ/cm² to pattern 17nm lines with 5.6nm LWR, or low sensitivity of 33 mJ/cm² to pattern 23nm lines with 3.8nm LWR.

In a keynote presentation, Seong-Sue Kim of Samsung Electronics stated that, “Resist pattern defectivity remains the biggest issue. Metal-oxide resist development needs to be expedited.” The challenge is that defectivity at the nanometer-scale derives from “stochastics,” which means random processes that are not fully predictable.

Stochastics of Nanopatterning

Anna Lio, from Intel’s Portland Technology Development group, stated that the challenges of controlling resist stochastics, “could be the deal breaker.” Intel ran a 7-month test of vias made using EUVL, and found that via critical dimensions (CD), edge-placement-error (EPE), and chain resistances all showed good results compared to 193i. However, there are inherent control issues due to the random nature of phenomena involved in resist patterning:  incident “photons”, absorption, freed electrons, acid generation, acid quenching, protection groups, development processes, etc.

Stochastics for novel chemistries can only be controlled by understanding in detail the sources of variability. From first-principles, EUV resist reactions are not photon-chemistry, but are really radiation-chemistry with many different radiation paths and electrons which can be generated. If every via in an advanced logic IC must work then the failure rate must be on the order of 1 part-per-trillion (ppt), and stochastic variability from non-homogeneous chemistries must be eliminated.

Consider that for a CAR designed for 15 mJ/cm² sensitivity, there will be just:

145 photons/nm² for 193nm light, and

10 photons/nm² for EUV.
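Those photon counts follow directly from the photon energy at each wavelength, as the short check below shows (assuming 193nm and 13.5nm wavelengths and the 15 mJ/cm² dose quoted above).

# Reproduce the photons-per-nm^2 figures from first principles:
# photon count = dose / photon energy, with E = h*c / wavelength.
H = 6.626e-34        # Planck constant, J*s
C = 3.0e8            # speed of light, m/s
DOSE = 15e-3 * 1e-14 # 15 mJ/cm^2 expressed in J per nm^2 (1 cm^2 = 1e14 nm^2)

for name, wavelength_m in [("193nm ArF", 193e-9), ("13.5nm EUV", 13.5e-9)]:
    photon_energy = H * C / wavelength_m          # J per photon
    photons_per_nm2 = DOSE / photon_energy
    print(f"{name}: ~{photons_per_nm2:.0f} photons/nm^2")
# -> ~146 and ~10 photons/nm^2, in line with the figures above, and showing why
#    photon shot-noise is so much more severe for EUV resists.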

To improve sensitivity and suppress failures from photon shot-noise, we need to increase resist absorption, and also re-consider chemical amplification mechanisms. “The requirements will be the same for any resist and any chemistry,” reminded Lio. “We need to evaluate all resists at the same exposure levels and at the same rules, and look at different features to show stochastics like in the tails of distributions. Resolution is important but stochastics will rule our world at the dimensions we’re dealing with.”

—E.K.

Many Mixes to Match Litho Apps

Thursday, March 3rd, 2016


By Ed Korczynski, Sr. Technical Editor

“Mix and Match” has long been a mantra for lithographers in the deep-sub-wavelength era of IC device manufacturing. In general, forming patterns with resolution at minimum pitch as small as 1/4 the wavelength of light can be done using off-axis illumination (OAI) through reticle enhancement techniques (RET) on masks, using optical proximity correction (OPC) perhaps derived from inverse lithography technology (ILT). Lithographers can form 40-45nm wide lines and spaces at the same half-pitch using 193nm light (from ArF lasers) in a single exposure.
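That 40-45nm single-exposure figure follows from the Rayleigh criterion, HP = k1 x lambda / NA. The sketch below uses typical values for water-immersion ArF tools (NA of about 1.35 and k1 near its practical floor of about 0.28); these are common industry assumptions rather than figures from the article.

# Rayleigh-criterion estimate of the single-exposure half-pitch limit for 193i.
def half_pitch_nm(k1, wavelength_nm, numerical_aperture):
    return k1 * wavelength_nm / numerical_aperture

WAVELENGTH = 193.0   # nm, ArF excimer laser
NA_IMMERSION = 1.35  # typical maximum NA for water-immersion scanners

for k1 in (0.28, 0.30):
    hp = half_pitch_nm(k1, WAVELENGTH, NA_IMMERSION)
    print(f"k1={k1}: half-pitch ~ {hp:.0f} nm")
# -> ~40-43 nm, i.e. the single-exposure limit; anything tighter needs the
#    multi-patterning or NGL options discussed below.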

Figure 1 shows that application-specific tri-layer photoresists are used to reach the minimum resolution of 193nm-immersion (193i) steppers in a single exposure. Tighter half-pitch features can be created using all manner of multi-patterning processes, including Litho-Etch-Litho-Etch (LELE or LE2) using two masks for a single layer or Self-Aligned Double Patterning (SADP) using sidewall spacers to accomplish pitch-splitting. SADP has been used in high volume manufacturing (HVM) of logic and memory ICs for many years now, and Self-Aligned Quadruple Patterning (SAQP) has been used in HVM by at least one leading memory fab.

Fig.1: Basic tri-layer resist (TLR) technology uses thin Photoresist over silicon-containing Hard-Mask over Spin-On Carbon (SOC), for patterning critical layers of advanced ICs. (Source: Brewer Science)

Next-Generation Lithography (NGL) generally refers to any post-optical technology with at least some unique niche patterning capability of interest to IC fabs:  Extreme Ultra-Violet (EUV), Directed Self-Assembly (DSA), and Nano-Imprint Lithography (NIL). Though proponents of each NGL have dutifully shown capabilities for targeted mask layers for logic or memory, the capabilities of ArF dry and immersion (ArFi) scanners to process >250 wafers/hour with high uptime dominates the economics of HVM lithography.

The world’s leading lithographers gather each year in San Jose, California at SPIE’s Advanced Lithography conference to discuss how to extend optical lithography. So of all the NGL technologies, which will win out in the end?

It is looking most likely that the answer is “all of the above.” EUV and NIL could be used for single layers. For other unique patterning applications, ArF/ArFi steppers will be used to create a basic grid/template which will then be cut/trimmed using one of the available NGL technologies. Each mask layer in an advanced fab will need application-specific patterning integration, and one of the rare commonalities between all integrated litho modules is the overwhelming need to improve pattern overlay performance.

Naga Chandrasekaran, Micron Corp. vice president of Process R&D, provided a fantastic overview of the patterning requirements for advanced memory chips in a presentation during Nikon’s LithoVision technical symposium held February 21st in San Jose, California prior to the start of SPIE-AL. While resolution improvements are always desired, in the mix-and-match era the greatest challenges involve pattern overlay issues. “In high volume manufacturing, every nanometer variation translates into yield loss, so what is the best overlay that we can deliver as a holistic solution not just considering stepper resolution?” asks Chandrasekaran. “We should talk about cost per nanometer overlay improvement.”

Extreme Ultra-Violet (EUV)

As touted by ASML at SPIE-AL, the brightness, stability, and availability of tin-plasma EUV sources continue to improve, reaching 200W in the lab “for one hour, with full dose control,” according to Michael Lercel, ASML’s director of strategic marketing. ASML’s new TWINSCAN NXE:3350B EUVL scanners are now being shipped with 125W power sources, and Intel and Samsung Electronics reported running their EUV power sources at 80W over extended periods.

During Nikon’s LithoVision event, Mark Phillips, Intel Fellow and Director of Lithography Technology Development for Logic, summarized recent progress of EUVL technology: ~500 wafers-per-day is now standard, and ~1000 wafers-per-day can sometimes happen. However, since grids can be made with ArFi for 1/3 the cost of EUVL, even assuming best productivity for the latter, ArFi multi-patterning will continue to be used for most layers. “Resolution is not the only challenge,” reminded Phillips. “Total edge-placement-error in patterning is the biggest challenge to device scaling, and this limit comes before the device physics limit.”

Directed Self-Assembly (DSA)

DSA seems most suited for patterning the periodic 2D arrays used in memory chips such as DRAMs. “Virtual fabrication using directed self-assembly for process optimization in a 14nm DRAM node” was the title of a presentation at SPIE-AL by researchers from Coventor, in which DSA compared favorably to SAQP.

Imec presented electrical results of DSA-formed vias, providing insight on DSA processing variations altering device results. In an exclusive interview with Solid State Technology and SemiMD, imec’s Advanced Patterning Department Director Greg McIntyre reminds us that DSA could save one mask in the patterning of vias which can all be combined into doublets/triplets, since two masks would otherwise be needed to use 193i to do LELE for such a via array. “There have been a lot of patterning tricks developed over the last few years to be able to reduce variability another few nanometers. So all sorts of self-alignments.”

While DSA can be used for shrinking vias that are not doubled/tripled, there are commercially proven spin-on shrink materials that cost much less to use, as shown by Kaveri Jain and Scott Light from Micron in their SPIE-AL presentation, “Fundamental characterization of shrink techniques on negative-tone development based dense contact holes.” Chemical shrink processes primarily require control over times, temperatures, and ambients inside a litho track tool to be able to repeatably shrink contact-hole diameters by 15-25 nm.

Nano-Imprint Litho (NIL)

For advanced IC fab applications, the many different options for NIL technology have been narrowed to just one for IC HVM. The step-and-pattern technology that had been developed and trademarked as “Jet and Flash Imprint Lithography” (“J-FIL”) has been commercialized for HVM by Canon NanoTechnologies, formerly known as Molecular Imprints. Canon shows improvements in the NIL mask-replication process, since each production mask will need to be replicated from a written master. To use NIL in HVM, mask image-placement errors from replication will have to be reduced to ~1nm, while the currently available replication tool is reportedly capable of 2-3nm (3 sigma).

Figure 2 shows normalized costs modeled to produce 15nm half-pitch lines/spaces for different lithography technologies, assuming 125 wph for a single EUV stepper and 60 wph for a cluster of 4 NIL tools. Key to throughput is fast filling of the 26mmx33mm mold nano-cavities by the liquid resist, and proper jetting of resist drops over a thin adhesion layer enables filling times less than 1 second.

Fig.2: Relative estimated costs to pattern 15nm half-pitch lines/spaces for different lithography technologies, assuming 125 wph for a single EUV stepper and 60 wph for a cluster of 4 NIL tools. (Source: Canon)
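The throughput assumptions behind Figure 2 also imply a simple parity calculation, sketched below: at roughly 15 wph per imprint station, about nine NIL tools are needed to match the wafer flow of one 125-wph EUV scanner, and the cost question then hinges on the relative tool and mask-replication costs that Canon's model captures (not reproduced here).

# Throughput-parity arithmetic from the assumptions quoted above: 125 wph for
# one EUV scanner, 60 wph for a cluster of 4 NIL tools (~15 wph per tool).
import math

EUV_WPH = 125.0
NIL_CLUSTER_WPH = 60.0
NIL_TOOLS_PER_CLUSTER = 4

nil_wph_per_tool = NIL_CLUSTER_WPH / NIL_TOOLS_PER_CLUSTER
tools_for_parity = math.ceil(EUV_WPH / nil_wph_per_tool)

print(f"NIL throughput per tool: ~{nil_wph_per_tool:.0f} wph")
print(f"NIL tools needed to match one EUV scanner: {tools_for_parity}")
# -> ~9 imprint stations per EUV scanner; whether that wins on cost depends on
#    the per-tool and mask-replication costs modeled in Figure 2.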

Researchers from Toshiba and SK Hynix described evaluation results of a long-run defect test of NIL using the Canon FPA-1100 NZ2 pilot production tool, capable of 10 wafers per hour and 8nm overlay, in a presentation at SPIE-AL titled, “NIL defect performance toward high-volume mass production.” The team categorized defects that must be minimized into fundamentally different categories—template, non-filling, separation-related, and pattern collapse—and determined parallel paths to defect reduction to allow for using NIL in HVM of memory chips with <20nm half-pitch features.

—E.K.

Memristor Variants and Models from Knowm

Friday, January 22nd, 2016


By Ed Korczynski, Sr. Technical Editor

Knowm Inc. (www.knowm.org), a start-up pioneering next-generation advanced computing architectures and technology, recently announced the availability of two new variations of memristors targeting different neuromorphic applications. The company also announced raw device data available for purchase to help researchers develop and improve memristor models. These new Knowm offerings enable the next step in the R&D of radically new chips for pattern-recognition, machine-learning, and artificial intelligence (AI) in general.

There is general consensus among industry, academia, and government that future improvements in computing are now severely limited by the amount of energy it takes to use Von Neumann architectures. Consequently, the US White House has issued a grand challenge with the Energy-Efficient Computing: from Devices to Architectures (E2CDA) program (http://www.nsf.gov/pubs/2016/nsf16526/nsf16526.htm), which is actively soliciting proposals through March 28, 2016.

The Figure shows a schematic cross-section of Knowm’s memristor devices—with Tin (Sn) and Chromium (Cr) metal layers as new alternatives to tungsten (W)—along with the device I/V curves for each. “They differ in their activation threshold,” explained Knowm CEO and co-founder Alex Nugent in an exclusive interview with Solid State Technology. “As the activation thresholds become smaller you get reduced data retention, but higher cycle endurance. As that threshold increases you have to dissipate more energy per event, and the more energy you dissipate the faster it will burn out.” Knowm’s two new memristors, as well as the company’s previously announced device, are now available as unpackaged raw dice with masks designed for research probe stations.

Figure: Schematic cross-section of Knowm’s memristor devices using Tin (Sn) or Chromium (Cr) or tungsten (W) metal layers, along with the device I/V curves for each. (Source: Knowm)

Knowm is working on the simultaneous co-optimization of the entire “stack” from memristors to circuit architectures to application-specific algorithms. “The potential of memristors is so huge that we are seeing exponential growth in the literature, a sort of gold rush as engineers race to design new circuits and re-envision old circuits,” commented Knowm CEO and co-founder Alex Nugent. “The problem is that in the race to publish, circuit designers are adopting models that do not adequately describe real devices.” Knowm’s raw data includes AC, DC, pulse response, and retention for different memristors.
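As a generic illustration of the kind of behavioral model that might be fit to such raw data, the sketch below implements a simple threshold-type memristor: the internal state only moves when the applied voltage exceeds an activation threshold, and that threshold sets the retention-versus-switching-energy trade-off Nugent describes. This is a toy model with arbitrary parameters, not Knowm's device model.

# Generic threshold-type memristor behavioral model (illustrative only).
import numpy as np

G_OFF, G_ON = 1e-6, 1e-3      # bounding conductances, siemens (arbitrary)
V_T = 0.3                     # activation threshold, volts (arbitrary)
MU = 50.0                     # state mobility per volt-second (arbitrary)

def step(w, v, dt):
    """Advance internal state w (0..1); state only moves when |v| exceeds V_T."""
    overdrive = max(abs(v) - V_T, 0.0) * np.sign(v)
    w = w + MU * overdrive * dt
    return min(max(w, 0.0), 1.0)

def conductance(w):
    return G_OFF + w * (G_ON - G_OFF)

# Apply a triangular-in-time sinusoidal voltage sweep and record the I-V response
t = np.linspace(0, 1, 2001)
v = 0.8 * np.sin(2 * np.pi * t)           # +/-0.8 V sweep
w, currents = 0.05, []
for vi in v:
    w = step(w, vi, t[1] - t[0])
    currents.append(conductance(w) * vi)

print(f"final state w = {w:.2f}, peak current ~ {max(currents) * 1e3:.2f} mA")
# Raising V_T makes the state harder to disturb (better retention) at the cost
# of more energy per switching event, the trade-off Nugent describes above.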

Additional memristors are being developed by Knowm’s R&D lab partner Dr. Kris Campbell of  Boise State University (http://coen.boisestate.edu/kriscampbell/), using different metal layers to achieve different activation thresholds beyond the three shown to date. “She has discovered an algorithm for creating memristors along this dimension,” said Nugent. “From a physics perspective it makes sense that there would be devices with high cycle endurance but reduced data retention.”

“In the future what I imagine is a single chip with multiple memristors on it. Some will be volatile and very fast, while others will be slow,” continued Nugent. “Just like analog design today uses different capacitors, future neuromorphic chips would likely use memristors optimized for different changes in adaptation threshold. If you think about memristors as fundamental elements—as per Leon Chua (https://en.wikipedia.org/wiki/Leon_O._Chua)—then it makes sense that we’ll need different memristors.”

The application spaces for these devices have intrinsically different requirements for speed and retention. For example, to exploit these devices for pattern recognition and/or anomaly detection (keeping track of confidence in making temporal predictions), it seems best to choose relatively high activation thresholds because the number of operations is unlikely to burn out devices. Conversely, for circuits that constantly solve optimization problems, the best memristors would require low burn-out and thus low activation thresholds. However, analog applications are generally problematic because the existing memristors leak current, such that stored values degrade over time.

Knowm is shipping devices today, mostly to university researchers, and has tested thousands of devices itself. The Knowm memristors can be fabricated at <500°C using industry-standard unit-process steps, allowing for eventual integration with silicon CMOS “back-end” metallization layers. While still in early R&D, this technology could provide much of the foundation for post-Moore’s-Law silicon ICs.

—E.K.
