
Posts Tagged ‘fab’


Controlling Variabilities When Integrating IC Fab Materials

Friday, April 15th, 2016


By Ed Korczynski, Senior Technical Editor, SemiMD/Solid State Technology

Semiconductor integrated circuit (IC) manufacturing has always relied upon the supply of critical materials from a global supply chain. Now that shrinks of IC feature sizes have begun to reach economic limits, future functionality improvements in ICs are increasingly derived from the use of new materials. The Critical Materials Conference 2016—to be held May 5-6 in Hillsboro, Oregon (cmcfabs.org)—will explore best practices in the integration of novel materials into manufacturing. Dr. David Thompson, Senior Director, Center of Excellence in Chemistry, Applied Materials, will present on “Agony in New Material Introductions – minimizing and correlating variabilities,” which he was willing to discuss in advance with SemiMD.

Korczynski: With more and more materials being considered for use in high-volume manufacturing (HVM) of advanced ICs, how do you begin to screen out materials that will not work for one reason or another, so as to reach the best new material for a target application?

Thompson: While there’s no ‘one size fits all’ solution to this, it typically starts with a review of what’s available and known about the current offerings. With respect to the talk at the CMC, we’ll review the challenges we run into after the materials system and chemistries are set and have been proven generally viable, but still require significant optimization in order to get acceptable yields for manufacturing. It’s a very long road from device proof-of-concept on a new materials system to a viable manufacturing process.

Korczynski: Since new materials are being considered for use on the atomic-scale in advanced devices, doesn’t all of this have to be done with control at the atomic scale?

Thompson: For the material on the chip, many mainstream analytical techniques are used to achieve atomic-level control, including TEMs and AFMs with atomic resolution during film development for many applications. Unfortunately, this resolution is not available for the chemicals we’re relying on to deposit these materials. For a typical precursor that weighs in the 200 Dalton range, a gram of precursor may have 5 × 10²⁰ molecules. That’s a lot of molecules. Even with ppb (10⁻⁹) resolution on analytical techniques, you’re still dealing with invisible populations of >10¹⁰ molecules. It gets worse. While trace-metals analysis can hit ppb levels, molecular analysis techniques are typically limited to 0.1 to 0.01 percent resolution for most semiconductor precursors, and there may be impurities which are invisible to routine analytical techniques.

Ultimately, we rely on analytical techniques to control the gross parameters and disciplined process controls to verify suppliers produce the same compositions the same way, and to manage impurities. On the process and hardware side, it’s like threading the needle trying to get the right film at the right throughput, in a process space that’s as tolerant as possible to the inevitable variability in these chemistries.
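The scale Thompson describes is easy to verify with back-of-envelope arithmetic. The minimal Python sketch below assumes a 200 g/mol precursor and a 1 ppb detection floor; exact counts depend on the actual precursor and sample size.

    # Molecules in a gram of precursor, and the population hiding below
    # a parts-per-billion analytical detection limit.
    AVOGADRO = 6.022e23  # molecules per mole

    def molecules_per_gram(molar_mass_g_per_mol):
        """Number of molecules in one gram of a pure compound."""
        return AVOGADRO / molar_mass_g_per_mol

    precursor = molecules_per_gram(200.0)  # a ~200 Dalton precursor
    ppb_floor = precursor * 1e-9           # undetectable at 1 ppb resolution

    print(f"molecules per gram: {precursor:.1e}")  # ~3.0e21
    print(f"invisible at 1 ppb: {ppb_floor:.1e}")  # ~3.0e12, well above 10^10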

Korczynski: With all of this investment in developing one specialty material supplier for advanced IC manufacturing, what is the cost to develop and qualify a second source?

Thompson: Generally, it’s not sustainable to release a product with dual specialty material sources. The problem with dual-sourcing is that chemical suppliers protect their knowledge: not just their IP in the simple sense, but also their sub-supply-chains and proprietary methods of production, transport and delivery. However, given how trace elements in the formulation can change depending on the conditions the molecules experience over time, the customer in many cases needs to develop two separate sub-recipes based on the specific vendor’s chemistry they are using. So redundancy in the supply chain is prudent, as is making sure the vendor can produce the material in different locations.

There are countless examples over the last 20 years of what I like to call ‘the agony of the supply-chain’: a process gets locked into using a material when the only supply is from a Ph.D. chemist making it in small batches in a lab. In most cases the initial batch of any new molecule is made at a scale that would fit in a coffee mug. Sometimes, though, scaling up to the first industrial-scale batch can alter impurity profiles and change yields on the wafer, even with improved purification. So while a customer would like to keep using small-batch production, that is not sustainable, yet trying to qualify a second vendor in this environment presents significant challenges.

Korczynski: Can you share an example with us of how your team brought a source of subtle variation under control?

Thompson: We had a process using a new metal film, and in early development everything looked great. Eventually we observed a drift of process results that was more pronounced with some ampoules and less so with others. The root cause initially eluded us. Then a bright Ph.D. on our team noted it was interesting that the supplier did not report a particular contaminant that would tend to be present as a byproduct of the reaction. The supplier confirmed it was present at variable concentrations of 100-300 ppm in the blend. This contaminant was more volatile than the main component due to vapor pressure differences, and much more reactive with the substrate/wafer. This variability in the chemistry was found to induce the process variation on the wafer (as shown in Figure 1).

Figure 1: Resolution of sequential wafer drift via impurity management.

Chasing impurities and understanding their impact requires rigor and a lot of data collection. There’s no Star Trek analyzer we can use to give us knowledge of all impurities present and the role of those impurities on the process. Many impurities are invisible to routine analytical techniques, so we work very closely with vendors to establish a chemistry analytical protocol for each precursor that may consist of 5-10 different techniques. For the impurities we can’t detect we rely on excellent manufacturing process control and sub-supply sourcing management.

Korczynski: Is the supply-chain for advanced precursors for deposition and etch supplying everything we need in early R&D?

Thompson: New precursor ideation, the science that leads to new classes of compounds with new reactivity that Roy Gordon and, more recently, Chuck Winter have been doing in academia, is critically important; while there are a few academics doing excellent work in this space, in general there’s not enough focus on this topic. While we see many IP-protected molecules, too often they are simple modifications obvious to one skilled in the art, consisting of merely adding a functional group off of a ring, or mixing and matching known ligand systems. We don’t see a lot of disruptive chemistries. The industry is hunting for differentiated reactivity, and evolutionary precursor development approaches generally aren’t sufficiently disruptive. While this research is useful in terms of tuning a vapor pressure or thermal stability, it only very rarely produces a differentiated reactivity.

Korczynski: Do we need new methodologies to more efficiently manage all of this?

Thompson: Applied has made significant investments over the last 5 years to help accelerate the readiness of new materials across the board. One of the best things about working at Applied is the rate at which we can learn and build an ecosystem around a new material. With our strength in chemistry, deposition, CMP, etch, metrology and a host of other technologies, we get a fast, strong feedback loop going to accelerate issue discovery, resolution and general learning around new materials.

On the chemical supply-chain front, the need is to make sure that chemical vendors accelerate their analytical chemistry development on new materials. Correlating the variability of chemistry to process results and ultimately yield is the real battle. The more knowledge we have of a chemistry moving into development, the faster learning can occur. I explain to my team that we can’t be proactive about things we didn’t anticipate. Having to develop the analytical technique to see the impurity responsible for causing (or resolving) a variability means starting out at a significant disadvantage. However, we’ve seen a good response from suppliers on new materials and significant improvement in the early learnings necessary to minimize the agony of new material introductions.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that testability has also been designed to function structurally in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening uses very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold, digital becomes analog; so even if the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed with three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and marshaling of the data they generate across multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st generation of IoT devices likely include a wide variety of solutions for different market-segments such as industrial vs. retail vs. consumer, or will most devices use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels, while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn-cycle in Consumer is higher / faster compared to Enterprise. Today’s wearables or smart-phones are good reference examples. This will however vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, while consumer home use has longer cycles closer to industrial market requirements. We believe that the lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across what are, for practical purposes, infinite tradeoffs.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at the IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) going towards broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.
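As a quick arithmetic illustration of the growth rates quoted above, the minimal Python sketch below compounds the 2015 IC Insights base figures forward at the stated CAGRs; these are projections of the quoted rates only, not forecasts.

    # Compound the quoted 2015 base figures at the quoted CAGRs.
    def project(base_billions, cagr, years):
        """Compound a base value at a constant annual growth rate."""
        return base_billions * (1.0 + cagr) ** years

    iot_systems_2015 = 63.4  # $B IoT systems revenue, per the report
    iot_semis_2015 = 15.0    # $B IoT semiconductor sales, per the report

    for years_out in (1, 3, 5):
        print(2015 + years_out,
              f"systems ~${project(iot_systems_2015, 0.200, years_out):.0f}B,",
              f"semis ~${project(iot_semis_2015, 0.211, years_out):.1f}B")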

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low cost and small size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use the 45nm/65nm nodes, which are “Node minus 3” (N-3) compared to sub-20nm-node chips in HVM. Five years from now, when the bleeding-edge will use 10nm-node technology, will IoT chips still use N-3 at the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50DMIPS. Integrated radios (BLE, 802.15.4), security, Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecasted to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000DMIPS) and run high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF, migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on a high-resolution display. Connectivity will include BLE, WiFi and cellular with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and an RF connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge, and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time as short run volumes characteristic of IoT demands are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State-of-the-art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations, enabling co-simulation of the MEMS device in the context of the IoT system.
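The production flow Williams describes emits Verilog-A models; the Python toy below is not any vendor's tool, just a sketch of the projection step at the heart of reduced-order modeling: keep a few dominant modes of a large FEA system so that far fewer states need to be co-simulated.

    # Reduced-order modeling sketch: project a large linear system onto
    # its lowest-frequency modes (a hypothetical 4-DOF chain stands in
    # for a meshed MEMS structure).
    import numpy as np

    def reduce_modes(K, M, n_keep):
        """Keep the n_keep lowest-frequency modes of K x = w^2 M x."""
        w2, V = np.linalg.eig(np.linalg.solve(M, K))
        order = np.argsort(w2.real)[:n_keep]
        basis = V[:, order].real  # reduced modal basis
        return basis.T @ K @ basis, basis.T @ M @ basis, basis

    K = np.diag([2.0] * 4) - np.diag([1.0] * 3, 1) - np.diag([1.0] * 3, -1)
    M = np.eye(4)
    K_r, M_r, basis = reduce_modes(K, M, n_keep=2)
    print(K_r.shape)  # (2, 2): far fewer states to co-simulate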

Molecular Modeling of Materials Defects for Yield Recovery

Monday, March 21st, 2016


By Ed Korczynski, Sr. Technical Editor

New materials are being integrated into High Volume Manufacturing (HVM) of semiconductor ICs, while old materials are being extended with more stringent specifications. Defects within materials cause yield losses in HVM fabs, and engineers must identify the specific source of an observed defect before corrective steps can be taken. Honeywell Electronic Materials has been using molecular modeling software provided by Scienomics to both develop new materials and to modify old materials. Modeling allowed Honeywell to uncover the origin of subtle solvation-based film defects within Bottom Anti-Reflective Coatings (BARC) which were degrading yield in a customer’s lithographic process module.

Scienomics sponsored a Materials Modeling and Simulations online seminar on February 26th of this year, featuring Dr. Nancy Iwamoto of Honeywell discussing how Scienomics software was used to accelerate the response to a customer’s manufacturing yield loss. “This was a product running at a customer line,” explained Iwamoto, “and we needed to find the solution.” The product was a BARC organo-silicate polymer delivered in solution form and then spun onto wafers to a precise thickness.

Originally observed during optical inspection by fab engineers as 1-2 micron sized vague spots in the BARC, the new defect type was difficult to see yet could be correlated to lithographic yield loss. The defects appeared to be discrete within the film instead of on the top surface, so the source was likely some manner of particle, yet filters did not capture these particles.

The filter captured some particles rich in silicon, as well as other particles rich in carbon. Sequential filtration showed that particles were passing through impossibly small pores, which suggested that the particles were built of deformable gel-like phases. The challenge was to find the material handling or processing situation that resulted in thermodynamically possible and kinetically probable conditions for forming such gels.

Fig: Materials Processes and Simulations (MAPS) gives researchers access to visualization and analysis tools in a single user interface together with access to multiple simulation engines. (Source: Scienomics)

Molecular modeling and simulation is a powerful technique that can be used for materials design, functional upgrades, process optimization, and manufacturing. The Figure shows a dashboard for Scienomics’ modeling platform. Best practices in molecular modeling to find out-of-control parameters in HVM include a sequential workflow:

  • Build correct models based on experimental observables,
  • Simulate potential molecular structures based on known chemicals and hierarchical models,
  • Analyze manufacturing variabilities to identify excursion sources, and
  • Propose remedy for failure elimination.

Honeywell Electronic Materials researchers had very few experimental observables from which to start: the phenomenon was rare (yet affected yield) and not filterable, yet from thermodynamic hydrolysis parameters it had to be quasi-stable. Re-testing of product and re-examination of Outgoing Quality Control (OQC) data at the Honeywell production site showed that the molecular weight of the product was consistent with the desired distribution. There was also an observed BARC thickness increase of ~1nm on the wafer associated with the presence of these defects.

Using the modeling platform, Honeywell looked at the solubility parameters for different small molecular chains off of known-branched back-bone centers. Gel-like agglomerations could certainly be formed under the wrong conditions. Once the agglomerations form, they are not very stable so they can probably dis-aggregate when being forced through a filter and then re-aggregate on the other side.

What conditions could induce gel formation? After a few weeks of modeling, it was determined that temperature variations had the greatest influence on the agglomeration, and that variability was strongest at the ~250K recommended for storage. Storage at 230K resulted in measurably worse agglomeration, and any extreme in heating/cooling ramp rate tended to reduce solubility.

Molecular modeling was used in a forensic manner to find that the root cause of gel-like defects was related to thermal history:

*   Thermodynamics determined the most likely oligomers that could agglomerate,

*   Temperature-dependent solubility models determined which particles would reach wafers.

Because of the on-wafer BARC thickness increase of ~1nm, fab engineers could use all of the molecular modeling information to trace the temperature variation to bottles installed in the lithographic track tool. The fab was able to change specifications for the storage and handling of the BARC bottles to bring the process back into control.

Imagining China’s IC Fab Industry in 2035

Friday, January 22nd, 2016


By Ed Korczynski, Sr. Technical Editor

Editor’s Note:  In Solid State Technology’s November 1995 Asia/Pacific Supplement this editor wrote of the PRC’s status and plans for IC fabs in an article titled “Progress creeps forward”. SEMICON/China 1995 was held in a small hall in Shanghai, with 125 exhibitors and 5000 attendees discussing the production of just 245M IC units in the entire country in 1994. Motorola’s Fab17 in Tianjin was planned to be able to yield 360M ICs from 200mm wafers.

China has been successfully investing in technology to reach global competitiveness for many decades. Integrated circuit (IC) manufacturing technology is highly strategic for countries, enabling both economically-valuable commercial fabs as well as military power. The Wassenaar Arrangement (WA) between 40-some states has restricted exports to China of “leading” technology with potential “dual-use” by industry and military. Using the terminology of IC fab nodes/generations, WA has typically restricted exports to fab tools capable of processing ICs three nodes behind (n-3) the leading edge of commercial capability (https://en.wikipedia.org/wiki/14_nanometer). In 1995 the leading edge was 0.35 microns, so 1 micron and above was the WA limit. In 2015 the leading edge is 14nm, so 45nm and above is the WA limit, but local capability has already effectively bypassed this restriction.
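The n-3 rule is simple enough to state in code. The sketch below uses an assumed, coarse node ladder purely for illustration; real WA control lists are far more detailed than a single node number.

    # Toy encoding of the "n-3" export-limit rule described above.
    NODES_NM = [350, 250, 180, 130, 90, 65, 45, 32, 22, 14]  # newest last

    def wa_limit_nm(leading_edge_nm):
        """Node three generations behind the given leading edge."""
        i = NODES_NM.index(leading_edge_nm)
        return NODES_NM[max(i - 3, 0)]

    print(wa_limit_nm(14))  # -> 45, matching the 2015 example in the text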

On February 9, 2015, trade-organization SEMI announced (http://www.semi.org/en/node/54596) the successful lobbying of the U.S. Department of Commerce to declare the export controls on certain etch equipment and technology ineffective, thereby allowing US equipment companies to sell high-volume manufacturing (HVM) tools with capabilities closer to the leading-edge into China. Following years of discussion and negotiations, SEMI had submitted a formal petition for the Commerce Department’s Bureau of Industry and Security (BIS) to examine the foreign availability of anisotropic plasma dry etching equipment, having identified AMEC (amec-inc.com) as providing an indigenous Chinese manufacturing capability. AMEC has announced that its tool is being used by Samsung for V-NAND HVM (https://finance.yahoo.com/news/amec-ships-advanced-etch-tool-150000063.html), which is certainly a “leading-edge” product that happens to be made using 45nm-node (n-3) design rules.

“The Future is in the Past: Projecting and Plotting the Potential Rate of Growth and Trajectory of the Structural Change of the Chinese Economy for the Next 20 Years” by Jun Zhang et al. from the Institute of World Economics and Politics, Chinese Academy of Social Sciences was first published online in 2015 (DOI: 10.1111/cwe.12098). Thanks to economic growth at an average rate of more than 9.7% annually in China over the past 35 years, it is estimated that China’s per-capita GDP has already reached approximately 23% of the USA’s. Because of the significant rise in per-capita income over the past 30 years, China has started to see a rapid demographic transition and a gradual rise in labor costs, as seen in other high-performing East Asian economies. Benchmarking to the experiences of East Asian high-performing economies from 1950 to 2010, the paper projects a potential growth rate of per-capita GDP (adjusted by purchasing power parity) for China of ~6.02% from 2015 to 2035.
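To see what such a growth differential would imply, the sketch below compounds the paper's ~6.02% projection against an assumed 2% US per-capita growth rate; the US figure is an illustrative assumption, not from the paper.

    # Per-capita GDP catch-up arithmetic under loudly stated assumptions.
    china_share_2015 = 0.23  # fraction of USA per-capita GDP, per the paper
    china_growth = 0.0602    # projected annual growth, per the paper
    us_growth = 0.02         # ASSUMED for illustration only

    share = china_share_2015
    for _ in range(20):      # compound 2015 -> 2035
        share *= (1 + china_growth) / (1 + us_growth)
    print(f"2035 ratio under these assumptions: ~{share:.0%}")  # ~50%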

The PRC still works with 5-year plans. Figure 1 shows Deng Xiaoping touring a government-run fab during the 8th 5-year-plan (1991-1995), when central planning of local resources dominated the Chinese IC industry. Paramount leader Deng had famously proclaimed, “Poverty is not socialism. To be rich is glorious,” which allowed for private enterprise and different economic classes. As reported by Robert Lawrence Kuhn in 2007’s “What Will China Look Like in 2035” in Bloomberg Business (http://www.bloomberg.com/bw/stories/2007-10-16/what-will-china-look-like-in-2035-businessweek-business-news-stock-market-and-financial-advice), researchers at the Institute of Quantitative & Technical Economics of the Chinese Academy of Social Sciences—the official government think tank housing more than 3,000 scholars and researchers—predicted in 2007 that by 2030 China’s economic reform will have been basically completed, such that the major issue will be the “adjustment of interests” among different classes.

Figure 1: Deng Xiaoping is shown Shanghai Belling’s fab by General Manager Lu Dechun during the 8th 5-year-plan (1991-1995). Such small fabs are not globally competitive. (Source: Ed Korczynski)

In 2014, McKinsey & Company published proprietary research (http://www.mckinsey.com/insights/high_tech_telecoms_internet/semiconductors_in_china_brave_new_world_or_same_old_story) showing that >50% of PCs and 30-40% of embedded systems contain content designed in China, either directly by mainland companies or emerging from the Chinese labs of global players. Since fewer chip designs will be moving to technologies at the 22nm node and below, low-cost Chinese technology companies will soon be able to address a larger part of the global market. Chinese companies will become more aggressive in pursuing international mergers and acquisitions to acquire global intellectual property and expertise that can be transferred back home.

Figure 2 shows that ICs represent the single greatest import cost for China, so there is great incentive to develop competitive internal fab capacity. The government, recognizing the failure of earlier centrally-planned investment initiatives, now takes a market-based investment approach. The target is a compound annual growth rate (CAGR) for the industry of 20%, with potential financial support from the government of up to 1 trillion renminbi ($170 billion) over the next five to ten years. To avoid the fragmentation issues of the past, the government will focus on creating national champions—a small set of leaders in each critical segment of the semiconductor market (including design, manufacturing, tools, and assembly and test) and a few provinces in which there is the potential to develop industry clusters.

Figure 2: The leading imports to China in 2014, showing that integrated circuits (IC) cost the country more than oil. (Source: China’s customs)

Global Cooperation and Competition

The remaining leading IC manufacturers in the world—Intel, Samsung, and TSMC—are all involved in mainland Chinese fabs. Intel’s Fab68 in Dalian began production of logic chips in 2010. Samsung’s fab in Xian began production of V-NAND chips in 2014. TSMC has announced it is seeking approval to build a wholly-owned 300mm foundry in Nanjing (http://www.wsj.com/articles/taiwan-semiconductor-plans-to-build-chip-plant-in-china-1449503714), after rival UMC invested in a jointly-owned foundry now being built in Xiamen.

“We do see significant growth, and a big part of that is due to investment by the Chinese government,” said Handel Jones of IC Insights during SEMICON Europa 2015. “Up to US$20B of government subsidy has been earmarked for IC manufacturing investment in China.” Jones forecasts that by 2025 up to 30% of global design starts will be in China, many to be designed by the ~500 fabless companies in China today. Jones estimates the total R&D investment in China today for 5G wireless technology is about US$2B per year, with about one-half of that just by Huawei Technologies Co. Ltd.

Due to the inevitable atomic limits of Moore’s Law scaling, it is likely that the industry will have reached the end of new nodes in the next 20 years. By then, “trailing-edge” will include everything that is in R&D today, from quantum devices to CMOS-photonic chips, and it is highly likely that China will have globally competitive design and manufacturing capability for them. While today a net importer of ICs, by the year 2035 China seems likely to be a net exporter of ICs.

—E.K.

Measuring 5nm Particles In-Line

Monday, November 30th, 2015

By Ed Korczynski, Sr. Technical Editor

Industrial Technology Research Institute (ITRI) (https://www.itri.org.tw/) worked with TSMC (http://www.tsmc.com) in Taiwan on a clever in-line monitor technology that transforms liquids and automatically-diluted slurries into aerosols for subsequent airborne measurements. They call this “SuperSizer” technology, and claim that tests have shown resolution over the astounding range of 5nm to 1 micron, with the ability to accurately represent size distributions across that range. Any dissolved gas bubbles in the liquid are lost in the aerosol process, which allows the tool to unambiguously count solid impurities. The Figure shows the compact components within the tool that produce the aerosol.

Aerosol sub-system inside “SuperSizer” in-line particle sizing tool co-developed by ITRI/TSMC. (Source: ITRI)

Semiconductor fabrication (fab) lines require in-line measurement and control of particles in critical liquids and slurries. With the exception of those carefully added to chemical-mechanical planarization (CMP) slurries, most particles are accidental yield-killers that must be kept to an absolute minimum, and ever-decreasing IC device feature sizes mean ever-smaller particles can kill a chip. Standard in-line tools to monitor particles rely on laser scattering through the liquid, and such technology allows for resolution of particle sizes as small as 40nm. Since we cannot control what we cannot measure, the IC fab industry needs this new ability to measure particles as small as 5nm for next-generation manufacturing.

There are two actual measurement technologies used downstream of the SuperSizer aerosol module:  a differential mobility analyzer (DMA), and a condensation particle counter (CPC). The aerosol first moves through the DMA column, where particle sizes are measured based on the force balance between air flow speed in the axial direction and an electric field in the radial direction. The subsequent CPC then provides particle concentration data.

Combining both data streams properly allows for automated output of information on particle sizes down to 5nm, size distributions, and impurity concentrations in liquids. Since the tool is intended for monitoring semiconductor high-volume manufacturing (HVM), the measurement data is automatically categorized, analyzed, and reported according to the needs of the fab’s automated yield management system. Users can edit the measurement sequences or recipes to monitor different chemicals or slurries under different conditions and schedules.
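A hypothetical sketch of the kind of data reduction involved, combining DMA-selected size bins with CPC counts into summary statistics; the bin values below are made up, and the real tool's data format is surely different.

    # Combine binned size data (DMA) with per-bin concentrations (CPC).
    from dataclasses import dataclass

    @dataclass
    class Bin:
        size_nm: float       # particle diameter selected by the DMA column
        count_per_ml: float  # concentration reported by the CPC

    def summarize(bins, limit_nm=5.0):
        measurable = [b for b in bins if b.size_nm >= limit_nm]
        total = sum(b.count_per_ml for b in measurable)
        mean = sum(b.size_nm * b.count_per_ml for b in measurable) / total
        return {"total_per_ml": total, "mean_size_nm": round(mean, 1)}

    scan = [Bin(5, 1200), Bin(10, 800), Bin(40, 90), Bin(1000, 2)]  # made-up
    print(summarize(scan))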

When used to control a CMP process, the SuperSizer can be configured to measure not just impurities but also the essential slurry particles themselves. During dilution and homogeneous mixing of the slurry prior to aerosolization, mechanical agitation needs to be avoided to prevent particle agglomeration, which causes scratch defects. This new tool uses pressurized gas as the driving force for transporting and mixing the solution, so that any measured agglomeration in the slurry can be assigned to a source somewhere else in the fab.

TSMC has been using this tool since 2014 to measure particles in solutions including slurries, chemicals, and ultra-pure water. ITRI, which owns the technology and related patents, can now take orders to manufacture the product, but the research organization plans to license the technology to a company in Taiwan for volume manufacturing. EETimes reports (http://www.eetimes.com/document.asp?doc_id=1328283) that the current list price for a tool capable of monitoring ultra-pure water is ~US$450k, while a fully-configured tool for CMP monitoring would cost over US$700k.

—E.K.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D-stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to feed forward and back to minimize the risk to expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST resident on the logic die connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
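As a purely illustrative sketch (assumed field names, not the actual RITdb schema), a structured record like the following shows why a shared, provenance-aware format would let test data flow across fabless, foundry, OSAT, and OEM boundaries:

    # Hypothetical structured test record; field names are invented here
    # to illustrate the idea of a shared, provenance-aware format.
    import json

    record = {
        "provenance": {"site": "OSAT-A", "insertion": "final_test"},
        "device": {"design_id": "soc_x", "lot": "LOT123", "wafer": 7, "die_xy": [12, 34]},
        "results": [
            {"test": "io_leakage", "value_uA": 1.8, "limit_uA": 2.0, "passed": True},
        ],
    }
    print(json.dumps(record, indent=2))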

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of tools for TSV metrology to the world. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bumps, Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said that, “we’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for its Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and it checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process that both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.

—E.K.

3DIC Technology Drivers and Roadmaps

Monday, June 22nd, 2015


By Ed Korczynski, Sr. Technical Editor

After 15 years of targeted R&D, through-silicon via (TSV) formation technology has been established for various applications. Figure 1 shows that there are now detailed roadmaps for different types of 3-dimensional (3D) ICs well established in industry—first-order segmentation based on the wiring-level/partitioning—with all of the unit-processes and integration needed for reliable functionality shown. Using block-to-block integration with 5 micron lines at leading international IC foundries such as GlobalFoundries, systems stacking logic and memory such as the Hybrid Memory Cube (HMC) are now in production.

Fig. 1: Today’s 3D technology landscape segmented by wiring-level, showing cross-sections of typical 2-tier circuit stacks, and indicating planned reductions in contact pitches. (Source: imec)

“There are interposers for high-end complex SOC designs with good yield,” said Eric Beyne, Scientific Director Advanced Packaging & Interconnect at imec, in an exclusive interview with Solid State Technology. “For a systems company, once you’ve made the decision to go 3D there’s no way back,” said Beyne. “If you need high-bandwidth memory, for example, then you’re committed to some sort of 3D. The process is happening today.” Beyne is scheduled to talk about 3D technology driven by 3D application requirements at the imec Technology Forum to be held July 13 in San Francisco.

Adaptation of TSV for stacking of components into a complete functional system is key to high-volume demand. Phil Garrou, packaging technologist and SemiMD blogger, reported from the recent ConFab that Hynix is readying a second generation of high-bandwidth memory (HBM 2) for use in high performance computing (HPC) such as graphics, with products already announced like Pascal from Nvidia and Greenland from AMD.

For a normalized 1 cm2 of silicon area, wide-IO memory needs 1600 signal pins (not counting additional power and ground pins), so several thousand TSV are needed for high-performance stacked DRAM today, while in more advanced memory architectures it could go up by another factor of 10. For wide-IO HVM-2 (or Wide-IO2) the silicon consumed by IO circuitry is maybe 6 mm2 today, such that a 3D stack with shorter vertical connections would eliminate many of the drivers on the chip and would allow scaling of the micro-bumps to perhaps save a total of 4 mm2 in silicon area. 3D stacks provide such trade-offs between design and performance, so the best results are predicted for 3DICs where the partitioning can be re-done at the gate or transistor level. For example, a modern 8-core microprocessor could have over 50% of its silicon area consumed by L3-cache-memory and IO circuitry, and moving from 2D to 3D would reduce total wire-lengths and interconnect power consumption by >50%.
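The TSV count is straightforward to sanity-check; the power/ground overhead below is an assumed round number, since the text excludes those pins from its 1600 figure.

    # Back-of-envelope TSV count for 1 cm^2 of wide-IO memory interface.
    signal_pins = 1600  # signal pins per cm^2, per the text
    pg_overhead = 1.0   # ASSUME roughly one power/ground pin per signal pin
    tsv_needed = int(signal_pins * (1 + pg_overhead))
    print(f"TSVs per cm^2: ~{tsv_needed}")  # ~3200, i.e. "several thousand"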

There are inherent thresholds based on the Height:Width aspect ratio (H:W) that determine costs and challenges in the process integration of TSV (encoded in the sketch after this list):

-    10:1 ratio is the limit for the use of relatively inexpensive physical vapor deposition (PVD) for the Cu barrier/seed (B/S),

-    20:1 ratio is the limit for the use of atomic-layer deposition (ALD) for B/S and electroless deposition (ELD) for Cu fill with 1.5 x 30 micron vias on the roadmap for the far future,

-    30:1 ratio and greater is unproven as manufacturable, though novel deposition technologies continue to be explored.
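Those thresholds amount to a simple decision rule; the sketch below encodes them for illustration only, since real barrier/seed choices involve many more factors than aspect ratio alone.

    # Map a TSV aspect ratio to the deposition options listed above.
    def barrier_seed_option(height_um, width_um):
        ratio = height_um / width_um
        if ratio <= 10:
            return "PVD barrier/seed is viable"
        if ratio <= 20:
            return "ALD barrier/seed with electroless Cu fill"
        return "unproven in manufacturing; novel deposition still in R&D"

    print(barrier_seed_option(30.0, 1.5))  # 20:1 roadmap via -> ALD + ELD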

TSV Processing Results

The researchers at imec have evaluated different ways of connecting TSV to the underlying silicon, and have determined that direct connections to micro-bumps are inherently superior to the use of any re-distribution layer (RDL) metal. Consequently, there is renewed effort on scaling micro-bump pitches to be able to match up with TSV. The standard minimum micro-bump pitch today of 40 micron has been shrunk to 20 micron, and imec is now working on 10 micron with plans to go to 5 micron. While it may not help with TSV connections, an RDL layer may still be needed in the final stack, and the Cu metal over-burden from TSV filling has been shown by imec to be sufficiently reproducible to be used as the RDL metal. The silicon surface area covered by TSV today is a few percent, not tens of percent, since the wiring level is global or semi-global.

Regarding the trade-offs between die-to-wafer (D2W) and wafer-to-wafer (W2W) stacking, D2W seems advantageous for most near-term solutions because of easier design and superior yield. D2W design is easier because the top die can be arbitrarily smaller than the bottom die, instead of the identically sized chips needed in W2W stacks. Assuming the same defectivity levels in stacking, D2W yield will almost always be superior to W2W because of the ability to use strictly known-good-die. Still, there are high-density integration concepts out on the horizon that call for W2W stacking. Monolithic 3D (M3D) integration using re-grown active silicon instead of TSV may still be used in the future, but design and yield issues will be at least comparable to those of W2W stacking.

Beyne mentioned that during the recent ECTC 2015, EV Group showed impressive 250nm overlay accuracy on 450mm wafers, proving that W2W alignment at the next wafer size will be sufficient for 3D stacking. Beyne is also excited by the fact that at this year’s ECTC there was, “strong interest in thermo-compression bonding, with 18 papers from leading companies. It’s something that we’ve been working on for many years for die-to-wafer stacking, while people had mistakenly thought that it might be too slow or too expensive.”

Thermal issues for high-performance circuitry remain a potential issue for 3D stacking, particularly when working with finFETs. In 2D transistors the excellent thermal conductivity of the underlying silicon crystal acts like a built-in heat-sink to diffuse heat away from active regions. However, when 3D finFETs protrude from the silicon surface the main path for thermal dissipation is through the metal lines of the local interconnect stack, and so finFETs in general and stacks of finFETs in particular tend to induce more electro-migration (EM) failures in copper interconnects compared to 2D devices built on bulk silicon.

3D Designs and Cost Modeling

At a recent Northern California Chapter of the American Vacuum Society (NCCAVS) PAG-CMPUG-TFUG Joint Users Group Meeting discussing 3D chip technology, held at SEMI Global Headquarters in San Jose, Jun-Ho Choy of Mentor Graphics Corp. presented on “Electromigration Simulation Flow For Chip-Scale Parametric Failure Analysis.” Figure 2 shows the results from the use of a physics-based model for temperature- and residual-stress-aware void nucleation and growth. Mentor has identified new failure mechanisms in TSV that are based on coefficient of thermal expansion (CTE) mismatch stresses. Large stresses can develop in lines near TSV during subsequent thermal processing, and the stress levels are layout dependent. In the worst cases the combined total stress can exceed the critical level required for void nucleation before any electrical stressing is applied. During electrical stress, EM voids were observed to initially nucleate under the TSV centers at the landing-pad interfaces, even though these are the locations of minimal current-crowding; explaining this requires proper modeling of CTE-mismatch-induced stresses.

Fig. 2: Calibration of an Electronic Design Automation (EDA) tool allows for accurate prediction of transistor performance depending on distance from a TSV. (Source: Mentor Graphics)

Planned for July 16, 2015 at SEMICON West in San Francisco, a presentation on “3DIC Technology Past, Present and Future” will be part of one of the side Semiconductor Technology Sessions (STS). Ramakanth Alapati, Director of Packaging Strategy and Marketing, GLOBALFOUNDRIES, will discuss the underlying economic, supply chain and technology factors that will drive productization of 3DIC technology as we know it today. Key to understanding the dynamic of technology adaptation is using performance/$ as a metric.

Germanium Junctions for CMOS

Tuesday, November 25th, 2014

By Ed Korczynski, Sr. Technical Editor, Solid State Technology and SemiMD

It is nearly certain that alternate channel materials with higher mobilities will be needed to replace silicon (Si) in future CMOS ICs. The best PMOS channels are made with germanium (Ge), while there are many possible elements and compounds in R&D competition to form the NMOS channel, in part because of difficulties in forming stable n-junctions in Ge. If the industry can do NMOS with Ge then the integration with Ge PMOS would be much simpler than having to try to integrate a compound semiconductor such as gallium-arsenide or indium-phosphide.

In considering Ge channels in future devices, we must anticipate that they will be part of finFET structures. Both bulk-silicon and silicon-on-insulator (SOI) wafers will be used to build 3D finFET device structures for future CMOS ICs. Ultra-Shallow Junctions (USJ) will be needed to make contacts to nanoscale channels.

John Borland is a renowned expert in junction-formation technology, and now a principal with Advanced Integrated Photonics. In a Junction Formation side-conference at SEMICON West 2014, Borland presented a summary of data that had first been shown by co-author Paul Konkola at the 2014 International Conference on Ion Implant Technology. Their work on “Implant Dopant Activation Comparison Between Silicon and Germanium” provides valuable insights into the intrinsic differences between the two semiconducting materials.

P-type implants into Ge showed an interesting self-activation, seen as a decrease in sheet-resistance after implant alone, especially for monomer B as the dose increases. Using a 4-Point-Probe (4PP) to measure sheet-resistance (Rs), the 5E14/cm2 B-implant showed Rs of 190Ω/□, while the higher implant dose of 5E15/cm2 showed Rs of 120Ω/□. B requires temperatures >600°C for full activation in PMOS Ge channels, and generally results in minimal dopant diffusion for USJ.
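As a rough back-of-envelope check, a measured Rs can be converted to an implied electrically active sheet dose via Rs = 1/(q·N·µ); in this sketch the hole mobility is an assumed round number, not a value from Borland’s data:

```python
# Back-of-envelope: Rs = 1 / (q * N_active * mu), solved for N_active.

Q = 1.602e-19  # elementary charge, C

def active_sheet_dose(rs_ohm_sq: float, mobility_cm2_vs: float) -> float:
    """Electrically active sheet dose (cm^-2) implied by measured Rs."""
    return 1.0 / (Q * mobility_cm2_vs * rs_ohm_sq)

# 5E14/cm2 B implant into Ge, measured Rs = 190 ohm/sq;
# a hole mobility of 100 cm2/V-s is an assumed round number.
print(f"{active_sheet_dose(190.0, 100.0):.2e} cm^-2")  # ~3.3e14
```

An implied active dose below the implanted 5E14/cm2 is consistent with the partial self-activation described above.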

Figure 1 shows a comparison between P, As, and Sb implanted dopants at 1E16/cm2 into both a Si wafer and 1µm Ge-epilayer on Si after various anneals. The sheet-resistance values for all three n-type dopants were always lower in Ge than in Si over the 625-900°C RTA range by about 5x for P and 10x for As and Sb. Another experiment to study the results for co-implants of P+Sb, P+C, and P+F using a Si-cap layer did not show any enhanced n-type dopant activation.

Fig. 1: Sheet-resistance (Rs) versus RTA temperatures for P, As, and Sb implanted dopants into Ge and Si. (Source: Borland)

Prof. Saraswat of Stanford University showed in 2005, at the spring Materials Research Society meeting, that n-type activation in Ge is inherently difficult. In that same year, Borland was the lead author of an article in Solid State Technology (July 2005, p.45) entitled “Meeting challenges for engineering the gate stack,” in which the authors advocated using a Si-cap for P implant to enable high-temperature n-type dopant activation with minimal diffusion for shallow n+ Ge junctions that can be used for Ge nMOS. Now, almost 10 years later, Borland is able to show that it can be done.

Ge Channel Integration and Metrology

Nano-scale Ge channels wrapped around 3D fin structures will be difficult to form, even before they can be implanted. However, whether formed in a Replacement Metal Gate (RMG) or an epitaxial-etchback process, one commonality is that Ge channels will need abrupt junctions to fit into shrunken device structures. Also, as device structures have continued to shrink, the junction-formation challenges of “planar” devices and 3D finFETs have converged, since the “2D” structures now have nano-scale 3D topography.

Adam Brand, senior director of transistor technology in the Advanced Product Technology Development group of Applied Materials, explained that, “Heated beamline implants are best when the priority is precise dose and energy control without lattice damage. Plasma doping (PLAD) is best when the priority is to deliver a high dose and conformal implant.”

Ehud Tzuri, director of strategic marketing in the Process Diagnostic and Metrology group at Applied Materials, reminds us that control of the Ge material quality, as specified by data on the count and lengths of stacking-faults and other crystalline dislocations, could be done by X-Ray Diffraction (XRD) or by some new disruptive technology. Cross-section Transmission Electron Microscopy (X-TEM) is the definitive technology for looking at nanoscale material quality, but since it is expensive and the sample must be destroyed, it cannot be used for process control.

Figure 2 shows X-TEM results for 1µm-thick Ge epi-layers after 625°C and 900°C RTA. Due to the intrinsic lattice mismatch between Ge and Si, there will always be some defects at the Ge/Si interface, as indicated by arrows in the figure. However, stacking faults are clearly seen in the lower-temperature RTA sample, while the 900°C anneal shows no stacking-faults and so should result in superior integrated device performance.

Fig. 2: Cross-section TEM of 1µm Ge-epi after 625°C and 900°C RTA, showing great reduction in stacking-faults with the higher annealing temperature. (Source: Borland)

Borland explains that stacking-faults in Ge channels on finFETs would protrude to the surface, and so could not be mitigated by the “Aspect-Ratio Trapping” (ART) integration trick that has been investigated by imec. However, a silicon-oxide cap allows a 900°C RTA, which is hot enough to anneal out the defects in the crystal.

Brand provides an example of why integration challenges of Ge channels include subtle considerations, “The most important consideration for USJ in the FinFET era is to scale down the channel body width to improve electrostatics. Germanium has a higher semiconductor dielectric constant than silicon so a slightly lower body width will be needed to reach the same gate length due to the capacitive coupling.”
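Brand’s point can be illustrated with a textbook one-dimensional scale-length approximation, λ ≈ √((ε_ch/ε_ox)·t_body·t_ox); the body-width and oxide-thickness numbers below are arbitrary examples, not Applied Materials data:

```python
import math

# Textbook 1-D scale-length estimate for electrostatic control:
# lambda ~ sqrt((eps_ch / eps_ox) * t_body * t_ox).
EPS_SI, EPS_GE, EPS_SIO2 = 11.7, 16.0, 3.9  # relative permittivities

def scale_length_nm(eps_ch: float, t_body_nm: float, t_ox_nm: float) -> float:
    return math.sqrt((eps_ch / EPS_SIO2) * t_body_nm * t_ox_nm)

t_si = 7.0  # arbitrary example Si body width, nm
t_ge = t_si * EPS_SI / EPS_GE  # thinner Ge body with the same scale length
print(f"Si: {scale_length_nm(EPS_SI, t_si, 1.0):.2f} nm")
print(f"Ge: {scale_length_nm(EPS_GE, t_ge, 1.0):.2f} nm (body {t_ge:.1f} nm)")
```

With these assumed numbers, matching silicon’s electrostatics requires thinning the Ge body from 7nm to roughly 5nm, in line with Brand’s “slightly lower body width.”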

Junction formation in Ge channels will be one of the nanoscale materials engineering challenges for future CMOS finFETs. Either XRD or some other metrology technology will be needed for control. Integration will include the need to control the materials on the top and the bottom surfaces of channels to ensure that dopant atoms activate without diffusing away. The remaining challenge is to develop the shortest RTA process possible to minimize all diffusions.
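The case for the shortest possible RTA follows from the usual diffusion-length scaling, L = √(Dt); a quick sketch with a placeholder diffusivity shows how strongly anneal time matters:

```python
import math

def diffusion_length_nm(d_cm2_s: float, t_s: float) -> float:
    """L = sqrt(D * t), converted from cm to nm."""
    return math.sqrt(d_cm2_s * t_s) * 1e7

D = 1e-14  # placeholder dopant diffusivity at anneal temperature, cm^2/s
for t in (30.0, 1.0):
    print(f"{t:4.0f} s anneal -> {diffusion_length_nm(D, t):.1f} nm")
# Cutting anneal time from 30 s to 1 s cuts dopant motion ~5.5x.
```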

— E. K.

The Week in Review: October 10, 2014

Friday, October 10th, 2014

Samsung Electronics announced plans on Monday to invest $14.7 billion (15.6 trillion Korean won) in a new semiconductor fabrication facility in Pyeongtaek, South Korea to meet growing demand from smartphones, enterprise computing and the emerging “Internet of Things” market.

Soraa, a developer of GaN on GaN LED technology, announced today that one of its founders, Dr. Shuji Nakamura, has been awarded the 2014 Nobel Prize in Physics. Recognizing that Nakamura’s invention, the blue light emitting diode (LED), represents a critical advancement in LED lighting, the Nobel committee explained the innovation “has enabled bright and energy-saving white light sources.”

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing and design, today announced that John P. Daane, President, CEO, and Chairman of the Board of Altera, has been named the 2014 recipient of SIA’s highest honor, the Robert N. Noyce Award.

The Board of Directors of United Microelectronics Corporation (UMC), a global semiconductor foundry, this week announced a joint venture company focused on 12″ wafer foundry services with Xiamen Municipal People’s Government and FuJian Electronics & Information Group.

Emergence of new wide bandgap (WBG) technologies such as SiC and GaN materials will definitely reshape part of the established power electronics industry, according to Yole Développement (Yole).
