Posts Tagged ‘3D’

Monolithic 3D processing using non-equilibrium RTP

Friday, April 17th, 2015

By Ed Korczynski, Senior Technical Editor, Solid State Technology

Slightly more than one year after Qualcomm Technologies announced that it was assessing CEA-Leti’s monolithic 3D (M3D) transistor stacking technology, Qualcomm has now announced that M3D will be used instead of through-silicon vias (TSV) in the company’s next generation of cellphone handset chips. Since Qualcomm had also been a leading industrial proponent of TSV over the last few years while participating in the imec R&D consortium, this endorsement of M3D is particularly relevant.

Leti’s approach to 3D stacking of transistors starts with a conventionally built and locally-interconnected bottom layer of transistors, which are then covered with a top layer of transistors built using relatively low-temperature processes branded as “CoolCube.” Figure 1 shows a simplified cross-sectional schematic of a CoolCube stack of transistors and interconnects. CoolCube M3D does not transfer a layer of built devices as in the approach using TSV, but instead transfers just a nm-thin layer of homogeneous semiconducting material for subsequent device processing.

Fig. 1: Simplified cross-sectional rendering of Monolithic 3D (M3D) transistor stacks, with critical process integration challenges indicated. (Source: CEA-Leti)

Completed transistors are not transferred in the first place because of intrinsic alignment issues, which are eliminated when the top transistors are instead fabricated on the same wafer. “We have lots of data to prove that alignment precision is as good as can be seen in 2D lithography, typically 3nm,” explained Maud Vinet, Leti’s advanced CMOS laboratory manager, in an exclusive interview with SST.

As discussed in a blog post online at Semiconductor Manufacturing and Design (http://semimd.com/hars/2014/04/09/going-up-monolithic-3d-as-an-alternative-to-cmos-scaling/) last year by Leti researchers, the M3D approach consists of the following sequential steps:

  • processing a bottom MOS transistor layer with local interconnects,
  • bonding a wafer substrate to the bottom transistor layer,
  • chemical-mechanical planarization (CMP) and solid-phase epitaxy (SPE) of the top layer,
  • processing the top device layer,
  • forming metal vias between the two device layers as interconnects, and
  • standard copper/low-k multi-level interconnect formation.

To transfer a layer of silicon for the top layer of transistors, a cleave-layer is needed within the bulk silicon; otherwise time and money would be wasted grinding away >95% of the silicon bulk from the backside. For CMOS:CMOS M3D, thin silicon-on-insulator (SOI) is the transferred top layer, a logical extension of work done by Leti for decades. The heavy-dose ion-implantation that creates the cleave-layer leaves defects in the crystalline silicon which require excessively high temperatures to anneal away. Leti’s trick to overcome this thermal-budget issue is to use pre-amorphizing implants (PAI) to completely disorder the silicon before transfer, and then solid-phase epitaxy (SPE) post-transfer to grow device-grade single-crystal silicon at ~500°C.

Since neither aluminum nor copper interconnects can withstand this temperature range, the interconnects for the bottom layer of transistors need to be tungsten wires, which have the highest melting point of any metal but somewhat higher electrical resistance (R) than copper. Protection for the lower wires cannot use low-k dielectrics, but must use relatively higher-capacitance (C) oxides. However, the increased RC delay in the lower interconnects is more than offset by the orders-of-magnitude reduction in interconnect lengths due to vertical stacking.
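To see why the shorter vertical routes win despite the resistance and capacitance penalties, consider the back-of-envelope Python sketch below; the wire dimensions, resistivities and dielectric constants are generic illustrative assumptions rather than Leti's data, and the point is simply that distributed RC delay grows with the square of wire length.

# Illustrative only: distributed RC wire delay ~ 0.38 * r * c * L^2, where r and c
# are resistance and capacitance per unit length. Dimensions, resistivities and
# k-values below are ballpark assumptions, not Leti process data.

EPS0 = 8.854e-12  # F/m

def wire_delay_s(rho_ohm_m, k_dielectric, length_m,
                 width_m=50e-9, thickness_m=100e-9, spacing_m=50e-9):
    """Elmore-style delay of a distributed RC line, in seconds."""
    r_per_m = rho_ohm_m / (width_m * thickness_m)                 # ohm/m
    c_per_m = 2 * k_dielectric * EPS0 * thickness_m / spacing_m   # F/m (crude parallel-plate)
    return 0.38 * r_per_m * c_per_m * length_m ** 2

# A long copper/low-k route in a 2D layout vs. a 10x shorter tungsten/oxide route
# made possible by stacking the second transistor layer directly on top.
cu_lowk = wire_delay_s(rho_ohm_m=1.7e-8, k_dielectric=2.5, length_m=100e-6)
w_oxide = wire_delay_s(rho_ohm_m=5.6e-8, k_dielectric=3.9, length_m=10e-6)

print(f"Cu/low-k, 100 um route: {cu_lowk * 1e12:.2f} ps")
print(f"W/SiO2,    10 um route: {w_oxide * 1e12:.3f} ps")
# Despite ~3x the resistivity and ~1.6x the k-value, the 10x shorter wire comes out
# roughly 20x faster, because delay scales with length squared.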

M3D Roadmaps

Leti shows data that M3D transistor stacking can provide immediate benefit to industry by combining two 28nm-node CMOS layers instead of trying to design and manufacture a single 14nm-node CMOS layer: a 55% area gain, a 23% performance gain, and a 12% power gain. With cost/transistor now expected to increase with successive nodes, M3D thus provides a way to reduce cost and risk when developing new ICs.

For the industry to use M3D, some unique new unit-processes will need to ramp into high-volume manufacturing (HVM) to ensure profitable line yield. As presented by C. Fenouillet-Beranger et al. from Leti and ST at IEDM 2014 in San Francisco (paper 27.5, “New Insights on Bottom Layer Thermal Stability and Laser Annealing Promises for High Performance 3D Monolithic Integration”), doping the nickel-silicide of the bottom transistors with a noble metal such as platinum improves their thermal stability, so the top MOSFET processing temperature can be relaxed up to 500°C. Laser RTP annealing then allows for the activation of the top MOSFET junctions, which have been characterized morphologically and electrically as promising for high-performance ICs.

Figure 2 shows the new unit-processes at ≤500°C that need to be developed for top transistor formation:

*   Gate-oxide formation,

*   Dopant activation,

*   Epitaxy, and

*   Spacer deposition.

Fig. 2: Thermal processing ranges for process modules need to be below ~500°C for the top devices in M3D stacks to prevent degradation of the bottom layer. (Source: CEA-Leti)

After the above unit-processes have been integrated into high-yielding process modules for CMOS:CMOS stacking, heterogeneous integration of different types of devices is on the roadmap for M3D. Leti has already shown proof-of-concept for processes that integrate new IC functionalities into future M3D stacks:

1) CMOS:CMOS,

2) PMOS:NMOS,

3) III-V:Ge, and

4) MEMS/NEMS:CMOS.

Thomas Ernst, senior scientist, Electron Nanodevice Architectures, Leti, commented to SST, “Any application that will need a ‘pixelated’ device architecture would likely use M3D. In addition, this approach will work well for integrating new channel materials such as III-V’s and germanium, and any materials that can be deposited at relatively low temperatures such as the active layers in gas-sensors or resistive-memory cells.”

Non-Equilibrium Thermal Processing

Though the use of an oxide barrier between the active device layers provides significant thermal protection to the bottom layer of devices during top-layer fabrication, the thermal processes for the top layer cannot be run at equilibrium. “One way of controlling the thermal budget is to use what we sometimes call the crème brûlée approach to only heat the very top surface while keeping the inside cool,” explained Vinet. “Everyone knows that you want a nice crispy top surface with cool custard beneath.” Using a laser with a short wavelength prevents penetration into lower layers, such that essentially all of the energy is absorbed in the surface layer in a manner that can be considered adiabatic.

Applied Materials has been a supplier-partner with Leti in developing M3D, and the company provided responses from executive technologists to queries from SST about the general industry trend to controlling short pulses of light for thermal processing. “Laser non-equilibrium heating is enabling technology for 3D devices,” affirmed Steve Moffatt, chief technology officer, Front End Products, Applied Materials. “The idea is to heat the top layer and not the layers below. To achieve very shallow adiabatic heating the toolset needs to ramp up in less than 100 nsec. In order to get strong absorption in the top surface, shorter wavelengths are useful, less than 800 nm. Laser non-equilibrium heating in this regime can be a critical process for building monolithic 3D structures for SOC and logic devices.”
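A quick back-of-envelope estimate shows why pulses shorter than ~100 nsec keep the heating effectively confined to the top layer; the silicon thermal diffusivity used in the sketch below is a room-temperature textbook value, so the depths are only rough illustrations, not Applied Materials' process data.

import math

# Thermal diffusion length L ~ sqrt(D * t): how deep heat soaks in during a pulse.
D_SI_CM2_PER_S = 0.8   # assumed room-temperature thermal diffusivity of silicon

def diffusion_depth_um(pulse_s, diffusivity_cm2_s=D_SI_CM2_PER_S):
    """Approximate depth (in microns) heated during a pulse of the given length."""
    return math.sqrt(diffusivity_cm2_s * pulse_s) * 1e4   # cm -> um

for pulse_s in (10e-9, 100e-9, 1e-6, 1e-3):
    print(f"{pulse_s * 1e9:>12,.0f} ns pulse -> heat reaches ~{diffusion_depth_um(pulse_s):.1f} um")

# A <=100 ns pulse confines the heat to a few microns (roughly the top device layer
# plus its oxide barrier), while millisecond-class anneals soak hundreds of microns
# and would push the tungsten-wired bottom layer past its ~500°C budget.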

Of course, with ultra-shallow junctions (USJ) and atomic-scale gate-stacks already in use for CMOS transistors at the 22nm-node, non-equilibrium thermal processing has already been used in leading fabs. “Gate dielectric, gate metal, and contact treatments are areas where we have seen non-equilibrium anneals slowly taking the place of conventional RTP,” clarified Abhilash Mayur, senior director, Front End Products, Applied Materials. “For approximate percentages, I would say about 25 percent of thermal processing for logic at the 22nm-node is non-equilibrium, and it seems to be heading toward 50 percent at the 10nm-node or below.”

Mayur further explained some of the trade-offs in working on the leading-edge of thermal processing for demanding HVM customers. Pulse-times are in the tens of nsec, with longer pulses tending to allow the heat to diffuse deeper and adversely alter the lower layers, and with shorter pulses tending to induce surface damage or ablation. “Our roadmap is to ensure flexibility in the pulse shape to tailor the heat flow to the specific application,” said Mayur.

Now that Qualcomm has endorsed CoolCube M3D as a preferred approach to CMOS:CMOS transistor stacking in the near-term, we may assume that R&D in novel unit-processes has mostly concluded. Presumably there are pilot lots of wafers now being run through commercial foundries to fine-tune M3D integration. With a roadmap for long-term heterogeneous integration that seems both low-cost and low-risk, M3D using non-equilibrium RTP will likely be an important way to integrate new functionalities into future ICs.

Blog review March 9, 2015

Monday, March 9th, 2015

Pete Singer is delighted to announce the keynotes and other speakers for The ConFab 2015, to be held May 19-22 at The Encore at The Wynn in Las Vegas. The line-up includes Ali Sebt, President and CEO of Renesas America; Paolo Gargini, Chairman of the ITRS; and Subramani Kengeri, Vice President, Global Design Solutions at GLOBALFOUNDRIES.

Mark Simmons, Product Marketing Manager, Calibre Manufacturing Group, Mentor Graphics writes about cutting fab costs and turn-around time with smart, automated resource management. He notes that the competition for market share is brutal for both the pure-play and integrated device manufacturer (IDM) foundries. Success involves tuning a lot of knobs and dials. One of the important knobs is the ability to continually meet or exceed aggressive time-to-market schedules.

Paul Stockman, Commercialization Manager, Linde Electronics blogs that there is an increasing demand for and focus on sustainable manufacturing that will contribute to a greening of semiconductors. This greening must be robust and responsive to change and cannot constrain the individual processes or operation of a fab.

Applied Materials’ Max McDaniel writes on the quest for more durable displays. He says the same innovators who created such amazingly thin, light and highly functional smartphones (with the help of Applied Materials display technology) are already developing durability improvements that may eliminate the need for protective covers.

Batteries? We don’t need no stinking batteries, says Ed Korczynski. We’re still used to thinking that low-power chips for “mobile” or “Internet-of-Things (IoT)” applications will be battery powered…but the near ubiquity of lithium-ion battery cells could be threatened by capacitors and energy-harvesting circuits connected to photovoltaic/thermoelectric/piezoelectric micro-power sources.

With the 2015 SPIE Advanced Lithography (AL) conference around the corner, some people have asked me what remaining EUVL challenges need to be addressed to ensure it will be ready for mass production later this year or next.  Vivek Bakshi of EUV Litho, Inc. provides thoughts on this topic and what he expects to hear at the conference.

Phil Garrou continues his look at presentations from the Grenoble SEMI 3D Summit which took place in January, focusing on an interesting presentation by ATREG consultants on the future of Assembly & Test.

On Tuesday, January 20, President Obama once again stood before a joint session of Congress to deliver a State of the Union Address.  With the newly seated Republican-controlled Congress and his Cabinet present, the President discussed topics ranging from the current state of the economy to foreign affairs and his ideas on how to move the nation forward.  Jamie Girard of SEMI was pleased to hear that the President supported multiple policy goals including expansion of free trade, corporate tax reform, support for basic science research and development and others.

5nm Node Needs EUV for Economics

Thursday, January 29th, 2015

By Ed Korczynski, Sr. Technical Editor

At IEDM 2014 last month in San Francisco, Applied Materials sponsored an evening panel discussion on the theme of “How do we continue past 7nm?” Given that leading fabs are now ramping 14nm node processes, and exploring manufacturing options for the 10nm node, “past 7nm” means 5nm node processing. There are many device options possible, but cost-effective manufacturing at this scale will require Extreme Ultra-Violet (EUV) lithography to avoid the costs of quadruple-patterning.

Fig. 1: Panelists discuss future IC manufacturing and design possibilities in San Francisco on December 16, 2014. (Source: Pete Singer)

Figure 1 shows the panel, moderated by Professor Mark Rodwell of the University of California, Santa Barbara, and composed of the following industry experts:

  • Karim Arabi, Ph.D. – vice president, engineering, Qualcomm,
  • Michael Guillorn, Ph.D. – research staff member, IBM,
  • Witek Maszara, Ph.D. – distinguished member of technical staff, GLOBALFOUNDRIES,
  • Aaron Thean, Ph.D. – vice president, logic process technologies, imec, and
  • Satheesh Kuppurao, Ph.D. – vice president, front end products group, Applied Materials.

Arabi said that from the design perspective the overarching concern is to keep “innovating at the edge” of instantaneous and mobile processing. At the transistor level, the 10nm node process will be similar to that at the 14nm node, though perhaps with alternate channels. The 7nm node will be an inflection point with more innovation needed such as gate-all-around (GAA) nanowires in a horizontal array. By the 5nm node there’s no way to avoid tunnel FETs and III-V channels and possibly vertical nanowires, though self-heating issues could become very challenging. There’s no shortage of good ideas in the front end and lots of optimism that we’ll be able to make the transistors somehow, but the situation in the backend of on-chip metal interconnect is looking like it could become a bottleneck.

Guillorn extolled the virtues of embedded-memory to accelerate logic functions, as a great example of co-optimization at the chip level providing a real boost in performance at the system level. The inflection at 7nm and beyond could lead to GAA Carbon Nano-Tube (CNT) devices as the minimum functional device. It is limiting to think about future devices only in terms of dimensional shrinks, since much of the performance improvement will come from new materials and new device and technology integration. In addition to concerns with interconnects, maintaining acceptable resistance in transistor contacts will be very difficult with reduced contact areas.

Maszara provided target numbers for a 5nm node technology to provide a 50% area shrink over 7nm: a gate pitch of 30nm and an interconnect Metal 1 (M1) pitch of 20nm. To reach those targets, GLOBALFOUNDRIES’ cost models show that EUV with ~0.5 N.A. would be needed. Even if much of the lithography could use some manner of Directed Self-Assembly (DSA), EUV would still be needed for cut-masks and contacts. In terms of device performance, either finFETs or nanowires could provide the desired off-current, but the challenge then becomes how to get the on-current needed for the intended mobile applications. Alternative channels with high-mobility materials could work, but it remains to be seen how they will be integrated. A rough proxy for cost is the number of mask layers, and for 5nm-node processing the cost/transistor could still go down if the industry has ideal EUV. Otherwise, the only affordable way forward may be to stay at 7nm-node specs but do transistor stacking.
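Maszara's "count the mask layers" heuristic can be made concrete with a toy calculation like the Python sketch below; the layer counts and exposure multipliers are invented illustration values, not GLOBALFOUNDRIES' cost model.

# Toy exposure-count comparison following the "cost ~ number of mask layers"
# heuristic quoted above. All counts below are invented illustration values.

CRITICAL_LAYERS = 10   # assumed layers at or near minimum pitch
RELAXED_LAYERS = 40    # assumed relaxed-pitch layers, one 193i exposure each

def total_exposures(passes_per_critical_layer):
    return CRITICAL_LAYERS * passes_per_critical_layer + RELAXED_LAYERS

scenarios = {
    "EUV single exposure": total_exposures(1),
    "193i double patterning": total_exposures(2),
    "193i quadruple patterning": total_exposures(4),   # cut/block masks ignored
}
for name, count in scenarios.items():
    print(f"{name:>27}: {count} lithography passes per wafer")

# Even if one EUV pass costs 2-3x a 193i pass, eliminating three extra passes on
# every critical layer is what could keep cost/transistor falling at the 5nm node.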

Thean detailed why electrostatic scaling is a key factor. Parasitics will be extraordinary for any 5nm-node devices due to the intrinsically higher number of surfaces and junctions within the same volume. The parasitic capacitances alone at 7nm are modeled as being 75% of the total capacitance of the chip. The device trend from planar to finFET to nanowires means proportionally increasing relative surface areas, which results in inherently greater sensitivity to surface-defects and interface-traps. Scaling to smaller structures may not help if you lose most of the current and voltage in non-useful traps and defects, and that has already been seen in comparisons of III-V finFETs and nanowires. Also, 2D scaling of CMOS gates is not sustainable, and so one motivation for considering vertical transistors for logic at 5nm would be to allow for 20nm gates at 30nm pitch.

Kuppurao reminded attendees that while there is still uncertainty regarding the device structures beyond 7nm, there is certainty in four trends for the equipment processes the industry will need:

  1. everything is an interface requiring precision materials engineering,
  2. film depositions are either atomic-layer or selective films or even lattice-matched,
  3. pattern definition using dry selective-removal and directed self-assembly, and
  4. architecture in 3D means high aspect-ratio processing and non-equilibrium processing.

An example of non-equilibrium processing is single-wafer rapid-thermal-annealers (RTA) that today run for nanoseconds—providing the same or even better performance than equilibrium. Figure 2 shows that a cobalt-liner for copper lines along with a selective-cobalt cap provides a 10x improvement in electromigration compared to the previous process-of-record, which is an example of precision materials engineering solving scaling performance issues.

Fig. 2: ElectroMigration (EM) lifetimes for on-chip interconnects made with either conventional Cu or Cu lined and capped with Co, showing 10 times improvement with the latter. (Source: Applied Materials)

“We have to figure out how to control these materials,” reminded Kuppurao. “At 5nm we’re talking about atomic precision, and we have to invent technologies that can control these things reliably in a manufacturable manner.” Whether it’s channel or contact or gate or interconnect, all the materials are going to change as we keep adding more functionality at smaller device sizes.

There is tremendous momentum in the industry behind density scaling, but when economic limits of 2D scaling are reached then designers will have to start working on 3D monolithic. It is likely that the industry will need even more integration of design and manufacturing, because it will be very challenging to keep the cost-per-function decreasing. After CMOS there are still many options for new devices to arrive in the form of spintronics or tunnel-FETs or quantum-dots.

However, Arabi reminded attendees as to why the industry has stayed with CMOS digital synchronous technology leading to design tools and a manufacturing roadmap in an ecosystem. “The industry hit a jackpot with CMOS digital. Let’s face it, we have not even been able to do asynchronous logic…even though people tried it for many years. My prediction is we’ll go as far as we can until we hit atomic limits.”

3D memory for future nanoelectronic systems

Wednesday, June 18th, 2014

By Ed Korczynski, Sr. Technical Editor

The future of 3D memory will be in application-specific packages and systems. That is how innovation continues when simple 2D scaling reaches atomic-limits, and deep work on applications is now part of what global research and development (R&D) consortium Imec does. Imec is now 30 years old, and the annual Imec Technology Forum held in the first week of June in Brussels, Belgium included fun birthday celebrations and very serious discussions of the detailed R&D needed to push nanoelectronics systems into health-care, energy, and communications markets.

3D memory will generally cost more than 2D memory, so a system must demand high speed or small size to justify 3D. Communications devices and cloud servers need high-speed memory. Mobile and portable personalized health monitors need low-power memory. In most cases, the optimum solution does not necessarily need more bits, but perhaps faster bits or more reliable bits. This is why the Hybrid Memory Cube (HMC) provides >160Gb/sec data transfer with Through-Silicon Vias (TSV) through 3D stacked DRAM layers.

“We’re not adding 70-80% more bits like we used to per generation, or even the 40% recently,” explained Mark Durcan, chief executive officer of Micron Technology. “DRAM bits will only grow at the low to mid-20%.” With those numbers come hopes of more stability and less volatility in the DRAM business. Likewise, despite the bit growth rates of the recent past, NAND is moving to 30-40% bit-increase per new ‘generation.’

“Moore’s Law is not over, it’s just slowing,” declared Durcan. “With NAND, we’re moving from planar to 3D, and the innovation is that there are different ways of doing 3D.” Figure 1 shows the six different options that Micron defines for 3D NAND. Micron plans for future success in the memory business to be not just about bit-growth, but about application-specific memory solutions.

Fig. 1: Different options for Vertical NAND (VNAND) Flash memory design, showing cell layouts and key specifications. (Source: Micron Technology)

E. S. Jung, executive vice president of Samsung Electronics, presented an overview of “Samsung’s Breaking the Limits of Semiconductor Technology for the Future” at the Imec forum. Samsung Semiconductor announced its first DRAM product in 1984, and has been improving its capabilities in design and manufacturing ever since. Samsung also sees the future of memory chips as part of application-specific systems, and suggests that all of the innovation in end-products we envision for the future cannot occur without semiconductor memory.

Samsung’s world leading 3D vertical-NAND (VNAND) chips are based on simultaneous innovation in three different aspects of materials and design:

1) Storage material changed from floating-gate to charge-trap,

2) Structure rotated from horizontal to vertical (using gate-all-around), and

3) Layers stacked.

To accomplish these results, partners from OEMs and specialty-materials suppliers were needed during R&D of the special new hard-mask process required to form 2.5 billion vias with extremely high aspect-ratios.

Rick Gottscho, executive vice president of the global products group at Lam Research Corp., explained in an exclusive interview with SST/SemiMD that with proper control of hardmask deposition and etch processes, the inherent line-edge-roughness (LER) of photoresist (PR) can be reduced. This sort of integrated process module can be developed independently by an OEM like Lam Research, but proving it in a device structure with other complex materials interactions requires collaboration with other leading researchers, and so Lam Research is now part of a new ‘Supplier Hub’ relationship at Imec.

Luc Van den hove, president and chief executive officer of Imec, commented, “We have been working with equipment and materials suppliers from the beginning, but we’re upgrading into this new ‘Supplier Hub.’ In the past most of the development occurred at the suppliers’ facilities and then results moved to Imec. Last year we announced a new joint ‘patterning center’ with ASML, and they’re transferring about one hundred people from Leuven. Today we announced a major collaboration with Lam Research. This is not a new relationship, since we’ve been working with Lam for over 20 years, but we’re stepping it up to a new level.”

Commitment, competence, and compromise are all vital to functional collaboration according to Aart J. de Geus, chairman and co-chief executive officer of Synopsys. Since he has long led a major electronic design automation (EDA) company, de Geus has seen electronics industry trends over the 30 years that Imec has been running. Today’s advanced systems designs require coordination among many different players within the electronics industry ecosystem (Figure 2), with EDA and manufacturing R&D holding the center of innovation.

Fig. 2: Semiconductor manufacturing and design drive technology innovation throughout the global electronics industry. (Source: Synopsys)

“The complexity of what is being built is so high that the guarantee that what has been built will work is a challenge,” cautioned de Geus. Complexity in systems is a multiplicative function of the number of components, not a simple summation. Consequently, design verification is the greatest challenge for complex System-on-Chips (SoC). Faster simulation has always been the way to speed up verification, and future hardware and software need co-optimization. “How do you debug this, because that is 70% of the design time today when working with SoCs containing re-used IP? This will be one of the limiters in terms of product schedules,” advised de Geus.
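De Geus' point that complexity multiplies rather than adds can be seen in a trivial sketch; the block state counts below are arbitrary examples, not data from Synopsys.

# Arbitrary example: four reused IP blocks with modest individual state counts.
block_states = [12, 8, 20, 5]

additive = sum(block_states)          # what simple intuition might suggest
multiplicative = 1
for states in block_states:
    multiplicative *= states          # what the composed SoC actually exposes

print(f"Sum of block states     : {additive}")        # 45
print(f"Product of block states : {multiplicative}")  # 9600

# Every added block multiplies the state space the verification suite must cover,
# which is why simulation speed and debug dominate SoC design schedules.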

Whether HMC stacks of DRAM, VNAND, or newer memory technologies such as spintronics or Resistive RAM (RRAM), nanoscale electronic systems will use 3D memories to reduce volume and signal delays. “Today we’re investigating all of the technologies needed to advance IC manufacturing below 10nm,” said Van den hove. The future of 3D memories will be complex, but industry R&D collaboration is preparing the foundation to be able to build such complex structures.

DISCLAIMER:  Ed Korczynski has or had a consulting relationship with Lam Research.

Blog review March 24, 2014

Monday, March 24th, 2014

IBS has recently issued a new white paper entitled Why Migration to 20nm Bulk CMOS and 16/14nm FinFETs Is Not the Best Approach for the Semiconductor Industry. Handel Jones of IBS says the focus of the analysis is on technology options that can be used to give lower cost per gate and lower cost per transistor within the next 24 to 60 months, covering the 28nm, 20nm and 14/16nm nodes.

Sitaram Arkalgud of Invensas and Rich Rogoff of Rudolph Technologies will present this Thursday as part of a free webcast focused on 2.5/3D integration and advanced packaging, including new lithography options. Sitaram, who formerly led the 3D charge at SEMATECH, is now the vp of 3D technology at Invensas, and Rich is the vp and general manager of the lithography systems group at Rudolph.

Phil Garrou provides a delightful lecture to the packaging community on nomenclature. He says the word “lecture” is one of those wonderful English words with multiple meanings. Lecture can mean “a talk or speech given to a group of people to teach them about a particular subject,” but it can also mean “a talk that criticizes someone’s behavior in an angry or serious way.” In his latest blog, lecturing means both!

Experts At The Table: Exploring the relationship between board-level design and 3D, and stacked, dies

Tuesday, December 17th, 2013

By Sara Verbruggen

SemiMD discussed what board level design can tell us about chip-level (three-dimensional) 3D and stacked dies with Sesh Ramaswami, Applied Materials’ Managing Director, TSV and Advanced Packaging, Advanced Product Technology Development, and Kevin Rinebold, Cadence’s Senior Product Marketing Manager. What follows are excerpts of that conversation.

SemiMD: What key, or major, challenge does the transition to 3D and stacked dies – and increasingly ‘advanced packaging’ – present when it comes to board-level design?

Ramaswami: The three-layer system architecture comprising the printed circuit board (PCB) system board, organic packaging substrate and silicon die offers the greatest integration flexibility. From a design perspective, this configuration places the most intensive co-design challenges on the die and substrate layers. On the substrate, the primary challenges are dielectric material, copper (Cu) line spacing and via scaling. However, when the packaged die attaches to the PCB through the ball grid array (BGA), the surface-mount packaging used for integrated circuits in devices such as microprocessors, the design challenges are more considerable. For example, they include limitations on chip size (I/O density), warpage and worries about coefficient of thermal expansion mismatch between the materials.

Rinebold: Any advanced ‘BGA style’ package, regardless of whether it is three-dimensional (3D) or flat, can have a significant impact on PCB layer count, route complexity and cost. Efficient package ball pad net assignment and patterning of power and ground pins can make the difference between a four-layer and a six-layer PCB. Arriving at the optimal ball pad assignment necessitates coordinated planning across the entire interconnect chain from chip-level macros to board-level components. This planning requires new tools and flows capable of delivering a multi-fabric view of the system hierarchy while providing access to domain-specific data like macro placement, I/O pad ring devices, bump patterns, ball pad assignments, and placement of critical PCB components and connectors.

SemiMD: 3D chip stacking and stacked die chip-scale packaging is favoured by the consumer electronics industry to enable better performing mobile electronics – in terms of faster performance, less power hungry devices, and so forth – but how do PCB design and testing tools need to adapt?

Rinebold: One benefit of these package formats is that they entail moving most of the high-performance interconnect and components off the PCB onto their own dedicated substrate. With increasing data rates and lower voltages there is little margin for error across the entire system, placing a premium on signal quality and power delivery between the board and package.

In addition to high-speed constraints and checking, design tools must provide innovative functionality to assist the designer in implementing high-performance interconnect. In some situations complete automation (like auto-routing) cannot provide satisfactory results while still enforcing the many diverse and sometimes ambiguous constraints. Designers will require auto-interactive tools that enable them to apply their experience and intuition, supported by semi-automatic route engines, for efficient implementation of constraints and interconnect. Examples of such tools include the ability to plan and implement break-out on the two ends of an interface connecting to high pin-count BGAs to reduce route time and via counts. Without such tools the time to route high pin-count BGAs can increase significantly.

Methodologies must adapt to incorporate electrical performance assessment (EPA) into the design process. EPA enables designers to evaluate electrical quality and performance throughout the design process, helping avoid the backend analysis crunch that can jeopardize product delivery. It utilizes extraction technology in a manner that provides actionable feedback to the designer, helping identify and avoid issues related to impedance discontinuities, timing, coupling, or direct current (DC) density.

SemiMD: More specifically, what impact will this trend towards greater compactness – i.e. smaller PCB footprint, but with more stacked dies and complex packaging – have on interconnection technologies?

Ramaswami: The trend towards better-quality, higher-component-density PCBs capable of supporting a wide range of die has significant implications for interconnect design. An additional challenge is attaching complex chips on both sides of a board. Furthermore, with PCBs going thinner to fit the thin form-factor requirements of mobile devices, dimensional stability and warpage must be addressed.

Rinebold: In some regards stacked applications simplify board level layout by moving high-bandwidth interconnect off the PCB and consolidating it on smaller, high density advanced package substrates. However, decreasing package ball pad pitch and increased pin density will drive use of build-up substrate technology for the PCB. This high density interconnect (HDI) enables smaller feature sizes and manufacturing accuracy necessary to support the fan-out routing requirements of these advanced package formats. Design tools must support HDI constraints and rules to ensure manufacturability along with functionality to define and manipulate the associated structures like microvias.

SemiMD: How will PCB manufacturing processes, tools and materials need to change to address this challenge?

Ramaswami: To manufacture a more robust integrated 3D stack, I think several fundamental innovations are needed. These include improving defect density and developing new materials such as low warpage laminates and less hygroscopic dielectrics. Another essential requirement is supporting finer copper line/spacing. Important considerations here are maintaining good adhesion while watching out for corrosion. Finally, for creating the necessary smaller vias, the industry needs new etching techniques to replace mechanical drilling techniques.

SemiMD: So as 3D chip stacking and stacked dies become more mainstream technologies, how will board level design need to develop, in the years to come?

Rinebold: One challenge will be visibility and consideration of the PCB during chip-level floor-planning and awareness of how decisions made early on impact downstream performance and cost. New tools that deliver a multi-fabric view of the system hierarchy while providing access to domain specific data will facilitate the necessary visibility for coordinated decision making. However these planning tools are just one component of an integrated flow encompassing logic definition, implementation, analysis, and sign-off for the chip, package, and PCB.

Blog Review: December 2, 2013

Monday, December 2nd, 2013

Phil Garrou completes his look at various packaging and 3D integration happenings from Semicon Taiwan, including news from Disco, Namics and Amkor. Choon Lee of Amkor, for example, predicted a silicon interposer cost of $2.70-4.00/cm² (100mm²) and expects organic interposers to cost roughly 50% less.

Dynamic resource allocation can significantly improve turnaround time in post-tapeout flow. Mark Simmons of Mentor Graphics blogs about recent work that demonstrated 30% aggregate turnaround time improvement for a large set of jobs in conjunction with a greater than 90% average utilization across all hardware resources.

The MEMS Industry Group blog reflects on the trend toward sensor fusion and the role that hardware approaches such as FPGAs and microcontrollers will play in moving the technology forward.

44 years ago, the internet was born when two computers, one at UCLA and one at the Stanford Research Institute, connected over ARPANET (Advanced Research Projects Agency Network) to exchange the world’s first “host-to-host” message. Ricky Gradwohl of Applied Materials celebrates the “birthday” with thoughts on how far the internet has come.

Eliminating the Challenges of Giga-Scale Circuit Design With Nano-Scale Technologies

Monday, November 25th, 2013

By Dr. Lianfeng Yang, Vice President of Marketing, ProPlus Design Solutions, Inc., San Jose, Calif.

These days, circuit designers are grappling with ever-increasing giga-scale circuit sizes. Semiconductor CMOS technology has scaled down to the nanometer regime, making designing for yield (DFY) mandatory and compelling designers to re-evaluate how they design and verify their chips.

That’s what brought more than 150 engineers from foundries and fabless semiconductor companies in and around Shanghai, China, in early November to hear a visionary talk from Dr. Chenming Hu, TSMC Distinguished Professor of the Graduate School at the University of California, Berkeley. Professor Hu, giving the keynote during a ProPlus seminar, offered a perspective on the 3D FinFET transistor, the emerging technology that he and his team invented. It was a great day for all attendees, as many of them were able to ask in-depth questions about the challenges at advanced nodes such as 28nm and 16nm.

Dr. Chenming Hu, TSMC distinguished professor of the Graduate School at the University of California, Berkeley talks at the ProPlus seminar.

Professor Hu, this year’s recipient of the Phil Kaufman Award from the IEEE Council for EDA and the EDA Consortium, is a long-time friend and advisor of ProPlus’. Several members of our team, including Zhihong Liu, ProPlus’ executive chairman, were part of a research group he led with Professor Ping K. Ko that invented the first industry-standard MOSFET SPICE model known as BSIM3. (I’ll save the details on this for ProPlus’ next blog.)

One day after the seminar in Shanghai, we were in Taiwan for a similar seminar, though Professor Hu did not join us. This group of engineers gave us a similar assessment of their challenges and ongoing concerns.

The general consensus from both groups is that they would benefit from having more closely integrated modeling, SPICE simulation and DFY technologies. Their perspective is one that is generally shared throughout the semiconductor industry and the EDA industry is starting to respond.

Many of the attendees we talked with over the two days commented on the challenges of good design: modeling small transistors, then putting billions of nano-scale transistors together and making the whole chip functional. This is a challenge for the foundries as well, because they have to manufacture these small transistors with good yield.

Process variations create difficulties in accurately modeling nano-scale transistors because they create multi-dimensional uncertainties in device characteristics. Moving to the 16- and 14nm nodes, the 3D FinFET structure adds more modeling challenges due to its new geometry and complicated parasitics. As such, circuit designers are expected to understand the coverage, usage and limitations of foundry SPICE models.

They’re also challenged with finding the means to put a huge number of elements together. EDA vendors have taken notice here as well because they face the challenge of simulating a large-sized circuit with high enough accuracy and affordable simulation time.

Traditional FastSPICE is showing its age and limitations. The technology trend and advanced circuit designs require a highly accurate SPICE simulator that can handle giga-scale circuit simulations. Parallelization technology is the key, but no commercial SPICE simulator with patched-on parallel solutions can meet the need. The trend we see is toward a giga-scale SPICE simulator with parallelization built in from the ground up, delivering giga-scale capacity with no accuracy compromises and significant speedup over traditional SPICE. At 16- and 14nm, FinFET circuit design sizes increase dramatically due to the 3D structure and complex parasitics. Giga-scale SPICE meets such challenges. No small feat, as the circuit designers pointed out.

Using nano-scale elements to design giga-scale circuits presents its own challenges, mainly due to variability, a DFY issue. Having a large number of extremely small elements, nanometer-sized transistors, tightly packed together is a variability nightmare, because every tiny variation could change the function, performance or yield of the whole product. This challenge grows with each technology advancement.

Caption: The design and manufacturing challenges for foundries, fabless design houses and EDA vendors. (Figure sources: Intel Tri-Gate transistors and Intel i7 CPU).

Such variation can be accounted for in the design phase. The questions are how to accurately model small variations, how to efficiently simulate a large circuit with a small variation on each element, and, given those modeling and simulation capabilities, how to improve designs to achieve optimum performance and yield.

Yes, a huge challenge, but critical for advanced IC designs. Depending on the number of instances to be varied, simulating the impact of variations, essentially Monte Carlo simulation, would require a different number of samplings, ranging from thousands (3σ) to billions (>6σ).

Consequently, the keys here are accurate modeling, giga-scale simulation and advanced high-sigma sampling technologies that can reduce the number of samplings by orders of magnitude with the same level of accuracy. FinFET creates additional challenges as it requires very high-sigma simulations (e.g., 7σ) for SRAM designs.
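The sampling arithmetic behind those numbers is easy to sketch; the short Python estimate below assumes a simple one-sided Gaussian tail and is only a rough illustration of why brute-force Monte Carlo breaks down at high sigma.

from math import erf, sqrt

def tail_probability(sigma):
    """One-sided Gaussian tail probability beyond the given sigma."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

TARGET_FAILS = 10   # need to observe a handful of failures for a meaningful estimate

for sigma in (3, 4, 5, 6, 7):
    p_fail = tail_probability(sigma)
    runs_needed = TARGET_FAILS / p_fail
    print(f"{sigma} sigma: p_fail ~ {p_fail:.2e}, brute-force runs needed ~ {runs_needed:.1e}")

# ~3 sigma needs a few thousand SPICE runs; ~6 sigma needs ~1e10; the ~7 sigma
# targets for FinFET SRAM bit-cells are hopeless by brute force, which is why
# high-sigma sampling methods that cut the count by orders of magnitude matter.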

The answer as we heard from the circuit designers in China and Taiwan and others is that the only way out of these challenges is to more tightly integrate tools for nano-scale modeling, giga-scale SPICE simulation and DFY.

Dr. Lianfeng Yang currently serves as the Vice President of Marketing at ProPlus Design Solutions, Inc. Prior to co-founding ProPlus, he was a senior product engineer at Cadence Design Systems leading the product engineering and technical support effort for the modeling product line in Asia. Dr. Yang has over 40 publications and holds a Ph.D. degree in Electrical Engineering from the University of Glasgow in the U.K.

Blog Review November 18 2013

Monday, November 18th, 2013

Dick James of Chipworks says that 28-nm samples they have seen from GLOBALFOUNDRIES and Samsung are remarkably similar, and ponders the possibility of Apple’s A7 chips being fabricated in New York in the not too distant future.

Recent progress in silicon photonics and optical interconnects is the focus of Pete Singer’s blog. Fujitsu and Intel recently demonstrated the world’s first Optical PCI Express (OPCIe) based server, using Intel’s silicon photonics chip. Ludo Deferm of imec talks about what’s needed for intrachip optical communication.

Rich Wawrzyniak of Semico talks about what he learned from a discussion with Sundar Iyer, CEO of Memoir Systems, on the company’s new Pattern Aware Memory IP technology. Memoir has identified several different types of memory-processor operations and has created memories that perform these functions in the normal course of their operation within the system. In addition, this approach can save designers and device architects a considerable amount of die area, producing tangible power savings while increasing device performance.

Phil Garrou covers three new developments in the area of 3D integration this week. He looks at work from Leti/ST Microelectronics that explored the limits of conventional interconnects on RDL (vs damascene). They were able to achieve 8 µm line/spaces with high uniformity and reproducibility. He also reports on work from BESI and imec on thin wafer handling, and a new low temp via reveal process developed by SPTS.

Crossbar Unveils Resistive RAM with Simple, Three-Layer Structure

Sunday, September 1st, 2013

By Pete Singer

Crossbar, Inc., a start-up company, unveiled a new Resistive RAM (RRAM) technology that will be capable of storing up to one terabyte (TB) of data on a single 200mm2 chip. A working memory array was produced at a commercial fab, and Crossbar is entering the first phase of productization. “We have achieved all the major technical milestones that prove our RRAM technology is easy to manufacture and ready for commercialization,” said George Minassian, chief executive officer, Crossbar, Inc. The company is backed by Artiman Ventures, Kleiner Perkins Caufield & Byers and Northern Light Venture Capital.

The technology, which was conceived by Professor Wei Lu of the University of Michigan, is based on a simple three-layer structure of silver, amorphous silicon and silicon (FIGURE 1). The resistance switching mechanism is based on the formation of a filament in the switching material when a voltage is applied between the two electrodes. Minassian said the RRAM is very stable, capable of withstanding temperature swings up to 125°C, with up to 10,000 cycles, and a retention of 10 years. “The filaments are rock solid,” he said.

Crossbar has filed 100 unique patents, with 30 already issued, relating to the development, commercialization and manufacturing of RRAM technology.

After completing the technology transfer to Crossbar’s R&D fab and technology analysis and optimization, Crossbar has now successfully developed its demonstration product in a commercial fab. This working silicon is a fully integrated monolithic CMOS controller and memory array chip. The company is currently completing the characterization and optimization of this device and plans to bring its first product to market in the embedded SOC market.

Sherry Garber, Founding Partner, Convergent Semiconductors, said: “RRAM is widely considered the obvious leader in the battle for a next-generation memory, and Crossbar is the company most advanced in showing a working demo that proves the manufacturability of RRAM. This is a significant development in the industry, as it provides a clear path to commercialization of a new storage technology, capable of changing the future landscape of electronics innovation.”

FIGURE 1. The resistance switching mechanism of Crossbar’s technology is based on the formation of a filament in the silicon-based switching material when a voltage is applied between the two electrodes.

Crossbar technology can be stacked in 3D, delivering multiple terabytes of storage on a single chip. Its simplicity, stackability and CMOS compatibility enables logic and memory to be integrated onto a single chip at the latest technology node (FIGURE 2).

Crossbar’s technology will deliver 20x faster write performance; 20x lower power consumption; and 10x the endurance at half the die size, compared to today’s best-in-class NAND Flash memory. Minassian said the biggest advantage of the technology is its simplicity. “That allowed us in three years time to get from technology understanding, characterization, cell array and put a device together,” he said.

Minassian said RRAM compares favorably with NAND, which is getting more complex and expensive. “In 3D NAND, you put all of these thin layers on top of each other – 32 layers, or 64 or 128 in some cases – then you have to etch them, you have to slice them all at once, and the equipment required for that accuracy and that geometry is very expensive. This is one of the reasons that 3D NAND has been very difficult to introduce.” With the Crossbar approach, “you’re always dealing with three layers. It’s much easier to stack these and it gives you a huge density advantage,” Minassian said.

“The switching media is highly resistive,” explains Minassian. “If you try to read the resistance between the top and bottom electrode without doing anything, it’s a high resistance. That’s the off state. To turn on the device, we apply a positive voltage to the top electrode. That ionizes the metal on the top layer and puts the metal ions into the switching media. The metal ions form a filament that connects the top and bottom electrode. The moment they hit the bottom electrode, you have a short, which means that the top and bottom electrodes are connected and have a low resistance.” The low resistance state is the on state. He said that although silver is not commonly used in front-end CMOS processing, the RRAM memory formation process is a back-end process. “You produce all your CMOS and then right before the device exits the fab, you put the silver on top,” he said. The silver is deposited, encapsulated, etched and then packaged. “That equipment is available, you just have to isolate it at the end,” Minassian said.
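The set/read behavior Minassian describes can be captured in a toy behavioral model like the Python sketch below; the threshold voltages and resistance values are invented for illustration, and the reverse-bias reset shown is the generic filamentary-RRAM mechanism rather than a Crossbar specification.

class RRAMCell:
    """Toy behavioral model of a filamentary RRAM bit cell (illustrative values only)."""
    R_OFF = 10_000_000    # ohms: no filament, high-resistance off state
    R_ON = 10_000         # ohms: metal filament bridges the electrodes, on state
    V_SET = 2.0           # volts on the top electrode to drive silver ions in
    V_RESET = -2.0        # assumed reverse bias that dissolves the filament

    def __init__(self):
        self.filament = False

    def apply(self, v_top):
        """Program the cell by biasing the top electrode."""
        if v_top >= self.V_SET:
            self.filament = True       # ions form a conductive bridge: low resistance
        elif v_top <= self.V_RESET:
            self.filament = False      # filament retracts: high resistance
        # small read voltages leave the state unchanged, so storage is non-volatile

    def read_current(self, v_read=0.2):
        """Sense current at a small read voltage; high current reads as logic 1."""
        return v_read / (self.R_ON if self.filament else self.R_OFF)

cell = RRAMCell()
cell.apply(2.5)    # set
print(f"on-state read current : {cell.read_current() * 1e6:.1f} uA")
cell.apply(-2.5)   # reset
print(f"off-state read current: {cell.read_current() * 1e6:.3f} uA")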

FIGURE 2. Crossbar’s simple and scalable memory cell structure enables a new class of 3D RRAM which can be incorporated into the back end of line of any standard CMOS manufacturing fab.

The approach is also CMOS compatible, with processes used to fabricate the memory layers all running at less than 400°C. “This allows you to not only be CMOS compatible, but it allows you to stack more and more of these memory layers on top of each other,” Minassian said. “You can put the logic, the controllers and microprocessors, next to the memory in the same die. That allows you to simplify packaging and increase performance.”

Another advantage compared to NAND is that the controllers used to address the cells can be less complicated. Minassian said that in conventional cells, 30 electrons are required to produce 1 Volt. “If you shrink that to a smaller node, the number of electrons is less. Fewer electrons are much harder to detect. You need a massive controller that does error recovery and complex coding so if the bits are changed, it can still provide you the right program to execute.” Also, because the Crossbar RRAM is capable of 10,000 write cycles, less complicated controllers are needed. Today’s NAND is capable of only 1000 write cycles. “If you write information 1000 times, that cell is destroyed. It will not contain or maintain the information. You have this complex controller that keeps track of how many cells have been written, how many times, to make sure all of them are aged equally,” Minassian said.
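The electron-counting problem behind that controller complexity follows directly from Q = CV; the storage-node capacitances in the sketch below are assumed ballpark figures chosen to reproduce the roughly 30 electrons quoted above, not published NAND specifications.

Q_ELECTRON = 1.602e-19   # coulombs per electron

def electrons_per_volt(node_capacitance_f, delta_v=1.0):
    """Electrons needed on a storage node to shift its potential by delta_v."""
    return node_capacitance_f * delta_v / Q_ELECTRON

for cap_aF in (5.0, 2.5, 1.0):   # shrinking the cell shrinks its capacitance
    n = electrons_per_volt(cap_aF * 1e-18)
    print(f"{cap_aF:4.1f} aF storage node -> ~{n:.0f} electrons per volt")

# ~5 aF reproduces the ~30 electrons quoted above; halving the cell roughly halves
# the count, shrinking read margins and forcing ever more complex error correction.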

Non-volatile memory, expected to grow to become a $60 billion market in 2013, is the most common storage technology used for both code storage (NOR) and data storage (NAND) in a wide range of electronics applications. Crossbar plans to bring to market standalone chip solutions, optimized for both code and data storage, used in place of traditional NOR and NAND Flash memory. Crossbar also plans to license its technology to SOC developers for integration into next-generation systems-on-chips (SOC).

Michael Yang, Senior Principal Analyst, Memory and Storage, IHS, said: “Ninety percent of the data we store today was created in the past two years. The creation and instant access of data has become an integral part of the modern experience, continuing to drive dramatic growth for storage for the foreseeable future. However, the current storage medium, planar NAND, is seeing challenges as it reaches the lower lithographies, pushing against physical and engineering limits. The next generation non-volatile memory, such as Crossbar’s RRAM, would bypass those limits, and provide the performance and capacity necessary to become the replacement memory solution.”
