Solid State Technology



Posts Tagged ‘Applied Materials’


The Week in Review: March 21, 2014

Friday, March 21st, 2014

Research from University of California, Berkeley scientists sponsored by Semiconductor Research Corporation (SRC) promises to revolutionize portable radio frequency (RF) electronics and communication systems via advancements in on-chip inductors by leveraging embedded nanomagnets. The UC Berkeley research focuses on using insulated nano-composite magnetic materials as the filling material to shrink the size and improve the performance of high frequency on-chip inductors, thereby enabling a new wave of miniaturized electronics and wireless communications devices.

North America-based manufacturers of semiconductor equipment posted $1.29 billion in orders worldwide in February 2014 (three-month average basis) and a book-to-bill ratio of 1.00, according to the February EMDS Book-to-Bill Report published today by SEMI.   A book-to-bill of 1.00 means that $100 worth of orders were received for every $100 of product billed for the month.
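The book-to-bill arithmetic can be sketched in a few lines; a minimal illustration, in which the billings figure is inferred from the reported 1.00 ratio rather than taken from the report itself:

```python
# Back-of-envelope book-to-bill calculation using the February 2014
# figures reported by SEMI. Bookings are from the report; billings
# are inferred from the reported 1.00 ratio (an assumption for
# illustration, not a figure from the report).

def book_to_bill(bookings_musd: float, billings_musd: float) -> float:
    """Three-month-average bookings divided by billings."""
    return bookings_musd / billings_musd

bookings = 1290.0  # $1.29 billion in orders (three-month average)
billings = 1290.0  # implied by the reported ratio of 1.00

ratio = book_to_bill(bookings, billings)
print(f"book-to-bill = {ratio:.2f}")  # 1.00: $100 of orders per $100 billed
```

A ratio above 1.00 would indicate orders outpacing shipments; below 1.00, the reverse.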

Dr. Tzu-Yin Chiu, Chief Executive Officer & Executive Director of SMIC presented the SEMICON China 2014 opening keynote yesterday and was given a SEMI Outstanding EHS Achievement Award.

EV Group (EVG), a supplier of wafer bonding and lithography equipment for the MEMS, nanotechnology and semiconductor markets, today announced that it has opened a new, wholly owned subsidiary in Shanghai, called EV Group China Ltd., which will serve as regional headquarters for all of EVG’s operations in China.  The new subsidiary, which houses a local service center and spare parts management facility, will further strengthen EVG’s presence in the region and support the company’s ongoing efforts to improve service and response times to local customers.

ChaoLogix, Inc., a semiconductor technology provider focused on developing embedded security and low-power design intellectual property, today introduced ChaoSecure technology that deters side channel attacks on semiconductor chips and contributes a superior layer of security compared to existing solutions. ChaoLogix’s ChaoSecure technology is a hardware-based solution designed to provide optimal performance at the nexus of security and power. Proven in silicon and validated by an independent security lab, ChaoSecure is a secure standard cell library that can be easily integrated into an existing integrated circuit (IC) — making it the ideal security solution in terms of cost and performance for designing complex applications ranging from smart cards to smart phones.

Applied Materials, Inc. this week announced that it was named a 2014 World’s Most Ethical Company by the Ethisphere Institute, an independent center of research promoting best practices in corporate ethics and governance. This is the third consecutive year Applied Materials has received the annual award, which recognizes organizations that continue to demonstrate ethical leadership and corporate behavior.

Blog review March 17, 2014

Monday, March 17th, 2014

Pete Singer is delighted to report that Dr. Roawen Chen, Senior Vice President of global operations at Qualcomm, has accepted our invitation to deliver the keynote talk at The ConFab on Monday, June 23rd. As previously announced, Dr. Gary Patton, Vice President of IBM’s Semiconductor Research and Development Center in East Fishkill, New York, will deliver the keynote on the second day, Tuesday, June 24th.

Phil Garrou takes a look at what was reported at SEMI’s 2.5/3D IC Summit held in Grenoble, focusing on presentations from Gartner, GLOBALFOUNDRIES, TSMC and imec. He writes that GLOBALFOUNDRIES has been detailing its imminent commercialization of 2.5/3D IC for several years, and provided a chart showing the current status. TSMC offered a definition of its supply chain model, in which OSATs are now integrated.

Bharat Ramakrishnan of Applied Materials writes about the importance of wearable electronics in the Internet of Things (IoT) era, and the role that precision materials engineering will play. He notes that one key part of the wearables ecosystem still in need of new innovations is the battery. Two of the biggest challenges to overcome are the thick form factor dictated by battery size, and inadequate battery life, which requires frequent recharging.

Blog review March 10, 2014

Monday, March 10th, 2014

Pete Singer is pleased to announce that IBM’s Dr. Gary Patton will provide the keynote talk at The ConFab on Tuesday, June 24th. Gary is Vice President of IBM’s Semiconductor Research and Development Center in East Fishkill, New York, and has responsibility for IBM’s semiconductor R&D roadmap, operations, and technology development alliances.

Nag Patibandla of Applied Materials describes a half-day workshop at Lawrence Berkeley Lab that assembled experts to discuss challenges and identify opportunities for collaboration in semiconductor manufacturing including EUV lithography, advanced etch techniques, compound semiconductors, energy storage and materials engineering.

Adele Hars of Advanced Substrate News reports on a presentation by ST’s Joël Hartmann (EVP of Manufacturing and Process R&D, Embedded Processing Solutions) during SEMI’s recent ISS Europe Symposium. FD-SOI is significantly cheaper, outdoes planar bulk and matches bulk FinFET in the performance/power ratio, and keeps the industry on track with Moore’s Law, she writes.

Phil Garrou reports on the RTI Architectures for Semiconductor Integration & Packaging (ASIP) conference, which focuses on commercial 3D IC technology. Timed for release at RTI ASIP was the announcement that Novati had purchased the Ziptronix facility outside Research Triangle Park, NC. Tezzaron had been a licensee of Ziptronix’s direct bonding technologies, ZiBond™ and DBI®, and now has control of the Ziptronix facility to serve as a second source for its processing. In addition, Tezzaron’s Robert Patti announced a partnership with Invensas on 2.5D and 3D IC assembly.

Vivek Bakshi, EUV Litho, Inc., blogs that most of the papers at this year’s EUVL Conference during SPIE’s 2014 Advanced Lithography program focused on topics relating to EUVL’s entrance into high volume manufacturing (HVM).

On March 2, 2014, SIA announced that worldwide sales of semiconductors reached $26.3 billion for the month of January 2014, an increase of 8.8% from January 2013, when sales were $24.2 billion. After adding in semiconductor sales from excluded companies such as Apple and SanDisk, the total is even higher, marking the industry’s highest-ever January sales total and the largest year-over-year increase in nearly three years. These results are in line with the Semico IPI index, which has been projecting strong semiconductor revenue growth for the first and second quarters of 2014.

Experts At The Table: Commercial potential and production challenges for 3D NAND memory technology

Thursday, February 6th, 2014

The last six months have seen several developments concerning 3D memory concepts moving into production, from companies such as Samsung, Micron, Toshiba and SanDisk. What follows are excerpts from a roundtable discussion between SemiMD and Samsung Electronics (SE) of South Korea, which has begun production of its proprietary 3D NAND technology; Bradley Howard, Vice President of the Advanced Technology Group, Etch Business Unit, at Applied Materials; and Jim Handy of Objective Analysis, a firm that specialises in coverage of the memory industry.

SemiMD: Where does 3D NAND fit into the long-term roadmap for memory technology and is there one technology most likely to dominate?

SE: We successfully came out with the industry’s first 3D NAND (V-NAND) in August 2013, which offers 128 Gb on a chip and vertically stacks cell layers that make use of charge trap flash (CTF) technology. The V-NAND has already been used in 960 GB solid state drives (SSDs) for server and enterprise applications.

The V-NAND technology is expected to gradually replace the planar NAND market, starting from the high-end enterprise market. We will continue to come out with more advanced V-NAND products offering higher density and reliability. The company has also been working on a variety of next-generation memory technologies, including RRAM (or ReRAM), while strengthening its future business competitiveness.

Howard: All major memory customers have the 3D NAND transition in their roadmaps. We’ll soon see the first generation of 24-layer 3D cell array devices enter the market. RRAM and STT-MRAM technologies are further from market, as there are still critical process and manufacturing challenges for both the materials and the patterning.

Handy: NAND will dominate. 3D NAND is less disruptive than alternative technologies, like RRAM, since it involves the same materials that have been used to produce NAND for its entire lifetime, while RRAM, MRAM, FRAM and so on require new materials that are not as well understood. After 3D NAND has reached a limit and can no longer place increasing numbers of transistors onto a wafer then the door is opened for alternative technologies like RRAM, but that won’t happen until 2023.

SemiMD: How is the semiconductor industry (such as foundries and designers) preparing for the transition to 3D memory?

Handy: The bulk of the semiconductor industry doesn’t need to transition since these technologies are only being used for the highest-density discrete memory chips. The most significant impact should be to the capital equipment market, where a move to 3D could increase materials equipment usage while decreasing spending on lithography tools.

SE: We are working closely with global IT companies in a wide range of fields to expand the market base and application of 3D V-NAND, and we expect that the market will grow rapidly throughout the year. The 3D V-NAND is expected to be adopted in many different applications including SSDs, high-density memory cards and other applications for consumer electronics. While Samsung will work on more V-NAND based applications, the company also will contribute to global IT companies’ development of next-generation IT systems using our 3D V-NAND products.

SemiMD: Are there specific tooling challenges that must be overcome?

SE: The key technologies for V-NAND would be applying 3D CTF structure for individual cells and constructing vertically interconnected cell arrays. We have mastered these technological challenges and will continue to come up with more advanced V-NAND products.

Howard: Fabricating vertically to build multilayer stacks of 3D NAND cells reduces the historical reliance on lithography as the dominant and limiting factor in scaling, and increases the role of materials-enabled deposition and etch to drive vertical scaling. This shift brings formidable device performance and yield challenges for deposition and etch technologies including distortion-free high aspect ratio etching, complex staircase patterning with precise step-width control, and uniform and repeatable deposition.

From a 3D NAND fab perspective, the changing balance of tool types toward significantly more deposition and etch equipment will have a substantial impact on tool footprint and fab layout to enable optimum manufacturing efficiency. Moreover, this new tool balance compromises the capability to adjust manufacturing capacity between NAND and DRAM since planar NAND and DRAM share a high level of commonality with regard to the balance of tool types and lithography.

Handy: For 3D NAND there are significant challenges in putting down layers that have uniform thickness across the entire wafer. There are also issues with pull-back etching for stair steps, which currently increases the lithography load more than was originally anticipated, but this issue should eventually be solved. For alternative technologies there will be issues in bringing new materials into the fab, some of which are antagonistic to the underlying silicon.

SemiMD: To what extent can 3D memory chips be scaled, and what challenges does this pose?

Howard: Scaling for the first few generations, from 32 to 48 to 64 and higher cell-layer stacks, will largely be a matter of adding more vertical layers. The general consensus is that this is sustainable to around 100 device layers, and it will likely require some reduction in layer thickness to control the aspect ratios that must be etched. Even small reductions in thickness are critical, because any reduction gets multiplied by the number of layers. Scaling beyond this will likely require more effort towards thinning the layers through advancements in the device architecture.
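Howard’s point that even small thickness reductions get multiplied by the layer count can be put into a quick back-of-envelope calculation; the per-layer thicknesses below are illustrative assumptions, not figures from the roundtable:

```python
# Illustrative only: the per-layer thicknesses are assumed round
# numbers. The point is Howard's: any per-layer reduction is
# multiplied by the number of layers in the stack.

def stack_height_nm(layers: int, layer_thickness_nm: float) -> float:
    """Total height of a 3D NAND cell stack, one thickness per layer."""
    return layers * layer_thickness_nm

base = stack_height_nm(64, 50.0)     # 64 layers at an assumed 50 nm each
thinned = stack_height_nm(64, 45.0)  # a modest 5 nm per-layer reduction

print(base - thinned)  # 320.0 nm saved: small per-layer savings add up
```

The same 5 nm reduction on a 100-layer stack would save 500 nm of etch depth, which is why thickness control grows in importance with layer count.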

Lithographic-based horizontal scaling will continue, but instead of the historical 15-20 percent CD reduction per generation, we expect planar scaling to slow dramatically, equivalent to a CD shrink more on the order of ~5 percent per generation. In addition, layout changes in the peripheral circuits to achieve more efficiency, along with more efficient designs for the complex staircase structure to allow for access to the different layers, are expected. These will all play a role in overall die size efficiency.

For deposition, the major challenges in scaling vertically are advanced layer-to-layer thickness and uniformity controls. Any non-uniformity in a film layer will propagate through the stack as subsequent layers are deposited on top of it. With more layers in the device stack, this results in more devices potentially impacted by topography.

The challenge for etch is the growing aspect ratio, even as individual layers in the stack become thinner. Aspect ratios today are already at 60:1. Achieving etch fidelity at such aspect ratios puts pressure on getting higher selectivity to the mask material, which is already very thick. In some cases, the aspect ratio of the hard mask alone is already greater than 20:1, and that is before even starting the etch into the device stack. While thinning the hard mask layer reduces the overall aspect ratio of the feature, new, more resilient patterning films will be required. Higher selectivity will come through a combination of new materials with higher etch resistance and improvements to the etch process chemistry.
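The aspect-ratio arithmetic can be sketched as follows; the 60:1 and 20:1 ratios are from the discussion, while the hole CD and film depths are assumed round numbers chosen only to reproduce them:

```python
# Rough aspect-ratio calculation under assumed dimensions. The 60:1
# stack and 20:1 hard-mask ratios come from the discussion; the hole
# critical dimension (CD) and depths below are illustrative assumptions.

def aspect_ratio(depth_nm: float, cd_nm: float) -> float:
    """Depth of an etched feature divided by its width (CD)."""
    return depth_nm / cd_nm

hole_cd = 100.0          # assumed hole CD, nm
stack_depth = 6000.0     # assumed device-stack depth, nm
mask_thickness = 2000.0  # assumed hard-mask thickness, nm

print(aspect_ratio(stack_depth, hole_cd))                   # 60.0 -> the 60:1 stack etch
print(aspect_ratio(mask_thickness, hole_cd))                # 20.0 -> mask alone is 20:1
print(aspect_ratio(stack_depth + mask_thickness, hole_cd))  # 80.0 seen by the etch front
```

The last line shows why the mask matters: the etch front effectively faces the combined depth of mask plus stack, so thinning the mask directly reduces the total aspect ratio.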

SE: We have the core technologies to develop more advanced high-density V-NAND devices, and we are seeking to define manufacturing technologies for stacking more than the 24-cell-layer structure that was used in our first 3D V-NAND device.

The most advanced process technology for conventional NAND using floating gate would be 10 nm-class technology. However, process technology refers to the width of integrated circuitry used for NAND on a conventional planar structure; applying the same considerations to our V-NAND would not be appropriate, because the cell array structure has been totally changed.

For example, if we compare the wafer productivity of Samsung’s new 128 Gb V-NAND to previous products, it has approximately the chip size of a conventional NAND built using mid-10nm-class process technology.

The 3D V-NAND has its strength in scalability. There are plans to continue developing more advanced V-NAND products and applications including 1 TB and higher-density SSDs for servers and enterprise systems and other next-generation memory storage which should lead to the growth of the overall NAND flash market.

Handy: 3D NAND is expected to scale in height, from 16-bit-tall strings to string heights of more than 128 bits. Meanwhile NAND makers will probably find ways of placing these strings closer to each other through more aggressive lithography. There is a lot of room for scaling in 3D.

Everything in 3D is a significant challenge. With vertical scaling the challenges include etching high aspect ratio holes, with the aspect ratio doubling with each doubling of layers. These holes must have absolutely parallel walls or scaling and device operation may be compromised. If the layers are thinned then the atomic layer deposition (ALD) of the layers must be able to apply a constant thickness layer across the entire wafer. This is also true of the layers that are deposited on the walls of the hole. The entire issue of 3D is its phenomenal complexity.

3D NAND: To 10nm and beyond

Wednesday, January 29th, 2014

By Sara Ver-Bruggen, contributing editor

In launching the iPod music player, Apple boosted consumption of NAND flash, a type of non-volatile storage device, driving down cost and paving the way for the growth of the memory technology into what is now a multibillion-dollar market, supplying cost-effective storage for smart phones, tablets and other consumer electronic gadgets that do not have high density requirements.

The current iteration of NAND flash technology, 2D (or planar) NAND, is reaching its limits. In August 2013, South Korean consumer electronics brand Samsung announced the launch of its 3D NAND storage technology, in the form of a 24-layer, 128 Gb chip. In 2014, memory chipmakers Micron and SK Hynix will follow suit, heralding the arrival of a technology much anticipated and debated at industry conferences in recent years. Other companies, including SanDisk, are also working on 3D NAND flash technology.

Like floors in a tower block, memory cells in 3D NAND devices are stacked on top of each other, as opposed to being spread out on a two-dimensional (2D), horizontal grid like bungalows. Over the last few decades, as 2D NAND technology has scaled, the X and Y dimensions have shrunk with each chip generation. But scaling is proving challenging as process nodes dip below 20nm on the path towards 10nm, with physical constraints beginning to impinge on the performance of the basic memory cell design. While 2D NAND has yet to hit a wall, it is only a matter of time.

Transition to mass production

But despite the potential of 3D NAND and announcements by the leading players in the industry, transferring 3D NAND technology into mass production is very challenging. As Jim Handy, from Objective Analysis, points out: “The entire issue of 3D NAND is its phenomenal complexity, and that is why no one has shipped a 3D NAND chip yet.” Mass production of Samsung’s device will happen this year. With 3D NAND there is the potential for vertical scaling, going from 16-bit-tall strings to string heights of more than 128 bits.

But while 3D NAND does not require leading-edge lithography, eventually resulting in manufacturing costs lower than they would be for the extension of planar NAND, new deposition and etch technologies are required for high-aspect-ratio etch processes. This “staircase” etching requires very precise contact landing. In 3D NAND manufacturing, depositing layers of uniform thickness across the entire wafer is challenging, and pull-back etching of these “stair steps” currently increases the lithography load more than originally anticipated.


“Everything in 3D is a significant challenge. With vertical scaling the challenges include etching high aspect ratio holes, with the aspect ratio doubling with each doubling of layers. These holes must have absolutely parallel walls or scaling and device operation may be compromised. If the layers are thinned then the atomic-layer deposition (ALD) of the layers must be able to apply a constant thickness layer across the entire wafer, which is also true of the layers that are deposited on the walls of the hole,” according to Handy.

Indeed, while the best combination of cost, power and performance will be found in 3D NAND architectures, issues remain, especially concerning cost. These issues, in the context of their respective memory technology roadmaps, were discussed by memory chipmakers including SanDisk, SK Hynix and Micron at a forum organized and sponsored by semiconductor equipment manufacturer Applied Materials in December 2013, where the equipment supplier also provided some in-depth discussion of 3D NAND manufacturing considerations and challenges. The session was hosted by Gill Lee, Senior Director and Principal Member of Technical Staff, Silicon Systems Group, at Applied Materials.

SanDisk plays its 2D hand for as long as possible

Ritu Shrivastava, Vice President of Technology Development at SanDisk Corporation, set out the challenge. “Whenever you talk about technology, it has to be in relation to the objectives of your company. In our case we have a $38 billion total available market projected to 2016, and any technology choices that we make have to serve that market.” Examples of the products he was referring to include smart phones and tablets. “Our goal is to choose technologies that are most cost-effective and deliver in terms of performance.”

SanDisk has a joint NAND fab investment with Toshiba, and the two have had a 128 Gb 2D NAND flash chip using 19 nm lithography in production for a while now. They have also previously announced plans to build a semiconductor fab for 16-17 nm flash memory.

“One of our goals is to extend the life of 2D NAND technologies as far as possible, because it reflects the huge investment that we have made in fabs and the technology over the years,” said Shrivastava. “Of course, 3D NAND is extremely important, and when it becomes cost-effective it will move into production.” SanDisk plans to start producing its 3D NAND chips in 2016.

“We are travelling what we think is the lowest-cost path in every technology generation, going from 19 nm to 1Y, where we are at the limit with lithography, and then we will scale to 1Z, which is our next-generation 2D NAND technology. We believe that this scaling path gives us the lowest cost structure at each of the nodes and in terms of cumulative investment.”

But it is not just about achieving the smallest die size; it is also about the cost involved in scaling. Capital equipment investment is what determines success in the market, according to Shrivastava. “Even though we are saying that 3D NAND is a reality, there are a couple of things that we need to keep in mind. It leverages existing infrastructure, which is good, but there are still a lot of challenges. 3D NAND devices use TFTs as opposed to the floating gate devices commonly used in 2D NAND chips. New controller schemes and boards will also be required.”

So while, according to Shrivastava, 3D NAND is looking very promising, there is a big ‘but’ for a company such as SanDisk, which produces some of the most cost-competitive flash memory devices on the market. “2D NAND still continues to be more cost-effective than 3D NAND, and 3D NAND is not yet proven in volume manufacturing. Every new technology takes some time; getting to mass manufacturing will take time. Our goal is to extend 2D NAND as long as possible, continue to work on 3D NAND and introduce it when it becomes cost-effective.”

Shrivastava sees 2D and 3D NAND technologies co-existing for the rest of the decade. Beyond 3D NAND, the company is developing 3D resistive RAM (RRAM) as its future technology.

From 3D DRAM to 3D NAND

Next, Chuck Dennison, Senior Director of Process Integration at Micron, provided an overview of where the company is today in terms of its own NAND memory technology roadmap.

“Our current generation is 16nm NAND that is now in production, and we’re showing that it is getting to be a very competitive and very cost-effective technology,” according to Dennison. Micron’s new 16nm NAND process provides the greatest number of bits per sq mm at the lowest cost of any multi-level cell (MLC) device. Eight of these die can hold 128 GB of data. The 16nm storage technology will be released in next-generation solid state drives (SSDs) during 2014. SSDs consist of interconnected flash memory chips, as opposed to the magnetically coated platters used in conventional hard disk drives (HDDs).
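The die arithmetic can be checked directly, assuming each 16nm die stores 128 Gb (gigabits), which is consistent with eight die holding 128 GB (gigabytes):

```python
# Checking the die arithmetic: assuming a 128 Gb (gigabit) die, which
# is consistent with the stated eight-die, 128 GB total. Eight bits
# make a byte, so one die stores 16 GB.

GBITS_PER_DIE = 128  # assumed per-die capacity in gigabits

def total_gigabytes(num_die: int, gbits_per_die: int = GBITS_PER_DIE) -> float:
    """Capacity in gigabytes for a stack of identical NAND die."""
    return num_die * gbits_per_die / 8  # 8 bits per byte

print(total_gigabytes(8))  # 128.0 GB from eight 128 Gb die
```

This Gb-versus-GB distinction is a frequent source of confusion in NAND coverage: chip capacities are quoted in gigabits, drive capacities in gigabytes.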

Micron 16nm NAND die

“Our next node is a 256 Gb class of NAND memory. Technically it could be extended before taking the full step to 3D NAND.”

Today NAND is the lowest-cost-per-bit memory technology, and this continued cost-per-bit reduction is really driving the whole NAND industry, according to Dennison. It is why NAND overtook DRAM in terms of total dollars and has continued to proliferate across applications, and why it is responsible for continued innovation in portable consumer electronics such as tablets, where so much functionality (photography, video recording, storage of an entire music library, and so on) can be packed into one device.

Outlining Micron’s technology scaling path, Dennison explained: “We went to high-K/metal gate at 20 nm, and we used the same technology to extend us to 16nm.” From there, the company is moving to a vertical-channel 3D NAND for a 256 Gb class.

“In terms of capital expenditure (CapEx) per wafer it all looks very cost-effective, with a little bit of a transition going to 20 nm,” explained Dennison, because of the high-K metal gate, but with minimal increase going to 16nm. “But when you go to 3D NAND it is expensive per wafer. So if you are increasing your wafer costs by X amount, you need a much higher number of Gb per sq cm, so the density we are choosing to go with is a 256 Gb class. And when you start actively looking at 3D NAND, there are a lot of similarities between 3D NAND and DRAM,” he explained, referring to the stacked capacitor of DRAM. “There is a lot of planarization, and you are etching very high-aspect-ratio contacts where you need very good control of CD uniformity. Then there are a lot of additional modules requiring ALD deposition. So we think that there is a lot of opportunity to utilize our DRAM expertise.”

He outlined an inflection point going from 16nm again. “We’re transitioning to the 256 Gb density. We think that when we do this it will make financial sense, and it will be a cost-effective solution despite the high CapEx. And then from there we will continue. For the majority, or bulk, of the market we’ll see vertical NAND continuing to scale, with a couple of us scaling fast for that market.”

Dennison also touched on longer-term advances in classes of memory, in the form of 3D cross-point technology: memories stacked in cross-point arrays over CMOS logic, to enable a memory technology with speed akin to DRAM but the density and cost-effectiveness of NAND. The 3D stacked memory arrays in cross-point technology would make these devices suitable in the future for very high-density computing and even biological systems.

“But, to conclude, NAND will not be replaced and will continue to be the lowest cost, it’s going to be the largest market in tablets, phones and so on. It’s not the best memory technology – it has poor cycling endurance and it has a terrible latency – but it is very low cost at very high density so it is the most cost-effective solution. We think that 3D cross-point absolutely has a market in terms of displacing DRAM and will selectively displace some NAND in very high performance applications but we will stay with NAND and go to 3D NAND.”

Soek-Kiu Lee, VP and Head of the Flash Device Technology Group at SK Hynix, brought the audience up to speed on his company’s NAND technology. Every year SK Hynix has increased bit density per unit area by around 50%. The company’s 16nm 64 Gb MLC NAND flash, based on floating gate technology, has been in production since mid-2013, with SK Hynix now entering full-scale mass production of 16nm chips. SK Hynix will start to ship samples of its 3D NAND chips this year, with mass production following later in 2014.

Like Shrivastava, Lee expects that 2D and 3D NAND will co-exist, competing with each other in reliability, performance and density, for some time. He sees the big challenges in the transition to 3D NAND architectures as including stabilization of multi-stack patterning to improve yields, along with better metrology and defect monitoring within the 3D structure itself.

Head for heights

Lastly, Applied Materials provided some insight into manufacturing the more complex structures that the move to a 3D NAND device architecture entails. Put simply, making 3D NAND flash devices requires building extremely tall multilayer structures. Every cell layer in the device requires an insulating layer, so, for example, a 32-layer device is really a 64-layer stack. As a result, the aspect ratios of the structures being etched are getting very high, and the challenge this poses is nothing less than a game-changer for etch and deposition, according to Applied Materials’ Vice President, Advanced Technology Group Etch Business Unit, Bradley Howard.
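The layer-count arithmetic is simple but worth making explicit: every cell layer is paired with an insulating layer, so the deposited film count is double the nominal figure. A minimal sketch:

```python
# Howard's film-count point in code: each cell layer needs an
# accompanying insulating layer, so the number of deposited films
# is twice the nominal cell-layer count.

def deposited_films(cell_layers: int) -> int:
    """Films deposited for a 3D NAND stack: one device film plus one
    insulator per nominal cell layer."""
    return cell_layers * 2

print(deposited_films(32))  # 64: a "32-layer" device is a 64-film stack
print(deposited_films(48))  # 96 films for the next generation up
```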

“Historically, if you look at how scaling has gone, it has been limited by lithography in getting to the next node down. Now we are getting to the point where scaling is being driven by deposition and etch, because as the scaling goes in the vertical direction you’ve eased off the design rules.” The reality is that lithography is still important, Howard said, citing CD control, good uniformity and other factors. “Everything that you had to have from lithography before still needs to be there; it just does not need to be the limiting factor for scaling.”

High aspect ratios present many challenges. Standard photolithography will not hold up for the long etches required for such deep features, so hard mask layers are needed. “Deposition is transitioning from single-layer depositions of typically thinner films to multilayer stacks, where you deposit alternating stacks of films, and also to very thick films for both the device and the hard mask,” said Howard.

Howard addressed the gate stack, which is built up from alternating layers of materials. “You need to have very precise control and very low defectivity. Historically, if a defect came in on a film it affected that bit, or that area. Now, if a defect gets deposited on your first layer down at the bottom, it becomes a propagating defect that goes up the entire stack and grows as it goes, which means that the defect density of deposition is becoming more important.”

Howard then moved on to hard masks. “We are going to have thicker hard masks, because the aspect ratios of what you are trying to etch are getting very extreme, as is the depth you have to etch. Having a micron or a micron-and-a-half of hard mask is not unusual. In effect, the hard mask that you are forming is its own high-aspect-ratio feature, and then it is forming a high-aspect-ratio feature below it. In addition, there are various challenges in isolation: getting the gap filled between the features and into these very complex three-dimensional structures.

“On the etch side, high aspect ratio is really the key. There are multiple features, contacts in the array and contacts coming out of the staircase, and 60:1 aspect ratios are becoming the common target here.

“At the edge of the array access still has to be made at each one of the layers, so a staircase structure is made to enable different landing pads for contacts to come down. But some of the contacts – towards the top – are very shallow and the ones at the bottom are extremely deep.

“You might think it could be achieved by doing a litho step and an etch step, then another litho step and etch step, 32 or 64 or however many times. But what actually happens is that you start with a feature, etch down into it, pull back the resist, etch again, pull back the resist again, and so on; you form your ‘steps’ that way, as many times as you can get away with, depending on the amount of resist that you have. So you can envision that you are trying to pull this resist back really fast. The problem is that the resist is now determining the CD for the cell, so you need to have good control in place.” Howard summarized the challenges as being about sequential processes for both deposition and etch; thick films, whether the alternating stack of films or the thick films deposited to separate the different arrays; and, finally, defect densities, especially in deposition, which are becoming more critical than ever because of the additive effect of the deposition.
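The litho-once, etch-and-pull-back sequence Howard describes can be sketched as a toy loop; the resist budget and per-cycle pull-back below are hypothetical numbers for illustration only:

```python
# Toy sketch of the staircase formation Howard describes: one
# lithography step, then repeated etch / resist-pull-back cycles,
# each exposing one more staircase step. The resist budget and
# per-cycle consumption figures are hypothetical.

def staircase_steps(resist_budget_nm: float, resist_per_cycle_nm: float) -> int:
    """How many steps one resist coat can form before it is consumed."""
    steps = 0
    remaining = resist_budget_nm
    while remaining >= resist_per_cycle_nm:
        # etch down one layer pair, then pull the resist back laterally,
        # consuming part of the resist budget each cycle
        remaining -= resist_per_cycle_nm
        steps += 1
    return steps

# e.g. an assumed 1500 nm resist budget and 180 nm consumed per cycle
print(staircase_steps(1500.0, 180.0))  # 8 steps from a single litho pass
```

The loop also makes the trade-off visible: pulling the resist back faster forms more steps per coat, but, as Howard notes, the resist pull-back then sets the cell CD, so speed competes with control.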

The panellists:

Dr Ritu Shrivastava, Vice President Technology Development, at SanDisk Corporation

Chuck Dennison, Senior Director, Process Integration, at Micron

Dr Seok-Kiu Lee, VP and Head of the Flash Device Technology Group, at SK Hynix

Hang-Ting Liu, Deputy Director Nanotechnology R&D Division, at Macronix International Co.

Dr Bradley Howard, Vice President, Advanced Technology Group Etch Business Unit, at Applied Materials

Experts At The Table: Exploring the relationship between board-level design and 3D, and stacked, dies

Tuesday, December 17th, 2013

By Sara Verbruggen

SemiMD discussed what board level design can tell us about chip-level (three-dimensional) 3D and stacked dies with Sesh Ramaswami, Applied Materials’ Managing Director, TSV and Advanced Packaging, Advanced Product Technology Development, and Kevin Rinebold, Cadence’s Senior Product Marketing Manager. What follows are excerpts of that conversation.

SemiMD: What key, or major, challenge does the transition to 3D and stacked dies – and increasingly ‘advanced packaging’ – present when it comes to board-level design?

Ramaswami: The three-layer system architecture comprising the printed circuit board (PCB) system board, organic packaging substrate and silicon die offers the greatest integration flexibility. From a design perspective, this configuration places the most intensive co-design challenges on the die and substrate layers. On the substrate, the primary challenges are dielectric material, copper (Cu) line spacing and via scaling. However, when the packaged die attaches to the PCB through the ball grid array (BGA) – the surface-mount packaging used for integrated circuits in devices such as microprocessors – the design challenges are more considerable. For example, they include limitations on chip size (I/O density), warpage and concerns about coefficient of thermal expansion mismatch between the materials.

Rinebold: Any advanced ‘BGA style’ package, regardless of whether it is three-dimensional (3D) or flat, can have a significant impact on PCB layer count, route complexity and cost. Efficient package ball pad net assignment and patterning of power and ground pins can make the difference between a four-layer and a six-layer PCB. Arriving at the optimal ball pad assignment necessitates coordinated planning across the entire interconnect chain from chip level macros to board level components. This planning requires new tools and flows capable of delivering a multi-fabric view of the system hierarchy while providing access to domain specific data like macro placement, I/O pad ring devices, bump patterns, ball pad assignments, and placement of critical PCB components and connectors.

SemiMD: 3D chip stacking and stacked die chip-scale packaging is favoured by the consumer electronics industry to enable better performing mobile electronics – in terms of faster performance, less power hungry devices, and so forth – but how do PCB design and testing tools need to adapt?

Rinebold: One benefit of these package formats is that they entail moving most of the high-performance interconnect and components off the PCB onto their own dedicated substrate. With increasing data rates and lower voltages there is little margin for error across the entire system, placing a premium on signal quality and power delivery between the board and package.

In addition to high-speed constraints and checking, design tools must provide innovative functionality to assist the designer in implementing high-performance interconnect. In some situations complete automation (like auto-routing) cannot provide satisfactory results while still enforcing the many diverse and sometimes ambiguous constraints. Designers will require auto-interactive tools that enable them to apply their experience and intuition, supported by semi-automatic route engines, for efficient implementation of constraints and interconnect. Examples of such tools include the ability to plan and implement break-out on the two ends of an interface connecting high pin count BGAs, to reduce route time and via counts. Without such tools the time to route high pin count BGAs can increase significantly.

Methodologies must adapt to incorporate electrical performance assessment (EPA) into the design process. EPA enables designers to evaluate electrical quality and performance throughout the design process, helping avoid the back-end analysis crunch that can jeopardize product delivery. It utilizes extraction technology in a manner that provides actionable feedback to the designer, helping identify and avoid issues related to impedance discontinuities, timing, coupling, or DC current density.
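As a concrete example of the kind of check an EPA-style flow automates, the sketch below flags a trace whose DC current density exceeds an allowed limit. The limit and the trace geometry are hypothetical illustration values, not figures from the interview.

```python
# Hedged sketch of a DC current-density check: flag a rectangular trace if the
# current density through its cross-section exceeds an allowed limit.
# The 0.02 A/um^2 limit and all geometries below are hypothetical.

def current_density(current_a: float, width_um: float, thickness_um: float) -> float:
    """Current density (A per square micron) through a rectangular trace."""
    return current_a / (width_um * thickness_um)

def violates(current_a: float, width_um: float, thickness_um: float,
             limit: float = 0.02) -> bool:
    """True if the trace's DC current density exceeds the allowed limit."""
    return current_density(current_a, width_um, thickness_um) > limit

# A 0.5 A rail on a 50 um wide trace in 35 um (1 oz) copper stays well under
# the hypothetical limit:
print(violates(0.5, 50, 35))   # False
```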

SemiMD: More specifically, what impact will this trend towards greater compactness – i.e. smaller PCB footprint, but with more stacked dies and complex packaging – have on interconnection technologies?

Ramaswami: The trend towards better quality, higher-component-density PCBs capable of supporting a wide range of die has significant implications for interconnect design. An additional challenge is attaching complex chips on both sides of a board. Furthermore, with PCBs becoming thinner to fit the thin form factor requirements of mobile devices, dimensional stability and warpage must be addressed.

Rinebold: In some regards stacked applications simplify board level layout by moving high-bandwidth interconnect off the PCB and consolidating it on smaller, high density advanced package substrates. However, decreasing package ball pad pitch and increased pin density will drive use of build-up substrate technology for the PCB. This high density interconnect (HDI) enables the smaller feature sizes and manufacturing accuracy necessary to support the fan-out routing requirements of these advanced package formats. Design tools must support HDI constraints and rules to ensure manufacturability, along with functionality to define and manipulate the associated structures like microvias.

SemiMD: How will PCB manufacturing processes, tools and materials need to change to address this challenge?

Ramaswami: To manufacture a more robust integrated 3D stack, I think several fundamental innovations are needed. These include improving defect density and developing new materials such as low warpage laminates and less hygroscopic dielectrics. Another essential requirement is supporting finer copper line/spacing. Important considerations here are maintaining good adhesion while watching out for corrosion. Finally, for creating the necessary smaller vias, the industry needs new etching techniques to replace mechanical drilling techniques.

SemiMD: So as 3D chip stacking and stacked dies become more mainstream technologies, how will board level design need to develop, in the years to come?

Rinebold: One challenge will be visibility and consideration of the PCB during chip-level floor-planning and awareness of how decisions made early on impact downstream performance and cost. New tools that deliver a multi-fabric view of the system hierarchy while providing access to domain specific data will facilitate the necessary visibility for coordinated decision making. However, these planning tools are just one component of an integrated flow encompassing logic definition, implementation, analysis, and sign-off for the chip, package, and PCB.

Blog review December 16, 2013

Monday, December 16th, 2013

Randhir Thakur of Applied Materials wishes the transistor a happy 66th birthday, noting that the transistor is truly one of the most amazing technological innovations of all time! He says it’s estimated that more than 1200 quintillion transistors will be manufactured in 2015, making the transistor the most ubiquitous man-made device on the planet.

Phil Garrou writes that 3DIC memory, and therefore all of 2.5/3D technology, took one step closer to full commercialization last week with the High Bandwidth Memory (HBM) joint development announcement from AMD and Hynix at the RTI 3D ASIP meeting in Burlingame, CA.

Zhihong Liu, Executive Chairman of ProPlus Design Solutions is celebrating 20 years of BSIM3v3 SPICE models. He notes that with continuous geometry down-scaling in CMOS devices, compact models became more complicated as they needed to cover more physical effects, such as gate tunneling current, shallow trench isolation (STI) stress and well proximity effect (WPE).

Applied Materials and Tokyo Electron held a media roundtable in Japan to discuss the merger of equals announced on September 24, 2013. Tetsuro Higashi, Chairman, President and CEO of Tokyo Electron, who will become Chairman of the new company, and Mike Splinter, Executive Chairman of Applied Materials, who will serve as Vice-Chairman, addressed the audience of more than 20 members of the Japanese media. Kevin Winston blogs about the event.

Pete Singer is freshly back from the International Electron Devices Meeting (IEDM). “A dream for the device engineer could be a nightmare for a process integration engineer,” said Frederic Boeuf of ST Microelectronics in the opening talk. That seemed to be echoed throughout the conference, where the potential of new devices such as tunnel FETs or materials such as graphene were always tempered with a dose of reality that materials had to be deposited, patterned, annealed to create devices, and those devices had to be connected.

Solid State Watch: Nov. 8-14, 2013

Friday, November 15th, 2013

Design for Yield Trends

Tuesday, November 12th, 2013

By Sara Ver-Bruggen

Should foundries establish and share best practices to manage sub-nanometer effects to improve yield and also manufacturability?

Team effort

Design for yield (DFY) has been referred to previously on this site as the gap between what the designers assume they need in order to guarantee a reliable design and what the manufacturer or foundry thinks they need from the designer to be able to manufacture the product in a reliable fashion. Achieving and managing this two-way flow of information becomes more challenging as devices in high volume manufacturing reach 28 nm dimensions and the focus moves to even smaller next-generation technologies. So is the onus on the foundries to implement DFY and establish and share best practices and techniques to manage sub-nanometer effects to improve yield and also manufacturability?

Read more: Experts At The Table: Design For Yield Moves Closer to the Foundry/Manufacturing Side

‘Certainly it is in the vital interest of foundries to do what it takes to enable their customers to be successful,’ says Mentor Graphics’ Senior Marketing Director, Calibre Design Solutions, Michael Buehler, adding, ‘Since success requires addressing co-optimization issues during the design phase, they must reach out to all the ecosystem players that enable their customers.’

Mentor refers to the trend of DFY moving closer to the manufacturing/foundry side as ‘design-manufacturing co-optimization’, which entails improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process.

But foundries can’t do it alone. ‘The electronic design automation (EDA) providers, especially ones that enable the critical customer-to-foundry interface, have a vital part in transferring knowledge and automating the co-optimization process,’ says Buehler. IP suppliers must also have a greater appreciation for and involvement in co-optimization issues so their IP will implement the needed design enhancements required to achieve successful manufacturing in the context of a full chip design.

As they own the framework of DFY solutions, foundries that work effectively with both the fabless companies and the equipment vendors will benefit from getting more tailored DFY solutions that can lead to shorter time-to-yield, says Amiad Conley, Applied Materials’ Technical Marketing Manager, Process Diagnostics and Control. But according to Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, at Cadence, the onus and responsibility is on the entire ecosystem to establish and share best practices and techniques. ‘We will only achieve advanced nodes through a partnership between foundries, EDA, and the design community,’ says Ya-Chieh.

But whereas foundries still take the lead when it comes to design for manufacturability (DFM), for DFY the designer is intimately involved, so that they can account for the optimal trade-off between yield and power, performance and area (PPA) that results in choices for specific design parameters, including transistor widths and lengths.

For DFM, foundries are driving design database adjustments required to make a particular design manufacturable with good yield. ‘DFM modifications to a design database often happen at the end of a designer’s task. DFM takes the “ideal” design database and manipulates it to account for the manufacturing process,’ explains Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering at ProPlus Design Solutions.

The design database that a designer delivers must have DFY considerations to be able to yield. ‘The practices and techniques used by different design teams based on heuristics related to their specific application are therefore less centralized. Foundries recommend DFY reference flows but these are only guidelines. DFY practices and techniques are often deeply ingrained within a design team and can be considered a core competence and, with time, a key requirement,’ says McGaughy.

In the spirit of collaboration

Ultimately, as the industry continues to progress, requiring manufacturing solutions that are increasingly tailored and more and more device-specific, earlier and deeper collaboration is needed between equipment vendors and foundry customers in defining and developing the tailored solutions that will maximize the performance of equipment in the fab. ‘It will also potentially require more three-way collaboration between the designers from fabless companies, foundries, and equipment vendors with the appropriate IP protection,’ says Conley.

A collaborative and open approach between the designer and the foundry is critical and beneficial for many reasons. ‘Designers are under tight pressures schedule-wise and any new steps in the design flow will be under intense scrutiny. The advantages of any additional steps must be very clear in terms of the improvement in yield and manufacturability and these additional steps must be in a form that designers can act on,’ says Ya-Chieh. The recent trend towards putting DFM/DFY directly into the design flow is a good example of this. ‘Instead of purely a sign-off step, DFM/DFY is accounted for in the router during place and route. The router is able to find and fix hotspots during design and, critically, to account for DFM/DFY issues during timing closure,’ he says. Similarly, Ya-Chieh refers to DFM/DFY flows that are now in place for custom design and library analysis. ‘Cases of poor transistor matching due to DFM/DFY issues can be flagged along with corresponding fixing guidelines. In terms of library analysis, standard cells that exhibit too much variability can be systematically identified and the cost associated with using such a cell can be explicitly accounted for (or that cell removed entirely).’

‘The ability to do “design-manufacturing co-optimization” is dependent on the quality of information available and an effective feedback loop that involves all the stakeholders in the entire supply chain: design customers, IP suppliers, foundries, EDA suppliers, test vendors, and so on,’ says Buehler. ‘This starts with test chips built during process development, but it must continue through risk manufacturing, early adopter experiences and volume production ramping. This means sharing design data, process data, test failure diagnosis data and field failure data,’ he adds.

A pioneer of this type of collaboration was the Common Platform Consortium initiated by IBM. Over time, foundries have assumed more of the load for enabling and coordinating the ecosystem. ‘GLOBALFOUNDRIES has identified collaboration as a key factor in its overall success since its inception and been particularly open about sharing foundry process data,’ says Buehler.

TSMC has also been a leader in establishing a well-defined program among ecosystem players, starting with the design tool reference flows it established over a decade ago. Through its Open Innovation Platform program TSMC is helping to drive compatibility among design tools and provides interfaces from its core analysis engines to third-party EDA providers.

In terms of standards Si2 organizes industry stakeholders to drive adoption of collaborative technology for silicon design integration and improved IC design capability. Buehler adds: ‘Si2 working groups define and ratify standards related to design rule definitions, DFM specifications, design database facilities and process design kits.’

Open and trusting collaboration underpins the thriving ecosystem programs that top-tier foundries have put together. McGaughy says: ‘Foundry customers, EDA and IP partners closely align during early process development and integration of tools into workable flows. One clear example is the rollout of a new process technology. From early in the process lifecycle, foundries release 0.x versions of their PDK. Customers and partners expend significant amounts of time, effort and resources to ensure the design ecosystem is ready when the process is, so that design tapeouts can start as soon as possible.’

DFY is even more critically involved in this ramp-up phase, as only when there is confidence in hitting yield targets will a process volume ramp follow. ‘As DFY directly ties into the foundation SPICE models, every new update in PDK means a new characterization or validation step. Only a close and sustained relationship can make the development and release of DFY methodologies a success,’ he states.

Experts At The Table: Design For Yield (DFY) moves closer to the foundry/manufacturing side

Friday, November 8th, 2013

By Sara Verbruggen

SemiMD discussed the trend for design for yield (DFY) moving closer to the foundry/manufacturing side with Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering, ProPlus Design Solutions, Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, Cadence and Michael Buehler, Senior Marketing Director, Calibre Design Solutions, Mentor Graphics, and Amiad Conley, Technical Marketing Manager, Process Diagnostics and Control, Applied Materials. What follows are excerpts of that conversation.

SemiMD: What are the main advantages for design for yield (DFY) moving closer to the manufacturing/foundry side, and is it a trend with further potential?

Buehler: Mentor refers to this trend as ‘design-manufacturing co-optimization’ because in the best scenario it involves improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process. Companies embrace this opportunity in different ways. At one end of the scale, some fabless IC companies do the minimum they have to do to pass the foundry sign-off requirements. However, some companies embrace co-optimization as a way to compete, both by decreasing their manufacturing cost (higher yield means lower wafer costs), and by increasing the performance of their products at a given process node compared to their competition. Having a strong DFY discipline also enables fabless companies to have more portability across foundries, giving them alternate sources and purchasing power.

Ya-Chieh: Broadly speaking there are three typical insertion points for design for manufacturability (DFM)/DFY techniques. The first is in the design flow as design is being done. The second is as part of design sign-off. The last is done by the foundry as part of chip finishing.

The obvious advantage of DFY/DFM moving closer to the manufacturing/foundry side is in terms of ‘access’ to real fab data. This information is closely guarded by the fab and access is still only in terms of either encrypted data or models that closely correlate to silicon data but that have been carefully scrubbed of too many details.

However, the complexity of modern designs requires that DFM/DFY techniques need to be as far upstream in the design flows as possible/practicable. Any DFM/DFY technique that requires a modification to the design must be comprehended by designers so that any design impact can be properly accounted for so as to prevent the possibility of design re-spins late in the design cycle.

What we are seeing is not that DFM/DFY is moving closer to the manufacturing, or foundry, side, but that different techniques have been needed over the years to address the need of the designer for information as early as possible. Initially much of DFM/DFY was in the form of complex rule-based extensions to DRC, but much of this has since moved to include model-based and, in many cases, pattern-based checks (or some combination thereof). More recently, the trend has been towards deeper integration with design tools and more automated fixing or optimization. DFM/DFY techniques that merely highlight a “hotspot” are insufficient. Designers need to know how to fix the problem and, in the event that there is a large number of fixes, they need to be able to fix them automatically. In other words the trend is about progressing towards better techniques for providing this information upstream and in ways that can be actionable by designers.
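The pattern-based checks Ya-Chieh describes can be sketched in miniature: scan a rasterized layout for occurrences of a known-bad pixel pattern and report hotspot coordinates that a fixer or router could act on. The pattern and layout below are hypothetical toys, not real foundry patterns.

```python
# Minimal sketch of a pattern-based DFM check on a rasterized layout
# (1 = metal, 0 = space). Real checkers work on polygon data at far larger
# scale; this only illustrates the matching idea.

def find_hotspots(layout, pattern):
    """Return (x, y) of every top-left corner where pattern occurs in layout."""
    ph, pw = len(pattern), len(pattern[0])
    hits = []
    for y in range(len(layout) - ph + 1):
        for x in range(len(layout[0]) - pw + 1):
            if all(layout[y + dy][x + dx] == pattern[dy][dx]
                   for dy in range(ph) for dx in range(pw)):
                hits.append((x, y))
    return hits

bad = [[1, 0, 1]]          # e.g. two lines separated by a problematic gap
layout = [
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(find_hotspots(layout, bad))   # [(0, 0)]
```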

Conley: The key benefit of the DFY approach is the ability to provide tailored solutions to the relevant manufacturing steps in a way that optimizes performance based on device-specific characteristics. This trend will definitely evolve further. We definitely see the trend in the defect inspection and review loops in foundries, which are targeted to generate Paretos of the representative killer defects at major process steps. Because defects are becoming smaller and because of the optical limitations of the detection tools, design information is used today to enable smarter sampling and defect classification in the foundries. To accelerate yield ramp going forward, robust infrastructure development is needed as an enabler to extract relevant information from chip design to the defect inspection, defect review and metrology equipment.

McGaughy: The foundation information used by designers in DFY analysis comes from the fab/foundry. This information is encapsulated in the form of statistical device models provided to the design community as part of the process design kit (PDK). Statistical models and, more recently, layout-dependent effect information are used by designers to determine the margin their design has for a particular process. This allows the designers to optimize their design to achieve the desired yield versus power, performance, area (PPA) trade-off. Without visibility into process variability via the foundry-provided Simulation Program with Integrated Circuit Emphasis (SPICE) models, DFY would not be viable. Hence, foundries are clearly at the epicenter of DFY. As process complexity increases and more detailed information on process variation effects is captured into SPICE models and made available to designers, it can be expected that the role of the foundry will continue to become more important in this respect over time.
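A minimal illustration of how foundry-provided statistical models feed DFY analysis: draw threshold-voltage samples from a (hypothetical) distribution and estimate the fraction of parts meeting a delay spec. Real flows run full SPICE models over the PDK's statistical corners; the toy delay equation and every number below are assumptions for illustration only.

```python
# Hedged Monte Carlo yield sketch. The Vth distribution, supply voltage,
# alpha exponent, and delay spec are all hypothetical.
import random

def estimate_yield(vth_mean=0.35, vth_sigma=0.02, vdd=0.9,
                   delay_spec=1.1, n=100_000, seed=42):
    """Fraction of sampled devices whose normalized delay meets the spec."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    ok = 0
    for _ in range(n):
        vth = rng.gauss(vth_mean, vth_sigma)
        # Toy alpha-power delay model, normalized so the nominal device
        # has delay exactly 1.0; slow (high-Vth) samples exceed 1.0.
        delay = (vdd - vth_mean) ** 1.3 / (vdd - vth) ** 1.3
        ok += delay <= delay_spec
    return ok / n

# With a 10% delay margin over nominal, most samples pass:
print(f"estimated yield: {estimate_yield():.1%}")
```

Tightening `delay_spec` toward 1.0 drives the estimate toward 50%, which is the margin-versus-yield trade-off McGaughy describes in miniature.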

SemiMD: So does this place a challenge on the EDA industry? And how are EDA companies, such as ProPlus, helping to enable this trend?

McGaughy: The DFY challenge that designers face creates an opportunity for the EDA industry. As process complexity increases, there is less ‘margin’. Tighter physical geometries, lower supply voltage (Vdd) and threshold voltage (Vth), new device structures, new process techniques and more complex designs all push margins. Margins refer to the slack that designers have to ensure they can create a robust design – one that works not only at nominal conditions, but under real-world variability.

Tighter margins mean a greater need to carefully assess the yield versus PPA trade-off, which creates the need for DFY tools. This is where companies such as ProPlus come in. ProPlus helps designers use the foundry-provided process variation information effectively, and designers can validate and even customize foundry models for specific application needs with the industry’s de-facto golden modeling tool from ProPlus.

SemiMD: Is this trend for DFY moving closer to the foundry/manufacturing side the only way to improve yields, as the industry continues to push towards further scaling, and all of the challenges that this entails?

Ya-Chieh: Actually, we believe the trend is towards tighter integration with design, not less!

Conley: DFY solutions alone are not sufficient and they need to be developed in conjunction with wafer fabrication equipment enhancements. Looking at the wafer inspection and review (I&R) segment, the need to detect smaller defects and effectively separate yield killer defects from false and nuisance defects leads to an increased usage of SEM-based defect inspection tools that have higher sensitivity. At Applied Materials, we are very focused on improving core capabilities in imaging and classification. In our other technology segments there are also a lot of innovations in deposition and removal chamber architecture and process technologies that are focused on yield improvement. DFY schemes, as well as advancements in wafer fabrication equipment, are needed to improve yields as the industry advances scaling.

Buehler: Strategies aside, the fact is that beyond about 40nm, IC designs must be optimized for the target manufacturing process. At each progressive node, the design rules become more complex and the yield becomes more specific to an individual design. For example, layouts now have to be checked to make sure they do not contain specific patterns that cannot be accurately fabricated by the process. This is mainly due to the fact that we are imaging features that are much smaller than the wavelength of the light currently used in production steppers. But there are many other complexities at advanced nodes associated with etch characteristics, via structures, fill patterns, electrical checks, chemical-mechanical polishing, double patterning, FinFET transistor nuances, and many others.

These issues are too numerous and too complex to deal with after tapeout. The foundries simply cannot remove all yield limiters by adjusting their process. For one thing, some of the issues are simply beyond the control of the process engineers. For example some layout patterns simply cannot be imaged by state-of-the-art steppers, so they must be eliminated from the design. Another problem, or challenge, is that foundries need to run designs from many customers. In most cases, very large consumer designs aside, foundries cannot afford to optimize their process flow for one customer’s design. Bottom line, design-manufacturing co-optimization issues must be taken into consideration during the physical design process.
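The sub-wavelength imaging point above can be quantified with the Rayleigh criterion, CD = k1 * lambda / NA, which gives the minimum printable half-pitch. The sketch below uses typical published values for 193 nm immersion lithography; they are standard industry figures, not numbers from this article.

```python
# Rayleigh criterion sketch: minimum printable half-pitch for a stepper.
# k1 ~0.28 is near the practical single-exposure limit; NA = 1.35 is a
# typical 193 nm immersion scanner numerical aperture.

def min_half_pitch_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum resolvable half-pitch per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

# 193 nm immersion at NA = 1.35 with an aggressive k1 of 0.28:
print(round(min_half_pitch_nm(0.28, 193, 1.35), 1))   # 40.0
```

Features much below this require double patterning or other tricks, which is why such layout patterns must be screened out of the design rather than fixed in the fab.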

McGaughy: More and more, yield is a shared responsibility. At older nodes, when defect density limited yields, the foundries took on most of the responsibility. At deep nanometer nodes, this is no longer the case. Now, the design yield must be optimized via trade-offs. Foundries are pushed to provide ever better performance at each new node and this means that they too have less process margin. Rather than guard-band for process variation, foundries now provide the designer with detailed visibility into how the process variation will behave. Designers in turn can make the choices they need to make, such as whether they need performance to be competitive or how best to achieve optimal performance with lowest yield risk. This shared responsibility for yield has pushed the DFY trend to the forefront. It serves to bridge the gap between design and manufacturing and will continue to do so as process technology scales.
