
Posts Tagged ‘Applied Materials’


3D NAND: To 10nm and beyond

Wednesday, January 29th, 2014

By Sara Ver-Bruggen, contributing editor

In launching the iPod music player, Apple boosted consumption of NAND flash – a type of non-volatile memory – driving down cost and paving the way for the technology’s growth into what is now a multibillion-dollar market supplying cost-effective storage for smartphones, tablets and other consumer electronics that do not have high-density requirements.

The current iteration of NAND flash technology, 2D – or planar – NAND, is reaching its limits. In August 2013, South Korean consumer electronics company Samsung announced the launch of its 3D NAND storage technology, in the form of a 24-layer, 128 Gb chip. In 2014, memory chipmakers Micron and SK Hynix will follow suit, heralding the arrival of a technology that has been much anticipated and debated at industry conferences in recent years. Other companies, including SanDisk, are also working on 3D NAND flash technology.

Like floors in a tower block, memory cells in 3D NAND devices are stacked on top of each other, as opposed to being spread out on a two-dimensional (2D), horizontal grid like bungalows. Over the last few decades, as 2D NAND technology has scaled, the X and Y dimensions have shrunk with each chip generation. But scaling, as process nodes dip below 20nm on the path towards 10nm, is proving challenging as physical constraints begin to impinge on the performance of the basic memory cell design. While 2D NAND has yet to hit a wall, it is only a matter of time.

Transition to mass production

But despite the potential of 3D NAND and announcements by the leading players in the industry, transferring 3D NAND technology into mass production is very challenging. As Jim Handy, from Objective Analysis, points out: “The entire issue of 3D NAND is its phenomenal complexity, and that is why no one has shipped a 3D NAND chip yet.” Mass production of Samsung’s device will happen this year. With 3D NAND there is the potential for vertical scaling, going from 16-bit-tall strings to string heights of more than 128 bits.

But while 3D NAND does not require leading-edge lithography – eventually resulting in manufacturing costs that are lower than they would be for the extension of planar NAND – new deposition and etch technologies are required for high-aspect-ratio etch processes. This “staircase” etching requires very precise contact landing. In 3D NAND manufacturing, depositing layers of uniform thickness across the entire wafer presents issues, and the pull-back etching for these “stair steps” currently increases the lithography load more than was originally anticipated.


“Everything in 3D is a significant challenge. With vertical scaling the challenges include etching high aspect ratio holes, with the aspect ratio doubling with each doubling of layers. These holes must have absolutely parallel walls or scaling and device operation may be compromised. If the layers are thinned then the atomic-layer deposition (ALD) of the layers must be able to apply a constant thickness layer across the entire wafer, which is also true of the layers that are deposited on the walls of the hole,” according to Handy.
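Handy’s aspect-ratio point reduces to simple geometry: the hole depth is the layer count times the film-pair thickness, while the hole diameter stays roughly fixed. A minimal sketch, with all film and hole dimensions assumed purely for illustration (none come from the article):

```python
# Illustrative channel-hole aspect-ratio arithmetic for 3D NAND.
# All dimensions are assumptions for illustration, not vendor data.

CELL_FILM_NM = 30      # assumed wordline film thickness
INSULATOR_NM = 25      # assumed insulating film between wordlines
HOLE_DIAMETER_NM = 80  # assumed channel-hole critical dimension

for layers in (24, 32, 64, 128):
    stack_height_nm = layers * (CELL_FILM_NM + INSULATOR_NM)
    aspect_ratio = stack_height_nm / HOLE_DIAMETER_NM
    print(f"{layers:4d} layers -> stack {stack_height_nm / 1000:.2f} um, "
          f"aspect ratio {aspect_ratio:.0f}:1")
```

Doubling the layer count doubles the stack height and therefore the hole aspect ratio, which is exactly the scaling pressure Handy describes.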

Indeed, while the best combination of cost, power and performance will be found in 3D NAND architectures, issues remain – especially concerning cost. These issues, in the context of their respective memory technology roadmaps, were discussed by memory chipmakers including SanDisk, SK Hynix and Micron at a forum organized and sponsored by semiconductor equipment manufacturer Applied Materials in December 2013, where the equipment supplier also provided some in-depth discussion of 3D NAND manufacturing considerations and challenges. The session was hosted by Gill Lee, Senior Director and Principal Member of Technical Staff, Silicon Systems Group, at Applied Materials.

SanDisk plays its 2D hand for as long as possible

Ritu Shrivastava, Vice President of Technology Development at SanDisk Corporation, set out the challenge. “Whenever you talk about technology, it has to be in relation to the objectives of your company. In our case we have a $38 billion total available market projected to 2016 and any technology choices that we make have to serve that market.” Examples of the products he was referring to include smartphones and tablets. “Our goal is to choose technologies that are most cost-effective and deliver in terms of performance.”

SanDisk has a joint NAND fab investment with Toshiba, and the two have had a 128 Gb 2D NAND flash chip using 19 nm lithography in production for a while now. They have also previously announced plans to build a semiconductor fab for 16-17 nm flash memory.

“One of our goals is to extend the life of 2D NAND technologies as far as possible because it reflects the huge investment that we have made in fabs and the technology over a number of years,” said Shrivastava. “Of course, 3D NAND is extremely important and when it becomes cost-effective then it will move into production.” SanDisk plans to start producing its 3D NAND chips in 2016.

“We are travelling in what we think is the lowest cost path in every technology generation, going from 19 nm to 1Y, where we are at the limit with lithography, and then we will scale to 1Z, which is our next-generation 2D NAND technology. We believe that this scaling path gives us the lowest cost structure in each of the nodes and in terms of cumulative investment.”

But it is not just about achieving the smallest die size; it is the cost involved in scaling. Capital equipment investment is what determines success in the market, according to Shrivastava. “Even though we are saying that 3D NAND is a reality there are a couple of things that we need to keep in mind. It leverages existing infrastructure, which is good, but there are still a lot of challenges. 3D NAND devices use thin-film transistor (TFT) cells as opposed to the floating gate cells commonly used in 2D NAND chips. New controller schemes and boards will be required also.”

So while, according to Shrivastava, 3D NAND is looking very promising, there is a big ‘but’ for a company such as SanDisk, which produces some of the most cost-competitive flash memory devices on the market. “2D NAND still continues to be more cost-effective than 3D NAND, and 3D NAND is not yet proven in volume manufacturing. Every new technology takes some time; getting to mass manufacturing will take time. Our goal is to extend 2D NAND as long as possible, continue to work on 3D NAND and introduce it when it becomes cost-effective.”

Shrivastava sees 2D and 3D NAND technologies co-existing for the rest of the decade. Beyond 3D NAND, the company is developing 3D resistive RAM (RRAM) as its future technology.

From 3D DRAM to 3D NAND

Next, Chuck Dennison, Senior Director of Process Integration at Micron, provided an overview of where the company is today in terms of its own NAND memory technology roadmap.

“Our current generation is 16nm NAND that is now in production and we’re showing that it is getting to be a very competitive and very cost-effective technology,” according to Dennison. Micron’s new 16nm NAND process provides the greatest number of bits per square millimeter at the lowest cost of any multi-level cell (MLC) device. Eight of these die can hold 128 GB of data. The 16nm storage technology will be released in next-generation solid state drives (SSDs) during 2014. SSDs consist of interconnected flash memory chips, as opposed to the magnetically coated platters used in conventional hard disk drives (HDDs).
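The capacity arithmetic behind that claim is worth spelling out, since gigabits (Gb) and gigabytes (GB) are easy to conflate: eight die holding 128 GB implies 128 Gb per die. A quick sketch (the per-die figure is inferred, not quoted):

```python
# Eight die holding 128 GB total implies 128 Gb (gigabits) per die.
# The per-die density is inferred from the article, not stated in it.

DIE_CAPACITY_GBIT = 128       # inferred per-die density (Gb)
DIES = 8

total_gbit = DIES * DIE_CAPACITY_GBIT
total_gbyte = total_gbit / 8  # 8 bits per byte
print(f"{DIES} die x {DIE_CAPACITY_GBIT} Gb = {total_gbit} Gb = {total_gbyte:.0f} GB")
```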

Micron 16nm NAND die

“Our next node is a 256 Gb class of NAND memory. Technically it could be extended before taking the full step to 3D NAND.”

Today NAND is the lowest cost-per-bit memory technology, and this continued cost-per-bit reduction is driving the whole of the NAND industry, according to Dennison. It is why NAND overtook DRAM in terms of total dollars and has continued to proliferate across various applications, and it is responsible for continued innovation in portable consumer electronics such as tablets, where so much functionality – photography, video recording, storage of an entire music library, and so on – can be packed into one device.

Outlining Micron’s technology scaling path, Dennison explained: “We went to high-k/metal gate at 20 nm and we used the same technology to extend us to 16nm.” From there, the company is moving to a vertical-channel 3D NAND for a 256 Gb class.

“In terms of capital expenditure (CapEx) per wafer it all looks very cost-effective, with a little bit of a transition going to 20 nm,” explained Dennison, because of the high-k metal gate, but with minimal increase going to 16nm. “But when you go to 3D NAND it is expensive, per wafer. So if you are increasing your wafer costs by X amount you need a much higher number of Gb per square centimeter, so the density we are choosing to go with is a 256 Gb class. And when you start actively looking at 3D NAND there are a lot of similarities between 3D NAND and DRAM,” he explained, referring to the stacked capacitor of DRAM. “There is a lot of planarization, you are etching very high aspect ratio contacts where you need to be very controlled in terms of CD uniformity. Then there are a lot of additional modules requiring ALD deposition. So we think that there is a lot of opportunity to utilize our DRAM expertise.”
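Dennison’s wafer-cost argument reduces to a cost-per-bit ratio: a more expensive 3D wafer still wins if its areal density rises faster than its cost. A minimal sketch, with every number an illustrative assumption rather than a Micron figure:

```python
# Cost-per-bit trade-off behind the 2D-to-3D NAND transition.
# All numbers are illustrative assumptions, not Micron figures.

WAFER_AREA_CM2 = 706.9  # area of a 300 mm wafer: pi * (15 cm)^2

def cost_per_gbit(wafer_cost, density_gbit_per_cm2):
    """Wafer cost divided by total bits on the wafer."""
    return wafer_cost / (density_gbit_per_cm2 * WAFER_AREA_CM2)

planar = cost_per_gbit(wafer_cost=1.0, density_gbit_per_cm2=1.0)    # normalized 2D baseline
vertical = cost_per_gbit(wafer_cost=1.5, density_gbit_per_cm2=2.0)  # +50% cost, 2x bits

print(f"3D/2D cost-per-bit ratio: {vertical / planar:.2f}")  # 0.75 -> 3D wins
```

Under these assumed numbers the 3D wafer delivers bits 25% cheaper despite costing 50% more to process, which is the shape of the argument for making the jump only at a high density class such as 256 Gb.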

He again outlined an inflection point in going from 16nm. “We’re transitioning to go to the 256 Gb density. We think that when we do this it will make financial sense and it will be a cost-effective solution despite the high CapEx. And then from there we will continue. With the majority, or bulk, of the market we’ll see vertical NAND continuing to scale, with a couple of us scaling fast for that market.”

Dennison also touched on longer-term advances in classes of flash memory, in the form of 3D cross-point technology: memories stacked in cross-point arrays over CMOS logic to enable a memory technology with speed akin to DRAM but the density and cost-effectiveness of NAND. The 3D stacked memory arrays in 3D cross-point technology would make these devices suitable, in future, for very high density computing and even biological systems.

“But, to conclude, NAND will not be replaced and will continue to be the lowest cost, it’s going to be the largest market in tablets, phones and so on. It’s not the best memory technology – it has poor cycling endurance and it has a terrible latency – but it is very low cost at very high density so it is the most cost-effective solution. We think that 3D cross-point absolutely has a market in terms of displacing DRAM and will selectively displace some NAND in very high performance applications but we will stay with NAND and go to 3D NAND.”

Seok-Kiu Lee, VP and Head of the Flash Device Technology Group at SK Hynix, brought the audience up to speed on his company’s NAND technology. Every year SK Hynix has increased bit density per area by around 50%. The company’s 16nm 64 Gb MLC NAND flash, based on floating gate technology, has been in production since mid-2013, with SK Hynix now entering full-scale mass production of 16nm chips. SK Hynix will start to ship samples of its 3D NAND chips this year, with mass production happening later in 2014.

Like Shrivastava, Lee expects that 2D NAND and 3D NAND will co-exist and compete with each other in terms of reliability, performance and density for some time, and that the big challenges facing the transition to 3D NAND architectures include stabilization of multi-stack patterning to improve yields, and better metrology and defect monitoring within the 3D structure itself.

Head for heights

Lastly, Applied Materials provided some insight into manufacturing the more complex structures that moving to a 3D NAND device architecture entails. Put very simply, making 3D NAND flash devices requires building extremely tall multilayer structures. Every device layer requires an insulating layer, so – for example – a 32-layer device is really a 64-layer device. As a result, the aspect ratios of the structures being etched are getting very high, and the challenge this poses is nothing less than a game-changer for etch and deposition, according to Applied Materials’ Vice President, Advanced Technology Group Etch Business Unit, Bradley Howard.

“Historically, if you look at how scaling has gone, it has been limited by lithography on getting to the next node down. Now we are getting to the point where scaling is being driven by deposition and etching, because as the scaling is now going in a vertical direction you’ve eased off the design rules.” The reality is that lithography is still important, Howard said, listing control, good uniformity and other factors. “Everything that you had to have from lithography before still needs to be there, but it just does not need to be the limiting factor for scaling.”

High aspect ratios present lots of challenges. Standard photolithography will not hold up for the long etches required to create such deep features, so hard mask layers are needed. “Deposition is transitioning from single-layer depositions of typically thinner films to multilayer stacks, where you deposit alternating stacks of films, and also very thick films for both the device and the hard mask,” said Howard.

Howard addressed the gate stack, built up with alternating layers of materials. “You need to have very precise control and very low defectivity. Historically, if you had a defect come in on a film it affected that bit, or that area. Now if you get a defect that gets deposited on your first layer down at the bottom, it becomes a propagating defect that goes up the entire stack, which means that the defect density on deposition is becoming more important.”

Howard then moved on to hard masks. “We are going to have thicker hard masks because the aspect ratios of what you are trying to etch are getting very extreme, as is the depth you have to etch. Having a micron or a micron-and-a-half of hard mask is not unusual. In effect, the hard mask that you are forming is its own high-aspect-ratio feature, and then it is forming a high-aspect-ratio feature below it. In addition, there are various challenges on the isolation side: getting the gap filled between the features and also into these very complex three-dimensional structures.

“On the etch side, high aspect ratio is really the key. There are multiple features – contacts in the array, contacts coming out of the staircase – and 60:1 aspect ratios are becoming the common target here.

“At the edge of the array, access still has to be made to each one of the layers, so a staircase structure is made to provide a different landing pad for each contact to come down to. Some of the contacts – towards the top – are very shallow, while the ones at the bottom are extremely deep.

“You might think it could be achieved by doing a litho step and an etch step, and a litho step and an etch step, and doing that 32, 64, or whatever number of times. But what happens is that you start out with a feature, you etch down into the feature, then you pull back the resist and then you etch again, and then you pull back the resist – and so you form your ‘steps’ that way, and you do that as many times as you can get away with, depending on the amount of resist that you have. So, you can envision that you are trying to pull this resist back really fast. The problem is the resist is now determining the CD for the cell, so you need to have good control in place.”

Howard summarized the challenges as being about sequential processes for both deposition and etching; thick films – whether the alternating stack of films or the thick films deposited to separate out the different arrays – and, finally, defect densities, especially in deposition, which are becoming more critical than ever before because of the additive effect of the deposition sequence.
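The sequence Howard describes – one lithography step buying several etch/trim cycles before the resist budget runs out – can be sketched in a few lines. The cycle count and step width below are illustrative assumptions, not Applied Materials figures:

```python
# Sketch of the staircase ("resist pull-back") sequence Howard describes:
# one resist coat allows several etch/trim cycles before it is exhausted.
# All numbers are illustrative assumptions.
import math

STEPS_PER_RESIST_COAT = 8  # assumed etch/trim cycles one resist coat allows
STEP_WIDTH_NM = 400        # assumed lateral resist pull-back per step

def staircase_litho_load(total_steps):
    """Litho steps and etch/trim cycles needed to form the staircase."""
    litho_steps = math.ceil(total_steps / STEPS_PER_RESIST_COAT)
    return litho_steps, total_steps

for pads in (32, 64):
    litho, cycles = staircase_litho_load(pads)
    print(f"{pads} landing pads: {litho} litho steps, {cycles} etch/trim cycles, "
          f"staircase width ~{pads * STEP_WIDTH_NM / 1000:.1f} um")
```

The lithography load grows with the layer count unless each resist coat survives more trim cycles, which is why the pull-back etch “increases the lithography load” noted earlier.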

The panellists:

Dr Ritu Shrivastava, Vice President Technology Development, at SanDisk Corporation

Chuck Dennison, Senior Director, Process Integration, at Micron

Dr Seok-Kiu Lee, VP and Head of the Flash Device Technology Group, at SK Hynix

Hang-Ting Lue, Deputy Director, Nanotechnology R&D Division, at Macronix International Co.

Dr Bradley Howard, Vice President, Advanced Technology Group Etch Business Unit, at Applied Materials

Experts At The Table: Exploring the relationship between board-level design and 3D and stacked dies

Tuesday, December 17th, 2013

By Sara Verbruggen

SemiMD discussed what board level design can tell us about chip-level (three-dimensional) 3D and stacked dies with Sesh Ramaswami, Applied Materials’ Managing Director, TSV and Advanced Packaging, Advanced Product Technology Development, and Kevin Rinebold, Cadence’s Senior Product Marketing Manager. What follows are excerpts of that conversation.

SemiMD: What key, or major, challenge does the transition to 3D and stacked dies – and increasingly ‘advanced packaging’ – present when it comes to board-level design?

Ramaswami: The three-layer system architecture comprising the printed circuit board (PCB) system board, organic packaging substrate and silicon die offers the greatest integration flexibility. From a design perspective, this configuration places the most intensive co-design challenges on the die and substrate layers. On the substrate, the primary challenges are dielectric material, copper (Cu) line spacing and via scaling. However, when the packaged die attaches to the PCB through the ball grid array (BGA) – the surface-mount packaging used for integrated circuits such as microprocessors – the design challenges are more considerable. For example, they include limitations on chip size (I/O density), warpage and concerns about coefficient of thermal expansion mismatch between the materials.

Rinebold: Any advanced ‘BGA style’ package, regardless of whether it is three-dimensional (3D) or flat, can have a significant impact on PCB layer count, route complexity and cost. Efficient package ball pad net assignment and patterning of power and ground pins can make the difference between a four-layer and a six-layer PCB. Arriving at the optimal ball pad assignment necessitates coordinated planning across the entire interconnect chain, from chip-level macros to board-level components. This planning requires new tools and flows capable of delivering a multi-fabric view of the system hierarchy while providing access to domain-specific data like macro placement, I/O pad ring devices, bump patterns, ball pad assignments, and placement of critical PCB components and connectors.

SemiMD: 3D chip stacking and stacked-die chip-scale packaging are favoured by the consumer electronics industry to enable better-performing mobile electronics – faster performance, less power-hungry devices, and so forth – but how do PCB design and testing tools need to adapt?

Rinebold: One benefit of these package formats is that they entail moving most of the high-performance interconnect and components off the PCB onto their own dedicated substrate. With increasing data rates and lower voltages there is little margin for error across the entire system, placing a premium on signal quality and power delivery between the board and package.

In addition to high-speed constraints and checking, design tools must provide innovative functionality to assist the designer in implementing high-performance interconnect. In some situations complete automation (like auto-routing) cannot provide satisfactory results while still enforcing the diverse and sometimes ambiguous constraints. Designers will require auto-interactive tools that enable them to apply their experience and intuition, supported by semi-automatic route engines, for efficient implementation of constraints and interconnect. Examples of such tools include the ability to plan and implement break-out on the two ends of an interface connecting high-pin-count BGAs, to reduce route time and via counts. Without such tools the time to route high-pin-count BGAs can increase significantly.

Methodologies must adapt to incorporate electrical performance assessment (EPA) into the design process. EPA enables designers to evaluate electrical quality and performance throughout the design process, helping avoid a back-end analysis crunch that could jeopardize product delivery. It utilizes extraction technology in a manner that provides actionable feedback to the designer, helping identify and avoid issues related to impedance discontinuities, timing, coupling, or DC current density.

SemiMD: More specifically, what impact will this trend towards greater compactness – i.e. smaller PCB footprint, but with more stacked dies and complex packaging – have on interconnection technologies?

Ramaswami: The trend towards better-quality, higher-component-density PCBs capable of supporting a wide range of die has significant implications for interconnect design. An additional challenge is attaching complex chips on both sides of a board. Furthermore, with PCBs becoming thinner to fit the thin form factor requirements of mobile devices, dimensional stability and warpage must be addressed.

Rinebold: In some regards stacked applications simplify board-level layout by moving high-bandwidth interconnect off the PCB and consolidating it on smaller, high-density advanced package substrates. However, decreasing package ball pad pitch and increasing pin density will drive use of build-up substrate technology for the PCB. This high-density interconnect (HDI) enables the smaller feature sizes and manufacturing accuracy necessary to support the fan-out routing requirements of these advanced package formats. Design tools must support HDI constraints and rules to ensure manufacturability, along with functionality to define and manipulate the associated structures, like microvias.

SemiMD: How will PCB manufacturing processes, tools and materials need to change to address this challenge?

Ramaswami: To manufacture a more robust integrated 3D stack, I think several fundamental innovations are needed. These include improving defect density and developing new materials such as low warpage laminates and less hygroscopic dielectrics. Another essential requirement is supporting finer copper line/spacing. Important considerations here are maintaining good adhesion while watching out for corrosion. Finally, for creating the necessary smaller vias, the industry needs new etching techniques to replace mechanical drilling techniques.

SemiMD: So as 3D chip stacking and stacked dies become more mainstream technologies, how will board-level design need to develop in the years to come?

Rinebold: One challenge will be visibility and consideration of the PCB during chip-level floor-planning, and awareness of how decisions made early on impact downstream performance and cost. New tools that deliver a multi-fabric view of the system hierarchy while providing access to domain-specific data will facilitate the necessary visibility for coordinated decision making. However, these planning tools are just one component of an integrated flow encompassing logic definition, implementation, analysis, and sign-off for the chip, package, and PCB.

Blog review December 16, 2013

Monday, December 16th, 2013

Randhir Thakur of Applied Materials wishes the transistor a happy 66th birthday, noting that the transistor is truly one of the most amazing technological innovations of all time. He says it’s estimated that more than 1,200 quintillion transistors will be manufactured in 2015, making the transistor the most ubiquitous man-made device on the planet.

Phil Garrou writes that 3DIC memory, and therefore all of 2.5/3D technology, took one step closer to full commercialization last week with the High Bandwidth Memory (HBM) joint development announcement from AMD and Hynix at the RTI 3D ASIP meeting in Burlingame, CA.

Zhihong Liu, Executive Chairman of ProPlus Design Solutions is celebrating 20 years of BSIM3v3 SPICE models. He notes that with continuous geometry down-scaling in CMOS devices, compact models became more complicated as they needed to cover more physical effects, such as gate tunneling current, shallow trench isolation (STI) stress and well proximity effect (WPE).

Applied Materials and Tokyo Electron held a media roundtable in Japan to discuss the merger of equals announced on September 24, 2013. Tetsuro Higashi, Chairman, President and CEO of Tokyo Electron, who will become Chairman of the new company, and Mike Splinter, Executive Chairman of Applied Materials, who will serve as Vice-Chairman, addressed the audience of more than 20 members of the Japanese media. Kevin Winston blogs about the event.

Pete Singer is freshly back from the International Electron Devices Meeting (IEDM). “A dream for the device engineer could be a nightmare for a process integration engineer,” said Frederic Boeuf of STMicroelectronics in the opening talk. That sentiment seemed to be echoed throughout the conference, where the potential of new devices such as tunnel FETs, or materials such as graphene, was always tempered with a dose of reality: materials had to be deposited, patterned and annealed to create devices, and those devices had to be connected.

Solid State Watch: Nov. 8-14, 2013

Friday, November 15th, 2013

Design for Yield Trends

Tuesday, November 12th, 2013

By Sara Ver-Bruggen

Should foundries establish and share best practices to manage sub-nanometer effects to improve yield and manufacturability?

Team effort

Design for yield (DFY) has been described previously on this site as the gap between what designers assume they need in order to guarantee a reliable design and what the manufacturer or foundry thinks it needs from the designer to be able to manufacture the product in a reliable fashion. Achieving and managing this two-way flow of information becomes more challenging as devices in high-volume manufacturing reach 28 nm dimensions and the focus shifts to even smaller next-generation technologies. So is the onus on the foundries to implement DFY and to establish and share best practices and techniques to manage sub-nanometer effects to improve yield and manufacturability?

Read more: Experts At The Table: Design For Yield Moves Closer to the Foundry/Manufacturing Side

‘Certainly it is in the vital interest of foundries to do what it takes to enable their customers to be successful,’ says Mentor Graphics’ Senior Marketing Director, Calibre Design Solutions, Michael Buehler, adding, ‘Since success requires addressing co-optimization issues during the design phase, they must reach out to all the ecosystem players that enable their customers.’

Mentor refers to the trend of DFY moving closer to the manufacturing/foundry side as ‘design-manufacturing co-optimization’, which entails improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process.

But foundries can’t do it alone. ‘The electronic design automation (EDA) providers, especially ones that enable the critical customer-to-foundry interface, have a vital part in transferring knowledge and automating the co-optimization process,’ says Buehler. IP suppliers must also have a greater appreciation for and involvement in co-optimization issues so their IP will implement the needed design enhancements required to achieve successful manufacturing in the context of a full chip design.

As the owners of the framework of DFY solutions, foundries that work effectively with both fabless companies and equipment vendors will benefit from more tailored DFY solutions that can lead to shorter time-to-yield, says Amiad Conley, Applied Materials’ Technical Marketing Manager, Process Diagnostics and Control. But according to Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, at Cadence, the onus and responsibility is on the entire ecosystem to establish and share best practices and techniques. ‘We will only achieve advanced nodes through a partnership between foundries, EDA, and the design community,’ says Ya-Chieh.

But whereas foundries are still taking the lead when it comes to design for manufacturability (DFM), for DFY the designer is intimately involved, so that he can account for the optimal trade-off between yield and power, performance and area (PPA) that results in choices of specific design parameters, including transistor widths and lengths.

For DFM, foundries are driving design database adjustments required to make a particular design manufacturable with good yield. ‘DFM modifications to a design database often happen at the end of a designer’s task. DFM takes the “ideal” design database and manipulates it to account for the manufacturing process,’ explains Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering at ProPlus Design Solutions.

The design database that a designer delivers must incorporate DFY considerations if it is to yield. ‘The practices and techniques used by different design teams, based on heuristics related to their specific application, are therefore less centralized. Foundries recommend DFY reference flows, but these are only guidelines. DFY practices and techniques are often deeply ingrained within a design team and can be considered a core competence and, with time, a key requirement,’ says McGaughy.

In the spirit of collaboration

Ultimately, as the industry continues to progress it requires manufacturing solutions that are increasingly tailored and more and more device-specific, and this demands earlier and deeper collaboration between equipment vendors and foundry customers in defining and developing the tailored solutions that will maximize the performance of equipment in the fab. ‘It will also potentially require more three-way collaboration between the designers from fabless companies, foundries, and equipment vendors, with the appropriate IP protection,’ says Conley.

A collaborative and open approach between the designer and the foundry is critical and beneficial for many reasons. ‘Designers are under tight pressures schedule-wise and any new steps in the design flow will be under intense scrutiny. The advantages of any additional steps must be very clear in terms of the improvement in yield and manufacturability and these additional steps must be in a form that designers can act on,’ says Ya-Chieh. The recent trend towards putting DFM/DFY directly into the design flow is a good example of this. ‘Instead of purely a sign-off step, DFM/DFY is accounted for in the router during place and route. The router is able to find and fix hotspots during design and, critically, to account for DFM/DFY issues during timing closure,’ he says. Similarly, Ya-Chieh refers to DFM/DFY flows that are now in place for custom design and library analysis. ‘Cases of poor transistor matching due to DFM/DFY issues can be flagged along with corresponding fixing guidelines. In terms of library analysis, standard cells that exhibit too much variability can be systematically identified and the cost associated with using such a cell can be explicitly accounted for (or that cell removed entirely).’

‘The ability to do “design-manufacturing co-optimization” is dependent on the quality of information available and an effective feedback loop that involves all the stakeholders in the entire supply chain: design customers, IP suppliers, foundries, EDA suppliers, test vendors, and so on,’ says Buehler. ‘This starts with test chips built during process development, but it must continue through risk manufacturing, early adopter experiences and volume production ramping. This means sharing design data, process data, test failure diagnosis data and field failure data,’ he adds.

A pioneer of this type of collaboration was the Common Platform Consortium initiated by IBM. Over time, foundries have assumed more of the load for enabling and coordinating the ecosystem. ‘GLOBALFOUNDRIES has identified collaboration as a key factor in its overall success since its inception and has been particularly open about sharing foundry process data,’ says Buehler.

TSMC has also been a leader in establishing a well-defined program among ecosystem players, starting with the design tool reference flows it established over a decade ago. Through its Open Innovation Platform program, TSMC is helping to drive compatibility among design tools and provides interfaces between its core analysis engines and third-party EDA providers.

In terms of standards Si2 organizes industry stakeholders to drive adoption of collaborative technology for silicon design integration and improved IC design capability. Buehler adds: ‘Si2 working groups define and ratify standards related to design rule definitions, DFM specifications, design database facilities and process design kits.’

Open and trusting collaboration underpins the thriving ecosystem programs that top-tier foundries have put together. McGaughy says: ‘Foundry customers, EDA and IP partners closely align during early process development and integration of tools into workable flows. One clear example is the rollout of a new process technology. From early in the process lifecycle, foundries release 0.x versions of their process design kit (PDK). Customers and partners expend significant amounts of time, effort and resources to ensure the design ecosystem is ready when the process is, so that design tapeouts can start as soon as possible.’

DFY is even more critically involved in this ramp-up phase, as only when there is confidence in hitting yield targets will a process volume ramp follow. ‘As DFY directly ties into the foundation SPICE models, every new update in PDK means a new characterization or validation step. Only a close and sustained relationship can make the development and release of DFY methodologies a success,’ he states.

Experts At The Table: Design For Yield (DFY) moves closer to the foundry/manufacturing side

Friday, November 8th, 2013

By Sara Verbruggen

SemiMD discussed the trend for design for yield (DFY) moving closer to the foundry/manufacturing side with Dr Bruce McGaughy, Chief Technology Officer and Senior Vice President of Engineering, ProPlus Design Solutions; Ya-Chieh Lai, Engineering Director, Silicon and Signoff Verification, Cadence; Michael Buehler, Senior Marketing Director, Calibre Design Solutions, Mentor Graphics; and Amiad Conley, Technical Marketing Manager, Process Diagnostics and Control, Applied Materials. What follows are excerpts of that conversation.

SemiMD: What are the main advantages for design for yield (DFY) moving closer to the manufacturing/foundry side, and is it a trend with further potential?

Buehler: Mentor refers to this trend as ‘design-manufacturing co-optimization’ because in the best scenario it involves improving the design both to achieve higher yield and to increase the performance of the devices that can be achieved for a given process. Companies embrace this opportunity in different ways. At one end of the scale, some fabless IC companies do the minimum they have to do to pass the foundry sign-off requirements. However, some companies embrace co-optimization as a way to compete, both by decreasing their manufacturing cost (higher yield means lower wafer costs) and by increasing the performance of their products at a given process node compared to their competition. Having a strong DFY discipline also enables fabless companies to have more portability across foundries, giving them alternate sources and purchasing power.

Ya-Chieh: Broadly speaking there are three typical insertion points for design for manufacturability (DFM)/DFY techniques. The first is in the design flow as design is being done. The second is as part of design sign-off. The last is done by the foundry as part of chip finishing.

The obvious advantage of DFY/DFM moving closer to the manufacturing/foundry side is in terms of ‘access’ to real fab data. This information is closely guarded by the fab and access is still only in terms of either encrypted data or models that closely correlate to silicon data but that have been carefully scrubbed of too many details.

However, the complexity of modern designs requires that DFM/DFY techniques need to be as far upstream in the design flows as possible/practicable. Any DFM/DFY technique that requires a modification to the design must be comprehended by designers so that any design impact can be properly accounted for so as to prevent the possibility of design re-spins late in the design cycle.

What we are seeing is not that DFM/DFY is moving closer to the manufacturing, or foundry, side, but that different techniques have been needed over the years to address the designer's need for information as early as possible. Initially much of DFM/DFY was in the form of complex rule-based extensions to DRC, but much of this has since moved to include model-based and, in many cases, pattern-based checks (or some combination thereof). More recently, the trend has been towards deeper integration with design tools and more automated fixing or optimization. DFM/DFY techniques that merely highlight a “hotspot” are insufficient. Designers need to know how to fix the problem and, when there is a large number of fixes, they need to be able to fix the problem automatically. In other words, the trend is about progressing towards better techniques for providing this information upstream, and in ways that designers can act on.
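As an aside, the “pattern-based checks” Ya-Chieh mentions can be illustrated with a toy example: scanning a rasterized layout window for a known lithography-unfriendly motif. The motif and layout below are invented for illustration; production tools work on real geometry databases with far richer matching:

```python
# Toy "pattern-based" DFM hotspot check: scan a rasterized layout
# for a known-bad 3x3 motif. Pattern and layout are invented examples.

BAD_MOTIF = ((1, 0, 1),
             (1, 0, 1),
             (1, 1, 1))  # assumed lithography-unfriendly shape

LAYOUT = (
    (0, 0, 0, 0, 0, 0),
    (0, 1, 0, 1, 0, 0),
    (0, 1, 0, 1, 0, 0),
    (0, 1, 1, 1, 0, 0),
)

def find_hotspots(grid, motif):
    """Return (x, y) of every window matching the motif exactly."""
    mh, mw = len(motif), len(motif[0])
    return [(x, y)
            for y in range(len(grid) - mh + 1)
            for x in range(len(grid[0]) - mw + 1)
            if all(grid[y + dy][x + dx] == motif[dy][dx]
                   for dy in range(mh) for dx in range(mw))]

print("hotspots at:", find_hotspots(LAYOUT, BAD_MOTIF))  # [(1, 1)]
```

A “fixing guideline” in this toy world would be a local rewrite of the offending window; the integration point Ya-Chieh describes is running such checks inside the router, so the rewrite happens during design rather than at sign-off.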

Conley: The key benefit of the DFY approach is the ability to provide tailored solutions to the relevant manufacturing steps in a way that optimizes performance based on device-specific characteristics. This trend will definitely evolve further. We definitely see the trend in the defect inspection and review loops in foundries, which are targeted at generating Paretos of the representative killer defects at major process steps. Because defects are becoming smaller and the detection tools face optical limitations, design information is used today to enable smarter sampling and defect classification in the foundries. To accelerate yield ramp going forward, robust infrastructure development is needed as an enabler to carry relevant information from chip design to the defect inspection, defect review and metrology equipment.

McGaughy: The foundation information used by designers in DFY analysis comes from the fab/foundry. This information is encapsulated in the form of statistical device models provided to the design community as part of the process design kit (PDK). Statistical models and, more recently, layout-dependent effect information are used by designers to determine the margin their design has for a particular process. This allows designers to optimize their design to achieve the desired yield versus power, performance, area (PPA) trade-off. Without visibility into process variability via the foundry-provided Simulation Program with Integrated Circuit Emphasis (SPICE) models, DFY would not be viable. Hence, foundries are clearly at the epicenter of DFY. As process complexity increases and more detailed information on process variation effects is captured in SPICE models and made available to designers, the role of the foundry can be expected to become ever more important in this respect.
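McGaughy’s point – that statistical SPICE models let designers quantify yield margin before tapeout – can be illustrated with a toy Monte Carlo loop. The threshold-voltage statistics, delay model and spec limit below are invented for illustration and stand in for what a real statistical PDK and SPICE simulator would provide:

```python
# Toy Monte Carlo parametric-yield estimate in the spirit of
# statistical-model DFY. All numbers are invented for illustration.
import random

random.seed(1)

VTH_NOM, VTH_SIGMA = 0.45, 0.02  # assumed threshold-voltage statistics (V)
VDD = 0.9                        # assumed supply voltage (V)
SPEC_MAX_DELAY = 1.10            # assumed delay limit, normalized to nominal
TRIALS = 100_000

def norm_delay(vth, alpha=1.3):
    """Alpha-power-law gate delay, normalized so nominal Vth gives 1.0."""
    return ((VDD - VTH_NOM) / (VDD - vth)) ** alpha

fails = sum(norm_delay(random.gauss(VTH_NOM, VTH_SIGMA)) > SPEC_MAX_DELAY
            for _ in range(TRIALS))
print(f"estimated parametric yield: {100 * (1 - fails / TRIALS):.1f}%")
```

Tightening VTH_SIGMA or relaxing SPEC_MAX_DELAY trades yield against performance, which is exactly the yield-versus-PPA decision the statistical models are meant to inform.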

SemiMD: So does this place a challenge on the EDA industry? And how are EDA companies, such as ProPlus, helping to enable this trend?

McGaughy: The DFY challenge that designers face creates an opportunity for the EDA industry. As process complexity increases, there is less ‘margin’. Tighter physical geometries, lower supply voltage (Vdd) and threshold voltage (Vth), new device structures, new process techniques and more complex designs all push margins. Margins refer to the slack designers may have to ensure they can create a robust design – one that works not only at nominal conditions, but under real-world variability.

Tighter margins mean a greater need to carefully assess the yield versus PPA trade-off, which creates the need for DFY tools. This is where companies such as ProPlus come in. ProPlus helps designers use the foundry-provided process variation information effectively, and designers can validate and even customize foundry models for specific application needs with the industry’s de facto golden modeling tool from ProPlus.

SemiMD: Is this trend for DFY moving closer to the foundry/manufacturing side the only way to improve yields, as the industry continues to push towards further scaling, and all of the challenges that this entails?

Ya-Chieh: Actually, we believe the trend is towards tighter integration with design, not less!

Conley: DFY solutions alone are not sufficient; they need to be developed in conjunction with wafer fabrication equipment enhancements. Looking at the wafer inspection and review (I&R) segment, the need to detect smaller defects and effectively separate yield-killer defects from false and nuisance defects leads to increased usage of SEM-based defect inspection tools that have higher sensitivity. At Applied Materials, we are very focused on improving core capabilities in imaging and classification. In our other technology segments there are also a lot of innovations in deposition and removal chamber architecture and process technologies that are focused on yield improvement. DFY schemes, as well as advances in wafer fabrication equipment, are needed to improve yields as the industry advances scaling.

Buehler: Strategies aside, the fact is that beyond about 40nm, IC designs must be optimized for the target manufacturing process. At each successive node, the design rules become more complex and yield becomes more specific to an individual design. For example, layouts now have to be checked to make sure they do not contain specific patterns that cannot be accurately fabricated by the process. This is mainly due to the fact that we are imaging features that are much smaller than the wavelength of the light currently used in production steppers. But there are many other complexities at advanced nodes associated with etch characteristics, via structures, fill patterns, electrical checks, chemical-mechanical polishing, double patterning, FinFET transistor nuances, and many others.

These issues are too numerous and too complex to deal with after tapeout. The foundries simply cannot remove all yield limiters by adjusting their process. For one thing, some of the issues are simply beyond the control of the process engineers. For example some layout patterns simply cannot be imaged by state-of-the-art steppers, so they must be eliminated from the design. Another problem, or challenge, is that foundries need to run designs from many customers. In most cases, very large consumer designs aside, foundries cannot afford to optimize their process flow for one customer’s design. Bottom line, design-manufacturing co-optimization issues must be taken into consideration during the physical design process.

McGaughy: More and more, yield is a shared responsibility. At older nodes, when defect density limits determined yield, the foundries took on most of the responsibility. At deep nanometer nodes, this is no longer the case. Now, design yield must be optimized via trade-offs. Foundries are pushed to provide ever better performance at each new node, and this means that they too have less process margin. Rather than guard-band for process variation, foundries now provide the designer with detailed visibility into how the process variation will behave. Designers in turn can make the choices they need to make, such as whether they need performance to be competitive or how best to achieve optimal performance with the lowest yield risk. This shared responsibility for yield has pushed the DFY trend to the forefront. It serves to bridge the gap between design and manufacturing and will continue to do so as process technology scales.

Applied Materials rolls out new CVD and PVD systems for IGZO-based displays

Thursday, October 17th, 2013

By Pete Singer, Editor-in-Chief, Solid State Technology

Applied Materials introduced three new tools for the display market aimed at metal oxide thin film transistors. The tools – one CVD and two PVD – employ new hardware designs and process technology that enable better film uniformity with fewer defects, and are designed for use with next-generation IGZO-based thin film transistors (TFTs). The display industry is quickly switching to metal oxide TFTs, and IGZO (indium gallium zinc oxide) is the material of choice.

Higher-resolution LCD displays, greater than 300 dpi, require a switch from amorphous silicon designs to either metal oxide transistors or low-temperature polysilicon (LTPS), which offer higher mobility in a smaller area (Figure 1). They also operate at lower power levels, which is important in mobile devices. Another problem with larger transistors is that they block too much of the light in the display.

LG has already begun shipping 55-inch OLED TVs using metal oxide backplanes, and by 2014 all major LCD and LED display makers will have begun the switch to metal oxide TFTs.

The advantage of metal oxide transistors over LTPS transistors is that they consume less power and are more easily scaled.

The layers in an IGZO transistor are deposited by both PVD and CVD, according to Max McDaniel, Applied Materials’ director and chief marketing officer for its display business. Figure 2 shows a cross-section of the device. “You use PVD to deposit the metal gate material (on the glass substrate), then you have an insulator over the top of the gate (GI = gate insulator in the figure). That’s deposited by PECVD. On top of that, you’ve got the active layer, which is the IGZO. This is deposited by PVD. Then you’ve got an etch stop layer (ESTL in the figure) and that’s a CVD layer. Then you’ve got the source/drain, which is a metal deposited by PVD. Finally, you’ve got the passivation on the top, which is a CVD layer,” McDaniel said. He noted that the interfaces between the CVD layers and the IGZO are critical. “We want to reduce the hydrogen as much as we can, so that’s what our technology helps the customer to do,” he said, adding that Applied Materials has the capability to build transistors in house and test them. “We’re able to solve some of these integration challenges before we deliver it to the customer.”
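For reference, the bottom-gate stack McDaniel walks through can be summarized substrate-up as a small table. The deposition assignments follow the description above (with the etch stop as CVD SiO2, as stated later in the article); the ordering is as described, not an Applied Materials specification:

```python
# Bottom-gate IGZO TFT film stack as described by McDaniel, listed
# substrate-up. Methods follow the article text, not vendor specs.
IGZO_TFT_STACK = [
    ("gate metal",          "PVD",   "on the glass substrate"),
    ("gate insulator (GI)", "PECVD", "insulator over the gate"),
    ("IGZO active layer",   "PVD",   "hydrogen-sensitive channel"),
    ("etch stop (ESTL)",    "CVD",   "SiO2; protects IGZO during S/D etch"),
    ("source/drain metal",  "PVD",   "patterned over the etch stop"),
    ("passivation",         "CVD",   "top layer"),
]

for name, method, note in IGZO_TFT_STACK:
    print(f"{name:20s} {method:6s} {note}")
```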

This time last year, Applied Materials introduced two new products. One offers a new design for depositing IGZO films for TFTs; the other handles bigger substrates of low temperature polysilicon (LTPS) films to help lower manufacturing costs.

The three new products now being introduced are the Applied AKT-PiVot 55K DT PVD, Applied AKT-PiVot 25K DT PVD and Applied AKT 55KS PECVD. The 55K nomenclature is a reference to the Gen 8.5-size panels the systems can handle, which are 2.2 m x 2.5 m, or 55,000 cm2. DT stands for “dual track,” which is new.

The AKT-55KS

The AKT-PiVot DT PVD system.

One of the key changes in the 55KS PECVD system is related to how process gas is distributed across the substrate surface. “The hundreds of thousands of holes that the gas is distributed out of – you have to customize them across the whole area of the chamber to compensate for the shape of the plasma,” McDaniel said. “It’s not just the diameter of the holes, it’s the depth of them.” A new gas deflector pre-distributes the gas before it goes into the diffuser, and support structures were added to achieve a higher degree of flatness over the 2.5 m-wide area.

New hardware provides better gas distribution and better uniformity.

On the PVD side, the new systems are designed specifically for IGZO. “Unlike our prior PiVot PVD system, where you want to have lots of chambers and be able to run multiple materials in different chambers, customers really want a system that just deposits the IGZO,” McDaniel said. “It gets the substrates in and out quickly, so this is a compact, efficient platform that’s designed for depositing the IGZO.” The 25K system targets displays for mobile applications. “We’re entering a whole new segment,” McDaniel added.

The PiVot employs a set of rotary cathodes and targets, which behave quite differently from conventional planar targets. Planar targets are not consumed uniformly, and material can redeposit onto the target; this redeposited material can spall off as particles. “Our technology is different,” McDaniel said. “The target is an array of rotating targets/cathodes. As they are being bombarded and consumed, you’re actually rotating the tubes in a circle and consuming them evenly throughout the deposition. The other benefit is this is a reactive process, so you also have to introduce oxygen gas into the reaction. With the planar cathode, you have to introduce the gas from around the sides of the planar target. It’s hard to get it evenly over the substrate. With this array of tubes, you can introduce the process gas in between the tubes and get it uniformly distributed over the substrate,” he said. The rotary cathodes employ magnets inside the tubes for uniformity enhancement.

Old-style planar (left) vs. new-style rotary (right) cathodes.

Material can redeposit onto planar cathodes (left) but that doesn't happen on rotary cathodes (right).

McDaniel added that presently everyone doing metal oxide IGZO uses the etch stop (ES) structure (Figure, right), but would like to eliminate the etch stop and use a back channel etch (BCE) directly (Figure, left). “The IGZO material is very sensitive to hydrogen. What you’re trying to do is not expose it to the etching chemistry,” he said. “You put an etch stop layer on top of the IGZO, which is a CVD SiO2 process, and that protects it while you’re etching the source and drain. That adds an extra mask and extra process step. The panel makers would like to get rid of that etch stop layer and go to a back channel etch (BCE). This is where you etch the source/drain directly down all the way to the IGZO and it’s unprotected. We’re not there yet, but the industry would like to see that structure developed. That’s on the roadmap for the industry.”

The display industry hopes to use a back channel etch (left), but presently uses an etch stop layer (right), which adds an extra mask and process step.

Looking forward, the holy grail for the display industry might just be the flexible display. McDaniel said flexible displays will likely be based not on LCDs but on OLEDs. “For flexible OLED, you want to deposit on a flexible, non-glass substrate and then you need to encapsulate the OLEDs with something other than rigid glass.” This could require numerous thin films, which is good news for a supplier of deposition systems. He added that flexible displays would probably require an alternative to ITO (a commonly used transparent conductor). “There are a number of ITO replacement materials that are being looked at now, such as metal mesh, nanowires and even carbon nanotubes,” he said.

Marrying diversification, innovation with high-volume manufacturing – the MEMS puzzle

Tuesday, October 8th, 2013

By Sara Verbruggen

Following Apple’s launch of the iPhone, the explosive growth of the smartphone market has provided the MEMS industry with one of its biggest opportunities to supply high-volume demand. But if motion sensing in our portable electronics – enabled by accelerometer and gyroscope MEMS applications, for example – is the tip of the iceberg for MEMS technology, how can the semiconductor industry ensure that high-volume markets like consumer electronics benefit from all that MEMS potentially has to offer?

As the MEMS industry evolves towards further diversification of device applications in higher volumes, manufacturing challenges arise.

‘Organizations like the MEMS Industry Group (MIG) are helping to set standards across classes of devices in terms of specifications, rating, test interfaces, and system interfaces, and this is a great advancement in helping the industry to grow. On the manufacturing side, though, it is unlikely that a “standard” MEMS flow will emerge, even within individual foundries, except for very specific and limited types of MEMS – the InvenSense NF process is an example of an attempt at this,’ comments Silex Microsystems’ VP of marketing and strategic alliances Peter Himes.

The emergence of MEMS technology over the last decade into high-volume markets – consumer electronics especially – has presented the semiconductor industry with the challenge of designing and fabricating devices with different functionalities (as opposed to focusing on scaling down while ramping performance). This has paved the way for electronics in industries as diverse as healthcare, energy, security and the environment. The long-term growth of MEMS depends on functional diversification, but also on being able to manufacture devices for these various applications in significant volumes while bringing down cost.

More than Moore techniques and processes

Wafer-scale fabrication and process technologies to enable these ‘More than Moore’ architectures are beginning to become established in MEMS manufacturing for high-volume markets.

SEMI’s chief marketing officer Tom Morrow says: ‘To be competitive in high-volume MEMS markets, 8-inch production equipment and economies will be, if not already, needed. Deep reactive ion etch (DRIE) “tuned” for MEMS technologies is also required, coupled with advanced cleaning solutions such as plasma. Bonding is, with DRIE, the other key MEMS-specific technology, used for wafer-level capping and wafer-level packaging.’ Critical concerns include providing good hermetic solutions to maintain the performance of sensitive moving parts like gyros, while taking up less area on the wafer with bond lines. ‘The bonding process tends to take time, so throughput is typically low. Room-temperature bonding and temporary bonding are areas of major improvement,’ adds Morrow.

DRIE and wafer bonding are the technologies subject to the most significant process improvement, as both are increasingly used in the mainstream semiconductor industry for 3D-TSV. In addition, packaging and bonding technologies today support increasing standardization.

‘While contact and proximity aligners remain prominent lithography tools for MEMS, there is some movement towards projection steppers for better CD uniformity and automated 8-inch volume production,’ according to Morrow. Tools also need to be able to handle thin wafers, and manufacturers demand better overlay precision.

TSV is a critical technology, agrees Silex Microsystems’ Peter Himes. The company has specialised in TSV integration into MEMS since 2005, when its Sil-Via technology went into first production. This process, developed for the mobile industry, consisted of an all-silicon interposer for 2.5D integration of a MEMS microphone and ASIC onto a silicon substrate, which was then solder-bumped and mounted directly onto the PCB.

‘Since then, we have been developing more TSV options for our customers, including TSV for buried cavity MEMS, TSV for capping solutions of either MEMS or CMOS, and both metal TSV and TGV through glass substrates for RF and power applications,’ says Himes.

As MEMS companies increasingly move beyond competing on manufacturing technology to competing on functionality, more TSV/WLP packaging solutions will become widely used platforms, predicts Yole Développement. This would also make more use of the outsourced infrastructure to reduce costs and speed up development time.

‘Today, a few MEMS companies such as VTI, STMicroelectronics, Robert Bosch or MEMSIC have successfully implemented 3D wafer-level packaging concepts by using TSV/TGV vertical feedthrough, redistribution layers, and bumping processes to directly connect the silicon part of the MEMS/sensor to the final motherboard but without using a ceramic, leadframe, or plastic package. We believe this trend will be accelerated even further with the shift to 200mm wafer manufacturing for MEMS: it just makes sense to use wafer-level packaging, because as soon as you can add more dies on a wafer, it is more cost-effective,’ says Eric Mounier from Yole.

AMAT’s Mike Rosa points out that wafer-scale integration techniques to enable more device functionality per die area, in combination with system-on-chip technologies to enable greater intelligence on die, are becoming a standard requirement for more advanced MEMS. ‘The end-users (system integrators – like Apple or Samsung, for example) now require the MEMS device to do a lot more of the signal processing than has traditionally been the case – hence MEMS designers have to include more signal processing (CMOS) capability on die,’ says Rosa.

Fabless model

The fabless approach is now well established in the MEMS industry: to speed up MEMS device development cycles, foundry companies partner with designers, providing process modules around which the designers can develop MEMS devices.

But for the fabless model to facilitate the development of more differentiated and disruptive MEMS, and to ensure companies remain competitive, manufacturers need to embrace and adopt the new manufacturing processes and materials technologies that accompany disruptive new MEMS devices. ‘In the foundry space, it’s the foundry partner who is strongest in technology development that will win market share; thus there is already a clear pecking order among the big three foundries today, and that is for a very good reason,’ says Rosa.

Silex is an example of a successful business servicing the fabless segment, through its program with AMFitzgerald. ‘The fact is that new companies cannot afford the cost of building a MEMS manufacturing line, and need a foundry infrastructure to get their products to market,’ says Himes.

Several key factors point to a strengthening fabless market in the long term, he observes. These include an ongoing reduction in overall MEMS development times over the past two decades, which has lowered the time to market for new MEMS devices, ‘though Yole is correct in saying that it needs to come down further,’ he adds. Increasingly, fabless start-ups are driving innovation in MEMS-based functionality. ‘The percentage of MEMS revenues which comes from components not on the market before 2006 has been steadily growing, pointing to increased diversity and expansion of the MEMS-enabled market,’ says Himes, citing a recent iSuppli presentation.

‘In terms of what works, Silex’s systematic SmartBlock-based approach toward process integration coupled with our defined new product introduction (NPI) process has proven to be the best way for us to manage the risk and uncertainty which comes with any process development. While customers always want shorter time to full production, an early focus of our customer programs is to get the customer fully functional samples as early as possible so that the rest of the component or system can be developed,’ Himes explains.

According to Mounier, a successful fabless model relies on a MEMS designer, or similar business, finding a reliable foundry to work with over the long term. ‘Depending on the application, the foundry will have to be competitive on cost (consumer, automotive) or performance (defense, industrial applications). However, as many new MEMS devices are emerging for new applications, such as touchscreens and flat speakers, MEMS foundries must be able to think about adapting the customer design to their own process flow.’

The RocketMEMS program run by AMFitzgerald & Associates is a good example. The company has defined a product design platform for rapidly commercializing semi-custom MEMS devices (pressure sensors are the first area) based on a pre-qualified manufacturing flow at Silex. ‘We think that this is an efficient path toward design enablement that can avoid the “one product, one process” paradigm in the long term,’ says Himes. Customers would prioritize time to market and customized form-fit-function over a fully customized and optimized MEMS process flow. ‘We can envision many more such programs being set up worldwide, thereby expanding the capability of doing MEMS design from the PhD level down to a broader class of component design engineers,’ he adds.

There are various challenges in the MEMS industry, owing both to the process craftsmanship required by advanced devices and to the sheer proliferation of device types. Morrow observes: ‘Foundries continue to address these challenges through process capability improvement, and are benefitting from a maturing design process ecosystem that understands the need for integration with manufacturing, particularly in high-volume segments such as inertial sensors, microphones, and optical MEMS. Lower-volume products, highly specialized device types, and unique packaging or ASIC integration requirements seem to support IDM-type manufacturing.’

The Week In Review: Sept. 30

Monday, September 30th, 2013

Applied Materials Inc. and Tokyo Electron Limited this week announced an agreement to merge, in a deal valuing the Japanese semiconductor production equipment maker at $9.3 billion and creating a giant in the chip and display manufacturing-tools sector.

Micron Technology, Inc. announced that it is shipping 2GB Hybrid Memory Cube (HMC) engineering samples. Micron expects future generations of HMC to migrate to consumer applications within three to five years.

The Fraunhofer Institute for Solar Energy Systems ISE, Soitec, CEA-Leti and the Helmholtz Center Berlin jointly announced this week having achieved a new world record for the conversion of sunlight into electricity using a new solar cell structure with four solar subcells.

Fujifilm and imec have developed a new photoresist technology for organic semiconductors that enables the realization of submicron patterns.

Mentor Graphics announced the latest release of its FloEFD concurrent computational fluid dynamics (CFD) product.

Applied Materials – Tokyo Electron Merger Hastens EDA Changes

Monday, September 30th, 2013

Paradoxically, the merger of equipment manufacturers AMAT and TEL may shrink the Electronic Design Automation (EDA) tool market while improving IP security.

In the last several days, much has been written about the proposed merger of Applied Materials (AMAT) and Tokyo Electron (TEL). Desired by both parties, this merger would create a company worth $29B that would be the world’s largest semiconductor equipment company by sales. In comparison, the EDA tool market is valued at roughly $1.1B.

This merger of capital equipment giants represents an ongoing consolidation of the semiconductor supply chain, from chip/component developers through the IDMs/foundries and the manufacturing space. One reason for this consolidation is the increasingly high cost of making chips smaller and smaller at leading-edge process nodes.

At first glance, it would appear that the merger will have little impact on the world of semiconductor intellectual property (IP). Still, one of the stated goals of the merged companies is to extract costs “from all layers of the supply chain,” according to a recent report from Canaccord Genuity analyst Josh Baribeau (see “Size Matters: Our First Take on AMAT’s Proposed Merger with Tel”).

While admittedly far down the supply chain relative to capital equipment, the EDA tool market – heavily dependent on design and verification IP – might feel the effects of this merger in several ways.

First, equipment manufacturers use EDA tools and related processes to qualify new manufacturing systems. For example, last year Applied Materials supplied critical film properties (new materials) and device characterization data from its advanced process systems to Synopsys. This allowed the EDA vendor to create more accurate chip design and verification models.

Such new materials and processes are necessary to keep Moore’s Law on track in the face of ever-increasing lithography costs at lower and lower nodes. Several new technologies and process-node shrinks are also driving up the cost of manufacturing leading-edge chips: 3D NAND devices, 450mm wafers, finFET structures, stacked dies and more.

Still, the cost of EDA tools is low in relation to other costs. According to long-time EDA analyst Gary Smith, the cost of EDA tools is analogous to lunch money. The real costs in SoC development are the engineers who do the design. A greater level of chip design-verification tool automation will reduce these costs, as will “the reuse of software, the reuse of verifiable design IP, and … reducing SoC core blocks below the typical five blocks” (see “Gary Smith’s Sunday Night, Pre-DAC Forecast”).

It may well be that consolidation among the equipment manufacturers will result in accelerated consolidation in the lower part of the semiconductor supply chain, e.g., among EDA tool vendors. Judging from the flurry of acquisitions in the EDA community over the last several years, this scenario is hardly surprising.

On the other hand, this merger of equipment giants might be a good thing for the development of soft IP standards. As Warren Savage pointed out a few months ago (see “Long Standards, Twinkie IP, Macro Trends, and Patent Trolls“), the semiconductor equipment companies need to approve any IP design standards, since it will be their systems that must read the soft IP.

Consolidation of the equipment market should mean fewer companies that need to approve any such standards, thus (in theory) hastening the approval process.

Will the end result of the AMAT and TEL merger mean further consolidation of EDA tools and hence the IP markets? Will the merger lead to greater IP protection at the lower process nodes? The answer will probably be revealed in the next installment of Moore’s Law, i.e., the next process node advancement.
