
Solid State Technology


The Confab



Posts Tagged ‘design’


Multibeam Patents Direct Deposition & Direct Etch

Monday, November 14th, 2016


By Ed Korczynski, Sr. Technical Editor

Multibeam Corporation of Santa Clara, California recently announced that its e-beam patent portfolio—36 filed and 25 issued—now includes two innovations that leverage the precise placement of electrons on the wafer to activate chemical processes such as deposition and etch. As the company’s name implies, multi-column parallel-processing chambers will be used to reach throughputs suitable for commercial high-volume manufacturing (HVM), though the company does not yet have a released product. These new patents add to the company’s work in developing Complementary E-Beam Lithography (CEBL) to reduce litho cost, Direct Electron Writing (DEW) to enhance device security, and E-Beam Inspection (EBI) to speed defect detection and yield ramp.

The IC fab industry’s quest to miniaturize circuit features has already reached atomic scales, and the temperature and pressure ranges found on the surface of our planet make atoms want to move around. We are rapidly leaving the known era of deterministic manufacturing, and entering an era of stochastic manufacturing where nothing is completely determined because atomic placements and transistor characteristics vary within distributions. In this new era, we will not be able to guarantee that two adjacent transistors will function the same, which can lead to circuit failures. Something new is needed. Either we will have to use new circuit design approaches that require more chip area such as “self-healing” or extreme redundancy, or the world will have to inspect and repair transistors within the billions on every HVM chip.
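The shift from deterministic to stochastic manufacturing can be made concrete with a toy Monte Carlo model of threshold-voltage mismatch between adjacent transistors; the 30mV per-device sigma and 100mV failure tolerance below are invented illustrative numbers, not measured process data.

```python
import random

def mismatch_fraction(sigma_vt_mv=30.0, tolerance_mv=100.0, n_pairs=100_000, seed=0):
    """Monte Carlo estimate of the fraction of adjacent transistor pairs
    whose threshold voltages differ by more than a circuit's tolerance.
    sigma_vt_mv is an assumed per-device standard deviation (illustrative,
    not a real process number)."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(n_pairs):
        vt1 = rng.gauss(0.0, sigma_vt_mv)  # device 1 Vt deviation from nominal
        vt2 = rng.gauss(0.0, sigma_vt_mv)  # device 2 Vt deviation from nominal
        if abs(vt1 - vt2) > tolerance_mv:
            bad += 1
    return bad / n_pairs
```

With these assumed numbers, roughly 2% of pairs exceed the tolerance—small per pair, but multiplied across billions of transistors it is exactly the inspect-or-design-around problem described above.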

In an exclusive interview with Solid State Technology, David K. Lam, Multibeam Chairman, said, “We provide a high-throughput platform that uses electron beams as an activation mechanism. Each electron-beam column integrates gas injectors, as well as sensors, which enable highly localized control of material removal and deposition. We can etch material in a precise location to a precise depth. Same with deposition.” Lam (Sc.D. MIT) was the founder and first CEO of Lam Research where he led development and market penetration of the IC fab industry’s first fully automated plasma etch system, and was inducted into the Silicon Valley Engineering Hall of Fame in 2013.

“Precision deposition using miniature-column charged particle beam arrays” (Patent #9,453,281) describes patterning of IC layers by either creating a pattern specified by the design layout database in its entirety or in a complementary fashion with other patterning processes. Reducing the total number of process steps and eliminating lithography steps in localized material addition has the dual benefit of reducing manufacturing cycle time and increasing yield by lowering the probability of defect introduction. Furthermore, highly localized, precision material deposition allows for controlled variation of deposition rate and enables creation of 3D structures such as finFETs and NanoWire (NW) arrays.

Deposition can be performed using one or more multi-column charged particle beam systems using chemical vapor deposition (CVD) alone or in concert with other deposition techniques. Direct deposition can be performed either sequentially or simultaneously by multiple columns in an array, and different columns can be configured and/or optimized to perform the same or different material depositions, or other processes such as inspection and metrology.

“Precision substrate material removal using miniature-column charged particle beam arrays” (Patent #9,466,464) describes localized etch using activation electrons directed according to the design layout database so that etch masks are no longer needed. Figure 1 shows that costs are reduced and edge placement accuracy is improved by eliminating or reducing errors associated with photomasks, litho steps, and hard masks. With highly localized process control, etch depths can vary to accommodate advanced 3D device structures.

Fig.1: Comparison of (LEFT) the many steps needed to etch ICs using conventional wafer processing and (RIGHT) the two simple steps needed to do direct etching. (Source: Multibeam)

“We aren’t inventing new etch chemistries, precursors or reactants,” explained Lam. “In direct etch, we leverage developments in reactive ion etching and atomic layer etch. In direct deposition, we leverage work in atomic layer deposition. Several research groups are also developing processes specifically for e-beam assisted etch and deposition.”

The company continues to invent new hardware, and the latest critical components are “kinetic lens” which are arrangements of smooth and rigid surfaces configured to reflect gas particles. When fixed in position with respect to a gas injector outflow opening, gas particles directed at the kinetic lens are collimated or redirected (e.g., “focused”) towards a wafer surface or a gas detector. Generally, surfaces of a kinetic lens can be thought of as similar to optical mirrors, but for gas particles. A kinetic lens can be used to improve localization on a wafer surface so as to increase partial pressure of an injected gas in a target area. A kinetic lens can also be used to increase specificity and collection rate for a gas detector within a target frame.
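The mirror analogy for the kinetic lens reduces to elementary specular reflection. A minimal sketch, assuming idealized elastic rigid-wall collisions rather than the patent's actual surface geometries: a gas particle's velocity is reflected about the surface's unit normal.

```python
def reflect(v, n):
    """Specularly reflect velocity vector v off a surface with unit normal n:
    v' = v - 2 (v . n) n.
    This is the mirror-like behavior a kinetic-lens surface is described
    as approximating for incident gas particles."""
    dot = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2.0 * dot * ni for vi, ni in zip(v, n))
```

Arranging several such surfaces so their reflected directions converge on one spot is, in this simplified picture, what "focuses" injected gas onto a target area of the wafer.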

Complementary Lithography

Complementary lithography is a cost-effective variant of multi-patterning where some other patterning technology is used with 193nm ArF immersion (ArFi) to extend the resolution limit of the latter. The company’s Pilot™ CEBL Systems work in coordination with ArFi lithography to pattern cuts (of lines in a “1D lines-and-cuts” layout) and holes (i.e., contacts and vias) with no masks. These CEBL systems can seamlessly incorporate multicolumn EBI to accelerate HVM yield ramps, using feedback and feedforward as well as die-to-database comparison.

In this context, “1D” refers to a one-dimensional gridded design rule (Figure 2). In a 1D layout, optical pattern design is restricted to lines running in a single direction, with features perpendicular to the 1D optical design formed in a complementary lithography step known as “cutting”. The complementary step can be performed using a charged particle beam lithography tool such as Multibeam’s array of electrostatically-controlled miniature electron beam columns. Use of electron beam lithography for this complementary process is also called complementary e-beam lithography, or CEBL. The company claims that for low pattern-density layers, such as cut layers, one multi-column chamber can provide 5 wafers-per-hour (wph) throughput.

Fig.2: Complementary E-Beam Lithography (CEBL) can be used to “cut” the lines within a 1D grid array previously formed using ArF-immersion (ArFi) optical steppers. (Source: Multibeam)

Direct deposition can be used to locally interconnect 1D lines produced by optical lithography. This is similar in design principle to complementary lithography, but without using a resist layer during the charged particle beam phase, and without many of the steps required when using a resist layer. In some applications, such as restoring interconnect continuity, the activation electrons are directed to repair defects that are detected during EBI.


D2S Releases 4th-Gen IC Computational Design Platform

Friday, September 30th, 2016


By Ed Korczynski, Sr. Technical Editor

D2S recently released the fourth generation of its computational design platform (CDP), which enables extremely fast (400 Teraflops) and precise simulations for semiconductor design and manufacturing. The new CDP is based on NVIDIA Tesla K80 GPUs and Intel Haswell CPUs, and is architected for 24×7 cleanroom production environments. To date, 14 CDPs across four platform generations are in use by customers around the globe, including six of the latest fourth generation. In an exclusive interview with SemiMD, D2S CEO Aki Fujimura stated, “Now that GPUs and CPUs are fast-enough, they can replace other hardware and thereby free up engineering resources to focus on adding value elsewhere.”

Mask data preparation (MDP) and other aspects of IC design and manufacturing require ever-increasing levels of speed and reliability as the data sets upon which they must operate grow larger and more complex with each device generation. The Figure shows that a mask needed to print arrays of sub-wavelength features includes complex curvilinear shapes, which must be precisely formed even though they do not print on the wafer. Such sub-resolution assist features (SRAF) increase in complexity and density as the half-pitch decreases, so the complexity of mask data increases far faster than the density of printed features.

Sub-wavelength lithography using 193nm wavelength requires ever-more complex masks to repeatably print ever smaller half-pitch (HP) features, as shown by (LEFT) a typical mask composed of complex nested curves and dots which do not print (RIGHT) in the array of 32nm HP contacts/vias represented by the small red circles. (Source: D2S)

GPUs, which were first developed as processing engines for the complex graphical content of computer games, have since emerged as an attractive option for compute-intensive scientific applications due in part to their ability to run many more computing threads (up to 500x) compared to similar-generation CPUs. “Being able to process arbitrary shapes is something that mask shops will have to do,” explained Fujimura. “The world could go 193nm or EUV at any particular node, but either way there will be more features and higher complexity within the features, and all of that points to GPU acceleration.”

The D2S CDP is engineered for high reliability inside a cleanroom manufacturing environment. A few of the fab applications where CDPs are currently being used include:

  • model-based MDP for leading-edge designs that require increasingly complex mask shapes,
  • wafer plane analysis of SEM mask images to identify mask errors that print, and
  • inline thermal-effect correction of eBeam mask writers to lower write times.

“The amount of design data required to produce photomasks for leading-edge chip designs is increasing at an exponential rate, which puts more pressure on mask writing systems to maintain reasonable write times for these advanced masks. At the same time, writing these masks requires higher exposure doses and shot counts, which can cause resist proximity heating effects that lead to mask CD errors,” stated Noriaki Nakayamada, group manager at NuFlare Technology. “D2S GPU acceleration technology significantly reduces the calculation time required to correct these resist heating effects. By employing a resist heating correction that includes the use of the D2S CDP as an OEM option on our mask writers, NuFlare estimates that it can reduce CD errors by more than 60 percent, and reduce write times by more than 20 percent.”

In the eBeam Initiative’s 2015 survey, the most advanced reported mask-set contained >100 masks, of which ~20% could be considered ‘critical’. The just-released 2016 survey disclosed that the most complex single-layer mask design written last year required 16 TB of data; however, platforms like D2S’ CDP have been used to accelerate writing such that reported write times have decreased to a weighted average of 4 hours. Meanwhile, the longest reported mask write time decreased from 72 to 48 hours.
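The survey’s “weighted average” write time is plain mask-count-weighted arithmetic. A sketch of the computation, using invented (count, hours) pairs since the raw survey data is not reproduced here:

```python
def weighted_average_write_time(masks):
    """Weighted average of mask write times, weighting each reported
    write time by the number of masks written at that time.
    `masks` is a list of (mask_count, hours) pairs (invented sample data;
    the surveys report only the aggregated averages)."""
    total_masks = sum(count for count, _ in masks)
    total_hours = sum(count * hours for count, hours in masks)
    return total_hours / total_masks
```

For example, three masks at 2 hours each plus one 10-hour outlier average out to 4 hours, even though the outlier dominates total writer occupancy.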

Fab Facilities Data and Defectivity

Monday, August 1st, 2016


By Ed Korczynski, Sr. Technical Editor

In-the-know attendees at SEMICON West at a Thursday morning working breakfast heard executives representing the world’s leading memory fabs discuss manufacturing challenges at the 4th annual Entegris Yield Forum. Among the excellent presenters was Norm Armour, managing director of worldwide facilities and corporate EHSS at Micron. Armour has been responsible for some of the most famous fabs in the world, including GlobalFoundries’ Malta, New York logic fab, and AMD’s Fab25 in Austin, Texas. He discussed how facilities systems affect yield and parametric control in the fab.

Just recently, his organization within Micron broke records working with M&W on the new flagship Fab 10X in Singapore—now running 3D-NAND—by going from ground-breaking to first-tool-in in less than 12 months, followed by over 400 tools installed in 3 months. “The devil is in the details across the board, especially for 20nm and below,” declared Armour. “Fabs are delicate ecosystems. I’ll give a few examples from a high-volume fab of things that you would never expect to see, of component-level failures that caused major yield crashes.”

Ultra-Pure Water (UPW)

Ultra-Pure Water (UPW) is critical for IC fab processes including cleaning, etching, CMP, and immersion lithography, and contamination specs are now at the part-per-billion (ppb) or part-per-trillion (ppt) levels. Use of online monitoring is mandatory to mitigate risk of contamination. International Technology Roadmap for Semiconductors (ITRS) guidelines for UPW quality (minimum acceptable standard) include the following critical parameters:

  • Resistivity @ 25°C: >18.0 MΩ-cm,
  • TOC: <1.0 ppb,
  • Particles/ml: <0.3 @ 0.05 µm, and
  • Bacteria by culture: <1 per 1000 ml.
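Online monitoring against these limits amounts to simple threshold checks on each sampled parameter. A sketch, with parameter names and the min/max encoding of my own choosing:

```python
# ITRS minimum UPW quality limits as quoted above (names are my own shorthand)
UPW_LIMITS = {
    "resistivity_mohm_cm": ("min", 18.0),   # @ 25C
    "toc_ppb":             ("max", 1.0),
    "particles_per_ml":    ("max", 0.3),    # at 0.05 um
    "bacteria_per_1000ml": ("max", 1.0),
}

def upw_violations(sample):
    """Return the names of parameters in an online-monitoring sample
    that violate the UPW limits table above."""
    bad = []
    for name, (kind, limit) in UPW_LIMITS.items():
        value = sample[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            bad.append(name)
    return bad
```

A production monitor would of course alarm, interdict lots, and log trends rather than just return a list, but the pass/fail logic is this simple.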

In one case associated with a gate cleaning tool, elevated levels of zinc were detected on lots that had passed through one particular tool for a variation on a classic SC1 wet clean. High-purity chemistries were eliminated as sources based on analytical testing, so the root-cause analysis shifted to the UPW system as a possible source. Statistical analysis then showed a positive correlation between UPW supply lines equipped with pressure regulators and the zinc exposure. The pressure regulator vendor confirmed the use of zinc-oxide and zinc-stearate in the assembly process of the pressure regulator. “It was really a curing agent for an elastomer diaphragm that caused the contamination of multiple lots,” confided Armour.

UPW pressure regulators are just one of many components used in facilities builds that can significantly degrade fab yield. It is critical to implement a rigorous component testing and qualification process prior to component installation and widespread use. “Don’t take anything for granted,” advised Armour. “Things like UPW regulators have a first-order impact upon yield and they need to be characterized carefully, especially during new fab construction and fit up.”

Photoresist filtration

Photoresist filtration has always been important for ensuring high yield in manufacturing, but it has become ultra-critical for lithography at the 20nm node and below. Dependable filtration is particularly important because the industry lacks in-line monitoring technology capable of detecting particles smaller than ~40nm.

Micron tried using filters with 50nm pore diameters for a 20nm node process…and saw excessive yield losses along with extreme yield variability. “We characterized pressure-drop as a function of flow-rate, and looked at various filter performances for both 20nm and 40nm particles,” explained Armour. “We implemented a new filter, and lo and behold saw a step function increase in our yields. Defect densities dropped dramatically.” Tracking the yields over time showed that the variability was significantly reduced around the higher yield-entitlement level.
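The pressure-drop-versus-flow-rate characterization Armour describes is dominated by pore size. The Hagen-Poiseuille relation for a single cylindrical pore shows why: pressure drop scales as r⁻⁴. Real membrane filters are tortuous networks rather than straight capillaries, so this sketch (with invented flow and geometry values) only illustrates the scaling, not an actual filter spec.

```python
import math

def pore_pressure_drop(flow_m3_s, radius_m, length_m, viscosity_pa_s=1.0e-3):
    """Hagen-Poiseuille pressure drop across one cylindrical pore:
    dP = 8 * mu * L * Q / (pi * r^4).
    Default viscosity ~1e-3 Pa.s (water near room temperature)."""
    return 8.0 * viscosity_pa_s * length_m * flow_m3_s / (math.pi * radius_m ** 4)
```

Shrinking pore diameter from 50nm to 20nm at the same per-pore flow raises the pressure drop by (25/10)⁴ ≈ 39×, which is why tighter filters demand careful flow-rate engineering rather than a drop-in swap.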

Airborne Molecular Contamination (AMC)

Airborne Molecular Contamination (AMC) is ‘public enemy number one’ in 20nm-node and below fabs around the world. “In one case there were forest fires in Sumatra and the smoke was going into the atmosphere and actually went into our air intakes in a high volume fab in Taiwan thousands of miles away, and we saw a spike in hydrogen-sulfide,” confided Armour. “It increased our copper CMP defects, due to copper migration. After we installed higher-quality AMC filters for the make-up air units we saw dramatic improvement in copper defects. So what is most important is that you have real-time on-line monitoring of AMC levels.”

Building collaborative relationships with vendors is critical for troubleshooting component issues and improving component quality. “Partnering with suppliers like Entegris is absolutely essential,” continued Armour. “On AMCs for example, we have had a very close partnership that developed out of a team working together at our Inotera fab in Taiwan. There are thousands of important technologies that we need to leverage now to guarantee high yields in leading-node fabs.” The Figure shows just some of the AMCs that must be monitored in real-time.

Big Data

The only way to manage all of this complexity is with “Big Data.” In addition to the primary process parameters that must be tracked, there are many essential facilities inputs to analytics:

  • Environmental Parameters – temperature, humidity, pressure, particle count, AMCs, etc.
  • Equipment Parameters – run state, motor current, vibration, valve position, etc.
  • Effluent Parameters – cooling water, vacuum, UPW, chemicals, slurries, gases, etc.

“Conventional wisdom is that process tools create 90% of your defect density loss, but that’s changing toward facilities now,” said Armour. “So why not apply the same methodologies within facilities that we do in the fab?” SPC (statistical process control) is after-the-fact and reactive, while APC (advanced process control) provides real-time fault detection on input variables, including parameters such as the vibration or flow-rate of a pump.
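The distinction between reactive charting and real-time fault detection on an input variable can be sketched with a standard EWMA monitor, a common APC building block; the lambda and k values below are generic textbook defaults, and the pump-vibration-style readings are invented, not Micron's.

```python
import math

def ewma_fault_detector(readings, target, sigma, lam=0.2, k=3.0):
    """Flag sample indices where the EWMA of a facilities input variable
    (e.g., pump vibration) drifts outside +/- k * sigma_ewma of target.
    Uses the asymptotic EWMA sigma: sigma * sqrt(lam / (2 - lam)).
    lam and k are textbook defaults, not tuned production values."""
    limit = k * sigma * math.sqrt(lam / (2.0 - lam))
    ewma, faults = target, []
    for i, x in enumerate(readings):
        ewma = lam * x + (1.0 - lam) * ewma   # exponentially weighted update
        if abs(ewma - target) > limit:
            faults.append(i)                  # real-time interdiction point
    return faults
```

Because the EWMA accumulates evidence sample by sample, a sustained shift trips the limit within a few readings, allowing interdiction before the shift shows up as after-the-fact yield loss.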

“Never enough data,” enthused Armour. “In terms of monitoring input variables, we do this through the PLCs and basically use SCADA to do the fault-detection interdiction on the critical input variables. This has been proven to be highly effective, providing a lot of protection, and letting me sleep better at night.”

Micron also uses these data to provide site-to-site comparisons. “We basically drive our laggard sites to meet our world-class sites in terms of reducing variation on facility input variables,” explained Armour. “We’re improving our forecasting as a result of this capability, and ultimately protecting our fab yields. Again, the last thing a fab manager wants to see is facilities causing yield loss and variation.”


Leti’s CoolCube 3D Transistor Stacking Improves with Qualcomm Help

Wednesday, April 27th, 2016

By Ed Korczynski, Sr. Technical Editor

As previously covered by Solid State Technology, CEA-Leti in France has been developing monolithic transistor stacking, called “CoolCube” (TM), based on laser re-crystallization of active silicon in upper layers. Leading mobile chip supplier Qualcomm has been working with Leti on CoolCube R&D since late 2013 and, based on preliminary results, has opted to continue collaborating with the goal of building a complete ecosystem that takes the technology from design to fabrication.

“The Qualcomm Technologies and Leti teams have demonstrated the potential of this technology for designing and fabricating high-density and high-performance chips for mobile devices,” said Karim Arabi, vice president of engineering, Qualcomm Technologies, Inc. “We are optimistic that this technology could address some of the technology scaling issues and this is why we are extending our collaboration with Leti.” As part of the collaboration, Qualcomm Technologies and Leti are sharing the technology through flexible, multi-party collaboration programs to accelerate adoption.

Olivier Faynot, micro-electronic component section manager of CEA-Leti, in an exclusive interview with Solid State Technology and SemiMD explained, “Today we have a strong focus on CMOS over CMOS integration, and this is the primary integration that we are pushing. What we see today is the integration of NMOS over PMOS is interesting and suitable for new material incorporation such as III-V and germanium.”

Table: Critical thermal budget steps summary in a planar FDSOI integration and CoolCube process for top FET in 3DVLSI. (Source: VLSI Symposium 2015)

The Table shows that CMOS over CMOS integration has met transistor performance goals with low-temperature processes, such that the top transistors have at least 90% of the performance compared to the bottom. Faynot says that recent results for transistors are meeting specification, while there is still work to be done on inter-tier metal connections. For advanced ICs there is a lot of interconnect routing congestion around the contacts and the metal-1 level, so inter-tier connection (formerly termed the more generic “local interconnect”) levels are needed to route some gates at the bottom level for connection to the top level.

“The main focus now is on the thermal budget for the integration of the inter-tier level,” explained Faynot. “To do this, we are not just working on the processing but also working closely with the designers. For example, depending on the material chosen for the metal inter-tier there will be different limits on the metal link lengths.” Tungsten is relatively more stable than copper, but with higher electrical resistance for inherently lower limits on line lengths. Additional details on such process-design co-dependencies will be disclosed during the 2016 VLSI Technology Symposium, chaired by Raj Jammy.
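Faynot's point about metal-dependent length limits follows from simple resistance scaling: for a fixed resistance budget, the longest allowable line is L = R·A/ρ. A sketch using bulk resistivities (real damascene lines have higher effective resistivity from scattering and barriers, and the resistance budget and cross-section here are invented values):

```python
# Bulk resistivities in ohm-m; thin inter-tier lines run higher, so these
# are order-of-magnitude inputs for comparing the two metals only.
RHO = {"Cu": 1.7e-8, "W": 5.6e-8}

def max_line_length_um(metal, r_budget_ohm, width_nm, height_nm):
    """Longest line of a given cross-section meeting a resistance budget:
    L = R * A / rho, returned in micrometers."""
    area_m2 = (width_nm * 1e-9) * (height_nm * 1e-9)
    return r_budget_ohm * area_m2 / RHO[metal] * 1e6
```

The ratio of bulk resistivities (about 3.3×) is why a tungsten inter-tier level, though more thermally stable, imposes inherently shorter link-length limits than copper for the same delay budget.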

When the industry decides to integrate III-V and Ge alternate-channel materials in CMOS, the different processing conditions for each should make NMOS over PMOS CoolCube a relatively easy performance extension. “Three-fives and germanium are basically materials with low thermal budgets, so they would be most compatible with CoolCube processing,” reminded Faynot. “To me, this kind of technology would be very interesting for mobile applications, because it would achieve a circuit where the length of the wires would be shortened. We would expect to save in area, and have less of a trade-off between power-consumption and speed.”

“This is a new wave that CoolCube is creating and it has been possible thanks to the interest and support of Qualcomm Technologies, which is pushing the technological development in a good direction and sending a strong signal to the microelectronics community,” said Leti CEO Marie Semeria. “Together, we aim to build a complete ecosystem with foundries, equipment suppliers, and EDA and design houses to assemble all the pieces of the puzzle and move the technology into the product-qualification phase.”


Rhines Reviews Four Decades of Design and Verification

Wednesday, March 2nd, 2016


By Jeff Dorsch, Contributing Editor

The electronic design automation industry is progressing from the “Applications Age” to a new era of field-programmable gate array prototyping where security and safety considerations are coming to the fore, according to Wally Rhines, chairman and chief executive officer of Mentor Graphics, giving the keynote address at DVCon U.S. in San Jose, Calif.

The Mentor CEO, who spent 21 years at Texas Instruments before getting into the EDA business, recalled that back in 1972, “there was no verification,” as chip designers were working on small-scale integration and medium-scale integration circuits that weren’t very complex.

Soon after, the CANCER simulator and the SPICE simulation program were developed, ushering in what Rhines called “verification era 0.0.”

This was followed by the register-transfer language design era of VHDL and the Verilog hardware description language, which he dubbed the “verification 1.0 era.”

As computers grew “faster, bigger,” Rhines said, “simulation became very fast, very productive,” leading to testbenches and “verification 2.0,” he added.

The emulation/simulation/verification segment in EDA increased to more than $1 billion in revenue during 2014, Rhines noted. This led to the “systems era” and “verification 3.0,” with multiple domains, he said.

The industry continues to evolve, from the “Pre-ICE Age” and ICE (in-circuit emulation) Age to the current times, with test creation automation and “the goal of portable stimulus,” the Mentor CEO said.

Going “beyond functional verification,” Rhines cited security as an increasing concern in IC design and verification. He pointed to Beckstrom’s Law of Cybersecurity:

  1. Anything attached to a network can be hacked.
  2. Everything is being attached to networks.
  3. Everything is vulnerable.

Semiconductors are now subject to side-channel attacks, Rhines noted. There are also the issues of counterfeit chips and malicious logic inside the chip. For the latter, the industry will resort to static tests and dynamic detection, he said.

In light of these developments, design and verification is moving to “verifying a chip does nothing it is not supposed to do,” Rhines commented.

Safety is the other big issue in chip design and verification. For road vehicles, there is the ISO 26262 standard. In medical equipment, it’s the IEC 60601 standard. And in military/aerospace applications, it’s the DO-254 standard, according to Rhines.

Working with such standards, subject to auditing, calls for fault injection and formal-based fault injection/verification, he said.

DVCon, short for Design and Verification Conference and Exhibition, evolved early in the 21st century from the establishment of verification standards and formation of the Accellera Systems Initiative. Annual conferences are held in Europe, India, and the U.S., with plans for a DVCon China in 2017.

Imagining China’s IC Fab Industry in 2035

Friday, January 22nd, 2016


By Ed Korczynski, Sr. Technical Editor

Editor’s Note:  In Solid State Technology’s November 1995 Asia/Pacific Supplement, this editor wrote of the PRC’s status and plans for IC fabs in an article titled “Progress creeps forward”. SEMICON/China 1995 was held in a small hall in Shanghai, with 125 exhibitors and 5000 attendees discussing the production of just 245M IC units in the entire country in 1994. Motorola’s Fab17 in Tianjin was planned to yield 360M ICs from 200mm wafers.

China has been successfully investing in technology to reach global competitiveness for many decades. Integrated circuit (IC) manufacturing technology is highly strategic for countries, enabling both economically-valuable commercial fabs as well as military power. The Wassenaar Arrangement (WA) between 40-some states has restricted exports to China of “leading” technology with potential “dual-use” by industry and military. Using the terminology of IC fab nodes/generations, WA has typically restricted exports to fab tools capable of processing ICs three nodes behind (n-3) the leading edge of commercial capability. In 1995 the leading edge was 0.35 microns, so 1 micron and above was the WA limit. In 2015 the leading edge is 14nm, so 45nm and above is the WA limit, but local capability has already effectively bypassed this restriction.

On February 9, 2015, trade organization SEMI announced the successful lobbying of the U.S. Department of Commerce to declare the export controls on certain etch equipment and technology ineffective, thereby allowing US equipment companies to sell high-volume manufacturing (HVM) tools with capabilities closer to the leading edge into China. Following years of discussion and negotiations, SEMI had submitted a formal petition for the Commerce Department’s Bureau of Industry and Security (BIS) to examine the foreign availability of anisotropic plasma dry etching equipment, having identified AMEC as providing an indigenous Chinese manufacturing capability. AMEC has announced that its tool is being used by Samsung for V-NAND HVM, which is certainly a “leading-edge” product that happens to be made using 45nm node (n-3) design rules.

“The Future is in the Past: Projecting and Plotting the Potential Rate of Growth and Trajectory of the Structural Change of the Chinese Economy for the Next 20 Years” by Jun Zhang et al. from the Institute of World Economics and Politics, Chinese Academy of Social Sciences was first published online in 2015 (DOI: 10.1111/cwe.12098). Thanks to economic growth averaging more than 9.7% annually in China over the past 35 years, it is estimated that China’s per-capita GDP has already reached approximately 23% of that of the USA. Because of the significant rise in per-capita income over the past 30 years, China has started to see a rapid demographic transition and a gradual rise in labor costs, as seen in other high-performing East Asian economies. Benchmarking to the experiences of East Asian high-performing economies from 1950 to 2010, the paper projects a potential growth rate of per-capita GDP (adjusted by purchasing power parity) for China of ~6.02% from 2015 to 2035.

The PRC still works with 5-year-plans. Figure 1 shows Deng Xiaoping touring a government-run fab during the 8th 5-year-plan (1991-1995), when central planning of local resources dominated the Chinese IC industry. Paramount leader Deng had famously proclaimed, “Poverty is not socialism. To be rich is glorious,” which allowed for private enterprise and different economic classes. As reported by Robert Lawrence Kuhn in the 2007 Bloomberg Business article “What Will China Look Like in 2035”, researchers at the Institute of Quantitative & Technical Economics of the Chinese Academy of Social Sciences—the official government think tank housing more than 3,000 scholars and researchers—predicted that by 2030 China’s economic reform will have been basically completed, such that the major issue will be the “adjustment of interests” among different classes.

Figure 1: Deng Xiaoping is shown Shanghai Belling’s fab by General Manager Lu Dechun during the 8th 5-year-plan (1991-1995). Such small fabs are not globally competitive. (Source: Ed Korczynski)

In 2014, McKinsey & Company published proprietary research finding that >50% of PCs and 30-40% of embedded systems contain content designed in China, either directly by mainland companies or emerging from the Chinese labs of global players. Since fewer chip designs will be moving to technologies at the 22nm node and below, low-cost Chinese technology companies will soon be able to address a larger part of the global market. Chinese companies will become more aggressive in pursuing international mergers and acquisitions to acquire global intellectual property and expertise to be transferred back home.

Figure 2 shows that ICs represent the single greatest import cost for China, so there is great incentive to develop competitive internal fab capacity. The government, recognizing the failure of earlier centrally-planned investment initiatives, now takes a market-based investment approach. The target is a compound annual growth rate (CAGR) for the industry of 20%, with potential financial support from the government of up to 1 trillion renminbi ($170 billion) over the next five to ten years. To avoid the fragmentation issues of the past, the government will focus on creating national champions—a small set of leaders in each critical segment of the semiconductor market (including design, manufacturing, tools, and assembly and test) and a few provinces in which there is the potential to develop industry clusters.
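The arithmetic behind the 20% CAGR target is plain compounding; sustained over the five-to-ten-year support window quoted above, it implies roughly 2.5× to 6× industry growth.

```python
def cagr_multiplier(cagr, years):
    """Total growth factor implied by a compound annual growth rate.
    E.g., 20% CAGR for 5 years is (1.2)**5, about 2.5x."""
    return (1.0 + cagr) ** years
```

Whether any industry can hold a 20% compounded rate for a decade is the real question; the multiplier is only as credible as the rate.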

Figure 2: The leading imports to China in 2014, showing that integrated circuits (IC) cost the country more than oil. (Source: China’s customs)

Global Cooperation and Competition

The remaining leading IC manufacturers in the world—Intel, Samsung, and TSMC—are all involved in mainland Chinese fabs. Intel’s Fab68 in Dalian began production of logic chips in 2010. Samsung’s fab in Xian began production of V-NAND chips in 2014. TSMC has announced it is seeking approval to build a wholly-owned 300mm foundry in Nanjing, after rival UMC invested in a jointly-owned foundry now being built in Xiamen.

“We do see significant growth, and a big part of that is due to investment by the Chinese government,” said Handel Jones of IC Insights during SEMICON Europa 2015. “Up to US$20B of government subsidy has been earmarked for IC manufacturing investment in China.” Jones forecasts that by 2025 up to 30% of global design starts will be in China, many to be designed by the ~500 fabless companies in China today. Jones estimates the total R&D investment in China today for 5G wireless technology is about US$2B per year, with about one-half of that just by Huawei Technologies Co. Ltd.

Due to the inevitable atomic limits of Moore’s Law scaling, the industry will likely reach the end of new nodes within the next 20 years. By then, “trailing-edge” will include everything that is in R&D today, from quantum devices to CMOS-photonic chips, and it is highly likely that China will have globally competitive design and manufacturing capability in those technologies. While a net importer of ICs today, China seems likely to be a net exporter of ICs by 2035.


Identifying the Prime Challenge of IoT Design

Friday, December 18th, 2015


By Jeff Miller, Product Marketing Manager, Mentor Graphics Corporation


In his blog post for Semiconductor Manufacturing & Design, Pete Singer shared how the acquisition of Tanner EDA by Mentor Graphics provides a solution to the design challenge of the Internet of Things (IoT). Low-cost IoT designs, which interface the edge of the real world to the Internet, mesh together several design domains. Individually, these design domains are challenging for today’s engineers. Bringing them all together to create an IoT product can place extreme pressure on design teams. For example, let’s look at the elements of a typical IoT device (Figure 1).

Figure 1: A typical IoT device.

This IoT device contains a sensor and an actuator that interface to the Internet. The sensor signal is sent to an analog signal processing device in the form of an amplifier or a low-pass filter. The output connects to an A/D converter to digitize the signal. That signal is sent to a digital logic block that contains a microcontroller or a microprocessor. Conversely, the actuator is controlled by an analog driver through a D/A converter. The sensor telemetry is sent and control signals are received by a radio module that uses a standard protocol such as WiFi, Bluetooth, or ZigBee, or a custom protocol. The radio transmits data to the Cloud or through a smartphone or PC.
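As a toy sketch of that signal chain (all values here are hypothetical, and the filter and ADC are idealized stand-ins for real analog hardware):

```python
import math

def low_pass(samples, alpha=0.2):
    """Single-pole IIR filter standing in for the analog front end."""
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def adc(volts, n_bits=10, v_ref=3.3):
    """Quantize a voltage to an n-bit code, clamped to the converter range."""
    clamped = max(0.0, min(volts, v_ref))
    return int(clamped / v_ref * (2 ** n_bits - 1))

# Sensor voltage -> analog filter -> ADC -> digital codes for the MCU.
sensor = [1.65 + 0.5 * math.sin(0.1 * i) for i in range(50)]
codes = [adc(v) for v in low_pass(sensor)]
```

The digital codes would then be packaged by the microcontroller and handed to the radio module for transmission.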

This device points out the prime challenge of IoT design: analog, digital, RF, and MEMS design domains all live together in one device. IoT design requires that all four design domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dies that will be bonded together, designers still need to work together during the integration and verification process. In this design, there are several components in multiple domains, such as the A/D converter, digital logic, an RF radio, a MEMS sensor, and an analog driver that connects to an external mechanical actuator. The design team needs to capture a mixed analog and digital, RF, and MEMS design, perform both component and top-level simulation, lay out the chip, and verify the components within the complete system.

The Tanner Solution

The Tanner solution delivers a top-down design flow for IoT design, unifying the four design domains (Figure 2).

Figure 2: The Tanner IoT design flow.

Whether you are designing a single-die or multiple-die IoT device, you can use this design flow to create and simulate it:

  • Capturing and simulating the design. S-Edit captures the design at multiple levels of abstraction for any given cell. Each cell can have multiple views, such as schematic, RTL, or SPICE, and you then choose which view to use for simulation. T-Spice simulates the SPICE and Verilog-A representations of the design, while ModelSim simulates the digital (Verilog-D/RTL) portions of your design.
  • Simulating the mixed-signal design. S-Edit creates the complete Verilog-AMS netlist and passes it to T-Spice. T-Spice automatically adds analog/digital connection modules and then partitions the design for simulation. T-Spice simulates the analog portions (SPICE and Verilog-A) and sends the RTL to ModelSim for digital simulation. Both simulators are invoked automatically, and during simulation the signal values are passed back and forth between the simulators whenever there is a signal change at the analog/digital boundary. This means that, regardless of the design implementation language, you drive the simulation from S-Edit and the design is automatically partitioned across the simulators. You can then interact with the results using the ModelSim and T-Spice waveform viewers. Behavioral models of MEMS devices can be created in Verilog-A or as equivalent lumped SPICE elements that are simulated along with the digital models for system-level verification.
  • Laying out the design. The physical design is completed using L-Edit which allows you to create the layout of the analog and MEMS components for the IoT design. The parameterized layout library of common MEMS elements and true curve support simplify the MEMS layout.
  • Completing the flow. Of course, there are other steps in the flow, such as digital synthesis, digital place and route, chip assembly, physical verification, static timing analysis, and full system verification. However, these steps are beyond the scope of this discussion.
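The analog/digital handoff in the mixed-signal step above can be illustrated with a minimal sketch. This is not the S-Edit/T-Spice/ModelSim interface; it only shows the general idea of exchanging values at the boundary when a signal changes, using stand-in models and an assumed comparator threshold:

```python
def analog_step(t):
    """Stand-in analog solver: a voltage ramp that crosses the logic threshold."""
    return 0.33 * t  # volts at time t

def digital_eval(bit):
    """Stand-in digital block: a simple inverter on the comparator output."""
    return 1 - bit

V_TH = 1.5   # assumed analog/digital connection-module threshold (volts)
events = []  # (time, digital output) pairs exchanged at the boundary
prev_bit = None
for t in range(11):
    bit = 1 if analog_step(t) >= V_TH else 0
    if bit != prev_bit:  # hand off only when the boundary signal changes
        events.append((t, digital_eval(bit)))
        prev_bit = bit
# events -> [(0, 1), (5, 0)]: two boundary crossings, two handoffs
```

The key design point, as in the real tools, is that the two solvers run independently and only synchronize at boundary signal changes, rather than lock-stepping every timestep.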

Implementing the MEMS Device

One of the most challenging aspects of IoT design is implementing the MEMS device, so in this article we focus on the physical design flow for this device. Let’s say that the MEMS device in our design is a magnetic actuator. A magnetic actuator comprises a coil and a moving paddle. The paddle is suspended by a spring. When current is sent through the coil, a magnetic field is created that moves the paddle in and out of the coil field (Figure 3).

Figure 3: MEMS magnetic actuator.
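A first-order sketch of the actuator’s behavior, assuming a linear force-vs-current model with hypothetical constants (a real coil force is nonlinear, which is exactly why the 3D analysis described below is needed):

```python
def paddle_displacement(current_a, k_force=2.0e-3, k_spring=5.0):
    """Equilibrium displacement (m) where the spring balances the coil force.

    Assumes F_mag = k_force * I (linear toy model) and F_spring = k_spring * x,
    so x = k_force * I / k_spring. Both constants are hypothetical.
    """
    return k_force * current_a / k_spring

x1 = paddle_displacement(0.010)  # 10 mA drive
x2 = paddle_displacement(0.020)  # doubling the current doubles the deflection
```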

You could create a 3D model of the magnetic actuator using a 3D analysis tool and then analyze its dynamic response to different currents. But to fabricate the actuator you need a 2D layout mask, and deriving a 2D mask from a 3D model is error-prone and difficult to validate. A better approach is to follow the mask-forward flow shown in Figure 4, which results in more confidence that the actuator will not only work correctly but can be successfully fabricated.

Figure 4: The mask-forward MEMS design flow.

The mask-forward MEMS design flow starts by creating the 2D mask layout in L-Edit. Then, use the SoftMEMS 3D Solid Modeler (integrated within L-Edit) to automatically generate the 3D model from those masks and a set of specified fabrication steps. Perform 3D analysis using your favorite finite element tool and then iterate if you find any issues. Make the appropriate changes to the 2D mask layout and then repeat the flow. Using this mask-forward design flow, you can converge on a MEMS device that you are confident can be fabricated correctly, because you create the 3D model directly from the masks that will eventually be used for fabrication, rather than trying to work backwards from the 3D model.


The prime challenge of IoT design is working in four design domains: analog, digital, RF, and MEMS. The Tanner design flow is architected to seamlessly work across all of these design domains by employing an integrated design flow for design, simulation, layout, and verification.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to be fed forward and back to minimize the risk of expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST resident on the logic die connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
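As an illustration only (RITdb’s actual schema is not detailed here), a structured, self-describing test record might carry identity, operation, results, and provenance fields that any party in the supply chain can parse:

```python
import json

# Hypothetical record layout: RITdb's real schema is not shown in this article.
# The point is structured, self-describing data with provenance, so that
# fabless, foundry, OSAT, and OEM tools can all parse the same payload.
record = {
    "entity": "die",
    "ids": {"lot": "LOT123", "wafer": 7, "x": 12, "y": 34},
    "operation": {"site": "OSAT-A", "step": "final_test",
                  "timestamp": "2015-08-24T10:00:00Z"},
    "results": [
        {"test": "io_leakage", "value": 1.2e-9, "units": "A", "passed": True},
    ],
    "provenance": {"producer": "tester-42", "signature": "sha256:..."},
}
payload = json.dumps(record)  # serialized for transmission between companies
```

Contrast this with STDF, a flat binary format defined decades ago, which carries much less of this contextual and provenance information.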

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types, the industry has on-chip temperature measurement circuits that can monitor temperature at a given time, but cannot necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.
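A minimal sketch of that kind of parametric screen, with hypothetical test names and limits (real limits come from the production test program):

```python
def screen_die(measurements, limits):
    """Return the names of parametric tests that fall outside their limits.

    measurements and limits are dicts keyed by test name; each limit is a
    (low, high) pair. A missing measurement is flagged as a failure.
    """
    return [name for name, (lo, hi) in limits.items()
            if not lo <= measurements.get(name, float("nan")) <= hi]

limits = {"io_leakage_a": (0.0, 1e-8), "fmax_hz": (1.8e9, float("inf"))}
good = screen_die({"io_leakage_a": 2e-9, "fmax_hz": 2.1e9}, limits)  # []
bad = screen_die({"io_leakage_a": 5e-8, "fmax_hz": 2.1e9}, limits)   # flags leakage
```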

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of tools for TSV metrology to the world. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bumps, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield-loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said that, “we’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for its Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation, and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process that both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.


Monte Carlo Analysis Has Become A Gamble

Monday, October 21st, 2013

Dr. Bruce McGaughy, CTO and SVP of Engineering at ProPlus Design Solutions, Inc. blogs about the wisdom of Monte Carlo analysis when high sigma methods are perhaps better suited to today’s designs.

Years ago, someone overheard a group of us talking about Monte Carlo analysis and thought we were referring to the gambling center of Monaco, not the computational algorithms that have become the gold standard for yield prediction. All of us standing by the company water cooler had a good laugh. That someone was forgiven because he was a recent college graduate with a degree in Finance and a new hire. As a fast learner, he quickly came to understand the benefits of Monte Carlo analysis.

I was recently reminded of this scene because the limitations of Monte Carlo analysis are becoming more acute as capacity demands grow. No circuit designer would mistake Monte Carlo analysis for a roulette wheel, though chip design may seem like a game of chance today. We continue to use the Monte Carlo approach for high-dimension integration and failure analysis even as new approaches emerge.

Emerging they are. For example, high sigma methods with proven techniques are becoming more prevalent for the design of airplanes, bridges, financial models, integrated circuits and more. Moreover, high sigma methods also are used for electronic design for various applications and are proving to be accurate by validation in hardware.

New technologies, such as 16nm FinFET, add extra design challenges that require sigma targets greater than six and closer to seven, making Monte Carlo simulation even less practical.

Let’s explore a real-world scenario using a memory design as an example where process variations at advanced technologies become more severe, leading to a greater impact on SRAM yield.

The repetitive structure of an SRAM design means an extremely low cell failure rate is necessary to ensure high chip yield. Traditional Monte Carlo analysis is impractical in this application. In fact, it’s nearly impossible to finish the needed sampling because it typically requires millions or even billions of runs.

Conversely, a high sigma method can cut Monte Carlo analysis sampling by orders of magnitude. A one-megabyte SRAM would require the yield of each bit cell to reach as high as 99.999999% in order to achieve a chip yield of 99%. Monte Carlo analysis would need billions of samples. The high sigma method would need mere thousands of samples to achieve the same accuracy, shortening the statistical simulation time and making yield analysis possible for this kind of application.
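The arithmetic behind figures of this kind can be checked directly, assuming one cell per bit and a 99% chip-yield target:

```python
n_cells = 8 * 1024 * 1024          # one-megabyte SRAM, one cell per bit (assumed)
chip_yield_target = 0.99

# Per-cell yield needed so that all cells pass: cell_yield ** n_cells >= 0.99
cell_yield = chip_yield_target ** (1.0 / n_cells)
cell_fail_rate = 1.0 - cell_yield  # on the order of 1e-9, a >6-sigma tail event

# Rule of thumb: to observe ~100 failures (a minimally stable estimate),
# plain Monte Carlo needs on the order of 100 / fail_rate samples.
mc_samples_needed = 100 / cell_fail_rate
```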

High sigma methods are able to identify and filter sensitive parameters, and identify failure regions. Results are shared in various outputs and include sigma convergence data, failure rates, and yield data equivalent to Monte Carlo samples.
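Many high sigma approaches build on importance sampling: bias the sampling distribution into the failure region, then re-weight each sample by the density ratio. The post does not describe ProPlus’s specific algorithm; this toy example only estimates a one-dimensional 5-sigma tail probability with far fewer samples than plain Monte Carlo would need:

```python
import math
import random

random.seed(1)

def fails(x):
    """Toy failure criterion: a standard-normal parameter beyond 5 sigma."""
    return x > 5.0

SHIFT = 5.0    # center the sampling distribution on the failure boundary
N = 20000
total = 0.0
for _ in range(N):
    x = random.gauss(SHIFT, 1.0)  # biased sampling distribution q(x)
    if fails(x):
        # re-weight by the density ratio p(x)/q(x) for a standard-normal p
        total += math.exp(-x * x / 2.0 + (x - SHIFT) ** 2 / 2.0)
p_fail = total / N  # estimates P(x > 5), roughly 3e-7
```

Plain Monte Carlo would need tens of millions of samples just to observe a handful of such failures; the biased sampler sees them constantly and the weights correct the estimate.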

Monte Carlo analysis has had a good long run for yield prediction, but in many cases it has become impractical. Emerging high sigma methods improve designer confidence in yield, power, performance, and area; shorten the process development cycle; and have the potential to save cost. The ultimate validation, of course, is in hardware and production usage. High sigma methods are gaining extensive silicon validation through volume production.

Let’s not gamble with yield prediction and take a more careful look at high sigma methods.

About Bruce McGaughy

Bruce McGaughy, CTO and Senior VP of Engineering at ProPlus Solutions in San Jose, CA.

Dr. Bruce McGaughy is chief technology officer and senior vice president of Engineering at ProPlus Design Solutions, Inc. He was most recently Chief Architect of the Simulation Division and a Distinguished Engineer at Cadence Design Systems Inc. He previously served as an R&D VP at BTA Technology Inc. and Celestry Design Technology Inc., and later as an Engineering Group Director at Cadence Design Systems Inc. Dr. McGaughy holds a Ph.D. in EECS from the University of California at Berkeley.

Model-Based Hints: GPS for LFD Success

Wednesday, October 16th, 2013
By Joe Kwan, Mentor Graphics

For several technology nodes now, designers have been required to run lithography-friendly design (LFD) checks prior to tape out and acceptance by the foundry. Due to resolution enhancement technology (RET) limitations at advanced nodes, we are seeing significantly more manufacturing issues [1] [2], even in DRC-clean designs. Regions in a design layout that have poor manufacturability characteristics, even with the application of RET techniques, are called lithographic (litho) hotspots, and they can only be corrected by modifying the layout polygons in the design verification flow.

A litho hotspot fix should satisfy two conditions:

  • First, implementing a fix cannot cause an internal or external DRC violation (i.e., applying a fix should not result in completely removing a polygon, making its width less than the minimum DRC width, merging two polygons, or making the distance between them less than the minimum DRC space).
  • Second, the fix must be LFD-clean, which means it should not only fix the hotspot under consideration, but also make sure that it does not produce new hotspots.
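The first condition can be sketched in one dimension (real DRC checks operate on 2D layout geometry inside the EDA tool; units and values here are hypothetical nanometers):

```python
def fix_is_drc_safe(width, space, delta, min_width, min_space):
    """1D sketch of the first condition above.

    Moving one polygon edge outward by `delta` grows that polygon's width
    but shrinks the space to its neighbor; both must stay at or above the
    DRC minimums, and neither may collapse (polygon removal or merge).
    """
    new_width, new_space = width + delta, space - delta
    return (new_width >= min_width and new_space >= min_space
            and new_width > 0 and new_space > 0)

# A 2nm outward move that would pinch the neighbor spacing below minimum:
ok = fix_is_drc_safe(width=50, space=52, delta=2, min_width=48, min_space=51)
# ok -> False: the fix would create a new DRC space violation
```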

However, layout edges that should be moved to fix a litho hotspot are not necessarily the edges directly touching it. Determining which layout edges to move to fix a litho hotspot can be pretty complicated, because getting from a design layout to a printed contour involves a bunch of complex non-linear steps (such as RET) that alter the original layout shapes, and optical effects that take into account the effect of the layout features context. Since any layout modifications needed to fix litho hotspots must be made by the designer, who is generally not familiar with these post-tapeout processes, it’s pretty obvious that EDA tools need to provide the designer with some help during the fix process.

At Mentor Graphics, we call this help model-based hints (MBH). MBH can evaluate the hotspot, determine what fix options are available, run simulations to determine which fixes also comply with the required conditions, then provide the designer with appropriate fix hints (Figure 1). A fix can include single-edge or group-edge movements, and a litho hotspot may have more than one viable fix. Also, post-generation verification can detect any new minimum DRC width or space violations, but it will not be able to detect deleted or merged polygons, so the MBH system must incorporate this knowledge into hint generation. Being able to see all the viable fix options in one hint gives the designer both the information needed to correct the hotspot and the flexibility to implement the fix most suitable to that design.

Figure 1. Litho hotspot analysis with model-based hinting (adapted from “Model Based Hint for Litho Hotspot Fixing Beyond 20nm node,” SPIE 2013)

Another cool thing about MBH systems—they can be expanded to support hints for litho hotspots found in layers manufactured using double or triple patterning, by using the decomposed layers along with the original target layers as an input. This enables designers to continue resolving litho hotspots at 20 nm and below. In fact, we’ve had multiple customers tape out 20 nm chips using litho simulation and MBH on a variety of designs to eliminate litho hotspots.

Of course, it goes without saying that any software solutions generating such hints also need to be accurate and fast. But we said it anyway.

As designers must take on more and more responsibility for ensuring designs can be manufactured with increasingly complex production processes, EDA software must evolve to fill the knowledge gap. LFD tools with MBH capability are one example of how EDA systems can be the bridge between design and manufacturing.


Joe Kwan is the Product Marketing Manager for Calibre LFD and Calibre DFM Services at Mentor Graphics. He previously worked at VLSI Technology, COMPASS Design Automation, and Virtual Silicon. Joe received a BA in Computer Science from the University of California, Berkeley, and an MS in Electrical Engineering from Stanford University. He can be reached at

Next Page »