
Posts Tagged ‘EDA’


Mentor’s Pattern Matching Tackles IC Verification and Manufacturing Problems

Sunday, June 5th, 2016


Mentor Graphics Corporation announced that customers and ecosystem partners are expanding their use of the Calibre Pattern Matching solution to overcome previously intractable IC verification and manufacturing problems. The solution is integrated into the Mentor® Calibre nmPlatform toolset, a combination that is driving these new applications at IC design companies and foundries across multiple process nodes.

Calibre Pattern Matching technology supplements multi-operational text-based design rules with an automated visual geometry capture-and-compare process. This visual approach is powerful both in its ability to capture complex pattern relationships and in its ability to work within mixed tool flows, making it much easier for Mentor customers to create new applications that solve difficult problems. Because it is integrated into the Calibre nmPlatform toolset, the Calibre Pattern Matching functionality can leverage the industry-leading performance and accuracy of all Calibre tools and flows to create new opportunities for design-rule checking (DRC), reliability checking, DFM, yield enhancement, and failure analysis.
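To make the capture-and-compare idea concrete, here is a minimal sketch of geometric pattern matching on a rasterized layout. It is a toy illustration only: the commercial tool works on real polygon geometry with tolerances, orientations, and symmetry handling, none of which this exact-match version attempts.

```python
# Toy layout pattern matching: a captured pattern is a small pixel
# grid, and we report every window of the rasterized layout that
# matches it exactly. Real pattern-matching tools operate on polygon
# data with match tolerances; this sketch is exact-match only.
import numpy as np

def find_pattern(layout: np.ndarray, pattern: np.ndarray):
    """Return (row, col) offsets where `pattern` occurs in `layout`."""
    lh, lw = layout.shape
    ph, pw = pattern.shape
    hits = []
    for r in range(lh - ph + 1):
        for c in range(lw - pw + 1):
            if np.array_equal(layout[r:r+ph, c:c+pw], pattern):
                hits.append((r, c))
    return hits

# 1 = drawn geometry, 0 = empty; the pattern stands in for a captured
# "yield detractor" configuration.
layout = np.array([[1,1,0,0,1],
                   [0,1,1,0,1],
                   [0,0,1,1,1],
                   [0,0,0,1,0]])
pattern = np.array([[1,1],
                    [0,1]])
print(find_pattern(layout, pattern))   # prints [(0, 0), (1, 1), (2, 2)]
```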

“Our customers count on eSilicon’s design services, IP, and ecosystem management to help them succeed in delivering market-leading ICs,” said Deepak Sabharwal, general manager, IP products & services at eSilicon. “We use Calibre Pattern Matching to create and apply a Calibre-based yield-detractor design kit that helps identify and eliminate design patterns that impact production ramp-up time.”

Since its introduction, use models for Calibre Pattern Matching technology have rapidly expanded, solving problems that were previously too complex or time-consuming to be implemented. New use cases include the following:

  • Physical verification of IC designs with curved structures—for analog, high-power, radio frequency (RF) and micro-electro-mechanical systems (MEMS) circuitry—is extremely difficult with products designed to work with rectilinear design data. Calibre customers are automating that verification using a combination of Calibre Pattern Matching technology and other Calibre tools for much greater efficiency and accuracy, especially when compared to manual techniques.
  • Calibre Pattern Matching technology can be used to quickly locate and remove design patterns that are known to be, or suspected of being, difficult to manufacture (“yield detractors”). Foundries or design companies create libraries of yield-detractor patterns that are specific to a process node or a particular design methodology. Samsung Foundry used this approach in its Closed-Loop DFM solution to help its customers ramp to volume faster and reduce process-design variability.
  • Some customers use Calibre Pattern Matching technology with Calibre Auto-Waivers™ functionality to define a specific context for waiving a DRC violation. This enhancement allows for automatic filtering of those violations for significant time savings and improved design quality.

“To help our customers create manufacturing-ready designs, we use Calibre Pattern Matching to create and use a yield-detractor database to fix most of the litho hotspots at the block level. Then we perform fast signoff DFM litho checking at the chip level using an integrated solution with Calibre Pattern Matching and Calibre LFD,” said Min-Hwa Chi, senior vice president, SMIC. “By offering a solution for manufacturability robustness that is built on the Calibre platform, we are seeing ready customer adoption of SMIC’s DFM solution.”

With the Calibre Pattern Matching tool, design companies can now optimize their physical verification checking to their unique design styles. The tool is easy to adopt because it doesn’t rely on expertise in scripting languages. Instead, any engineer can readily define a visual pattern that captures the designer’s expertise in the critical geometries and context for that configuration.

“With the growing adoption of Calibre Pattern Matching technology, Mentor continues to help our customers address increasing design complexity, regardless of the process node they are targeting,” said Joe Sawicki, vice president and general manager of the Design-to-Silicon division at Mentor Graphics. “By incorporating the Calibre Pattern Matching tool, the Calibre platform becomes an even more valuable bridge between design and manufacturing for the ecosystem.”

At the 2016 Design Automation Conference, Mentor will give a Calibre Pattern Matching presentation on Tuesday, June 7 at 3 p.m. in the Mentor booth (#949). Register for the session using the registration form:

https://www.mentor.com/events/design-automation-conference/schedule

Leti’s CoolCube 3D Transistor Stacking Improves with Qualcomm Help

Wednesday, April 27th, 2016

By Ed Korczynski, Sr. Technical Editor

As previously covered by Solid State Technology, CEA-Leti in France has been developing monolithic transistor stacking based on laser re-crystallization of active silicon in upper layers, a technology called “CoolCube” (TM). Leading mobile chip supplier Qualcomm has been working with Leti on CoolCube R&D since late 2013 and, based on preliminary results, has opted to continue collaborating with the goal of building a complete ecosystem that takes the technology from design to fabrication.

“The Qualcomm Technologies and Leti teams have demonstrated the potential of this technology for designing and fabricating high-density and high-performance chips for mobile devices,” said Karim Arabi, vice president of engineering, Qualcomm Technologies, Inc. “We are optimistic that this technology could address some of the technology scaling issues and this is why we are extending our collaboration with Leti.” As part of the collaboration, Qualcomm Technologies and Leti are sharing the technology through flexible, multi-party collaboration programs to accelerate adoption.

Olivier Faynot, micro-electronic component section manager of CEA-Leti, explained in an exclusive interview with Solid State Technology and SemiMD, “Today we have a strong focus on CMOS over CMOS integration, and this is the primary integration that we are pushing. What we see today is that the integration of NMOS over PMOS is interesting and suitable for the incorporation of new materials such as III-V and germanium.”

Table: Critical thermal budget steps summary in a planar FDSOI integration and CoolCube process for top FET in 3DVLSI. (Source: VLSI Symposium 2015)

The Table shows that CMOS over CMOS integration has met transistor performance goals with low-temperature processes, such that the top transistors deliver at least 90% of the performance of the bottom transistors. Faynot says that recent results for transistors are meeting specification, while there is still work to be done on inter-tier metal connections. For advanced ICs there is a lot of interconnect routing congestion around the contacts and the metal-1 level, so inter-tier connection levels (formerly covered by the more generic term “local interconnect”) are needed to route some gates at the bottom level for connection to the top level.

“The main focus now is on the thermal budget for the integration of the inter-tier level,” explained Faynot. “To do this, we are not just working on the processing but also working closely with the designers. For example, depending on the material chosen for the metal inter-tier there will be different limits on the metal link lengths.” Tungsten is more thermally stable than copper, but its higher electrical resistance imposes inherently lower limits on line lengths. Additional details on such process-design co-dependencies will be disclosed during the 2016 VLSI Technology Symposium, chaired by Raj Jammy.
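The resistance-driven length limit Faynot describes can be seen in a back-of-envelope distributed-RC calculation. The sketch below compares tungsten and copper links using bulk resistivities with an assumed cross-section, wire capacitance, and delay budget; the numbers are illustrative, not Leti process data.

```python
# Rough comparison of inter-tier link length limits for tungsten vs.
# copper, using a simple distributed-RC (Elmore) delay model.
# Dimensions, capacitance, and delay budget are illustrative
# assumptions only.
RHO = {"Cu": 1.7e-8, "W": 5.6e-8}      # bulk resistivity, ohm*m
WIDTH, THICK = 40e-9, 80e-9            # assumed wire cross-section, m
C_PER_M = 2e-10                        # assumed wire capacitance, F/m
T_BUDGET = 5e-12                       # assumed delay budget, 5 ps

for metal, rho in RHO.items():
    r_per_m = rho / (WIDTH * THICK)    # wire resistance per meter
    # distributed RC delay ~ 0.38 * R * C * L^2  =>  solve for L
    L_max = (T_BUDGET / (0.38 * r_per_m * C_PER_M)) ** 0.5
    print(f"{metal}: max link length ~ {L_max * 1e6:.0f} um")
    # Cu ~ 111 um vs. W ~ 61 um under these assumptions: tungsten's
    # higher resistivity cuts the usable link length almost in half.
```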

When the industry decides to integrate III-V and Ge alternate-channel materials in CMOS, the different processing conditions for each should make NMOS over PMOS CoolCube a relatively easy performance extension. “Three-fives and germanium are basically materials with low thermal budgets, so they would be most compatible with CoolCube processing,” reminded Faynot. “To me, this kind of technology would be very interesting for mobile applications, because it would achieve a circuit where the length of the wires would be shortened. We would expect to save in area, and have less of a trade-off between power-consumption and speed.”

“This is a new wave that CoolCube is creating and it has been possible thanks to the interest and support of Qualcomm Technologies, which is pushing the technological development in a good direction and sending a strong signal to the microelectronics community,” said Leti CEO Marie Semeria. “Together, we aim to build a complete ecosystem with foundries, equipment suppliers, and EDA and design houses to assemble all the pieces of the puzzle and move the technology into the product-qualification phase.”

—E.K.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands: EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices that may use ultra-low-threshold-voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that the testability has also been designed to remain structurally functional in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening has used very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold digital behavior becomes analog; so while the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.
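Lanson's "digital becomes analog" point follows directly from the subthreshold current equation, where drive current depends exponentially on the gate overdrive. A small sketch, with illustrative values assumed for the slope factor, reference current, and local Vth variation, shows how large the resulting spread gets:

```python
# In the subthreshold region, drain current depends exponentially on
# (Vgs - Vth), so a small local Vth shift swings drive current by
# large factors. I0, n, and the 30 mV sigma are assumed for
# illustration.
import math

VT = 0.0259    # thermal voltage kT/q at 300 K, volts
n = 1.3        # subthreshold slope factor (assumed)
I0 = 1e-7      # extrapolated current at Vgs = Vth (assumed), A

def i_sub(vgs_minus_vth):
    return I0 * math.exp(vgs_minus_vth / (n * VT))

nominal = i_sub(-0.10)          # device biased 100 mV below threshold
fast    = i_sub(-0.10 + 0.030)  # local Vth 30 mV low
slow    = i_sub(-0.10 - 0.030)  # local Vth 30 mV high
print(f"fast/slow current ratio: {fast / slow:.1f}x")  # ~6x from 60 mV
```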

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization of 5G is around 2020. 5G is being designed with three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and marshaling of the data devices generate across multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the first generation of IoT devices likely include a wide variety of solutions for different market segments such as industrial vs. retail vs. consumer, or will most devices use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors/OEMs/system houses are differentiating at the user-interface design and form-factor levels, while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn cycle in Consumer is higher and faster compared to Enterprise. Today’s wearables or smartphones are good reference examples. This will, however, vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period compared to smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, while consumer home use has longer cycles, closer to industrial market requirements. We believe that these lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding optimality for a given criterion across the practically infinite tradeoffs to be made.
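Carlson's two levers, current density and temperature, can be put in rough numbers with Black's equation for electromigration lifetime, MTTF ∝ J^-n · exp(Ea/kT). The sketch below uses commonly cited but here assumed values for the exponent and activation energy, so the multipliers are illustrative only:

```python
# Black's-equation view of the reliability levers: lifetime relative
# to a baseline when current density or temperature changes. The
# exponent n and activation energy Ea are assumed, typical-order
# values, not data for any specific process.
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
N_EXP = 2.0      # current-density exponent (assumed)
EA = 0.85        # activation energy, eV (assumed)

def mttf_rel(j_rel, temp_k, t0_k=358.0):
    """Lifetime vs. a baseline at relative density 1.0 and 85 C."""
    return j_rel ** -N_EXP * math.exp(EA / K_B * (1 / temp_k - 1 / t0_k))

# Halving current density (e.g., doubling wire width) at 85 C:
print(f"J/2 at 85C -> {mttf_rel(0.5, 358.0):.1f}x lifetime")   # ~4x
# Same current density, but running 20 C cooler:
print(f"J at 65C   -> {mttf_rel(1.0, 338.0):.1f}x lifetime")   # ~5x
```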

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at the IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) communication toward broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world: bridges sensing and reporting when repairs are needed, parts automatically reporting where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low cost and small size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use the 45nm/65nm nodes, which are “Node -3” (N-3) compared to sub-20nm-node chips in HVM. Five years from now, when the bleeding edge will use 10nm-node technology, will IoT chips still use N-3 at the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end-nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50 DMIPS. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecast to be a high-volume IoT application; the LED drivers will use BCD extensions of 130nm down to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000 DMIPS) that run high-level operating systems (e.g., Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF, migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on high-resolution displays. Connectivity will include BLE, WiFi and cellular, with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be heterogeneous aggregations of analog functions rather than high-power digital processors. Therefore, by analogy with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with sufficient digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer is that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, A/D, digital processing, and an RF connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge, and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components target separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time, as the short-run volumes characteristic of IoT are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State-of-the-art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations, allowing co-simulation of the MEMS device in the context of the IoT system.
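As a rough picture of what ROM does numerically, the sketch below takes the kind of mass and stiffness matrices an FEA tool would produce for a MEMS structure, keeps only the dominant vibration mode, and projects the system onto it. The 3-DOF matrices are toy assumptions, and a real flow would export the reduced equations as Verilog-A rather than print them:

```python
# Reduced-order modeling by modal truncation: solve the generalized
# eigenproblem K v = w^2 M v, keep the dominant mode(s), and project
# the full system onto that basis. The matrices are toy stand-ins for
# FEA output.
import numpy as np
from scipy.linalg import eigh

M = np.diag([1e-9, 1e-9, 1e-9])        # toy lumped mass matrix, kg
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])        # toy stiffness matrix, N/m

w2, V = eigh(K, M)                     # eigenvalues ascending = w^2
keep = 1                               # retain only the lowest mode
Phi = V[:, :keep]                      # reduced modal basis
M_r = Phi.T @ M @ Phi                  # reduced mass matrix (1x1)
K_r = Phi.T @ K @ Phi                  # reduced stiffness matrix (1x1)
f0 = np.sqrt(w2[0]) / (2 * np.pi)
print(f"dominant mode ~ {f0 / 1e3:.1f} kHz; reduced system is {keep}x{keep}")
```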

Wally Rhines of Mentor Graphics Gets Phil Kaufman Award

Monday, November 16th, 2015

By Jeff Dorsch, Contributing Editor

There was a celebrity roast on 4th Street in San Jose, Calif., on Thursday night.

The occasion was the presentation of the annual Phil Kaufman Award to Wally Rhines, chairman and chief executive officer of Mentor Graphics, for his contributions in the field of electronic design automation. Dr. Rhines has served as Mentor’s CEO since 1993 and as chairman of the EDA software and services company since 2000.

The Phil Kaufman Award is presented by the Electronic Design Automation Consortium (EDAC) and the IEEE Council on Electronic Design Automation (CEDA). It honors the memory of Philip A. Kaufman, the EDA industry pioneer, electronics engineer, and entrepreneur, who died in 1992.

Rhines received some gentle ribbing from Craig Barrett, the former Intel chairman and CEO, who once was a Stanford University professor and served on the advisory panel for Rhines’ doctoral thesis.

Barrett said of Rhines, who was a top chip executive at Texas Instruments prior to joining Mentor, “We competed for about 20 years, which is probably why he went to Mentor Graphics.”

He added, “His hairline is receding faster than mine.”

The retired Intel executive later said Rhines’ career has been “fantastic,” adding, “He certainly exceeded all our expectations. You done good, man. Keep it up.”

A video shown before the formal presentation offered Barrett and other top executives showering accolades on Rhines, who turned 69 years old on Wednesday, November 11. Among those praising Rhines were Aart de Geus, chairman and co-CEO of Synopsys, and Lip-Bu Tan, president and CEO of Cadence Design Systems – business rivals and friends.

“He’s actually a cool cat,” de Geus said of Rhines in the video.

In his remarks, Rhines returned the favor to those praising him, saying of de Geus and Tan, “We’ve had enjoyable interactions.

“I’m particularly gratified that my professor, Craig Barrett, came here for my roast,” he said. “He willingly paid for the beer at The Oasis in Menlo Park.”

On a more serious note, Rhines said of Barrett, “He was very critical to my success.”

Rhines recalled the days when chip designers used rubylith sheets to lay out integrated circuits. “We evolved an industry,” he commented. While IC design and layout has become highly automated with EDA software, system design in many industries remains in the rubylith era, Rhines said. He called for a movement to “automate system design the way we automated electronic design.”

The evening drew to a close with a spoof video depicting Rhines as not only a visionary leader in EDA, but also as a race-car mechanic, a sushi chef, and a hair stylist. A good time was had by all.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to feed forward and back to minimize the risk of expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST resident on the logic die connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
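To illustrate the difference between a flat binary record format like STDF and a structured, self-describing construct, here is a sketch of the kind of record that could carry test results plus provenance across the supply chain. The field names are invented for illustration and are not the actual RITdb schema:

```python
# Illustrative structured test record (NOT the RITdb schema): each
# result is self-describing, and provenance/security metadata travels
# with the data so fabless, foundry, and OSAT parties can interpret
# it without a shared proprietary database.
import json

record = {
    "entity": "die",
    "id": {"lot": "LOT1234", "wafer": 7, "x": 12, "y": 34},
    "provenance": {
        "producer": "osat-final-test",   # which supply-chain step
        "tester": "ATE-05",
        "timestamp": "2015-08-24T10:15:00Z",
        "signature": "sha256:...",       # security/provenance hook
    },
    "results": [
        {"test": "io_leakage", "value": 1.2e-9, "units": "A",
         "limits": {"hi": 5e-9}, "pass": True},
        {"test": "fmax", "value": 2.1e9, "units": "Hz",
         "limits": {"lo": 1.8e9}, "pass": True},
    ],
}
print(json.dumps(record, indent=2))
```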

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of tools for TSV metrology to the world. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bumps, Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield-loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said, “We’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for its Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process that both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.
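The geometric core of such an interface check is simple to picture: place each die in package coordinates, then verify every bump lands on its mating pad within tolerance. The sketch below is a toy version with invented coordinates and an assumed tolerance; sign-off tools like Calibre 3DSTACK of course operate on full layout databases:

```python
# Toy die-to-package interface check: apply the die's placement
# (offset + rotation) to its bump coordinates and flag any bump that
# misses its mating pad by more than an assumed tolerance. All
# coordinates are invented for illustration.
import math

def place(points, dx, dy, theta_deg):
    """Transform die-frame points into package coordinates."""
    t = math.radians(theta_deg)
    return [(dx + x * math.cos(t) - y * math.sin(t),
             dy + x * math.sin(t) + y * math.cos(t)) for x, y in points]

die_bumps = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]           # um
pads      = [(500.0, 500.0), (600.0, 500.0), (500.0, 600.0)]   # um
TOL = 2.0                                                      # um

# A 2-degree placement rotation error is enough to pull the outer
# bumps off their pads at this die size.
placed = place(die_bumps, dx=500.0, dy=500.0, theta_deg=2.0)
for i, ((bx, by), (px, py)) in enumerate(zip(placed, pads)):
    err = math.hypot(bx - px, by - py)
    status = "OK" if err <= TOL else "MISALIGNED"
    print(f"bump {i}: offset {err:.2f} um -> {status}")
```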

—E.K.

Time to “shift left” in chip design and verification, Synopsys founder says

Wednesday, March 4th, 2015

By Jeff Dorsch, contributing editor

The world is moving toward “Smart Everything,” according to Aart de Geus, founder, chairman, and co-CEO of Synopsys. “The door will open gradually, and then quickly,” he said in Tuesday’s keynote address at the Design and Verification Conference and Exhibition, or DVCon, in San Jose, Calif.

“The assisted brain is on the way,” de Geus told the standing-room-only audience. “This may be dreaming, but I don’t think so.”

Taking “Smart Design from Silicon to Software” as his official theme, the veteran executive urged attendees to “shift left” – in other words, “squeezing the schedule” to design, verify, debug, and manufacture semiconductors. “Schedules haven’t changed much,” de Geus said. The difference now is that the marketing department has as much influence in planning and scheduling a new product as the engineering department, he noted.

Chip designers also should “shift left” on semiconductor intellectual property, de Geus said. “IP reuse is the biggest change in 15 to 30 years,” he asserted. “Reuse leverages your innovation.”

After plugging the concepts of unified compilation and unified debugging architectures, de Geus touted the use of virtual prototypes in chip design. “Software guys are impatient with you,” he said. Synopsys, he noted, has created 400 million lines of software code.

Turning to the Internet of Things, de Geus said, “There are a lot of opportunities there.” The problem is “these things are full of cracks,” he added. There are significant engineering and security issues that must be addressed in networks of connected devices.

Developing the FinFET “was said to be impossible seven to eight years ago,” de Geus said. Nonetheless, the semiconductor industry was able to realize that advanced technology to move beyond the 28-nanometer process node, he noted. The future is likely to present similar challenges.

Blog review October 27, 2014

Monday, October 27th, 2014

Does your design’s interconnect have wide enough wires to withstand ESD? Frank Feng of Mentor Graphics writes in his blog that although applying DRC to check for ESD protection has been in use for a while, designers still struggle to perform this check, because a pure DRC approach can’t identify the direction of electrical current flow, which means the check can’t directly relate the width or length of a wire polygon to the current flowing through it.
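Feng's point is easy to see in miniature: a width rule only becomes meaningful once you know how much current each ESD path must carry. Here is a sketch with an assumed maximum current density and invented net names and numbers; real sign-off uses foundry-qualified limits and extracted current flows:

```python
# A width-only DRC rule can't tell how much current a wire carries.
# Given the ESD current each path must sink, the required width
# follows from a maximum allowed current density. All numbers are
# illustrative assumptions, not foundry rules.
J_MAX = 5e-3   # assumed max current density, A per um of wire width

paths = [
    # (net name, ESD current in A, drawn wire width in um)
    ("pad_to_clamp", 1.3, 300.0),
    ("clamp_to_vss", 1.3, 180.0),
    ("signal_spur",  0.1, 10.0),
]
for net, i_esd, width in paths:
    need = i_esd / J_MAX          # minimum width for this current
    verdict = "OK" if width >= need else f"TOO NARROW (need {need:.0f} um)"
    print(f"{net}: {width:.0f} um -> {verdict}")
# Same 180 um wire would pass a blanket width rule yet fail here,
# because the check depends on the current assigned to the net.
```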

Phil Garrou blogs that most of us know Nanium as a contract assembly house in Portugal that licensed the Infineon eWLB fan-out technology and supplies such packages on 300mm wafers. Nanium also has extensive volume-manufacturing experience in wire-bonded multi-chip memory packages, combining wafer-level redistribution layer (RDL) techniques with multiple-die stacking in a package.

Gabe Moretti says it is always a pleasure to talk to Dr. Lucio Lanza, and he took the opportunity of being in Silicon Valley to interview Lanza, who has just been awarded the 2014 Phil Kaufman Award. Dr. Lanza poses this challenge: “The capability of EDA tools will grow in relation to design complexity so that cost of design will remain constant relative to the number of transistors on a die.”

Are we at an inflection point with silicon scaling and homogeneous ICs? Bill Martin, President and VP of Engineering at E-System Design, thinks so. He lays out the case for considering a Moore’s Law 2.0, in which 3D integration becomes the key to continued scaling.

Congratulations to Applied Materials Executive Chairman Mike Splinter on receiving the Silicon Valley Education Foundation’s (SVEF) Pioneer Business Leader Award for driving change in business and education philanthropy by using his passion and influence to make a positive impact on people’s lives.

At the recent FD-SOI Forum in Shanghai, the IoT (Internet of Things) was the #1 topic in all the presentations. As Adele Hars reports, speakers included experts from Synopsys, ST, GF, Soitec, IBS, Synapse Design, VeriSilicon, Wave Semi and IBM.

Deeper Dive — Mentor Graphics Looks to the Future

Tuesday, October 14th, 2014

Mentor Graphics is a survivor.

Established in 1981, the electronic design automation software and services company, based in Wilsonville, Ore., was once part of the “DMV” triumvirate in EDA. That acronym stood for Daisy Systems, Mentor Graphics, and Valid Logic Systems. Daisy and Valid are long gone, supplanted by Cadence Design Systems and Synopsys. Mentor abides.

Walden C. (Wally) Rhines has been Mentor’s chairman and chief executive officer since 2000, and before that served as the company’s president and CEO for seven years. His 21 years at Mentor now match his 21 years at Texas Instruments, where he worked before joining Mentor.

For the fiscal year ended January 31, 2014, Mentor posted revenue of $1.156 billion and net income of $155.3 million. For the six months ended July 31, 2014, the company reported revenue of $512.4 million and net income of $11.6 million. System and software revenue accounted for nearly 64 percent of Mentor’s revenue in the past fiscal year, while service and support revenue represented 36 percent.

Like its main competitors, Cadence and Synopsys, Mentor Graphics is active in acquisitions. In late 2013, the company bought certain assets of Oasys Design Systems, the startup’s Oasys RealTime engine in particular. During fiscal 2014, Mentor acquired the assets of four privately-held companies for a total of $19.3 million. More recently, the company has acquired Berkeley Design Automation for nearly $47 million in cash, Nimbic, and XS Embedded.

The technical challenges of the semiconductor industry are the bread and butter of Mentor’s business, and it faces its own technical challenges in the nanoscale era of chip design and manufacturing. Mentor notes in its 10-K annual report, “Nanometer process geometries cause design challenges in the creation of ICs which are not present at larger geometries. As a result, nanometer process technologies, used to deliver the majority of today’s ICs, are the product of careful design and precision manufacturing. The increasing complexity and smaller size of designs have changed how those responsible for the physical layout of an IC design deliver their design to the IC manufacturer or foundry. In older technologies, this handoff was a relatively simple layout database check when the design went to manufacturing. Now it is a multi-step process where the layout database is checked and modified so the design can be manufactured with cost-effective yields of ICs.”

There has been a great deal of handwringing and naysaying about the industry’s progress to the 14/16-nanometer process node, along with wailing and gnashing of teeth about the slow progress of extreme-ultraviolet lithography, which was supposed to ease the production of 14nm or 16nm chips.

Joseph Sawicki, vice president and general manager of Mentor’s Design-to-Silicon Division, is having none of it.

Joe Sawicki

He recalls seeing a 1988 article about the impending doom of the chip business, faced with making IC features smaller than 1 micron. The submicron era didn’t destroy the semiconductor industry, of course. At the 130nm process node, there was serious discussion that it wouldn’t be necessary to progress to 90nm, which would be difficult or impossible to achieve, according to Sawicki. “Now, we’re hearing the same talk” in discussions about the forthcoming 10nm and 7nm process generations, he says.

In the past and at present, it’s necessary to maintain a spirit of “willful optimism,” Sawicki asserts. He points to Apple’s A8 processor, a custom chip inside the iPhone 6 and iPhone 6 Plus handsets, as an example of outstanding 20nm design that offers twice the density of its predecessors for Apple’s mobile devices.

What makes Sawicki optimistic about the current challenges is “this wonderful ecosystem, all the players, including EDA,” he says. “Scaling is not as easy,” he acknowledges. “It’s not nearly as bad as people are portraying it.” Mentor is working with such parties as imec, the University at Albany’s College of Nanoscale Science & Engineering, and the Semiconductor Research Corporation, according to Sawicki.

When it comes to fretful discussions of what will happen at 3nm and 5nm, Sawicki doesn’t see a reason to panic. “That’s three nodes out,” he notes. “Everything looks impossible.” Looking one node ahead, “we think we’re okay,” he adds.

The semiconductor industry, Sawicki says, has “a pretty clear path out there for the next six to 12 years. It really has to be willful optimism.”

Foundry, EDA partnership eases move to advanced process nodes

Monday, September 15th, 2014

By Dr. Lianfeng Yang, Vice President of Marketing, ProPlus Design Solutions, Inc., San Jose, Calif.

Partnerships are the lifeblood of the semiconductor industry, and when moving to new advanced nodes, industry trends show closer partnerships and deeper collaborations between foundries, EDA vendors and design companies to ease the transition.

It’s fitting, then, for me to pay homage in this blog post to a successful and long-term partnership between a foundry and an EDA tool supplier.

A leading semiconductor foundry and an EDA vendor with design-for-yield (DFY) solutions have enjoyed a long-term partnership. Recently, they worked together to leverage DFY technologies for process technology development and design flow enhancement. The goals were to improve SRAM yield and provide faster turnaround of a new process platform development.

The foundry used the EDA firm’s high-sigma DFY solution to optimize its SRAM yield during 28nm process development. Early this year, it announced 28nm readiness for multi-project wafer (MPW) customers. One of the reasons it was able to release the 28nm process with acceptable SRAM yield in a short time was a new methodology for SRAM yield ramping that deployed a DFY engine.

During advanced technology development, the time spent on SRAM yield ramping is significant because statistical process variation, particularly local variation between two identically drawn neighboring devices (sometimes called mismatch), limits SRAM parametric yield. The impact of local process variation increases when moving to smaller CMOS technology nodes.

In the meantime, supply voltage is reduced, so operating regions are smaller. The difficulty of achieving high yield for SRAM is greater because smaller nodes require higher SRAM density. Such challenges require very high sigma robustness, i.e., high SRAM bitcell yield. Statistically, the analysis for the SRAM bitcell at 28nm needs to be at around 6σ, while FinFET technology at 16/14nm sets even higher sigma requirements for SRAM bitcell yield.
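The 6σ figure follows from simple statistics: a megabit-scale array only yields if each bitcell almost never fails. A short sketch, with an assumed array size and yield target, recovers the number:

```python
# Where the ~6 sigma bitcell requirement comes from: for an array of
# N bitcells to hit a target array yield, the per-cell failure
# probability must be tiny, and the equivalent one-sided Gaussian
# sigma follows from the inverse normal CDF. Array size and yield
# target are illustrative assumptions.
from scipy.stats import norm

N = 10e6           # bitcells in the array (10 Mb, assumed)
Y_target = 0.99    # target array-level parametric yield (assumed)

p_cell = 1.0 - Y_target ** (1.0 / N)   # allowed per-cell fail prob
sigma = -norm.ppf(p_cell)              # equivalent one-sided sigma
print(f"per-cell fail prob: {p_cell:.2e} -> {sigma:.2f} sigma")  # ~6.0
```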

During technology development, foundry engineers improve the process to solve defect-related yield issues first. Design-for-manufacturing methodologies can be used to eliminate some systematic process variations. However, many random process variations, such as random dopant fluctuations (RDF) and line edge and width roughness (LER, LWR), are fundamental limiting factors for parametric yield, particularly for SRAM.

Traditionally, foundry engineers rely on experience and know-how from previous node development efforts to analyze and decide how to run different process splits for different process improvement scenarios to optimize SRAM yield. These efforts are often time-consuming and less effective at advanced nodes like 28nm because the optimization margin is much smaller.

The fab’s new SRAM yielding flow used a high-sigma statistical simulator as the core engine. It provided fast and accurate 3-7+σ yield prediction and optimization functions for memory, logic and analog circuit designs. During process development, the tool proved its technology advantages in both accuracy and performance, and was validated by silicon in several rounds of tape-outs throughout the development process. It required no additional tuning of the technology or special settings for tool usage, so even process engineers who are not familiar with EDA tools could run it and get reliable results to guide their process tuning for SRAM yield improvement.
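Why a dedicated high-sigma engine matters: plain Monte Carlo would need billions of SPICE runs to observe even one failure at 6σ. One common technique, shown below on a stand-in one-parameter "circuit," is importance sampling: draw samples near the failure boundary and reweight them by the likelihood ratio. This is a generic illustration of the technique, not ProPlus's proprietary algorithm:

```python
# Importance sampling for a rare (~1e-9) failure probability: sample
# from a Gaussian shifted to the failure boundary, then reweight each
# sample by p(x)/q(x) to recover the probability under the true
# N(0,1) distribution. The threshold stands in for a circuit failing
# at a 6-sigma parameter excursion.
import numpy as np

rng = np.random.default_rng(0)
FAIL_AT = 6.0        # failure boundary, in sigma (stand-in circuit)
N = 100_000          # samples; hopeless for plain MC at 6 sigma

x = rng.normal(loc=FAIL_AT, size=N)     # sample around the boundary
fails = x > FAIL_AT
# likelihood ratio of true N(0,1) pdf vs. shifted N(FAIL_AT,1) pdf
w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - FAIL_AT)**2)
p_est = np.mean(fails * w)
print(f"importance-sampled fail prob ~ {p_est:.2e}")   # ~1e-9
```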

The flow was able to predict SRAM yield for different process and operating conditions. It simulated SRAM yield improvement trends and provided process improvement directions and guidelines within hours. A methodology such as this becomes necessary for advanced nodes where the remaining optimization margin is small. A simulation-based methodology can run through all possible combinations that process engineers want to explore, providing better yield results and faster yield ramping. Comparatively, the traditional exploration based on experience and a large number of process splits is limited and may not yield optimum results. It also is time-consuming, as the engineer would need to wait for tape-out results and then run another set of trials, which could consume months.

The flow saved months ramping up SRAM yield for the 28nm process node. It reduced iteration time and saved wafer cost. Process engineers now only need to fabricate selective wafers to validate simulation results. They know which direction is optimal and have guidelines to run process splits that will help them identify the best conditions and converge on the best yield. They gained greater certainty as they saw more simulation-to-silicon correlation data as the project progressed.

A well-established methodology and flow brings value to process engineers because they can rely on DFY high sigma simulations to lay the foundation for their process improvement strategies to reach certain SRAM yield targets. They can run selective process splits to verify the results for lower wafer costs, fewer process tuning iterations and faster time to market.

Overall, this is a highly successful and mutually beneficial partnership, and the value of DFY to process technology development is obvious. The same DFY methodology can be used by memory designers, as SRAM yield is their primary target as well. The only difference is that they tune design variables using the same methodology, flow and tool solutions.

It’s easy to see the value of a tight collaboration between the foundry, EDA vendor and design companies and why it will be a trend on top of the “foundry-fabless” business model.

About Dr. Lianfeng Yang

Lianfeng Yang, ProPlus Design Solutions, Inc.

Dr. Lianfeng Yang currently serves as the Vice President of Marketing at ProPlus Design Solutions, Inc. Prior to co-founding ProPlus, he was a senior product engineer at Cadence Design Systems leading the product engineering and technical support effort for the modeling product line in Asia. Dr. Yang has over 40 publications and holds a Ph.D. degree in Electrical Engineering from the University of Glasgow in the U.K.
