
Posts Tagged ‘EDA’


Deep Learning Could Boost Yields, Increase Revenues

Thursday, March 23rd, 2017


By Dave Lammers, Contributing Editor

While it is still early days for deep-learning techniques, the semiconductor industry may benefit from the advances in neural networks, according to analysts and industry executives.

First, the design and manufacturing of advanced ICs can become more efficient by deploying neural networks trained to analyze data, though labelling and classifying that data remains a major challenge. Also, demand will be spurred by the inference engines used in smartphones, autos, drones, robots and other systems, while the processors needed to train neural networks will re-energize demand for high-performance systems.

Abel Brown, senior systems architect at Nvidia, said until the 2010-2012 time frame, neural networks “didn’t have enough data.” Then, a “big bang” occurred when computing power multiplied and very large labelled data sets grew at Amazon, Google, and elsewhere. The trifecta was complete with advances in neural network techniques for image, video, and real-time voice recognition, among others.

During the training process, Brown noted, neural networks “figure out the important parts of the data” and then “converge to a set of significant features and parameters.”

Chris Rowen, who recently started Cognite Ventures to advise deep-learning startups, said he is “becoming aware of a lot more interest from the EDA industry” in deep learning techniques, adding that “problems in manufacturing also are very suitable” to the approach.

Chris Rowen, Cognite Ventures

For the semiconductor industry, Rowen said, deep-learning techniques are akin to “a shiny new hammer” that companies are still trying to figure out how to put to good use. But since yield questions are so important, and the causes of defects are often so hard to pinpoint, deep learning is an attractive approach to semiconductor companies.

“When you have masses of data, and you know what the outcome is but have no clear idea of what the causality is, (deep learning) can bring a complex model of causality that is very hard to do with manual methods,” said Rowen, an IEEE fellow who earlier was the CEO of Tensilica Inc.

The magic of deep learning, Rowen said, is that the learning process is highly automated and “doesn’t require a fab expert to look at the particular defect patterns.”

“It really is a rather brute force, naïve method. You don’t really know what the constituent patterns are that lead to these particular failures. But if you have enough examples that relate inputs to outputs, to defects or to failures, then you can use deep learning.”
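As a rough illustration of the input-to-outcome mapping Rowen describes, the sketch below trains a small classifier on labeled per-die measurements; the feature set, data, and model choice are hypothetical and synthetic, not tied to any particular fab's flow.

```python
# Illustrative sketch only: learn a mapping from per-die measurements to a
# pass/fail outcome, in the spirit of the "inputs to outputs" approach above.
# All features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical per-die feature vectors (e.g. overlay error, CD bias, film
# thickness, leakage readings) for 5000 dice.
X = rng.normal(size=(5000, 12))
# Hypothetical pass/fail labels that depend on a couple of the features.
y = ((0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.3 * rng.normal(size=5000)) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small fully connected network stands in for the deep models described
# above; image-like defect data would typically use convolutional networks.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))
```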

Juan Rey, senior director of engineering at Mentor Graphics, said Mentor engineers have started investigating deep-learning techniques which could improve models of the lithography process steps, a complex issue that Rey said “is an area where deep neural networks and machine learning seem to be able to help.”

Juan Rey, Mentor Graphics

In the lithography process “we need to create an approximate model of what needs to be analyzed. For example, for photolithography specifically, there is the transition between dark and clear areas, where the slope of intensity for that transition zone plays a very clear role in the physics of the problem being solved. The problem tends to be that the design, the exact formulation, cannot be used in every space, and we are limited by the computational resources. We need to rely on a few discrete measurements, perhaps a few tens of thousands, maybe more, but it still is a discrete data set, and we don’t know if that is enough to cover all the cases when we model the full chip,” he said.

“Where we see an opportunity for deep learning is to try to do an interpretation for that problem, given that an exhaustive analysis is impossible. Using these new types of algorithms, we may be able to move from a problem that is continuous to a problem with a discrete data set.”
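A minimal sketch of the idea Rey describes, learning a continuous response from a limited set of discrete samples, is shown below; the "true" curve, sample count, and model are invented purely for illustration and do not represent Mentor's algorithms.

```python
# Conceptual sketch: approximate a continuous response from sparse discrete
# samples, standing in for the discrete litho measurements described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

def true_response(x):
    # Invented stand-in for a continuous litho quantity, e.g. the intensity
    # slope at a dark/clear transition as a function of some layout parameter.
    return np.sin(3 * x) * np.exp(-0.5 * x)

rng = np.random.default_rng(1)
x_samples = rng.uniform(0, 3, size=(200, 1))      # sparse "measurements"
y_samples = true_response(x_samples).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=1)
model.fit(x_samples, y_samples)

x_query = np.linspace(0, 3, 7).reshape(-1, 1)     # unmeasured locations
for xq, yp in zip(x_query.ravel(), model.predict(x_query)):
    print(f"x={xq:.2f}  predicted={yp:+.3f}  true={true_response(xq):+.3f}")
```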

Mentor seeks to cooperate with academia and with research consortia such as IMEC. “We want to find the right research projects to sponsor between our research teams and academic teams. We hope that we can get better results with these new types of algorithms, and in the longer term with the new hardware that is being developed,” Rey said.

Many companies are developing specialized processors to run machine-learning algorithms, including non-Von Neumann, asynchronous architectures, which could offer several orders of magnitude less power consumption. “We are paying a lot of attention to the research, and would like to use some of these chips to solve some of the problems that the industry has, problems that are not very well served right now,” Rey said.

While power savings can still be gained with synchronous architectures, Rey said brain-inspired projects such as Qualcomm’s Zeroth processor, or the use of memristors being developed at H-P Labs, may be able to deliver significant power savings. “These are all worth paying attention to. It is my feeling that different architectures may be needed to deal with unstructured data. Otherwise, total power consumption is going through the roof. For unstructured data, these types of problem can be dealt with much better with neuromorphic computers.”

The use of deep-learning techniques is moving beyond the biggest players, such as Google and Amazon. Just as various system integrators package the open-source modules of the Hadoop database technology into more-secure offerings, several system integrators are offering workstations packaged with the appropriate deep-learning tools.

Deep learning has evolved to play a role in speech recognition used in Amazon’s Echo. Source: Amazon

Robert Stober, director of systems engineering at Bright Computing, bundles AI software and tools with hardware based on Nvidia or Intel processors. “Our mission statement is to deploy deep learning packages, infrastructure, and clusters, so there is no more digging around for weeks and weeks by your expensive data scientists,” Stober said.

Deep learning is driving the need for new types of processors as well as high-speed interconnects. Tim Miller, senior vice president at One Stop Systems, said that training the neural networks used in deep learning is an ideal task for GPUs because they can perform parallel calculations, sharply reducing the training time. However, GPUs often are large and require cooling, which most systems are not equipped to handle.

David Kanter, principal consultant at Real World Technologies, said “as I look at what’s driving the industry, it’s about convolutional neural networks, and using general-purpose hardware to do this is not the most efficient thing.”

However, research efforts focused on using new materials or futuristic architectures may over-complicate the situation for data scientists outside of the research arena. At the recent International Electron Devices Meeting (IEDM), several research managers discussed using spin-transfer torque magnetic RAM (STT-MRAM) or resistive RAM (ReRAM) to create dense, power-efficient networks of artificial neurons.

While those efforts are worthwhile from a research standpoint, Kanter said “when proving a new technology, you want to minimize the situation, and if you change the software architecture of neural networks, that is asking a lot of programmers, to adopt a different programming method.”

While Nvidia, Intel, and others battle it out at the high end for the processors used in training the neural network, the inference engines which use the results of that training must be less expensive and consume far less power.

“Today, most inference processing is done on general-purpose CPUs. It does not require a GPU. Most people I know at Google do not use a GPU. Since the (inference processing) workload looks like the processing of DSP algorithms, it can be done with special-purpose cores from Tensilica (now part of Cadence) or ARC (now part of Synopsys). That is way better than any GPU,” Kanter said.

Rowen was asked if the end-node inference engine will blossom into large volumes. “I would emphatically say, yes, powerful inference engines will be widely deployed” in markets such as imaging, voice processing, language recognition, and modeling.

“There will be some opportunity for stand-alone inference engines, but most IEs will be part of a larger system. Inference doesn’t necessarily need hundreds of square millimeters of silicon. But it will be a major sub-system, widely deployed in a range of SoC platforms,” Rowen said.

Kanter noted that Nvidia has a powerful inference engine processor that has gained traction in early self-driving cars, and Google has developed an ASIC, the Tensor Processing Unit, to run its TensorFlow deep-learning framework.

In many other markets, what is needed are very low-power inference engines that can be used in security cameras, voice processors, drones, and similar products. Nvidia CEO Jen-Hsun Huang, in a blog post early this year, said that deep learning will spur demand for billions of devices deployed in drones, portable instruments, intelligent cameras, and autonomous vehicles.

“Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks,” Huang wrote. He envisions a future in which drones will autonomously find an item in a warehouse, for example, while portable medical instruments will use artificial intelligence to diagnose blood samples on-site.

In the long run, that “billions” vision may be correct, Kanter said, adding that the Nvidia CEO, an adept promoter as well as an astute company leader, may be wearing his salesman hat a bit.

“Ten years from now, inference processing will be widespread, and many SoCs will have an inference accelerator on board,” Kanter said.

Mentor’s Pattern Matching Tackles IC Verification and Manufacturing Problems

Sunday, June 5th, 2016


Mentor Graphics Corporation announced that customers and ecosystem partners are expanding their use of the Calibre Pattern Matching solution to overcome previously intractable IC verification and manufacturing problems. The solution is integrated into the Mentor® Calibre nmPlatform toolset, creating a synergy that drives these new applications at IC design companies and foundries across multiple process nodes.

Calibre Pattern Matching technology supplements multi-operational text-based design rules with an automated visual geometry capture and compare process. This visual approach is powerful both in its ability to capture complex pattern relationships and in its ability to work within mixed tool flows, making it much easier for Mentor customers to create new applications to solve difficult problems. Because it is integrated into the Calibre nmPlatform toolset, the Calibre Pattern Matching functionality can leverage the industry-leading performance and accuracy of all Calibre tools and flows to create new opportunities for design-rule checking (DRC), reliability checking, DFM, yield enhancement, and failure analysis.
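As a loose, toy-level illustration of geometric pattern capture and compare (not the Calibre Pattern Matching implementation), a known pattern of interest can be rasterized onto a small grid and the layout scanned for exact occurrences:

```python
# Toy pattern capture-and-compare on a rasterized layout grid. Real pattern
# matching works on polygon data and handles rotations/reflections, tolerance
# bands, and match markers; this only shows the basic capture/compare idea.
import numpy as np

def find_pattern(layout, pattern):
    """Return (row, col) offsets where `pattern` occurs exactly in `layout`."""
    lh, lw = layout.shape
    ph, pw = pattern.shape
    hits = []
    for r in range(lh - ph + 1):
        for c in range(lw - pw + 1):
            if np.array_equal(layout[r:r + ph, c:c + pw], pattern):
                hits.append((r, c))
    return hits

# Hypothetical rasterized layout: 1 = drawn shape, 0 = empty.
layout = np.zeros((8, 8), dtype=int)
layout[2:4, 2:5] = 1                       # one 2x3 rectangle of metal
layout[5:7, 1:4] = 1                       # another 2x3 rectangle
pattern = np.array([[1, 1, 1],
                    [1, 1, 1]])            # the captured "pattern of interest"
print(find_pattern(layout, pattern))       # -> [(2, 2), (5, 1)]
```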

“Our customers count on eSilicon’s design services, IP, and ecosystem management to help them succeed in delivering market-leading ICs,” said Deepak Sabharwal, general manager, IP products & services at eSilicon. “We use Calibre Pattern Matching to create and apply a Calibre-based yield-detractor design kit that helps identify and eliminate design patterns that impact production ramp-up time.”

Since its introduction, use models for Calibre Pattern Matching technology have rapidly expanded, solving problems that were previously too complex or time-consuming to be implemented. New use cases include the following:

  • Physical verification of IC designs with curved structures—for analog, high-power, radio frequency (RF) and microelectromechanical (MEMS) circuitry—is extremely difficult with products designed to work with rectilinear design data. Calibre customers are automating that verification using a combination of Calibre Pattern Matching technology and other Calibre tools for much greater efficiency and accuracy, especially when compared to manual techniques.
  • Calibre Pattern Matching technology can be used to quickly locate and remove design patterns that are known to be, or suspected of being, difficult to manufacture (“yield detractors”). Foundries or design companies create libraries of yield-detractor patterns that are specific to a process node or a particular design methodology. Samsung Foundry used this approach in its Closed-Loop DFM solution to help its customers ramp to volume faster and reduce process-design variability.
  • Some customers use Calibre Pattern Matching technology with Calibre Auto-Waivers™ functionality to define a specific context for waiving a DRC violation. This enhancement allows for automatic filtering of those violations for significant time savings and improved design quality.

“To help our customers create manufacturing-ready designs, we use Calibre Pattern Matching to create and use a yield-detractor database to fix most of the litho hotspots at the block level. Then we perform fast signoff DFM litho checking at the chip level using an integrated solution with Calibre Pattern Matching and Calibre LFD,” said Min-Hwa Chi, senior vice president, SMIC. “By offering a solution for manufacturability robustness that is built on the Calibre platform, we are seeing ready customer adoption of SMIC’s DFM solution.”

With the Calibre Pattern Matching tool, design companies can now optimize their physical verification checking to their unique design styles. The tool is easy to adopt because it doesn’t rely on expertise in scripting languages. Instead, any engineer can readily define a visual pattern that captures the designer’s expertise in the critical geometries and context for that configuration.

“With the growing adoption of Calibre Pattern Matching technology, Mentor continues to help our customers address increasing design complexity, regardless of the process node they are targeting,” said Joe Sawicki, vice president and general manager of the Design-to-Silicon division at Mentor Graphics. “By incorporating the Calibre Pattern Matching tool, the Calibre platform becomes an even more valuable bridge between design and manufacturing for the ecosystem.”

At the 2016 Design Automation Conference, Mentor has a Calibre Pattern Matching presentation on Tuesday, June 7 at 3PM in the Mentor booth #949. Register for the session using the registration form.

https://www.mentor.com/events/design-automation-conference/schedule

Leti’s CoolCube 3D Transistor Stacking Improves with Qualcomm Help

Wednesday, April 27th, 2016

By Ed Korczynski, Sr. Technical Editor

As previously covered by Solid State Technology, CEA-Leti in France has been developing monolithic transistor stacking, called “CoolCube” (TM), based on laser re-crystallization of active silicon in upper layers. Leading mobile chip supplier Qualcomm has been working with Leti on CoolCube R&D since late 2013 and, based on preliminary results, has opted to continue collaborating with the goal of building a complete ecosystem that takes the technology from design to fabrication.

“The Qualcomm Technologies and Leti teams have demonstrated the potential of this technology for designing and fabricating high-density and high-performance chips for mobile devices,” said Karim Arabi, vice president of engineering, Qualcomm Technologies, Inc. “We are optimistic that this technology could address some of the technology scaling issues and this is why we are extending our collaboration with Leti.” As part of the collaboration, Qualcomm Technologies and Leti are sharing the technology through flexible, multi-party collaboration programs to accelerate adoption.

Olivier Faynot, micro-electronic component section manager of CEA-Leti, in an exclusive interview with Solid State Technology and SemiMD, explained, “Today we have a strong focus on CMOS over CMOS integration, and this is the primary integration that we are pushing. What we see today is that the integration of NMOS over PMOS is interesting and suitable for new material incorporation such as III-V and germanium.”

Table: Critical thermal budget steps summary in a planar FDSOI integration and CoolCube process for top FET in 3DVLSI. (Source: VLSI Symposium 2015)

The Table shows that CMOS over CMOS integration has met transistor performance goals with low-temperature processes, such that the top transistors have at least 90% of the performance compared to the bottom. Faynot says that recent results for transistors are meeting specification, while there is still work to be done on inter-tier metal connections. For advanced ICs there is a lot of interconnect routing congestion around the contacts and the metal-1 level, so inter-tier connection (formerly termed the more generic “local interconnect”) levels are needed to route some gates at the bottom level for connection to the top level.

“The main focus now is on the thermal budget for the integration of the inter-tier level,” explained Faynot. “To do this, we are not just working on the processing but also working closely with the designers. For example, depending on the material chosen for the metal inter-tier there will be different limits on the metal link lengths.” Tungsten is relatively more stable than copper, but with higher electrical resistance for inherently lower limits on line lengths. Additional details on such process-design co-dependencies will be disclosed during the 2016 VLSI Technology Symposium, chaired by Raj Jammy.
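The trade-off Faynot describes can be sanity-checked with a back-of-the-envelope resistance estimate; the bulk resistivities and line dimensions below are illustrative assumptions only (thin-film resistivities are higher in practice).

```python
# Rough estimate of inter-tier link resistance for tungsten vs. copper.
# Bulk resistivities in ohm-m; real damascene/thin-film lines are worse.
RHO = {"Cu": 1.7e-8, "W": 5.6e-8}

def line_resistance(metal, length_um, width_nm, thickness_nm):
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return RHO[metal] * (length_um * 1e-6) / area_m2   # ohms

# Hypothetical 50 nm x 50 nm cross-section, 10 um inter-tier link:
for m in ("Cu", "W"):
    print(m, round(line_resistance(m, 10, 50, 50), 1), "ohm")
# Tungsten's roughly 3x higher resistivity shortens the usable link length
# for the same RC/IR budget, which is the design co-dependency Faynot notes.
```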

When the industry decides to integrate III-V and Ge alternate-channel materials in CMOS, the different processing conditions for each should make NMOS over PMOS CoolCube a relatively easy performance extension. “Three-fives and germanium are basically materials with low thermal budgets, so they would be most compatible with CoolCube processing,” reminded Faynot. “To me, this kind of technology would be very interesting for mobile applications, because it would achieve a circuit where the length of the wires would be shortened. We would expect to save in area, and have less of a trade-off between power-consumption and speed.”

“This is a new wave that CoolCube is creating and it has been possible thanks to the interest and support of Qualcomm Technologies, which is pushing the technological development in a good direction and sending a strong signal to the microelectronics community,” said Leti CEO Marie Semeria. “Together, we aim to build a complete ecosystem with foundries, equipment suppliers, and EDA and design houses to assemble all the pieces of the puzzle and move the technology into the product-qualification phase.”

—E.K.

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that the testability has also been designed to be functional structurally in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening uses very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold, digital becomes analog; even if the usual concepts still work for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed with three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and the marshaling of the data that devices generate across multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the 1st generation of IoT devices likely include a wide variety of solutions for different market segments such as industrial vs. retail vs. consumer, or will most devices use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common.

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn cycle in Consumer is higher/faster compared to Enterprise. Today’s wearables or smart-phones are good reference examples. This will, however, vary by the type of “Thing” and sub-segment. For example, you expect to have your smart refrigerator for a longer time period than smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, whereas consumer home use has longer cycles, closer to industrial market requirements. We believe that the lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding the optimum for a given criterion across what are, for practical purposes, infinite tradeoffs.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at the IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) communication toward broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world: bridges sensing and reporting when repairs are needed, parts automatically reporting where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low cost and small size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

*  Commercial IC HVM – GLOBALFOUNDRIES,

*  Electronic Design Automation (EDA) – Cadence and Mentor Graphics,

*  IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use 45nm/65nm nodes, which are “node minus 3” (N-3) compared to the sub-20nm-node chips in HVM. Five years from now, when the bleeding edge will use 10nm-node technology, will IoT chips still be at N-3 on the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50DMIPs. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecast to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm down to 40nm that can also support the radio and protocol MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000DMIPs) and run high-order operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on high-resolution displays. Connectivity will include BLE, WiFi and cellular with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with adequate digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memories, which today is centered on 55-40nm and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, an A/D converter, digital processing, and an RF connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on this product need to master the analog, digital, MEMS, and RF domains. Often, these four domains require different experience and knowledge, and sometimes design in these domains is accomplished by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.
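Conceptually, retargeting instrument-level register operations into pin-level scan data means composing the access network's segments into one shift sequence. The sketch below is a greatly simplified stand-in for what IEEE 1687 (ICL/PDL) tooling automates; the segment names, widths, and chain ordering are invented for illustration.

```python
# Greatly simplified sketch of merging two instruments' register writes into a
# single scan-chain shift, in the spirit of IEEE 1687 retargeting. Real IJTAG
# networks are described in ICL, operations in PDL, and include segment
# insertion bits (SIBs); none of that detail is modeled here.
segments = [("sensor_a_cfg", 8), ("sensor_b_cfg", 8), ("bypass", 1)]  # chain order

def retarget(writes):
    """Build the bit sequence to shift for a dict of {segment_name: value}."""
    bits = []
    for name, width in segments:
        value = writes.get(name, 0)   # segments not addressed shift zeros (toy model)
        bits.extend((value >> i) & 1 for i in range(width))
    return bits

# Merge operations for two embedded sensors into one optimized shift sequence.
shift_data = retarget({"sensor_a_cfg": 0x3C, "sensor_b_cfg": 0x05})
print(len(shift_data), "bits:", shift_data)
```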

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time as short run volumes characteristic of IoT demands are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State of the art MEMS modeling requires automatic generation of behavioral models based on the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations for co-simulation of the MEMS device in the context of the IoT system.
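Reduced-order modeling itself can be sketched independently of any MEMS tool: collect high-dimensional solver snapshots, extract a small orthogonal basis (proper orthogonal decomposition via the SVD), and project the system onto it. The snapshot data below is synthetic, and the sketch stops short of the Verilog-A export step described above.

```python
# Conceptual proper-orthogonal-decomposition (POD) sketch of reduced-order
# modeling: compress many FEA-like snapshots into a handful of basis modes.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snapshots = 2000, 50                # FEA degrees of freedom x time steps

# Synthetic snapshots dominated by 3 spatial modes plus small noise.
modes = rng.normal(size=(n_dof, 3))
weights = rng.normal(size=(3, n_snapshots))
snapshots = modes @ weights + 0.01 * rng.normal(size=(n_dof, n_snapshots))

# POD basis from the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1  # modes needed for 99.9% of energy
basis = U[:, :r]

# Reduced coordinates: each 2000-DOF snapshot becomes an r-vector, which is
# what a behavioral (e.g. Verilog-A) model would evolve in time.
reduced = basis.T @ snapshots
print("reduced order:", r, "reconstruction error:",
      np.linalg.norm(snapshots - basis @ reduced) / np.linalg.norm(snapshots))
```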

Wally Rhines of Mentor Graphics Gets Phil Kaufman Award

Monday, November 16th, 2015

By Jeff Dorsch, Contributing Editor

There was a celebrity roast on 4th Street in San Jose, Calif., on Thursday night.

The occasion was the presentation of the annual Phil Kaufman Award to Wally Rhines, chairman and chief executive officer of Mentor Graphics, for his contributions in the field of electronic design automation. Dr. Rhines has served as Mentor’s CEO since 1993 and as chairman of the EDA software and services company since 2000.

The Phil Kaufman Award is presented by the Electronic Design Automation Consortium (EDAC) and the IEEE Council on Electronic Design Automation (CEDA). It honors the memory of Philip A. Kaufman, the EDA industry pioneer, electronics engineer, and entrepreneur, who died in 1992.

Rhines received some gentle ribbing from Craig Barrett, the former Intel chairman and CEO, who once was a Stanford University professor and served on the advisory panel for Rhines’ doctoral thesis.

Barrett said of Rhines, who was a top chip executive at Texas Instruments prior to joining Mentor, “We competed for about 20 years, which is probably why he went to Mentor Graphics.”

He added, “His hairline is receding faster than mine.”

The retired Intel executive later said Rhines’ career has been “fantastic,” adding, “He certainly exceeded all our expectations. You done good, man. Keep it up.”

A video shown before the formal presentation offered Barrett and other top executives showering accolades on Rhines, who turned 69 years old on Wednesday, November 11. Among those praising Rhines were Aart de Geus, chairman and co-CEO of Synopsys, and Lip-Bu Tan, president and CEO of Cadence Design Systems – business rivals and friends.

“He’s actually a cool cat,” de Geus said of Rhines in the video.

In his remarks, Rhines returned the favor to those praising him, saying of de Geus and Tan, “We’ve had enjoyable interactions.

“I’m particularly gratified that my professor, Craig Barrett, came here for my roast,” he said. “He willingly paid for the beer at The Oasis in Menlo Park.”

On a more serious note, Rhines said of Barrett, “He was very critical to my success.”

Rhines recalled the days when chip designers used rubylith sheets to lay out integrated circuits. “We evolved an industry,” he commented. While IC design and layout has become highly automated with EDA software, system design in many industries remains in the rubylith era, Rhines said. He called for a movement to “automate system design the way we automated electronic design.”

The evening drew to a close with a spoof video depicting Rhines as not only a visionary leader in EDA, but also as a race-car mechanic, a sushi chef, and a hair stylist. A good time was had by all.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-On-Package (POP) and 3D/2.5D stacks, where complex mechanical forces, primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages, influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to feed forward and feed back to minimize the risk to expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST resident on the logic die connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
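The kind of end-to-end structured record Carulli describes can be pictured with a minimal data model; the field names below are hypothetical illustrations only and are not the RITdb schema.

```python
# Hypothetical structured test-record sketch, to illustrate carrying consistent,
# provenance-tagged test meta-data across fabless/foundry/OSAT/OEM boundaries.
# Field names are invented; this is NOT the actual RITdb format.
from dataclasses import dataclass, asdict
import json

@dataclass
class TestRecord:
    device_id: str      # traceable die/package identity
    insertion: str      # e.g. "wafer_sort", "final_test", "system_level_test"
    site: str           # which company/factory produced the data (provenance)
    test_name: str
    result: float
    units: str
    pass_fail: bool
    timestamp: str      # ISO-8601

rec = TestRecord("LOT123-W07-X12Y34", "final_test", "OSAT-A",
                 "vdd_leakage", 1.8e-6, "A", True, "2015-08-24T10:15:00Z")
print(json.dumps(asdict(rec), indent=2))    # a transport-friendly serialization
```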

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of tools for TSV metrology to the world. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bump, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield-loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said, “We’ve had a lot of pull to take our TSV metrology tool and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for their Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.
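A die-to-die interface check of the sort described above reduces to transforming each die's bump coordinates into package coordinates, given placement and rotation, and verifying that mating connections line up within tolerance. The coordinates and tolerance in the sketch below are invented, and it is not a Calibre 3DSTACK rule deck.

```python
# Toy die-to-die interface check: place a die in package coordinates
# (offset + rotation) and verify mating bump/pad pairs align within tolerance.
# Real assembly design kits encode far more (pad sizes, pillar heights,
# die order, RDL connectivity, etc.); values here are illustrative only.
import math

def place(points, dx, dy, rot_deg):
    """Transform die-local (x, y) bump points into package coordinates."""
    c, s = math.cos(math.radians(rot_deg)), math.sin(math.radians(rot_deg))
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

die_top_bumps = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]          # um, die-local
die_bot_pads = [(500.0, 500.0), (600.0, 500.0), (500.0, 600.0)]   # um, package

placed = place(die_top_bumps, dx=500.0, dy=500.0, rot_deg=0.0)
TOL_UM = 2.0
for (bx, by), (px, py) in zip(placed, die_bot_pads):
    miss = math.hypot(bx - px, by - py)
    print(f"bump ({bx:.1f},{by:.1f}) -> pad ({px:.1f},{py:.1f}): "
          f"{'OK' if miss <= TOL_UM else 'MISALIGNED'} ({miss:.2f} um)")
```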

—E.K.

Time to “shift left” in chip design and verification, Synopsys founder says

Wednesday, March 4th, 2015

By Jeff Dorsch, contributing editor

The world is moving toward “Smart Everything,” according to Aart de Geus, founder, chairman, and co-CEO of Synopsys. “The door will open gradually, and then quickly,” he said in Tuesday’s keynote address at the Design and Verification Conference and Exhibition, or DVCon, in San Jose, Calif.

“The assisted brain is on the way,” de Geus told the standing-room-only audience. “This may be dreaming, but I don’t think so.”

Taking “Smart Design from Silicon to Software” as his official theme, the veteran executive urged attendees to “shift left” – in other words, “squeezing the schedule” to design, verify, debug, and manufacture semiconductors. “Schedules haven’t changed much,” de Geus said. The difference now is that the marketing department has as much influence in planning and scheduling a new product as the engineering department, he noted.

Chip designers also should “shift left” on semiconductor intellectual property, de Geus said. “IP reuse is the biggest change in 15 to 30 years,” he asserted. “Reuse leverages your innovation.”

After plugging the concepts of unified compilation and unified debugging architectures, de Geus touted the use of virtual prototypes in chip design. “Software guys are impatient with you,” he said. Synopsys, he noted, has created 400 million lines of software code.

Turning to the Internet of Things, de Geus said, “There are a lot of opportunities there.” The problem is “these things are full of cracks,” he added. There are significant engineering and security issues that must be addressed in networks of connected devices.

Developing the FinFET “was said to be impossible seven to eight years ago,” de Geus said. Nonetheless, the semiconductor industry was able to realize that advanced technology to move beyond the 28-nanometer process node, he noted. The future is likely to present similar challenges.

Blog review October 27, 2014

Monday, October 27th, 2014

Does your design’s interconnect have high enough wire width to withstand ESD? Frank Feng of Mentor Graphics writes in his blog that although applying DRC to check for ESD protection has been in use for a while, designers still struggle to perform this check, because a pure DRC approach can’t identify the direction of an electrical current flow, which means the check can’t directly differentiate the width or length of a wire polygon against a current flow.
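Once the current actually flowing through a wire segment is known (the piece a purely geometric DRC check cannot see), the minimum ESD-safe width follows from simple arithmetic; the peak current and current-density limit below are illustrative placeholders, not foundry values.

```python
# Back-of-the-envelope ESD width check: minimum width from a current-density
# limit, given the current through the segment. Numbers are illustrative only.
I_PEAK_A = 1.3          # hypothetical peak ESD current through this segment
J_MAX_A_PER_UM = 0.5    # hypothetical allowed peak current per um of width
                        # for this metal layer and thickness

min_width_um = I_PEAK_A / J_MAX_A_PER_UM
drawn_width_um = 2.0
print(f"minimum width {min_width_um:.2f} um; drawn {drawn_width_um:.2f} um ->",
      "OK" if drawn_width_um >= min_width_um else "VIOLATION")
```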

Phil Garrou blogs that most of us know of Nanium as a contract assembly house in Portugal that licensed the Infineon eWLB fan-out technology and is supplying such packages on 300mm wafers. Nanium also has extensive volume-manufacturing experience in WB multi-chip memory packages, combining wafer-level RDL (redistribution) techniques with multiple-die stacking in a package.

Gabe Moretti says it is always a pleasure to talk to Dr. Lucio Lanza and I took the opportunity of being in Silicon Valley to interview Lucio since he has just been awarded the 2014 Phil Kaufman award. Dr. Lanza poses this challenge: “The capability of EDA tools will grow in relation to design complexity so that cost of design will remain constant relative to the number of transistors on a die.”

Are we at an inflection point with silicon scaling and homogeneous ICs? Bill Martin, President and VP of Engineering at E-System Design, thinks so. He lays out the case for considering Moore’s Law 2.0, where 3D integration becomes the key to continued scaling.

Congratulations to Applied Materials Executive Chairman Mike Splinter on receiving the Silicon Valley Education Foundation’s (SVEF) Pioneer Business Leader Award for driving change in business and education philanthropy by using his passion and influence to make a positive impact on people’s lives.

At the recent FD-SOI Forum in Shanghai, the IoT (Internet of Things) was the #1 topic in all the presentations. As Adele Hars reports, speakers included experts from Synopsys, ST, GF, Soitec, IBS, Synapse Design, VeriSilicon, Wave Semi and IBM.

Deeper Dive — Mentor Graphics Looks to the Future

Tuesday, October 14th, 2014

Mentor Graphics is a survivor.

Established in 1981, the electronic design automation software and services company, based in Wilsonville, Ore., was once part of the “DMV” triumvirate in EDA. That acronym stood for Daisy Systems, Mentor Graphics, and Valid Logic Systems. Daisy and Valid are long gone, supplanted by Cadence Design Systems and Synopsys. Mentor abides.

Walden C. (Wally) Rhines has been Mentor’s chairman and chief executive officer since 2000, and before that served as the company’s president and CEO for seven years. His 21 years at Mentor now matches his 21 years at Texas Instruments, where he worked before joining Mentor.

For the fiscal year ended January 31, 2014, Mentor posted revenue of $1.156 billion and net income of $155.3 million. For the six months ended July 31, 2014, the company reported revenue of $512.4 million and net income of $11.6 million. System and software revenue accounted for nearly 64 percent of Mentor’s revenue in the past fiscal year, while service and support revenue represented 36 percent.

Like its main competitors, Cadence and Synopsys, Mentor Graphics is active in acquisitions. In late 2013, the company bought certain assets of Oasys Design Systems, the startup’s Oasys RealTime engine in particular. During fiscal 2014, Mentor acquired the assets of four privately-held companies for a total of $19.3 million. More recently, the company has acquired Berkeley Design Automation for nearly $47 million in cash, Nimbic, and XS Embedded.

The technical challenges of the semiconductor industry are the bread and butter of Mentor’s business, and it faces its own technical challenges in the nanoscale era of chip design and manufacturing. Mentor notes in its 10-K annual report, “Nanometer process geometries cause design challenges in the creation of ICs which are not present at larger geometries. As a result, nanometer process technologies, used to deliver the majority of today’s ICs, are the product of careful design and precision manufacturing. The increasing complexity and smaller size of designs have changed how those responsible for the physical layout of an IC design deliver their design to the IC manufacturer or foundry. In older technologies, this handoff was a relatively simple layout database check when the design went to manufacturing. Now it is a multi-step process where the layout database is checked and modified so the design can be manufactured with cost-effective yields of ICs.”

There has been a great deal of handwringing and naysaying about the industry’s progress to the 14/16-nanometer process node, along with wailing and gnashing of teeth about the slow progress of extreme-ultraviolet lithography, which was supposed to ease the production of 14nm or 16nm chips.

Joseph Sawicki, vice president and general manager of Mentor’s Design-to-Silicon Division, is having none of it.

Joe Sawicki

He recalls seeing a 1988 article about the impending doom of the chip business, faced with making IC features smaller than 1 micron. The submicron era didn’t destroy the semiconductor industry, of course. At the 130nm process node, there was serious discussion that it wouldn’t be necessary to progress to 90nm, which would be difficult or impossible to achieve, according to Sawicki. “Now, we’re hearing the same talk” in discussions about the forthcoming 10nm and 7nm process generations, he says.

In the past and at present, it’s necessary to maintain a spirit of “willful optimism,” Sawicki asserts. He points to Apple’s A8 processor, a custom chip inside the iPhone 6 and iPhone 6 Plus handsets, as an example of outstanding 20nm design that offers twice the density of its predecessors for Apple’s mobile devices.

What makes Sawicki optimistic about the current challenges is “this wonderful ecosystem, all the players, including EDA,” he says. “Scaling is not as easy,” he acknowledges. “It’s not nearly as bad as people are portraying it.” Mentor is working with such parties as imec, the University of Albany’s College of Nanoscale Science & Engineering, and the Semiconductor Research Corporation, according to Sawicki.

When it comes to fretful discussions of what will happen at 3nm and 5nm, Sawicki doesn’t see a reason to panic. “That’s three nodes out,” he notes. “Everything looks impossible.” Looking one node ahead, “we think we’re okay,” he adds.

The semiconductor industry, Sawicki says, has “a pretty clear path out there for the next six to 12 years. It really has to be willful optimism.”
