
Posts Tagged ‘EDA’


Wally Rhines of Mentor Graphics Gets Phil Kaufman Award

Monday, November 16th, 2015

By Jeff Dorsch, Contributing Editor

There was a celebrity roast on 4th Street in San Jose, Calif., on Thursday night.

The occasion was the presentation of the annual Phil Kaufman Award to Wally Rhines, chairman and chief executive officer of Mentor Graphics, for his contributions in the field of electronic design automation. Dr. Rhines has served as Mentor’s CEO since 1993 and as chairman of the EDA software and services company since 2000.

The Phil Kaufman Award is presented by the Electronic Design Automation Consortium (EDAC) and the IEEE Council on Electronic Design Automation (CEDA). It honors the memory of Philip A. Kaufman, the EDA industry pioneer, electronics engineer, and entrepreneur, who died in 1992.

Rhines received some gentle ribbing from Craig Barrett, the former Intel chairman and CEO, who once was a Stanford University professor and served on the advisory panel for Rhines’ doctoral thesis.

Barrett said of Rhines, who was a top chip executive at Texas Instruments prior to joining Mentor, “We competed for about 20 years, which is probably why he went to Mentor Graphics.”

He added, “His hairline is receding faster than mine.”

The retired Intel executive later said Rhines’ career has been “fantastic,” adding, “He certainly exceeded all our expectations. You done good, man. Keep it up.”

A video shown before the formal presentation offered Barrett and other top executives showering accolades on Rhines, who turned 69 years old on Wednesday, November 11. Among those praising Rhines were Aart de Geus, chairman and co-CEO of Synopsys, and Lip-Bu Tan, president and CEO of Cadence Design Systems – business rivals and friends.

“He’s actually a cool cat,” de Geus said of Rhines in the video.

In his remarks, Rhines returned the favor to those praising him, saying of de Geus and Tan, “We’ve had enjoyable interactions.

“I’m particularly gratified that my professor, Craig Barrett, came here for my roast,” he said. “He willingly paid for the beer at The Oasis in Menlo Park.”

On a more serious note, Rhines said of Barrett, “He was very critical to my success.”

Rhines recalled the days when chip designers used rubylith sheets to lay out integrated circuits. “We evolved an industry,” he commented. While IC design and layout has become highly automated with EDA software, system design in many industries remains in the rubylith era, Rhines said. He called for a movement to “automate system design the way we automated electronic design.”

The evening drew to a close with a spoof video depicting Rhines as not only a visionary leader in EDA, but also as a race-car mechanic, a sushi chef, and a hair stylist. A good time was had by all.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs, including Package-on-Package (PoP) and 3D/2.5D stacks, where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to feed forward and back through the supply chain to minimize the risk to expensive Work In Progress (WIP).

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST logic resident on the logic die, which connects to the memory stack using Through-Silicon Vias (TSVs).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”
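To make the idea of a structured, self-describing test record concrete, here is a minimal Python sketch. The field names and values are entirely hypothetical and purely illustrative of the concept—they are not the actual RITdb schema. The point is that once a structure is agreed upon, any party in the dis-aggregated supply chain can parse the same payload.

```python
import json

# Hypothetical, illustrative record only -- not the actual RITdb schema.
# A structured, self-describing format lets any party in the supply chain
# (fabless, foundry, OSAT, OEM) produce and consume the same payload.
test_record = {
    "provenance": {                      # who produced the data, and where
        "company": "ExampleOSAT",        # hypothetical supplier name
        "site": "assembly-site-1",
        "insertion": "final_test",       # test step in the flow
    },
    "unit": {
        "lot_id": "LOT123",
        "wafer_id": "W07",
        "die_xy": [14, 22],              # die position on the wafer
    },
    "results": [
        {"test_name": "io_leakage", "value": 1.2e-9, "units": "A", "pass": True},
        {"test_name": "fmax",       "value": 2.1e9,  "units": "Hz", "pass": True},
    ],
}

payload = json.dumps(test_record)        # serialize for transmission
received = json.loads(payload)           # any partner can deserialize

print(received["results"][0]["test_name"])  # -> io_leakage
```

Because every field is named and nested rather than positional, new fields (for example, the security/provenance data Carulli mentions) can be added without breaking existing consumers.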

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of TSV metrology tools worldwide. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSVs, micro-bumps, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield-loss mechanisms, even if electrical tests show standard results such as ‘partial open.’ Fiordalice said that “we’ve had a lot of pull to take our TSV metrology tool and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for their Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.


Time to “shift left” in chip design and verification, Synopsys founder says

Wednesday, March 4th, 2015

By Jeff Dorsch, contributing editor

The world is moving toward “Smart Everything,” according to Aart de Geus, founder, chairman, and co-CEO of Synopsys. “The door will open gradually, and then quickly,” he said in Tuesday’s keynote address at the Design and Verification Conference and Exhibition, or DVCon, in San Jose, Calif.

“The assisted brain is on the way,” de Geus told the standing-room-only audience. “This may be dreaming, but I don’t think so.”

Taking “Smart Design from Silicon to Software” as his official theme, the veteran executive urged attendees to “shift left” – in other words, “squeezing the schedule” to design, verify, debug, and manufacture semiconductors. “Schedules haven’t changed much,” de Geus said. The difference now is that the marketing department has as much influence in planning and scheduling a new product as the engineering department, he noted.

Chip designers also should “shift left” on semiconductor intellectual property, de Geus said. “IP reuse is the biggest change in 15 to 30 years,” he asserted. “Reuse leverages your innovation.”

After plugging the concepts of unified compilation and unified debugging architectures, de Geus touted the use of virtual prototypes in chip design. “Software guys are impatient with you,” he said. Synopsys, he noted, has created 400 million lines of software code.

Turning to the Internet of Things, de Geus said, “There are a lot of opportunities there.” The problem is “these things are full of cracks,” he added. There are significant engineering and security issues that must be addressed in networks of connected devices.

Developing the FinFET “was said to be impossible seven to eight years ago,” de Geus said. Nonetheless, the semiconductor industry was able to realize that advanced technology to move beyond the 28-nanometer process node, he noted. The future is likely to present similar challenges.

Blog review October 27, 2014

Monday, October 27th, 2014

Is your design’s interconnect wide enough to withstand ESD? Frank Feng of Mentor Graphics writes in his blog that although applying DRC to check for ESD protection has been in use for a while, designers still struggle to perform this check, because a pure DRC approach can’t identify the direction of electrical current flow, which means the check can’t directly evaluate the width or length of a wire polygon against the current flowing through it.
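The current-density concern behind such a check can be illustrated with a first-order sizing calculation. This Python sketch uses generic Human Body Model (HBM) numbers and an assumed, hypothetical metal current-capacity limit—not any particular foundry’s rule:

```python
# First-order ESD wire sizing sketch (illustrative numbers, not a sign-off rule).
# A 2 kV Human Body Model (HBM) event discharges through a ~1.5 kOhm series
# resistance, giving a peak current of roughly I = V / R ~= 1.33 A. If the
# metal layer can carry an assumed J_max amps per micron of width during the
# short ESD pulse, the minimum wire width on the discharge path is I_peak / J_max.

hbm_voltage = 2000.0          # V, a typical HBM qualification level
hbm_resistance = 1500.0       # Ohm, HBM series resistance
i_peak = hbm_voltage / hbm_resistance          # ~1.33 A peak

j_max_per_um = 0.04           # A per um of width -- hypothetical process limit
w_min = i_peak / j_max_per_um                  # minimum width in um

print(f"peak current: {i_peak:.2f} A, minimum width: {w_min:.1f} um")
```

The width requirement only applies to wires actually on the discharge path, which is exactly the current-direction information a geometry-only DRC check cannot see.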

Phil Garrou blogs that most of us know Nanium as a contract assembly house in Portugal that licensed the Infineon eWLB fan-out technology and supplies such packages on 300mm wafers. Nanium also has extensive volume-manufacturing experience in WB multi-chip memory packages, combining wafer-level RDL (redistribution) techniques with multiple-die stacking in a package.

Gabe Moretti says it is always a pleasure to talk to Dr. Lucio Lanza and I took the opportunity of being in Silicon Valley to interview Lucio since he has just been awarded the 2014 Phil Kaufman award. Dr. Lanza poses this challenge: “The capability of EDA tools will grow in relation to design complexity so that cost of design will remain constant relative to the number of transistors on a die.”

Are we at an inflection point with silicon scaling and homogeneous ICs? Bill Martin, President and VP of Engineering at E-System Design, thinks so. He lays out the case for considering Moore’s Law 2.0, where 3D integration becomes the key to continued scaling.

Congratulations to Applied Materials Executive Chairman Mike Splinter on receiving the Silicon Valley Education Foundation’s (SVEF) Pioneer Business Leader Award for driving change in business and education philanthropy by using his passion and influence to make a positive impact on people’s lives.

At the recent FD-SOI Forum in Shanghai, the IoT (Internet of Things) was the #1 topic in all the presentations. As Adele Hars reports, speakers included experts from Synopsys, ST, GF, Soitec, IBS, Synapse Design, VeriSilicon, Wave Semi and IBM.

Deeper Dive — Mentor Graphics Looks to the Future

Tuesday, October 14th, 2014

Mentor Graphics is a survivor.

Established in 1981, the electronic design automation software and services company, based in Wilsonville, Ore., was once part of the “DMV” triumvirate in EDA. That acronym stood for Daisy Systems, Mentor Graphics, and Valid Logic Systems. Daisy and Valid are long gone, supplanted by Cadence Design Systems and Synopsys. Mentor abides.

Walden C. (Wally) Rhines has been Mentor’s chairman and chief executive officer since 2000, and before that served as the company’s president and CEO for seven years. His 21 years at Mentor now match his 21 years at Texas Instruments, where he worked before joining Mentor.

For the fiscal year ended January 31, 2014, Mentor posted revenue of $1.156 billion and net income of $155.3 million. For the six months ended July 31, 2014, the company reported revenue of $512.4 million and net income of $11.6 million. System and software revenue accounted for nearly 64 percent of Mentor’s revenue in the past fiscal year, while service and support revenue represented 36 percent.

Like its main competitors, Cadence and Synopsys, Mentor Graphics is active in acquisitions. In late 2013, the company bought certain assets of Oasys Design Systems, the startup’s Oasys RealTime engine in particular. During fiscal 2014, Mentor acquired the assets of four privately-held companies for a total of $19.3 million. More recently, the company has acquired Berkeley Design Automation for nearly $47 million in cash, Nimbic, and XS Embedded.

The technical challenges of the semiconductor industry are the bread and butter of Mentor’s business, and it faces its own technical challenges in the nanoscale era of chip design and manufacturing. Mentor notes in its 10-K annual report, “Nanometer process geometries cause design challenges in the creation of ICs which are not present at larger geometries. As a result, nanometer process technologies, used to deliver the majority of today’s ICs, are the product of careful design and precision manufacturing. The increasing complexity and smaller size of designs have changed how those responsible for the physical layout of an IC design deliver their design to the IC manufacturer or foundry. In older technologies, this handoff was a relatively simple layout database check when the design went to manufacturing. Now it is a multi-step process where the layout database is checked and modified so the design can be manufactured with cost-effective yields of ICs.”

There has been a great deal of handwringing and naysaying about the industry’s progress to the 14/16-nanometer process node, along with wailing and gnashing of teeth about the slow progress of extreme-ultraviolet lithography, which was supposed to ease the production of 14nm or 16nm chips.

Joseph Sawicki, vice president and general manager of Mentor’s Design-to-Silicon Division, is having none of it.

Joe Sawicki

He recalls seeing a 1988 article about the impending doom of the chip business, faced with making IC features smaller than 1 micron. The submicron era didn’t destroy the semiconductor industry, of course. At the 130nm process node, there was serious discussion that it wouldn’t be necessary to progress to 90nm, which would be difficult or impossible to achieve, according to Sawicki. “Now, we’re hearing the same talk” in discussions about the forthcoming 10nm and 7nm process generations, he says.

In the past and at present, it’s necessary to maintain a spirit of “willful optimism,” Sawicki asserts. He points to Apple’s A8 processor, a custom chip inside the iPhone 6 and iPhone 6 Plus handsets, as an example of outstanding 20nm design that offers twice the density of its predecessors for Apple’s mobile devices.

What makes Sawicki optimistic about the current challenges is “this wonderful ecosystem, all the players, including EDA,” he says. “Scaling is not as easy,” he acknowledges. “It’s not nearly as bad as people are portraying it.” Mentor is working with such parties as imec, the University at Albany’s College of Nanoscale Science & Engineering, and the Semiconductor Research Corporation, according to Sawicki.

When it comes to fretful discussions of what will happen at 3nm and 5nm, Sawicki doesn’t see a reason to panic. “That’s three nodes out,” he notes. “Everything looks impossible.” Looking one node ahead, “we think we’re okay,” he adds.

The semiconductor industry, Sawicki says, has “a pretty clear path out there for the next six to 12 years. It really has to be willful optimism.”

Foundry, EDA partnership eases move to advanced process nodes

Monday, September 15th, 2014

By Dr. Lianfeng Yang, Vice President of Marketing, ProPlus Design Solutions, Inc., San Jose, Calif.

Partnerships are the lifeblood of the semiconductor industry, and when moving to new advanced nodes, industry trends show closer partnerships and deeper collaborations between foundries, EDA vendors and design companies to ease the transition.

It’s fitting, then, for me to pay homage in this blog post to a successful and long-term partnership between a foundry and an EDA tool supplier.

A leading semiconductor foundry and an EDA vendor with design-for-yield (DFY) solutions have enjoyed a long-term partnership. Recently, they worked together to leverage DFY technologies for process technology development and design flow enhancement. The goals were to improve SRAM yield and provide faster turnaround of a new process platform development.

The foundry used the EDA firm’s high-sigma DFY solution to optimize its SRAM yield during 28nm process development. Early this year, it announced 28nm readiness for multi-project wafer (MPW) customers. One of the reasons it was able to release the 28nm process with acceptable SRAM yield in a short time was a new methodology for SRAM yield ramping that deployed a DFY engine.

During advanced technology development, the time spent on SRAM yield ramping is significant because statistical process variation, particularly local variation between two identical neighboring devices, sometimes called mismatch, limits SRAM parametric yield. The impact of local process variation increases when moving to smaller CMOS technology nodes.

In the meantime, supply voltage is reduced, so operating regions are smaller. The difficulty of achieving high yield for SRAM is greater because smaller nodes require higher SRAM density. Such challenges require very high sigma robustness, or high SRAM bitcell yield. Statistically, the analysis for the SRAM bitcell at 28nm needs to be at around 6σ, while FinFET technology at 16/14nm sets even higher sigma requirements for SRAM bitcell yield.
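A back-of-the-envelope calculation shows where a roughly 6σ requirement comes from. This Python sketch uses illustrative numbers (a 64Mb array and a 99% array-yield target, neither taken from the article) to convert an array-level yield target into the sigma margin each bitcell must meet:

```python
from statistics import NormalDist

# Why SRAM bitcell yield must be verified at ~6 sigma and beyond: with tens of
# millions of bitcells per die, even a vanishingly small per-cell failure
# probability multiplies into real array-yield loss. Numbers are illustrative.

n_cells = 64 * 1024 * 1024        # a 64 Mb SRAM array
array_yield_target = 0.99         # want 99% of arrays free of parametric fails

# Per-cell failure probability allowed by the target:
# array_yield = (1 - p_cell) ** n_cells  =>  p_cell = 1 - yield ** (1/n)
p_cell = 1.0 - array_yield_target ** (1.0 / n_cells)

# Express that probability as a one-sided Gaussian sigma level.
sigma = -NormalDist().inv_cdf(p_cell)

print(f"allowed per-cell fail prob: {p_cell:.2e}, required margin: {sigma:.2f} sigma")
```

With these assumptions the allowed per-cell failure probability lands near 10⁻¹⁰, which corresponds to a sigma requirement slightly above 6—consistent with the 6σ figure cited for 28nm.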

During technology development, foundry engineers improve the process to solve defect-related yield issues first. Design-for-manufacturing methodologies can be used to eliminate some systematic process variations. However, many random process variations, such as random dopant fluctuations (RDF), line edge and width roughness (LER, LWR), are fundamental limiting factors for parametric yield particular to SRAM.

Traditionally, foundry engineers rely on experience and know-how from previous node development efforts to analyze and decide how to run different process splits for different process improvement scenarios to optimize SRAM yield. These efforts are often time-consuming and less effective at advanced nodes like 28nm because the optimization margin is much smaller.

The fab’s new SRAM yielding flow used a high-sigma statistical simulator as the core engine. It provided fast and accurate 3-7+σ yield prediction and optimization functions for memory, logic and analog circuit designs. During process development, the tool proved its technology advantages in both accuracy and performance, and was validated by silicon in several rounds of tape-outs throughout the development process. It required no additional tuning of the technology or special settings on the tool, so even process engineers unfamiliar with EDA tools could run it and get reliable results to guide their process tuning for SRAM yield improvement.

The flow was able to predict SRAM yield for different process and operating conditions. It simulated SRAM yield improvement trends and provided process improvement direction and guidelines within hours. A methodology such as this becomes necessary for advanced nodes where the remaining optimization margin is small. A simulation-based methodology can run through all possible combinations that process engineers want to explore, providing better yield results and faster yield ramping. By comparison, the traditional approach of exploration based on experience and running a large number of process splits is limited and may not yield optimum results. It is also time-consuming, as the engineer would need to wait for tape-out results and then run another set of trials, a cycle that could consume months.
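To see why a purpose-built high-sigma engine is needed, consider that brute-force Monte Carlo would need on the order of 10¹⁰ samples to observe even a handful of 6σ failures. The sketch below uses generic importance sampling—a standard statistical technique, not the vendor’s actual algorithm—to estimate a 6σ tail probability with only a few hundred thousand samples:

```python
import math
import random

# Generic importance-sampling sketch of high-sigma estimation (NOT the
# vendor's algorithm, just the standard statistical idea). We estimate the
# probability that a standard-normal "process parameter" exceeds 6 sigma.
# Shifting the sampling distribution to the failure region lets ~200k samples
# do what brute-force Monte Carlo would need billions of samples for.

random.seed(42)
shift = 6.0                    # center the sampling density at the threshold
n = 200_000

total = 0.0
for _ in range(n):
    y = random.gauss(shift, 1.0)           # draw from the shifted density q(y)
    if y > 6.0:                            # failure indicator
        # weight = p(y) / q(y) for N(0,1) vs N(shift,1): exp(shift^2/2 - shift*y)
        total += math.exp(0.5 * shift * shift - shift * y)

p_est = total / n
p_exact = 0.5 * math.erfc(6.0 / math.sqrt(2.0))   # true P(X > 6) ~ 9.87e-10

print(f"estimate: {p_est:.3e}, exact: {p_exact:.3e}")
```

Production high-sigma tools are far more sophisticated (they handle many correlated device parameters and real circuit simulation), but the underlying payoff is the same: accurate tail-probability estimates at a tiny fraction of the brute-force sample count.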

The flow saved months ramping up SRAM yield for the 28nm process node. It reduced iteration time and saved wafer cost. Process engineers now only need to fabricate selective wafers to validate simulation results. They know which direction is optimal and have guidelines to run process splits that will help them identify the best conditions and converge on the best yield. They gained greater certainty as they saw more simulation-to-silicon correlation data as the project progressed.

A well-established methodology and flow brings value to process engineers because they can rely on DFY high sigma simulations to lay the foundation for their process improvement strategies to reach certain SRAM yield targets. They can run selective process splits to verify the results for lower wafer costs, fewer process tuning iterations and faster time to market.

Overall, this is a highly successful and mutually beneficial partnership, and the value of DFY to process technology development is obvious. The same DFY methodology can be used by memory designers, as SRAM yield is their primary target as well. The only difference is that it tunes design variables using the same methodology, flow and tool solutions.

It’s easy to see the value of a tight collaboration between the foundry, EDA vendor and design companies and why it will be a trend on top of the “foundry-fabless” business model.

About Dr. Lianfeng Yang

Lianfeng Yang, ProPlus Solutions, Inc.

Dr. Lianfeng Yang currently serves as the Vice President of Marketing at ProPlus Design Solutions, Inc. Prior to co-founding ProPlus, he was a senior product engineer at Cadence Design Systems leading the product engineering and technical support effort for the modeling product line in Asia. Dr. Yang has over 40 publications and holds a Ph.D. degree in Electrical Engineering from the University of Glasgow in the U.K.

Blog review September 8, 2014

Monday, September 8th, 2014

Jeff Wilson of Mentor Graphics writes that, in IC design, we’re currently seeing the makings of a perfect storm when it comes to the growing complexity of fill. The driving factors contributing to the growth of this storm are the shrinking feature sizes and spacing requirements between fill shapes, new manufacturing processes that use fill to meet uniformity requirements, and larger design sizes that require more fill.

Is 3D NAND a Disruptive Technology for Flash Storage? Absolutely! That’s the view of Dr. Er-Xuan Ping of Applied Materials. He said a panel at the 2014 Flash Memory Summit agreed that 3D NAND will be the most viable storage technology in the years to come, although opinions were mixed on when that disruption would be evident.

Phil Garrou takes a look at some of the “Fan Out” papers presented at the 2014 ECTC, focusing on STATSChipPAC (SCP) and the totally encapsulated WLP, Siliconware (SPIL) panel fan-out packaging (P-FO), Nanium’s eWLB dielectric selection, and an electronic contact lens for diabetics from Google/Novartis.

Ed Korczynski says he now knows how wafers feel when moving through a fab. Leti in Grenoble, France, does so much technology integration that in 2010 it opened a custom-developed people-mover to connect its cleanrooms (“Salles Blanches” in French), called the Liaison Blanc-Blanc (LBB), so workers can remain in bunny suits while moving batches of wafers between buildings.

Handel Jones of IBS provides a study titled “How FD-SOI will Enable Innovation and Growth in Mobile Platform Sales” that concludes that the benefits of FD-SOI are overwhelming for mobile platforms through Q4/2017 based on a number of key metrics.

Gabe Moretti of Chip Design blogs that a mature industry looks to the future, not just to short-term income. EDA is demonstrating that it is such an industry, with significant participation by its members in fostering and supporting the education of its future developers and users through educational licenses and other projects.

An EDA view of semiconductor manufacturing

Thursday, July 24th, 2014

By Gabe Moretti, Contributing Editor

The concern that there is a significant break between tools used by designers targeting leading-edge processes, those at 32nm and smaller to be precise, and those used to target older processes was dispelled during the recent Design Automation Conference (DAC).  In his June keynote address at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks and stated: “There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes.  There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

This is very important, since the number of companies that can afford to use leading-edge processes is diminishing due to the very high ($100 million and more) non-recurring investment required.  And of course the cost of each die is also greater than with previous processes.  If the tools could only be used by those customers doing leading-edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that “when you think about design complexity there are a few axes that might be used to measure it.  Certainly raw gate count or transistor count is one popular measure.  From a recent article in Chip Design, a look at complexity on a log scale shows the billion mark has been eclipsed.”  Figure 1, courtesy of Cadence, shows the increase of transistors per die through the last 22 years.

Fig 1

Steve continued: “Another way to look at complexity is looking at the number of functional IP units being integrated together.  The graph in figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following.  This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node.  At the heart of the process complexity question are metrics such as number of parasitic elements needed to adequately model a like structure in one process versus another.”  It is important to notice that the percentage of IP blocks provided by third parties is getting close to 50%.

Fig 2

Steve concludes with: “Yet another way to look at complexity is through the lens of the design rules and the design rule decks.  The graphs below show the upward trajectory for these measures in a very significant way.” Figure 3, also courtesy of Cadence, shows the increased complexity of the Design Rules provided by each foundry.  This trend makes second sourcing a design impossible, since having a second source foundry would be similar to having a different design.

Fig 3

Another problem designers have to deal with is the increasing complexity due to decreasing feature sizes.  Anand Iyer, Calypto Director of Product Marketing, observed: “Complexity of design is increasing across many categories such as variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is causing design performance to be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor in design complexity because power, especially dynamic power, is a major issue at these process nodes. Voltage cannot scale due to noise margin and process variation considerations, and capacitance is relatively unchanged or increasing.”
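The dynamic-power relation behind Iyer’s last point is P_dyn = αCV²f. Because switching power scales with the square of supply voltage, any inability to lower V at a new node is costly. This Python sketch, with illustrative (assumed) numbers, shows the penalty of holding Vdd at 1.0V instead of scaling it to 0.9V:

```python
# The standard CMOS dynamic-power relation: P_dyn = alpha * C * V^2 * f.
# All numbers below are illustrative, not from any particular process.

def dynamic_power(alpha, cap_farads, vdd, freq_hz):
    """Switching power: activity factor * switched capacitance * Vdd^2 * clock."""
    return alpha * cap_farads * vdd * vdd * freq_hz

alpha = 0.15            # average switching activity factor (assumed)
cap = 1.0e-9            # total switched capacitance, 1 nF (assumed)
freq = 1.0e9            # 1 GHz clock (assumed)

p_scaled = dynamic_power(alpha, cap, 0.9, freq)   # Vdd scaled to 0.9 V
p_stuck = dynamic_power(alpha, cap, 1.0, freq)    # Vdd stuck at 1.0 V

# Holding Vdd at 1.0 V instead of 0.9 V costs (1.0/0.9)^2 ~= 23% more power.
print(f"{p_stuck / p_scaled:.2f}x")
```

The quadratic dependence is why the noise-margin and variation constraints Iyer cites, which block further voltage scaling, translate directly into a power problem at advanced nodes.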

Impact on Back-End Tools

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a Design Rules file that is becoming very complex.  In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules.  Thus, building rule-specific Place and Route tools would directly lower the number of DR checks required.

Mary Ann White of Synopsys answered: “We do not believe so.  Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects required to handle the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different.  The use of multi patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”
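The decomposition step White refers to is, at its core, a graph-coloring problem: any two features spaced closer than the single-mask minimum must be assigned to different masks. A minimal sketch of double-patterning decomposition as a bipartite two-coloring is shown below; the function name and data layout are my own illustrative choices, not any vendor's API.

```python
from collections import deque

def decompose_double_patterning(features, conflicts):
    """Assign each layout feature to one of two masks (colors 0/1).

    `features` is a list of feature ids; `conflicts` lists (a, b) pairs
    whose spacing is below the single-mask minimum, so a and b must land
    on different masks.  Returns a {feature: mask} dict, or None if the
    conflict graph contains an odd cycle (the layout is not
    two-colorable and needs a redesign or a third mask).
    """
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)

    color = {}
    for start in features:
        if start in color:
            continue
        color[start] = 0          # seed a new connected component
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in color:
                    color[nbr] = 1 - color[node]  # opposite mask
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return None   # odd cycle: not decomposable
    return color
```

A three-feature chain a–b–c decomposes cleanly (a and c share a mask), while a triangle of mutual conflicts returns None — the classic odd-cycle violation that forces a layout change.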

Steve Carlson of Cadence shares this opinion: “There have been subtle differences between requirements at new process nodes for many generations.  Customers do not want to have different tool strategies for a second-source foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration).  In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers.  It is doubtful that different tools will be spawned for different foundries.  How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision.  The use model users want is singular across all foundry options.  How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy.  Time will tell.”

This is clear for now.  But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively.  Changing foundry will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools.  At the just-concluded DAC conference I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory in advanced technology nodes.  Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development, and hence reducing time-to-market.  He said: “It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for design and optimization of particular circuits, systems and corresponding products.  One of the main challenges is to factor accurately the device variability into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to process-induced variability related predominantly to silicon channel thickness or shape variation.”  He continued: “However, until now TCAD simulation, compact model extraction and circuit simulation have typically been handled by different groups of experts, and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help the DTCO practices.”
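Asenov's point about factoring statistical variability into DTCO, rather than worst-casing it, can be illustrated with a toy Monte Carlo loop: sample per-transistor threshold-voltage variation and propagate it through a delay model to get a distribution instead of a single corner. The alpha-power-law model, the constants, and the function names below are illustrative assumptions of mine, not anyone's production flow.

```python
import random
import statistics

def gate_delay(vdd, vth, k=1.0, alpha=1.3):
    """Toy alpha-power-law gate delay: grows sharply as Vth approaches Vdd."""
    return k * vdd / (vdd - vth) ** alpha

def monte_carlo_delay(n=10000, vdd=0.8, vth_nominal=0.35, vth_sigma=0.03, seed=1):
    """Sample Gaussian per-transistor Vth variation and return the mean and
    standard deviation of the resulting delay distribution."""
    rng = random.Random(seed)
    delays = [gate_delay(vdd, rng.gauss(vth_nominal, vth_sigma))
              for _ in range(n)]
    return statistics.mean(delays), statistics.stdev(delays)
```

Because delay is convex in Vth, the statistical mean sits above the nominal-corner delay, a small numerical reminder that a single fixed corner misrepresents what real silicon does under variability.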

Ansys pointed out that in advanced FinFET process nodes, the operating voltage for the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins for the devices. With several transient modes of operation in low-power ICs, having an accurate representation of the package model is mandatory for accurate noise-coupling simulations. Distributed package models with bump-level resolution are required for performing Chip-Package-System simulations for accurate noise-coupling analysis.

Further Exploration

The topic of Semiconductor Manufacturing has generated a large number of responses.  As a result, the next monthly article will continue to cover the topic, with particular focus on the impact of leading-edge processes on EDA tools and practices.

This article was originally published on Systems Design Engineering.

Big sell: IP Trends and Strategies

Monday, March 10th, 2014

By Sara Ver-Bruggen, SemiMD Editor

Experts at the table: Continued strong growth for semiconductor intellectual property (IP) through 2017 has been forecast by Semico Research. Semiconductor Manufacturing & Design invited Steve Roddy, Product Line Group Director, IP Group at Cadence, Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify and Grant Pierce, CEO at Sonics to discuss how the IP landscape is changing and provide some perspectives, as the industry moves to new device architectures.

SemiMD: How are existing SIP strategies adapting for the transition to the 20 nm generation of system-on-chips (SoCs)?

Roddy: The move to 22/16 nm process nodes has accelerated the trend towards the adoption of commercial interface and physical IP. The massive learning curve in dealing with new transistor structures (FinFET, fully depleted SOI, high-k) raised the price of building in-house physical IP for internal consumption, thus compelling yet another wave of larger semiconductor IDMs and fabless semi vendors to leverage external IP for a greater share of their overall portfolio of physical IP needs.

Pierce: With 20 nm processes, the number of SIP cores and the size of memory accessed by those cores are seeing double-digit growth. This growth translates into tremendous complexity that requires a solution for abstracting away the sheer volume of data generated by chip designs. The 20 nm processes will drive the need for SoC subsystems that abstract away the detailed interaction of step-by-step processing. For example, raising the abstraction of a video stream up to the level of a video subsystem; the collection of the various pieces of video processing into a single unit.
In this scenario, the big challenge becomes integration of subsystem units to create the final SoC. Meeting this challenge places a premium value on SIP that facilitates the efficient management of memory bandwidth to feed the growing number of SoC subsystems in the designs. Furthermore, 20 nm SoC designs will also place higher value on SIP that helps manage and control power in the context of applications running across these subsystems.

Smith: We are seeing many of the larger systems companies bypassing 20 nm entirely and moving from 28nm process technologies to the upcoming generation of 16 nm/14 nm FinFET technologies. FinFET offers the benefits of much lower power at equivalent performance or much higher performance at similar power to existing technologies. While 20 nm offers some gains, there are compelling competitive reasons to move quickly beyond 28/20 nm.
The demand for FinFET processes will naturally push the demand for the critical SIP blocks needed to support SoC designs at this node. SIP providers will need to migrate SIP blocks to the new technology and, for the most critical, will need to prove them out in silicon. The foundries will need to encourage this activity as SIP will typically make up more than 60-70% of the designs that will be slated for the new FinFET processes.

SemiMD: Within the semiconductor intellectual property (SIP) SoC subsystems market, which subsystem categories are likely to see most growth and how is the market evolving in the near term?

Pierce: Internet of Things (IoT) is causing an explosion in the number of sensors per device that are collecting huge amounts of data to be used locally or in the cloud. However, many of these sensors will need to operate at very low power levels, off of tiny batteries or scavenged energy. Sensor subsystems will need to carefully integrate the required processing and memory resources without support from the host processor. Some of the most interesting and challenging sensor subsystems will be imaging-related, where the processing loads can be highly dynamic, but the power requirements can be particularly challenging. Additionally, MEMS subsystems will grow in importance because this technology will often be used for power harvesting in IoT endpoint devices.

Smith: High-speed interfaces will see the most growth. DDR is at the top with DDR typically being the highest performance interface in the system and also the most critical. The DDR interface is at the heart of system operation and, if it does not operate reliably, the system won’t function. Other high-speed interfaces especially for video will also see tremendous growth, particularly in the mobile area.

Roddy: The emergence of a ‘subsystems’ IP market is to date over-hyped. That’s not to say that customers of IP are content with the status quo of 2008 where many IP blocks were purchased in isolation from a multitude of vendors. Customers do want a large portfolio of IP blocks that they can quickly stitch together, with known interoperability, provided with useful and usable verification IP. For that reason, we’ve seen a consolidation in the semiconductor IP business within the past five years, accelerating even further in 2012 and 2013. Larger providers such as Cadence can deliver a broad portfolio of IP while ensuring consistency, common support infrastructure, consistent best-in-class verification, and lowered transaction costs. But what customers don’t want is a pre-baked black-box that locks down system design issues that are best answered by the SOC designer in the context of the specific chip architecture. For that reason we expect to see slow growth in the class of ready-made, fully-integrated subsystems where the cost of development for the IP vendor far outweighs the added value delivered.

SemiMD: How will third-party SIP outsourcing models become more important as the industry embarks on the 20 nm generation of SoCs, and what are IP vendors doing to enable the industry’s transition to the 20 nm generation of SoCs?

Roddy: As the costs of physical IP development scale up with the increasing costs of advanced process node design, more consumers of IP are increasing the percentage of IP they outsource. Buyers of IP will always analyze the make-versus-buy equation by weighing several factors, including the degree of differentiation that a particular piece of IP can bring to their chips. Fully commoditized IP is easy to decide to outsource. Highly proprietary IP stays in house. But the lines are never black and white – there are always shades of grey. The IP vendors that can provide rapid means to customize pre-existing IP blocks are the vendors that will capture those incrementally outsourced blocks. The Cadence IP Factory concept of using automation to assemble and configure IP cores is one way that IP vendors can offer a blend of off-the-shelf cost savings with an appropriate touch of value-added differentiation.

Pierce: From a business perspective, SIP outsourcing is inevitable for all functions that are not proprietary to the end system or SoC. It will not be feasible to develop and maintain all the expertise necessary to design and build a 20 nm device. The demand to abstract up to a subsystem solution will drive a consolidation of SIP suppliers under a common method of integration, for example a platform-based approach built around on-chip networks. Platform integration will be a key requirement for SIP suppliers.

Smith: SIP vendors are looking to the foundries and/or large systems companies to become partners in the development of the critical IP blocks needed to support the move to FinFET.

SemiMD: Are there examples of the ‘perfect’ SIP strategy in the industry, in terms of leveraging internal and third party SIP?

Smith: Yes. Even the largest semiconductor companies go outside for certain SIP blocks. It is virtually impossible for any individual company to have the resources (both human and capital) to develop and support the wide variety of SIP needed in today’s most complex SoC designs.

Pierce: The perfect SIP strategy in the industry is one that readily enables use of any SIP in any chip at any time. Pliability of architecture over a broad range of applications is a winning strategy. Agile integration of SIP cores and subsystems will become a critical strategic advantage. No one company exemplifies perfect SIP strategy today, but the rewards will be great for those companies that get closest to perfection first.

Roddy: There is no one-size-fits-all IP strategy that is perfect for all SOC design teams. The teams have to carefully consider their unique business proposition before embarking on an IP procurement strategy. For example, the tier 1 market leader in a given segment is striving to define and exploit new markets. That Tier 1 vendor will need to push new standards; add new value-add software features; and innovate in hardware, software and business models. For the Tier 1, building key value-add IP in-house, or partnering with an IP vendor that can rapidly customize standards-based IP is the way to go. On the other end of the spectrum, the ‘fast follower’ company looking to exploit a rapidly expanding market will be best served by outsourcing as close to 100% as possible of the needed IP. For this type of company, speed is of the essence and critical is the need to partner with IP vendors with the broadest possible portfolio to get a chip done fast and done right.

SemiMD: What challenges and also what opportunities is China’s growing SIP subsystems market presenting for the semiconductor industry?

Roddy: China is one of the most dynamic markets today for semiconductor IP. The overall Chinese semiconductor market is growing rapidly and a growing number of Chinese system OEMs are increasing investment levels, including taking on SOC design challenges previously left to the semiconductor vendors. By partnering with the key foundries to enable a portfolio of IP in specific process technology nodes for mutual customers, the leading IP providers such as Cadence are setting the buffet table at which the Chinese SOC design teams will fill their plates with interoperable, compatible, tested and verified physical IP blocks that will ensure fast time to market success.

Pierce: China is a fast growing market for SIP solutions in general. It is also a market that highly values the time-to-market benefit that SIP delivers as the majority of China’s products are consumer-oriented with short design cycles. SIP subsystems will be the most palatable for consumption by the China market. However, because China has adopted a number of regional standards, there will be substantial pressure on subsystem providers to optimize for local standards.

Smith: We see tremendous opportunities in terms of new business for SIP from both established companies and many entrepreneurial startups. Challenges include pricing pressure and the concern over IP leakage or copying. While this has become less of an issue over the years, it is still a concern. The good news is that the market in China is very aggressive and willing to take risks to get ahead.

Safety critical devices drive fast adoption of advanced DFT

Monday, January 6th, 2014

By Ron Press, Mentor Graphics Corp

Devices used in safety critical applications need to be known to work and must be regularly verifiable. Therefore, a very high-quality test is important, as is a method to perform a built-in self-test. Recently, there has been strong growth in the automotive market, and the number of processors within each car is steadily increasing. These devices are used for more and more functions such as braking systems, engine control, heads-up displays, navigation systems, image sensors, and more. As a result, we see many companies designing devices for the automotive market or trying to enter it.

2011 saw the publication of the ISO standard 26262, which specifies standard criteria for automobile electronics. Our experience is that recently two test requirements are being adopted or at least evaluated by most companies developing safety critical devices. One requirement is to perform a very high-quality test such that there are virtually no defective parts that escape the tests. The other is to perform a built-in self-test such that the part can be tested when in the safety critical application.

There are various pattern types that help support the goal of zero DPM (defects per million) for shipped devices. In particular, Cell-Aware test is proven to uniquely detect defects that escape traditional tests. Cell-Aware test can find defects that would escape a 100% stuck-at, transition, and timing-aware test set. This is because it works by first modeling the actual defects that can occur in the physical layout of standard cells. Cell-Aware pattern size was recently improved and reduced, but a complete pattern set is still larger than a traditional pattern set, so embedded compression is used.
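The stuck-at baseline that Cell-Aware test improves on can be sketched in a few lines: force each net of a netlist to a constant 0 or 1, re-simulate each test pattern, and count the faults whose faulty output differs from the good one. The toy netlist and names below are my own illustrative choices; note that this net-level model, by construction, cannot represent the defects inside a standard cell that Cell-Aware test targets.

```python
# Tiny combinational netlist in topological order: (output, op, inputs).
NETLIST = [
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("n1", "c")),
    ("y",  "NOT", ("n2",)),
]
INPUTS, OUTPUTS = ("a", "b", "c"), ("y",)
OPS = {"AND": lambda v: all(v), "OR": lambda v: any(v), "NOT": lambda v: not v[0]}

def simulate(pattern, fault=None):
    """Evaluate the netlist; `fault` = (net, stuck_value) forces that net."""
    values = dict(pattern)
    if fault and fault[0] in values:      # fault on a primary input
        values[fault[0]] = fault[1]
    for out, op, ins in NETLIST:
        values[out] = int(OPS[op]([values[i] for i in ins]))
        if fault and out == fault[0]:     # fault on an internal net
            values[out] = fault[1]
    return tuple(values[o] for o in OUTPUTS)

def stuck_at_coverage(patterns):
    """Fraction of single stuck-at faults detected by the pattern set."""
    nets = list(INPUTS) + [g[0] for g in NETLIST]
    faults = [(n, v) for n in nets for v in (0, 1)]
    detected = sum(
        any(simulate(p, f) != simulate(p) for p in patterns)
        for f in faults
    )
    return detected / len(faults)
```

For this small circuit an exhaustive pattern set detects all twelve single stuck-at faults, while a single pattern leaves several undetected — the coverage number ATPG tools optimize against.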

At Mentor Graphics, we started seeing more and more customers implementing logic BIST and embedded compression for the same circuits. Therefore, it made sense to integrate both into common logic that can be shared, since both technologies interface to scan chains in a similar manner. The embedded compression decompressor could be configured into a linear feedback shift register (LFSR) to produce pseudo-random patterns for logic BIST. Both the logic BIST and embedded compression logic provide data to scan chains through a phase shifter so that logic is fully shared. The scan chain outputs are compacted together in embedded compression. This logic is mostly shared with logic BIST to reduce the number of scan chain outputs that enter a signature calculator.
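The LFSR reconfiguration Press describes can be sketched as follows: the same shift register that serves as the decompressor is fed back through XOR taps so it cycles through pseudo-random states, each of which becomes a scan-load pattern. Below is a minimal Fibonacci-LFSR generator; the parameters and naming are illustrative, and a real decompressor with its phase shifter is considerably more elaborate.

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random scan patterns from a Fibonacci LFSR.

    `seed` is the nonzero initial register state, `taps` the feedback
    bit positions (0 = LSB), `width` the register length in bits, and
    `count` the number of `width`-bit patterns to emit.
    """
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:                    # XOR the tapped bits together
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

# A maximal-length 4-bit LFSR (taps at bits 3 and 2, i.e. polynomial
# x^4 + x^3 + 1) cycles through all 15 nonzero states before repeating.
```

With a primitive feedback polynomial the register visits every nonzero state exactly once per period, which is what makes it a cheap on-chip source of pseudo-random test stimulus.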

The hybrid embedded compression/logic BIST circuit is useful for meeting the safety-critical device quality and self-test requirements. In addition, since logic is shared the controller is 20-30% smaller than implementing embedded compression and logic BIST separately. As previously mentioned, we have seen this logic being adopted or in evaluation very broadly by automotive device designers.

One side effect of using embedded compression and logic BIST together is that each makes the other better. For example, embedded compression can supply an extremely high-quality production test, so fewer test points are necessary in logic BIST to make random-pattern-resistant logic more testable, which reduces the area of logic BIST test points. Conversely, the X-bounding and any test points that are added for logic BIST make the circuit more testable and improve the embedded compression coverage and pattern count results.

Ron Press is the technical marketing manager of the Silicon Test Solutions products at Mentor Graphics. The 25-year veteran of the test and DFT (design-for-test) industry has presented seminars on DFT and test throughout the world. He has published dozens of papers in the field of test, is a member of the International Test Conference (ITC) Steering Committee, is a Golden Core member of the IEEE Computer Society, and a Senior Member of IEEE. Press has patents on reduced-pin-count testing, glitch-free clock switching, and patents pending on 3D DFT.
