Part of the Solid State Technology Network

Posts Tagged ‘EDA’

Foundry, EDA partnership eases move to advanced process nodes

Monday, September 15th, 2014

By Dr. Lianfeng Yang, Vice President of Marketing, ProPlus Design Solutions, Inc., San Jose, Calif.

Partnerships are the lifeblood of the semiconductor industry, and when moving to new advanced nodes, industry trends show closer partnerships and deeper collaborations between foundries, EDA vendors and design companies to ease the transition.

It’s fitting, then, for me to pay homage in this blog post to a successful and long-term partnership between a foundry and an EDA tool supplier.

A leading semiconductor foundry and an EDA vendor with design-for-yield (DFY) solutions have enjoyed a long-term partnership. Recently, they worked together to leverage DFY technologies for process technology development and design flow enhancement. The goals were to improve SRAM yield and provide faster turnaround of a new process platform development.

The foundry used the EDA firm’s high-sigma DFY solution to optimize its SRAM yield during 28nm process development. Early this year, it announced 28nm readiness for multi-project wafer (MPW) customers. One reason it was able to release the 28nm process with acceptable SRAM yield in a short time was a new methodology for SRAM yield ramping that deployed a DFY engine.

During advanced technology development, the time spent on SRAM yield ramping is significant because statistical process variation, particularly local variation between two identical neighboring devices sometimes called mismatch, limits SRAM parametric yield. The impact of local process variation increases when moving to smaller CMOS technology nodes.

At the same time, supply voltage is reduced, so operating regions are smaller, and achieving high SRAM yield is harder because smaller nodes require higher SRAM density. Such challenges require very high sigma robustness, that is, very high SRAM bitcell yield. Statistically, analysis of the SRAM bitcell at 28nm needs to reach around 6σ, while FinFET technology at 16/14nm sets even higher sigma requirements for SRAM bitcell yield.
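The sigma arithmetic above can be made concrete with a short sketch (illustrative only; it assumes independent, Gaussian-distributed bitcell failures, and the 32 Mb array size is a hypothetical example, not a figure from the article):

```python
# Translate a bitcell sigma target into the per-cell failure probability
# and the array-level parametric yield it implies. Assumes independent
# Gaussian bitcell failures; the 32 Mb array size is illustrative.
import math

def bitcell_fail_prob(sigma: float) -> float:
    """One-sided Gaussian tail probability for a given sigma target."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

def array_yield(sigma: float, n_bitcells: int) -> float:
    """Probability that every bitcell in the array passes."""
    p = bitcell_fail_prob(sigma)
    return (1.0 - p) ** n_bitcells

n = 32 * 2**20  # a hypothetical 32 Mb SRAM array
print(f"6-sigma fail prob per cell: {bitcell_fail_prob(6.0):.3e}")
print(f"array parametric yield:     {array_yield(6.0, n):.4f}")
```

Even at a 6σ bitcell target (roughly a one-in-a-billion cell failure rate), a multi-megabit array loses a few percent of parametric yield, which is why the bitcell, not the array, must carry the high-sigma requirement.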

During technology development, foundry engineers improve the process to solve defect-related yield issues first. Design-for-manufacturing methodologies can be used to eliminate some systematic process variations. However, many random process variations, such as random dopant fluctuation (RDF) and line edge and width roughness (LER, LWR), are fundamental limiting factors for parametric yield, and particularly acute for SRAM.

Traditionally, foundry engineers rely on experience and know-how from previous node development efforts to analyze and decide how to run different process splits for different process improvement scenarios to optimize SRAM yield. These efforts are often time-consuming and less effective at advanced nodes like 28nm because the optimization margin is much smaller.

The fab’s new SRAM yield flow used a high-sigma statistical simulator as the core engine. It provided fast and accurate 3-7+σ yield prediction and optimization functions for memory, logic and analog circuit designs. During process development, the tool proved its advantages in both accuracy and performance, and was validated by silicon in several rounds of tape-outs throughout the development process. It required no additional technology tuning or special tool settings, so even process engineers unfamiliar with EDA tools could run it and get reliable results to guide their process tuning for SRAM yield improvement.

The flow was able to predict SRAM yield for different process and operating conditions. It simulated SRAM yield improvement trends and provided process improvement direction and guidelines within hours. A methodology such as this becomes necessary for advanced nodes where the remaining optimization margin is small. A simulation-based methodology can run through all possible combinations that process engineers want to explore, providing better yield results and faster yield ramping. By comparison, the traditional approach of exploration based on experience and a large number of process splits is limited and may not find the optimum. It is also time-consuming, since engineers must wait for tape-out results and then run another set of trials, which can consume months.
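The reason a special-purpose engine is needed for 3-7+σ prediction is that brute-force Monte Carlo would need billions of samples to observe even one failure at those sigma levels. One family of variance-reduction techniques such engines can build on is importance sampling; the toy below estimates a 4σ tail probability with only 100,000 samples by sampling from a mean-shifted distribution and re-weighting. The 1-D Gaussian "parameter" and the pass/fail threshold are hypothetical, and this is a generic textbook technique, not the vendor's actual algorithm:

```python
# Toy mean-shifted importance sampling: estimate a 4-sigma one-sided tail
# probability (~3.2e-5) with ~1e5 samples, where plain Monte Carlo would
# need >1e7 samples to see a handful of failures. Hypothetical 1-D model.
import math
import random

def tail_prob_importance(sigma_target: float, n_samples: int, seed: int = 1) -> float:
    rng = random.Random(seed)
    shift = sigma_target               # center the proposal on the failure region
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(shift, 1.0)      # sample from the shifted distribution
        if x > sigma_target:           # "failure" event in the original metric
            # likelihood ratio N(0,1)/N(shift,1) re-weights back to the true density
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n_samples

est = tail_prob_importance(4.0, 100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(f"importance-sampling estimate: {est:.3e} (exact {exact:.3e})")
```

Because nearly every sample now lands in the failure region, the estimator converges with orders of magnitude fewer simulations, which is what makes hour-scale exploration of process scenarios possible.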

The flow saved months ramping up SRAM yield for the 28nm process node. It reduced iteration time and saved wafer cost. Process engineers now only need to fabricate selective wafers to validate simulation results. They know which direction is optimal and have guidelines to run process splits that will help them identify the best conditions and converge on the best yield. They gained greater certainty as they saw more simulation-to-silicon correlation data as the project progressed.

A well-established methodology and flow brings value to process engineers because they can rely on DFY high sigma simulations to lay the foundation for their process improvement strategies to reach certain SRAM yield targets. They can run selective process splits to verify the results for lower wafer costs, fewer process tuning iterations and faster time to market.

Overall, this is a highly successful and mutually beneficial partnership, and the value of DFY to process technology development is obvious. The same DFY methodology can be used by memory designers, as SRAM yield is their primary target as well; the only difference is that they tune design variables using the same methodology, flow and tool solutions.

It’s easy to see the value of a tight collaboration between the foundry, EDA vendor and design companies and why it will be a trend on top of the “foundry-fabless” business model.

About Dr. Lianfeng Yang

Lianfeng Yang, ProPlus Design Solutions, Inc.

Dr. Lianfeng Yang currently serves as the Vice President of Marketing at ProPlus Design Solutions, Inc. Prior to co-founding ProPlus, he was a senior product engineer at Cadence Design Systems leading the product engineering and technical support effort for the modeling product line in Asia. Dr. Yang has over 40 publications and holds a Ph.D. degree in Electrical Engineering from the University of Glasgow in the U.K.

Blog review September 8, 2014

Monday, September 8th, 2014

Jeff Wilson of Mentor Graphics writes that, in IC design, we’re currently seeing the makings of a perfect storm when it comes to the growing complexity of fill. The driving factors contributing to the growth of this storm are the shrinking feature sizes and spacing requirements between fill shapes, new manufacturing processes that use fill to meet uniformity requirements, and larger design sizes that require more fill.

Is 3D NAND a Disruptive Technology for Flash Storage? Absolutely! That’s the view of Dr. Er-Xuan Ping of Applied Materials. He said a panel at the 2014 Flash Memory Summit agreed that 3D NAND will be the most viable storage technology in the years to come, although panelists’ opinions were mixed on when that disruption would be evident.

Phil Garrou takes a look at some of the “Fan Out” papers that were presented at the 2014 ECTC, focusing on STATSChipPAC (SCP) and the totally encapsulated WLP, Siliconware (SPIL) panel fan-out packaging (P-FO), Nanium’s eWLB Dielectric Selection, and an electronics contact lens for diabetics from Google/Novartis.

Ed Korczynski says he now knows how wafers feel when moving through a fab. Leti in Grenoble, France does so much technology integration that in 2010 it opened a custom-developed people-mover to connect its cleanrooms (“Salles Blanches” in French), called a Liaison Blanc-Blanc (LBB), so workers can remain in bunny suits while moving batches of wafers between buildings.

Handel Jones of IBS provides a study titled “How FD-SOI will Enable Innovation and Growth in Mobile Platform Sales” that concludes that the benefits of FD-SOI are overwhelming for mobile platforms through Q4/2017 based on a number of key metrics.

Gabe Moretti of Chip Design blogs that a mature industry looks to the future, not just to short-term income. EDA is proving to be such an industry, with significant participation by its members in fostering the education of its future developers and users through educational licenses and other projects.

An EDA view of semiconductor manufacturing

Thursday, July 24th, 2014

By Gabe Moretti, Contributing Editor

The concern that there is a significant break between tools used by designers targeting leading-edge processes, those at 32 nm and smaller to be precise, and those used to target older processes was dispelled during the recent Design Automation Conference (DAC). In his June keynote address at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks and stated: “There seems to be a misconception that all advanced designs needed to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes.  There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

This is very important, since the number of companies that can afford to use leading-edge processes is diminishing due to the very high ($100 million and more) non-recurring investment required. And of course the cost of each die is also greater than with previous processes. If the tools could only be used by customers doing leading-edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that “when you think about design complexity there are few axes that might be used to measure it.  Certainly raw gate count or transistor count is one popular measure.  From a recent article in Chip Design a look at complexity on a log scale shows the billion mark has been eclipsed.”  Figure 1, courtesy of Cadence, shows the increase of transistors per die through the last 22 years.

Fig 1

Steve continued: “Another way to look at complexity is looking at the number of functional IP units being integrated together.  The graph in figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following.  This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node.  At the heart of the process complexity question are metrics such as number of parasitic elements needed to adequately model a like structure in one process versus another.”  It is important to notice that the percentage of IP blocks provided by third parties is getting close to 50%.

Fig 2

Steve concludes with: “Yet another way to look at complexity is through the lens of the design rules and the design rule decks.  The graphs below show the upward trajectory for these measures in a very significant way.” Figure 3, also courtesy of Cadence, shows the increased complexity of the Design Rules provided by each foundry.  This trend makes second sourcing a design impossible, since having a second source foundry would be similar to having a different design.

Fig 3

Another problem designers have to deal with is the increasing complexity due to decreasing feature sizes. Anand Iyer, Calypto Director of Product Marketing, observed: “Complexity of design is increasing across many categories, such as variability, design for manufacturability (DFM) and design for power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is forcing design performance to be evaluated across many more corners than at previous nodes. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor adding design complexity because power, especially dynamic power, is a major issue at these process nodes. Voltage cannot scale due to noise margin and process variation considerations, and capacitance is relatively unchanged or increasing.”

Impact on Back-End Tools

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a design rule file that is becoming very complex.  In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules.  Thus, building rule-specific Place and Route tools would directly lower the number of design rule checks required.

Mary Ann White of Synopsys answered: “We do not believe so.  Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects required to handle the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different.  The use of multi patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”
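The decomposition step Ms. White mentions can be sketched as graph coloring: shapes that sit closer than the minimum same-mask spacing share a conflict edge, and double-patterning decomposition succeeds exactly when the conflict graph is 2-colorable, i.e., bipartite. The following minimal sketch uses a hypothetical conflict graph, not any real design rule deck:

```python
# Double-patterning decomposition as 2-coloring of a conflict graph.
# Edges join shapes too close to share one mask; a coloring conflict
# (odd cycle) means the layout cannot be split across two masks.
from collections import deque

def decompose(conflicts):
    """Assign each shape to mask 0 or 1; return None on an odd-cycle conflict."""
    mask = {}
    for start in conflicts:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            shape = queue.popleft()
            for neighbor in conflicts[shape]:
                if neighbor not in mask:
                    mask[neighbor] = 1 - mask[shape]  # opposite mask
                    queue.append(neighbor)
                elif mask[neighbor] == mask[shape]:
                    return None                       # odd cycle: not 2-colorable
    return mask

# A chain of four shapes decomposes cleanly; a triangle of conflicts does not.
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
triangle = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(decompose(chain))     # alternating mask assignment
print(decompose(triangle))  # None -> needs a layout fix or triple patterning
```

This is why the decomposition process itself is foundry-independent even when the spacing rules that generate the conflict edges differ: the algorithm is the same, only the graph changes.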

Steve Carlson of Cadence shares a similar view: “There have been subtle differences between requirements at new process nodes for many generations.  Customers do not want to have different tool strategies for a second-source foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration).   In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers.  It is doubtful that different tools will be spawned for different foundries.  How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision.  The use model users want is singular across all foundry options.  How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy.  Time will tell.”

This is clear for now.  But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively.  Changing foundry will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools.  At the recently concluded DAC I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory in advanced technology nodes.  Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development and hence reducing time-to-market.  He said: “It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirement for design and optimization of particular circuits, systems and corresponding products.  One of the main challenges is to factor accurately the device variability in the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to process-induced variability related predominantly to silicon channel thickness or shape variation.”  He continued: “However, TCAD simulation, compact model extraction and circuit simulation are typically handled by different groups of experts, and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help the DTCO practices.”

Ansys pointed out that in advanced FinFET process nodes, the operating voltage of devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins. With several transient modes of operation in low-power ICs, an accurate representation of the package model is mandatory for accurate noise-coupling simulations. Distributed package models with bump-level resolution are required for Chip-Package-System simulations to deliver accurate noise-coupling analysis.

Further Exploration

The topic of Semiconductors Manufacturing has generated a large number of responses.  As a result the next monthly article will continue to cover the topic with particular focus on the impact of leading edge processes on EDA tools and practices.

This article was originally published on Systems Design Engineering.

Big sell: IP Trends and Strategies

Monday, March 10th, 2014

By Sara Ver-Bruggen, SemiMD Editor

Experts at the table: Continued strong growth for semiconductor intellectual property (IP) through 2017 has been forecast by Semico Research. Semiconductor Manufacturing & Design invited Steve Roddy, Product Line Group Director, IP Group at Cadence, Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify and Grant Pierce, CEO at Sonics to discuss how the IP landscape is changing and provide some perspectives, as the industry moves to new device architectures.

SemiMD: How are existing SIP strategies adapting for the transition to the 20 nm generation of systems-on-chip (SoCs)?

Roddy: The move to 22/16 nm process nodes has accelerated the trend towards the adoption of commercial Interface and physical IP. The massive learning curve in dealing with new transistor structures (FinFET, fully depleted SOI, high-k) raised the price of building in-house physical IP for internal consumption, thus compelling yet another wave of larger semiconductor IDMs and fabless semi vendors to leverage external IP for a greater share of their overall portfolio of physical IP needs.

Pierce: With 20 nm processes, the number of SIP cores and the size of memory accessed by those cores is seeing double digit growth. This growth translates into tremendous complexity that requires a solution for abstracting away the sheer volume of data generated by chip designs. The 20 nm processes will drive the need for SoC subsystems that abstract away the detailed interaction of step-by-step processing. For example, raising the abstraction of a video stream up to the level of a video subsystem; the collection of the various pieces of video processing into a single unit.
In this scenario, the big challenge becomes integration of subsystem units to create the final SoC. Meeting this challenge places a premium value on SIP that facilitates the efficient management of memory bandwidth to feed the growing number of SoC subsystems in the designs. Furthermore, 20 nm SoC designs will also place higher value on SIP that helps manage and control power in the context of applications running across these subsystems.

Smith: We are seeing many of the larger systems companies bypassing 20 nm entirely and moving from 28nm process technologies to the upcoming generation of 16 nm/14 nm FinFET technologies. FinFET offers the benefits of much lower power at equivalent performance or much higher performance at similar power to existing technologies. While 20 nm offers some gains, there are compelling competitive reasons to move quickly beyond 28/20 nm.
The demand for FinFET processes will naturally push the demand for the critical SIP blocks needed to support SoC designs at this node. SIP providers will need to migrate SIP blocks to the new technology and, for the most critical, will need to prove them out in silicon. The foundries will need to encourage this activity as SIP will typically make up more than 60-70% of the designs that will be slated for the new FinFET processes.

SemiMD: Within the semiconductor intellectual property (SIP) SoC subsystems market, which subsystem categories are likely to see most growth and how is the market evolving in the near term?

Pierce: Internet of Things (IoT) is causing an explosion in the number of sensors per device that are collecting huge amounts of data to be used locally or in the cloud. However, many of these sensors will need to operate at very low power levels, off of tiny batteries or scavenged energy. Sensor subsystems will need to carefully integrate the required processing and memory resources without support from the host processor. Some of the most interesting and challenging sensor subsystems will be imaging-related, where the processing loads can be highly dynamic, but the power requirements can be particularly challenging. Additionally, MEMS subsystems will grow in importance because this technology will often be used for power harvesting in IoT endpoint devices.

Smith: High-speed interfaces will see the most growth. DDR is at the top with DDR typically being the highest performance interface in the system and also the most critical. The DDR interface is at the heart of system operation and, if it does not operate reliably, the system won’t function. Other high-speed interfaces especially for video will also see tremendous growth, particularly in the mobile area.

Roddy: The emergence of a ‘subsystems’ IP market is to date over-hyped. That’s not to say that customers of IP are content with the status quo of 2008 where many IP blocks were purchased in isolation from a multitude of vendors. Customers do want a large portfolio of IP blocks that they can quickly stitch together, with known interoperability, provided with useful and usable verification IP. For that reason, we’ve seen a consolidation in the semiconductor IP business within the past five years, accelerating even further in 2012 and 2013. Larger providers such as Cadence can deliver a broad portfolio of IP while ensuring consistency, common support infrastructure, consistent best-in-class verification, and lowered transaction costs. But what customers don’t want is a pre-baked black-box that locks down system design issues that are best answered by the SOC designer in the context of the specific chip architecture. For that reason we expect to see slow growth in the class of ready-made, fully-integrated subsystems where the cost of development for the IP vendor far outweighs the added value delivered.

SemiMD: How will third-party SIP outsourcing models become more important as the industry embarks on the 20 nm generation of SoCs, and what are IP vendors doing to enable the industry’s transition?

Roddy: As the costs of physical IP development scale up with the increasing costs of advanced-node design, more consumers of IP are increasing the percentage of IP they outsource. Buyers of IP will always analyze the make-versus-buy equation by weighing several factors, including the degree of differentiation that a particular piece of IP can bring to their chips. Fully commoditized IP is easy to decide to outsource. Highly proprietary IP stays in house. But the lines are never black and white – there are always shades of grey. The IP vendors that can provide rapid means to customize pre-existing IP blocks are the vendors that will capture those incrementally outsourced blocks. The Cadence IP Factory concept of using automation to assemble and configure IP cores is one way that IP vendors can offer a blend of off-the-shelf cost savings with an appropriate touch of value-added differentiation.

Pierce: From a business perspective, SIP outsourcing is inevitable for all functions that are not proprietary to the end system or SoC. It will not be feasible to develop and maintain all the expertise necessary to design and build a 20 nm device. The demand to abstract up to a subsystem solution will drive a consolidation of SIP suppliers under a common method of integration, for example a platform-based approach built around on-chip networks. Platform integration will be a key requirement for SIP suppliers.

Smith: SIP vendors are looking to the foundries and/or large systems companies to become partners in the development of the critical IP blocks needed to support the move to FinFET.

SemiMD: Are there examples of the ‘perfect’ SIP strategy in the industry, in terms of leveraging internal and third party SIP?

Smith: Yes. Even the largest semiconductor companies go outside for certain SIP blocks. It is virtually impossible for any individual company to have the resources (both human and capital) to develop and support the wide variety of SIP needed in today’s most complex SoC designs.

Pierce: The perfect SIP strategy in the industry is one that readily enables use of any SIP in any chip at any time. Pliability of architecture over a broad range of applications is a winning strategy. Agile integration of SIP cores and subsystems will become a critical strategic advantage. No one company exemplifies perfect SIP strategy today, but the rewards will be great for those companies that get closest to perfection first.

Roddy: There is no one-size-fits-all IP strategy that is perfect for all SOC design teams. The teams have to carefully consider their unique business proposition before embarking on an IP procurement strategy. For example, the tier 1 market leader in a given segment is striving to define and exploit new markets. That Tier 1 vendor will need to push new standards; add new value-add software features; and innovate in hardware, software and business models. For the Tier 1, building key value-add IP in-house, or partnering with an IP vendor that can rapidly customize standards-based IP is the way to go. On the other end of the spectrum, the ‘fast follower’ company looking to exploit a rapidly expanding market will be best served by outsourcing as close to 100% as possible of the needed IP. For this type of company, speed is of the essence and critical is the need to partner with IP vendors with the broadest possible portfolio to get a chip done fast and done right.

SemiMD: What challenges and also what opportunities is China’s growing SIP subsystems market presenting for the semiconductor industry?

Roddy: China is one of the most dynamic markets today for semiconductor IP. The overall Chinese semiconductor market is growing rapidly and a growing number of Chinese system OEMs are increasing investment levels, including taking on SOC design challenges previously left to the semiconductor vendors. By partnering with the key foundries to enable a portfolio of IP in specific process technology nodes for mutual customers, the leading IP providers such as Cadence are setting the buffet table at which the Chinese SOC design teams will fill their plates with interoperable, compatible, tested and verified physical IP blocks that will ensure fast time to market success.

Pierce: China is a fast growing market for SIP solutions in general. It is also a market that highly values the time-to-market benefit that SIP delivers as the majority of China’s products are consumer-oriented with short design cycles. SIP subsystems will be the most palatable for consumption by the China market. However, because China has adopted a number of regional standards, there will be substantial pressure on subsystem providers to optimize for local standards.

Smith: We see tremendous opportunities in terms of new business for SIP from both established companies and many entrepreneurial startups. Challenges include pricing pressure and the concern over IP leakage or copying. While this has become less of an issue over the years, it is still a concern. The good news is that the market in China is very aggressive and willing to take risks to get ahead.

Safety critical devices drive fast adoption of advanced DFT

Monday, January 6th, 2014

By Ron Press, Mentor Graphics Corp

Devices used in safety-critical applications need to be known to work and must be verifiable on a regular basis. Therefore, a very high-quality test is important, as is a method to perform a built-in self-test. Recently, there has been strong growth in the automotive market, and the number of processors in each car is steadily increasing. These devices are used for more and more functions, such as braking systems, engine control, heads-up displays, navigation systems, image sensors, and more. As a result, we see many companies designing devices for the automotive market or trying to enter it.

2011 saw the publication of ISO 26262, which specifies functional-safety requirements for automotive electronics. Our experience is that two test requirements have recently been adopted, or at least evaluated, by most companies developing safety-critical devices. One is to perform a very high-quality test such that virtually no defective parts escape the tests. The other is to perform a built-in self-test so that the part can be tested while in the safety-critical application.

There are various pattern types that help support the goal of zero DPM (defective parts per million) shipped devices. In particular, Cell-Aware test is proven to uniquely detect defects that escape traditional tests. Cell-Aware test can find defects that would escape a 100% stuck-at, transition, and timing-aware test set. This is because it works by first modeling the actual defects that can occur in the physical layout of standard cells. Cell-Aware pattern size was recently improved and reduced, but a complete pattern set is still larger than a traditional pattern set, so embedded compression is used.

At Mentor Graphics, we started seeing more and more customers implementing logic BIST and embedded compression for the same circuits. Therefore, it made sense to integrate both into common logic that can be shared, since both technologies interface to scan chains in a similar manner. The embedded compression decompressor could be configured into a linear feedback shift register (LFSR) to produce pseudo-random patterns for logic BIST. Both the logic BIST and embedded compression logic provide data to scan chains through a phase shifter so that logic is fully shared. The scan chain outputs are compacted together in embedded compression. This logic is mostly shared with logic BIST to reduce the number of scan chain outputs that enter a signature calculator.
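The LFSR reconfiguration described above can be sketched in a few lines. This is a generic, textbook maximal-length 16-bit Galois LFSR (taps 0xB400, a well-known full-period configuration), not Mentor's actual decompressor logic; the seed value is illustrative:

```python
# Toy 16-bit maximal-length Galois LFSR of the kind a hybrid BIST controller
# can reconfigure its decompressor into for pseudo-random pattern generation.
# Taps 0xB400 give the full 2^16 - 1 period. Generic example, not Mentor's logic.
def lfsr_step(state: int) -> int:
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400          # feedback taps for a maximal-length sequence
    return state

def pseudo_random_patterns(seed: int, count: int, width: int = 16):
    """Yield `count` pseudo-random scan patterns, each `width` bits wide."""
    state = seed
    for _ in range(count):
        yield state & ((1 << width) - 1)
        state = lfsr_step(state)

# First few patterns from an arbitrary nonzero seed:
patterns = list(pseudo_random_patterns(0xACE1, 5))
print([f"{p:04x}" for p in patterns])
```

In hardware the same shift-register flops serve double duty: loaded from the tester they act as the compression decompressor, and free-running with feedback they act as the BIST pattern source, which is what makes the shared-logic controller smaller than two separate blocks.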

The hybrid embedded compression/logic BIST circuit is useful for meeting the safety-critical device quality and self-test requirements. In addition, since logic is shared the controller is 20-30% smaller than implementing embedded compression and logic BIST separately. As previously mentioned, we have seen this logic being adopted or in evaluation very broadly by automotive device designers.

One side effect of using embedded compression and logic BIST is that each makes the other better. For example, embedded compression can supply an extremely high quality production test. So, fewer test points are necessary in logic BIST to make random pattern resistant logic more testable, which reduces the area of logic BIST test points. Conversely, the X-bounding and any test points that are added for logic BIST make the circuit more testable and improve the embedded compression coverage and pattern count results.

Ron Press is the technical marketing manager of the Silicon Test Solutions products at Mentor Graphics. The 25-year veteran of the test and DFT (design-for-test) industry has presented seminars on DFT and test throughout the world. He has published dozens of papers in the field of test, is a member of the International Test Conference (ITC) Steering Committee, is a Golden Core member of the IEEE Computer Society, and a Senior Member of IEEE. Press has patents on reduced-pin-count testing, glitch-free clock switching, and patents pending on 3D DFT.

Blog Review: December 2, 2013

Monday, December 2nd, 2013

Phil Garrou completes his look at various packaging and 3D integration happenings from Semicon Taiwan, including news from Disco, Namics and Amkor. Choon Lee of Amkor, for example, predicted a silicon interposer cost of $2.70-4.00 per cm² (100 mm²) and expects organic interposers to cut that cost by about 50%.

Dynamic resource allocation can significantly improve turnaround time in post-tapeout flow. Mark Simmons of Mentor Graphics blogs about recent work that demonstrated 30% aggregate turnaround time improvement for a large set of jobs in conjunction with a greater than 90% average utilization across all hardware resources.

The MEMS Industry Group blog reflects on the trend toward sensor fusion and the role that hardware approaches such as FPGAs and microcontrollers will play in moving the technology forward.

44 years ago, the internet was born when two computers, one at UCLA and one at the Stanford Research Institute, connected over ARPANET (Advanced Research Projects Agency Network) to exchange the world’s first “host-to-host” message. Ricky Gradwohl of Applied Materials celebrates the “birthday” with thoughts on how far the internet has come.

A Call To Action: How 20nm Will Change IC Design

Thursday, February 21st, 2013

The 20nm process node represents a turning point for the electronics industry. While it brings tremendous power, performance and area advantages, it also comes with new challenges in such areas as lithography, variability, and complexity. The good news is that these become manageable challenges with 20nm-aware EDA tools when they are used within end-to-end, integrated design flows based on a “prevent, analyze, and optimize” methodology.

To download this white paper, click here.