Part of the Solid State Technology and The Confab Network


Posts Tagged ‘Mentor’

Is Test the Missing Link in Yield Optimization?

Monday, July 18th, 2011

By John Blyler

Earlier this year, SemiMD explored a yield-optimization technique that went beyond the traditional design-for-manufacturing (DFM) approach (see “Not Your Father’s DFM”). The goal of this relatively new technique—tentatively called “design for intelligent manufacturing” or “co-optimization”—was to optimize yields by providing greater feedback amongst semiconductor design, manufacturing, and testing. To find out what progress has been made since that time, SemiMD got an update from Mentor Graphics’ Jean-Marie Brunet, product marketing director for model-based DFM and integration to Olympus. Here are the highlights from that conversation:

SemiMD: Have there been any new developments on the design-for-intelligent-manufacturing or co-optimization front?

Brunet: We recently announced a new model for net-based critical-area analysis (CAA) and scan-test diagnostics that results in improved yields (see the Figure). We’ve worked with Samsung to develop and refine the model. This approach allows users to do a bit more yield prediction through CAA tools. In general, these tools provide a measure of the sensitivity to manufacturing defects that can increase the accuracy of yield models.

There’s already that link between design flow and yield prediction. Now, we’re adding a link to testing to increase the predictability of how sensitive a design is to process variability and random particle problems. These are things that can impact yield.
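The math underneath this kind of sensitivity measure is the textbook critical-area model: a particle of diameter x shorts two parallel wires when x exceeds their spacing, particle sizes are conventionally weighted by a 1/x^3 distribution, and the resulting average critical area feeds a Poisson limited-yield estimate. The Python sketch below illustrates only that textbook model; the function names and numbers are hypothetical, and this is not Mentor's or Samsung's actual CAA implementation.

```python
import math

def short_critical_area(run_length, spacing, defect_size):
    # A defect of diameter x bridges two parallel wires only when
    # x exceeds the spacing; the vulnerable area grows with run length.
    if defect_size <= spacing:
        return 0.0
    return run_length * (defect_size - spacing)

def avg_critical_area(run_length, spacing, x_min, x_max, steps=1000):
    # Average critical area weighted by the classic 1/x^3 defect size
    # distribution, normalized over [x_min, x_max] (midpoint integration).
    k = 2.0 / (1.0 / x_min**2 - 1.0 / x_max**2)
    dx = (x_max - x_min) / steps
    total = 0.0
    for i in range(steps):
        x = x_min + (i + 0.5) * dx
        total += short_critical_area(run_length, spacing, x) * (k / x**3) * dx
    return total

def poisson_yield(defect_density, critical_area):
    # Limited-yield estimate Y = exp(-D0 * Ac).
    return math.exp(-defect_density * critical_area)
```

In this toy model, tightening the spacing on a long parallel run increases the average critical area and drives the predicted yield down, which is exactly the kind of sensitivity a CAA tool quantifies.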

Figure: Net-based critical-area analysis (CAA) shows that a net has excessive parallel run length in Metal Layer 4.

SemiMD: Is test data being used as feedback to further improve future yields on a given product that’s already in production?

Brunet: Exactly. That is really what we have announced with Samsung. I’ll go over that shortly. But first, here’s a bit more background information. These optimization and failure-analysis tools are capable of doing a good assessment of yield and process sensitivity early in the design flow. Many integrated device manufacturers (IDMs) use these tools at tapeout. Additionally, IDMs do yield assessment based on rejects and test data that they accumulate during the manufacturing process. They collect all of this data to be sure that the knowledge is brought back to the designer, so that the next design is improved.

Our recent announcement with Samsung provides a link to testing. I don’t mean testing as in the design-for-test (DFT) activities that occur during tapeout. Rather, “testing” refers to tests on the tester, which is really a diagnostics issue. When you have rejects and faults, the traditional approach is to run many different vectors on a tester to pinpoint the source and location of the problem.

SemiMD: Are you linking diagnostic test to yield optimization?

Brunet: Yes. We’ve linked the test tools with certain features in the CAA portion of Calibre (e.g., the capability to use diagnostic data to trace a location on the net or bus). You may not know the exact what or why of a failure, but you can trace it to a particular net. We can look at the net from a geometric perspective to run analysis tools, such as CAA or lithographic simulations. The resulting information gives us more confidence in the nature of the problem (i.e., whether it’s systematic or random). Such feedback will lead to a process or design improvement.

SemiMD: By looking at the geometric data, you can tell on which net the problem occurs. Once you have that information, you can work backwards to determine the location in the design. Doesn’t that involve two different engineering camps—one in design and the other in manufacturing?

Brunet: That’s correct. Designers understand the geometry. They can pinpoint a starting and an ending point on the net. On the other hand, testers and process engineers have a more input/output (I/O) perspective (e.g., they know which pin is failing). To them, the chip is a black box. They cannot see inside it. Instead, they run a fault simulation to reveal problems on an I/O interface on a particular pin.
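As a toy illustration of that black-box view, the sketch below enumerates stuck-at faults on a two-gate circuit and keeps only those consistent with a failing value observed at the output pin, which is roughly what fault-model diagnosis does. The circuit, function names, and fault list are all hypothetical; this is not an actual DFT tool flow.

```python
# Toy two-gate circuit: n1 = a AND b; out = n1 OR c.
def simulate(pattern, stuck=None):
    # Evaluate the circuit, optionally forcing one internal net
    # to a stuck-at value; stuck is a (net_name, 0 or 1) pair.
    a, b, c = pattern
    n1 = a & b
    if stuck and stuck[0] == "n1":
        n1 = stuck[1]
    out = n1 | c
    if stuck and stuck[0] == "out":
        out = stuck[1]
    return out

def candidate_faults(pattern, observed):
    # Keep the stuck-at faults that reproduce the failing value seen at
    # the output pin, but only when the fault-free circuit disagrees.
    faults = [(net, v) for net in ("n1", "out") for v in (0, 1)]
    return [f for f in faults
            if simulate(pattern, stuck=f) == observed
            and simulate(pattern) != observed]
```

Running the pattern (1, 1, 0) with an observed failing output of 0 narrows the suspects to stuck-at-0 faults on n1 and out; the link Brunet describes then hands those candidate nets to geometric analysis.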

What has been missing is the link between what’s inside the box and what’s seen externally on the tester. Let’s say that you have a device under test (DUT) that’s functioning properly, but not at speed. Such a problem is usually attributed to a process-variability issue (i.e., the design is not robust and thus very sensitive to process variability). Furthermore, moving to advanced nodes only makes the problem of process variation worse.

SemiMD: How is this information useful? How do you address the problems that you find?

Brunet: Imagine that you have a manufacturing problem caused by copper pooling. This is very useful information that can be fed back to the manufacturer. You can pinpoint the area, explaining that you have more copper pooling on a particular net or bus than was expected. The manufacturer might acknowledge a slight manufacturing shift at that point. Or perhaps that level of pooling is normal, but not captured very well. These are the types of conversations you can now have with the manufacturing fab. This is a level of interaction that never really took place before.

SemiMD: What are the specifics of your recent announcement with Samsung?

Brunet: Internally, Samsung is using our net-based critical-area analysis, which links our Calibre CAA technology to the testing system. With this capability, Samsung can run CAA on a very particular net rather than running CAA on the full chip. It’s faster to look at a particular problem.
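A rough way to see why net-scoped analysis is faster: only the segments of the diagnostics-flagged net get scored, rather than every net on the chip. The Python sketch below uses a crude length-over-spacing proxy for short-circuit sensitivity; the layout data, names, and numbers are all invented, and nothing here reflects Calibre's actual data model or algorithm.

```python
# Hypothetical layout extract: net name -> list of
# (layer, parallel_run_length_um, spacing_um) segments.
LAYOUT = {
    "net_042": [("M4", 180.0, 0.056), ("M3", 12.0, 0.070)],
    "net_107": [("M2", 25.0, 0.064)],
}

def sensitivity(run_length, spacing):
    # Crude proxy: long parallel runs at tight spacing dominate the
    # short-circuit critical area, so weight length against spacing.
    return run_length / spacing

def rank_suspect_net(net, layout, top=1):
    # Score only the segments of one diagnostics-flagged net and return
    # the worst offenders, instead of scanning every net on the chip.
    segments = layout.get(net, [])
    ranked = sorted(segments, key=lambda s: sensitivity(s[1], s[2]),
                    reverse=True)
    return ranked[:top]
```

In this toy data, the long, tightly spaced Metal 4 run on net_042 ranks first, echoing the excessive-parallel-run-length case shown in the Figure.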

This is very interesting technology, but I don’t want to mislead you into thinking that we’re fixing all of the yield problems. We haven’t. Rather, this latest technology allows us to understand—with much greater speed and accuracy—those portions of a net or area in the design that are failing. That is the key concept.

SemiMD: Thank you.

++++++++++++++++

Bio:

Jean-Marie Brunet

Jean-Marie Brunet is the product marketing director for model-based DFM and integration to Olympus at Mentor Graphics Corp. Over the past 15 years, he has served in application engineering, marketing, and management roles in the EDA industry. Brunet also has held IC-design and design-management positions at STMicroelectronics, Cadence, and Micron, among others. His experience includes working with pure-play foundries to resolve complex yield issues related to OPC and RET. Brunet holds a master's degree in electrical engineering from the I.S.E.N. electronic engineering school in Lille, France. Jean-Marie Brunet can be reached at jm_brunet@mentor.com.

Not Your Father’s DFM

Tuesday, February 15th, 2011

By John Blyler

Historically, the design-for-manufacturing (DFM) approach had two goals. One was to ensure that a given design actually could be manufactured. The second goal was to determine how much yield improvement could be achieved with a given tool. But the costs to quantify the improvements in yield with actual data were too high to justify the effort.

The problem is exacerbated as you go below 40nm. Semiconductor fabs will not customize their manufacturing process to a customer-specific design style, except for a few large customers like Apple. Instead, fabs have normalized their process activities to serve the greatest number of chip customers and different markets.

But the fabless companies are not without resources. Many have a great deal of data that is returned with their test and production wafers after the first few tapeouts. While chip companies may have 30 to 40 tapeouts or more for a specific process node, the first several tapeouts typically provide data from tens of thousands of test and probe wafers per month. This is valuable data that could be fed back to designers to optimize yields in the next series of tapeouts.

“The EDA industry does not provide a mechanism or system to re-simulate a fix (adjusting line spacing, for example) to see its impact on future designs,” notes Michael Buehler, marketing director for the Design-to-Silicon Division at Mentor Graphics. “Today, we focus on the problem design by design, as opposed to looking holistically at a family of designs coming up.”

In the past, there was far less sensitivity to individual design features. Any problems that arose were fixed in the manufacturing process. At today’s advanced nodes, specific designs can create systematic failures that cannot be efficiently fixed by tweaking the process. Further, few fabs will want to re-center their production operation to fix one customer’s problems when it means that the rest of the customer base must change its manufacturing rules.

This idea of a feedback mechanism within the manufacturing process for a given family of chips at a given node is not new. But neither does it fit into the traditional DFM mindset.

What should this optimized process based on incremental feedback be called? Some have suggested Design for Intelligent Manufacturing. While descriptive enough, the resulting acronym is something of a problem: DIM. Other players tout the name Design-Manufacturing Co-Optimization, which highlights the collaborative nature of the approach.

Collaboration is indeed critical. This process is affected by everybody who impacts the yield, which includes the entire ecosystem. What was a traditional DFM tool path now becomes an information flow among the tools in design, place and route, production, and test. The results are fed back into the tool flow, simulated, optimized, and used to tweak the next tapeout. The fix, say to a line-spacing margin that turns out to be too tight, could happen in a number of different ways: in the manufacturing fab, with a change to the test or the router, or even in the IP.

This feedback and optimization process goes way beyond what was typically thought of as DFM. Few disagree on this point. About the only thing open for debate is what this new approach should be called.

Manufacturing Closure with Calibre InRoute and Olympus-SoC

Thursday, February 10th, 2011

Achieving manufacturing signoff is getting more difficult at each node due to significant manufacturing limitations and variability. This paper from Mentor Graphics describes the physical signoff challenges seen in advanced node designs. It then demonstrates how the Calibre InRoute platform provides faster and more reliable DRC/DFM signoff by using the Calibre verification and DFM platform to drive routing and optimization within the Olympus-SoC place and route environment.

Metric Pitch BGA And Micro BGA Routing Solutions

Tuesday, February 8th, 2011

The following paper provides Via Fanout and Trace Routing solutions for various metric pitch Ball Grid Array Packages. Note: the “metric” dimensions are the ruling numbers. To solve the metric pitch BGA dilemma, one should have a basic understanding of the metric feature sizes for:

  • BGA Ball Sizes and BGA Land Pattern Pad Construction
  • BGA Via Anatomy
  • Trace/Space
  • Trace and Via Routing Grid
