
Collaborative SoC Verification

March 23rd, 2016

By Matthew Hogan, Product Marketing Manager, Calibre Design Solutions, Mentor Graphics

With the widespread use of system-on-chip (SoC) designs, efficient integrated circuit (IC) design and validation is now a team sport. In many cases, the days when a single person was responsible for the entire design are long gone. Extensive intellectual property (IP) use, design re-use, and re-design from both internal and external sources have made successful IC design as much about efficient IP management and integration as it is about creating new blocks and functionality.

A lot of attention is paid to the top-level integration tasks of final assembly and full-chip verification, but these tasks happen towards the end of the IC design journey. While it is important to validate the interconnects and top-level assembly of your blocks in the design as a whole (Figure 1), how those blocks get there, bug-free and operationally correct, is an equally important aspect of meeting your design timelines.

Figure 1 - Multiple IP blocks comprise the top-level design, which must be validated as a whole.

Bottom-up design flows

Designing each IP block in isolation provides a great deal of autonomy, if isolation can actually be achieved. Issues such as interface definitions, as well as compliance with I/O switching and power consumption requirements, all pose challenges throughout the design flow. Validating individual IP blocks as you go, fixing each one as issues are found, provides a methodical and scalable workflow that accepts design growth and the addition of more functional blocks with relative ease. Understanding the context in which your IP will be used is an important aspect of the verification methodology that you employ. Interconnects must be robust, able not only to handle the internal voltages and currents that will be generated, but also to cope with the intended stresses of the final design assembly. At the level of an IP block that you control, this verification is a manageable task: ensure that your block either complies with the requirements and rule checks provided for validation, or that you have waiver documents in place for any outliers.

Top-down verification

Top-down full-chip verification is the most reliable methodology by far, when all the pieces are in place. However, that reliability comes at a cost. Waiting until all of the constituent IP blocks and elements are in place puts you towards the end of the project schedule. Time is short, schedules often slip, and final verification becomes a challenging and stressful period. Pulling in pre-verified IP blocks can limit the introduction of any newly-found violations to the integration process. How these blocks are connected, their implementation in the context of the larger design, and the application of previously created waivers can play a significant role in determining how challenging final verification is. A well thought-out verification flow can help immensely in the closing hours, but it’s not a one-person job when design issues are found, particularly those that span multiple IPs. The challenge is to effectively engage all team members, each with specific knowledge, in a collaborative manner across the whole chip.

Waiver flows need to be collaborative, too

Once “obvious” errors have been eliminated, the subtle job of understanding the interactions and nuances of the system being created falls to those who created it. Many times, a robust framework of IC validation checks can focus attention on those final few issues that will probably need specific IP knowledge to either waive or fix.

Many traditional waiver flows rely on a static model of verification results, and a single user wading through all results, for all IP blocks. While conceptually simple, this model creates a significant bottleneck in today’s large IP-based designs. With a high degree of IP re-use, the IP owner is best suited to validate the context of a flagged issue for that IP. The challenge, however, is to allow multiple IP owners to review, waive, and interact with the results for the entire design at the same time. Their efforts need to be collaborative and additive. Fighting for a “timeshare” on a design and asking others to stop working is not a productive solution. Moving forward, existing automated waiver management technology can and should be employed to support simultaneous waiver analysis and identification for multiple IP.

Conclusion

The ways we design and validate the complex interactions in designs with significant IP content have evolved over time to accommodate the changing requirements of such designs. However, waiver methodologies have lagged behind, creating potential bottlenecks. Validating IP blocks in isolation (including waiver annotation) as they are being designed can help, as can employing automated waiver management at different levels of design integration. Waiver flows that we have become accustomed to using for individual IP must evolve to accommodate multiple IP owners and the specific knowledge they hold for how these blocks are used in the context of larger SoC designs.

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

System-Level MEMS Design: An Evolutionary Path to Standardization

February 16th, 2016

By: Qi Jing, Technical Marketing Engineer, Mentor Graphics Corporation

Introduction

Successful design of highly-integrated IoT systems (Figure 1) requires simulating MEMS components together with the peripheral circuitry. However, MEMS devices are traditionally designed using CAD tools that are completely different from IC design tools. In the past two decades, both academia and industry have been seeking new methodologies and have chosen to implement multi-disciplinary MEMS design within the IC design environment. Performing MEMS-IC co-simulation in the IC design environment allows designers to take advantage of the advanced analog circuit solvers and system verification capabilities that IC tools offer.

Figure 1: Typical IoT design.

A good system-level design methodology should support MEMS device models and structure representations that are compatible with the IC design flow, and provide simulation accuracy and speed comparable or superior to those of typical analysis tools in the appropriate physical domains. It should also provide broad coverage of physical effects, and be able to handle large systems. The three methodologies in use today for system-level MEMS modeling and simulation are:

  • Lumped-element modeling with equivalent circuits
  • Hierarchical abstraction of MEMS and analytical behavioral modeling
  • MEMS behavioral modeling based on Finite Element Analysis (FEA) and Boundary Element Analysis (BEA)

Lumped-Element Modeling with Equivalent Circuits

To implement SPICE-compatible modeling and simulation for MEMS, the most straightforward method is to create equivalent circuits for MEMS based on lumped-element modeling. For example, Figure 2(a) shows a spring-mass-damper system. A formal analogy can be derived between the mechanical and electrical elements, leading to an equivalent circuit with an “in series” topology, as Figure 2(b) shows. Similarly, an “in parallel” circuit analogy can be derived, as Figure 2(c) shows.

Figure 2: (a) A spring-mass-damper system (b) Equivalent “in series” circuit topology (c) Equivalent “in parallel” circuit topology.
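
For reference, the “in series” analogy in Figure 2(b) follows from the fact that the spring-mass-damper equation of motion and the series RLC mesh equation share the same mathematical form (a textbook correspondence, stated here in general terms rather than taken from the figure):

\[ m\,\ddot{x} + b\,\dot{x} + k\,x = F(t) \qquad\longleftrightarrow\qquad L\,\ddot{q} + R\,\dot{q} + \frac{q}{C} = V(t) \]

so mass maps to inductance, damping to resistance, compliance 1/k to capacitance, applied force to voltage, and velocity to current. The “in parallel” analogy of Figure 2(c) is derived the same way from the parallel RLC node equation, with force mapping to current and velocity to voltage.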

Although the equivalent-circuit methods appear straightforward, designers must be aware of their viability and limitations. First, the analogies shown in Figure 2 are based on the assumption that the MEMS structure can be significantly simplified into a spring-mass-damper system, and that the effective mass, stiffness, and damping factor can be derived. This is only suitable for simple MEMS devices. For complex devices, the derivation can be too complicated to be practical.

Secondly, the equivalent circuits are not easy to extend. Designers have to re-derive new models in order to account for additional physical effects or to adapt to changes in geometry, topology, or boundary conditions of the design.

Therefore, it is not uncommon for designers to determine that equivalent-circuit methods are too difficult or impossible to implement. More advanced methodologies are needed.

Hierarchical Abstraction of MEMS and Analytical Behavioral Modeling

In IC design, complex systems are built up hierarchically using building blocks at different abstraction levels. Hierarchical schematics are created to represent systems as structural networks comprising instances of these building blocks, connected together based on design topologies. Similar ideas have been explored and applied to MEMS design.

Figure 3 provides an example of the hierarchical abstraction of a folded-flexure resonator that contains a MEMS transducer and an electrical interface circuit. The MEMS transducer is an electrostatic device that is hierarchically built using a set of functional-level elements, each of which is further decomposed into atomic-level elements.

Figure 3: Hierarchical abstraction of a folded-flexure resonator.

Behavioral models for MEMS elements can be written in analog hardware description languages such as Verilog-A, Verilog-AMS, and VHDL-AMS. The resulting models are compatible with SPICE simulators and thus serve well for co-simulation. Analytical behavioral models for MEMS contain the following:

  • Definition of terminals, with the associated physical disciplines specified.
  • Definition of model parameters, including material and process properties as well as geometric sizing and layout orientation parameters.
  • Description of model behavior using a set of Differential Algebraic Equations (DAEs) that govern the relationships between the across and through variables of the terminals, with coefficients formed by parameters and internal variables (a simple example is sketched below).
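
As a minimal illustration of such a DAE description (a generic textbook form for a one-degree-of-freedom electrostatic transducer, not a model from any particular library), a parallel-plate actuator with plate area A, nominal gap g0, movable mass m, suspension stiffness k, and damping b could be described by:

\[ m\,\ddot{x} + b\,\dot{x} + k\,x = \frac{\varepsilon_0 A\, v^2}{2\,(g_0 - x)^2}, \qquad i = \frac{d}{dt}\!\left[\frac{\varepsilon_0 A}{g_0 - x}\, v\right] \]

where x and the reaction force are the variables of the mechanical terminal, v and i are the across and through variables of the electrical terminal, and m, b, k, A, and g0 are model parameters of the kind listed above.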

It’s crucial to obtain precise values of the material and process parameters in order for the models to match silicon. For standardized MEMS designs, foundries have started to develop and offer MEMS PDKs. For novel MEMS designs, designers have to fabricate test structures first, then extract the parameters from lab measurements.

After models are ready, they form model libraries that can be used for many designs in the appropriate design space. For example, atomic-level elements shown in Figure 3 not only serve as the foundation for folded-flexure resonators, but also work for many other typical suspended MEMS designs, such as accelerometers, gyroscopes, resonator filters, micro mirrors, and RF switches. Model libraries make it possible for people unfamiliar with MEMS to use the models for system integration, and help protect MEMS IP.

Due to the large variety of MEMS designs in terms of underlying physics, fabrication processes, and design styles, no model library can be a universal solution that fits all. If a device employs unique, irregular geometries, or involves physical mechanisms that are not well understood, a new model has to be developed from scratch.

MEMS Behavioral Modeling Based on FEA/BEA

Because the geometries supported by analytical models are discrete and limited, MEMS designers sometimes resort to Finite Element Analysis (FEA) and Boundary Element Analysis (BEA) tools. FEA/BEA tools use conventional numerical analysis methods for simulations in the mechanical, electrostatic, magnetic, and thermal domains. They often rely on auto-meshers to partition a continuum structure into a mesh composed of low-order finite elements. The tools then construct system matrices based on the mesh and solve them under the given boundary conditions.

Efficient simulation of coupled physical domains is often a challenge for FEA/BEA-based tools. For example, to model the interaction between the mechanical and electrostatic domains, some FEA/BEA tools must perform analyses for each domain separately and iteratively until a converged solution is found. Superior tools can simulate coupled domains together, but the simulation is computationally expensive and may result in unacceptable run times.

To alleviate the limitations of FEA/BEA-based methods while still exploiting their strengths, Reduced Order Modeling (ROM) has been deployed, effectively bridging the gap between traditional FEA/BEA tools and electrical circuit simulators. ROM is a numerical methodology that reduces the degrees of freedom in the system matrices to create macromodels for MEMS devices. The resulting models can be constructed in languages such as Verilog-A, then exported into SPICE simulators for co-simulation.
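
To make the idea concrete, the following is a minimal sketch of one common reduction technique (modal truncation) applied to mass and stiffness matrices exported from an FEA tool; it is illustrative only, and is not the reduction algorithm of any particular EDA product.

import numpy as np
from scipy.linalg import eigh

def modal_truncation(M, K, n_modes=4):
    """Project a full-order (M, K) system onto its n_modes lowest eigenmodes."""
    # Generalized eigenproblem K*phi = lambda*M*phi; eigh returns ascending eigenvalues.
    _, eigvecs = eigh(K, M)
    Phi = eigvecs[:, :n_modes]                      # reduction basis: lowest-frequency mode shapes
    return Phi.T @ M @ Phi, Phi.T @ K @ Phi, Phi    # reduced M, reduced K, basis

# Usage with a toy tridiagonal system standing in for a meshed MEMS structure.
n = 200
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness matrix
M = 1e-9 * np.eye(n)                                      # lumped mass matrix
M_r, K_r, Phi = modal_truncation(M, K)
print(M_r.shape, K_r.shape)   # (4, 4) (4, 4): a macromodel-sized system

The reduced (M_r, K_r) pair is the kind of compact system that would then be wrapped in a behavioral macromodel for SPICE co-simulation.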

Today’s ROMs can be built not only from FEA/BEA results, but also from user-defined analytical equations and experimental data. Parameters in the reduced models can be preserved, so that design variations can be evaluated without repeating the FEA and model order reduction process. This enhances the coverage and efficiency of model libraries based on FEA/BEA and ROM.

Like all modeling methodologies, FEA/BEA-based methods cannot fully cover the entire MEMS design space either. Physical effects, as well as design and process imperfections, must be pre-defined in the original FEA/BEA model in order to be captured. In addition, the creation of accurate models requires not only a solid understanding of the underlying physics of MEMS devices, but also knowledge of both the FEA/BEA tools and the model order reduction process.

Conclusion

To meet the need for MEMS-IC co-simulation, multiple modeling and simulation methodologies have been proposed, explored, and developed over the past two decades. Equivalent-circuit methods, structural analytical behavioral modeling, and reduced-order modeling based on FEA/BEA, are all effective methods and each has its own advantages and limitations. Knowing when to use which type of modeling method is important:

  • When the design is small and simple, equivalent-circuit methods are the most straightforward.
  • When the design is decomposable and the geometry, process, and dominant physical effects are close to what was used in the creation of primitive model libraries, hierarchical analytical modeling and structural system composition are the best choice.
  • For unique designs using complex geometries, ROM methods based on FEA/BEA are more flexible and powerful.

For IC design, it took decades of academic and industrial endeavor for models, SPICE simulators, and foundry PDKs to emerge, mature, and converge into well-adopted industry standards. MEMS modeling and simulation needs to go through the same evolutionary path. This path has even more challenges than IC design, due to the much broader multi-physics coverage of MEMS and the diversity of MEMS manufacturing processes, applications, and design styles. Joint effort from design companies, foundries, and EDA tool vendors is required to enable this evolution. For more information about system-level MEMS modeling and simulation, download the whitepaper “System-Level MEMS Design – Exploring Modeling and Simulation Methodologies”.

Five Steps to Double Patterning Debug Success

January 26th, 2016

By David Abercrombie, Program Manager, Advanced Physical Verification Methodology, Mentor Graphics

Has debugging double patterning (DP) errors got you pulling your hair out, or wishing you had pursued that career in real estate, like your mom suggested? Now you can unlock the secrets of DP debugging in five easy steps! Once you learn these steps, you’ll be the envy of your team, as you deliver clean DP designs on schedule, and still have time to eat lunch each day!

So, what are the five steps of successful DP debugging? Are you alone? Okay, lean in and listen closely…There are five types of errors you typically find in a DP design layer (excluding any errors associated with using stitches), and the order in which you debug them can make the difference between success and an endless insanity loop of debug-fix-check. I hope you’re taking notes, because here are the five steps in which you should debug your DP errors…

1. Debug all minimum opposite mask spacing errors first.

This condition is the most fundamental DP error you will encounter. Minimum opposite mask spacing errors, just like traditional design rule checking (DRC) spacing errors, involve only two polygons and the space between them. However, there is no coloring solution that resolves the error. In addition, violating a minimum spacing constraint can create other misleading DP errors, as shown in Figure 1.

Figure 1: Minimum opposite mask spacing errors can generate unnecessary odd cycle DP errors.

When these two polygons violate the minimum opposite mask spacing constraint, they also create a diagonal tip-to-tip separator constraint in the layout, which leads to two odd cycle violations between the original two polygons and the two adjacent polygons. Because designers often assume the best way to fix an odd cycle is to adjust any of the spaces involved in the odd cycle, they can end up making two corrections to fix these odd cycle errors without ever correcting the original minimum spacing violation. However, if you fix the minimum spacing violation first, the odd cycles never occur, so you fix both issues at once.

2. Correct all self-conflict same mask spacing errors.

These errors occur when a single polygon has notch spaces within itself that violate minimum same-mask spacing constraints. This error type is also isolated to a single polygon, but when that polygon interacts with other polygons, other error types can occur. Fixing this error eliminates these secondary errors (Figure 2).

Figure 2: Self-conflict same mask spacing error causing unnecessary odd cycle DP errors.

The red polygon is in conflict with itself. This error is usually flagged by highlighting the polygon. The separator constraints that form in this layout example create an odd cycle error. Again, fixing the self-conflict error fixes the odd cycle error as well.

3. Next, resolve all anchor self-conflict errors.

Anchor self-conflict errors are the result of conflicting anchor requests associated with a single polygon. Depending on the automated coloring solution your DP tool selects, an anchor path error may or may not be created. Resolving these errors removes that uncertainty (Figure 3).

Figure 3: Anchor self-conflict errors can potentially cause anchor path errors.

The layout contains a single polygon with two color anchor markers. The separator interactions with other polygons create a path from this polygon down to the bottom polygon. Due to the marker conflict, the DP tool has no guidance, so it randomly decides which color to assign to the polygon. If the tool selects the green anchor, no anchor path error is created, but if the tool chooses the blue anchor, an anchor path error to the green anchored polygon at the bottom is created. However, if you eliminate all anchor self-conflict errors first, any anchor path errors you encounter will be deterministic, rather than random and unpredictable.

4. Fix all odd cycle errors.

Because odd cycle errors can lead to an anchor path error (Figure 4), fix all odd cycle errors next.

Figure 4: Odd cycle errors can lead to anchor path errors.

Due to the separator interactions with other polygons, the odd cycle interacts with two anchored polygons at the top and bottom, creating an anchor path error. By adjusting any one of the spacings in the odd cycle, both the odd cycle error and the anchor path error are fixed.
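
To see why odd cycles are so fundamental (and why fixing any one spacing breaks the cycle), remember that decomposition is a two-coloring of a conflict graph whose edges are sub-minimum same-mask spacings, and a graph can be two-colored only if it contains no odd cycle. The toy sketch below (plain Python, not any DP tool’s algorithm) shows the check:

from collections import deque

def two_color(conflict_edges):
    """Return a {polygon: mask} assignment, or None if an odd cycle makes it impossible."""
    graph = {}
    for a, b in conflict_edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    color = {}
    for start in graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in color:
                    color[nbr] = 1 - color[node]
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return None        # same mask forced on both ends: odd cycle
    return color

# Three mutually conflicting polygons form an odd cycle (length 3) -> no legal coloring.
print(two_color([("A", "B"), ("B", "C")]))              # {'A': 0, 'B': 1, 'C': 0}
print(two_color([("A", "B"), ("B", "C"), ("C", "A")]))  # None

Removing any single edge of that three-polygon cycle (by increasing that spacing) makes the graph two-colorable again, which is exactly why adjusting any one spacing in Figure 4 clears both errors.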

5. Finally, fix the anchor path errors.

Why save these for last? If you try to fix the anchor path error in Figure 4 before looking at the odd cycle error, you might decide to adjust the space between the top anchor and the middle polygon, or the space between the bottom anchor and the middle polygon. Both of those corrections fix the anchor path error, but leave the odd cycle error in place. Correcting all of your odd cycle errors first can save you a lot of debugging time by making some of those anchor path errors simply vanish.

Following these five steps won’t magically make all your double patterning errors disappear. But they will help you avoid making unnecessary design changes, as well as reduce your overall debugging cycle time. Double patterning design is challenging enough; don’t allow error complexity to make it any harder. Try following these five debugging steps on your next layout, and I think you’ll be pleasantly surprised.

For more information about double patterning debugging, watch “Why IC Designers Need New Double Patterning Debug Capabilities at 20nm” with Jean-Marie Brunet of Mentor Graphics.

David Abercrombie is the Program Manager for Advanced Physical Verification Methodology at Mentor Graphics. Since coming to Mentor, he has driven the roadmap for developing new and enhanced EDA tools to solve the growing challenges in advanced physical verification and design for manufacturing (DFM). Most recently, he has directed the development of solutions for multi-patterning decomposition and checking. Prior to joining Mentor, David managed yield enhancement programs in semiconductor manufacturing at LSI Logic, Motorola, Harris, and General Electric. He is extensively published in papers and patents on semiconductor processing, yield enhancement, and physical verification. David received his BSEE from Clemson University, and his MSEE from North Carolina State University. He may be reached at david_abercrombie@mentor.com.

Electromigration and IC Reliability Risk

December 10th, 2015

By Dina Medhat, Mentor Graphics

Electromigration (EM) is the transport of material caused by the gradual movement of the ions in a conductor, due to the momentum transfer between conducting electrons and diffusing metal atoms (Figure 1). The EM effect is important in applications where high direct current densities are used, such as in microelectronics and related structures. As the structure size in electronics such as integrated circuits (ICs) decreases, the practical significance of the EM effect increases, decreasing the reliability of those ICs.

Figure 1: EM is caused by the momentum transfer from electrons moving in a wire. (source: Wikipedia)

EM can cause the eventual loss of connections, or failure of an entire circuit. Since reliability is critically important for applications such as space travel, military systems, anti-lock braking systems, and medical equipment and implanted devices, and is a significant consumer demand in personal systems such as home computers, entertainment systems, mobile phones, and the like, the reliability of ICs is a major focus of research efforts in the semiconductor industry.

Reliability risk goes beyond that of physical device reliability (a challenge unto itself), extending to interconnects and their susceptibility to EM effects. Failure analysis techniques can identify failure types, locations, and conditions, based on empirical data, and use that data to refine IC design rules.

Let’s look at one approach using the Calibre® PERC™ reliability solution. The Calibre PERC tool can perform topology identification for pins/nets of interest, run parasitic extraction and static simulation, compare the results against EM constraints, then present violations for debugging using the Calibre RVE™ results viewing environment (Figure 2).

Figure 2: Automated EM analysis flow.

With basic EM analysis explained, let’s discuss in greater detail some selected EM analysis techniques, such as current density analysis, Blech Effect analysis, and hydrostatic stress analysis. Current density analysis seeks to identify the maximum current any piece of metallization can sustain before failing; current densities below this threshold can be used to predict EM effects over time. The Blech Length is a process- and layer-defined wire length below which EM effects are unlikely to occur; by finding these short wires, designers can quickly eliminate error results representing false violations. Hydrostatic stress analysis derives the degradation of the electrical resistance of interconnect segments from the solution of a kinetics equation describing the time evolution of stress in the interconnect segment.
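
For readers unfamiliar with these terms, the short-length (Blech) immortality criterion and a Korhonen-type stress-evolution equation commonly quoted in the EM literature take roughly the following forms; the exact formulations, parameters, and thresholds used for sign-off are foundry- and tool-specific:

\[ j\,L \;\le\; (jL)_{crit} \;=\; \frac{\Omega\,\Delta\sigma_{max}}{e\,Z^{*}\rho}, \qquad \frac{\partial \sigma}{\partial t} \;=\; \frac{\partial}{\partial x}\!\left[\frac{D_a B\,\Omega}{k_B T}\left(\frac{\partial \sigma}{\partial x} + \frac{e Z^{*} \rho\, j}{\Omega}\right)\right] \]

where j is the current density, L the segment length, Ω the atomic volume, Z* the effective charge number, ρ the resistivity, Δσmax the maximum back-stress the segment can sustain, Da the effective atomic diffusivity, B the effective bulk modulus, and σ the hydrostatic stress whose nodal values are compared against σcrit below.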

A toolset that can combine geometrical and electrical data, like the Calibre PERC™ logic-driven-layout framework, can dynamically and programmatically target reliability checks to specific design features and elements. This flexibility allows designers to selectively target and dynamically configure EM analysis to those specific interconnect wires that are most critical, or most susceptible to EM failure. This design-context-aware interconnect reliability technology provides a scalable, full-chip EM analysis and verification solution that considers interconnect resistance, the Blech Effect, and nodal hydrostatic stress analysis for failure prediction. It also allows designers to apply EM analysis techniques to a broad range of designs and process technologies, with only minor adjustments to the setup and configuration.

Although fixed constraints work well in most IC verification cases, EM analysis and verification require a much more flexible constraint mechanism. In current density analysis, allowing current density constraints to be a function of the properties of the parasitic resistor (such as its length and width) enables layouts to contain resistors with a smaller length and width and a higher current density. A dynamic constraint infrastructure allows the current density constraint to be adjusted based on the parasitic resistor properties.

In Blech Effect analysis, the Calibre PERC solution provides access to the measured EM length for any interconnect tree. If the longest path of the interconnect tree is shorter than the Blech Length, the tool returns a current density constraint of some very large value, which acts as a constraint waiver for any resistor with a segment on that immortal interconnect tree.
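
A hypothetical sketch of what such a geometry-dependent constraint with a Blech-length waiver could look like is shown below; this is plain Python illustrating the concept only, and the function name, units, and numeric values are invented rather than Calibre PERC syntax or foundry values.

def current_density_limit(width_um, length_um,
                          j_base=1.0,            # baseline limit in mA/um (invented)
                          blech_length_um=25.0): # layer Blech length (invented)
    """Return the allowed current density for a parasitic resistor segment."""
    if length_um < blech_length_um:
        return 1e9                               # effectively a waiver: the segment is "immortal"
    width_derating = min(1.0, width_um / 0.5)    # invented width dependence
    return j_base * width_derating

print(current_density_limit(width_um=0.2, length_um=10.0))    # short wire: waived
print(current_density_limit(width_um=0.2, length_um=500.0))   # long narrow wire: derated limit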

Hydrostatic stress analysis must be performed on each interconnect tree. For each node, the Calibre PERC tool compares σi to σcrit. For any interconnect tree where σi ≥ σcrit, the interconnect tree and its individual nodes can then be highlighted in a layout viewer as possible EM failure locations. The determination of σcrit is a function of process technology and segment geometry, and ideally should be provided by the foundry.

Once the EM analysis is complete, an important aspect of ensuring reliability is debugging any errors or issues. Figure 3 demonstrates the debugging of EM violations by grouping and sorting them, then using colormaps to see current density violations and their severity on the layout.

Figure 3: Debugging EM violations

Combining hydrostatic stress analysis with Blech Effect and current density analysis provides a well-rounded platform for the prediction of EM failure, allowing designers to filter out trees that are considered immortal. With the knowledge gained from such analyses, design rules can be modified to eliminate or minimize EM conditions in future designs. Using a reliability analysis tool like the Calibre PERC solution, designers can be more confident that their layouts are resistant to the long-term effects of EM, and will perform as designed for the intended lifetime of the product.

Dina Medhat is a technical lead for Calibre Design Solutions at Mentor Graphics. She has been with Mentor Graphics for ten years in various product and technical marketing roles. She holds a BS and an MS from Ain Shams University in Cairo, Egypt. She can be reached at dina_medhat@mentor.com.

Technical Workshops – Providing Access to the Industry’s Best

December 1st, 2015

By Matthew Hogan, Product Marketing Manager, Calibre Design Solutions

It may not seem like such a revelation, but many of the opinions and traits we carry around with us are often attributable to our peer group. From a professional perspective, this could include colleagues, advisors, managers, and a host of other influencers that have crossed your path along the way. Good, bad, or indifferent, these experiences influence how you work and what you consider “normal.” In some of the focused and specialized fields of IC design and verification, like electrostatic discharge (ESD) and reliability, it is often a challenge to find and connect with suitably well-informed individuals that you can bounce ideas off, learn from, and grow with.

There are a number of pockets of excellence within the industry, but if you are not fortunate enough to have been introduced to the right post-graduate program or advisor, or to work in a company that supports a thriving eco-system of like-minded individuals, you’re pretty much left to your own devices in a vacuum. So, if you are working on an island, how do you build bridges to other experts in your field, outside your organization? One way to gain exposure to new ideas, techniques and best practices is to attend industry conferences. Another is to forgo the large-scale format that conferences provide, and look at what workshops have to offer.

Not familiar with the workshop format? Generally speaking, workshops provide 3-4 long days with the same folks, in an environment probably a lot like those summer camps you attended as a kid. You all eat together, attend the keynote, invited talks, and paper/poster presentations together, and participate in one or more discussion groups occupying the evenings. The focus of a workshop is, by design, much narrower than a large industry conference, so everyone attending has the same range of interests and issues. Overall, with the smaller groups of the workshop format, there is a lot of time for discussion and interactions with others. Want to know something? Ask! In my experience, the pedigree of attendees is often outstanding, with a welcoming and inclusive disposition to newcomers looking to learn more about the field. None of us are experts in every field, and being able to learn firsthand from insightful and interactive discussions only bolsters the learning experience. Another advantage extends past the workshop itself—the forging of professional relationships that can provide valuable advice, consultation, and collaboration long after the event is finished.

Over the last five years, I’ve seen a plethora of emails turn up in my inbox, proclaiming the 2nd or 3rd annual workshop on such and such a topic. These organizations are getting the ball rolling. I’ve even seen a number of 1st annual invitations. While I haven’t kept track of how many of these newer workshops survive to maturity, two established events that I’m particularly fond of are the International ESD Workshop, which is starting to ramp up for its 2016 event (its tenth year), and the International Integrated Reliability Workshop, which can trace its origins as far back as 1982. For me, these legacies have demonstrated that smaller, focused groups with a high degree of interaction and discussion bring participants together, not only to focus on the program material, but also to build a sense of community within a tight-knit group.

I’d be interested to hear about your experiences of attending both conferences and workshops. For me, each has its place, but the workshop format provides a significantly more robust and in-depth framework to share a lot of ideas in a short, concentrated period of time, while really getting to know colleagues in your field.

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

LEF/DEF IO Ring Check Automation

October 26th, 2015

By Matthew Hogan, Mentor Graphics

Background

Designing today’s complex system-on-chips (SoCs) requires careful consideration when planning input/output (IO) pad rings. Intellectual property (IP) used in an SoC often comes from multiple IP vendors, and can range from digital/analog cores to IO pads, power/ground pads, termination cells, etc. Each vendor has its own rules for these IO rings to protect the IP from electrostatic discharge (ESD) and other reliability concerns. The constraints for these rules are different from one foundry to another, as well as from one technology node to another, or from one IP vendor to another (Figure 1).

Fig. 1: Sample rule file constraints from IP supplier [1].

While detailed rules are available from each IP vendor on how to comply with their IO ring layout rules, what is not generally available are instructions for applying those rules in the presence of other IPs. Typically, SoC designers have IP from a CPU supplier and memory supplier, in addition to the IO cells. A holistic and integrated approach that allows for all of these IP pads to co-exist is needed.

Foundries provide a design rule manual (DRM) that contains guidelines for pad cell placement to protect against ESD. Typical rules found in a DRM include the following (a toy implementation of one such check is sketched after the list):

  • Cell types that can or must be used in an IO ring
  • Minimum number of a specified power cell per IO ring section and given power domain
  • Maximum spacing between two power cells for a given power/ground pair in a power domain
  • Maximum distance from the IO ring section termination (breaker cells) to every power cell
  • Maximum distance from IO to closest power cells
  • Cells that must be present at least once per corresponding power domain section
  • Constraints for multi-row implementation
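
As a toy sketch (not the Calibre PERC implementation) of one DRM-style rule from the list above, the check below verifies the maximum allowed spacing between consecutive power cells along an unrolled IO ring edge. In a real flow the placements would come from the DEF; here the data structure, cell names, and spacing limit are all invented for illustration.

from dataclasses import dataclass

@dataclass
class PadCell:
    name: str           # instance name from the DEF
    kind: str           # "io", "pwr", "gnd", "breaker", ...
    position_um: float  # 1-D coordinate along the unrolled ring edge

def check_max_power_spacing(cells, max_spacing_um=750.0):
    """Return violations where adjacent power cells sit farther apart than allowed."""
    pwr = sorted((c for c in cells if c.kind == "pwr"), key=lambda c: c.position_um)
    violations = []
    for a, b in zip(pwr, pwr[1:]):
        gap = b.position_um - a.position_um
        if gap > max_spacing_um:
            violations.append((a.name, b.name, gap))
    return violations

ring = [PadCell("VDDIO_1", "pwr", 0.0), PadCell("IO_A", "io", 300.0),
        PadCell("IO_B", "io", 600.0), PadCell("VDDIO_2", "pwr", 900.0)]
print(check_max_power_spacing(ring))   # [('VDDIO_1', 'VDDIO_2', 900.0)] -> spacing violation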

The SoC designer is then tasked with achieving the desired ESD protection across the SoC while incorporating all of the dissimilar cells and their unique rules. Needless to say, manual approaches are time-consuming, and subject to the ever-present human error factor. An automated solution that can implement the checking needed to consider all rule interactions provides a substantive improvement in both time and quality of results.

LEF/DEF IO Ring Checker

In collaboration with an IP design company, Mentor Graphics developed an automated framework (Figure 2) to verify SoC compliance with these foundry IO placement rules, using the Calibre® PERC™ reliability verification tool. The Calibre PERC tool can combine both the geometrical and electrical constraints of a design to perform complex checks that incorporate layout restrictions based on electrical constraints or variations.

Fig. 2: Automated framework for IO ring verification.

This IO ring checker framework provides the following characteristics:

  • No technology dependencies: ESD placement rules are coded in the IO ring checker, and are not constrained by or subject to any technology file dependencies.
  • Easy set-up: The constraints interface (Figure 3) allows for customized cell naming conventions and spacing variables.

Fig. 3: The constraints interface enables easy customization.

Figure 4 shows the IO ring checker verification flow. An initial placement of IO cells is made, with only the IO cells in the DEF. Violations identify locations where changes to the initial IO ring placement must be made. This process continues until final placements are available. As the design nears completion, all cells and routing are present, and another round of validation is performed until the design is complete, with all errors in IO ring placement corrected for final sign-off validation.

Fig. 4: IO ring checker verification flow.

Outputs

Two results database (RDB) files are output as part of the checking flow:

  • IO_ring_checker.rdb: contains violations
  • debug.rdb: contains additional information for debug

Violations can be highlighted from the IO_ring_checker.rdb file (Figure 5).

Fig. 5: The IO_ring_checker.rdb file provides quick identification of rule check violations.

If more details are required, the debug.rdb file (Figure 6) can be used to display complementary information that describes the violation more explicitly, such as:

  • Cell identification (power clamp families, breakers, etc.)
  • Sub-check results (for each power domain)

Fig. 6: The details provided in the debug.rdb file help designers understand and correct violations quickly and accurately.

For ease of use, results can be loaded and highlighted in a results viewing environment, such as the Calibre RVE tool (Figure 7), using a DEF or GDS Database. Highlighting options in the Calibre RVE tool include:

  • Rule violation: IO distance to PWR/GND cells (in red)
  • Cell marking: IO PWR/GND pairs (in green)
  • Power domain: digital section (in blue)

Fig. 7: A results viewing environment allows designers to visualize error results.

Results

The LEF/DEF IO Ring Checker framework was applied to multiple GPIO test chips. The results demonstrate the effectiveness and speed of the framework in applying multiple rule checks across the chip (Table 1).

Table 1: Automated verification results from multiple GPIO test chips.

These results not only demonstrate the accuracy of this approach, but also the speed of such a solution. Manual checking could easily take days to validate, and still be error-prone. With more and more IP being implemented in SoC designs, the number of rules is only expected to grow, further complicating the verification task.

Conclusion

A flexible and automated approach to IO pad ring placement verification allows designers to focus on their design, using the IO ring checking framework and the Calibre PERC tool to confirm the validity of the layout they create. The ability to perform this validation on LEF/DEF designs allows early completion of this task in the design cycle, while there is still an opportunity to optimize and refine the design before beginning final signoff verification.

Automated approaches for advanced reliability verification issues such as IO ring checking are providing significant benefits within SoC design flows. Designers who take advantage of these techniques and tools can deliver highly reliable designs, validating them quickly and efficiently, all with greater confidence in the quality of their final products. With the complexity of IP eco-systems used within SoCs constantly on the rise, and the allure of new markets such as automotive, with its exacting reliability standards, the use of automated reliability verification for these complex interactions can only be expected to grow. Adding determinism and repeatability to your IO ring checking strategy is a strong move towards improving your reliability verification capabilities.

References

[1] EDA Tool Working Group (2014), “ESD Electronic Design Automation Checks (ESD TR18.0-01-14)”, New York: Electrostatic Discharge Association, January 2015, https://www.esda.org/standards/device-design/electronic-design-automation-eda/view/1713

Design Rule Checking for Silicon Photonics

September 30th, 2015

By Ruping Cao, Mentor Graphics

The silicon photonic integrated circuit (PIC) holds the promise of providing breakthrough improvements in data communications, telecommunications, supercomputing, biomedical applications, and more [1][2][3][4]. Silicon photonics stands out as the most competitive candidate among potential technologies, due in large part to its compactness and its potential for low-cost, large-scale production leveraging current CMOS fabrication facilities. However, as silicon PICs gain traction, designers find themselves in need of an extended design rule checking (DRC) methodology that can ensure the required reliability and scalability for mass fabrication.

Traditional DRC ensures that the geometric layout of a design, as represented in GDSII or OASIS, complies with the foundry’s design rules, which guide designers to create integrated circuits (ICs) that can achieve acceptable yields. DRC compliance is the fundamental checkpoint an IC design must achieve to be accepted for fabrication in the foundry. DRC results obtained from an automated DRC tool from a trusted EDA provider are required to validate the compliance of a design with the physical constraints imposed by the technology.

However, traditional DRC uses one-dimensional measurements of features and geometries to determine rule compliance. PICs present new geometric challenges and novel device and routing designs, where non-Manhattan-like shapes—such as curves, spikes, and tapers—exist intentionally. These shapes expand the complexity of the DRC task, even to the extent that it is impossible to fully describe some physical constraints with traditional one-dimensional DRC rules.

To address the DRC challenge in photonic designs, new verification techniques are required. At Mentor, we developed the Calibre® eqDRC™ technology, an extension to the Calibre nmDRC tool. The Calibre eqDRC functionality is an equation-based set of statements that extends the capabilities of traditional DRC, allowing users to analyze complex, multi-dimensional interactions that are difficult or impossible to verify using traditional DRC methods. While the development of the Calibre eqDRC functionality was originally motivated by the difficulty of IC physical verification at advanced technology nodes [5], the Calibre eqDRC process is equally adept at satisfying the demand for PIC geometrical verification. Users can define multi-dimensional feature measurements with flexible mathematical expressions that can be used to develop, calibrate, and optimize models for design analysis and verification [6][7]. Let’s look at a couple of examples.

False Errors Induced by Curvilinear Structure

Current EDA tools support layout formats such as GDSII, where geometric shapes are represented as polygons (Manhattan design). The vertices of these polygons are snapped to a grid, the size of which is specified by the technology. This mechanism produces specific DRC problems for photonic designs, which include curvilinear shapes (like bends for routing) in a range of device structures; these shapes derive from the requirement of total internal reflection for light guiding while minimizing light loss. With traditional EDA tools, the curved design layer is fragmented into sets of polygons that approximate the curvilinear shape, which results in some discrepancy from the design intent.

While this discrepancy of a few nanometers (dependent on the grid size) is negligible compared to a typical waveguide width of a few hundred nanometers, its impact on DRC is significant. The tiniest geometrical discrepancy can generate false DRC errors, which can add up to a huge number, making the design nearly impossible to debug. Figure 1 shows a curved waveguide design layer, with the inset showing a DRC violation of minimum width. Although the waveguide is correctly designed, there is a discrepancy in the width value between the design layer (off-grid) and the fragmented polygon layer (on-grid), creating a false width error. Even though these properly designed structures do not violate manufacturability requirements, they generate a significant number of false DRC errors. Debugging or manually waiving these errors is both time-consuming and prone to human error.

Figure 1. Design of a curved waveguide on a 1 nm grid. The enlarged view shows the polygon layer that flags the width error of the waveguide. The polygon vertices are on-grid, which results in the discrepancy in width measurement.

By taking advantage of the Calibre eqDRC capabilities, users can query various geometrical properties (including the properties of error layers), and perform further manipulations on them with user-defined mathematical expressions. Therefore, in addition to knowing whether the shape passes or fails the DRC rule, users can also determine any error amount, apply tolerance to compensate for the grid snapping effect, perform checks with property values, process the data with mathematical expressions, and so on.

Multi-Dimensional Rule Check on Taper Structure

Another important photonic design feature that does not exist in IC design is the taper, or spike (any geometrical facet where the two adjacent edges are not parallel to each other), as shown in Figure 2. This kind of geometry exists intentionally, especially in the waveguide structure, where the optical mode profile is modified according to the cross-section variation (including the width from the layout view, and the depth determined by the technology).

Figure 2. Tapers are a common construct in photonics designs.

The DRC check to ensure fabricability for these structures must flag taper ends that have been thinned down too far, which can lead to breakage and possible diffusion of material to other locations on the chip, creating physical defects. A primitive rule to describe this constraint could be stated as:

minimum taper width should be larger than w; otherwise, if it is smaller than w, the included angle at the taper end must be larger than α.

This rule is a simple form of describing the constraint that, as the taper angle increases, the taper end width can decrease. In even simpler words, a sharper pointy end is allowed as taper end width increases. The implementation of this rule is impossible with one-dimensional traditional DRC, since more than one parameter is involved at the same time.

When using eqDRC capability, however, a multi-dimensional check can be written:

sharp_end := angle(wg, width < w) < α

where angle stands for the DRC operation that evaluates the angle of the taper end with a width condition (smaller than w). This is a primitive check example, but it serves to show the power of the Calibre eqDRC functionality in photonics verification.

Modeled Rule Check on Taper Structure

The previous two examples primarily demonstrate the capability of the Calibre eqDRC process to manipulate property values to implement more precise rules for photonic-specific designs. As discussed, the Calibre eqDRC technique allows custom manipulation of various measurable characteristics of layout objects, offering greater flexibility in rule definition and coding. The examples imply the availability of user-defined expressions for property data manipulation, which can be enabled by an API for dynamic libraries or some other means, so that mathematical expressions are available through built-in languages such as Tcl.

However, the Calibre eqDRC process also enables complex DRC checks or yield prediction by applying user-defined models. In Figure 3, we’ll expand the taper DRC check to demonstrate eqDRC’s modeling capability.

Figure 3. (A) Depiction of the design rule for tapers. w is the minimum width of the taper; w1, w2, and w3 are width values (w1 < w2 < w3); α is the included angle of the two adjacent edges at the taper end; α1, α2, and α3 are angle values (α1 > α2 > α3). (B) Plot of the rule (three separate rules are used in this case). The red line represents the modeled rule (violations occur below the curve).

Figure 3(A) depicts the rules that might be applied to those taper designs, bearing in mind that the constraint of width is correlated with the angle of the taper end. With a traditional one-dimensional rule, we can measure the critical angle value at discrete width values, and describe the fabrication constraint with three separate rules:

Rule 1: sharp_end := angle(wg, width ≥ 0, width < w1) < α1

Rule 2: sharp_end := angle(wg, width ≥ w1, width < w2) < α1

Rule 3: sharp_end := angle(wg, width ≥ w2, width < w3) < α2

Of course, additional critical angle values can be probed to add more rules to this rule set and better fit the model, but that increases the complexity of the rule check task. So here we have an inevitable compromise between rule check complexity and the fidelity of the physical constraint description.

In fact, the interpolation of the critical conditions (the relation of width and angle) can be expressed with a model (which should come from research and/or experimental results). Figure 3(B) depicts the constraints on width and angle given by the above rules (the shaded area). The model of critical angle versus width is provided by interpolating these values. Here, no values are provided, and the model is shown only qualitatively; a real model would require further research and validation against experimental results. Nevertheless, the example demonstrates that photonic designs require more flexible geometries, which in turn lead to more complex geometrical verification.

Fortunately, users can implement such models using the Calibre eqDRC capability, avoiding the dilemma of raising rule check complexity or reducing DRC accuracy. Since user-defined mathematical expressions are allowed, the relation of critical angle and width can be expressed in the rule check as follows:

sharp_end := f(width(wg)) / angle(wg) > 1

where width and angle are the DRC operations that evaluate the minimum width and the included angle of the taper end, respectively; the function f is the model relating the critical angle αc to the width w: αc = f(w); and w is the actual measured width value. This syntax fully describes the physical constraint given by the model, as depicted by the red line in Figure 3(B). In addition to being more accurate, this single check replaces the three rule checks previously used, which simplifies both rule writing and the rule check procedure.
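
Outside of the rule deck, the same modeled check can be prototyped in a few lines of ordinary code. The sketch below uses an invented width/critical-angle table purely to illustrate the f(w) interpolation idea; it is not Calibre eqDRC syntax, and the values are hypothetical.

import numpy as np

# Hypothetical calibration points: taper end width (um) vs. critical included angle (deg).
widths_um   = np.array([0.05, 0.10, 0.15, 0.20])
crit_angles = np.array([60.0, 40.0, 25.0, 15.0])

def critical_angle(width_um):
    """Interpolated model alpha_c = f(w) for the critical included angle."""
    return float(np.interp(width_um, widths_um, crit_angles))

def sharp_end_violation(end_width_um, included_angle_deg):
    """Flag a taper end whose measured angle falls below the modeled critical angle,
    i.e., f(width)/angle > 1 in the rule check above."""
    return critical_angle(end_width_um) / included_angle_deg > 1.0

print(sharp_end_violation(0.08, 30.0))   # True: too sharp for this end width
print(sharp_end_violation(0.18, 30.0))   # False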

There are several advantages to be gained by applying eqDRC to the physical verification of photonics designs.

  • Ease debugging effort
    Unlike traditional DRC, which provides only pass and fail results, the Calibre eqDRC process produces customized information that can facilitate debugging. Such information can identify the severity of a violation, suggest possible corrections, and so on. This information can be displayed on the layout, helping the engineer to quickly debug and fix the violation.
  • Reduce false DRC errors
    For photonic designs, where the existence of curvilinear shapes can lead to false errors with traditional DRC, the Calibre eqDRC method makes it possible to reduce or eliminate these false errors. Because the user can apply tolerances and conditions to the check criteria, most false errors can be filtered out of the results. In addition, further investigation of the remaining errors is made easier with the availability of customized information.
  • Enable multi-dimensional checks
    Traditional DRC can measure and apply pass-fail tests in only one dimension, while the Calibre eqDRC process allows the assessment of multi-dimensional parameters. Because photonic designs have a greater degree of freedom in design geometries, multiple parameters are often required to describe a physical constraint. Presently, multi-parameter analysis is only possible with the Calibre eqDRC technology.
  • Reduce rule coding complexity and improve accuracy
    With the Calibre eqDRC process, we can apply user-defined mathematical expressions when processing layout geometry characteristics. In this way, multi-dimensional parameters that interact with each other when involved in a physical constraint can be abstracted as a model to be applied in a rule check. This not only simplifies the rule coding and rule check procedure, but also improves the accuracy of the check by replacing discrete rules with the model, which covers all combinations of parameters in one continuous function. For photonics cases where physical constraints are more complex and applied more universally, the rule coding efficiency and accuracy improvement become more important than ever. With the Calibre eqDRC approach, physical constraints can be applied closest to their design intent, and in a straightforward manner using a model description.

As photonic circuit designs allow and require a wide variety of geometrical shapes that do not exist in IC designs, traditional DRC struggles to fulfill the requirements for reliable and consistent geometrical verification of such layouts. With the availability of property libraries to users, and the ability to interface these libraries with a programmable engine to perform mathematical calculations, the Calibre eqDRC technique offers an accurate, efficient, and easy-to-debug DRC approach for PICs. By finding errors that would otherwise be missed, and flagging far fewer false errors, the Calibre eqDRC technology enables the accurate and efficient geometrical verification needed to make silicon photonics commercially viable.

References

[1]         Goodman, J. W., Leonberger, F. J., & Athale, R. A. (1984). Optical interconnections for VLSI systems. Proceedings of the IEEE, 72(7), 850–866.

[2]         Haurylau, M., Chen, G., Chen, H., Zhang, J., Nelson, N. A., et al. (2006). On-chip optical interconnect roadmap: Challenges and critical directions. IEEE Journal of Selected Topics in Quantum Electronics, 12(6), 1699–1705.

[3]         Kash, J. A., Benner, A. F., Doany, F. E., Kuchta, D. M., Lee, B. G., Pepeljugoski, P. K., Schares, L., et al. (2010). Optical interconnects in exascale supercomputers. 2010 IEEE Photonics Society 23rd Annual Meeting (pp. 483–484). IEEE.

[4]         Chrostowski, L., Grist, S. M., Schmidt, S., & Ratner, D. (2012). Assessing silicon photonic biosensors for home healthcare. SPIE Newsroom, 10–12.

[5]         Hurat, P., & Cote, M. (2005). DFM for Manufacturers and Designers, Proc. SPIE 5992, 25th Annual BACUS Symposium on Photomask Technology (Vol. 5992, 59920G).

[6]         Pikus, F. G. (2010). What is eqDRC? ACM SIGDA Newsletter, 40(2), 3–7.

[7]         Pikus, F. G., Programmable Design Rule Checking. U.S. Patent Application US20090106715 A1. http://1.usa.gov/1KIYfhy

Author

Ruping Cao is a PhD student presently completing an internship with Mentor Graphics in Grenoble, France. One of the areas she is investigating is the application of electronic design automation techniques and tools to the verification of integrated circuit designs that incorporate silicon photonics. Ruping holds a Bachelor’s degree in Microelectronics from East China Normal University, and a Master’s degree in Nanoscale Engineering from l’École Centrale de Lyon. She may be reached at ruping_cao@mentor.com.

The Changing (and Challenging) IC Reliability Landscape

August 20th, 2015

By Matthew Hogan, Product Marketing Manager, Calibre Design Solutions, Mentor Graphics

It seems that a laser focus on integrated circuit (IC) reliability is all around us now. Gone are the days when a little “over design,” or additional design margin, could cover the reliability issues in a design layout. Designers now need to articulate to partners, both internal and external, just how well their designs function over time and within their intended environment.

Functional safety and the push from the automotive electronics industry with ISO 26262 are not the only realms where this critical focus is being applied. General consumer devices that are always on, manufactured in the tens to hundreds of millions, are seeking the benefit of eliminating reliability issues at the IC design stage. The broad interest being shown across a wide range of process nodes, from the largest, most-established nodes to the emerging “bleeding edge” nodes, demonstrates this shift in the attitude toward, and consideration given to, reliability issues.

Many IC designers and verification teams no longer consider the obvious DRC and LVS milestones as sufficient stopping points—they continue on to advanced reliability checks aimed at increasing the longevity, performance and quality of their designs. They have taken the proactive approach of looking “one layer deeper” towards quality to avoid these subtle design problems that will impact the lifetime operation of their products.

Earlier this year, I saw a great article on what some folks are doing on the modeling side by considering random telegraph noise (RTN), and its contribution to negative-bias temperature instability (NBTI) failures and shifts in device threshold voltage (VT) [1]. It reminded me of the work I was anticipating seeing at the 2015 International Reliability Physics Symposium (IRPS). With a focus on reliability, you’d expect advanced detection and verification topics to shine and garner great interest, and they did! The conference organizers also presented the Best Paper and Outstanding Paper awards from last year’s conference. A. Oates and M.-H. Lin from Taiwan Semiconductor Manufacturing Company (TSMC) took the honors for Best Paper with “Electromigration Failure of Circuit-Like Interconnects: Short Length Failure Time Distributions with Active Sinks and Reservoirs” [2]. For Outstanding Paper, it was the team from TU Wien and IMEC (T. Grasser, K. Rott, H. Reisinger, M. Waltl, J. Franco and B. Kaczer) that delivered “A Unified Perspective of RTN and BTI” [3]. This work evaluates the suggestion that RTN and bias temperature instability (BTI) are due to similar defects. Understanding the failure mode of these effects is critically important, especially when designing accelerated test procedures to create data. Stress the device in the “wrong” way, and maybe you’re not capturing the degradation effects you think you are.

Some reliability failure modes are more familiar to designers than others, just because you tend to hear about them more often, including electromigration (EM), electrical overstress (EOS), and electrostatic discharge (ESD). With standards now calling out effects like charged device model (CDM), hot carrier injection (HCI), NBTI, and others, IC designers and verification specialists are finding there’s a whole new set of acronyms to learn about and remember. Not familiar with these? Now is the time to study up. There is increasing pressure to have validated mitigation strategies for these effects in place for the physical design implementation stage.

What’s That Mean?

To help those of you new to this field, here’s a brief introduction to the effects I just mentioned. There are many, many great references out there, and I’d encourage you to start exploring reliability design and verification resources, if you’re not already. I’ve supplied a few at the end of this article that would make a good beginning library.

CDM is a model that characterizes the susceptibility of an electronic device to damage from ESD. The CDM model is an alternative to the human body model (HBM), which is built on the generation and discharge of electricity from (you guessed it) a human body. The CDM model simulates the build-up and discharge of electricity that occurs in other circumstances, like handling during the assembly and manufacturing process. Devices that are classified according to CDM are exposed to a charge at a standardized voltage level, and then tested for survival. If the device withstands this voltage level, it is tested at the next level, and so on, until the device fails. CDM is standardized by JEDEC in JESD22-C101E [4].
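If it helps to see that stepped-stress procedure as code, here’s a toy sketch of the classification loop. It is purely illustrative: the voltage levels and the survives() callback are invented for this example, and the real classification levels and test conditions are the ones defined in JESD22-C101E.

```python
# Toy sketch of CDM-style step-stress classification: stress the device at
# successively higher voltages and record the highest level it survives.
# The levels below are placeholders, not the JESD22-C101E classification levels.
HYPOTHETICAL_LEVELS_V = [125, 250, 500, 750, 1000]

def classify_cdm(survives):
    """Return the highest hypothetical voltage level the device withstands.

    `survives` is a callback that applies the CDM stress at the given voltage
    and reports whether the device still passes its post-stress tests.
    """
    highest_passed = 0
    for level in HYPOTHETICAL_LEVELS_V:
        if not survives(level):
            break
        highest_passed = level
    return highest_passed

# Example with a mock device that fails at 750 V:
print(classify_cdm(lambda v: v < 750))   # prints 500
```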

HCI is a phenomenon in solid-state electronic devices where an electron (or “hole”) gains sufficient kinetic energy to overcome a potential barrier and break an interface state. The term “hot” does not refer to the overall temperature of the device, but to the effective temperature used to model carrier density. Because these charge carriers can become permanently trapped in the gate dielectric of a MOS transistor, the switching characteristics of the transistor can be permanently changed. Since HCI degradation slows down circuit speeds, it is sometimes considered more of a performance problem than a reliability issue, even though it can ultimately lead to operational failure of the circuit [5][6].

NBTI is a key reliability issue in MOSFETs that manifests as an increase in the threshold voltage. It also causes a decrease in drain current and transconductance of a MOSFET. This degradation exhibits logarithmic dependence on time. While NBTI is of immediate concern in p-channel MOS devices, since they almost always operate with negative gate-to-source voltage, the very same mechanism also affects nMOS transistors when biased in the accumulation regime (i.e., with a negative bias applied to the gate) [7]. In the past, designers had no effective means of detecting potential NBTI conditions, so often the only option was to design all parts of the chip to absolute worst-case corner conditions. Newer verification tools that can combine both geometrical and electrical data can now locate NBTI sensitivities.
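To make that “logarithmic dependence on time” a little more concrete, here’s a toy calculation. It is not a calibrated or physically derived NBTI model; the coefficients are invented purely to show that a log-time law adds a roughly constant threshold-voltage shift per decade of stress time.

```python
import math

# Toy NBTI illustration: a logarithmic time dependence means each additional
# decade of stress time adds roughly the same threshold-voltage shift.
# A_MV and T0_S are made-up coefficients for demonstration only.
A_MV = 5.0    # hypothetical shift per decade of stress time, in mV
T0_S = 1.0    # hypothetical reference time, in seconds

def delta_vth_mv(stress_time_s):
    """Illustrative threshold-voltage shift after a given stress time."""
    return A_MV * math.log10(1.0 + stress_time_s / T0_S)

for t in (1e2, 1e4, 1e6, 1e8):   # from minutes to years of cumulative stress
    print(f"t = {t:.0e} s  ->  delta Vth ~ {delta_vth_mv(t):.1f} mV")
```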

There is a growing need to be familiar with these and other reliability concerns to meet the market requirements of today’s IC customers. Not to be caught resting on their laurels, however, the reliability experts are forging ahead on advanced reliability topics and techniques. One effort that caught my eye aims to develop a unified aging model of NBTI and HCI by leveraging the similarities in how degradation is modeled for both [8]. By employing a common reaction-diffusion (R-D) framework, a geometry-dependent unified R-D model for NBTI and HCI has been proposed [9]. How well will it work? Can it be used to develop design constraints? For many, these questions remain unanswered, but I expect that advances in this field will represent the next milestone of required checks that our devices will need to pass.

Some Final Thoughts

From a practical perspective, the difference between yield and reliability is when the failure occurs. Yield issues have been at the forefront for a good many years, but the industry now seems to be moving toward greater awareness of reliability issues. Tackling issues in this space requires an in-depth understanding of the physical layout and the interactions that may be present. Of course, the guidance and creation of design rules for overcoming these issues is in the hands of the reliability experts, and the development of the tools that will help designers perform the analysis and mitigation is in the hands of the EDA vendors. But based on the research and activity presently underway, I feel confident that reliability design and verification is headed in the right direction.

Reliability Resources

Understanding Automotive Reliability and ISO 26262 for Safety-Critical Systems

Physical Verification Flow for Hierarchical Analog IC Design Constraints

Reliability Characterisation of Electrical and Electronic Systems, Jonathan Swingler (Editor), ISBN:978-1782422211 (January 2015)

References

[1]   Katherine Derbyshire, “The End of Silicon?”, May 2015, http://semiengineering.com/the-end-of-silicon/

[2]   A. Oates and M.-H. Lin, “Electromigration Failure of Circuit-Like Interconnects: Short Length Failure Time Distributions with Active Sinks and Reservoirs”, IRPS 2014, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860657

[3]   T. Grasser, K. Rott, H. Reisinger, M. Waltl, J. Franco and B. Kaczer, “A Unified Perspective of RTN and BTI”, IRPS 2014, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860643

[4]   Charged-device model, https://en.wikipedia.org/wiki/Charged-device_model

[5]   Hot-carrier injection, https://en.wikipedia.org/wiki/Hot-carrier_injection

[6]   John Keane, Chris H. Kim, “Transistor Aging”, IEEE Spectrum, May 2011, http://spectrum.ieee.org/semiconductors/processors/transistor-aging/0

[7]   Negative-bias temperature instability, https://en.wikipedia.org/wiki/Negative-bias_temperature_instability

[8]   Yao Wang, Sorin Cotofana, Liang Fang, “A Unified Aging Model of NBTI and HCI Degradation towards Lifetime Reliability Management for Nanoscale MOSFET Circuits”, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5941501

[9]   H. Kufluoglu and M. Ashraful Alam, “A Geometrical Unification of the Theories of NBTI and HCI Time-exponents and its Implications for Ultra-scaled Planar and Surround-Gate MOSFETs,” in IEEE International Electron Devices Meeting, IEDM Technical Digest, Dec. 2004, pp. 113 – 116. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1419081

Author

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of IIRW and the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

Manage Giga-Gate Testing Hierarchically

July 31st, 2015

By Ron Press, Mentor Graphics Corp

When designs get big, designers often implement hierarchical “divide and conquer” approaches through all phases and aspects of chip design, including design-for-test (DFT). With hierarchical test, the DFT features and test patterns are completed at the block level and then re-used at the top level. Hierarchical DFT is most useful for designs with 20 million gates or more, or when the same cores are used across multiple designs. The benefits of hierarchical test include reduction of test time, reduction of automatic test pattern generation (ATPG) run time, better management of design and integration tasks, and moving DFT insertion and pattern generation much earlier in the design process. Figure 1 depicts hierarchical test.

Figure 1. A conceptual drawing of hierarchical test.

Today’s hierarchical test methodologies are different from those used years ago. Hierarchical test used to mean simply testing one block in the top-level design while all the other blocks were black-boxed. The block being tested is isolated with special wrapper scan chains added at its boundary. While this method improves run time and workstation memory requirements, it still requires a complete top-level netlist before patterns can be created. In addition, patterns created this way cannot easily be combined in parallel with other similarly generated patterns; they are used exactly as they were constructed during ATPG (i.e., generated and applied from top-level pins).

Fortunately, the automation around hierarchical test has significantly improved in recent years. There are significant advantages in managing design and integration tasks and design schedule, including:

  • It moves DFT effort earlier in the design process, because all the block DFT work and ATPG can be completed with only the block available; you don’t need to wait until the top-level design or test access mechanism (TAM) is complete.
  • It helps with core reuse and the use of third-party IP. Block-level patterns and design information are saved as plug-and-play pieces that can be reused in any design.
  • It allows design teams in different locations to work on blocks without conflicts. A top-level design is never needed to generate the block-level patterns, and only block data (not the full block netlist) is needed to verify that the block patterns can be effectively retargeted in the top-level design and that the top-level design can be initialized so that the block being tested is accessible.
  • It simplifies the integration of cores at the top level. Patterns are generated independently for each block, but if the top-level design enables access to multiple blocks in parallel, the patterns can be merged together automatically when retargeting to the top-level design.

In addition to the design, integration, and schedule benefits of hierarchical test, it also reduces ATPG run time and workstation memory requirements. Many people assume that generating patterns for the entire chip in one top-level ATPG run gives shorter test times than testing blocks individually. In fact, hierarchical test is often 2-3x more efficient than top-level test. I’ll try to describe why with an example. Figure 2 shows two approaches to testing an IC. For each block we maintain a 200x chain-to-channel ratio; in the top-level ATPG case, 800 chains driven by 4 channels gives a 200x compression ratio. In the hierarchical case, all 12 channels are available to each core, so maintaining the 200x compression ratio means 2400 chains, each one third the length of the chains used in the top-level ATPG case.

Figure 2. Flat ATPG tests all cores in parallel. In this case, core 1 requires fewer patterns than core 3; after core 1’s 1000 patterns have been applied, the four channels used for core 1 become wasted bandwidth.

Top-level ATPG pattern count is dictated by the block with the largest number of patterns. In this case, the tester cycles equal:

(core 3 scan cells / 800 chains) * 4000 patterns

Hierarchical ATPG runs each block sequentially in this case, so each block can use all 12 channels and 2400 internal chains. The tester cycles equal:

(core 1 scan cells / 2400 chains) * 1000 patterns

+ (core 2 scan cells / 2400 chains) * 2000 patterns

+ (core 3 scan cells / 2400 chains) * 4000 patterns

If each core has the same number of scan cells, the comparison becomes:

(scan cells / 800) * 4000 = (scan cells) * 5 tester cycles for flat ATPG

versus

(scan cells / 2400) * (1000 + 2000 + 4000) = (scan cells) * 2.9 tester cycles for hierarchical ATPG

So in this case, hierarchical test takes roughly 60% (2.9 / 5) of the test application time of flat ATPG.
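If you want to check that arithmetic yourself, here’s a small sketch that reproduces the comparison. It assumes, as above, that each core has the same number of scan cells; the 240,000-cell figure is just a placeholder.

```python
# Re-create the flat vs. hierarchical ATPG test-time comparison above.
SCAN_CELLS = 240_000                      # hypothetical scan cells per core
PATTERNS = {"core 1": 1000, "core 2": 2000, "core 3": 4000}

# Flat ATPG: 800 chains per core (4 channels at 200x), all cores in parallel,
# so tester cycles are set by the core needing the most patterns.
flat_cycles = (SCAN_CELLS / 800) * max(PATTERNS.values())

# Hierarchical ATPG: cores tested sequentially, each using all 12 channels,
# i.e. 2400 chains and chains one third as long.
hier_cycles = sum((SCAN_CELLS / 2400) * p for p in PATTERNS.values())

print(f"flat ATPG:         {flat_cycles:,.0f} tester cycles")
print(f"hierarchical ATPG: {hier_cycles:,.0f} tester cycles")
print(f"ratio:             {hier_cycles / flat_cycles:.0%}")   # about 58%
```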

In hierarchical ATPG, the bandwidth of all channels is applied to one block at a time, so more chains can be used in each block to make full use of the channel bandwidth. This can significantly improve the efficiency of DFT. The impact can be even more pronounced when different blocks require different pattern types.

Hierarchical DFT flow

The flow starts with core-level DFT, which includes inserting scan chains, generating and inserting compression IP, and adding wrapper chains to isolate the cores. You can reuse existing functional flops as shared wrapper cells, and use dedicated wrapper cells only when absolutely necessary.

The next step is core pattern generation. Using ATPG software, you create the core-level test patterns and generate gray-box models. The gray-box models are lightweight models used for external test and pattern retargeting. You have some flexibility to preserve specific instances beyond what the automation would choose for you.

Pattern retargeting is next. You retarget the core-level patterns to the top level and can merge the pattern sets to create “intest” pattern sets. The full netlist is not needed for pattern retargeting; you only need the top-level logic and the core-level gray-box models, or even black-box models accompanied by a “core description file” that describes the block-level test structures.
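As a purely conceptual illustration of that merging step, the sketch below models independently generated core pattern sets and combines the ones the top level can access in parallel, whose merged set only needs to be as long as its largest member. The data structures and function names are invented for this example; real retargeting and merging are handled automatically by the DFT tools.

```python
# Conceptual sketch of pattern merging during retargeting (illustrative only).
from dataclasses import dataclass

@dataclass
class CorePatterns:
    core: str
    count: int   # number of block-level patterns generated for this core

def retargeted_pattern_count(pattern_sets, parallel_cores):
    """Total top-level patterns after merging cores that can run in parallel."""
    parallel = [p.count for p in pattern_sets if p.core in parallel_cores]
    serial = [p.count for p in pattern_sets if p.core not in parallel_cores]
    # Cores tested in parallel share tester patterns, so the merged set is
    # only as long as its largest member; the rest are applied sequentially.
    return max(parallel, default=0) + sum(serial)

sets = [CorePatterns("A", 1000), CorePatterns("B", 2000), CorePatterns("C", 4000)]
print(retargeted_pattern_count(sets, parallel_cores={"A", "B"}))   # 2000 + 4000 = 6000
```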

After pattern retargeting, you create top-level interconnect tests. When making the top-level “extest” patterns, the full netlist never needs to be loaded into memory; only the top-level logic and core-level gray-box models are required.

With some up-front design effort and planning, the biggest challenges of testing giga-gate SoCs can be addressed with a hierarchical DFT methodology.

For details about hierarchical DFT, you can download the whitepaper Divide and Conquer: Hierarchical DFT for SoC Designs.


Author

Ron Press is the technical marketing manager of the Silicon Test Solutions products at Mentor Graphics. A 25-year veteran of the test and design-for-test (DFT) industry, he has presented seminars on DFT and test throughout the world. He has published dozens of papers in the field of test, is a member of the International Test Conference (ITC) Steering Committee, a Golden Core member of the IEEE Computer Society, and a Senior Member of IEEE. Press holds patents on reduced-pin-count testing and glitch-free clock switching, and has patents pending on 3D test.

Custom Layout Designers Need New Tools for New and Expanding Markets

May 27th, 2015

By Srinivas Velivala, Mentor Graphics

For a long time, digital was the darling of the semiconductor industry. But then a funny thing happened—the advent of cell phones and GPS and tablets and a zillion other new products made things like power consumption and battery life important market factors. But this new emphasis on analog and mixed-signal designs also brought new market pressure to custom designers. Now more than ever, time to market could mean the difference between so-so results and profitability. With that came the need to reduce design and verification timelines while still ensuring high-quality products.

In response to that demand, we introduced Calibre® RealTime, which provides interactive DRC feedback in a custom layout environment using the same sign-off Calibre design rule checking (DRC) deck that is used for batch Calibre DRC jobs. By enabling sign-off DRC during the design process, Calibre RealTime helped designers reduce the time to tapeout. Initially, the use model was intended for debugging DRC results in standard cells and block designs. As such, we included an integrated toolbar, so layout designers could highlight and step through DRC results in the order in which they were generated, or select a specific DRC check and step through the DRC results belonging to that check.

However, layout designers continued to expand the application of Calibre RealTime to larger designs, such as partial layout of a macro, or even full-chip designs, invoking it during final DRC review before tapeout (using a combination of batch Calibre and Calibre RealTime). With this use came a desire to see a complete picture of the DRC results: how many DRC checks are violated, how many DRC results are present in each check, how many DRC results can be disregarded at this design stage, and so on. Providing this type of analysis required an expanded interface GUI to allow layout designers to debug their DRC results efficiently.

The Calibre RealTime-RVE interface has the same look and feel as the Calibre RVE™ tool, giving custom layout designers the flexibility to analyze DRC results generated from a Calibre RealTime job and formulate an efficient strategy to debug and fix the DRC errors. The interface opens automatically after a Calibre RealTime DRC job runs (Figure 1). Designers can select a specific DRC check and highlight the result(s) belonging to that check. Designers also get a clear description of the DRC check that has been violated; in this example, the description indicates that this is a double patterning (DP) error.

Figure 1. DRC error results in the Calibre RealTime-RVE interface.

The Calibre RealTime toolbar and Calibre RealTime-RVE interface are always synchronized (Figure 2), allowing designers to highlight DRC results from either the toolbar or the interface.

Figure 2. The Calibre RealTime toolbar and Calibre RealTime-RVE interface are always in sync.

In addition, designers can display and sort DRC results by associated characteristics, reducing visual “clutter” and allowing them to focus more efficiently on their debugging tasks (Figure 3).

Figure 3. Custom designers can display and sort by error characteristics.

To maximize efficiency, designers can run Calibre RealTime DRC jobs on multiple designs in the layout environment, and browse all the results using the Calibre RealTime-RVE interface. The interface opens separate tabs to display the results generated from each design, preventing any mix-up or confusion, and ensuring that there is no additional delay. Designers can select any particular results tab and highlight the results from that tab. The Calibre RealTime-RVE interface automatically ensures that the DRC results are highlighted in the design window corresponding to the DRC results tab from which the highlight commands are issued.

Figure 4. DRC results for multiple designs are displayed separately.

As custom layout designers use Calibre RealTime in an ever-expanding set of use models, they can be confident they will be able to easily comprehend, analyze, and debug the DRC results using the Calibre RealTime-RVE debug interface. Tools like this are essential to supporting the growing market for custom designs while ensuring companies can produce reliable products in a timely, profitable manner.

Author

Srinivas Velivala is a Product Manager with the Design to Silicon Division of Mentor Graphics, focusing on developing Calibre integration and interface technologies. Before joining Mentor, he designed high-density SRAM compilers, and has more than seven years of design, field, and marketing experience. Srinivas holds a B.S. and M.S. in Electrical and Computer Engineering. In his spare time, he likes to travel and play cricket. He can be reached at srinivas_velivala@mentor.com.
