
Technical Workshops – Providing Access to the Industry’s Best

December 1st, 2015

By Matthew Hogan, Product Marketing Manager, Calibre Design Solutions

It may not seem like such a revelation, but many of the opinions and traits we carry around with us are often attributable to our peer group. From a professional perspective, this could include colleagues, advisors, managers, and a host of other influencers that have crossed your path along the way. Good, bad, or indifferent, these experiences influence how you work and what you consider “normal.” In some of the focused and specialized fields of IC design and verification, like electrostatic discharge (ESD) and reliability, it is often a challenge to find and connect with suitably well-informed individuals that you can bounce ideas off, learn from, and grow with.

There are a number of pockets of excellence within the industry, but if you are not fortunate enough to have been introduced to the right post-graduate program or advisor, or to work in a company that supports a thriving eco-system of like-minded individuals, you’re pretty much left to your own devices in a vacuum. So, if you are working on an island, how do you build bridges to other experts in your field, outside your organization? One way to gain exposure to new ideas, techniques and best practices is to attend industry conferences. Another is to forgo the large-scale format that conferences provide, and look at what workshops have to offer.

Not familiar with the workshop format? Generally speaking, workshops provide 3-4 long days with the same folks, in an environment probably a lot like those summer camps you attended as a kid. You all eat together, attend the keynote, invited talks, and paper/poster presentations together, and participate in one or more discussion groups occupying the evenings. The focus of a workshop is, by design, much narrower than a large industry conference, so everyone attending has the same range of interests and issues. Overall, with the smaller groups of the workshop format, there is a lot of time for discussion and interactions with others. Want to know something? Ask! In my experience, the pedigree of attendees is often outstanding, with a welcoming and inclusive disposition to newcomers looking to learn more about the field. None of us are experts in every field, and being able to learn firsthand from insightful and interactive discussions only bolsters the learning experience. Another advantage extends past the workshop itself—the forging of professional relationships that can provide valuable advice, consultation, and collaboration long after the event is finished.

Over the last five years, I’ve seen a plethora of emails turn up in my inbox, proclaiming the 2nd or 3rd annual workshop on such and such a topic. These organizations are getting the ball rolling. I’ve even seen a number of 1st annual invitations. While I haven’t kept track of how many of these newer workshops survive to maturity, two established events that I’m particularly fond of are the International ESD Workshop, which is starting to ramp up for its 2016 event (its tenth year), and the International Integrated Reliability Workshop, which can trace its origins as far back as 1982. For me, these track records demonstrate that smaller groups with a high degree of interaction and discussion not only keep participants focused on the program material, but also build a sense of community within a tight-knit group.

I’d be interested to hear about your experiences of attending both conferences and workshops. For me, each has its place, but the workshop format provides a significantly more robust and in-depth framework to share a lot of ideas in a short, concentrated period of time, while really getting to know colleagues in your field.

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

LEF/DEF IO Ring Check Automation

October 26th, 2015

By Matthew Hogan, Mentor Graphics

Background

Designing today’s complex system-on-chips (SoCs) requires careful consideration when planning input/output (IO) pad rings. Intellectual property (IP) used in an SoC often comes from multiple IP vendors, and can range from digital/analog cores to IO pads, power/ground pads, termination cells, etc. Each vendor has its own rules for these IO rings to protect the IP from electrostatic discharge (ESD) and other reliability concerns. The constraints for these rules are different from one foundry to another, as well as from one technology node to another, or from one IP vendor to another (Figure 1).

Fig. 1: Sample rule file constraints from IP supplier [1].

While detailed rules are available from each IP vendor on how to comply with their IO ring layout rules, what is not generally available are instructions for applying those rules in the presence of other IPs. Typically, SoC designers have IP from a CPU supplier and memory supplier, in addition to the IO cells. A holistic and integrated approach that allows for all of these IP pads to co-exist is needed.

Foundries provide a design rule manual (DRM) that contains guidelines for pad cell placement to protect against ESD. Typical rules found in a DRM include:

  • Cell types that can or must be used in an IO ring
  • Minimum number of a specified power cell per IO ring section and given power domain
  • Maximum spacing between two power cells for a given power/ground pair in a power domain
  • Maximum distance from the IO ring section termination (breaker cells) to every power cell
  • Maximum distance from IO to closest power cells
  • Cells that must be present at least once per corresponding power domain section
  • Constraints for multi-row implementation

The SoC designer is then tasked with achieving the desired ESD protection across the SoC while incorporating all of the dissimilar cells and their unique rules. Needless to say, manual approaches are time-consuming, and subject to the ever-present human error factor. An automated solution that can implement the checking needed to consider all rule interactions provides a substantive improvement in both time and quality of results.
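As a rough sketch of the kind of constraint such an automated checker evaluates, the following Python fragment tests one of the DRM-style rules listed above: the maximum allowed spacing between consecutive power/ground pads along a ring section. The cell names, coordinates, and spacing limit are hypothetical, and the logic is a simplification of what a sign-off tool such as Calibre PERC actually performs.

# Hypothetical sketch: flag gaps between consecutive power/ground pads
# in one IO ring section that exceed a DRM-style maximum spacing.
# Cell names, positions, and the 400 um limit are illustrative only.
MAX_PG_SPACING_UM = 400.0  # assumed DRM limit for this power domain

# (cell_name, position along the ring edge in um) -- placeholder data
ring_section = [
    ("PVDD1", 0.0),
    ("PAD_IO", 120.0),
    ("PAD_IO", 250.0),
    ("PVSS1", 380.0),
    ("PAD_IO", 700.0),
    ("PVDD1", 900.0),
]

def is_power_pad(cell_name: str) -> bool:
    """Very rough name-based classification of power/ground pads."""
    return cell_name.startswith(("PVDD", "PVSS"))

def check_pg_spacing(section, max_spacing):
    """Return (prev, curr, gap) tuples where consecutive power pads are too far apart."""
    power_pads = sorted((pos, name) for name, pos in section if is_power_pad(name))
    violations = []
    for (p0, n0), (p1, n1) in zip(power_pads, power_pads[1:]):
        gap = p1 - p0
        if gap > max_spacing:
            violations.append((n0, n1, gap))
    return violations

for n0, n1, gap in check_pg_spacing(ring_section, MAX_PG_SPACING_UM):
    print(f"Violation: {gap:.1f} um between {n0} and {n1} exceeds {MAX_PG_SPACING_UM} um")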

LEF/DEF IO Ring Checker

In collaboration with an IP design company, Mentor Graphics developed an automated framework (Figure 2) to verify SoC compliance with these foundry IO placement rules, using the Calibre® PERC™ reliability verification tool. The Calibre PERC tool can combine both the geometrical and electrical constraints of a design to perform complex checks that incorporate layout restrictions based on electrical constraints or variations.

Fig. 2: Automated framework for IO ring verification.

This IO ring checker framework provides the following characteristics:

  • No technology dependencies: ESD placement rules are coded in the IO ring checker, and are not constrained by or subject to any technology file dependencies.
  • Easy set-up: The constraints interface (Figure 3) allows for customized cell naming conventions and spacing variables.

Fig. 3: The constraints interface enables easy customization.
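As a purely illustrative sketch of what such a constraints set-up might capture, the following dictionary groups hypothetical cell naming conventions and spacing variables; the names and values are invented for this example and do not come from the actual tool interface.

# Hypothetical constraints configuration for an IO ring checker.
# Keys and values are illustrative; a real set-up would follow the
# IP vendor's rule documentation and the tool's own interface.
io_ring_constraints = {
    # Cell naming conventions (regular expressions)
    "power_cell_pattern":   r"^PVDD\d+",
    "ground_cell_pattern":  r"^PVSS\d+",
    "breaker_cell_pattern": r"^PBRK\d+",
    "io_cell_pattern":      r"^PAD_IO",
    # Spacing variables (um)
    "max_power_pair_spacing": 400.0,
    "max_io_to_power_dist":   250.0,
    "max_breaker_to_power":   150.0,
    # Structural requirements
    "min_power_cells_per_domain": 2,
}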

Figure 4 shows the IO ring checker verification flow. An initial placement of IO cells is made, with only the IO cells present in the DEF. Violations identify locations where the initial IO ring placement must be changed, and this process continues until final placements are available. As the design nears completion, all cells and routing are present, and another round of validation is performed until the design is complete, with all IO ring placement errors corrected for final sign-off validation.

Fig. 4: IO ring checker verification flow.

Outputs

Two results database (RDB) files are output as part of the checking flow:

  • IO_ring_checker.rdb: contains violations
  • debug.rdb: contains additional information for debug

Violations can be highlighted from the IO_ring_checker.rdb file (Figure 5).

Fig. 5: The IO_ring_checker.rdb file provides quick identification of rule check violations.

If more details are required, the debug.rdb file (Figure 6) can be used to display complementary information that describes the violation more explicitly, such as:

  • Cell identification (power clamp families, breakers, etc.)
  • Sub-check results (for each power domain)

Fig. 6: The details provided in the debug.rdb file help designers understand and correct violations quickly and accurately.

For ease of use, results can be loaded and highlighted in a results viewing environment, such as the Calibre RVE tool (Figure 7), using a DEF or GDS Database. Highlighting options in the Calibre RVE tool include:

  • Rule violation: IO distance to PWR/GND cells (in red)
  • Cell marking: IO PWR/GND pairs (in green)
  • Power domain: digital section (in blue)

Fig. 7: A results viewing environment allows designers to visualize error results.

Results

The LEF/DEF IO Ring Checker framework was applied to multiple GPIO test chips. The results demonstrate the effectiveness and speed of the framework in applying multiple rule checks across the chip (Table 1).

Table 1: Automated verification results from multiple GPIO test chips.

These results not only demonstrate the accuracy of this approach, but also the speed of such a solution. Manual checking could easily take days to validate, and still be error-prone. With more and more IP being implemented in SoC designs, the number of rules is only expected to grow, further complicating the verification task.

Conclusion

A flexible and automated approach to IO pad ring placement verification allows designers to focus on their design, using the IO ring checking framework and the Calibre PERC tool to confirm the validity of the layout they create. The ability to perform this validation on LEF/DEF designs allows early completion of this task in the design cycle, while there is still an opportunity to optimize and refine the design before beginning final signoff verification.

Automated approaches for advanced reliability verification issues such as IO ring checking are providing significant benefits within SoC design flows. Designers who take advantage of these techniques and tools can deliver highly reliable designs, validating them quickly and efficiently, all with greater confidence in the quality of their final products. With the complexity of IP eco-systems used within SoCs constantly on the rise, and the allure of new markets such as automotive, with its exacting reliability standards, the use of automated reliability verification for these complex interactions can only be expected to grow. Adding determinism and repeatability to your IO ring checking strategy is a strong move towards improving your reliability verification capabilities.

References

[1] EDA Tool Working Group (2014), “ESD Electronic Design Automation Checks (ESD TR18.0-01-14)”, New York: Electrostatic Discharge Association, January 2015, https://www.esda.org/standards/device-design/electronic-design-automation-eda/view/1713

Design Rule Checking for Silicon Photonics

September 30th, 2015

By Ruping Cao, Mentor Graphics

The silicon photonics integrated circuit (PIC) holds the promise of providing breakthrough improvements to data communications, telecommunications, supercomputing, biomedical applications, etc. [1][2][3][4]. Silicon photonics stands out as the most competitive candidate among potential technologies, due in large part to its compactness and potential low-cost, large-scale production capability leveraged by current CMOS fabrication facilities. However, as silicon PICs gain success and prospects, designers find themselves in need of an extended design rule checking (DRC) methodology that can ensure the required reliability and scalability for mass fabrication.

Traditional DRC ensures that the geometric layout of a design, as represented in GDSII or OASIS, complies with the foundry’s design rules, which guide designers to create integrated circuits (ICs) that can achieve acceptable yields. DRC compliance is the fundamental checkpoint an IC design must achieve to be accepted for fabrication in the foundry. DRC results obtained from an automated DRC tool from a trusted EDA provider are required to validate the compliance of a design with the physical constraints imposed by the technology.

However, traditional DRC uses one-dimensional measurements of features and geometries to determine rule compliance. PICs present new geometric challenges and novel device and routing designs, where non-Manhattan-like shapes—such as curves, spikes, and tapers—exist intentionally. These shapes expand the complexity of the DRC task, even to the extent that it is impossible to fully describe some physical constraints with traditional one-dimensional DRC rules.

To address the DRC challenge in photonic designs, new verification techniques are required. At Mentor, we developed the Calibre® eqDRC™ technology, an extension to the Calibre nmDRC tool. The Calibre eqDRC functionality provides equation-based statements that extend the capabilities of traditional DRC, allowing users to analyze complex, multi-dimensional interactions that are difficult or impossible to verify using traditional DRC methods. While the development of the Calibre eqDRC functionality was originally motivated by the difficulty of IC physical verification at advanced technology nodes [5], the Calibre eqDRC process is equally adept at satisfying PIC geometrical verification requirements. Users can define multi-dimensional feature measurements with flexible mathematical expressions that can be used to develop, calibrate and optimize models for design analysis and verification [6][7]. Let’s look at a couple of examples.

False Errors Induced by Curvilinear Structure

Current EDA tools support layout formats such as GDSII, in which geometric shapes are represented as polygons (Manhattan design). The vertices of these polygons are snapped to a grid, the size of which is specified by the technology. This mechanism produces specific DRC problems for photonic designs, which include curvilinear shapes (like bends for routing) in a range of device structures; these shapes derive from the requirement of total internal reflection for light guiding while minimizing light loss. With traditional EDA tools, the curved design layer is fragmented into sets of polygons that approximate the curvilinear shape, which results in some discrepancy from the design intent.

While this discrepancy of a few nanometers (dependent on the grid size) is negligible compared to a typical waveguide design with a width of a few hundred nanometers, its impact on DRC is significant. The tiniest geometrical discrepancy can generate false DRC errors, which can add up to a huge number, making the design nearly impossible to debug. Figure 1 shows a curved waveguide design layer, with the inset figure showing a DRC violation of minimum width. Although the waveguide is correctly designed, there is a discrepancy in width value between the design layer (off-grid) and the fragmented polygon layer (on-grid), creating a false width error. Even though these properly designed structures do not violate manufacturability requirements, they generate a significant number of false DRC errors. Debugging or manually waiving these errors is both time-consuming and prone to human error.

Figure 1. Design of a curved waveguide on a 1 nm grid. The enlarged view shows the polygon layer that flags the width error of the waveguide. The polygon vertices are on-grid, which results in the discrepancy in width measurement.

By taking advantage of the Calibre eqDRC capabilities, users can query various geometrical properties (including the properties of error layers), and perform further manipulations on them with user-defined mathematical expressions. Therefore, in addition to knowing whether the shape passes or fails the DRC rule, users can also determine any error amount, apply tolerance to compensate for the grid snapping effect, perform checks with property values, process the data with mathematical expressions, and so on.
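As a rough illustration of the tolerance idea (not Calibre eqDRC syntax), the following Python sketch waives width deviations that are smaller than the worst-case grid-snapping error, so only genuine violations remain; the rule value, grid size, and measurements are placeholders.

# Illustrative filtering of false width errors caused by grid snapping.
# Numbers are hypothetical: with a 1 nm layout grid, each polygon edge
# can shift by up to half a grid step, so a width can be off by ~1 nm.
MIN_WIDTH_NM = 400.0           # assumed minimum waveguide width rule
GRID_NM = 1.0                  # layout grid
SNAP_TOLERANCE_NM = GRID_NM    # worst-case width discrepancy from snapping two edges

measured_widths_nm = [399.2, 399.8, 396.5, 400.3]  # placeholder measurements

for w in measured_widths_nm:
    deficit = MIN_WIDTH_NM - w
    if deficit <= 0:
        verdict = "pass"
    elif deficit <= SNAP_TOLERANCE_NM:
        verdict = "waived (within grid-snapping tolerance)"
    else:
        verdict = f"real violation, short by {deficit:.1f} nm"
    print(f"width {w:.1f} nm: {verdict}")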

Multi-dimensional rule check on taper structure

Another important photonic design feature that does not exist in IC design is the taper, or spike (any geometrical facet where the two adjacent edges are not parallel to each other), as shown in Figure 2. This kind of geometry exists intentionally, especially in the waveguide structure, where the optical mode profile is modified according to the cross-section variation (including the width from the layout view, and the depth determined by the technology).

Figure 2. Tapers are a common construct in photonics designs.

The DRC check to ensure manufacturability of these structures must flag taper ends that have been thinned down too far, which can lead to breakage and possible diffusion of material to other locations on the chip, creating physical defects. A primitive rule to describe this constraint could be stated as:

minimum taper width should be larger than w; otherwise, if it is smaller than w, the included angle at the taper end must be larger than α.

This rule is a simple form of describing the constraint that, as the taper angle increases, the taper end width can decrease. In even simpler words, a sharper pointy end is allowed as taper end width increases. The implementation of this rule is impossible with one-dimensional traditional DRC, since more than one parameter is involved at the same time.

When using the eqDRC capability, however, a multi-dimensional check can be written:

Sharp_End := angle(wg, width < w) < α

where angle stands for the DRC operation that evaluates the angle of the taper end with a width condition (smaller than w). This is a primitive check example, but it serves to show the power of the Calibre eqDRC functionality in photonics verification.
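For readers unfamiliar with DRC syntax, the same two-parameter decision can be sketched in ordinary code as below; the width and angle thresholds are placeholders, and a real check operates on measured layout properties rather than scalar inputs.

# Hypothetical thresholds for the taper-end rule described above.
W_MIN_UM = 0.15          # minimum taper-end width w (placeholder)
ALPHA_MIN_DEG = 30.0     # minimum included angle alpha when width < w (placeholder)

def taper_end_violation(width_um: float, included_angle_deg: float) -> bool:
    """Flag a taper end that is both narrower than w and sharper than alpha."""
    return width_um < W_MIN_UM and included_angle_deg < ALPHA_MIN_DEG

# Example: a 0.10 um wide taper end with a 20 degree included angle is flagged.
print(taper_end_violation(0.10, 20.0))   # True  -> violation
print(taper_end_violation(0.10, 45.0))   # False -> allowed (blunt enough)
print(taper_end_violation(0.20, 10.0))   # False -> allowed (wide enough)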

Modeled rule check on taper structure

The previous two examples primarily demonstrate the capability of the Calibre eqDRC process to manipulate property values to implement more precise rules for photonic-specific designs. As discussed, the Calibre eqDRC technique allows custom manipulation of various measurable characteristics of layout objects, offering greater flexibility in rule definition and coding. The examples imply the availability of user-defined expressions for property data manipulation, which can be enabled by an API for dynamic libraries or some other means, so that mathematical expressions are available through built-in languages such as Tcl.

However, the Calibre eqDRC process also enables complex DRC checks or yield prediction by applying user-defined models. In Figure 3, we’ll expand the taper DRC check to demonstrate eqDRC’s modeling capability.

Figure 3. (A) Depiction of design rule for tapers. w is the minimum width of the taper; w1, w2 and w3 are the width values (w1 < w2 < w3); α is the included angle of the two adjacent edges of the taper end; α1, α2 and α3 are the angle values (α1 > α2 > α3). (B) Plot of the rule (three separate rules are used in this case). The red line represents the modeled rule (violations occur below the curve).

Figure 3(A) depicts the rules that might be applied to those taper designs, bearing in mind that the constraint of width is correlated with the angle of the taper end. With a traditional one-dimensional rule, we can measure the critical angle value at discrete width values, and describe the fabrication constraint with three separate rules:

Rule 1: sharp_end := angle(wg, width ≥ 0, width < w1) < α1

Rule 2: sharp_end := angle(wg, width ≥ w1, width < w2) < α1

Rule 3: sharp_end := angle(wg, width ≥ w2, width < w3) < α2

Of course, additional critical angle values can be probed to add more rules to this rule set to better fit the model, but that increases the complexity of rule check tasks. So, here we have an inevitable compromise between the rule check complexity and physical constraint description integrity.

In fact, the interpolation of the critical conditions (the relation of width and angle) can be expressed with a model (which should come from research and/or experimental results). Figure 3(B) depicts the constraints on width and angle given by the above rules (the shaded area). The model of critical angle against width is provided by the interpolation of the values. Here, no values are provided, and the model is only qualitative; a real model would require further research and confirmation by experimental results. Nevertheless, the example demonstrates that photonics designs require more geometric flexibility, which leads to more complex geometrical verification.

Fortunately, users can implement such models using the Calibre eqDRC capability, avoiding the dilemma of raising rule check complexity or reducing DRC accuracy. Since user-defined mathematical expressions are allowed, the relation of critical angle and width can be expressed in the rule check as follows:

sharp_end := f(width(wg)) / angle(wg) > 1

where width and angle are the DRC operations that evaluate the minimum width and included angle of the taper end, respectively; function f is the model relating the critical angle αc to the width w: αc = f(w); and w is the actual measured width value. This syntax fully describes the physical constraint given by the model, as depicted by the red line in Figure 3(B). In addition to being more accurate, this rule check replaces the three rule checks previously used, which simplifies the rule writing and rule check procedure.
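To make the modeled form concrete, the sketch below interpolates a hypothetical critical-angle curve αc = f(w) from a few sample points and flags a taper end whenever f(width)/angle > 1, i.e., whenever its included angle falls below the modeled critical angle. The sample points are invented for illustration; a real model would come from process data.

import numpy as np

# Hypothetical samples of the critical-angle model alpha_c = f(w):
# as the taper-end width grows, a sharper (smaller) included angle is allowed.
widths_um = np.array([0.05, 0.10, 0.15, 0.20])        # placeholder widths
crit_angles_deg = np.array([60.0, 40.0, 25.0, 15.0])  # placeholder critical angles

def critical_angle(width_um: float) -> float:
    """Piecewise-linear interpolation of the critical-angle model f(w)."""
    return float(np.interp(width_um, widths_um, crit_angles_deg))

def sharp_end_violation(width_um: float, included_angle_deg: float) -> bool:
    """Equivalent to f(width)/angle > 1: the taper is sharper than the model allows."""
    return critical_angle(width_um) / included_angle_deg > 1.0

print(sharp_end_violation(0.08, 30.0))  # True: model requires ~48 deg at 0.08 um
print(sharp_end_violation(0.18, 30.0))  # False: model requires only ~19 deg at 0.18 um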

There are several advantages to be gained by applying eqDRC to the physical verification of photonics designs.

  • Ease debugging effort
    Unlike traditional DRC which provides only pass and fail results, the Calibre eqDRC process produces customized information that can facilitate debugging efforts. Such information can identify the severity of the violation, suggest possible corrections, etc. This information can be displayed on the layout, helping the engineer to quickly debug and fix the violation.
  • Reduce false DRC errors
    For photonic designs, where the existence of curvilinear shapes can lead to false errors with traditional DRC, the Calibre eqDRC method makes it possible to reduce or eliminate these false errors. Because the user can apply tolerances and conditions to the check criteria, most false errors can be filtered out of the results. In addition, further investigation of any remaining errors is made easier with the availability of customized information.
  • Enable multi-dimensional checks
    Traditional DRC can measure and apply pass-fail tests in only one dimension, while the Calibre eqDRC process allows the assessment of multi-dimensional parameters. Because photonic designs have a greater degree of freedom in design geometries, in many cases multiple parameters are required to describe a physical constraint. Presently, multiple-parameter analysis is only possible with the Calibre eqDRC technology.
  • Reduce rule coding complexity and improve accuracy
    With the Calibre eqDRC process, we can apply user-defined mathematical expressions when processing layout geometry characteristics. In this way, multi-dimensional parameters that interact with each other when involved in a physical constraint can be abstracted as a model to be applied in a rule check. This not only simplifies the rule coding and rule check procedure, but also improves the accuracy of the check by replacing discrete rules with the model, which covers all combinations of parameters in one continuous function. For photonics cases where physical constraints are more complex and applied more universally, the rule coding efficiency and accuracy improvement become more important than ever. With the Calibre eqDRC approach, physical constraints can be applied closest to their design intent, and in a straightforward manner using a model description.

As photonic circuit designs allow and require a wide variety of geometrical shapes that do not exist in IC designs, traditional DRC struggles to fulfill the requirements for reliable and consistent geometrical verification of such layouts. With the availability of property libraries to users, and the ability to interface these libraries with a programmable engine to perform mathematical calculations, the Calibre eqDRC technique offers an accurate, efficient, and easily debugged DRC approach for PICs. By finding errors that would otherwise be missed, and flagging far fewer false errors, the Calibre eqDRC technology enables the accurate and efficient geometrical verification needed to make silicon photonics commercially viable.

References

[1]         Goodman, J. W., Leonberger, F. J., & Athale, R. A. (1984). Optical interconnections for VLSI systems. Proceedings of the IEEE, 72(7), 850–866.

[2]         Haurylau, M., Chen, G., Chen, H., Zhang, J., Nelson, N. A., et al. (2006). On-Chip Optical Interconnect Roadmap: Challenges and Critical Directions, 12(6), 1699–1705.

[3]         Kash, J. A., Benner, A. F., Doany, F. E., Kuchta, D. M., Lee, B. G., Pepeljugoski, P. K., Schares, L., et al. (2010). Optical interconnects in exascale supercomputers. 2010 IEEE Photonics Society’s 23rd Annual Meeting (pp. 483–484). IEEE.

[4]         Chrostowski, L., Grist, S. M., Schmidt, S., & Ratner, D. (2012). Assessing silicon photonic biosensors for home healthcare. SPIE Newsroom, 10–12.

[5]         Hurat, P., & Cote, M. (2005). DFM for Manufacturers and Designers, Proc. SPIE 5992, 25th Annual BACUS Symposium on Photomask Technology (Vol. 5992, 59920G).

[6]         Pikus, F. G. (2010). What is eqDRC? ACM SIGDA Newsletter, 40(2), 3–7.

[7]         Pikus, F. G., Programmable Design Rule Checking. U.S. Patent Application US20090106715 A1. http://1.usa.gov/1KIYfhy

Author

Ruping Cao is a PhD student presently completing an internship with Mentor Graphics in Grenoble, France. One of the areas she is investigating is the application of electronic design automation techniques and tools to the verification of integrated circuit designs that incorporate silicon photonics. Ruping holds a Bachelor’s degree in Microelectronics from East China Normal University, and a Master’s degree in Nanoscale Engineering from l’École Centrale de Lyon. She may be reached at ruping_cao@mentor.com.

The Changing (and Challenging) IC Reliability Landscape

August 20th, 2015

By Matthew Hogan, Product Marketing Manager, Calibre Design Solutions, Mentor Graphics

It seems that a laser focus on integrated circuit (IC) reliability is all around us now. Gone are the days when a little “over design,” or additional design margin, could cover the reliability issues in a design layout. Designers now need to articulate to partners, both internal and external, just how well their designs function over time and within their intended environment.

Functional safety and the push from the automotive electronics industry with ISO 26262 are not the only realms where this critical focus is being applied. General consumer devices that are always on, manufactured in the tens to hundreds of millions, also benefit from eliminating reliability issues at the IC design stage. The broad interest being shown across a wide range of process nodes, from the largest, most-established nodes to the emerging “bleeding edge” nodes, demonstrates this shift in attitude about and consideration given to reliability issues.

Many IC designers and verification teams no longer consider the obvious DRC and LVS milestones as sufficient stopping points—they continue on to advanced reliability checks aimed at increasing the longevity, performance and quality of their designs. They have taken the proactive approach of looking “one layer deeper” towards quality to avoid these subtle design problems that will impact the lifetime operation of their products.

Earlier this year, I saw a great article on what some folks are doing on the modeling side by considering random telegraph noise (RTN), and its contribution to negative-bias temperature instability (NBTI) failures and shifts in device VT [1]. It reminded me of the work I was anticipating seeing at the 2015 International Reliability Physics Symposium (IRPS). With a focus on reliability, you’d expect advanced detection and verification topics to shine and garner great interest, and they did! The conference organizers also presented the Best Paper and Outstanding Paper awards from last year’s conference. A. Oates and M.-H. Lin from Taiwan Semiconductor Manufacturing Company (TSMC) took the honors for Best Paper with “Electromigration Failure of Circuit-Like Interconnects: Short Length Failure Time Distributions with Active Sinks and Reservoirs” [2]. For Outstanding Paper, it was the team from TU Wien and IMEC (T. Grasser, K. Rott, H. Reisinger, M. Waltl, J. Franco and B. Kaczer) that delivered “A Unified Perspective of RTN and BTI” [3]. This work evaluates the suggestion that RTN and bias temperature instability (BTI) are due to similar defects. Understanding the failure mode of these effects is critically important, especially when designing accelerated test procedures to create data. Stress the device in the “wrong” way, and maybe you’re not capturing the degradation effects you think you are.

Some reliability failure modes are more familiar to designers than others, just because you tend to hear about them more often, including electromigration (EM), electrical overstress (EOS), and electrostatic discharge (ESD). With standards now calling out effects like charged device model (CDM), hot carrier injection (HCI), NBTI, and others, IC designers and verification specialists are finding there’s a whole new set of acronyms to learn about and remember. Not familiar with these? Now is the time to study up. There is increasing pressure to have validated mitigation strategies for these effects in place for the physical design implementation stage.

What’s That Mean?

To help those of you new to this field, here’s a brief introduction to the effects I just mentioned. There are many, many great references out there, and I’d encourage you to start exploring reliability design and verification resources, if you’re not already. I’ve supplied a few at the end of this article that would make a good beginning library.

CDM is a model that characterizes the susceptibility of an electronic device to damage from ESD. The CDM model is an alternative to the human body model (HBM), which is built on the generation and discharge of electricity from (you guessed it) a human body. The CDM model simulates the build-up and discharge of electricity that occurs in other circumstances, like handling during the assembly and manufacturing process. Devices that are classified according to CDM are exposed to a charge at a standardized voltage level, and then tested for survival. If the device withstands this voltage level, it is tested at the next level, and so on, until the device fails. CDM is standardized by JEDEC in JESD22-C101E [4].

HCI is a phenomenon in solid-state electronic devices where an electron (or “hole”) gains sufficient kinetic energy to overcome a potential barrier and break an interface state. The term “hot” does not refer to the overall temperature of the device, but to the effective temperature used to model carrier density. The switching characteristics of the transistor can be permanently changed, as these charge carriers can become permanently trapped in the gate dielectric of a MOS transistor. Because HCI degradation slows down circuit speeds, it is sometimes considered more of a performance problem than a reliability issue, despite potentially leading to operational failure of the circuit [5][6].

NBTI is a key reliability issue in MOSFETs that manifests as an increase in the threshold voltage. It also causes a decrease in drain current and transconductance of a MOSFET. This degradation exhibits logarithmic dependence on time. While NBTI is of immediate concern in p-channel MOS devices, since they almost always operate with negative gate-to-source voltage, the very same mechanism also affects nMOS transistors when biased in the accumulation regime (i.e., with a negative bias applied to the gate) [7]. In the past, designers had no effective means of detecting potential NBTI conditions, so often the only option was to design all parts of the chip to absolute worst-case corner conditions. Newer verification tools that can combine both geometrical and electrical data can now locate NBTI sensitivities.

There is a growing need to be familiar with these and other reliability concerns to meet the market requirements of today’s IC customers. Not to be caught resting on their laurels, however, the reliability experts are forging ahead on advanced reliability topics and techniques. One that caught my eye is an effort to develop a unified aging model of NBTI and HCI by leveraging the way degradation for both is modeled [8]. By employing a common reaction-diffusion (R-D) framework, researchers have proposed a geometry-dependent unified R-D model for NBTI and HCI [9]. How well will it work? Can it be used to develop design constraints? For many, these questions remain unanswered. I’m expecting that advances in this field will represent the next milestone of required checks that our devices will need to pass.

Some Final Thoughts

From a practical perspective, the difference between yield and reliability is when the failure occurs. Yield issues have been at the forefront for a good many years, but the industry now seems to be migrating to greater awareness of reliability issues. Tackling issues in this space requires an in-depth understanding of the physical layout and the interactions that may be present. Of course, the guidance and creation of design rules for overcoming these issues is in the hands of the reliability experts, and the development of the tools that will help designers perform the analysis and mitigation is in the hands of the EDA vendors, but based on the research and activity presently underway, I feel confident that the future of reliability design and verification is headed in the right direction.

Reliability Resources

Understanding Automotive Reliability and ISO 26262 for Safety-Critical Systems

Physical Verification Flow for Hierarchical Analog IC Design Constraints

Reliability Characterisation of Electrical and Electronic Systems, Jonathan Swingler (Editor), ISBN:978-1782422211 (January 2015)

References

[1]   The End of Silicon?, Katherine Derbyshire, May 2015, http://semiengineering.com/the-end-of-silicon/

[2]   A. Oates and M.-H. Lin, “Electromigration Failure of Circuit-Like Interconnects: Short Length Failure Time Distributions with Active Sinks and Reservoirs”, IRPS 2014, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860657

[3]   T. Grasser, K. Rott, H. Reisinger, M. Waltl, J. Franco and B. Kaczer, “A Unified Perspective of RTN and BTI”, IRPS 2014, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860643

[4]   Charged-device model, https://en.wikipedia.org/wiki/Charged-device_model

[5]   Hot-carrier injection, https://en.wikipedia.org/wiki/Hot-carrier_injection

[6]   John Keane, Chris H. Kim, “Transistor Aging”, IEEE Spectrum, May 2011, http://spectrum.ieee.org/semiconductors/processors/transistor-aging/0

[7]   Negative-bias temperature instability, https://en.wikipedia.org/wiki/Negative-bias_temperature_instability

[8]   Yao Wang, Sorin Cotofana, Liang Fang , “A Unified Aging Model of NBTI and HCI Degradation towards Lifetime Reliability Management for Nanoscale MOSFET Circuits”, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5941501

[9]   H. Kufluoglu and M. Ashraful Alam, “A Geometrical Unification of the Theories of NBTI and HCI Time-exponents and its Implications for Ultra-scaled Planar and Surround-Gate MOSFETs,” in IEEE International Electron Devices Meeting, IEDM Technical Digest, Dec. 2004, pp. 113 – 116. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1419081

Author

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of IIRW and the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

Manage Giga-Gate Testing Hierarchically

July 31st, 2015

By Ron Press, Mentor Graphics Corp

When designs get big, designers often implement hierarchical “divide and conquer” approaches through all phases and aspects of chip design, including design-for-test (DFT). With hierarchical test, the DFT features and test patterns completed on blocks are re-used at the top level. Hierarchical DFT is most useful for designs with 20 million gates or more, or when the same cores are used across multiple designs. The benefits of hierarchical test include reduction of test time, reduction of automatic test pattern generation (ATPG) run time, better management of design and integration tasks, and moving DFT insertion and pattern generation much earlier in the design process. Figure 1 depicts hierarchical test.

Figure 1. A conceptual drawing of hierarchical test.

Today’s hierarchical test methodologies are different from those used years ago. Hierarchical test used to mean simply testing one block in the top-level design while all the other blocks were black-boxed. The block being tested is isolated with special wrapper scan chains added at the boundary. While this method improves the run time and workstation memory requirements, it still requires a complete top-level netlist prior to creating patterns. In addition, patterns created this way cannot easily be combined with other similarly generated patterns in parallel; they are used exactly as they were constructed during ATPG (i.e., generated and applied from top-level pins).

Fortunately, the automation around hierarchical test has significantly improved in recent years. There are significant advantages in managing design and integration tasks and design schedule, including:

  • It moves DFT effort earlier in the design process because all the block DFT work and ATPG can be completed with only the block available; you don’t need to wait until the top-level design or test access mechanism (TAM) is complete.
  • It helps with core reuse and use of 3rd party IP. Block-level patterns and design information are saved as plug-and-play pieces that can be reused in any design.
  • It allows design teams in different locations to work on blocks without conflicts. A top-level design is never needed in order to generate the block-level patterns. Only block data is needed to verify that the block patterns can be effectively retargeted in the top-level design and that the top-level design can be initialized such that the block being tested is accessible.
  • It simplifies the integration of cores at the top level. Various block patterns are generated independently for each different block, but if the top-level design enables access to multiple blocks in parallel, then the patterns can be merged together automatically when retargeting to the top-level design.

In addition to the design, integration, and schedule benefits of hierarchical test, it also reduces ATPG run time and workstation memory requirements. Many people assume that generating patterns for the entire chip in one top-level ATPG run is more efficient for test time than testing blocks individually. In fact, hierarchical test is often 2-3x more efficient than top-level test. I’ll try to describe why with an example. Figure 2 shows two approaches to testing an IC. For each block we maintain a 200x chain-to-channel ratio. Thus, in the top-level ATPG case, 800 chains with 4 channels results in a 200x compression ratio. However, in the hierarchical case there are 12 channels available for each core, so to maintain a 200x compression ratio we would have 2400 chains. These chains would be 1/3 the length of core 3’s chains in the top-level ATPG case.

Figure 2. Flat ATPG tests all cores in parallel. In this case, core 1 requires fewer patterns than core 3. After 1000 patterns are applied, the four channels used for core 1 become wasted bandwidth.

Top-level ATPG pattern count will be dictated by the block with the largest number of patterns. In this case, the tester cycles will be equal to

{(core 3 scan cells) / (800 chains)} * 4000 patterns

The hierarchical ATPG will run each block sequentially in this case. So each block can use all 12 channels and would have 2400 chains internally:

{(core 1 scan cells) / (2400 chains)} * 1000

+ {(core 2 scan cells) / (2400 chains)} * 2000

+ {(core 3 scan cells) / (2400 chains)} * 4000

If each core has the same number of scan cells, then we get this comparison:

(scan cells) / 800 * 4000 = (scan cells) * 5 cycles for flat ATPG

and (scan cells) * {(1000/2400) + (2000/2400) + (4000/2400)}

= (scan cells) * 2.9 cycles for hierarchical ATPG

So in this case, hierarchical test takes about 60% of the test application time of flat ATPG.
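The comparison above is easy to reproduce in a few lines of code; the sketch below assumes three cores with equal scan-cell counts and uses the chain and pattern counts from the example (the absolute cell count is a placeholder, since only the ratio matters).

# Reproduce the flat vs. hierarchical tester-cycle comparison from the text.
# Assumes three cores with equal scan-cell counts, 800 chains per core (flat)
# vs. 2400 chains per core (hierarchical), and the per-core pattern counts
# used in the example.
scan_cells_per_core = 1_000_000          # placeholder value; only the ratio matters
patterns = {"core1": 1000, "core2": 2000, "core3": 4000}

# Flat ATPG: all cores tested in parallel, cycles set by the longest-running core.
flat_chain_length = scan_cells_per_core / 800
flat_cycles = flat_chain_length * max(patterns.values())

# Hierarchical ATPG: cores tested one at a time, each using all 12 channels.
hier_chain_length = scan_cells_per_core / 2400
hier_cycles = sum(hier_chain_length * p for p in patterns.values())

print(f"flat cycles:         {flat_cycles:,.0f}")
print(f"hierarchical cycles: {hier_cycles:,.0f}")
print(f"hierarchical / flat: {hier_cycles / flat_cycles:.0%}")  # ~58%, i.e. about 60%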

In hierarchical ATPG, the bandwidth of all channels is used on one block at a time. Thus, more chains can be used in each block to maximize the channel bandwidth. This can significantly improve the efficiency of DFT. The impact can be more pronounced when different blocks require different pattern types.

Hierarchical DFT flow

The flow starts with core-level DFT, which includes insertion of scan chains, generation and insertion of compression IP, and adding wrapper chains to isolate cores. You can reuse existing functional flops as shared wrapper cells, and only use dedicated wrapper cells if absolutely necessary.

The next step is core pattern generation. Using ATPG software, you create the core-level test patterns and generate gray-box models. The gray-box models are lightweight models for external test and pattern retargeting. You have some flexibility to preserve specific instances beyond what the automation might choose for you.

Pattern retargeting is next. You retarget the core-level patterns to the top level and can merge the pattern sets to create “intest” pattern sets. The full netlist is not needed for pattern retargeting; only the top-level logic and core-level gray-box models are required, or even black-box models accompanied by a “core description file” that provides information about the block-level test structure.

After pattern retargeting, you create top-level interconnect tests. When making the top-level “extest” patterns, the full netlist never needs to be loaded into memory, just the top-level logic and core-level gray-box models.

With some up-front design effort and planning, the biggest challenges of testing giga-gate SoCs can be addressed with a hierarchical DFT methodology.

For details about hierarchical DFT, you can download the whitepaper Divide and Conquer: Hierarchical DFT for SoC Designs.


Ron Press is the technical marketing manager of the Silicon Test Solutions products at Mentor Graphics. The 25-year veteran of the test and DFT (design-for-test) industry has presented seminars on DFT and test throughout the world. He has published dozens of papers in the field of test, is a member of the International Test Conference (ITC) Steering Committee, and is a Golden Core member of the IEEE Computer Society, and a Senior Member of IEEE. Press has patents on reduced-pin-count testing and glitch-free clock switching, and pending patents on 3D test.

Custom Layout Designers Need New Tools for New and Expanding Markets

May 27th, 2015

By Srinivas Velivala, Mentor Graphics

For a long time, digital was the darling of the semiconductor industry. But then a funny thing happened—the advent of cell phones and GPS and tablets and a zillion other new products made things like power consumption and battery life important market factors. But this new emphasis on analog and mixed-signal designs also brought new market pressure to custom designers. Now more than ever, time to market could mean the difference between so-so results and profitability. With that came the need to reduce design and verification timelines while still ensuring high-quality products.

In response to that demand, we introduced Calibre® RealTime, which provides interactive DRC feedback in a custom layout environment using the same sign-off Calibre design rule checking (DRC) deck that is used for batch Calibre DRC jobs. By enabling sign-off DRC during the design process, Calibre RealTime helped designers reduce the time to tapeout. Initially, the use model was intended for debugging DRC results in standard cells and block designs. As such, we included an integrated toolbar, so layout designers could highlight and step through DRC results in the order they were generated, or select a specific DRC check and step through the DRC results belonging to that check.

However, layout designers continued to expand the application of Calibre RealTime to larger designs, such as partial layout of a macro, or even full-chip designs, invoking it during final DRC review before tapeout (using a combination of batch Calibre and Calibre RealTime). With this use came a desire to see a complete picture of the DRC results: how many DRC checks are violated, how many DRC results are present in each check, how many DRC results can be disregarded at this design stage, and so on. Providing this type of analysis required an expanded interface GUI to allow layout designers to debug their DRC results efficiently.

The Calibre RealTime-RVE interface has the same look and feel as the Calibre RVE™ tool, giving custom layout designers the flexibility to analyze DRC results generated from a Calibre RealTime job and formulate an efficient strategy to debug and fix the DRC errors. The interface opens automatically after a Calibre RealTime DRC job run (Figure 1). Designers can select a specific DRC check and highlight the specific result(s) belonging to that check. Designers also get a clear description of the DRC check that has been violated. In this example, the description of the check indicates that this is a double patterning (DP) error.

Figure 1. DRC error results in the Calibre RealTime-RVE interface.

The Calibre RealTime toolbar and Calibre RealTime-RVE interface are always synchronized (Figure 2), allowing designers to highlight DRC results from either the toolbar or the interface.

Figure 2. The Calibre RealTime toolbar and Calibre RealTime-RVE interface are always in sync.

In addition, designers can display and sort DRC results by associated characteristics, reducing visual “clutter” and allowing them to focus more efficiently on their debugging tasks (Figure 3).

Figure 3. Custom designers can display and sort by error characteristics.

To maximize efficiency, designers can run Calibre RealTime DRC jobs on multiple designs in the layout environment, and browse all the results using the Calibre RealTime-RVE interface. The interface opens separate tabs to display the results generated from each design, preventing any mix-up or confusion, and ensuring that there is no additional delay. Designers can select any particular results tab and highlight the results from that tab. The Calibre RealTime-RVE interface automatically ensures that the DRC results are highlighted in the design window corresponding to the DRC results tab from which the highlight commands are issued.

Figure 4. DRC results for multiple designs are displayed separately.

As custom layout designers use Calibre RealTime in an ever-expanding set of use models, they can be confident they will be able to easily comprehend, analyze and debug the DRC results using the Calibre RealTime-RVE debug interface. Tools like this are essential to supporting the increasing market for custom designs while ensuring companies can produce reliable products in a timely, profitable manner.

Author

Srinivas Velivala is a Product Manager with the Design to Silicon Division of Mentor Graphics, focusing on developing Calibre integration and interface technologies. Before joining Mentor, he designed high-density SRAM compilers, and has more than seven years of design, field, and marketing experience. Srinivas holds a B.S. and M.S. in Electrical and Computer Engineering. In his spare time, he likes to travel and play cricket. He can be reached at srinivas_velivala@mentor.com.

OPC solutions for 10nm nodes and beyond

May 15th, 2015

By Vlad Liubich, OPC Product Manager for Design to Silicon, Mentor Graphics

“The report of my death was an exaggeration” [1]. Nothing better describes the current situation of modern ArF immersion (193i) lithography. With the continuous shrinking of IC devices and the inability of EUV lithography to meet high-volume manufacturing demands, the future of the 14nm node was heavily dependent on the availability of double patterning technology, which at that time was considered a bridge technology between 193i and EUV [2].

Significant efforts to enable double patterning technology were made on the design and computational lithography side of the business. With EUV lithography still delayed, the 10nm and 7nm technology nodes are heavily dependent on the availability of triple and quadruple patterning decomposition and OPC, as well as other supporting technologies.

The traditional OPC approach of correcting one pattern at a time does not take into account situations where inter-pattern interactions start playing a vital role. The main goal of OPC is to make sure the polygon on the mask will produce high-quality images in the photoresist layer. The OPC software compares the simulated resist image to the intended target image, a process referred to as OPC target convergence. Comparing the error on the wafer to the corresponding change on the mask gives the mask error enhancement factor (MEEF). For example, if a change of 1nm on the mask (at 1x) produces a change of 4nm on the wafer, then the MEEF is 4nm/1nm, or 4. The higher the MEEF, the harder it is to control the lithographic process, because small variations on the mask cause large errors on the wafer.
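The arithmetic is simple, but it is worth seeing how quickly a high MEEF consumes the wafer CD budget; the sketch below restates the 1nm/4nm example from the text, with the 0.5nm mask error budget chosen purely for illustration.

# MEEF relates a mask CD change (at 1x) to the resulting wafer CD change.
# Values restate the example in the text; the 0.5 nm budget is illustrative.
mask_cd_change_nm = 1.0
wafer_cd_change_nm = 4.0
meef = wafer_cd_change_nm / mask_cd_change_nm   # = 4.0

# With MEEF = 4, a 0.5 nm mask CD error alone consumes 2 nm of wafer CD budget.
mask_error_budget_nm = 0.5
print(f"MEEF = {meef:.1f}")
print(f"wafer CD impact of a {mask_error_budget_nm} nm mask error: "
      f"{meef * mask_error_budget_nm:.1f} nm")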

Target convergence in a high-MEEF environment has always been a challenge, but with increased pattern fidelity requirements, edge placement error margins are getting tighter and tighter. Aggressive insertion of sub-resolution assist features (SRAFs), either model- or rule-based, is the norm for the critical layers of advanced nodes, but it often leads to residual SRAF printing. Printed SRAFs cause divots in the resist layer that are transferred by the etch process into the dielectric.

Another new challenge is that smaller critical dimensions require thinner films, which makes the final height of the developed photoresist a concern because it leaves less tolerance for resist loss. Usually undetected during routine top-down measurements, resist top loss can cause wafer-level post-etch defects that reduce the integrated process window of the patterning step.

With the tighter process control requirements of advanced nodes, it becomes more important to eliminate systematic process variation, and OPC tools must be able to address the effects of variation. At 10nm and below, even layers that were not previously considered “lithographically critical” are becoming so.

Whether 193i lithography can provide a viable, cost-effective solution for advanced technology nodes depends to a significant degree on the ability of OPC software to provide a platform to compensate for or eliminate the concerns outlined in this introduction.

Tools to enable 10nm lithography

Because multi-patterning (MP) is required at 10nm, an OPC solution must be able to correct three or more patterns simultaneously. Figure 1 shows an example of OPC results for a triple-patterned layout.

Figure 1. Triple patterning OPC results for 10nm interconnect layer.

The experience gained during 22nm and 14nm technology development showed that standard OPC methods with sequential pattern processing are not adequate in the presence of inter-pattern constraints such as inter-pattern spacing and stitching. The loss of a couple of nanometers might seem insignificant at first glance, but with the diminishing overlay budget of multi-patterning solutions at advanced nodes, it may represent a significant patterning risk.

Figure 2. Stitch location (a) and inter-pattern space (b) after traditional OPC, where each pattern is processed sequentially. The same stitch and inter-pattern space locations corrected with the MP-aware OPC functionality are shown in (c) and (d), respectively.

In addition to traditional process window-aware correction, an MP-enabled OPC can improve the amount of overlap at the pattern-stitching regions and enforce inter-pattern spacing. Figure 2 shows an example of MP-aware OPC outperforming the traditional sequential correction and creating robust stitching regions that maintain healthy pattern separations. Compare the stitch location in (a) and inter-pattern space in (b), both of which are results from traditional OPC, to the same stitch and inter-pattern space when processed with MP-aware OPC. A 15% increase in overlap between the two patterns (c) and a 50% increase in spacing between the patterns (d) will directly translate into a healthier patterning process.

Together with the traditional OPC algorithms that solve fragment placement problems, MP-aware OPC should work with a modern multiple-fragment movement solver. A fragment movement solver for advanced nodes should incorporate the influence of neighboring fragments into the feedback control of fragment movements for full-chip OPC [3][4], an approach referred to as matrix OPC. The formation of the matrix is illustrated in Figure 3.

Figure 3. Edge placement error calculation and matrix generation in Calibre Matrix OPC, an edge-based, full-chip level, enhanced OPC that scales to large numbers of CPUs just as traditional OPC does and with comparable runtime.
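Conceptually, the matrix couples each fragment’s edge placement error (EPE) to the movement of its neighbors, and the correction solves for all fragment movements jointly rather than one edge at a time. The sketch below illustrates that linear-algebra picture on a toy 3x3 sensitivity matrix with invented numbers; it is only a conceptual illustration, not the Calibre implementation.

import numpy as np

# Toy illustration of matrix OPC: solve for fragment movements jointly,
# using a sensitivity matrix that captures how moving fragment j changes
# the edge placement error (EPE) at fragment i. All numbers are invented.
#
# Diagonal terms: a fragment's own movement dominates its EPE response.
# Off-diagonal terms: influence of neighboring fragments (high-MEEF coupling).
S = np.array([
    [1.0, 0.4, 0.1],
    [0.4, 1.0, 0.4],
    [0.1, 0.4, 1.0],
])  # nm of EPE change per nm of fragment movement

epe = np.array([3.0, -2.0, 1.5])  # current edge placement errors (nm)

# Independent (traditional) correction ignores the coupling:
moves_independent = -epe / np.diag(S)

# Matrix correction accounts for neighbor influence in one joint solve:
moves_matrix = np.linalg.solve(S, -epe)

print("independent moves (nm):", np.round(moves_independent, 2))
print("matrix moves (nm):     ", np.round(moves_matrix, 2))
# Residual EPE after applying each set of moves:
print("residual EPE, independent:", np.round(epe + S @ moves_independent, 2))
print("residual EPE, matrix:     ", np.round(epe + S @ moves_matrix, 2))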

Figure 4 compares the results of different OPC algorithms. Even compared to a specially tuned OPC recipe, matrix OPC achieves a significant convergence improvement.

Figure 4. Via layer, double patterning case. Convergence comparison between different flavors of OPC algorithms.

The next topic of this narrative is out-of-main-image-plane effects: phenomena that occur in the photoresist layer close to its surface, such as SRAF printing and resist top loss.

The ability to handle SRAF printing has been available for single-pattern applications for several years now, and it is important to ensure the same functionality is available for MP cases as well. Advanced solutions have overcome the complexity of handling multiple SRAF layers placed across multiple patterns, and have also added the capability to handle and correct negative SRAFs. An image of MP SRAF printing is shown in Figure 5. One might think this would add complexity to the OPC setup files, but there are ways to create a cleaner and simpler SRAF print avoidance interface while minimizing run-time impact through careful simulation management.

Figure 5. Interconnect layer, double patterning case. SRAF shape is eliminated due to printing. Polygons belonging to sraf_p2 are not shown.

A mask shape correction—based on a specially calibrated resist top-loss model—reduces the loss of material from the top of the photoresist surface. Photoresist top-loss correction in many cases can be treated as a special process window condition whose simulated contour is extracted from the upper layer of the photoresist, a phenomenon that is analogous to the SRAF printing case but different in the final outcome. Unlike SRAF print avoidance, the top-loss compensation has to be applied to the main shape in order to eliminate a potential hot spot. Figure 6 shows an example of such a correction carried out for the interconnect layer.

Figure 6. The image on the left shows hot spots related to resist top loss, which are eliminated in the picture on the right. The histogram shows the hot spot critical dimensions. Top loss-aware correction eliminates every location with critical dimensions <24nm.

In summary, the industry needs to adopt new OPC technologies at 10nm and below. With the wide acceptance of new-generation negative tone development photoresists, and the transition of OPC models from thin-mask approximations to more complex models that account for reticle 3D effects, there is no question that advanced OPC techniques like those described here will be required.

As technical challenges grow and intertwine with manufacturing process marginalities previously deemed non-critical, it is important that OPC engineers engage with their counterparts in EDA to develop the flows and setup files for their sub-14nm technologies. The increased flow complexity introduced by advanced OPC techniques can affect OPC recipe turn-around time, but there are strategies to control the impact and keep the OPC solutions production friendly.

References

  1. “Mark Twain Amused”, New York Journal, 2 June 1897
  2. W.H. Arnold, M.V. Dusa, J. Finders, “Metrology challenges of double exposure and double patterning,” Proc. SPIE, Vol. 6518
  3. N.B. Cobb, Y. Granik, “Model-based OPC using the MEEF matrix,” Proc. SPIE, Vol. 4889
  4. J. Lei, L. Hong, G. Lippincott, J. Word, “Model-based OPC using the MEEF matrix II,” Proc. SPIE, Vol. 9052

Vlad Liubich is a Product Manager for Calibre OPC at Mentor Graphics, with over 15 years of experience. Before joining Mentor, he served for 11 years in various engineering roles at Intel. He holds a BSc from the Physical Chemistry Department of the Moscow Institute of Steel and Alloys in Russia and an MSc from Ben-Gurion University of the Negev in Beer Sheva, Israel. Vlad can be reached at vlad_liubich@mentor.com.

Autonomous Systems and IC Reliability

March 12th, 2015

By Matthew Hogan, Mentor Graphics

A lot of attention has been placed recently on the research being done on self-driving cars. It seems that “almost everyone” from Mercedes Benz [1] to BMW [2], Volvo [3], Volkswagen [4] and others are working towards this goal. Partnerships with those not normally associated with automotive, like Google [5] [6], nVIDIA [7] [8] and most recently (claims of) Apple [9], are getting press for their accomplishments, or at least speculation on what they might do.

The grand ambitions for these autonomous systems rely on vastly complex software running on equally complex hardware, comprising innovative IC designs and manufacturing processes. Looming overhead is the constant concern for safety and reliability. From an IC perspective, this means making sure that your designs are robust and reliable over a wide range of operating conditions and time, and that they fail in a known, safe manner.

For many IC manufacturers already within the automotive ecosystem, the functional safety standard ISO 26262 is something they have embraced and are using as a competitive advantage [10] [11]. For those ramping up, I’m sure there will be many learning opportunities along the way. Certainly the promise and opportunities of self-driving cars are exciting enough to entice new entrants into this field. I expect that work on automobiles will also lead to advancements in other types of autonomous vehicles and devices. The automotive segment is certainly experiencing a lot of competition and focus, not only on functional safety applications, but on infotainment as well. Will companies choose to migrate their existing technology from less demanding disciplines? Will they struggle with regulators and with gaining acceptance from the already established supply chain? Probably.

Proving IC reliability requires a specific mindset and focus on quality, and a different type of IC verification than the typical foundry-mandated DRC, LVS, and ERC checks many of us have been using in the past. Validating that designs are robust over time, will age gracefully, and will fail in a known way will play an important role in the acceptance of new technology providers in this space. The established automotive players will undoubtedly want to continue their dominance in this demanding market.

But the adoption of self-driving cars will require not only technology readiness, but also acceptance by society and, in particular, our lawmakers. For example, this month’s IEEE Spectrum [12] showcases an article, “Radar Everywhere”. Radar has been a key technology in the advanced driver assistance systems (ADAS) available in many modern cars today. This technology seems to be gaining praise and drivers’ confidence. How autonomous will the next generation of these systems be? From an emotional perspective, I’m sure there are great divides within the community on how much “trust” to put in self-driving cars. How much safer will the roads be when self-driving cars are helping inexperienced new drivers and those at the other end of the age spectrum? If there were a bingle (minor collision), who would be responsible? In the United States, a number of states have made changes to local laws that facilitate the development and deployment of these systems [13] [14], as have other jurisdictions around the world [15].

Suitable laws need to be in place for these and other innovations to move forward – not only self-driving cars, but also other technologies that might be viewed with skepticism or require oversight by regulatory bodies. Recently, it seems that Amazon’s plans for its drone-based Prime Air delivery system [16] (see Fig 1) may have suffered a setback, at least in the United States [17].

Fig 1: Images of Amazon’s Prime Air delivery system.

Aimed at delivering packages in 30 minutes or less, these unmanned flying vehicles (aka drones) are subject to FAA regulations that require an operator to maintain visual line-of-sight contact with the craft. This requirement limits US-based deployment, but other locales are less stringent. The Australian start-up Zookal is planning to deliver textbooks in 2015 using drones supplied by Flirtey [18] [19] [20].

I was able to get a hold of Matthew Sweeney, Flirtey’s Founder and CEO, who provided some insight on their activities and technology. “Flirtey launched the world’s first drone delivery service in October 2013 and have conducted over 100 successful test deliveries of textbooks outdoors during its test phase in Sydney.” He also told me about some upcoming activities. “In early 2015 Flirtey is launching commercial drone delivery trials with selected customers in New Zealand.” Providing more details on the flight control systems used, Mr. Sweeney added, “Flirtey’s drones fly autonomously based on GPS coordinates, using deconflicted flight paths that are logged with the aviation authority to ensure safety, and lowering the package from the air to the end customer. We use multiple communication links so a human operator can intervene if required. If there are circumstances where cameras are used in the future, Flirtey will implement industry best practices to limit data collection to only the necessary information, protect data during transmission, and set up a regiment for deletion of stored data.” [21]

The avoidance systems being developed for self-driving cars could possibly be leveraged to make these flying drones even more autonomous and safer. GPS, collision avoidance systems, and object identification would all contribute to a more effective flight control system. Again though, the question of IC reliability comes to mind. What standards would IC manufacturers be held to for use in the autonomous flight systems of unmanned delivery drones? What thresholds are regulators and the public comfortable with for these markets? How harsh are the operating conditions for ICs in these systems? Would the functional safety standards developed for the automotive industry (ISO 26262 and others) be overkill, or equally valid? Lots of questions! It would seem to me that a convergence of technology might not be far off for these little drones, should there be the will (and by will, I mean financial incentive) to drive this market.

The Harry Potter books (by J. K. Rowling) describe a world where wizarding students receive deliveries by their own personal owl, complete with advanced collision avoidance and great night vision. How long will it be before a swarm of autonomous delivery drones, or self-driving cars, powered by reliable ICs is considered an everyday part of life? Are there different thresholds for adoption of these technologies depending on where you live (sparsely populated rural areas, or a dense metropolis)? For package delivery, hopefully you don’t live in an apartment building, which might pose somewhat of a problem [22], at least until some of those more annoying little details get sorted, like where to leave the package.

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of IIRW and the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

REFERENCES

[1] The New Mercedes Driverless Car Even Has The Driver’s Seat Facing Away From The Road, http://www.businessinsider.com/mercedes-new-self-driving-car-f-015-2015-1

[2] BMW hits the performance limits with its driverless car, http://www.cnet.com/news/bmw-hits-the-performance-limits-with-its-driverless-car/

[3] Volvo vows to put first self-driving cars in customers’ hands by 2017, https://autos.yahoo.com/blogs/motoramic/volvo-vows-the-first-self-driving-car-in-customers–hands-by-2017-202033465.html

[4] Volkswagen Shows Off Self-Driving Auto Pilot Technology For Cars, http://www.motorauthority.com/news/1062073_volkswagen-shows-off-self-driving-auto-pilot-technology-for-cars

[5] Google driverless car, http://en.wikipedia.org/wiki/Google_driverless_car

[6] Google partners with auto suppliers on self-driving car, http://www.reuters.com/article/2015/01/14/us-autoshow-google-urmson-idUSKBN0KN29820150114

[7] Nvidia Envisions Self-Driving Cars, http://www.eetimes.com/document.asp?doc_id=1325676

[8] Nvidia’s Tegra X1 aims to make driverless cars more reliable, http://www.computerworld.com/article/2864593/nvidias-tegra-x1-aims-to-make-driverless-cars-more-reliable.html

[9] Apple’s Automobile Project Said to Include Self-Driving Cars, http://www.macrumors.com/2015/02/14/apple-car-self-driving/

[10] Functional Safety for ISO 26262 and IEC 61508, http://www.freescale.com/webapp/sps/site/overview.jsp?code=FNCTNLSFTY&fsrch=1&sr=4&pageNum=1

[11] Infineon Introduces Dual-Sensor Package Devices for Safety Critical Automotive Applications; Redundant Sensor Architecture Supports ASIL D Systems and Helps Shrink System Footprint and Reduce Cost, http://www.infineon.com/cms/en/about-infineon/press/press-releases/2014/INFATV201410-003.html

[12] Radar Everywhere, IEEE Spectrum, Vol. 52, no. 2 (NA) Feb 2015, pp52-59, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024512

[13] Automated Driving: Legislative and Regulatory Action, http://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action

[14] States take the wheel on driverless cars, http://www.usatoday.com/story/news/nation/2013/07/29/states-driverless-cars/2595613/

[15] Driverless cars on British roads in six months as ministers change law to allow trials of Google-style vehicles, http://www.dailymail.co.uk/sciencetech/article-2710370/Driverless-cars-British-roads-year-ministers-change-law-allow-trials-Google-style-vehicles.html

[16] Amazon Prime Air, http://www.amazon.com/b?node=8037720011

[17] FAA shoots down Amazon’s drone delivery plans, http://www.usatoday.com/story/tech/2015/02/15/amazon-cool-to-drone-rules/23473791/

[18] Zookal will deliver textbooks using drones in Australia next year, http://www.theverge.com/2013/10/15/4840706/zookal-will-deliver-textbooks-with-drones-in-australia

[19] Flirtey, http://flirtey.com/

[20] Flirtey delivery drone startup spreads its wings, http://www.gizmag.com/australian-drone-startup-flirtey-wings-overseas/33656/

[21] Private Communication, Matthew Sweeny, CEO and Founder, Flirtey

[22] The futurist: 37 critical drone-delivery problems, https://www.cobizmag.com/articles/the-futurist-37-critical-drone-delivery-problems

Cutting fab costs and turn-around time with smart, automated resource management

January 30th, 2015

By Mark Simmons, Product Marketing Manager, Calibre Manufacturing Group, Mentor Graphics

The competition for market share is brutal for both the pure-play and integrated device manufacturer (IDM) foundries. Success involves tuning a lot of knobs and dials. One of the important knobs is the ability to continually meet or exceed aggressive time-to-market schedules. There are a multitude of facets that enable fabs to hold true to their contracts, yet there are also counter forces that impede their ability to do so, such as the availability of hardware and software and how efficiently those resources are utilized. It is very challenging for companies to always move product out on time. Remember, this is not only an explicit promise to customers, but also represents cost savings to the fab.

Much of the Mask Data Prep (MDP) flow involves a series of contiguous data processing steps, commonly referred to as jobs or tasks. Traditionally, each job requires a pre-designated allocation of both hardware CPU cores and software licenses to be available for use. If those resources are available, the job can start as soon as it is launched. If they are not, the job has to sit in a queue until sufficient resources free up, and that queue time adds directly to the total manufacturing time.

Another goal is to free resources so that other jobs can be adequately resourced and start. With any multi-threaded, distributed processing software, there is an inherent penalty in how many CPU cores are actively used at any given time over the lifetime of a job: as the remaining work decreases, so does the number of actively processing CPU cores. More and more CPUs become idle, yet they cannot be disconnected from that job and used elsewhere. This inefficiency affects every job, and when summed across all jobs in a cluster, it results in a compounded loss of CPU cores and software licenses that could have been used more efficiently and effectively.

To overcome the delay in launching jobs and the underutilization of cluster resources, fabs can invest in software designed to make this production flow more efficient. Such software dynamically alters multiple jobs’ resource allocations over time as a function of each job’s actual need. This allows jobs to start with fewer resources, which lets them be launched earlier from the queue, and it ensures that unused resources are returned to the cluster pool for consumption by other jobs. In doing so, it effectively maximizes resource utilization across the entire cluster for all jobs. By performing this dynamic resource allocation from a cluster-level perspective, more work can be done in less time with the same resources, and doing more work helps fabs stay on schedule.
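Conceptually, a rebalancing pass over the cluster might look like the sketch below: reclaim cores that running jobs no longer use, then grant the freed cores to queued or under-resourced jobs. The job records, core counts, and greedy policy are hypothetical, and this is not the actual product implementation.

# Minimal sketch of cluster-level dynamic resource allocation. "wanted" is the
# number of cores a job can currently put to use; all numbers are invented.

def rebalance(jobs, total_cores):
    """One greedy reallocation pass over all jobs sharing a cluster."""
    # 1. Reclaim cores a job holds but is no longer using (tail of its run).
    for job in jobs:
        idle = job["allocated"] - job["active"]
        if idle > 0:
            job["allocated"] -= idle
    free = total_cores - sum(j["allocated"] for j in jobs)

    # 2. Grant freed cores to jobs that can use more, smallest shortfall first,
    #    so queued jobs can start with a minimal allocation instead of waiting.
    for job in sorted(jobs, key=lambda j: j["wanted"] - j["allocated"]):
        grant = min(job["wanted"] - job["allocated"], free)
        if grant > 0:
            job["allocated"] += grant
            free -= grant
    return jobs, free

jobs = [
    {"name": "opc_A", "allocated": 64, "active": 12, "wanted": 12},  # winding down
    {"name": "mdp_B", "allocated": 32, "active": 32, "wanted": 96},  # could use more
    {"name": "mdp_C", "allocated": 0,  "active": 0,  "wanted": 16},  # queued
]
print(rebalance(jobs, total_cores=128))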

The theory was borne out in a paper delivered at the SPIE Photomask Technology conference (February 2013), in which the manufacturing gurus at SMIC and Mentor Graphics describe how they achieved a nearly 30% improvement in aggregate turn-around time (TAT) and greater than 90% average utilization of all hardware resources. You can download the paper from the Mentor website (registration required).

SMIC’s goal, like that of every fab, is continual improvement in TAT, which is challenging given that newer technology processes are never less complex, and consequently never faster to process. SMIC and Mentor analyzed runtime data trends at the 65nm and 40nm technology nodes and saw that they could significantly reduce TAT through better hardware and software utilization. This seems like low-hanging fruit, but given the challenge of managing resource allocations for tens to hundreds of jobs simultaneously, it isn’t easy.

So Mentor and SMIC devised an experiment. Rather than focusing on tuning OPC recipes or other typical approaches to TAT reduction, they focused on dynamic resource allocation. They used a new resource cluster manager to automatically govern the hardware and software resources for all jobs running on a remote compute cluster. This software automatically provided idle resources to jobs that could use them, and revoked resources from jobs that were not using them. Beyond managing a single task’s allocation, it also improved resource utilization at the cluster level, considering all tasks together as a whole. The experiment showed that optimizing the distribution of resources across all jobs running simultaneously on a cluster yields an overall aggregate runtime improvement and maximum utilization of the cluster’s resources.

Foundries are constantly improving their processes in order to cut TAT and be more competitive. Adopting smart, automated resource management is a simple and effective strategy.

Figure 1: Calibre Cluster Manager (CalCM) automates the allocation of compute resources in the post-tapeout flow.

Mark Simmons is a product marketing manager for the Calibre Manufacturing group at Mentor Graphics with over 10 years of experience. He holds a bachelor’s degree in Physics from S.U.N.Y. Geneseo, a master’s degree in microelectronics manufacturing from Rochester Institute of Technology and an MBA from Portland State University School of Business. He can be reached at mark_simmons@mentor.com.

This IP Will Work…I GUARANTEE It!

December 17th, 2014

By Matthew Hogan, Mentor Graphics

Intellectual property (IP) is usually bought from a 3rd-party vendor or developed by a specialized internal IP group. This group performs testing to ensure the IP will work as designed. As the chip designer, you merely insert the IP into the IC design and make the necessary connections. Easy-peasy!

Except…robust design requires more than verifying each separate block—you must also verify that the overall design is robust. When you are using hundreds of IPs sourced from multiple suppliers in a layout, how do you ensure that the integration of all those IPs is robust and accurate?

For instance, IP is often designed with a certain set of design constraints and operating parameters in mind. As a chip designer, do you know what these conditions are when you place IP in your layout? How do you make sure you have implemented an IP in a way that conforms to the supplier’s “assumed” use model? Will the operating conditions of the overall SoC fall within each IP supplier’s design envelope? Using an IP validation process like TSMC9000 (TSMC’s IP Validation Center) helps IP designers ensure the IP is robust as a standalone component, but SoC designers must still verify that the IP is correctly implemented in the full-chip context.

The reality is that many IC reliability issues are actually the result of design flaws, not manufacturing issues per se, and involve subtle, longer-term effects like oxide degradation that cannot easily be detected by traditional production tests on the manufacturing line. Some of these stem from incorrect use of IP. Such problems can be particularly vexing in the area of electrostatic discharge (ESD) protection of cells and of input/output (I/O) pads with embedded ESD structures. The challenge is complicated by the presence of multiple power domains, because validation of the signal interactions between IPs is essential to ensuring correct and reliable behavior over the complete range of operating power states.

Consequently, it is critically important that IPs in a design are subjected to comprehensive circuit reliability checking in the context of the overall SoC. To automate this requirement, designers need a class of tools that can integrate several capabilities, including circuit classification, physical layout measurements, complex device interactions, and rule-driven circuit checking. The combination of these facilities allows designers to automate many of the circuit checks required to ensure SoC reliability. At the same time, guidelines for circuit reliability checking are emerging—some of them are proprietary to individual companies, while others have been defined collaboratively by open interest or standards groups. The ESD Association (ESDA) provides ESD verification guidelines [1], and the Silicon Integration Initiative (Si2) also recommends a standard ESD protection design flow methodology. Both of these trends (tools and methods) are having a significant impact on our ability to identify and remove design flaws that reduce long-term IC reliability.

For example, designers can validate the proper implementation of power design intent at the transistor level by combining Unified Power Format (UPF) power state tables (PSTs) with the transistor-level validation capabilities of a reliability verification tool like Calibre® PERC™. This can be done for both standalone IP and in the context of the full SoC in the same flow, providing both a timely reliability verification process and transistor-level accuracy and scalability from individual IP to full chip.
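As a simplified illustration of that idea, the sketch below checks each SoC power state, as it might be extracted from a UPF power state table, against each IP’s documented supply envelope. The rail names, voltage ranges, and the treatment of powered-down rails are invented for the example and do not represent the Calibre PERC flow or the UPF syntax itself.

# Minimal sketch: validate SoC power states against IP supply envelopes.
# All names and values are hypothetical.

SOC_POWER_STATES = {
    "RUN":   {"VDD_CORE": 0.80, "VDD_IO": 1.80},
    "SLEEP": {"VDD_CORE": 0.55, "VDD_IO": 1.80},
    "OFF":   {"VDD_CORE": 0.00, "VDD_IO": 1.80},
}

IP_SUPPLY_ENVELOPES = {
    "serdes_phy": {"VDD_CORE": (0.72, 0.88), "VDD_IO": (1.62, 1.98)},
    "sram_macro": {"VDD_CORE": (0.50, 0.90)},
}

def check_power_intent(states, envelopes):
    """Report (state, ip, rail, voltage) combinations outside an IP's allowed range."""
    violations = []
    for state, rails in states.items():
        for ip, limits in envelopes.items():
            for rail, (lo, hi) in limits.items():
                v = rails.get(rail)
                # 0.0 V is treated here as an intentionally powered-down rail;
                # a real flow would consult retention/isolation strategies instead.
                if v is not None and v > 0.0 and not (lo <= v <= hi):
                    violations.append((state, ip, rail, v))
    return violations

print(check_power_intent(SOC_POWER_STATES, IP_SUPPLY_ENVELOPES))
# -> [('SLEEP', 'serdes_phy', 'VDD_CORE', 0.55)]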

This approach is particularly handy for verifying that IPs dropped into a design have the required ESD protection. To protect SoCs from ESD, protection circuitry must be applied across I/Os and power lines. While interconnect routing is mostly automated in digital physical design, in practice portions of the I/O, power, and ground routing are frequently completed manually. Designers following a typical routing strategy try to implement wires with enough “total width” for ESD protection when they have to split a wire to connect to a lower level of interconnect. To enforce this ESD design practice, the foundry assigns a minimum wire width to meet the ESD requirement (which varies per layer), and a design rule check (DRC) to ensure compliance. However, DRC checking alone is not effective, because a measure of total wire width does not ensure interconnects are safe in the presence of an ESD event.

It is actually current density that correlates directly with ESD failure. If the current density along the ESD path on some wire segment or via region is too high, that wire segment or via region is susceptible to ESD failure (Figure 1). Using Calibre PERC, a designer can perform a simulation along the ESD path to determine the current density on each wire segment and via area. With these current density measurements, the foundry-defined effective wire width can be converted into a current density constraint and checked against the simulated current density.
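A toy version of such a check is sketched below. It conservatively pushes the full peak current through every segment on the path, whereas a real flow derives per-segment currents from simulation; the current limits, peak current, and segment data are made up for illustration and are not foundry values.

# Minimal sketch of a current-density style check along an ESD discharge path.
# All limits and geometry are hypothetical.

PEAK_ESD_CURRENT_A = 1.3   # rough peak current of a 2 kV HBM-like event

# Maximum allowed current per micron of wire width, per metal layer (made up).
LAYER_LIMIT_A_PER_UM = {"M1": 0.8, "M2": 1.0, "M5": 2.0}

def check_esd_path(segments, i_peak=PEAK_ESD_CURRENT_A):
    """Flag segments whose width cannot carry the assumed discharge current."""
    failures = []
    for seg in segments:   # each seg: {"name", "layer", "width_um"}
        density = i_peak / seg["width_um"]            # A per um of width
        if density > LAYER_LIMIT_A_PER_UM[seg["layer"]]:
            failures.append((seg["name"], round(density, 2)))
    return failures

path = [
    {"name": "pad_to_clamp", "layer": "M5", "width_um": 4.0},
    {"name": "split_branch", "layer": "M2", "width_um": 1.0},  # narrow split wire
    {"name": "clamp_to_gnd", "layer": "M1", "width_um": 2.5},
]
print(check_esd_path(path))   # -> [('split_branch', 1.3)]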

Figure 1: Even if a wire width measurement meets foundry criteria, it may not be sufficient to provide adequate ESD protection.

The tool also checks the topology of the design for appropriate implementation of ESD structures and their placement with respect to the devices to be protected and the core of the IC. It allows implementation of the ESD checks recommended by the ESDA, such as layout checks, netlist checks, and current density checks, to name a few important ones. It can also identify the omission of required ESD protection devices on a schematic or netlist, and it can look for errant signal paths and other soft connection errors. These checks include well connection errors; floating devices, nets, and pins; incorrect voltage supply connections; excessive series pass gates; problematic level shifter designs; guard ring and antenna checks; floating wells; and minimum “hot” NWELL widths.

Guarantees are nice, but at the end of the day, you need to make sure that you have implemented all IP blocks in a compatible way that works as a system, including all ESD protection. Validating IP blocks in the context of your design is a necessary part of the IC verification process, and the only sure way to do that is to understand all interactions and eliminate subtle design flaws.

References:

[1] ESD Electronic Design Automation Checks, ESDA TR18.0-01-11, EDA Tool Working Group

Author

Matthew Hogan is a product marketing manager for the Calibre Design Solutions group at Mentor Graphics with over 15 years of design and field experience. He is an IEEE Senior Member and ACM Member and holds a Bachelor of Engineering from the Royal Melbourne Institute of Technology and an MBA from Marylhurst University. He is actively working with customers who have interest in Calibre PERC and 3D-IC. He can be reached at matthew_hogan@mentor.com.
