
SoC Reliability Verification Doesn’t Just Happen, You Know

Wednesday, July 16th, 2014

By Matt Hogan, Mentor Graphics

In today’s complex, specialized world, system on chip (SoC) designs contain many different types of intellectual property (IP), some obtained from multiple suppliers and others developed by internal design teams. At the same time, today’s low-power SoCs often contain multiple power domains. When many IPs are intermingled on an SoC, and assigned to multiple power domains, validation of the signal interactions between these IPs is essential to ensuring correct and reliable behavior in the complete range of operating power states.

A power domain is often thought of as a region of the design that operates at a specific supply voltage, and may contain anywhere from one to many IPs. Each IP, in turn, very often has multiple power states, which determine how much power the IP consumes in each state and how much effort is required to change its operating state. Validating signal interactions was relatively simple when a design had only one or two power domains and very little interaction between design blocks. The same validation becomes complex and difficult when multiple power domains exist, each with multiple power states.

The performance and functionality gains of low-power designs that leverage multiple power domains are well-known, but without accurate reliability verification, these designs will never make it to market. Ensuring the reliability of your designs requires an understanding of how signals cross multiple power domains for both the individual IP and the full SoC. Figure 1 shows a three power domain design, requiring validation of the interactions within and across these power domains.

Figure 1. Multiple power domain verification requires validation of interactions within and across power domains.

Reliability challenges abound. To avoid stressing thin-oxide gates, designers must confirm that each gate is connected to the appropriate power domain. A stressed thin-oxide gate does not fail immediately, but it degrades over time, making it a long-term reliability concern. Designers must also ensure that the correct level shifters, retention cells, and other design elements have been accurately placed for each power domain, and must validate the accuracy of bulk and well connections at the transistor level.

Some design teams may consider leaving power savings on the table by choosing a very simple power structure that is easy to validate. While that was once an acceptable design choice for market segments outside handheld and portable devices, in today’s market even directly-powered devices (those that plug into a wall socket) are adopting multiple power domains to reduce power consumption.

So, how do we verify these multi-IP, multiple power domain SoCs?

The Unified Power Format (UPF) enables a repeatable, comprehensive, and efficient design verification methodology, using industry standards, at the transistor level. It can help simplify multiple power domain verification by enabling a consistent description of the power intent throughout the design flow. UPF support for power state tables (PSTs) enables verification of each power mode within the design. Figure 2 shows a typical PST for a three power domain design.

Figure 2. Typical three power domain state table and transitions
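
For illustration, a minimal UPF excerpt in this spirit might define two domains, the legal states of their supplies, and a PST restricting how those states combine. All names here (PD_CORE, VDD_CORE, soc_pst, and so on) are hypothetical, and a real three-domain design like the one in Figure 2 would add a third supply column to the table:

    # Hypothetical two-domain sketch; not the design from the figures.
    create_power_domain PD_CORE -elements {u_core}
    create_power_domain PD_MEM  -elements {u_mem}

    create_supply_port VDD_CORE
    create_supply_net  VDD_CORE -domain PD_CORE
    create_supply_port VDD_MEM
    create_supply_net  VDD_MEM  -domain PD_MEM
    create_supply_port VSS
    create_supply_net  VSS -domain PD_CORE
    create_supply_net  VSS -domain PD_MEM -reuse

    set_domain_supply_net PD_CORE -primary_power_net VDD_CORE -primary_ground_net VSS
    set_domain_supply_net PD_MEM  -primary_power_net VDD_MEM  -primary_ground_net VSS

    # Legal states for each supply port
    add_port_state VDD_CORE -state {HV 1.0} -state {LV 0.8} -state {OFF off}
    add_port_state VDD_MEM  -state {ON 1.0} -state {OFF off}

    # Power state table: only these supply combinations are legal
    create_pst soc_pst -supplies {VDD_CORE VDD_MEM}
    add_pst_state RUN   -pst soc_pst -state {HV ON}
    add_pst_state SLOW  -pst soc_pst -state {LV ON}
    add_pst_state SLEEP -pst soc_pst -state {OFF ON}

Because the PST enumerates every legal combination, a checker can flag, for example, a signal driven from PD_CORE into PD_MEM without a level shifter, since the two supplies legally sit at different voltages in the SLOW state.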

Transistor-level power intent verification is critical in designs that make extensive use of IP. Because SoCs can contain many IPs from different sources, and these IPs may all use different power methodologies or contain their own internal global signals, correctly hooking up all of the IPs within the design is extremely challenging. If the design team does not understand the power intent of each IP, it is very difficult to proactively prevent reliability issues (such as power domain crossing errors) when the IP is placed into the SoC. In Figure 3, the voltages internal to the IP block look consistent, but the block has been hooked up incorrectly in the SoC implementation.

Figure 3. IPs pose two levels of reliability certification challenges—internal verification, and verification in the context of a larger implementation.

Power state tables are a useful tool at the IP level, but describing power states for a complete SoC design can be incredibly difficult—how do you validate the interactions between all those IP blocks? Not only that, but the analysis of each set of table interactions is a one-time effort—the next SoC design will use some different IPs, or different versions of IPs, so the interdependencies will also be different, requiring an entirely new set of tables. Detailed SPICE simulation is not a viable option in these SoCs, because simulation of multiple domain designs requires the designer to not only include the power controller chip, but also to have the design cycle through multiple power transitions, which requires carefully chosen input vectors and produces long simulation times.

Figure 4 demonstrates a typical UPF tool flow. The power intent is described at the HDL/RTL level in the UPF file for the logic design. The UPF file is updated during the synthesis flow, and again during the place and route process. During verification, Calibre PERC can be used with either the GDS or LEF/DEF design, or the netlist (prior to physical implementation), to verify power intent.

Figure 4. Typical UPF flow (source: IEEE Std 1801™-2009)

While traditional UPF flows do not validate the final transistor implementation, especially well and bulk connections, reliability checking tools such as Calibre PERC can use the UPF description of power intent to validate power and reliability requirements at the transistor level, providing a comprehensive and deterministic reliability verification strategy for SoCs. Designers define a PST for each IP block in the SoC. Calibre PERC then merges these PSTs to enable transistor-level verification across the full SoC (Figure 5). The merged PST provides the understanding of interactions and state overlaps needed at the SoC level to manage reliability verification complexity. Calibre PERC examines the UPF definitions of supply networks, checks each supply port’s supply states and its connected supply net, and then analyzes the power state tables defined in terms of those states to ensure the merged table captures the legal combinations of supply voltages in the context of the entire design.

Figure 5. By merging multiple PSTs, Calibre PERC can understand the composite power intent of the SoC with transistor-level accuracy.
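
Conceptually, merging PSTs resembles taking the cross product of the per-IP tables and keeping only the state combinations that agree on every supply the tables share. The Python sketch below illustrates that idea only; it is not Calibre PERC’s actual algorithm, and every name in it is invented:

    from itertools import product

    # A PST is modeled as {state_name: {supply_name: voltage}}. Merging two
    # tables keeps only state pairs that agree on every shared supply.
    def merge_psts(pst_a, pst_b):
        merged = {}
        for (name_a, sup_a), (name_b, sup_b) in product(pst_a.items(), pst_b.items()):
            shared = set(sup_a) & set(sup_b)
            if all(sup_a[s] == sup_b[s] for s in shared):
                merged[name_a + "+" + name_b] = {**sup_a, **sup_b}
        return merged

    ip1 = {"run":  {"VDD_A": 1.0, "VDD_C": 1.0},
           "idle": {"VDD_A": 0.0, "VDD_C": 1.0}}
    ip2 = {"run":  {"VDD_B": 0.9, "VDD_C": 1.0},
           "off":  {"VDD_B": 0.0, "VDD_C": 0.0}}

    # Only run+run and idle+run survive; pairs that disagree on the shared
    # supply VDD_C are dropped as illegal at the SoC level.
    print(merge_psts(ip1, ip2))

The surviving combinations are what a transistor-level checker must reason about when deciding whether a domain-crossing signal needs isolation or level shifting.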

Accurate and repeatable reliability verification is now a critical capability, both for increasingly complex products at established nodes and for new designs emerging at advanced nodes. By combining UPF PSTs with the transistor-level validation capabilities of a reliability verification tool such as Calibre PERC, designers can validate power intent at the transistor level, both in standalone IP and in the full SoC, within a single flow, gaining a timely reliability verification process with transistor-level accuracy that scales from IP to full chip.

Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. He is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com.

Intensive Gardening: What to Expect When Filling Designs at 20nm and Below

Tuesday, July 8th, 2014

By Jeff Wilson, Mentor Graphics

The word “garden” usually brings to mind tidy rows of vegetables, each neatly separated from its neighbor by the prescribed growing space. But there is another approach, often called “intensive” or “square foot” gardening, that places multiple plant types closely together. By interplanting compatible plants in a small space, a gardener can actually increase yield over a traditional garden, using far less area. However, intensive gardening requires a different mindset and approach to maintenance to ensure good results.

Designers beginning to design at 20 or 16nm might do well to think about “intensive gardening” as a metaphor for the changes they will encounter in fill technology and processes at these nodes. Let me explain why…

The most obvious change at any new node is the reduction in feature size. At 20nm and below, the changes in feature size lead to a plethora of new manufacturing effects and requirements. In addition, while features get smaller, chips don’t: a typical design in the newest technologies exceeds 15 mm on a side. Because there is a significant cost associated with moving to any new node, design teams focus on designs that are most likely to generate a profit, meaning they will want to incorporate as much functionality and as many features as possible. These designs trigger longer verification runtimes, due simply to the increased number of features to be verified and the expanded design rule coverage.

One of the major changes that occur at 20nm due to this increased manufacturing complexity is the approach to non-functional metal fill. At older technology nodes, designers added large fill shapes to open design areas because a certain metal density was required to pass the foundry’s density design rule checks (DRC). Intended to improve planarity for manufacturing by reducing thickness variations created during chemical-mechanical polishing (CMP) processes, the fill process was fairly simple—you defined an area to fill, and your tool filled the area with pre-defined shapes of a specific size and spacing. To avoid creating parasitic capacitance issues, the goal was to add only as much fill as needed to satisfy the minimum and maximum density requirements.
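
As a rough illustration of how simple that older flow was, here is a Python sketch with invented numbers for shape size, pitch, and density targets; it tiles an open window with fixed squares only until the minimum density is reached:

    # Legacy-style fill sketch (all numbers hypothetical): place fixed-size
    # squares on a regular pitch until the window meets minimum density.
    def simple_fill(window_w, window_h, shape=2.0, pitch=4.0,
                    existing_density=0.10, min_density=0.30):
        placed, area = [], window_w * window_h
        density = existing_density
        y = 0.0
        while y + shape <= window_h and density < min_density:
            x = 0.0
            while x + shape <= window_w and density < min_density:
                placed.append((x, y, x + shape, y + shape))
                density += (shape * shape) / area
                x += pitch
            y += pitch
        return placed, density

    shapes, d = simple_fill(100.0, 100.0)
    print(len(shapes), round(d, 3))  # roughly 500 shapes for these numbers

Stopping at the minimum keeps added parasitic capacitance low, which is exactly what the older methodology optimized for.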

At 20 nm and below, the complexity and expansion of manufacturing requirements compel designers to completely change their fill strategy. These designs require explicit, complex new analysis during the filling process to account for new manufacturing rules: designers must balance density constraints against the capacitance that fill adds to the design, while ensuring that design rule checking (DRC) constraints are met. To combat issues associated with rapid thermal annealing (RTA), fill is now added as multi-layer fill cells; for example, the base layers in the transistors, such as poly and diffusion, are added as cells. New rules for metal layers require the insertion of multiple fill layers and the validation of constraints on a layer-by-layer basis. Density constraints include gradient rules that control density variations between adjacent windows, and some of the new requirements also include the analysis and balancing of perimeter values on a layer-by-layer basis. At 20nm and 16nm, the goal of multi-patterning is to balance the light that passes through the mask, so multi-patterning constraints must be taken into account when adding fill: the fill shapes, along with the other polygons, must be properly decomposed into multiple mask assignments. Fill can also now be used to improve the results of electrochemical deposition (ECD), etch, and lithography, as well as to minimize the impact of stress effects. Optical proximity correction (OPC) fill, which is smaller and placed in close proximity to design features, improves the uniformity of the interconnect, reducing the parasitic capacitance generated and boosting manufacturability.
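
To make one of these new checks concrete, the sketch below flags adjacent density windows whose difference exceeds an allowed gradient. The window densities and the 0.10 limit are invented for illustration and do not correspond to any real foundry rule:

    # Hypothetical density-gradient check: compare each window's metal
    # density with its right and down neighbors and flag large deltas.
    def gradient_violations(density_grid, max_gradient=0.10):
        rows, cols = len(density_grid), len(density_grid[0])
        violations = []
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((0, 1), (1, 0)):  # right and down neighbors
                    rr, cc = r + dr, c + dc
                    if rr < rows and cc < cols:
                        delta = abs(density_grid[r][c] - density_grid[rr][cc])
                        if delta > max_gradient:
                            violations.append(((r, c), (rr, cc), round(delta, 2)))
        return violations

    grid = [[0.30, 0.33],
            [0.48, 0.35]]
    # Both pairs involving the 0.48 window exceed the 0.10 gradient limit.
    print(gradient_violations(grid))

A fill tool working under such a rule cannot simply maximize density everywhere; it has to smooth density transitions between neighboring windows.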

As a result of all these new applications of fill, both the amount of fill and the number of fill shapes have increased. To give a sense of the impact of the smaller size and tighter spacing of the fill shapes, the same open area in a 20nm design receives an order of magnitude more fill than it would in a 65nm design. These new manufacturing requirements demand a fill strategy that focuses on maximizing the amount of fill added to a design.

To provide correct-by-construction results, fill tools must support all of the new and expanded DRC rules introduced at 20 nm and below, including spacing checks such as Euclidean, elliptical, pitch, and width-based rules. Additionally, because fill no longer comprises just a few fill shapes on one layer, fill analysis must now take into account groupings of related layers and fill shapes.
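
As a small example of why these checks go beyond simple horizontal and vertical spacing, the sketch below measures the Euclidean corner-to-corner distance between two diagonally offset shapes; the 0.08 limit is an invented value:

    import math

    # Euclidean spacing sketch: the straight-line gap between two boxes,
    # not just the x or y gap, must meet the minimum spacing.
    def euclidean_spacing_ok(box_a, box_b, min_space=0.08):
        ax0, ay0, ax1, ay1 = box_a
        bx0, by0, bx1, by1 = box_b
        dx = max(bx0 - ax1, ax0 - bx1, 0.0)  # horizontal gap, 0 if overlapping
        dy = max(by0 - ay1, ay0 - by1, 0.0)  # vertical gap
        return math.hypot(dx, dy) >= min_space

    # Each axis gap alone is 0.05, but the corner-to-corner distance is
    # only about 0.07, so the Euclidean rule fails.
    print(euclidean_spacing_ok((0, 0, 1, 1), (1.05, 1.05, 2, 2)))  # False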

While all of these changes help ensure the manufacturability and performance of advanced node designs, they have had a tremendous impact on fill runtimes and database size. Significantly increasing the number and variety of fill shapes used, and adding new, complex fill rules, requires substantial increases in processing time and creates huge output files that impact transfer times when compared to past nodes. New fill techniques and tools seek to reduce both fill file size and fill runtimes with a variety of new strategies and analytical optimizations.

One way to deal with the file size increase is to raise the level of abstraction by moving from individual polygons to a cell-based fill solution, defining a multi-layer pattern of fill shapes that can be repeated in many places across the chip. These fill cells are a natural extension of multi-level fill constructs, and can be used for both front end of line (FEOL) and back end of line (BEOL). The cell-based approach helps reduce both runtime and file size, which helps maintain project schedules.
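
A minimal sketch of the idea, with hypothetical cell and layer names: storing one multi-layer cell definition plus a list of placements, rather than every individual polygon, is what shrinks the output database:

    # Cell-based fill sketch: one cell definition, many lightweight placements.
    fill_cell = {
        "M1": [(0.0, 0.0, 0.2, 0.2)],          # cell-local polygons per layer
        "M2": [(0.05, 0.05, 0.15, 0.15)],
    }

    def place_fill_cells(origins):
        # Each placement stores only (cell_name, x, y); the polygons are
        # expanded just once, when the layout is streamed out.
        return [("FILL_CELL_A", x, y) for (x, y) in origins]

    placements = place_fill_cells([(i * 1.0, j * 1.0)
                                   for i in range(100) for j in range(100)])
    print(len(placements), "placements instead of",
          len(placements) * sum(len(p) for p in fill_cell.values()), "polygons")

The saving grows with the number of layers and polygons in the cell, which is why the approach pays off most for multi-layer FEOL fill.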

Correct-by-construction capabilities that combine a rules-based approach with sophisticated analysis of items such as layout density (including gradient) and polygon perimeters help ensure that dummy metal insertion is accurately optimized for each layout while minimizing fill runtimes. Correct-by-construction flows also require detailed knowledge of the new design rules for constraints such as forbidden pitch and shielding.

Another design parameter that must be managed is the timing closure loop. The important fill factor here is the ability to support a net-aware fill strategy (to protect critical nets by enforcing a user-defined distance from specified nets to any fill shapes). However, with the explosion in fill shapes, customers are opting to keep the fill in a separate file, and then merge the drawn design and the fill shapes when extraction for timing verification is run, or when an ECO fill flow is required.
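
The sketch below illustrates the keep-out idea with invented geometry: candidate fill shapes that land within a user-defined distance of a critical net are dropped before insertion:

    # Net-aware fill sketch (hypothetical nets and distances): reject any
    # fill shape closer than the keep-out distance to a critical net.
    def far_enough(fill_box, net_box, keepout):
        fx0, fy0, fx1, fy1 = fill_box
        nx0, ny0, nx1, ny1 = net_box
        dx = max(nx0 - fx1, fx0 - nx1, 0.0)  # horizontal gap, 0 if overlapping
        dy = max(ny0 - fy1, fy0 - ny1, 0.0)  # vertical gap
        return (dx * dx + dy * dy) ** 0.5 >= keepout

    def net_aware_filter(candidates, critical_nets, keepout=0.5):
        return [f for f in candidates
                if all(far_enough(f, n, keepout) for n in critical_nets)]

    clk = [(0.0, 0.0, 10.0, 0.2)]                 # a critical clock wire
    fills = [(1.0, 0.3, 1.4, 0.7), (1.0, 2.0, 1.4, 2.4)]
    print(net_aware_filter(fills, clk))           # only the distant shape survives

Keeping fill away from critical nets bounds the coupling capacitance those nets see, which is what keeps the timing closure loop stable after fill insertion.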

In addition, because design teams use a wide variety of EDA tools, fill solutions that support standard interfaces ensure that designers can easily execute a foundry-certified fill solution that addresses the newly complex fill technology from within their preferred design implementation tool. This ability to interact with a variety of toolsets can be critical, given that tools can change from one node to the next. Designers working with tools that use proprietary interfaces to communicate may find themselves left behind when they move to the next node, unable to effectively implement the new required “smart” fill techniques.

Intensive fill techniques, like intensive gardening, require more attention to detail than previous fill strategies. Using the right tools can help minimize the effects on fill runtimes and databases, while ensuring accurate, timely results.

Jeff Wilson is a DFM Product Marketing Manager in the Calibre organization at Mentor Graphics in Wilsonville, OR. He has responsibility for the development of products that analyze and modify the layout to improve the robustness and quality of the design. Jeff previously worked at Motorola and SCS. He holds a BS in Design Engineering from Brigham Young University and an MBA from the University of Oregon. Jeff may be reached at jeff_wilson@mentor.com.