Posts Tagged ‘Mentor Graphics’

Mentor Graphics Team Receives the Harvey Rosten Award for Thermal Heatsink Optimization Methodology

Thursday, March 23rd, 2017

Mentor Graphics Corporation (NASDAQ: MENT) today announced that Dr. Robin Bornoff, Dr. John Parry and John Wilson, a team from Mentor Graphics Mechanical Analysis Division, received the Harvey Rosten Award for Excellence in thermal modeling and analysis of electronics. The team received the award for their paper, “Subtractive Design: A Novel Approach to Heatsink Improvement,” at the 33rd annual IEEE Semiconductor Thermal Measurement, Modeling and Management Symposium (SEMI-THERM) in San Jose, California.

Mentor Graphics team received the 2017 Harvey Rosten Award for excellence in thermal modeling and analysis in electronics at the SEMI-THERM symposium in San Jose, CA. The recipients (from left to right) Robin Bornoff, John Parry and John Wilson were honored for their technical paper on a unique heatsink optimization methodology.

The Mentor Graphics team created a unique methodology to sequentially remove underperforming portions of a heatsink to save weight and cost without compromising overall thermal performance. This method using Mentor Graphics® FloTHERM® technology provides a variety of automated optimization approaches to gain deeper insights into thermal characterizations to determine the best thermal design solution.

“We are extremely proud of our team for their commitment and dedication in discovering ways in which heatsinks may be optimized,” stated Roland Feldhinkel, general manager of Mentor Graphics Mechanical Analysis Division. “To be recognized by the selection committee comprised of highly esteemed thermal experts is a tremendous honor, and particularly personal since this award is named after the co-founder of Flomerics, which Mentor Graphics acquired in 2008.”

The Harvey Rosten Award For Excellence was established by the family and friends of Harvey Rosten, who was responsible for the development of PHOENICS, the world’s first commercial general-purpose computational fluid dynamics (CFD) software, while working at CHAM, and who co-founded Flomerics (now a division of Mentor Graphics Corporation). The Award commemorates Rosten’s achievements in the field of thermal analysis of electronics equipment, and the thermal modeling of electronic parts and packages. The award aims to encourage innovation and excellence in these fields.

2017 Harvey Rosten Award Recipients

Dr. Robin Bornoff is a market development manager in Mentor Graphics Mechanical Analysis Division.  Robin was previously an application and support engineer, and a product marketing manager, specializing in the application of CFD to electronics cooling and the design of the built environment. He attained a mechanical engineering degree from Brunel University in 1992 followed by a PhD in 1995 for computational fluid dynamics (CFD) research.

John Wilson is currently the electronics product specialist for Mentor Graphics Mechanical Analysis Division.  John previously managed the engineering design services team, where he gained extensive experience in IC package-level test and analysis correlation, heatsink optimization and compact model development. He joined Mentor Graphics in 1999 after receiving his BS and MS in mechanical engineering from the University of Colorado at Denver.

Dr. John Parry is the electronics industry manager for Mentor Graphics Mechanical Analysis Division, which he joined when it was founded as Flomerics in 1989. He attained a chemical engineering degree from Leeds University in 1982 and a PhD in 1988. His expertise includes compact modeling of fans, IC and LED packages, heatsinks, DoE and optimization methods, and thermal characterization, with over 75 published technical articles. John is a member of JC15 and past chair of SEMI-THERM.

Deep Learning Could Boost Yields, Increase Revenues

Thursday, March 23rd, 2017

By Dave Lammers, Contributing Editor

While it is still early days for deep-learning techniques, the semiconductor industry may benefit from the advances in neural networks, according to analysts and industry executives.

First, the design and manufacturing of advanced ICs can become more efficient by deploying neural networks trained to analyze data, though labelling and classifying that data remains a major challenge. Also, demand will be spurred by the inference engines used in smartphones, autos, drones, robots and other systems, while the processors needed to train neural networks will re-energize demand for high-performance systems.

Abel Brown, senior systems architect at Nvidia, said until the 2010-2012 time frame, neural networks “didn’t have enough data.” Then, a “big bang” occurred when computing power multiplied and very large labelled data sets grew at Amazon, Google, and elsewhere. The trifecta was complete with advances in neural network techniques for image, video, and real-time voice recognition, among others.

During the training process, Brown noted, neural networks “figure out the important parts of the data” and then “converge to a set of significant features and parameters.”

Chris Rowen, who recently started Cognite Ventures to advise deep-learning startups, said he is “becoming aware of a lot more interest from the EDA industry” in deep learning techniques, adding that “problems in manufacturing also are very suitable” to the approach.

Chris Rowen, Cognite Ventures

For the semiconductor industry, Rowen said, deep-learning techniques are akin to “a shiny new hammer” that companies are still trying to figure out how to put to good use. But since yield questions are so important, and the causes of defects are often so hard to pinpoint, deep learning is an attractive approach to semiconductor companies.

“When you have masses of data, and you know what the outcome is but have no clear idea of what the causality is, (deep learning) can bring a complex model of causality that is very hard to do with manual methods,” said Rowen, an IEEE fellow who earlier was the CEO of Tensilica Inc.

The magic of deep learning, Rowen said, is that the learning process is highly automated and “doesn’t require a fab expert to look at the particular defect patterns.”

“It really is a rather brute force, naïve method. You don’t really know what the constituent patterns are that lead to these particular failures. But if you have enough examples that relate inputs to outputs, to defects or to failures, then you can use deep learning.”
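
To make that idea concrete, here is a minimal sketch of training a small neural network to relate in-line measurements to a pass/fail outcome without any hand-built model of causality. It is illustrative only: the measurements, labels, and network below are synthetic inventions for this example, not any fab’s or EDA vendor’s actual flow.

```python
# Minimal sketch: learn a mapping from measured inputs to pass/fail outcomes
# without hand-building a causal model. Synthetic data and hypothetical
# features; not any fab's or vendor's production flow.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1,000 dies, 16 in-line measurements each (e.g., CD, overlay, film thickness).
X = torch.randn(1000, 16)
# Hidden "true" rule the network must discover: an interaction of two features.
y = ((X[:, 0] * X[:, 3] + 0.5 * X[:, 7]) > 0.8).float().unsqueeze(1)

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),            # logit for P(fail)
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((model(X) > 0).float() == y).float().mean()
print(f"training accuracy: {acc.item():.2f}")  # the net finds the pattern from examples alone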

Juan Rey, senior director of engineering at Mentor Graphics, said Mentor engineers have started investigating deep-learning techniques which could improve models of the lithography process steps, a complex issue that Rey said “is an area where deep neural networks and machine learning seem to be able to help.”

Juan Rey, Mentor Graphics

In the lithography process “we need to create an approximate model of what needs to be analyzed. For example, for photolithography specifically, there is the transition between dark and clear areas, where the slope of intensity for that transition zone plays a very clear role in the physics of the problem being solved. The problem tends to be that the design, the exact formulation, cannot be used in every space, and we are limited by the computational resources. We need to rely on a few discrete measurements, perhaps a few tens of thousands, maybe more, but it still is a discrete data set, and we don’t know if that is enough to cover all the cases when we model the full chip,” he said.

“Where we see an opportunity for deep learning is to try to do an interpretation for that problem, given that an exhaustive analysis is impossible. Using these new types of algorithms, we may be able to move from a problem that is continuous to a problem with a discrete data set.”
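
As a rough illustration of that kind of interpretation (and not Mentor’s algorithm), the sketch below fits a small neural-network regressor to a few hundred discrete samples of an idealized intensity transition, then evaluates the learned surrogate on a dense grid. The edge profile, sample count, and noise level are assumptions made up for the example.

```python
# Rough illustration: learn a continuous surrogate of an intensity transition
# from a discrete set of measurements. The "true" profile here is an idealized
# error-function edge; this is not Mentor's lithography model.
import numpy as np
from scipy.special import erf
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Discrete training data: a few hundred sampled positions across the edge (nm).
x_train = rng.uniform(-200, 200, size=400).reshape(-1, 1)
sigma = 40.0                                         # edge slope parameter (assumed)
y_train = 0.5 * (1 + erf(x_train.ravel() / (sigma * np.sqrt(2))))
y_train += rng.normal(0, 0.01, size=y_train.shape)   # measurement noise

# Small neural surrogate trained on the discrete samples.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(x_train, y_train)

# Evaluate the learned model on a dense grid, as a full-chip flow would need.
x_dense = np.linspace(-200, 200, 2001).reshape(-1, 1)
i_dense = surrogate.predict(x_dense)
print("dense-grid points evaluated:", i_dense.shape[0])
print("predicted intensity at x=0:", round(float(surrogate.predict([[0.0]])[0]), 3))
```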

Mentor seeks to cooperate with academia and with research consortia such as IMEC. “We want to find the right research projects to sponsor between our research teams and academic teams. We hope that we can get better results with these new types of algorithms, and in the longer term with the new hardware that is being developed,” Rey said.

Many companies are developing specialized processors to run machine-learning algorithms, including non-Von Neumann, asynchronous architectures, which could offer several orders of magnitude less power consumption. “We are paying a lot of attention to the research, and would like to use some of these chips to solve some of the problems that the industry has, problems that are not very well served right now,” Rey said.

While power savings can still be gained with synchronous architectures, Rey said brain-inspired projects such as Qualcomm’s Zeroth processor, or the use of memristors being developed at HP Labs, may be able to deliver significant power savings. “These are all worth paying attention to. It is my feeling that different architectures may be needed to deal with unstructured data. Otherwise, total power consumption is going through the roof. For unstructured data, these types of problems can be dealt with much better with neuromorphic computers.”

The use of deep learning techniques is moving beyond the biggest players, such as Google, Amazon, and the like. Just as various system integrators package the open-source modules of the Hadoop database technology into a more secure offering, several system integrators are offering workstations packaged with the appropriate deep-learning tools.

Deep learning has evolved to play a role in speech recognition used in Amazon’s Echo. Source: Amazon

Robert Stober, director of systems engineering at Bright Computing, bundles AI software and tools with hardware based on Nvidia or Intel processors. “Our mission statement is to deploy deep learning packages, infrastructure, and clusters, so there is no more digging around for weeks and weeks by your expensive data scientists,” Stober said.

Deep learning is driving the need for new types of processors as well as high-speed interconnects. Tim Miller, senior vice president at One Stop Systems, said that training the neural networks used in deep learning is an ideal task for GPUs because they can perform parallel calculations, sharply reducing the training time. However, GPUs often are large and require cooling, which most systems are not equipped to handle.

David Kanter, principal consultant at Real World Technologies, said “as I look at what’s driving the industry, it’s about convolutional neural networks, and using general-purpose hardware to do this is not the most efficient thing.”

However, research efforts focused on using new materials or futuristic architectures may over-complicate the situation for data scientists outside of the research arena. At the International Electron Devices Meeting (IEDM), several research managers discussed using spin-transfer torque magnetic RAM (STT-MRAM) technology, or resistive RAMs (ReRAM), to create dense, power-efficient networks of artificial neurons.

While those efforts are worthwhile from a research standpoint, Kanter said “when proving a new technology, you want to minimize the situation, and if you change the software architecture of neural networks, that is asking a lot of programmers, to adopt a different programming method.”

While Nvidia, Intel, and others battle it out at the high end for the processors used in training the neural network, the inference engines which use the results of that training must be less expensive and consume far less power.

Kanter said, “Today, most inference processing is done on general-purpose CPUs. It does not require a GPU. Most people I know at Google do not use a GPU. Since the (inference processing) workload looks like the processing of DSP algorithms, it can be done with special-purpose cores from Tensilica (now part of Cadence) or ARC (now part of Synopsys). That is way better than any GPU.”

Rowen was asked if the end-node inference engine will blossom into large volumes. “I would emphatically say, yes, powerful inference engines will be widely deployed” in markets such as imaging, voice processing, language recognition, and modeling.

“There will be some opportunity for stand-alone inference engines, but most IEs will be part of a larger system. Inference doesn’t necessarily need hundreds of square millimeters of silicon. But it will be a major sub-system, widely deployed in a range of SoC platforms,” Rowen said.

Kanter noted that Nvidia has a powerful inference engine processor that has gained traction in the early self-driving cars, and Google has developed an ASIC to process its TensorFlow deep learning software.

In many other applications, what is needed are very low-power IEs that can be used in security cameras, voice processors, drones, and similar products. Nvidia CEO Jen-Hsun Huang, in a blog post early this year, said that deep learning will spur demand for billions of devices deployed in drones, portable instruments, intelligent cameras, and autonomous vehicles.

“Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks,” Huang wrote. He envisions a future in which drones will autonomously find an item in a warehouse, for example, while portable medical instruments will use artificial intelligence to diagnose blood samples on-site.

In the long run, that “billions” vision may be correct, Kanter said, adding that the Nvidia CEO, an adept promoter as well as an astute company leader, may be wearing his salesman hat a bit.

“Ten years from now, inference processing will be widespread, and many SoCs will have an inference accelerator on board,” Kanter said.

Mentor Graphics Joins GLOBALFOUNDRIES FDXcelerator Partner Program

Thursday, December 22nd, 2016

Mentor Graphics Corp. (NASDAQ: MENT) today announced that it has joined GLOBALFOUNDRIES’ FDXcelerator Partner Program. FDXcelerator program partners support customers of GLOBALFOUNDRIES FDX™ technologies by providing a variety of design solutions, including approved design methodology, IP development expertise, hardware/software system integration expertise, and other critical software, services, and support. They participate in FDXcelerator Partner Program events, and receive early access to the GLOBALFOUNDRIES FDX roadmap and associated technology offerings.

“Mentor Graphics is proud to have expanded our long-term relationship with GLOBALFOUNDRIES to include the FDXcelerator Partner Program,” said Joe Sawicki, vice-president and general manager of the Design-to-Silicon division at Mentor Graphics. “We look forward to delivering an enhanced set of solutions to mutual customers in support of GLOBALFOUNDRIES FDX offerings that will enable the development of high quality low-power designs based upon FD-SOI technology.”

Mentor Graphics offerings participating in the FDXcelerator program include:

  • Multiple design implementation solutions from Digital IC Design, including the Oasys-RTL™ floorplanning and synthesis platform and Nitro-SoC™ next-generation place and route platform.
  • The Calibre® platform, including the Calibre DFM tool suite, the most comprehensive set of IC design verification tools in the EDA industry. Calibre tools will be designated as the sign-off tools for FDX across all GLOBALFOUNDRIES design creation flows.
  • The Analog FastSPICE (AFS)™ Platform, the fastest, most accurate, and highest capacity simulation for nanometer-scale circuits, and the Eldo® Platform, the most advanced circuit verification for analog-centric circuits. Collaboration with GLOBALFOUNDRIES includes device and circuit level certification for 22FDX, and support of reference flows for 22FDX.
  • The Tessent® product suite of comprehensive silicon test and yield analysis solutions includes a full design for test reference flow for 22FDX designs, and provides the industry’s highest test quality, lowest test cost, and fastest time to root cause of test failures.

“We are very pleased that Mentor Graphics has joined our FDXcelerator Partner Program,” said Alain Mutricy, senior vice president of product management at GLOBALFOUNDRIES. “The combination of Mentor’s EDA offerings and our FDX technologies provides customers with the solutions that will enable success in delivering products for today’s highly competitive IC markets.”

Mentor Graphics Signs Agreement with ARM to Accelerate Early Hardware/Software Development

Wednesday, November 16th, 2016

Mentor Graphics Corporation (NASDAQ: MENT) has signed a multiyear license agreement with ARM to gain early access to a broad range of ARM Fast Models, Cycle Models and related technologies. Mentor will have access to all ARM Fast Models for the ARMv7 and ARMv8 architectures across all ARM Cortex-A, Cortex-R and Cortex-M cores, GPUs and System IP, in addition to engineering collaboration on further optimizations. This builds on agreements already in place to ensure that the validation of ARM models is completed ahead of mutual customer demand.

“Our collaboration with Mentor has resulted in one of ARM’s broadest modeling partnerships,” said Javier Orensanz, general manager, development solutions group, ARM. “With this agreement, our mutual customers can utilize ARM’s entire model portfolio to speed system execution and debug issues with complete accuracy.”

As a result of this agreement, ARM Fast Models can be combined with the Veloce emulation platform, for example, to enable faster verification and earlier software development. Moving the modeling of the CPU and GPU out of the emulator and into the ARM Fast Models allows software execution performance orders of magnitude faster than a traditional approach that relies on a complete RTL description to be ready. This enables software tasks to be executed quickly, such as Android boots and application execution. Verification teams can now validate more than just boot code and drivers. They can also run complete software stacks to exercise the system in a realistic manner and flush out hard-to-find bugs, which would otherwise have gone undetected until physical prototypes were available.

“This second agreement with ARM clearly indicates our strategic alignment toward providing a complete HW/SW development platform,” said Brian Derrick, vice president of marketing, Mentor Graphics. “Our mutual customers benefit from early access and validation of state-of-the-art Mentor technology working with the most current ARM models.”

Elusive Analog Fault Simulation Finally Grasped

Tuesday, September 27th, 2016

By Stephen Sunter, Mentor Graphics

The test time per logic gate in ICs has greatly decreased in the last 20 years, thanks to scan-based design-for-test (DFT), automatic test pattern generation (ATPG) tools, and scan compression. But for analog circuits, test time per transistor has not decreased at all. And to make matters worse, the test time for the analog portion of an IC can dominate total test time. A new approach is needed for analog tests to achieve higher coverage in less time, or to improve defect tolerance.

Analog designers and test engineers do not have DFT tools comparable to those used by their digital counterparts. It has been difficult to improve the number of defective parts per million (DPPM) because it has been too challenging to measure defect coverage. DPPM levels are typically measured by the rate of customer returns, which can occur months after the ICs are tested.

Analog fault simulation has only been discussed in academic papers and, recently, in a few industrial papers that describe proprietary software. Why haven’t the analog fault simulation techniques described in all those papers led to commercially available fault simulators that are used in industry? Mostly because there is no industry-accepted analog fault model, and simulating all potential faults requires an impractically long time.

Potential Solutions for Reducing Simulation Time

Many methods for reducing simulation time have been proposed over the years in published papers, including:

  • Simulate only shorts and opens in the schematic netlist without variations;
  • Analyze a circuit’s layout to find the shorts and opens that can actually occur (and the likelihood of those defects occurring);
  • Simulate only in the AC domain;
  • Simulate the sensitivities of each tested performance to variations in each circuit element;
  • Use a simplified, time domain simulation to measure the impact of injected shorts and opens on output signals, only within a few clock cycles;
  • Measure analog toggle coverage.

Even if these techniques were very efficient and reduced simulation time dramatically, the large number of defects simulated would mean that the number of undetected defects to diagnose would be large. For example, if there were 100,000 potential faults in a circuit and 90% were detected, there would be 10,000 undetected faults to investigate. Analyzing each defect is a very time-consuming task that requires detailed knowledge of the circuit and tests. Therefore, reducing the number of defects simulated can save a lot of time, in multiple ways. The methods to reduce the number of defects include:

  • Randomly select defects from a list of all potential defects;
  • Randomly select defects, after grouping them according to defect likelihoods;
  • Select only principal parameters of the circuit elements, such as voltage, gate length, width, and oxide thickness;
  • Select representative defects based on circuit analysis.

Potential Standard Analog Fault Models

Currently, there is no accepted analog fault model standard in the industry. Proposals such as simulating only short and open defects and simulating defective variations in circuit elements or in high-level models have been rejected. Because of the lack of a standard, a group of about a dozen companies (including Mentor Graphics) has been meeting regularly since mid-2014 to develop such a fault model. The group has reported their progress publicly several times, and hopes to develop an IEEE standard by 2018.

The Tessent DefectSim Solution

Tessent® DefectSim™ incorporates lessons learned from all previous approaches, combining the best aspects of each while avoiding their pitfalls. Simulation time is reduced using a variety of techniques that all together reduce total simulation time by many orders of magnitude compared to some of the previous approaches, without introducing a new simulator, reducing existing simulator accuracy, or restricting the types of tests. The analog defect models can be shorts and opens, just variations, or both. Or, users can substitute their own proprietary defect models. The defects can be injected at the schematic level, at the layout level, or a combination of both.

To be realistic, defects should be injected in a layout-extracted netlist. But higher-level netlist descriptions or hardware description language (HDL) models, such as Verilog-A or Verilog RTL, can reduce simulation time by one or two orders of magnitude. In practice, the highest level netlist of a subcircuit is often just its schematic; nevertheless, it typically simulates an order of magnitude faster than the layout-extracted netlist. DefectSim runs Eldo® when the circuit contains only SPICE and Verilog-A models, and Questa® ADMS™ when Verilog-AMS or RTL models are also used.

DefectSim introduces a new statistical technique called likelihood-weighted random sampling (LWRS) to minimize the number of defects to simulate. This new technique uses stratified random sampling in which each stratum contains only one defect. The likelihood of randomly selecting each defect is proportional to the likelihood of the defect occurring. Each likelihood of occurrence is computed based on designer-provided global parameters, and parameters of each circuit element.

Shorts, for example, are the most common defect; in state-of-the-art production processes, shorts are 3-10X more likely than opens. When the range of defect likelihoods is large, as it is for mixed-signal circuits, LWRS requires up to 75% fewer samples than simple random sampling (SRS) for a given confidence interval (the variation in an estimate that would occur if the random sampling were done many times). In practice, when coverage is 90% or higher, this means that it is usually sufficient to simulate a maximum of 250 defects, regardless of the circuit size or the number of potential defects, to estimate coverage within 2.5%, for a 99% confidence level. Simulating as few as one hundred defects is sufficient to get ±4% estimate precision. For small circuits, or when time permits, all defects can be simulated.
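
The sampling argument can be sketched numerically. The code below illustrates likelihood-proportional defect sampling and the binomial confidence-interval arithmetic behind small sample sizes; the defect likelihoods and detection outcomes are synthetic, the error bound shown is the simple-random-sampling bound, and none of this is the Tessent DefectSim implementation.

```python
# Simplified illustration of likelihood-weighted defect sampling and the
# confidence-interval arithmetic behind small sample sizes. Synthetic
# likelihoods and detection outcomes; not the Tessent DefectSim algorithm.
import numpy as np

rng = np.random.default_rng(1)

n_defects = 100_000
# Hypothetical likelihoods of occurrence: shorts assumed ~5x more likely than opens.
likelihood = np.where(rng.random(n_defects) < 0.6, 5.0, 1.0)
weights = likelihood / likelihood.sum()

# Hidden "truth" used only to generate outcomes: ~90% of defects are detected.
detected = rng.random(n_defects) < 0.90

# Draw a small sample, with selection probability proportional to likelihood.
n_sample = 250
sample = rng.choice(n_defects, size=n_sample, replace=False, p=weights)
coverage_est = detected[sample].mean()

# Half-width of a 99% binomial confidence interval for simple random sampling;
# likelihood weighting (LWRS) is claimed to need up to 75% fewer samples.
z99 = 2.576
half_width = z99 * np.sqrt(coverage_est * (1 - coverage_est) / n_sample)

print(f"estimated coverage: {coverage_est:.1%} +/- {half_width:.1%} (99% conf, SRS bound)")
```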

DefectSim allows you to combine almost all of the previously-published techniques for reducing simulation time, including random sampling, high-level modeling, stop-on-detection, AC mode, and parallel simulation. All together, these techniques can reduce simulation time by up to six orders of magnitude compared to simulating the production test of all potential defects in a flat, layout-extracted netlist. The same techniques can be applied to the measurement of defect tolerance.

For more information about Tessent DefectSim, read the whitepaper at:
https://www.mentor.com/products/silicon-yield/resources/overview/part-1-analog-fault-simulation-challenges-and-solutions-f9fd7248-3244-4bda-a7e5-5a19f81d7490?cmpid=10167

Mentor Graphics Extends Offering to Support TSMC 7nm and 16FFC FinFET Process Technologies

Wednesday, September 21st, 2016

Mentor Graphics Corp. (NASDAQ: MENT) today announced further enhancements and optimizations for various products within the Calibre Platform and Analog FastSPICE (AFS) Platform, as well as the completion of further certifications and reference flows for Taiwan Semiconductor Manufacturing Company (TSMC) 16FFC FinFET and 7nm FinFET processes. Moreover, the Calibre offering has been extended on additional established TSMC processes in support of the growing Internet of Things (IoT) design market requirements.

The AFS Platform, including AFS Mega simulation, has been certified for the TSMC 16FFC FinFET and the TSMC 7nm FinFET process technologies through TSMC’s SPICE Simulation Tool Certification Program. The AFS Platform supports TSMC design platforms for mobile, HPC, automotive, and IoT/wearables. Analog, mixed-signal, and RF design teams at leading semiconductor companies worldwide will benefit from using Analog FastSPICE to efficiently verify their chips designed in 16FFC and 7nm FinFET technologies.

Mentor’s Calibre xACT™ extraction offering is now certified for the TSMC 16FFC FinFET and the TSMC 7nm FinFET process technologies. Calibre xACT extraction leverages its built-in deterministic fast field-solver engine to deliver needed accuracy around three-dimensional FinFET devices and local interconnect. Its scalable multiprocessing delivers sufficient punch for large leading-edge digital designs. In addition, both companies continue extraction collaboration in established process nodes, with additional corner variation test cases and tighter criteria to ensure tool readiness for IoT applications.

The Calibre PERC™ reliability platform has also been enhanced to enable TSMC 7nm customers to run point-to-point resistance checks at full chip. This greater capacity allows customers to quickly analyze interconnect robustness at all levels (IP, block, and full chip) while verifying lower resistance paths on critical electrostatic discharge (ESD) circuitry, helping ensure long-term chip reliability. Likewise, Calibre Multi-Patterning functionality has been enhanced for 7nm, including new analysis, graph reduction and visualization capabilities which are essential to customers designing and debugging this completely new multi-patterning technique.

The Calibre YieldEnhancer ECOFill solution, initially developed for 20nm, has now been extended to all TSMC process nodes from 7nm to 65nm. Designers at all process nodes will now be able to minimize fill runtimes, manage fill hierarchy, and minimize shape removal when implementing changes to the initial design.

Mentor’s Nitro-SoC P&R platform has also been enhanced to support advanced 7nm requirements, such as floorplan boundary cell insertion, stacking via routing, M1 routing and cut-metal methodology, tap cell insertion and swapping, and ECO flow methodology. Certification of the flow integration of these N7 features is ongoing. For 16FFC, the needed tool features have been validated by TSMC, and Mentor is optimizing its correlation with sign-off analysis.

“Today’s chip design teams are looking at different process nodes to implement their complete solution,” said Joe Sawicki, vice president and general manager of Mentor Graphics Design-to-Silicon Division. “By working with TSMC, Mentor is able to provide mutual customers with a single solution that is not only certified, but also includes the latest tool capabilities, for whichever TSMC process node they choose.”

“TSMC’s long-standing collaboration with Mentor Graphics enables both companies to work together effectively to identify new challenges and develop innovative solutions across all process nodes,” said Suk Lee, TSMC senior director, Design Infrastructure Marketing Division. “The Mentor Analog FastSPICE Platform, AFS Mega, and Calibre xACT tools have successfully met the accuracy and compatibility requirements for 16FFC and 7nm FinFET technologies. That certification, along with the Calibre Platform’s provision of fast, accurate physical verification, and extraction solutions critical to 7nm, ensures mutual customers they have access to EDA tools that are optimized for the newest process technologies.”

Mentor Graphics Veloce Emulation Platform Used by Starblaze for Verification of SSD Enterprise Storage Design

Wednesday, September 21st, 2016

Mentor Graphics Corporation (NASDAQ: MENT) today announced that the Veloce® emulation platform was successfully used by Starblaze Technology for a specialized high-speed, enterprise-based Solid State Drive (SSD) storage design.

Starblaze performed a detailed and lengthy analysis of the available solutions in the emulation market.  The Veloce emulation platform was selected and deployed because of its superior virtualization technology and memory protocol support, rich software debug capabilities and proven track record delivering innovative emulation technology.

“The enterprise SSD market is evolving rapidly, so the SoC (System on a Chip) verification technology we use has to be perfectly aligned with our needs, especially in terms of flexibility and high-performance protocol support,” said Sky Shen, CEO of Starblaze Technology. “After using the Veloce emulation platform on our latest high-performance, enterprise SSD controller project, we are convinced that a virtual solution with extensive software debug capability is the trend for the future of emulation technology.”

In the SSD storage space, it is extremely important for design teams to study the architecture and tune the performance while finding deep hardware bugs in the pre-silicon stage. Starblaze used VirtuaLAB PCIe to provide the host connection to their design on the Veloce emulation platform. The VirtuaLAB PCIe delivers very high debug productivity, and Starblaze was able to use its Software Design Kit “as is” without any modification or adaptation. In addition to using Veloce VirtuaLAB, Starblaze used Mentor’s Codelink® software debug capability to support the requirements of their embedded core software debug. On the flash interface side, the Veloce platform provides both HW and SW sparse memory solutions, which permits the necessary tradeoffs in the storage application.

ICE and Virtual: Complementary Technologies

With the Veloce Emulation platform, verification teams have access to the best of both worlds, whether using an ICE-based or virtual emulation environment.  In-circuit emulation (ICE), a foundational emulation use model, remains a ‘must have’ for SoC designs that need to connect to real devices or custom hosts where physical hardware is required. The Veloce iSolve™ library offers a full complement of hardware components to build a robust ICE-based flow.

As more verification teams move from an ICE-based flow to a virtual flow, the Veloce emulation platform provides a smooth transition.  The Veloce Deterministic ICE App complements ICE by eliminating the non-deterministic nature of ICE and enabling advanced verification techniques: debug, power analysis, coverage closure, and software debug.

Full virtualization is achieved with the Veloce VirtuaLAB environment, which delivers virtual ICE-equivalent, high-speed host protocols and memory devices, allowing for greater flexibility for hardware/software system-level debug, power analysis, and system performance analysis.

“The Veloce emulation platform continues to deliver a comprehensive and robust emulation platform to a broad set of markets that all have unique challenges,” said Eric Selosse, vice president and general manager of the Mentor Emulation Division. “With Starblaze’s expertise in Flash Controller and SoC design, they quickly recognized the benefits of our VirtuaLAB solution.  Our success in working with them is attributed to our in-depth knowledge of the power of a virtual solution, and our timely support in deploying the Veloce emulation platform to meet their specific needs.”

About the Veloce Emulation platform

The Veloce emulation platform uses innovative software, running on powerful, qualified hardware and an extensible operating system, to target design risks faster than hardware-centric strategies. Now considered among the most versatile and powerful of verification tools, emulation greatly expands the ability of project teams to do hardware debugging, hardware/software co-verification or integration, system-level prototyping, low-power verification and power estimation and performance characterization.

The Veloce emulation platform is a core technology in the Mentor® Enterprise Verification Platform™ (EVP) – a platform that boosts productivity in ASIC and SoC functional verification by combining advanced verification technologies in a comprehensive platform. The Mentor EVP combines Questa® advanced verification solutions, the Veloce emulation platform, and the Visualizer™ debug environment into a globally accessible, high-performance datacenter resource. The Mentor EVP features global resource management that supports project teams around the world, maximizing both user productivity and total verification return on investment.

About Mentor Graphics

Mentor Graphics Corporation is a world leader in electronic hardware and software design solutions, providing products, consulting services and award-winning support for the world’s most successful electronic, semiconductor and systems companies. Established in 1981, the company reported revenues in the last fiscal year of approximately $1.18 billion. Corporate headquarters are located at 8005 S.W. Boeckman Road, Wilsonville, Oregon 97070-7777. http://www.mentor.com.

CMOS-Photonics Technology Challenges

Friday, July 8th, 2016

By Ed Korczynski, Sr. Technical Editor

Figure 1: Joris Van Campenhout, imec program director for Optical I/O.

While it is very easy to talk about the potential advantages of CMOS-photonic integration, the design and manufacturing of commercially competitive products has been extraordinarily difficult. It has been well known that the cost efficiencies of silicon wafers and CMOS fab processes could theoretically be leveraged to create low-cost photonic circuitry. However, the physics of optics is quite different from the physics of electronics, and so there have been unexpected challenges in moving R&D experiments to HVM products. During the imec technology forum held in Brussels this May, Joris Van Campenhout, imec program director for Optical I/O (Fig. 1), sat down with Solid State Technology to discuss recent progress and future plans.

Data centers—also known as “The Cloud”—continue to grow along with associated power-consumptions, so there are strong motivations to find cost-effective ways to replace more of the electrical switches with lower-power optical circuits. Optical connections in modern data centers do not all have the same specifications, with a clear hierarchy based on the 3D grid-like layout of rows of rack-mounted Printed Circuit Boards (PCB). The table shows the basic differences in physical scale and switching speeds required at different levels within the hierarchy.

ESTIMATED DATA CENTER REQUIREMENTS FOR OPTICAL I/O (Source: imec)

OPTICAL CONNECTION    RACK       BACKPLANE    PCB        CHIP
DISTANCE              5-500m     0.5-3m       5-50cm     1-50mm
RELATIVE COST         $$$$       $$$          $$         $
POWER/Gbps            5mW        1mW          0.5mW      0.1mW

Rack fiberoptic lines connecting the rows of rack-mounted printed-circuit boards (PCB) in data centers represent a major portion of the total investments for capital equipment, so there is a roadmap to keep the same fibers in place while upgrading the speeds of photonic transmit and receive components over time:

  • 40GHz was standard through 2015,
  • 100GHz upgrades in 2016,
  • 400GHz planned by 2019, and
  • 1THz estimated by 2022.

Some companies have tried to develop multi-mode fiber solutions, but imec is working on single-mode. The telecommunications standard for single-mode optical fiber diameter is 9 microns, while multimode today can be up to 50 microns diameter. “Fundamentally single-mode will be the most integrate-able way to try to get that fiber on to a chip,” explained Van Campenhout. “It is difficult enough to get nine micron diameter fibers to couple to sub-micron waveguides on chip.”

Backplane is the PCB-to-PCB connection within one rack, which today uses copper connections running at up to 50 GHz. Imec sees backplane applications as a possible insertion point for CMOS-Photonics, because there are approximately 10X the number of connections compared to rack applications and because the relative cost target calls for new technologies. Imec’s approach uses 56G silicon ring-modulators to shift wavelengths by 0.1% at very low power, knowingly taking on control issues with non-linearity and high temperature sensitivity. “We’re confident that it can be done,” stated Van Campenhout, “but the question remains if the overhead can be reduced so that the costs are competitive.” The overhead includes the possible need for on-chip thin-film heaters/coolers to be able to control the temperature.

PCB level connections are being pushed by the Consortium for On-Board Optics (COBO), an industry group working to develop a series of specifications to permit the use of board-mounted optical modules in the manufacturing of networked equipment (i.e. switches, servers, etc.). The organization plans to reference industry specifications where possible and develop specifications where required with attention to electrical interfaces, pin-outs, connectors, thermals, etc. for the development of interchangeable and interoperable optical modules that can be mounted onto motherboards and daughtercards.

Luxtera is the commercial market leader for CMOS-Photonic chips used at the rack level today, and uses ‘active alignment,’ meaning that the fiber has to be lit with the laser and then aligned to the waveguides during test and during assembly. Luxtera is fabless and uses Freescale as its foundry to build the chip in an established CMOS SOI process flow originally developed for high-performance microprocessors. The company produces 10G chips today for advanced Ethernet connections, and through a partnership with Molex ships 40G Active Optical Cables.

Chip level optical connections require breakthrough technologies such as indium-phosphide epitaxy on silicon to be able to grow the most efficient electrically-controlled optical switches, instead of having to pick-and-place discrete components aligned with waveguides. Alignment of components is a huge issue for manufacturing and test that adds inherent costs. “The main issue is getting the coupling from the chip to the fiber with low losses, since sub-micron alignment is needed to avoid a 1 dB loss,” summarized Van Campenhout.

Figure 2 shows a simplified functional schematic of a high-capacity optical communications link employing Dense Wavelength Division Multiplexing (DWDM) to combine modulated laser beams of different colors on a single-mode fiber. Luxtera is working on DWDM for increased bandwidth, as is imec.

FIGURE 2: Dense Wavelength Division Multiplexing (DWDM) scheme allows multiplication of the total single-mode fiber (SMF) bandwidth by the number of laser colors used. (Source: imec)

Difficult Design

“If you have just a 1 nm variation in the waveguide width, that device’s spectral response will be proportional as a rule of thumb,” explained Van Campenhout. “We can tune for that with a heating element, but then we lose the low-power advantage.” This results in a need for different design-for-manufacturing approaches.

“When we do photonics design we have to have round features or the light will scatter. So when we do mask making we have to use different rules, and we need to educate all of our partners that we are doing photonics,” reminded Van Campenhout. “However, there are EDA companies that are becoming aware of these aspects, so things are developing nicely to create a whole ecosystem to be able to build these. We have the first version of a PDK that we use for multi-product-wafer runs, so we can deliver custom chips to partners.”

Mentor Graphics is an imec partner, and the company’s Tom Daspit, marketing manager for Pyxis Design Tools, spoke with Solid State Technology about the special challenges of EDA for photonics. “You’ve now jumped off the cliff of the orthogonal design environment. Light doesn’t bend at 45° let alone 90°. On an IC it’s all orthogonal, while if it’s photonic we have to modify the interconnect so that the final design is a nice curved one.” To produce a smooth curve, the EDA tools must fracture it into a small grid for the photomask, so a seemingly simple set of curves can require gigabytes in a final GDSII file.
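
Some back-of-the-envelope arithmetic shows why. The sketch below estimates how many chord vertices are needed to keep a circular bend within a given grid tolerance, using the sagitta of each chord, and then the resulting vertex data volume. The bend radius, tolerance, bend count, and bytes-per-vertex figure are assumptions chosen only to illustrate the scaling.

```python
# Back-of-the-envelope estimate of why curved photonic layouts bloat GDSII:
# a smooth bend must be fractured into many short chords so that the sagitta
# (chord-to-arc deviation) stays within the mask grid. All numbers assumed.
import math

def vertices_per_90deg_bend(radius_nm: float, tol_nm: float) -> int:
    """Chords needed so each chord deviates from the arc by at most tol_nm."""
    # sagitta s = R * (1 - cos(theta/2))  ->  theta = 2 * acos(1 - s/R)
    theta = 2.0 * math.acos(1.0 - tol_nm / radius_nm)
    return math.ceil((math.pi / 2.0) / theta) + 1

BYTES_PER_VERTEX = 8          # GDSII XY record: two 4-byte integers (approx.)
radius_nm = 10_000            # 10 um bend radius (assumed)
tol_nm = 1                    # 1 nm grid tolerance (assumed)

v = vertices_per_90deg_bend(radius_nm, tol_nm)
print(f"{v} vertices per 90-degree bend at {tol_nm} nm tolerance")

# A photonic chip with, say, 100,000 such bends (plus long curved routes
# fractured the same way) multiplies this vertex count accordingly:
total_bytes = 100_000 * v * BYTES_PER_VERTEX
print(f"~{total_bytes / 1e6:.0f} MB of vertex data for 100,000 bends alone")
```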

It was about four years ago that some customers began asking Mentor to modify tools to be able to support photonics, and today there are customers large and small, some of them in full volume production for communications applications. “Remember when they were building the old Cray supercomputers and they had to account for all wire lengths to handle signal delays? Well, now with photonics we need to account for waveguide lengths,” commented Daspit.

The products in full-volume production today are most likely communications chips; customers do not typically share product plans, so the application spaces are not certain. Everybody wants to get rid of the copper in the backplane to reduce power consumption, but:

“The big application is photonics for sensor integration, with universities leading the way. Medical is a huge new market,” explained Daspit. “The CMOS die could be 130nm down to 65nm, or maybe 28nm for some digital.” So there are a wide variety of future applications for CMOS-Photonics, and despite the known manufacturing challenges there are already commercial applications in communications.

—E.K.

Mentor Graphics Offers Tanner Calibre One Verification Suite for the Tanner Analog/Mixed-Signal IC Design Environment

Monday, June 6th, 2016

Mentor Graphics® Corporation announced the Tanner Calibre One IC verification suite as an integral part of the Tanner™ analog/mixed-signal (AMS) physical design environment, creating an easy path to the proven capabilities of Calibre® verification tools for Tanner EDA’s user base. This results in a dramatically-improved IC design and verification solution for Tanner customers by providing tightly-integrated access to Calibre’s physical and circuit verification, exclusively within the Tanner L-Edit™ layout environment.

The Calibre platform is the industry leader for physical verification and is qualified for sign-off by every major IC foundry, and the Tanner Calibre One verification suite uses the same Calibre design kits. Customers that already have stand-alone Calibre licenses, and would like to consider the Tanner design environment, can continue to use the pre-existing Calibre-Tanner interfaces. However, offering an additional, custom integration between Calibre and the Tanner AMS IC design flow provides an invaluable option for Tanner IC designers, giving design teams the access they need to confidently tape out their designs.

“We’ve seen a dramatic increase in the productivity of our layout team thanks to the seamless interaction of L-Edit and the Tanner Calibre One verification suite,” said Stefan Lauxtermann, President of Sensor Creations Inc. “Our customers greatly value that we employ Calibre and that there is a one-to-one correspondence between the final DRC by the foundry and the Tanner design process that we use.”

The Tanner Calibre One verification suite includes the following products:

  • Calibre nmDRC™ (hierarchical design rule checking) ensures the physical layout can be manufactured. This industry-leading tool provides fast cycle times and innovative design rule capabilities.
  • Calibre nmLVS™ (hierarchical layout versus schematic) checks that the physical layout is electrically and topographically the same as the schematic. It improves designer productivity by providing actual device geometry measurement and sophisticated interactive debugging capabilities to ensure accurate verification.
  • Calibre xRC™ (parasitic extraction) verifies that layout-dependent effects do not adversely affect the electrical performance of the design, delivering accurate parasitic data for comprehensive and accurate post-layout analysis and simulation.

In addition, the Calibre RVE™ tool brings the solution together, providing a graphical results viewing environment that reduces debug time by visually identifying design issues instantly and cross-selecting the associated issue in Tanner’s layout and schematic capture tool.

The Tanner IC design suite supports analog, mixed-signal, and MEMS design in one complete, highly-integrated, end-to-end flow. Designers capture the schematic, perform analog and mixed-signal simulation, and lay out the physical design within this unified flow. With the addition of the Tanner Calibre One verification suite, each designer using the Tanner IC flow can interactively invoke an individual Calibre tool in order to verify the design.

“Tanner Calibre One gives designers using L-Edit the highest confidence possible that their tape outs will be successful,” says Greg Lebsack, General Manager of Tanner operations at Mentor Graphics. “We are thrilled that key capabilities of the industry-leading Calibre suite are now available to everyone in our global Tanner customer base.”

The Tanner Calibre One design flow will be demonstrated at the 2016 Design Automation Conference (DAC) in the Tanner EDA booth (#1828).

Pattern Matching Tackles IC Verification and Manufacturing Problems

Monday, June 6th, 2016

Mentor Graphics Corporation announced that customers and ecosystem partners are expanding their use of the Calibre Pattern Matching solution to overcome previously intractable IC verification and manufacturing problems. The solution is integrated into the Mentor® Calibre nmPlatform solution, creating a synergy that drives these new applications at IC design companies and foundries, across multiple process nodes.

Calibre Pattern Matching technology supplements multi-operational text-based design rules with an automated visual geometry capture and compare process. This visual approach is both powerful in its ability to capture complex pattern relationships and able to work within mixed tool flows, making it much easier for Mentor customers to create new applications to solve difficult problems. Because it is integrated into the Calibre nmPlatform toolset, the Calibre Pattern Matching functionality can leverage the industry-leading performance and accuracy of all Calibre tools and flows to create new opportunities for design-rule checking (DRC), reliability checking, DFM, yield enhancement, and failure analysis.

“Our customers count on eSilicon’s design services, IP, and ecosystem management to help them succeed in delivering market-leading ICs,” said Deepak Sabharwal, general manager, IP products & services at eSilicon. “We use Calibre Pattern Matching to create and apply a Calibre-based yield-detractor design kit that helps identify and eliminate design patterns that impact production ramp-up time.”

Since its introduction, use models for Calibre Pattern Matching technology have rapidly expanded, solving problems that were previously too complex or time-consuming to be implemented. New use cases include the following:

  • Physical verification of IC designs with curved structures—for analog, high-power, radio frequency (RF) and microelectromechanical (MEMS) circuitry—is extremely difficult with products designed to work with rectilinear design data. Calibre customers are automating that verification using a combination of Calibre Pattern Matching technology and other Calibre tools for much greater efficiency and accuracy, especially when compared to manual techniques.
  • Calibre Pattern Matching technology can be used to quickly locate and remove design patterns that are known to be, or suspected of being, difficult to manufacture (“yield detractors”). Foundries or design companies create libraries of yield detractor patterns that are specific to a process node or a particular design methodology. Samsung Foundry used this approach in its Closed-Loop DFM solution to help its customers ramp to volume faster, and reduce process-design variability.
  • Some customers use Calibre Pattern Matching technology with Calibre Auto-Waivers™ functionality to define a specific context for waiving a DRC violation. This enhancement allows for automatic filtering of those violations for significant time savings and improved design quality.

“To help our customers create manufacturing-ready designs, we use Calibre Pattern Matching to create and use a yield detractor database to fix most of the litho hotspots at the block level. Then we perform fast signoff DFM litho checking at the chip level using an integrated solution with Calibre Pattern Matching and Calibre LFD,” said Min-Hwa Chi, senior vice president, SMIC. “By offering a solution for manufacturability robustness that is built on the Calibre platform, we are seeing ready customer adoption of SMIC’s DFM solution.”

With the Calibre Pattern Matching tool, design companies can now optimize their physical verification checking to their unique design styles. The tool is easy to adopt because it doesn’t rely on expertise in scripting languages. Instead, any engineer can readily define a visual pattern that captures the designer’s expertise in the critical geometries and context for that configuration.
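
Conceptually, such a check amounts to capturing a geometric configuration once and then scanning a layout for occurrences of it. The toy sketch below does this on a coarse raster grid with a naive exact-match scan; the arrays and window are invented, and this is not how the Calibre Pattern Matching engine is implemented.

```python
# Conceptual illustration of geometric pattern capture-and-compare: rasterize
# a layout window, then scan a larger layout for exact matches of that window.
# Toy arrays and a naive exact-match scan; not the Calibre Pattern Matching engine.
import numpy as np

# 1 = drawn shape, 0 = empty, on a coarse raster grid (hypothetical layout).
layout = np.zeros((12, 12), dtype=np.uint8)
layout[2:5, 2:7] = 1        # a wire
layout[3, 8:11] = 1         # a nearby stub (the "yield detractor" context)

# Captured reference pattern: the configuration an engineer flagged visually.
pattern = layout[2:5, 6:11].copy()   # 3x5 window containing the wire end and stub

def find_matches(layout, pattern):
    """Return top-left coordinates of every exact occurrence of pattern."""
    ph, pw = pattern.shape
    lh, lw = layout.shape
    hits = []
    for r in range(lh - ph + 1):
        for c in range(lw - pw + 1):
            if np.array_equal(layout[r:r + ph, c:c + pw], pattern):
                hits.append((r, c))
    return hits

print("pattern found at:", find_matches(layout, pattern))
```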

“With the growing adoption of Calibre Pattern Matching technology, Mentor continues to help our customers address increasing design complexity, regardless of the process node they are targeting,” said Joe Sawicki, vice president and general manager of the Design-to-Silicon division at Mentor Graphics. “By incorporating the Calibre Pattern Matching tool, the Calibre platform becomes an even more valuable bridge between design and manufacturing for the ecosystem.”

At the 2016 Design Automation Conference, Mentor has a Calibre Pattern Matching presentation on Tuesday, June 7 at 3PM in the Mentor booth #949. Register for the session using the registration form.

https://www.mentor.com/events/design-automation-conference/schedule
