
Posts Tagged ‘Analog’

Elusive Analog Fault Simulation Finally Grasped

Tuesday, September 27th, 2016

By Stephen Sunter, Mentor Graphics

The test time per logic gate in ICs has greatly decreased in the last 20 years, thanks to scan-based design-for-test (DFT), automatic test pattern generation (ATPG) tools, and scan compression. But for analog circuits, test time per transistor has not decreased at all. And to make matters worse, the test time for the analog portion of an IC can dominate total test time. A new approach is needed for analog tests to achieve higher coverage in less time, or to improve defect tolerance.

Source: ON Semiconductor

Analog designers and test engineers do not have DFT tools comparable to those used by their digital counterparts. It has been difficult to improve the rate of defective parts per million (DPPM) because measuring analog defect coverage has been too challenging; DPPM levels are typically gauged by the rate of customer returns, which can occur months after the ICs are tested.

Analog fault simulation has been discussed mainly in academic papers and, more recently, in a few industrial papers that describe proprietary software. Why haven’t the analog fault simulation techniques described in all those papers led to commercially available fault simulators that are used in industry? Mostly because there is no industry-accepted analog fault model, and because simulating all potential faults requires an impractically long time.

Potential Solutions for Reducing Simulation Time

Many methods for reducing simulation time have been proposed over the years in published papers, including:

  • Simulate only shorts and opens in the schematic netlist without variations;
  • Analyze a circuit’s layout to find the shorts and opens that can actually occur (and the likelihood of those defects occurring);
  • Simulate only in the AC domain;
  • Simulate the sensitivities of each tested performance to variations in each circuit element;
  • Use a simplified, time domain simulation to measure the impact of injected shorts and opens on output signals, only within a few clock cycles;
  • Measure analog toggle coverage.

Even if these techniques were very efficient and reduced simulation time dramatically, the large number of defects simulated would mean that the number of undetected defects to diagnose would be large. For example, if there were 100,000 potential faults in a circuit and 90% were detected, there would be 10,000 undetected faults to investigate. Analyzing each defect is a very time-consuming task that requires detailed knowledge of the circuit and tests. Therefore, reducing the number of defects simulated can save a lot of time, in multiple ways. The methods to reduce the number of defects include:

  • Randomly select defects from a list of all potential defects (a sampling sketch follows this list);
  • Randomly select defects, after grouping them according to defect likelihoods;
  • Select only principal parameters of the circuit elements, such as voltage, gate length, width, and oxide thickness;
  • Select representative defects based on circuit analysis.
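
As a rough illustration of the random-sampling idea (not specific to any tool), the sketch below estimates defect coverage from a simple random sample of injected defects and reports a normal-approximation confidence interval; all names and numbers are hypothetical.

```python
import math
import random

def estimate_coverage_srs(all_defects, simulate_and_test, sample_size, z=2.576):
    """Estimate defect coverage by simulating a simple random sample of defects.

    all_defects       -- list of defect descriptors (hypothetical)
    simulate_and_test -- callable returning True if the injected defect is detected
    sample_size       -- number of defects actually simulated
    z                 -- z-score; 2.576 corresponds to ~99% confidence
    """
    sample = random.sample(all_defects, sample_size)
    detected = sum(1 for d in sample if simulate_and_test(d))
    coverage = detected / sample_size
    # Normal-approximation half-width of the confidence interval on the estimate
    half_width = z * math.sqrt(coverage * (1.0 - coverage) / sample_size)
    return coverage, half_width

# Toy usage: 100,000 potential defects, of which 90% happen to be detectable
defects = list(range(100_000))
detectable = set(random.sample(defects, 90_000))
cov, hw = estimate_coverage_srs(defects, lambda d: d in detectable, sample_size=250)
print(f"estimated coverage = {cov:.1%} +/- {hw:.1%}")
```

The point is only that a few hundred simulations can bound coverage for an arbitrarily long defect list; the likelihood-weighted sampling described later tightens the interval further for the same sample size.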

Potential Standard Analog Fault Models

Currently, there is no accepted analog fault model standard in the industry. Proposals such as simulating only short and open defects and simulating defective variations in circuit elements or in high-level models have been rejected. Because of the lack of a standard, a group of about a dozen companies (including Mentor Graphics) has been meeting regularly since mid-2014 to develop such a fault model. The group has reported their progress publicly several times, and hopes to develop an IEEE standard by 2018.

The Tessent DefectSim Solution

Tessent® DefectSim™ incorporates lessons learned from all previous approaches, combining the best aspects of each while avoiding their pitfalls. A variety of techniques together reduce total simulation time by many orders of magnitude compared to some of the previous approaches, without introducing a new simulator, reducing existing simulator accuracy, or restricting the types of tests. The analog defect models can be shorts and opens, just variations, or both; users can also substitute their own proprietary defect models. Defects can be injected at the schematic level, at the layout level, or at a combination of both.

To be realistic, defects should be injected in a layout-extracted netlist. But higher-level netlist descriptions or hardware description language (HDL) models, such as Verilog-A or Verilog RTL, can reduce simulation time by one or two orders of magnitude. In practice, the highest level netlist of a subcircuit is often just its schematic; nevertheless, it typically simulates an order of magnitude faster than the layout-extracted netlist. DefectSim runs Eldo® when the circuit contains only SPICE and Verilog-A models, and Questa® ADMS™ when Verilog-AMS or RTL models are also used.

DefectSim introduces a new statistical technique called likelihood-weighted random sampling (LWRS) to minimize the number of defects to simulate. The technique uses stratified random sampling in which each stratum contains only one defect, and the likelihood of randomly selecting each defect is proportional to the likelihood of that defect occurring. Each likelihood of occurrence is computed from designer-provided global parameters and the parameters of each circuit element.

For example, shorts are the most common defect type; in state-of-the-art production processes, shorts are 3-10X more likely than opens. When the range of defect likelihoods is large, as it is for mixed-signal circuits, LWRS requires up to 75% fewer samples than simple random sampling (SRS) for a given confidence interval (the variation in an estimate that would occur if the random sampling were repeated many times). In practice, when coverage is 90% or higher, it is usually sufficient to simulate a maximum of 250 defects, regardless of the circuit size or the number of potential defects, to estimate coverage within 2.5% at a 99% confidence level. Simulating as few as one hundred defects is sufficient for ±4% estimate precision. For small circuits, or when time permits, all defects can be simulated.
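
The published description of LWRS is high-level, so the following is only a conceptual sketch of likelihood-proportional selection (probability-proportional-to-size sampling), not Mentor's proprietary algorithm; the defect list, likelihood ratios, and detection function are all made up. When each defect is drawn with probability proportional to its likelihood of occurring, the plain detection fraction of the sample estimates the likelihood-weighted coverage.

```python
import random

def likelihood_weighted_sample(defects, likelihoods, n):
    """Draw n defects with probability proportional to their likelihood of occurring.

    Sampling is with replacement for simplicity; a production implementation
    (and LWRS itself, which is stratified) would differ in detail.
    """
    return random.choices(defects, weights=likelihoods, k=n)

def weighted_coverage_estimate(sampled_defects, is_detected):
    """With likelihood-proportional selection, the detection fraction of the sample
    is an unbiased estimate of the likelihood-weighted defect coverage."""
    hits = sum(1 for d in sampled_defects if is_detected(d))
    return hits / len(sampled_defects)

# Toy usage: shorts assumed ~5x more likely than opens (within the 3-10X range above)
defects = [("short", i) for i in range(600)] + [("open", i) for i in range(400)]
likelihoods = [5.0 if kind == "short" else 1.0 for kind, _ in defects]
is_detected = lambda d: hash(d) % 10 != 0      # stand-in for "the test detects this defect"
sample = likelihood_weighted_sample(defects, likelihoods, n=250)
print(f"estimated weighted coverage ~ {weighted_coverage_estimate(sample, is_detected):.1%}")
```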

DefectSim allows you to combine almost all of the previously-published techniques for reducing simulation time, including random sampling, high-level modeling, stop-on-detection, AC mode, and parallel simulation. All together, these techniques can reduce simulation time by up to six orders of magnitude compared to simulating the production test of all potential defects in a flat, layout-extracted netlist. The same techniques can be applied to the measurement of defect tolerance.

For more information about Tessent DefectSim, read the whitepaper at:
https://www.mentor.com/products/silicon-yield/resources/overview/part-1-analog-fault-simulation-challenges-and-solutions-f9fd7248-3244-4bda-a7e5-5a19f81d7490?cmpid=10167

IoT Demands Part 2: Test and Packaging

Friday, April 15th, 2016

By Ed Korczynski, Senior Technical Editor, Solid State Technology, SemiMD

The Internet-of-Things (IoT) adds new sensing and communications to improve the functionality of all manner of things in the world. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be extremely small and low-cost. To understand the state of technology preparedness to meet the anticipated needs of the different application spaces, experts from GLOBALFOUNDRIES, Cadence, Mentor Graphics and Presto Engineering gave detailed answers to questions about IoT chip needs in EDA and fab nodes, as published in “IoT Demands:  EDA and Fab Nodes.” We continue with the conversation below.

Korczynski: For test of IoT devices which may use ultra-low threshold voltage transistors, what changes are needed compared to logic test of a typical “low-power” chip?

Steve Carlson, product management group director, Cadence

Susceptibility to process corners and operating conditions becomes heightened at near-threshold voltage levels. This translates into either more conservative design sign-off criteria, or the need for higher levels of manufacturing screening/tests. Either way, it has an impact on cost, be it hidden by over-design, or overtly through more costly qualification and test processes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

We need to make sure that the testability has also been designed to work, both functionally and structurally, in this mode. In addition, sub-threshold voltage operation must account for non-linear transistor characteristics and the strong impact of local process variation, for which the conventional testability arsenal is still very poor. Automotive screening has used very-low-voltage (VLV) operation to detect latent defects, but at voltages close to the transistor threshold, digital becomes analog; even if the usual concept still works for defect detection, functional and at-speed tests require additional expertise to be both meaningful and efficient from a test-coverage perspective.

Korczynski:  Do we have sufficient specifications within “5G” to handle IoT device interoperability for all market segments?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

The estimated timeline for standardization availability of 5G is around 2020. 5G is being designed with three classes of applications in mind: Enhanced Mobile Broadband, Massive IoT, and Mission-Critical Control. Specifically for IoT, the focus is on efficient, low-cost communication with deep coverage. We will start to see early 5G technologies appear around 2018, but device connectivity, interoperability, and marshaling of the data devices generate across multiple IoT sub-segments and markets are still very much in development.

Korczynski:  Will the first generation of IoT devices likely include a wide variety of solutions for different market segments such as industrial vs. retail vs. consumer, or will most devices use similar form-factors and underlying technologies?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

If we use CES 2016 as a showcase, we are seeing IoT “Things” that are becoming use-case or application-centric as they apply to specific sub-segments such as Connected Home, Automotive, Medical, Security, etc. There is definitely more variety on the consumer front vs. industrial. Vendors / OEMs / System houses are differentiating at the user-interface design and form-factor levels while the “under-the-hood” IC capabilities and component technologies that provide the atomic intelligence are fairly common. ​

Steve Carlson, product management group director, Cadence

Right now it seems like everyone is swinging for the fence. Everyone wants the home-run product that will reach a billion devices sold. Generality generally leads to sub-optimality, so a single device usually fails to meet the needs and expectations of many. Devices that are optimized for more specific use cases and elements of purchasing criteria will win out. The question of interface is an interesting one.

Korczynski:  Will there be different product life-cycles for different IoT market-segments, such as 1-3 years for consumer but 5-10 years for industrial?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

That certainly seems to be the case. According to Gartner’s market analysis for IoT, Consumer is expected to grow at a faster pace in terms of units compared to Enterprise, while Enterprise is expected to lead in revenue. Also, the churn cycle in Consumer is higher and faster compared to Enterprise; today’s wearables and smart-phones are good reference examples. This will, however, vary by the type of “Thing” and sub-segment. For example, you expect to keep your smart refrigerator for a longer time period than smart clothing or eyewear. As ASPs of the “Things” come down over time and new classes of products such as disposables hit the market, we can expect even larger volumes.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

The market segments continue to be driven by the same use cases. In consumer wearables, short cycles are linked to fashion trends and rapid obsolescence, whereas consumer home use has longer cycles, closer to industrial market requirements. We believe that these lifecycle norms will hold true for IoT devices.

Korczynski:  For the IoT application of infrastructure monitoring (e.g. bridges, pipelines, etc.) long-term (10-20 year) reliability will be essential, while consumer applications may be best served by 3-5 year reliability devices which cost less; how well can we quantify the trade-off between cost and chip reliability?

Steve Carlson, product management group director, Cadence

Conceptually we know very well how to make devices more reliable. We can lower current densities with bigger wires, we can run at cooler temperatures, and so on. The difficulty is always in finding the optimum for a given criterion across what are, for practical purposes, infinite tradeoffs.

Korczynski:  Why is the talk of IoT not just another “Dot Com” hype cycle?

Rajeev Rajan, Vice President of Internet of Things (IoT) at GLOBALFOUNDRIES

I participated in a panel at SEMICON China in Shanghai last month that discussed a similar question. If we think of IoT as a “brand new thing” (no pun intended), then we can think of it as hype. However, if we look at the IoT as a set of use-cases that can take advantage of an evolution of Machine-to-Machine (M2M) communication toward broader connectivity, huge amounts of data generated and exchanged, and a generational increase in internet and communication network bandwidths (i.e. 5G), then it seems a more down-to-earth technological progression.

Nicolas Williams, product marketing manager, Mentor Graphics

Unlike the Dot Com hype, which was built upon hope and dreams of future solutions that may or may not have been based in reality, IoT is real business. For example, in a 2016 IC Insights report, we see that last year $63.4 billion in revenue was generated for IoT systems and the market is growing at about 20% CAGR. This same report also shows IoT semiconductor sales of over $15 billion in 2015 with a CAGR of 21.1%.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is the investment needed up front to create sensing agents and an infrastructure for the hardware foundation of the IoT that will lead to big data and ultimately value creation.

Steve Carlson, product management group director, Cadence

There will be plenty of hype cycles for products and product categories along the way. However, the foundational shift of the connection of things is a diode through which civilization will pass in only one direction.

IoT Demands Part 1: EDA and Fab Nodes

Thursday, April 14th, 2016

The Internet-of-Things (IoT) is expected to add new sensing and communications to improve the functionality of all manner of things in the world:  bridges sensing and reporting when repairs are needed, parts automatically informing where they are in storage and transport, human health monitoring, etc. Solid-state and semiconducting materials for new integrated circuits (IC) intended for ubiquitous IoT applications will have to be assembled at low-cost and small-size in High Volume Manufacturing (HVM). Micro-Electro-Mechanical Systems (MEMS) and other sensors are being combined with Radio-Frequency (RF) ICs in miniaturized packages for the first wave of growth in major sub-markets.

To meet the anticipated needs of the different IoT application spaces, SemiMD asked leading companies within critical industry segments about the state of technology preparedness:

  • Commercial IC HVM – GLOBALFOUNDRIES,
  • Electronic Design Automation (EDA) – Cadence and Mentor Graphics,
  • IC and complex system test – Presto Engineering.

Korczynski:  Today, ICs for IoT applications typically use the 45nm/65nm nodes, which are “Node -3” (N-3) compared to sub-20nm-node chips in HVM. Five years from now, when the bleeding edge will use 10nm-node technology, will IoT chips still use N-3 at the 28nm node (considered a “long-lived node”), or will the 45nm node remain the likely sweet-spot of price:performance?

Timothy Dry, product marketing manager, GLOBALFOUNDRIES

In 5 years’ time, there will be a spread of technology solutions addressing the low, middle, and high ends of IoT applications. At the low end, IoT end nodes for applications like connected smoke detectors and security sensors will be at 55nm and 40nm ULP and ULL for lowest system power and low cost. These applications will typically be served by MCUs <50DMIPs. Integrated radios (BLE, 802.15.4), security, a Power Management Unit (PMU), and eFlash or MRAM will be common features. Connected LED lighting is forecasted to be a high-volume IoT application. The LED drivers will use BCD extensions of 130nm down to 40nm that can also support the radio and protocol-MCU with Flash.

In the mid-range, applications like smart-meters and fitness/medical monitoring will need systems that have more processing power <300DMIPS. These products will be implemented in 40nm, 28nm and GLOBALFOUNDRIES’ new 22nm FDSOI technology that uses software-controlled body-biasing to tune SoC operation for lowest dynamic power. Multiple wireless (BLE/802.15.4, WiFi, LPWAN) and wired connectivity (Ethernet, PLC) protocols with security will be integrated for gateway products.

High-end products like smart-watches, learning thermostats, home security/monitoring cameras, and drones will require MPU-class IC products (~2000DMIPs) and run full operating systems (e.g. Linux, Android). These products will be made in leading-edge nodes starting at 22FDX and 14FF and migrating to 7FF and beyond. Design for lowest dynamic power for longest battery life will be the key driver, and these products typically require a human-machine interface (HMI) with animated graphics on a high-resolution display. Connectivity will include BLE, WiFi, and cellular with strong security.

Steve Carlson, product management group director, Cadence

We have seen recent announcements of IoT targeted devices at 14nm. The value created by Moore’s Law integration should hold, and with that, there will be inherent advantages to those who leverage next generation process nodes. Still, other product categories may reach functionality saturation points where there is simply no more value obtained by adding more capability. We anticipate that there will be more “live” process nodes than ever in history.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

It is fair to say that most IoT devices will be a heterogeneous aggregation of analog functions rather than high-power digital processors. Therefore, and by similarity with Bluetooth and RFID devices, 90nm and 65nm will remain the mainstream nodes for many sub-vertical markets, enabling the integration of RF and analog front-end functions with sufficient digital gate density. By default, sensors will stay out of the monolithic path for both design and cost reasons. The best answer would be that IoT ASICs will eventually follow the same scaling as MCU products with embedded non-volatile memory, which today is 55-40nm centric and will move to 28nm with industry maturity and volumes.

Korczynski:  If most IoT devices will include some manner of sensor which must be integrated with CMOS logic and memory, then do we need new capabilities in EDA-flows and burn-in/test protocols to ensure meeting time-to-market goals?

Nicolas Williams, product marketing manager, Mentor Graphics

If we define a typical IoT device as a product that contains a MEMS sensor, an A/D converter, digital processing, and an RF connection to the internet, we can see that the fundamental challenge of IoT design is that teams working on such a product need to master the analog, digital, MEMS, and RF domains. These four domains often require different experience and knowledge, and design in each is sometimes handled by separate teams. IoT design requires that all four domains are designed and work together, especially if they are going on the same die. Even if the components are targeting separate dice that will be bonded together, they still need to work together during the layout and verification process. Therefore, a unified design flow is required.

Stephen Pateras, product marketing director, Mentor Graphics

Being able to quickly debug and create test patterns for various embedded sensor IP can be addressed with the adoption of the new IEEE 1687 IP plug-and-play standard. If a sensor IP block’s digital interface adheres to the standard, then any vendor-provided data required to initialize or operate the embedded sensor can be easily and quickly mapped to chip pins. Data sequences for multiple sensor IP blocks can also be merged to create optimized sequences that will minimize debug and test times.

Jon Lanson, vice president worldwide sales & marketing, Presto Engineering

From a testing standpoint, widely used ATEs are generally focused on a few purposes, but don’t necessarily cover all elements in a system. We think that IoT devices are likely to require complex testing flows using multiple ATEs to assure adequate coverage. This is likely to prevail for some time as short run volumes characteristic of IoT demands are unlikely to drive ATE suppliers to invest R&D dollars in creating new purpose-built machines.

Korczynski:  For the EDA of IoT devices, can all sensors be modeled as analog inputs within established flows or do we need new modeling capability at the circuit level?

Steve Carlson, product management group director, Cadence

Typically, the interface to the physical world has been partitioned at the electrical boundary. But as more mechanical and electro-mechanical sensors are more deeply integrated, there has been growing value in co-design, co-analysis, and co-optimization. We should see more multi-domain analysis over time.

Nicolas Williams, product marketing manager, Mentor Graphics

Designers of IoT devices that contain MEMS sensors need quality models in order to simulate their behavior under physical conditions such as motion and temperature. Unlike CMOS IC design, there are few standardized MEMS models for system-level simulation. State-of-the-art MEMS modeling requires automatic generation of behavioral models from the results of Finite Element Analysis (FEA) using reduced-order modeling (ROM). ROM is a numerical methodology that reduces the analysis results to create Verilog-A models for use in AMS simulations, enabling co-simulation of the MEMS device in the context of the IoT system.
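
The flow described above emits Verilog-A models from FEA results; purely as an illustration of what a reduced-order model is, the sketch below collapses a MEMS accelerometer into a hypothetical single mass-spring-damper mode and steps it with simple numerical integration. The parameter values and names are made up and are not tied to any Mentor tool.

```python
import numpy as np

def accel_rom_response(accel_in, dt, m=1e-9, k=5.0, b=1e-4):
    """Toy reduced-order model of a MEMS accelerometer: one mass-spring-damper mode,
    m*x'' + b*x' + k*x = m*a(t), returning proof-mass displacement over time.

    accel_in -- input acceleration samples (m/s^2)
    dt       -- time step (s); m, k, b are illustrative lumped parameters
    """
    x, v = 0.0, 0.0
    out = np.zeros(len(accel_in))
    for i, a_i in enumerate(accel_in):
        force = m * a_i - b * v - k * x        # net force on the proof mass
        v += (force / m) * dt                  # semi-implicit Euler step
        x += v * dt
        out[i] = x
    return out

# Toy usage: displacement response to a 1 g step of acceleration
t = np.arange(0.0, 2e-3, 1e-7)
a = np.where(t > 1e-4, 9.81, 0.0)
disp = accel_rom_response(a, dt=1e-7)
print(f"final displacement ~ {disp[-1]:.3e} m")   # ~ m*g/k, about 2 nm here
```

A real ROM extracted from FEA would capture several modes, electrostatic transduction, and packaging effects, and would be exported as Verilog-A so it can sit alongside the readout electronics in an AMS testbench.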

Meeting the IoT Design Challenge

Monday, November 2nd, 2015

By Pete Singer, Editor-in-Chief

Mentor Graphics acquired Tanner EDA in March of 2015, in an effort to better address the design, layout, and verification of analog/mixed-signal (AMS) and MEMS ICs, key building blocks in the Internet of Things (IoT).

Since then, the Tanner team has moved offices and successfully been integrated into Mentor’s corporate structure.

We recently caught up with Jeff Miller, product marketing manager for the Tanner Group at Mentor Graphics. “We’ve kept the team together and we’re continuing to work as a business unit within Mentor Graphics with the same team under the same leadership,” he said. “Greg Lebsack, who was the president of Tanner EDA, is now the general manager of the Tanner Group. We have the same basic org chart.” He noted that the same people who were with Tanner for a long time are still there. “We tried to preserve that and we’ve done a good job of that,” Miller said.

With the explosion of IoT devices – some estimate 70 billion devices will be connected to the internet by 2020 – the Tanner acquisition seems particularly prescient in that many if not most IoT devices are analog/mixed signal devices, and many involve the use of MEMS.

“We’ve been involved in various IoT-type designs for a long time,” Miller explained. He defined an IoT device as a sensor and an actuator (that’s the “thing” part), plus some amount of readout or control circuitry, and some digital logic to control it and interface to a radio, which then communicates with your cell phone or WiFi network and on to the internet. “You need to have all those four pieces to make your IoT device,” he said.

The microcontroller or microprocessor component and the radio component have traditionally been done outside of the Tanner EDA tools, but Miller said the group has been making a big effort in the last couple of years to bring some of that into their design flow and enable a greater degree of integration. “The cost, size pressures and power pressures are going to force some integration there,” Miller said.

In other words, sensors are being integrated with more and more intelligence. “Instead of just having a raw MEMS accelerometer, they’ll have a 3-axis accelerometer with a 3-axis gyro and a read-out circuit and enough digital logic to do some processing,” said Miller. “These sensors are becoming a lot smarter and more integrated in order to support these kinds of applications.”

Miller said he’s seen a lot of new entrants into the IoT market. Typically, design teams have 5 to 20 people. Tanner’s market historically has been the smaller companies with relatively focused products.

“I’m expecting the needs of this market to be diverse enough that we’re going to see a proliferation of small interesting designs that enable a particular class of IoT device,” Miller predicts. “This proliferation across the market will lead to small design teams doing something innovative in a smaller scale environment, trying to make these things as small and efficient as they can possibly be.”

Since the acquisition, a big focus of the Tanner Group has been on how to best integrate Mentor’s tools such as Calibre, ModelSim and AFS with existing Tanner products. “More so than ever before, we have a complete design flow, start to finish, for analog design, mixed-signal design and MEMS design, and any integration across those things,” Miller said. “We’re keeping our basic ways of doing things and leveraging the incredible resources that are available being part of a large company like Mentor Graphics. It’s really good for us to be part of this new, larger team.”

The first major integration was with Calibre, followed by ModelSim as the digital simulator in their mixed-signal flow. “We can integrate our SPICE simulator with ModelSim and do mixed-signal simulations and communicate the signals across the boundary between analog and digital,” Miller said. He added that he expects more and tighter integrations with other Mentor Graphics tools moving forward.

“I’ve been really encouraged that Mentor has been investing in us and making sure we’re going to be around and still doing business in a Tanner kind of way going into the future,” Miller said.

MicroWatt Chips shown at ISSCC

Thursday, March 5th, 2015

By Ed Korczynski, Sr. Technical Editor

With much of future demand for silicon ICs forecasted to be for mobile devices that must conserve battery power, it was natural for much of the focus at the just concluded 2015 International Solid State Circuits Conference (ISSCC) in San Francisco to be on ultra-low-power circuits that run on mere microWatts (µW). From analog to digital logic to radio-frequency (RF) chips and extending to complete system-on-chip (SoC) prototypes, silicon IC functionality is being designed with evolutionary and even revolutionary reductions in the operational power needed.

The figure shows a multi-standard 2.4 GHz radio that was co-developed by imec, Holst Centre, and Renesas using a 40nm-node CMOS process. This was detailed in session 13.2, when Y.H. Liu presented “A 3.7mW-RX 4.4mW-TX Fully Integrated Bluetooth Low-Energy/IEEE802.15.4/Proprietary SoC with an ADPLL-Based Fast Frequency Offset Compensation in 40nm CMOS.” It uses a digital-intensive RF architecture tightly integrated with the digital baseband (DBB) and a microcontroller (MCU); the digital-intensive RF design reduces the analog core area to 1.3mm², while the DBB/MCU/SRAM occupies 1.1mm². This is an evolution of a previous 90nm RF front-end design, with a 20 percent lower supply voltage, 25 percent lower power consumption, and 35 percent smaller chip area.

Ultra-low-power multi-standard 2.4 GHz radio compliant with Bluetooth Low Energy and ZigBee, co-developed by imec, Holst Centre, and Renesas. (Source: Renesas)

“From healthcare to smart buildings, ubiquitous wireless sensors connected through cellular devices are becoming widely used in everyday life,” said Harmke De Groot, Department Director at imec. “The radio consumes the majority of the power of the total system and is one of the most critical components to enable these emerging applications. Moreover, a low-cost, area-efficient radio design is an important catalyst for developing small sensor applications seamlessly integrated into the environment. Implementing an ultra-low-power radio will increase the autonomy of the sensor device, increase its quality, functionality and performance, and enable a reduction in battery size, resulting in a smaller device, which, in the case of wearable systems, adds to the user’s comfort.”

When most ICs were used in devices and systems powered by line current, there was no advantage to minimizing power consumption, so digital CMOS circuits could be designed with billions of transistors switching billions of times each second, providing enough brute-force power to solve most problems. With power consumption now a vital aspect of much of the demand for future chips, this year’s ISSCC offered the following tutorials on low-power chips:

  • “Ultra Low Power Wireless Systems” by Alison Burdett of Toumaz Group (UK),
  • “Low Power Near-threshold Design” by Dennis Sylvester of University of Michigan, and
  • “Analog Techniques for Low-Power Circuits” by Vadim Ivanov of Texas Instruments.

Then on Thursday the 26th, an entire short course was offered on “Circuit Design in Advanced CMOS Technologies: How to Design with Lower Supply Voltages,” with lectures on the following:

  • “A Roadmap to Lower Supply Voltages – A System Perspective” by Jan M. Rabaey of UC Berkeley,
  • “Designing Ultra-Low-Voltage Analog and Mixed-Signal Circuits” by Peter Kinget of Columbia University,
  • “ADC Design in Scaled Technologies” by Andrea Baschirotto of University of Milan-Bicocca, and
  • “Ultra-Low-Voltage RF Circuits and Transceivers” by Hyunchoi Shin of Kwangwoon University.

µW SoC Blocks

Session 5.10 covered “A 4.7MHz 53µW Fully Differential CMOS Reference Clock Oscillator with -22dB Worst-Case PSNR for Miniaturized SoCs” by J. Lee et al. of the Institute of Microelectronics (Singapore), along with researchers from KAIST and Daegu Gyeongbuk Institute of Science and Technology in Korea. While many SoCs for the IoT are intended for machine-to-machine networks, human interaction will still be needed for many applications, so session 6.7 covered “A 2.3mW 11cm-Range Bootstrapped and Correlated-Double-Sampling (BCDS) 3D Touch Sensor for Mobile Devices” by L. Du et al. from UCLA (California).

As indicated by the low MHz speed of the clock circuit referenced above, the only way that these ICs can consume 1/1000th of the power of mainstream chips is to operate at 1/1000th of the speed. Also note that most of these chips will be made using 90nm- and 65nm-node fab processes, instead of today’s leading 22nm- and 14nm-node processes, as evidenced by session 8.3, which covered “A 10.6µA/MHz at 16MHz Single-Cycle Non-Volatile Memory-Access Microcontroller with Full State Retention at 108nA in a 90nm Process” by V.K. Singhal et al. from the Kilby Labs of Texas Instruments (Bangalore, India). Session 18.3 covered “A 0.5V 54µW Ultra-Low-Power Recognition Processor with 93.5% Accuracy Geometric Vocabulary Tree and 47.5% Database Compression” by Y. Kim et al. of KAIST (Daejeon, Korea).

In the Low Power Digital sessions it was natural that ARM Cortex chips were the basis for two different presentations on ultra-low power functionality, since ARM cores power most of the world’s mobile processors, and since the RISC architecture of ARM was deliberately evolved for mobile applications. Session 8.1 covered “An 80nW Retention 11.7pJ/Cycle Active Subthreshold ARM Cortex-M0+ Subsystem in 65nm CMOS for WSN Applications” by J. Myers et al. of ARM (Cambridge, UK). In the immediately succeeding session 8.2, W. Lim et al. of the University of Michigan (Ann Arbor) presented on the possibilities for “Batteryless Sub-nW Cortex-M0+ Processor with Dynamic Leakage-Suppression Logic.”

nW Beyond Batteries

Session 5.4 covered “A 32nW Bandgap Reference Voltage Operational from 0.5V Supply for Ultra-Low Power Systems” by A. Shrivastava et al. of PsiKick (Charlottesville, VA). PsiKick’s silicon-proven ultra-low-power wireless sensing devices are based on over 10 years of development of Sub-Threshold (Sub-Vt) devices. They are claimed to operate at 1/100th to 1/1000th of the power budget of other low-power IC sensor platforms, allowing them to be powered without a battery from a variety of harvested energy sources. These SoCs include full sensor analog front-ends, programmable processing and memory, integrated power management, programmable hardware accelerators, and full RF (wireless) communication capabilities across multiple frequencies, all of which can be built with standard CMOS processes using standard EDA tools.

Extremely efficient energy harvesting was also shown by S. Stanzione et al. of Holst Centre/ imec/KU Leuven working with OMRON (Kizugawa, Japan) in session 20.8 “A 500nW Battery-less Integrated Electrostatic Energy Harvester Interface Based on a DC-DC Converter with 60V Maximum Input Voltage and Operating From 1μW Available Power, Including MPPT and Cold Start.” Such energy harvesting chips will power ubiquitous “smarts” embedded into the literal fabric of our lives. Smart clothes, smart cars, and smart houses will all augment our lives in the near future.

—E.K.

The Challenges Of 28nm HKMG

Tuesday, June 26th, 2012

28nm Super Low Power (28nm-SLP) is a low-power CMOS offering delivered on a bulk silicon substrate for mobile-consumer and digital-consumer applications. The technology has four Vt’s (high, regular, low, and super low) for design flexibility, with multi-channel-length capability, and offers the ultimate in small die size and low cost. Multiple SRAM bit cells for high density and high performance are available. With the simpler process integration of a “Gate-First” HKMG scheme, 28nm-SLP also offers the use of an eFuse, which is more competitive than, and superior to, a BEOL copper fuse solution.
