
Posts Tagged ‘Google’

Deep Learning Could Boost Yields, Increase Revenues

Thursday, March 23rd, 2017


By Dave Lammers, Contributing Editor

While it is still early days for deep-learning techniques, the semiconductor industry may benefit from the advances in neural networks, according to analysts and industry executives.

First, the design and manufacturing of advanced ICs can become more efficient by deploying neural networks trained to analyze data, though labelling and classifying that data remains a major challenge. Also, demand will be spurred by the inference engines used in smartphones, autos, drones, robots and other systems, while the processors needed to train neural networks will re-energize demand for high-performance systems.

Abel Brown, senior systems architect at Nvidia, said until the 2010-2012 time frame, neural networks “didn’t have enough data.” Then, a “big bang” occurred when computing power multiplied and very large labelled data sets grew at Amazon, Google, and elsewhere. The trifecta was complete with advances in neural network techniques for image, video, and real-time voice recognition, among others.

During the training process, Brown noted, neural networks “figure out the important parts of the data” and then “converge to a set of significant features and parameters.”
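
As a deliberately toy illustration of that convergence, the sketch below (not drawn from Nvidia's tooling, and using made-up data) runs plain gradient descent on a small logistic-regression model; the weights for the informative features grow while the irrelevant ones stay near zero, which is the "converge to a set of significant features and parameters" behavior Brown describes.

```python
# Minimal sketch: gradient descent converging to a set of significant parameters.
# Toy logistic regression on synthetic data, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # 1000 labelled examples, 5 features
true_w = np.array([2.0, -1.0, 0.0, 0.0, 0.5])  # only 3 features actually matter
y = (X @ true_w + 0.1 * rng.normal(size=1000) > 0).astype(float)

w = np.zeros(5)                                # parameters start uninformed
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # predicted probability
    grad = X.T @ (p - y) / len(y)              # gradient of the cross-entropy loss
    w -= 0.5 * grad                            # gradient-descent update

print(np.round(w, 2))  # weights for the informative features dominate
```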

Chris Rowen, who recently started Cognite Ventures to advise deep-learning startups, said he is “becoming aware of a lot more interest from the EDA industry” in deep learning techniques, adding that “problems in manufacturing also are very suitable” to the approach.

Chris Rowen, Cognite Ventures

For the semiconductor industry, Rowen said, deep-learning techniques are akin to “a shiny new hammer” that companies are still trying to figure out how to put to good use. But since yield questions are so important, and the causes of defects are often so hard to pinpoint, deep learning is an attractive approach to semiconductor companies.

“When you have masses of data, and you know what the outcome is but have no clear idea of what the causality is, (deep learning) can bring a complex model of causality that is very hard to do with manual methods,” said Rowen, an IEEE fellow who earlier was the CEO of Tensilica Inc.

The magic of deep learning, Rowen said, is that the learning process is highly automated and “doesn’t require a fab expert to look at the particular defect patterns.”

“It really is a rather brute force, naïve method. You don’t really know what the constituent patterns are that lead to these particular failures. But if you have enough examples that relate inputs to outputs, to defects or to failures, then you can use deep learning.”
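
A hedged sketch of that "brute force" idea, using synthetic data and a small off-the-shelf neural network rather than anything a fab would actually deploy: the model sees only input measurements and pass/fail outcomes, with no hand-built causal model, and learns the mapping on its own.

```python
# Illustrative only: learn a pass/fail model directly from input/outcome examples.
# The "process measurements" and failure rule below are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))                # pretend: 20 measurements per die
# An interaction of a few parameters (unknown to the model) drives failures.
fail = (X[:, 3] * X[:, 7] + 0.5 * X[:, 12] ** 2 > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, fail, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)                    # learn the input-to-failure mapping
print("held-out accuracy:", model.score(X_test, y_test))
```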

Juan Rey, senior director of engineering at Mentor Graphics, said Mentor engineers have begun investigating deep-learning techniques that could improve models of the lithography process steps, a complex problem that he said “is an area where deep neural networks and machine learning seem to be able to help.”

Juan Rey, Mentor Graphics

In the lithography process “we need to create an approximate model of what needs to be analyzed. For example, for photolithography specifically, there is the transition between dark and clear areas, where the slope of intensity for that transition zone plays a very clear role in the physics of the problem being solved. The problem tends to be that the design, the exact formulation, cannot be used in every space, and we are limited by the computational resources. We need to rely on a few discrete measurements, perhaps a few tens of thousands, maybe more, but it still is a discrete data set, and we don’t know if that is enough to cover all the cases when we model the full chip,” he said.

“Where we see an opportunity for deep learning is to try to do an interpretation for that problem, given that an exhaustive analysis is impossible. Using these new types of algorithms, we may be able to move from a problem that is continuous to a problem with a discrete data set.”
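
The sketch below is purely illustrative and is not Mentor's method: it fits a small neural surrogate to a discrete set of simulated measurements of a dark-to-clear intensity transition, then queries it at positions that were never measured, standing in for the exhaustive analysis that Rey says is impossible to run directly.

```python
# Illustrative surrogate-model sketch: a neural regressor fitted to a few discrete
# "measurements" of a smooth intensity transition, queried at unmeasured positions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
x_meas = np.sort(rng.uniform(-2.0, 2.0, size=200)).reshape(-1, 1)   # discrete samples
intensity = 0.5 * (1 + np.tanh(3 * x_meas[:, 0])) + 0.01 * rng.normal(size=200)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(x_meas, intensity)               # learn the dark-to-clear transition

x_query = np.linspace(-2, 2, 5).reshape(-1, 1) # positions that were never measured
print(np.round(surrogate.predict(x_query), 3)) # interpolated intensity profile
```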

Mentor seeks to cooperate with academia and with research consortia such as IMEC. “We want to find the right research projects to sponsor between our research teams and academic teams. We hope that we can get better results with these new types of algorithms, and in the longer term with the new hardware that is being developed,” Rey said.

Many companies are developing specialized processors to run machine-learning algorithms, including non-von Neumann, asynchronous architectures, which could offer several orders of magnitude less power consumption. “We are paying a lot of attention to the research, and would like to use some of these chips to solve some of the problems that the industry has, problems that are not very well served right now,” Rey said.

While power savings can still be gained with synchronous architectures, Rey said brain-inspired projects such as Qualcomm’s Zeroth processor, or the use of memristors being developed at HP Labs, may be able to deliver significant power savings. “These are all worth paying attention to. It is my feeling that different architectures may be needed to deal with unstructured data. Otherwise, total power consumption is going through the roof. For unstructured data, these types of problems can be dealt with much better with neuromorphic computers.”

The use of deep-learning techniques is moving beyond the biggest players, such as Google, Amazon, and the like. Just as various system integrators package the open-source modules of the Hadoop database technology into a more secure offering, several system integrators are offering workstations packaged with the appropriate deep-learning tools.

Deep learning has evolved to play a role in speech recognition used in Amazon’s Echo. Source: Amazon

Robert Stober, director of systems engineering at Bright Computing, bundles AI software and tools with hardware based on Nvidia or Intel processors. “Our mission statement is to deploy deep learning packages, infrastructure, and clusters, so there is no more digging around for weeks and weeks by your expensive data scientists,” Stober said.

Deep learning is driving the need for new types of processors as well as high-speed interconnects. Tim Miller, senior vice president at One Stop Systems, said that training the neural networks used in deep learning is an ideal task for GPUs because they can perform parallel calculations, sharply reducing the training time. However, GPUs often are large and require cooling, which most systems are not equipped to handle.
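
To make the GPU point concrete, here is a minimal training-loop sketch (assuming PyTorch; the model and data are invented). The same code runs on a CPU or, when one is available, on a GPU, where the large matrix multiplications in each step execute in parallel and cut the training time.

```python
# Hedged sketch: an identical training step runs on CPU or GPU; the GPU
# parallelizes the heavy matrix math. Model, data, and sizes are made up.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4096, 512, device=device)          # one large training batch
y = torch.randint(0, 10, (4096,), device=device)

for _ in range(100):                               # same loop, either device
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(device, float(loss))
```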

David Kanter, principal consultant at Real World Technologies, said “as I look at what’s driving the industry, it’s about convolutional neural networks, and using general-purpose hardware to do this is not the most efficient thing.”

However, research efforts focused on new materials or futuristic architectures may over-complicate the situation for data scientists outside the research arena. At the recent International Electron Devices Meeting (IEDM), several research managers discussed using spin-transfer torque MRAM (STT-MRAM) or resistive RAM (ReRAM) technology to create dense, power-efficient networks of artificial neurons.

While those efforts are worthwhile from a research standpoint, Kanter said “when proving a new technology, you want to minimize the situation, and if you change the software architecture of neural networks, that is asking a lot of programmers, to adopt a different programming method.”

While Nvidia, Intel, and others battle it out at the high end for the processors used in training the neural network, the inference engines which use the results of that training must be less expensive and consume far less power.

Kanter said “today, most inference processing is done on general-purpose CPUs. It does not require a GPU. Most people I know at Google do not use a GPU. Since the (inference processing) workload looks like the processing of DSP algorithms, it can be done with special-purpose cores from Tensilica (now part of Cadence) or ARC (now part of Synopsys). That is way better than any GPU.”
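
One reason the inference workload resembles DSP processing is that a trained layer can be reduced to integer multiply-accumulate operations. The sketch below is illustrative only (the weights, scales, and sizes are made up): it quantizes a layer's float weights to 8-bit integers and compares the integer result against the floating-point reference.

```python
# Illustrative only: int8 quantization of one trained layer, applied with
# DSP-style integer multiply-accumulates, then rescaled back to float.
import numpy as np

rng = np.random.default_rng(3)
w_fp32 = rng.normal(scale=0.1, size=(64, 128)).astype(np.float32)  # "trained" weights
x_fp32 = rng.normal(size=(128,)).astype(np.float32)                 # one activation vector

w_scale = np.abs(w_fp32).max() / 127.0
x_scale = np.abs(x_fp32).max() / 127.0
w_int8 = np.round(w_fp32 / w_scale).astype(np.int8)
x_int8 = np.round(x_fp32 / x_scale).astype(np.int8)

acc_int32 = w_int8.astype(np.int32) @ x_int8.astype(np.int32)       # integer MACs
y_approx = acc_int32 * (w_scale * x_scale)                           # rescale to float
y_ref = w_fp32 @ x_fp32
print("max abs error:", float(np.abs(y_approx - y_ref).max()))
```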

Rowen was asked if the end-node inference engine will blossom into large volumes. “I would emphatically say, yes, powerful inference engines will be widely deployed” in markets such as imaging, voice processing, language recognition, and modeling.

“There will be some opportunity for stand-alone inference engines, but most IEs will be part of a larger system. Inference doesn’t necessarily need hundreds of square millimeters of silicon. But it will be a major sub-system, widely deployed in a range of SoC platforms,” Rowen said.

Kanter noted that Nvidia has a powerful inference engine processor that has gained traction in early self-driving cars, and Google has developed an ASIC, the Tensor Processing Unit (TPU), to accelerate its TensorFlow deep-learning software.

Many other markets need very low-power IEs that can be used in security cameras, voice processors, drones, and similar products. Nvidia CEO Jen-Hsun Huang, in a blog post early this year, said that deep learning will spur demand for billions of devices deployed in drones, portable instruments, intelligent cameras, and autonomous vehicles.

“Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks,” Huang wrote. He envisions a future in which drones will autonomously find an item in a warehouse, for example, while portable medical instruments will use artificial intelligence to diagnose blood samples on-site.

In the long run, that “billions” vision may be correct, Kanter said, adding that the Nvidia CEO, an adept promoter as well as an astute company leader, may be wearing his salesman hat a bit.

“Ten years from now, inference processing will be widespread, and many SoCs will have an inference accelerator on board,” Kanter said.

IoT Security, Software Are Highlighted at ARM TechCon

Friday, November 13th, 2015


By Jeff Dorsch, Contributing Editor

Many people are aware of the Internet of Things concept. What they want to know now is how to secure the IoT and how to develop code for it.

Plenty of vendors on hand for the ARM TechCon conference and exposition in Santa Clara, Calif. this week were offering solutions on both counts. And there were multiple presentations in the three-day conference program devoted to both subjects.

Mentor Graphics, for instance, spoke about “Use Cases for ARM TrustZone: Benefits of HW-Enforced Partitioning and OS Separation.” MediaTek presented on “Secured Communication Between Devices and Clouds with LinkIt ONE and mbedTLS.” And so on.

ARM CEO Simon Segars said in his keynote address that security and trust together form one of the key principles of the Internet of Things (the others being connectivity and partnership across the ecosystem). Security and trust, he asserted, must be “at every level baked into the hardware, before you start layering software on top.”

James Bruce, ARM’s director of mobile solutions, addressed the security topic at length in an interview at the conference. ARM is taking a holistic approach to security through its TrustZone technology, he said, describing it as “a great place to put [network] keys.”

With microcontrollers, the chips often used in IoT devices, TrustZone makes sure sensitive data is “inaccessible to normal software,” Bruce said. At the same time, “you want to make devices easy to update,” he added.

ARM wants to enable its worldwide ecosystem of partners to stay ahead of cyberattacks and other online dangers, according to Bruce. “That’s why we’re doing the groundwork now,” he said.

The reaction of ARM partners to the introduction of TrustZone CryptoCells and the new ARMv8-M architecture for embedded devices has been “very positive,” Bruce said, adding, “Security can’t be an afterthought.”

Ron Ih, senior manager of marketing and business development in the Security Products Group at Atmel, described standard encryption as “only a piece” of security measures. “Authentication is a key part,” he said.

Atmel was touting its Certified-ID platform at ARM TechCon, featuring the ATECC508A cryptographic co-processor. Ih cited the “made for iPhone” chips that Apple requires of its partners developing products to complement the smartphone, ensuring ecosystem control. “You either have the chip or you don’t,” he said.
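
A minimal sketch of the authentication idea Ih describes, written in ordinary Python rather than against Atmel's actual ATECC508A interface: the host issues a random challenge, and only a device holding the shared secret (which, in a real design, never leaves the secure chip) can produce the matching response.

```python
# Hedged sketch of challenge-response authentication, not Atmel's protocol.
# In hardware the secret would live inside a crypto co-processor, not in memory.
import hmac, hashlib, os, secrets

DEVICE_SECRET = secrets.token_bytes(32)   # stands in for a key locked in the chip

def device_respond(challenge: bytes) -> bytes:
    """What the device-side chip would compute for a given challenge."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes, expected_key: bytes) -> bool:
    """Host checks the response against its copy of the expected key."""
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                # fresh random challenge per attempt
print(host_verify(challenge, device_respond(challenge), DEVICE_SECRET))  # True
```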

“People don’t care about the devices,” Ih concluded. “They care about who the devices are connected to.”

Simon Davidmann, president and chief executive officer of Imperas Software, is a veteran of the electronic design automation field, and he brings his experience to bear in the area of embedded software development.

Software, especially for the IoT, is “getting so complex, you can’t do what you used to do,” he said. “The software world has to change. Nobody should build software without simulation.”

At the same time, simulation is “necessary but not sufficient” in software development, he said. Code developers should be paying attention to abstractions, assertions, verification, and other aspects, according to Davidmann.

“Our customers are starting to adopt virtual platforms,” he added.

Jean Labrosse, president and CEO of Micrium, a leading provider of real-time operating system kernels and other software components, said “the industry is changing” with the onset of the Internet of Things. Multiple-core chips are entering the mix – not only for their low-power attributes, but for the safety and security they can provide, he noted.

Jeffrey Fortin, director of product management at Wind River and a specialist in IoT platforms, spoke on the last day of the conference on “Designing for the Internet of Things: The Technology Behind the Hype.”

Wind River, now an Intel subsidiary, has been around for more than three decades, developing “an embedded operating system that could be connected to other systems,” he said.

There are two business interests driving IoT demand, according to Fortin – business optimization and business transformation. He described the IoT as “using data to feed actionable analytics.”

The foundation of the IoT is hardware and software that provides safety and security, Fortin said.

Colt McAnlis of Google (Photo by Jeff Dorsch)

In the final keynote of ARM TechCon, Google developer advocate Colt McAnlis spoke on “The Hard Things About the Internet of Things.”

IoT technology, at present, is “not optimizing the user,” he said in a frequently funny and witty presentation. Networking and battery issues are bedeviling the IoT ecosystem, he asserted.

By draining the batteries of mobile devices with near-constant signals, such as setting location via GPS, companies are imposing “a taxation system for every single thing [IoT] does,” McAnlis said. “We’re talking about how often we’re sampling. People are already realizing this sucks.”

Beacons installed in a shopping mall can bombard smartphone users with advertising and coupons, he noted, while the property management gets data on specifics of foot traffic. “Imagine this at scale,” he added, with beacons installed on every block of San Francisco.

“We have a chance to not make this a reality,” McAnlis asserted. “We need IoT technology to make this not suck for users.”

At the end of his keynote, McAnlis asked the attendees to hold up their smartphones and vow, “I solemnly agree not to screw this up.”

Silicon Summit speakers look at the future of chip technology

Friday, April 17th, 2015

Gregg Bartlett

By Jeff Dorsch, Contributing Editor

Quick quiz: What topics do you think were discussed at length Wednesday at the Global Semiconductor Alliance’s Silicon Summit?

A. The Internet of Things.

B. Augmented reality and virtual reality.

C. Cute accessories for spring and summer looks.

The answers: A and B. C could be right if you count wearable electronics as “cute accessories.”

Wednesday’s forum at the Computer History Museum in Mountain View, Calif., not far from Google’s headquarters, was dominated by talk of IoT, AR, VR, and (to a lesser extent) wearable devices.

Gregg Bartlett, senior vice president of the Product Management Group at GlobalFoundries, kicked off the morning sessions with a talk titled “IoT: A Silicon Perspective.” He said, “A lot of the work left in IoT is in the edge world.”

Bartlett noted, “A lot of the infrastructure is in place,” yet the lack of IoT standards is inhibiting development, he asserted.

“IoT demands the continuation of Moore’s Law,” Bartlett said, touting fully depleted silicon-on-insulator (FD-SOI) technology as a cost-effective alternative to FinFET technology. FD-SOI “is the killer technology for IoT,” he added.

Next up was James Stansberry, senior vice president and general manager of IoT Products at Silicon Laboratories. Energy efficiency is crucial for IoT-related devices, which must be able to operate for 10 years with little or no external power, he said.

Bluetooth Smart, Thread, Wi-Fi, and ZigBee provide the connectivity in IoT networks, with a future role for Long-Term Evolution, according to Stansberry. He also played up the importance of integration in connected devices. “Nonvolatile memory has to go on the chip” for an IoT system-on-a-chip device, he said.

For 2015, Stansberry predicted a dramatic reduction in energy consumption for IoT devices, growing traction for low-power connectivity standards, and the emergence of more IoT SoCs.

Rahul Patel, Broadcom’s senior vice president and general manager of wireless connectivity, addressed health-care applications for the IoT. “Security is key,” he said. Reliability, interoperability, and compliance with government regulations are also required, Patel noted.

“My agenda is to scare everyone to death,” said Martin Scott, senior vice president and general manager of the Cryptography Research Division at Rambus. Cybersecurity with the IoT is causing much anxiety, he noted. “Silicon can come to the rescue again,” he said. “If your system relies on software, it’s hackable.”

To build trust in IoT devices and networks, the industry needs to turn to silicon-based security, according to Scott. “Silicon is the foundation of trusted services,” he concluded.

The second morning session was titled “The Future of Reality,” with presentations by Keith Witek, corporate vice president, Office of Corporate Strategy, Advanced Micro Devices; Mats Johansson, CEO of EON Reality; and Joerg Tewes, CEO of Avegant.

Augmented reality and virtual reality technology is “incredibly exciting,” Witek said. “I love this business.” He outlined four technical challenges for VR in the near future: improving performance, ensuring low image latency, delivering consistently high-quality media, and making system-level advances. “Wireless has to improve,” Witek said.

VR is “starting to become a volume market,” Johansson said. What matters now is proceeding “from phone to dome,” where immersive experiences meet knowledge transfer, he added. Superdata, a market research firm, estimates there will be 11 million VR users by next year, according to Johansson.

Avegant had a successful Kickstarter campaign last year to fund its Glyph VR headset, with product delivery expected in late 2015, Tewes said. The Glyph has been in development for three years, he said, and employs digital micromirror device technology, low-power light-emitting diodes, and latency of less than 12 microseconds to reduce or eliminate the nausea that some VR users have experienced.

The afternoon session was devoted to “MEMS and Sensors, Shaping the Future of the IoT.” Attendees heard from Todd Miller, Microsystems Lab Manager at GE Global Research; Behrooz Abdi, president and CEO of InvenSense; Steve Pancoast, Atmel’s vice president of software and applications; and David Allan, president and chief operating officer of Virtuix.

Miller outlined the challenges for the industrial Internet – cybersecurity, interoperability, performance, and scale. “Open standards need to continue,” he said.

General Electric and other companies, including Intel, are involved in the Industrial Internet Consortium, which is developing use cases and test beds in the area, according to Miller.

He noted that GE plans to begin shipping its microelectromechanical system devices to external customers in the fourth quarter of this year.

Abdi said, “What is the thing in the Internet of Things? The IoT is really about ambient computing.” IoT sensors must continuously answer these questions: Where are you, what are you doing, and how does it feel, he said.

The IoT will depend upon “always on” sensors, making it more accurate to call the technology “the Internet of Sensors,” Abdi asserted. He cautioned against semiconductor suppliers getting too giddy about business prospects for the IoT.

“You’re not going to sell one billion sensors for a buck [each],” Abdi said.

Pancoast of Atmel said sensors would help provide “contextual computing” in IoT networks. “Edge/sensing nodes are a major part of IoT,” he noted. Low-power microcontrollers and microprocessors are also part of the equation, along with “an ocean of software” and all IoT applications, Pancoast added. He finished by saying, “All software is vulnerable.”

Allan spoke about what he called “the second machine age,” with the first machine age dating to 1945, marking the advent of the stored-program computer and other advances. “The smartphone is the first machine of the second machine age,” he said.

IoT involves wireless sensor networks and distributed computing, he said. Google has pointed the way over the past decade, showing how less-powerful computers, implemented in large volumes, have become the critical development in computing, Allan noted. Because of this ubiquity of distributed computing capabilities, “Moore’s Law doesn’t matter as much,” he said.

With the IoT, “new machines will augment human desires,” Allan predicted, facilitating such concepts as immortality, omniscience, telepathy, and teleportation. He explained how technology has helped along the first three – we know what people are thinking through Facebook and Twitter – and the last is just a matter of time, according to Allan.

Blog review September 8, 2014

Monday, September 8th, 2014

Jeff Wilson of Mentor Graphics writes that IC design is facing the makings of a perfect storm in the growing complexity of fill, driven by shrinking feature sizes and spacing requirements between fill shapes, new manufacturing processes that use fill to meet uniformity requirements, and larger designs that require more fill.

Is 3D NAND a Disruptive Technology for Flash Storage? Absolutely! That’s the view of Dr. Er-Xuan Ping of Applied Materials. He said a panel at the 2014 Flash Memory Summit agreed that 3D NAND will be the most viable storage technology in the years to come, although opinions were mixed on when that disruption would be evident.

Phil Garrou takes a look at some of the “Fan Out” papers presented at the 2014 ECTC, focusing on STATSChipPAC (SCP) and the totally encapsulated WLP, Siliconware (SPIL) panel fan-out packaging (P-FO), Nanium’s eWLB dielectric selection, and an electronic contact lens for diabetics from Google/Novartis.

Ed Korczynski says he now knows how wafers feel when moving through a fab. Leti in Grenoble, France, does so much technology integration that in 2010 it opened a custom-developed people-mover, called the Liaison Blanc-Blanc (LBB), to connect its cleanrooms (“salles blanches” in French) so workers can remain in bunny suits while moving batches of wafers between buildings.

Handel Jones of IBS provides a study titled “How FD-SOI will Enable Innovation and Growth in Mobile Platform Sales,” which concludes, based on a number of key metrics, that the benefits of FD-SOI for mobile platforms are overwhelming through Q4 2017.

Gabe Moretti of Chip Design blogs that a mature industry looks to the future, not just to short-term income. EDA is proving to be such an industry, with its members actively fostering and supporting the education of future developers and users through educational licenses and other programs.