
Process Control Deals with Big Data, Busy Engineers


By Dave Lammers, Contributing Editor

Turning data into insights that improve fab productivity is one of the semiconductor industry’s biggest opportunities, and one that experts say requires a delicate mix of automation and human expertise.

A year ago, after the 2015 Advanced Process Control (APC) conference in Austin, attendees said one of their challenges was that it takes too long to create the fault detection and classification (FDC) models that alert engineers when something is amiss in a process step.

“The industry listened,” said Brad van Eck, APC conference co-chairman. Participants at the 2016 APC in Phoenix heard progress reports from device makers as diverse as Intel, Qorvo, Seagate, and TSMC, as well as from key APC software vendors, including Applied Materials, BISTel, and others.

Steve Chadwick, principal engineer for manufacturing IT at Intel, described the challenge in a keynote address: IC manufacturers, having spent billions of dollars on semiconductor equipment, are seeking new ways to maximize those investments.


“We all want to increase our quality, make the product in the best time, get the most good die out, and all of that. Time to market can be a game changer. That is universal to the manufacturing space,” Chadwick said.

“Every time we have a new generation of processor, we double the data size. Roughly a gigabyte of information is collected on every wafer, and we sort thousands of wafers a day,” Chadwick said. The result is petabytes of data that need to be stored, analyzed, and turned into actionable “wisdom.”

Intel has invested in data centers located close to its factories, making sure they have the processing power to handle the roughly 5 billion sensor data points collected each day at a single Intel factory.
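
A rough check of the arithmetic, in Python, shows how quickly those figures compound. The 10,000-wafer day below is an illustrative stand-in for Chadwick’s “thousands of wafers,” not a published Intel number:

# Back-of-the-envelope volumes from Chadwick's figures. Assumption: 10,000
# wafers/day stands in for "thousands of wafers a day" (illustrative only).
wafers_per_day = 10_000
gb_per_wafer = 1                       # "roughly a gigabyte ... on every wafer"
sensor_points_per_day = 5_000_000_000  # ~5 billion per fab per day

tb_per_day = wafers_per_day * gb_per_wafer / 1_000
pb_per_year = tb_per_day * 365 / 1_000
print(f"{tb_per_day:.0f} TB/day, ~{pb_per_year:.2f} PB/year")   # 10 TB/day, ~3.65 PB/year
print(f"{sensor_points_per_day / 86_400:,.0f} sensor readings per second")  # ~57,870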

“We have to take all of this raw data that we have in a data store and apply some kind of business logic to it. We boil it down to ‘wisdom,’ telling someone something they didn’t know beforehand.”

In a sense, technology is catching up: Hadoop and other big-data frameworks are being adopted, and faster processors let servers analyze problems in 15 seconds or less, compared with several hours a few years ago.

Where all of this gets interesting is in figuring out how to relate to busy engineers who don’t want to be bothered with problems that don’t directly concern them. Chadwick detailed the notification problem at Intel fabs, particularly as engineers use smartphones and tablets to receive alarms. “Engineers are busy, and so you only tell them something they need to know. Sometimes engineers will say, ‘Hey, Steve, you just notified my phone of 500 things that I can’t do anything about. Can you cut it out?’”

Notification must be prioritized, and the best option in many cases is to avoid notifying a person at all, sending the alert to an expert system instead. When a person does need to know, the notification has to be tailored to the device the engineer is using. Intel is moving quickly to HTML5-based delivery, he added, due largely to its portability across devices.
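
The routing logic Chadwick described might be sketched as follows. The Alert shape, the expert-system queue, and the severity threshold here are hypothetical illustrations, not Intel’s actual system:

from dataclasses import dataclass

@dataclass
class Alert:                      # hypothetical shape, for illustration only
    tool_id: str
    severity: int                 # 1 = informational ... 5 = line-down
    actionable_by_human: bool
    device: str                   # "phone", "tablet", or "desktop"

def format_for(alert: Alert) -> str:
    # HTML5 renders the same payload on phone, tablet, or desktop
    # (the portability Chadwick cited); small screens just get less of it.
    detail = "summary" if alert.device in ("phone", "tablet") else "full detail"
    return f"<p>{alert.tool_id}: severity {alert.severity} ({detail})</p>"

def route_alert(alert: Alert, expert_system, send) -> None:
    """Prioritize: machines handle what machines can; engineers see the rest."""
    if not alert.actionable_by_human:
        expert_system.enqueue(alert)      # never buzzes a phone
    elif alert.severity >= 3:             # threshold keeps the 500-alarm flood away
        send(format_for(alert))

class _Queue:                             # stand-in expert system
    def enqueue(self, a): print("expert system handles", a.tool_id)

route_alert(Alert("etch_04", 2, False, "phone"), _Queue(), print)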

With more than half a million ad hoc jobs per week, Intel’s approach is to keep data and analysis close to the factory, processing whenever possible in the local geography. Instead of shipping data to a distant data center for analysis, the normal procedure is to ship the small analysis code to a very large data set.
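
The code-to-data pattern itself is simple to sketch. Here submit_job and the site registry are stand-ins for whatever job scheduler a fab actually runs, not Intel infrastructure:

# Hypothetical sketch of code-to-data: serialize a small analysis routine
# and run it at the site that already holds the traces.
import inspect

SITES = {"fab_az": "scheduler.fab-az.example.com"}   # illustrative registry

def mean_thickness(rows):
    """The 'small analysis code': a few hundred bytes on the wire."""
    vals = [r["thickness"] for r in rows]
    return sum(vals) / len(vals)

def submit_job(host: str, source: str, dataset: str):
    # Placeholder for a real remote-execution call (a Hadoop/Spark job, RPC, etc.)
    print(f"shipping {len(source)} bytes of code to {host} for {dataset}")

def run_near_data(site: str, fn, dataset: str):
    submit_job(SITES[site], inspect.getsource(fn), dataset)  # code, not terabytes

run_near_data("fab_az", mean_thickness, "etch_traces_2016_10_14")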

False positives decried

Fault detection and classification (FDC) models are difficult to create and often overly sensitive, resulting in false alarms. These widely used, manually created models can take two weeks or longer to set up. They capture subject-matter-expert (SME) knowledge and are easy to understand, but the tool limits are costly to set up and maintain, and they still produce a high rate of false positives and missed alarms.
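
Conventional FDC boils down to limit checks of this kind. A deliberately simplified sketch (real models span many sensors and trace features per recipe):

# Simplified static-limit FDC: the hand-set limits the text describes.
# Too tight and good wafers alarm (false positives); too loose and real
# faults slip through (missed alarms).
LIMITS = {                       # set by a subject-matter expert, per recipe
    "chamber_pressure_mtorr": (48.0, 52.0),
    "rf_power_w": (590.0, 610.0),
}

def check_wafer(summary: dict) -> list[str]:
    """Return the sensors whose summary value breaches its limits."""
    return [name for name, (lo, hi) in LIMITS.items()
            if not lo <= summary.get(name, lo) <= hi]

alarms = check_wafer({"chamber_pressure_mtorr": 53.1, "rf_power_w": 600.0})
print(alarms)   # ['chamber_pressure_mtorr']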

An Applied Materials presentation, by Parris Hawkins, James Moyne, Jimmy Iskandar, Brad Schulze, and Mike Armacost, detailed work that Applied is doing in cooperation with process control researchers at the University of Cincinnati. The goal is to develop next-generation FDC that leverages Big Data, predictive analytics, and expert engineers, combining automated model development with input from human experts.

Fully automated solutions are plagued with significant false positives/negatives, and are “generally not very useful,” said Hawkins. By incorporating metrology and equipment health data, a form of “supervised” model creation can result in more accurate process controls, he said.

The model creation effort first determines which sensors and trace features are relevant, and then optimizes the tool limits and other parameters. The goal is to find the optimum between limits so wide that they fail to alert when faults are present and limits so tight that they set off false alarms too often.
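
One plausible reading of that optimization, sketched with assumed data shapes and a made-up cost weighting (not the algorithm as presented): with metrology labels separating known-good from known-bad wafers, the limit width becomes a one-dimensional search balancing false alarms against costlier missed faults.

import statistics

def fit_limits(values_good, k: float):
    """Limits at mean +/- k standard deviations of known-good runs."""
    mu = statistics.mean(values_good)
    sd = statistics.stdev(values_good)
    return mu - k * sd, mu + k * sd

def score(k, values_good, values_bad, miss_cost=5.0):
    """Weighted count of false alarms plus (costlier) missed faults."""
    lo, hi = fit_limits(values_good, k)
    false_alarms = sum(not lo <= v <= hi for v in values_good)
    misses = sum(lo <= v <= hi for v in values_bad)
    return false_alarms + miss_cost * misses

def best_k(values_good, values_bad, grid=(1.5, 2.0, 2.5, 3.0, 3.5, 4.0)):
    return min(grid, key=lambda k: score(k, values_good, values_bad))

good = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2]   # passed metrology
bad = [53.0, 47.2]                             # confirmed faults
print(best_k(good, bad))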

Next-generation FDC would leverage Big Data and human expertise. (Source: Applied Materials presentation at APC 2016).

Full-trace FDC

BISTel has developed an approach called Dynamic Full Trace FDC. Tom Ho, president of BISTel USA, presented the work in conjunction with Qorvo engineers, where a beta version of the software is being used.


Ho said Dynamic Full Trace FDC starts with the notion that the key to manufacturing is repeatability: in a stable manufacturing environment, “anything that differs isn’t routine; it is an indication of a mis-process and should not be repeated. Taking that concept, why not compare a wafer to everything that is supposed to repeat? Based on that, in an individual wafer process, the neighboring wafer becomes the model.”

The full-trace FDC approach has a limited objective: to make an assessment whether the process is good or bad. It doesn’t recommend adjustments, as a run-to-run tool might.

The amount of data involved is small, because it is confined to that unique process recipe. And because the neighboring trace is the model, there is no need for the time-consuming model creation mentioned so often at APC 2016. Compute power can be limited to a personal computer for an individual tool.

Ho took the example of an etch process that might have five recipe steps, starting with pumping down the chamber to the end point where the plasma is turned off. Dynamic full-trace FDC assumes that most wafers will receive a good etch process, and it monitors the full trace to cover the entire process.
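
A toy version of the neighbor-as-model idea: the pointwise median comparison and the fixed tolerance below are assumptions for illustration; BISTel has not published its method in this detail.

import statistics

def neighbor_fault_check(trace, neighbor_traces, tol=3.0):
    """Flag sample indices where a wafer's trace strays from its neighbors.

    The 'model' is just the other wafers that ran the same recipe step:
    the median trace plus a robust spread estimated from the neighbors.
    """
    faults = []
    for i, v in enumerate(trace):
        ref = [t[i] for t in neighbor_traces]
        med = statistics.median(ref)
        mad = statistics.median(abs(r - med) for r in ref) or 1e-9
        if abs(v - med) > tol * mad:
            faults.append(i)
    return faults

# Neighboring wafers from the same etch recipe step serve as the model:
neighbors = [[50.0, 50.1, 49.9, 50.2], [50.1, 50.0, 50.0, 50.1],
             [49.9, 50.2, 50.1, 50.0]]
print(neighbor_fault_check([50.0, 50.1, 55.0, 50.1], neighbors))  # [2]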

“There is no need for a model, because the model is your neighboring trace,” he said. “It definitely saves money in multiple ways. With the rollout of traditional FDC, each tool type can take a few weeks to set up the model and make sure it is running correctly. For multiple tool types that can take a few months. And model maintenance is another big job,” he said.

For the most part, the dynamic full-trace software runs on top of the BISTel FDC platform, though it could be used with another FDC vendor’s system “if the customer has access to the raw trace data,” he said.


