Deep Learning Joins Process Control Arsenal

By David Lammers

At the 2017 Advanced Process Control (APC 2017) conference, several companies presented implementations of deep learning to find transistor defects, align lithography steps, and apply predictive maintenance.

The application of neural networks to semiconductor manufacturing was a much-discussed trend at the 2017 APC meeting in Austin, starting out with a keynote speech by Howard Witham, Texas operations manager for Qorvo Inc. Witham said artificial intelligence has brought human beings to “a point in history, for our industry and the world in general, that is more revolutionary than a small, evolutionary step.”

People in the semiconductor industry “need to take what’s out there and figure out how to apply it to your own problems, to figure out where does the machine win, and where does the brain still win?” Witham said.

At Seagate Technology, a small team of engineers stitched together largely pre-packaged and open-source software running on a conventional CPU to create a convolutional neural network (CNN)-based tool to find low-level device defects.

In an APC paper entitled “Automated Wafer Image Review using Deep Learning,” Sharath Kumar Dhamodaran, an engineer/data scientist based at Seagate’s Bloomington, Minn., facility, said wafers go through several conventional visual inspection steps to identify and classify defects coming from the core manufacturing process. The low-level defects can be identified by the human eye but are prone to misclassification due to the manual nature of the inspections.

Each node in a convolutional layer takes a linear combination of the inputs from nodes in the previous layer, and then applies a nonlinearity to generate an output and pass it to nodes in the next layer. Source: Seagate

“Some special types of low-level defects can occur anywhere in the device. While we do have visual inspections, mis-classifications are very common. By using deep learning, we can solve this issue, achieve higher levels of confidence, and lower mis-classification rates,” he said.

CNNs, Hadoop, and Apache HBase

The deep learning system worked well but required a fairly extensive training cycle, based on a continuously evolving set of training images. The images were replicated from an image server into an Apache HBase table on a Hadoop cluster. The HBase table was updated every time images were added to the image server.
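
Seagate’s paper does not describe the replication code itself, but a minimal sketch of writing inspection images into an HBase table from Python might look like the following, assuming the happybase client; the hostname, table, and column-family names are illustrative, not Seagate’s:

```python
import happybase  # Thrift-based Python client for Apache HBase

# Connect to the cluster's HBase Thrift gateway (names are illustrative).
connection = happybase.Connection('hbase-thrift-gateway')
table = connection.table('wafer_images')

def replicate_image(image_id, image_bytes, label=b'unlabeled'):
    """Copy one inspection image from the image server into HBase."""
    table.put(image_id.encode('utf-8'), {
        b'img:data': image_bytes,   # raw image bytes
        b'meta:label': label,       # defect class, once known
    })
```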

To improve the neural network training steps, the team artificially created zoomed-in copies of the same image to enlarge the size of the training set. This image augmentation, which came as part of a software package, was used so that the model did not see the same image twice, he said.

“Our goal was to demonstrate the power of our models, so we did no feature engineering and only minimal pre-processing,” Dhamodaran said.
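
The paper does not name the augmentation package, but Keras’s ImageDataGenerator provides exactly this kind of zoom-based augmentation, so a minimal sketch under that assumption (the parameter values and directory layout are illustrative, not Seagate’s settings):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate zoomed-in (and lightly shifted) copies of each training image
# on the fly, so the model rarely sees an identical input twice.
augmenter = ImageDataGenerator(
    zoom_range=0.2,           # randomly zoom in by up to 20 percent
    width_shift_range=0.05,   # small horizontal translations
    height_shift_range=0.05,  # small vertical translations
    rescale=1.0 / 255,        # minimal pre-processing: scale pixels to [0, 1]
)

# Stream augmented batches from a directory with one subfolder per class.
train_batches = augmenter.flow_from_directory(
    'training_images/',
    target_size=(128, 128),
    batch_size=32,
)
```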

A Convolutional Neural Network (CNN)

Neural networks are built from many stacked processing layers, which is where the term deep learning comes from. The CNN’s processing layers are sensitive to different features, such as edges, color, and other attributes. This heterogeneity “is exploited to construct sophisticated architectures in which the neurons and layers are connected, and plays a primary role in determining the network’s ability to produce meaningful results,” he said.
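
Seagate did not publish its architecture, but a small Keras model in this spirit shows the structure described: stacked convolutional layers, each applying a learned linear filter followed by a nonlinearity, feeding a classifier head. The layer widths, image size, and two-class output here are assumptions for illustration:

```python
from tensorflow.keras import layers, models

# Stacked convolutional layers, each a linear filter plus a ReLU
# nonlinearity, ending in a small dense classifier head.
model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(2, activation='softmax'),  # e.g. defect vs. no-defect
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```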

The model was trained initially with about 7,000 images over slightly less than six hours on a conventional CPU. “If training the model had been done on a high-performance GPU, it would have taken less than a minute for several thousand images,” Dhamodaran said.

The team used off-the-shelf software, writing its code in Python on Ubuntu and drawing on TensorFlow, Keras, and other data science packages.

After the deep learning system was put into use, the rate of false negatives on incoming images was very low. Dhamodaran said the defect classification process was much better than the manual system: 95 percent of defects were correctly classified, with the remaining five percent mis-classified. With the manual system, images were correctly classified only 60 percent of the time.

“None of the conventional machine learning models could do what deep learning could do. But deep learning has its own limitations. Since it is a neural network it is a black box. Process engineers in a manufacturing setting would like to know ‘How does this classification happen?’ That is quite challenging.”

The team created a dashboard so that when an unseen defect occurs the system can incorporate feedback from the operator, feedback which can be incorporated in the next training cycle, or used to create the training set for different processes.
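
The mechanics of that feedback loop were not detailed; one plausible sketch, with all names hypothetical, is simply to queue operator corrections and fold them into the training set at the next cycle:

```python
import numpy as np

# Operator corrections accumulate here between training cycles.
feedback_queue = []

def record_operator_feedback(image, corrected_label):
    """Called from the dashboard when an operator overrides a prediction."""
    feedback_queue.append((image, corrected_label))

def run_training_cycle(model, x_train, y_train):
    """Fold accumulated corrections into the training set, then retrain."""
    if feedback_queue:
        new_x, new_y = zip(*feedback_queue)
        x_train = np.concatenate([x_train, np.stack(new_x)])
        y_train = np.concatenate([y_train, np.stack(new_y)])
        feedback_queue.clear()
    model.fit(x_train, y_train, epochs=5, batch_size=32)
    return model, x_train, y_train
```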

The project involved fewer than six people, and took about six months to put all the pieces together. The team deployed the system on a workstation in the fab, achieving better-than-acceptable decision latency during production.

Dhamodaran said future implementations of deep learning can be developed in a shorter time, building on what the team learned in the first implementation. He declined to detail the number of features that the initial system dealt with.

Seagate engineer Tri Nguyen, a co-author, said future work involves deploying the deep learning system to more inspection steps. “This system doesn’t do anything but image processing, and the classification is good or bad. But even with blurry images, the system can achieve a high level of confidence. It frees up time and allows operators to do some root cause analysis,” Nguyen said.

Seagate engineers Tri Nguyen and Sharath Kumar Dhamodaran developed a deep learning tool for wafer inspection that sharply reduced mis-classifications.

Python, Keras, TensorFlow

Jim Redman, president of consultancy Ergo Tech (Santa Fe, N.M.), presented deep learning work done with a semiconductor manufacturer to automate lithography alignment. Redman was unabashedly positive about the potential of neural networks for chip manufacturing applications. The movement toward deep learning, he said, “really started” on 9 November 2015, when the TensorFlow software, developed within the Google Brain group, was released under an Apache 2.0 open source license.

Other tools have further eased the development of deep learning applications, Redman added, including Keras, a high-level neural network API written in Python that runs on top of TensorFlow and enables fast experimentation.

In just the last year or so, the application of neural networks in the chip industry has made “huge advances,” Redman said, arguing that deep learning is “a wave that is coming. It is a transformative technology that will have a transformative effect on the semiconductor industry.”

In image processing and analysis, what is difficult to do with conventional techniques often can be handled more easily by neural networks. “The beauty of neural networks is that you can take training sets and teach the model something by feeding in known data. You train the model with data points, and then you feed in unknown data.”

While Redman’s work involved lithography alignment, he said “there is no reason the same learning shouldn’t apply to etch tools or electroplaters. It is basically the same model.”

Less Code, Lower Costs

Complex fault detection and classification (FDC) modeling can involve Ph.D.s with domain expertise, while deep learning can involve models with “30-40 lines of Python code,” he said, noting that the “minimal number of lines of code translates to lower costs.”

Humans, including engineers, are not adapted to look for small details in hundreds or thousands of metrology images or SPC charts. “Humans don’t do that well. Engineers still see what they want to see. We should let computers do that. When it comes to wafer analysis and log files, it is getting too complex (for human analysis). The question now is: Can we leverage these advances in machine learning to solve our problems?”

After training a model to detect distortions for a particular stepper, based on just 35 lines of Python code, Redman said the model provided “an extremely good match between the predicted values and the actual values. We have a model that lines up exactly. It is so good it is almost obscene.”
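
Redman did not publish his model, but a small Keras regression network in this spirit fits comfortably within the line counts he cites. Everything below, including the feature count, layer widths, and the (x, y) offset outputs, is an assumption for illustration:

```python
from tensorflow.keras import layers, models

def build_distortion_model(n_features):
    """A few dense layers mapping tool readings to predicted offsets."""
    model = models.Sequential([
        layers.Dense(32, activation='relu', input_shape=(n_features,)),
        layers.Dense(32, activation='relu'),
        layers.Dense(2),  # predicted (x, y) alignment offset
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# Train on historical (readings, measured-offset) pairs, then predict:
# model = build_distortion_model(n_features=12)
# model.fit(x_train, y_train, epochs=100, batch_size=16)
# predicted = model.predict(x_new)
```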

Redman said similar models could be applied to make sure etchers or electroplating machines were performing to expectations. And he said models can be continuously trained, using incoming flows of data to improve the model itself, rather than thinking of training as distinct from the application of the system.

“Most people talk about the training phase, but in fact we can train continuously. We run data through a model, and we can feed that back into the model, using the new data to continuously train,” he said.
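
In Keras terms, this continuous-training pattern can be sketched with train_on_batch, scoring each new batch of production data and then updating the model with it; the function and variable names here are illustrative:

```python
def process_production_batch(model, x_new, y_measured):
    """Score a new batch of production data, then learn from it."""
    predictions = model.predict(x_new)        # use the model in production
    model.train_on_batch(x_new, y_measured)   # feed the new data back in
    return predictions
```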

Machine Learning for Predictive Maintenance

Benjamin Menz, of Bosch Rexroth (Lohr am Main, Germany), addressed the challenge of how to apply machine learning to predictive maintenance.

Menz said companies have developed model-based rules that monitor a machine’s vibration, temperature, power signal, and other indicators to answer the question: Will it break in the next couple of days?

“Machine learning can do this in a very automatic way. You don’t need tons of data to train the network, perhaps fifty measurements. A very nice example is a turning machine. The network learned very quickly that the tool is broken, even though the human cannot see it. The new approach is clearly able to see a drop in the health index, and stop production,” he said.
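
Bosch’s actual method is not detailed in the talk; one common approach consistent with the description is to train a small autoencoder on measurements from the healthy machine and let reconstruction error drive the health index. The sketch below is that assumption, with illustrative names and sizes:

```python
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(n_features):
    """A tiny autoencoder trained only on healthy-machine measurements."""
    model = models.Sequential([
        layers.Dense(8, activation='relu', input_shape=(n_features,)),
        layers.Dense(n_features),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

def health_index(model, x, baseline_error):
    """Near 1.0 for a healthy machine; drops as reconstruction error grows."""
    error = float(np.mean((model.predict(x) - x) ** 2))
    return baseline_error / (baseline_error + error)
```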


