Posts Tagged ‘semiconductor manufacturing’
NY’s Marcy Nanocenter, the largest remaining shovel-ready, greenfield site in New York State’s Tech Valley near Utica, is another step closer to the goal of attracting major semiconductor manufacturing companies. The computer chip packaging consortium will work inside the Quad C Complex now under construction on the SUNYIT campus, which is due to open in late 2014.
SEMI believes that the semiconductor materials market will trend with the device market, with an increase of one percent this year and a seven percent increase in 2014, bringing the materials market to nearly $50 billion in 2014.
At the upcoming International Electron Devices Meeting (IEDM), to be held December 9-11 in Washington, D.C., IBM researchers will report on a CMOS-compatible 200 mm wafer-scale sub-20nm nanochannel fabrication method that enables stretching, translocation and real-time fluorescence microscopy imaging of single DNA molecules.
Dow Electronic Materials announced availability of its SOLDERON BP TS 6000 Tin-Silver Plating Chemistry for use in lead-free solder bump plating applications.
Semiconductor capital spending has increased significantly among pure-play foundries as more IDMs shift to a fabless/fab-lite business model and as new foundry participants intensify competition among the old guard.
By Chris Henderson
We have an interesting dynamic in the world of semiconductor training that has been in play since the financial crisis in 2008-2009. In order to pull through the especially dire conditions, most companies in the space dramatically reduced their expenses by implementing huge reductions in headcount, travel and training. Now that the industry is bouncing back, one would think that hiring, and therefore the need to train new individuals, would also return. That has not been the case. While it’s true that the industry has resumed hiring, the recruiting strategy and composition of those hired has been different. First, the industry discovered that it could simply avoid replacing many of these positions and still maintain output. The remaining people were just told to do more. They were compelled to oblige, since the alternative was to be out of work. Second, the industry was able to tap experienced individuals who were previously let go from other companies. These people did not necessarily need much training to be productive in new positions. Third, companies compensated by increasing automation and outsourcing. The result is that the amount of training has generally been greatly reduced post-recession. In some other industries there has been some bounce-back to previous levels, but not in ours. For example, attendance at semiconductor-related short courses (1-5 days at a hotel or other training facility) has fallen off dramatically since 2008. Has the need for training gone down, or is there an unmet need that is quietly reducing quality and productivity?
One partial explanation is that there is a shift in the responsibility for training. For example, automation in the factory has pushed more training out of the semiconductor manufacturer and into the hands of the equipment suppliers. As our tools grow more complex, we require more extensive training to operate and maintain them. Moving the fixed cost for in-house training to a supplier’s expenses just sweetens the deal for the chipmaker.
A much more positive shift is toward what we call “performance support.” Performance support is learning at the “point of need.” Rather than attending a conference or a short course on a topic, then waiting for months to use the information, one accesses a system with the needed information in hand when the problem occurs. Many engineers and scientists naturally work this way, jumping onto the Internet to look for papers or discussion boards that might address their pressing needs. The problem with this approach is that it is poorly structured and yields inconsistent results. “Googling” can yield some relevant hits, but this information is often of unknown, or even suspect, quality. Further, the search results tend to be a “mile wide and an inch deep.” Some sites like IEEE Xplore provide information that is highly specific and detailed. It is, in essence, a “mile deep and an inch wide.” While highly experienced engineers can navigate through this mass of data and potentially find the answers they need, the sheer volume can easily stymie others. They need knowledge that is more structured, focused and at the right level to address their immediate questions. They need a system that is able to adapt to their changing needs while “on the line,” using the tool.
Another important shift we see coming is the rise of the simulator for training in the semiconductor industry. Our tools are now incredibly expensive, so much so that a state-of-the-art immersion lithography system now costs more than a passenger airliner or jet fighter. We use simulators for jet aircraft, and we will likely need simulators for our manufacturing tools so that users can learn in an environment without the fear or danger of damaging the tool and/or expensive wafer lots.
In conclusion, while it may seem like training is on the decline, there are compelling reasons why we need to continue learning, and even step up our efforts in this area. We believe there are new approaches that can lower training costs, reduce risk, and provide engineers with the knowledge they need to be successful on the job. •
CHRIS HENDERSON is President of Semitracks, Inc.
By Pete Singer, Editor-in-Chief of Solid State Technology
It’s apparent that the world’s appetite for electronics has never been greater. That has increasingly taken the form of mobile electronics, including smartphones, tablets and the new “phablets.” People want to watch movies and live sports on their phones. They want their mobile devices to be “situationally aware” and even capable of monitoring their health through sensors. That drives higher bandwidth (5G is on the drawing board), faster data rates and a demand for reduced power consumption to conserve battery life. At the same time, “big data” and the internet of things (IoT) are here, which drives the demand for server networks and high performance semiconductors, as well as integrated sensors and inventive gadgets such as flexible displays and human biosensor networks.
All of this is pushing the semiconductor manufacturing industry and related industries (MEMS, displays, packaging and integration, batteries, etc.) in new directions. The tradeoffs that chipmakers must manage between power, performance, area and cost/complexity (PPAC) are now driven not by PCs, but by mobile devices.
In a keynote address at SEMICON West 2013, Ajit Manocha, CEO of GlobalFoundries, expanded on his Foundry 2.0 concept, talking about how the requirements of mobile devices were, in fact, changing the entire semiconductor industry. He noted that the mobile business is forecast to be double the size of the PC market in 2016. The mobile business drives many new requirements, said Manocha, including power, performance and features, higher data rates, high resolution multicore processors and thinner form factors.
Manocha presented the audience with what he sees as today’s Big Five Challenges: cost, device architectures, lithography and EUV, packaging and the 450mm wafer transition. I don’t recall when cost wasn’t an issue, but an audience poll revealed that most people believe economic challenges will be the main factor limiting industry growth, not technical challenges. I agree, but I’m also thinking new applications will emerge particularly in the health field that could push the industry in yet another new direction.
By Pete Singer
The switch to 450mm will likely be the largest, most expensive retooling the semiconductor industry has ever experienced. 450mm fabs, which will give an unbeatable competitive advantage to the largest semiconductor manufacturers, are likely to cost $10 billion and come on-line in 2017, with production ramp in 2018.
Unprecedented technical challenges still need to be overcome, but work is well underway at an R&D center in upstate New York, at the Global 450mm Consortium, G450C. Paul Farrar Jr., the G450C General Manager, recently spoke on the current status of activities, key milestones and schedules during a webcast produced by Solid State Technology.
“At this point, we have contracts with 12 major suppliers, and we have tools that are being delivered to the consortium starting in April and continuing through 2015,” Farrar said.
The G450C team now has over 60 engineers and assignees from the member companies. The goal is to have more than 150 engineers by 2014, with approximately 60 supplier engineers on site. “2013 and early 2014 will be about getting tools installed and up and running. Then the integration and unit process scientists will continue from there,” Farrar said.
Farrar said G450C has commitments for 112 process levels. For 45 processes, two suppliers are developing products (which equates to 90 process levels). A few have three suppliers, and about 10 process steps have one supplier. Farrar said that he sees 300mm and 450mm development continuing simultaneously. “We certainly know that for the next six or seven years, the industry will be developing and bringing capability to both 300mm and 450mm. A key goal here is to make sure that we do not slow down the scaling required for Moore’s Law to go from say 20nm to 15 to 12 to 10, etc. versus the cost reduction you get from going to a larger wafer size. We need to do both of these things simultaneously as an industry,” he said. “A rough target is to get to 10nm, and then in 2016 we want to be ready for IC makers to make their decisions on when they will ramp to 450mm.”
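The supplier-commitment figures above can be sanity-checked with a quick bit of arithmetic. Note that the exact number of processes with three suppliers is inferred from the totals here, not stated by Farrar:

```python
# Sanity check of the G450C supplier commitments quoted above.
# Known: 112 total process levels; 45 processes with two suppliers;
# about 10 processes with one supplier. The three-supplier count (4)
# is inferred from the remainder, not stated in the source.
two_supplier_levels = 45 * 2                       # 90 levels
one_supplier_levels = 10 * 1                       # 10 levels
three_supplier_levels = 112 - 90 - 10              # 12 levels -> 4 processes
total = two_supplier_levels + one_supplier_levels + three_supplier_levels
print(total)  # 112
```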
By Pete Singer, Editor-in-Chief of Solid State Technology
In what might be a case of the cobbler’s children finally getting new shoes, new algorithms and control technology (powered by advanced semiconductors, of course) are enabling “intelligent” real-time flow error detection in mass flow controllers (MFCs).
The accuracy and repeatability of MFCs, which control the amount of process gas flowing into etch and deposition chambers, for example, can have a very direct impact on yield: “A simple 1% increase in yield on an etch system can equate to up to $60,000 a day savings,” notes Shaun Pewsey, Director of Microelectronics Strategic Accounts at Brooks Instrument. “Process gas stability has been identified by virtually every IDM as critical to meeting yield enhancement goals and initiatives. MFC accuracy is critical in maintaining the level of control required,” he said (the remarks were made during a recent webcast produced by Solid State Technology).
Pewsey said the challenges are only getting more severe as the industry moves to ever more challenging devices, larger die sizes, and greater die complexity. Compounding the problem is the push for a higher mix of products in the fabs, particularly foundries. “We’re seeing tools that are being run with multiple recipe types, in some cases pushing the tool beyond its original design requirements,” Pewsey said.
The end result of this is a stronger focus on basic MFC performance attributes. Today, 1% accuracy is required for challenging applications and Pewsey believes we will soon see a requirement for 0.5% accuracy. Tighter flow repeatability is also required for chamber matching.
It’s well known that the accuracy of MFCs can drift over time, in part due to the build-up of particles from process gas. The common practice for checking the accuracy of MFCs is to take the gas panel off-line and perform a flow check. This can easily take half a day, says Pewsey. And, of course, until this check is done, the drifting MFCs could have impacted hundreds of wafers.
To address this common problem, engineers at Brooks Instrument have developed the smarter MFC. The latest version, the GF135, uses Brooks’ real-time rate-of-decay flow error detection technology to continually test for changes in the device’s performance. Data can be used to improve accuracy at critical low-flow set points, set up alarm limits for critical performance parameters and monitor trends for predictive maintenance.
How does the new smart MFC work? Brooks figured out a way to momentarily close the valve and run a diagnostic test while process gas is still flowing into the chamber. “As the valve closes, we continue to deliver the required amount of gas to the process chamber as the internal pressure starts to decrease. As that happens, our control valve opens up, continuing to deliver the exact amount of flow required for the process,” Pewsey explained.
After completing the measurement, the valve is reopened and the pressure transient compensation algorithm compensates for the initial pressure spike. A proprietary algorithm computes the flow based on the rate of pressure change and time and compares this against the baseline.
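The general rate-of-decay principle can be sketched numerically. This is a minimal illustration based on the ideal gas law, not Brooks’ proprietary algorithm; the volume, temperature and pressure values are invented for the example:

```python
# Generic rate-of-decay flow inference: while the upstream valve is
# closed, gas is delivered from a known internal volume, so the flow
# can be computed from the rate of pressure drop. Illustrative only;
# not Brooks' proprietary algorithm.

P_STD = 101325.0   # Pa, standard pressure
T_STD = 273.15     # K, standard temperature

def rate_of_decay_flow_sccm(pressures_pa, times_s, volume_cc, temp_k):
    """Infer delivered flow (sccm) from pressure decay in a known volume.

    Ideal gas: n = P*V/(R*T), so the delivered volumetric flow at
    standard conditions is Q_std = -(V / P_std) * (T_std / T) * dP/dt.
    """
    dp_dt = (pressures_pa[-1] - pressures_pa[0]) / (times_s[-1] - times_s[0])
    q_std_cc_per_s = -(volume_cc / P_STD) * (T_STD / temp_k) * dp_dt
    return q_std_cc_per_s * 60.0  # convert std cc/s to sccm

def flow_error_pct(measured_sccm, setpoint_sccm):
    """Deviation from setpoint, for comparison against a baseline/alarm limit."""
    return 100.0 * (measured_sccm - setpoint_sccm) / setpoint_sccm
```

For instance, a 10 cc internal volume at 273.15 K decaying by about 16.9 kPa over one second corresponds to roughly 100 sccm of delivered flow, which the device would compare against its baseline to flag drift.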
In short, the new technology identifies flow errors as they develop, before they can affect production, preventing wasted time and wasted wafers while increasing uptime and yield.
By Pete Singer
Nobody can predict the future, of course, but 2013 is shaping up to be a good year for the semiconductor industry and its suppliers. According to SEMI, total fab spending for equipment needed to ramp fabs, upgrade technology nodes, and expand or change wafer size could increase 16.7 percent in 2013 to reach a new record high of $42.7 billion. The estimate includes new equipment, used equipment, or in-house equipment but excludes test, assembly and packaging equipment (which, if included, would bring the number up to about $50 billion). The market for semiconductor manufacturing materials, which totaled $48.6 billion in 2012, is expected to grow 4% to more than $50 billion in 2013.
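The arithmetic behind those forecasts holds up to a quick back-of-the-envelope check (the implied 2012 equipment base is derived here, not a figure stated by SEMI):

```python
# Back-of-the-envelope check of the SEMI forecast figures quoted above.
materials_base = 48.6                    # $B, materials market base year
materials_forecast = materials_base * 1.04  # 4% growth forecast
print(round(materials_forecast, 1))      # 50.5 -> "more than $50 billion"

fab_2013 = 42.7                          # $B, record fab equipment forecast
implied_fab_base = fab_2013 / 1.167      # derived prior-year base (not from SEMI)
print(round(implied_fab_base, 1))        # 36.6
```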
There’s been some hand-wringing in 2012 about continued consolidation and the number of companies that will be moving to 450mm: most pundits guess that only 5-7 companies will be able to make the move. However, that’s a limited view of the industry, since there are hundreds of facilities around the world cranking out chips, LEDs, optoelectronics, power devices, MEMS and other components. The latest edition of the SEMI World Fab Forecast lists over 1,150 facilities (including 300 opto/LED facilities), with 76 facilities starting production in 2012 and in the near future.
There’s sure to be much talk in 2013 about technology requirements at the leading edge, including the 450mm transition, progress in EUV, 3D integration and FinFET optimization. Sustainability will be key, with an emphasis on reducing power consumption, which means lower leakage currents and reduced Vdd.
The demand for semiconductors has never been higher, particularly as the middle class rises to prominence in places such as Brazil, Russia, India and China. First on the wish list, it seems, after shelter, food and clothing, is a smartphone.
After a trip to imec in Leuven, Belgium, I’m particularly bullish on opportunities in healthcare, which range from body area sensor networks to amazingly advanced labs-on-a-chip that can screen 20 million blood cells per second to find a single tumor cell in 5 billion blood cells. It is these kinds of applications that could lead to a new revolution in how electronics are designed and manufactured.
The increasing demand for wireless data bandwidth and the emergence of LTE and LTE Advanced standards pushes radio-frequency (RF) IC designers to develop devices with higher levels of integrated RF functions, meeting more and more stringent specification levels. The substrates on which those devices are manufactured play a major role in achieving that level of performance.
Everybody’s talking about it, but just what is DFM? According to various EDA company websites, design for manufacturing can be: generation of yield optimized cells; layout compaction; wafer mapping optimization; planarity fill; or, statistical timing among other definitions. Obviously, there is very little consensus. For me, DFM is what makes my job hard: Characterizing it, and developing tools for it, is the most important item on my agenda.
In nanometer designs, the number of single vias, and the number of via transitions with minimal overlap, can contribute significantly to yield loss. Yet doubling every via leads to other yield-related problems and has a huge impact on design size. While there is still concern over how many vias can be fixed without rerouting and without creating DRC violations, the Calibre via doubling tool can identify via transitions and recommend areas for second via insertion without increasing area.
Certain measurement methodologies can be inaccurate even if they’re precise, and there are known errors associated with certain system parameters.
The etch loading effect is the dominant factor that impacts final CD control at advanced nodes with shrinking critical dimension.
A look at ways to simplify the optical and resist model calibration and to speed up the entire process.
Fabricating interconnects is one of the most process-intensive and cost-sensitive parts of manufacturing.
Testing interposer-based versions of stacked die and future versions using through-silicon vias.