
Posts Tagged ‘metrology’

Linde Launches Asian R&D Center in Taiwan

Friday, September 23rd, 2016


By Ed Korczynski, Sr. Technical Editor

Timed in coordination with SEMICON Taiwan 2016 happening in early September, The Linde Group launched a new electronics R&D Center in Taichung, Taiwan. “We had a fabulous opening, with 35 to 40 customers and 20 people from the Taiwanese government such as ITRI,” said Carl Jackson (Fig. 1), Head of Electronics, Technology and Innovation at The Linde Group, in an exclusive interview with SemiMD. This new R&D center represents an investment of approximately EUR 5m to support local customers and development partners throughout the Asia Pacific region with its state-of-the-art analytical and product development laboratory.

FIG1: Carl Jackson, Head of Electronics, Technology and Innovation, The Linde Group. (Source: The Linde Group)

Linde has dozens of labs around the world supporting different industries, all of which work in coordination with three main centers termed ‘hubs’ located in New Jersey, Munich, and Shanghai. This new electronics lab in Taichung will support customers in China, Malaysia, Singapore, South Korea, and of course Taiwan. Working closely with local research partners and customers, the new center will also support development of local supply chains and local special gases manufacturing capabilities. “Customers do prefer a local supply-chain. There are examples in China where they’re even specifying a geographical limit around their fab, and if you’re outside that limit you can’t supply the materials,” said Jackson.

As a major step in collaborating with key regional partners in Taiwan, Linde is also entering into a collaboration agreement with the Industrial Technology Research Institute (ITRI) of Taiwan. Jia-Ruey Duann, the vice president of ITRI, stated, “ITRI values the cooperation on Electronic Specialty Gases (ESG) Production & Analysis with The Linde Group, and we look forward to working together to develop new products and services that benefit Taiwan’s electronics industry.”

Supporting the Asia Pacific region

The R&D Center is part of an ongoing expansion and investment in the Asia Pacific region for Linde Electronics. Last year Linde commissioned the world’s largest on-site fluorine plant to supply SK Hynix, in addition to bringing multiple new electronics projects on-stream in Asia. This year Linde announced that it has been awarded multiple gas and chemical supply wins for a number of world-leading photovoltaic cell manufacturers in Southeast Asia. “We’re talking about customer-specific applications in specific market segments,” explained Jackson. “They come to us with specific problems and the purpose of this lab is to find solutions.”

While this new lab supports manufacturing customers in LED, FPD, and PV industries, most of the demand for new materials comes from IC fabs. “Semiconductors always drive the materials focus, because it’s rare to find unique demands in the other markets,” said Jackson. “However, the scale can be much larger in the other segments, and that can drive improvements in gases used in semiconductor fabs. An example is ammonia which is used in huge volumes by LED fabs, and similarly when thin-film solar was happening there was huge demand for germane.”

Linde helps customers realize continuous technology progress by improving its ability to reduce chemical variability in existing products and by developing new materials that are critical to supporting customers’ technology roadmaps. “We feel as though we need to be better positioned to be able to support customers when they require it,” said Jackson. “Quite frankly, some materials don’t travel well. I’m not suggesting that suddenly we’ll start supplying everything locally, but this facility will help us start supplying customers throughout Asia.”

The Linde Electronics R&D Center (Fig. 2) will be used for improvement of product quality through advanced synthesis, purification, packaging and new applications development. These improvements are enabled by Linde’s advanced analytical processes and quality control systems that verify compositions and manage impurities.

FIG2: New electronics R&D center in Taichung, Taiwan will support customers throughout the Asia Pacific region. (Source: The Linde Group)

Analysis and Synthesis

“The way that we have it configured, it has two distinct features that work together, but the main focus is on analysis and that’s where the main investment has been made,” explained Jackson. “We think that we probably have the most advanced lab in Asia and perhaps in the world. At least for the materials portfolio that we have, we can do ‘finger-printing’ analysis, including all the trace-elements and all the metals, which is to say all the things that can potentially affect process.”

The second feature of this lab is the ability to create experimental quantities of completely new chemicals and blends to meet the needs of customers working in advanced device R&D and in pilot-line production. The lab features new purification and new synthesis technologies that work on small quantities of materials. “One capability we have is to do binary- or mixed-component blends,” elaborated Jackson. “In terms of purification, we have a bench-scale set-up with absorbance and distillation, but generally that would be done somewhere else. That’s the advantage of being connected to the global network of labs.”
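Blending by partial pressures is one common way binary gas mixtures are prepared; purely as a rough illustration (an ideal-gas sketch, not a description of Linde's actual blending process), the fill pressures for a target mole fraction can be estimated as follows.

```python
# Rough ideal-gas estimate of fill pressures for a two-component gas blend
# made by partial pressures. Real specialty-gas blending corrects for
# non-ideal behavior and verifies the final composition analytically.

def binary_fill_pressures(target_fraction_a, final_pressure_bar):
    """Return (partial pressure of component A, balance pressure of component B)."""
    if not 0.0 < target_fraction_a < 1.0:
        raise ValueError("target mole fraction must be between 0 and 1")
    p_a = target_fraction_a * final_pressure_bar   # fill component A to this pressure
    p_b = final_pressure_bar - p_a                 # then top up with balance gas B
    return p_a, p_b

# Hypothetical example: a 10% blend of component A filled to 100 bar.
print(binary_fill_pressures(0.10, 100.0))          # -> (10.0, 90.0)
```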

“There are unique requirements for every fab in every industry,” reminded Jackson. “For example, nitrous oxide is a key critical material for OLED manufacturing and you must maintain repeatability in every cylinder, in every truck, and down every pipe. How do you reduce the variability in the molecule regardless of the supply mode? Having the ability to do in-depth analysis certainly gives us a leg up.”

Since sustainability of the supply-chain is always essential, one trend in HVM fabs today is the consideration of recovery methods for critical gases such as argon, helium, and neon. “In some cases it works, particularly as the scale continues to grow. Being able to use the expertise from our Linde Engineering colleagues and scaling it to the right size for semiconductor manufacturing is really important for us.”

—E.K.

Managing Dis-Aggregated Data for SiP Yield Ramp

Monday, August 24th, 2015


By Ed Korczynski, Sr. Technical Editor

In general, there is an accelerating trend toward System-in-Package (SiP) chip designs including Package-On-Package (POP) and 3D/2.5D-stacks where complex mechanical forces—primarily driven by the many Coefficient of Thermal Expansion (CTE) mismatches within and between chips and packages—influence the electrical properties of ICs. In this era, the industry needs to be able to model and control the mechanical and thermal properties of the combined chip-package, and so we need ways to feed data back and forth between designers, chip fabs, and Out-Sourced Assembly and Test (OSAT) companies. With the accelerated yield ramps needed for High Volume Manufacturing (HVM) of consumer mobile products, a lot of data needs to be fed forward and back to minimize the risk to expensive Work In Progress (WIP) moving through the supply chain.

Calvin Cheung, ASE Group Vice President of Business Development & Engineering, discussed these trends in the “Scaling the Walls of Sub-14nm Manufacturing” keynote panel discussion during the recent SEMICON West 2015. “In the old days it used to take 12-18 months to ramp yield, but the product lifetime for mobile chips today can be only 9 months,” reminded Cheung. “In the old days we used to talk about ramping a few thousand chips, while today working with Qualcomm they want to ramp millions of chips quickly. From an OSAT point of view, we pride ourselves on being a virtual arm of the manufacturers and designers,” said Cheung, “but as technology gets more complex and ‘knowledge-base-centric,’ we see less release of information from foundries. We used to have larger teams in foundries.” Dick James of ChipWorks details the complexity of the SiP used in the Apple Watch in his recent blog post at SemiMD, and documents the details behind the assumption that ASE is the OSAT.

With single-chip System-on-Chip (SoC) designs the ‘final test’ can be at the wafer-level, but with SiP based on chips from multiple vendors the ‘final test’ now must happen at the package-level, and this changes the Design For Test (DFT) work flows. DRAM in a 3D stack (Figure 1) will have an interconnect test and memory Built-In Self-Test (BIST) applied from BIST logic resident on the logic die, which is connected to the memory stack using Through-Silicon Vias (TSV).

Fig.1: Schematic cross-sections of different 3D System-in-Package (SiP) design types. (Source: Mentor Graphics)

“The test of dice in a package can mostly be just re-used die-level tests based on hierarchical pattern re-targeting which is used in many very large designs today,” said Ron Press, technical marketing director of Silicon Test Solutions, Mentor Graphics, in discussion with SemiMD. “Additional interconnect tests between die would be added using boundary scans at die inputs and outputs, or an equivalent method. We put together 2.5D and 3D methodologies that are in some of the foundry reference flows. It still isn’t certain if specialized tests will be required to monitor for TSV partial failures.”
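As a rough illustration of how such a package-level flow can be sequenced (hypothetical function names and pass/fail stubs, not Mentor's tooling): re-use retargeted die-level patterns, then test the die-to-die interconnect, then launch the memory BIST from the logic die through the TSVs.

```python
# Hypothetical orchestration of a package-level (SiP) final test. Real flows
# run on ATE with vendor tooling; everything here is an illustrative stub.

def run_sip_final_test(retargeted_die_tests, interconnect_test, memory_bist_via_tsv):
    for name, test in retargeted_die_tests.items():
        if not test():
            return f"FAIL: retargeted die-level pattern '{name}'"
    if not interconnect_test():                 # boundary-scan style die-to-die check
        return "FAIL: die-to-die interconnect"
    if not memory_bist_via_tsv():               # BIST resident on the logic die, via TSVs
        return "FAIL: DRAM stack BIST"
    return "PASS"

print(run_sip_final_test(
    {"logic_die_scan": lambda: True, "modem_die_scan": lambda: True},
    interconnect_test=lambda: True,
    memory_bist_via_tsv=lambda: True,
))
```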

“Many fabless semiconductor companies today use solutions like scan test diagnosis to identify product-specific yield problems, and these solutions require a combination of test fail data and design data,” explained Geir Edie, Mentor Graphics’ product marketing manager of Silicon Test Solutions. “Getting data from one part of the fabless organization to another can often be more challenging than what one should expect. So, what’s often needed is a set of ‘best practices’ that covers the entire yield learning flow across organizations.”

“We do need a standard for structuring and transmitting test and operations meta-data in a timely fashion between companies in this relatively new dis-aggregated semiconductor world across Fabless, Foundry, OSAT, and OEM,” asserted John Carulli, GLOBALFOUNDRIES’ deputy director of Test Development & Diagnosis, in an exclusive discussion with SemiMD. “Presently the databases are still proprietary – either internal to the company or as part of third-party vendors’ applications.” Most of the test-related vendors and users are supporting development of the new Rich Interactive Test Database (RITdb) data format to replace the Standard Test Data Format (STDF) originally developed by Teradyne.

“The collaboration across the semiconductor ecosystem placed features in RITdb that understand the end-to-end data needs including security/provenance,” explained Carulli. Figure 2 shows that since RITdb is a structured data construct, any data from anywhere in the supply chain could be easily communicated, supported, and scaled regardless of OSAT or Fabless customer test program infrastructure. “If RITdb is truly adopted and some certification system can be placed around it to keep it from diverging, then it provides a standard core to transmit data with known meaning across our dis-aggregated semiconductor world. Another key part is the Test Cell Communication Standard Working Group; when integrated with RITdb, the improved automation and control path would greatly reduce manually communicated understanding of operational practices/issues across companies that impact yield and quality.”

Fig.2: Structure of the Rich Interactive Test Database (RITdb) industry standard, showing how data can move through the supply chain. (Source: Texas Instruments)
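The actual RITdb schema is defined by the standards effort; purely to illustrate the general idea of a structured, self-describing test record that carries keys, attributes, and provenance across company boundaries, a hypothetical record might look like the sketch below.

```python
# Hypothetical structured, self-describing test record. This is NOT the
# actual RITdb schema; it only illustrates a format that keeps its meaning
# as data moves between fabless, foundry, OSAT, and OEM companies.
import json
from datetime import datetime, timezone

record = {
    "entity": "die_test_result",
    "provenance": {                                  # who produced the data, and when
        "producer": "OSAT-A",                        # hypothetical company identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
    "keys": {"lot_id": "LOT123", "wafer_id": 7, "die_x": 12, "die_y": 34},
    "attributes": {"hard_bin": 1, "fmax_ghz": 2.41, "io_leakage_ua": 0.8},
}

print(json.dumps(record, indent=2))                  # serialized for transport
```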

Phil Nigh, GLOBALFOUNDRIES Senior Technical Staff, explained to SemiMD that for heterogeneous integration of different chip types the industry has on-chip temperature measurement circuits which can monitor temperature at a given time, but not necessarily identify issues caused by thermal/mechanical stresses. “During production testing, we should detect mechanical/thermal stress ‘failures’ using product testing methods such as IO leakage, chip leakage, and other chip performance measurements such as FMAX,” reminded Nigh.
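A minimal sketch of that kind of production screen, assuming per-die parametric measurements are available (the robust-statistics threshold below is an illustrative assumption, not GLOBALFOUNDRIES' method), is to flag dice whose parametrics sit far outside the lot population.

```python
# Crude screen for stress-related outliers: flag dice whose parametric value
# (IO leakage here, but FMAX works the same way) sits far outside the lot
# population by a robust z-score. The 3.5 threshold is an illustrative choice.
import statistics

def flag_outliers(measurements, threshold=3.5):
    values = list(measurements.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)   # median absolute deviation
    if mad == 0:
        return []
    return [die for die, v in measurements.items()
            if 0.6745 * abs(v - med) / mad > threshold]

io_leakage_ua = {"die_01": 0.80, "die_02": 0.90, "die_03": 6.50, "die_04": 0.85}
print(flag_outliers(io_leakage_ua))                          # -> ['die_03']
```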

Model but verify

Metrology tool supplier Nanometrics has a unique perspective on the data needs of 3D packages, since the company has delivered dozens of tools for TSV metrology to the world. The company’s UniFire 7900 Wafer-Scale Packaging (WSP) Metrology System uses white-light interferometry to measure critical dimensions (CD), overlay, and film thicknesses of TSV, micro-bumps, and Re-Distribution Layer (RDL) structures, as well as the co-planarity of Cu bumps/pillars. Robert Fiordalice, Nanometrics’ Vice President of the UniFire business group, mentioned to SemiMD in an exclusive interview that new TSV structures certainly bring about new yield loss mechanisms, even if electrical tests show results such as a ‘partial open.’ Fiordalice said that, “we’ve had a lot of pull to take our TSV metrology tool, and develop a TSV inspection tool to check every via on every wafer.” TSV inspection tools are now in beta-tests at customers.

As reported at 3Dincites, Mentor Graphics showed results at DAC2015 of the use of Calibre 3DSTACK by an OSAT to create a rule file for their Fan-Out Wafer-Level Package (FOWLP) process. This rule file can be used by any designer targeting this package technology at this assembly house, and checks the manufacturing constraints of the package RDL and the connectivity through the package from die-to-die and die-to-BGA. Based on package information including die order, x/y position, rotation and orientation, Calibre 3DSTACK performs checks on the interface geometries between chips connected using bumps, pillars, and TSVs. An assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of 3D SiP.
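Calibre 3DSTACK's rule files use Mentor's own syntax, but the basic interface check described above can be sketched generically: place each die's bump coordinates according to its x/y position and rotation, then verify that mating bumps land within an alignment tolerance. All names, coordinates, and tolerances below are hypothetical.

```python
# Generic sketch of a die-to-die interface geometry check: transform each
# die's bump coordinates by its placement (x/y offset, rotation), then verify
# that mating bump/pad pairs align within a tolerance. Not Calibre syntax.
import math

def place(bumps, dx, dy, rotation_deg):
    """Rotate bump (x, y) points about the die origin, then translate them."""
    r = math.radians(rotation_deg)
    cos_r, sin_r = math.cos(r), math.sin(r)
    return {name: (x * cos_r - y * sin_r + dx, x * sin_r + y * cos_r + dy)
            for name, (x, y) in bumps.items()}

def check_interface(top_bumps, bottom_pads, tol_um=2.0):
    """Return names of connections whose bump-to-pad misalignment exceeds tol_um."""
    violations = []
    for name, (xt, yt) in top_bumps.items():
        xb, yb = bottom_pads[name]
        if math.hypot(xt - xb, yt - yb) > tol_um:
            violations.append(name)
    return violations

# Hypothetical two-bump example: top die rotated 180 deg and offset onto the bottom die.
top = place({"A": (100.0, 0.0), "B": (-100.0, 0.0)}, dx=500.0, dy=500.0, rotation_deg=180.0)
bottom = {"A": (400.0, 500.0), "B": (600.0, 500.0)}
print(check_interface(top, bottom))   # -> [] (all connections aligned within tolerance)
```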

—E.K.

Solid State Watch: February 20-26, 2015

Monday, March 2nd, 2015

Research Alert: February 3, 2015

Tuesday, February 3rd, 2015

Fabrication of patterns with linewidths down to 1.5nm

Researchers at aBeam Technologies, Lawrence Berkeley National Laboratory and Argonne National Laboratory have developed a technology to fabricate test patterns with a minimum linewidth down to 1.5nm. The fabricated nanostructures are used to test metrological equipment. The designed patterns involve thousands of lines with precisely designed linewidths; these lines are combined in such a way that the distribution of linewidths appears to be random at any location. This pseudo-random test pattern allows nanometrological systems to be characterized over their entire dynamic range.

The test pattern contains alternating lines of silicon and silicon-tungsten, which provides good contrast in metrology systems. The sample is fairly large, approximately 6×6 microns, and contains thousands of lines, each with its designed width. Earlier, aBeam and LBNL reported the capability of fabricating 4nm lines and spaces using e-beam lithography, atomic layer deposition, and nanoimprint.
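A hedged sketch of how such a pattern could be laid out, assuming a fixed set of designed linewidths that are shuffled so the local distribution appears random while every line's exact design width remains known (the widths and seed below are illustrative, not aBeam's actual design):

```python
# Illustrative layout of a pseudo-random linewidth test pattern: a fixed set
# of designed linewidths is repeated and shuffled so the local distribution
# looks random, yet every line's exact design width is known for calibration.
import random

designed_widths_nm = [1.5, 2, 3, 4, 6, 8, 12, 16, 24, 32]   # spans the dynamic range
repeats_per_width = 100                                      # thousands of lines total

rng = random.Random(42)                                       # reproducible ordering
sequence = designed_widths_nm * repeats_per_width
rng.shuffle(sequence)

# Convert to line edge positions (each line followed by an equal space, for simplicity).
edges_nm, position = [], 0.0
for w in sequence:
    edges_nm.append((position, position + w))                 # (left edge, right edge)
    position += 2 * w

print(len(sequence), "lines over", round(position / 1000.0, 2), "microns")
```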

Dr. Sergey Babin, president of aBeam Technologies said, “The semiconductor industry is moving toward a half-pitch of 11nm and 7nm. Therefore, metrology equipment should be very accurate, at least one order of magnitude more accurate than that. The characterization of metrology systems requires test patterns at a scale one order smaller than the measured features. The fabrication was a challenge, especially for such a complex pattern as a pseudo-random design, but we succeeded.”

Dr. Valeriy Yashchuk, a researcher at the Advanced Light Source of LBNL, continued: “When you measure anything, you have to be sure that your metrological system produces accurate results; otherwise, nobody knows what kind of results you will get. Qualifying and tuning metrology systems at the nanoscale is not easy. We designed a test pattern that is capable of characterizing nano-metrology systems over their entire dynamic range, resulting in the modulation transfer function, the most comprehensive characteristic of any system.”

The test pattern can be used to characterize almost any nano-metrology system. Experiments were performed using a scanning electron microscope (SEM), an atomic force microscope (AFM), and soft x-ray microscopes. A part of an ideal test-sample and its SEM image is shown below. The image includes imperfections introduced by the microscope, which need to be characterized.

The power spectral density of the sample is flat, while the spectrum of the image shows a significant cut-off at high frequencies; this is used to characterize the microscope over its dynamic range and to show the degradation of the microscope’s sensitivity as the linewidth becomes smaller.
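A minimal sketch of that analysis, assuming a designed (ideal) line profile and the corresponding measured image profile on the same grid: compute both power spectral densities and take their ratio as a transfer-function-like estimate. The synthetic data and Gaussian blur below stand in for real SEM/AFM measurements.

```python
# Estimate a transfer-function-like response of a microscope from a test
# pattern whose design PSD is flat: divide the measured image PSD by the
# design PSD and look for the high-frequency cut-off. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n, pixel_nm = 4096, 0.5
design = rng.choice([0.0, 1.0], size=n)            # pseudo-random binary profile

# Pretend the instrument blurs the pattern with a Gaussian point-spread function.
freq = np.fft.rfftfreq(n, d=pixel_nm)              # spatial frequency, cycles per nm
sigma_nm = 3.0                                     # illustrative blur width
blur = np.exp(-2 * (np.pi * freq * sigma_nm) ** 2)
image = np.fft.irfft(np.fft.rfft(design) * blur, n=n)

psd_design = np.abs(np.fft.rfft(design)) ** 2
psd_image = np.abs(np.fft.rfft(image)) ** 2
mtf_like = np.sqrt(psd_image[1:] / (psd_design[1:] + 1e-12))   # skip the DC term

print("response at low / high spatial frequency:",
      round(float(mtf_like[1]), 3), "/", round(float(mtf_like[-1]), 6))
```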

New method allows for greater variation in band gap tunability

If you can’t find the ideal material, then design a new one.

Northwestern University’s James Rondinelli uses quantum mechanical calculations to predict and design the properties of new materials by working at the atom-level. His group’s latest achievement is the discovery of a novel way to control the electronic band gap in complex oxide materials without changing the material’s overall composition. The finding could potentially lead to better electro-optical devices, such as lasers, and new energy-generation and conversion materials, including more absorbent solar cells and the improved conversion of sunlight into chemical fuels through photoelectrocatalysis.

“There really aren’t any perfect materials to collect the sun’s light,” said Rondinelli, assistant professor of materials science and engineering in the McCormick School of Engineering. “So, as materials scientists, we’re trying to engineer one from the bottom up. We try to understand the structure of a material, the manner in which the atoms are arranged, and how that ‘genome’ supports a material’s properties and functionality.”

The electronic band gap is a fundamental material parameter required for controlling light harvesting, conversion, and transport technologies. Via band-gap engineering, scientists can change what portion of the solar spectrum can be absorbed by a solar cell, which requires changing the structure or chemistry of the material.

Current tuning methods in non-oxide semiconductors are only able to change the band gap by approximately one electronvolt, and they still require the material’s chemical composition to be altered. Rondinelli’s method can change the band gap by up to 200 percent without modifying the material’s chemistry. The naturally occurring layers contained in complex oxide materials inspired his team to investigate how to control the layers. They found that by controlling the interactions between neutral and electrically charged planes of atoms in the oxide, they could achieve much greater variation in electronic band gap tunability.

“You could actually cleave the crystal and, at the nanometer scale, see well-defined layers that comprise the structure,” he said. “The way in which you order the cations on these layers in the structure at the atomic level is what gives you a new control parameter that doesn’t exist normally in traditional semiconductor materials.”

By tuning the arrangement of the cations (positively charged ions) on these planes in proximity to each other, Rondinelli’s team demonstrated a band gap variation of more than two electronvolts. “We changed the band gap by a large amount without changing the material’s chemical formula,” he said. “The only difference is the way we sequenced the ‘genes’ of the material.”

Supported by DARPA and the US Department of Energy, the research is described in the paper “Massive band gap variation in layered oxides through cation ordering,” published in the January 30 issue of Nature Communications. Prasanna Balachandran of Los Alamos National Laboratory in New Mexico is coauthor of the paper.

Arranging oxide layers differently gives rise to different properties. Rondinelli said that having the ability to experimentally control layer-by-layer ordering today could allow researchers to design new materials with specific properties and purposes. The next step is to test his computational findings experimentally.

New pathway to valleytronics

A potential avenue to quantum computing currently generating quite the buzz in the high-tech industry is “valleytronics,” in which information is coded based on the wavelike motion of electrons moving through certain two-dimensional (2D) semiconductors. Now, a promising new pathway to valleytronic technology has been uncovered by researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab).

Feng Wang, a condensed matter physicist with Berkeley Lab’s Materials Sciences Division, led a study in which it was demonstrated that a well-established phenomenon known as the “optical Stark effect” can be used to selectively control photoexcited electron/hole pairs, referred to as excitons, in different energy valleys. In valleytronics, electrons move through the lattice of a 2D semiconductor as a wave with two energy valleys, each valley being characterized by a distinct momentum and quantum valley number. This quantum valley number can be used to encode information when the electrons are in a minimum energy valley. The technique is analogous to spintronics, in which information is encoded in a quantum spin number.

“This is the first demonstration of the important role the optical Stark effect can play in valleytronics,” Feng says. “Our technique, which is based on the use of circularly polarized femtosecond light pulses to selectively control the valley degree of freedom, opens up the possibility of ultrafast manipulation of valley excitons for quantum information applications.”

Wang, who also holds an appointment with the University of California (UC) Berkeley Physics Department, has been working with the 2D semiconductors known as MX2 materials, monolayers consisting of a single layer of transition metal atoms, such as molybdenum (Mo) or tungsten (W), sandwiched between two layers of chalcogen atoms, such as sulfur (S). This family of atomically thin 2D semiconductors features the same hexagonal “honeycombed” lattice as graphene. Unlike graphene, however, MX2 materials have natural energy band-gaps that facilitate their use in transistors and other electronic devices.

This past year, Wang and his group reported the first experimental observation of ultrafast charge transfer in photo-excited MX2 materials. The recorded charge transfer time of less than 50 femtoseconds established MX2 materials as competitors with graphene for future electronic devices. In this new study, Wang and his group generated ultrafast and ultrahigh pseudo-magnetic fields for controlling valley excitons in triangular monolayers of WSe2 using the optical Stark effect.

“The optical Stark effect describes the energy shift in a two-level system induced by a non-resonant laser field,” Wang says.

“Using ultrafast pump-probe spectroscopy, we were able to observe a pure and valley-selective optical Stark effect in WSe2 monolayers from the non-resonant pump that resulted in an energy splitting of more than 10 milli-electron volts between the K and K′ valley exciton transitions. As controlling valley excitons with a real magnetic field is difficult to achieve even with superconducting magnets, a light-induced pseudo-magnetic field is highly desirable.”
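For reference, in the standard textbook treatment of a two-level system driven by a field detuned below resonance, the optical Stark shift takes a simple approximate form (conventional symbols, not notation or values taken from the Berkeley Lab paper):

```latex
% Optical (AC) Stark shift of a two-level system in a non-resonant field,
% with Rabi frequency \Omega and detuning \Delta = \omega_0 - \omega_L,
% valid in the limit \Omega \ll \Delta.
\Delta E \;\approx\; \frac{\hbar\,\Omega^{2}}{4\,\Delta}
```

Because the circularly polarized pump couples to only one valley's optical selection rule, this shift is applied to one valley and not the other, which is how the valley-selective splitting described above arises.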

Like spintronics, valleytronics offers a tremendous advantage in data processing speeds over the electrical charge used in classical electronics. Quantum spin, however, is strongly linked to magnetic fields, which can introduce stability issues. This is not an issue for quantum waves.

“The valley-dependent optical Stark effect offers a convenient and ultrafast way of enabling the coherent rotation of resonantly excited valley polarizations with high fidelity,” Wang says. “Such coherent manipulation of valley polarization should open up fascinating opportunities for valleytronics.”

Solid State Watch: August 22-28, 2014

Friday, August 29th, 2014

Overlay Metrology Suite for Multiple Patterning

Tuesday, August 26th, 2014

By Ed Korczynski, Sr. Technical Editor

Today, KLA-Tencor Corporation (NASDAQ: KLAC) released two metrology tools and an upgraded data analysis system that can reduce overlay error by 25% when using multi-patterning in leading-edge IC fabs. By taking additional data and using feed-forward control loops, the integrated solution dynamically adjusts the exposures in lithographic steppers to improve both overlay and critical dimension (CD) results in high-volume manufacturing (HVM). The suite of tools has passed beta-site evaluations with fab customers.

“Feed-forward has been used at gate CD to control variations, mostly controlling the Z-dimension of deposition and etch. But this is using feed-forward to control the 2D aspect of overlay,” explained Ady Levy, KLA-Tencor fellow, in an exclusive interview with Solid State Technology and SemiMD. “With the absence of traditional lithography scaling, customers are developing 3D structures that are using other parts of the fab.”

Figure 1 shows an analysis of the origin of patterning errors for Litho-Etch-Litho-Etch (LELE) double-patterning, indicating that traditional lithography processes account for just ~40% of the errors. Most multi-patterning errors originate with the deposition and etching and chemical-mechanical planarization (CMP) of films, inducing wafer-shape variations and thickness non-uniformities.

Fig. 1: Origins of patterning errors for Litho-Etch-Litho-Etch (LELE) double-patterning.

The company’s WaferSight™ Patterned Wafer Geometry (PWG) measurement tool extends the WaferSight line, which measures bow, warp, and other surface non-uniformities on unpatterned wafers, to patterned wafers, with the added ability to measure both sides of the wafer to provide data on thickness variations. By incorporating an industry-unique vertical wafer hold to minimize gravitational distortion and a sampling density of 3.5 million data points per wafer, the new tool produces highly accurate wafer shape data. “By feeding forward this information we can then correct the exposure on the scanner and correct for the induced overlay error due to stress from a prior process step,” elaborated Levy.

Brunner et al. (Optical Microlithography XXVII, Proc. of SPIE, Vol. 9052, 90520U, 2014) from IBM recently showed the quantified benefits of using PWG feed-forward (PWG-FF) information in stepper exposures to correct for across-wafer stress variation. Stress Monitor Wafers showed overlay errors dominated by wafer distortion effects, with a six-times greater distribution of errors compared to distortion-free wafers. Table 1 compares standard linear alignment with High Order Wafer Alignment (HOWA) and with PWG-FF alignment; the latter provides the best results without requiring the slower processing of HOWA.
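A highly simplified sketch of the feed-forward idea, assuming the stress-induced in-plane distortion predicted from wafer-shape data is available at a set of mark locations: fit the linear correctables a scanner can apply (translation, magnification, rotation) and pass them forward, leaving only the residual as overlay error. The numbers and the linear model below are illustrative, not KLA-Tencor's or IBM's implementation.

```python
# Simplified feed-forward correction: given the in-plane distortion (dx, dy)
# predicted from wafer-shape data at mark locations, fit the linear scanner
# correctables and report the residual overlay error. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(-140e3, 140e3, size=(2, 60))        # mark positions in um (300 mm wafer)

# Hypothetical stress-induced distortion (nm): translation + magnification +
# rotation plus a small higher-order term the linear model cannot remove.
dx = 3.0 + 2e-5 * x - 1e-5 * y + 2e-15 * x * (x**2 + y**2)
dy = -1.5 + 2e-5 * y + 1e-5 * x + 2e-15 * y * (x**2 + y**2)

# Design matrix for the correctables [tx, ty, magnification, rotation].
A = np.column_stack([np.ones_like(x), np.zeros_like(x), x, -y])
B = np.column_stack([np.zeros_like(x), np.ones_like(x), y,  x])
design = np.vstack([A, B])
target = np.concatenate([dx, dy])
coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)

residual = target - design @ coeffs
print("mean |overlay error| before feed-forward:", round(float(np.abs(target).mean()), 2), "nm")
print("mean |residual| after feed-forward:      ", round(float(np.abs(residual).mean()), 2), "nm")
```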

Proprietary model-based metrology allows the LMS IPRO6 to accurately measure reticle registration for on-device pattern features, as well as standard registration marks for significantly higher sampling. With faster measurement time than its predecessor, the LMS IPRO6 supports measuring the increased number of reticles associated with innovative multi-patterning techniques. The LMS IPRO6 enables generation of pattern-dependent registration error data that improves feedback to the e-beam mask writer, and can be fed forward to the fab’s lithography module for feature-optimized scanner corrections that improve wafer-level patterning.

The K-T Analyzer 9.0 is the latest version of the company’s platform that enables advanced, run-time data analysis for a wide range of metrology system types. Though the company fields a wide portfolio of products, KLA-Tencor doesn’t provide all inspection and metrology tools needed to control a commercial HVM fab line, and so the company provides software loaders to allow data from other tools to be integrated. The data analysis platform upgrade includes in-line methods for calculating scanner corrections per exposure on an on-product, lot-by-lot basis that maintains high accuracy without requiring full wafer measurement data—a production-capable control technique that can reduce pattern overlay error. In addition, the platform includes new scanner fleet management, scanner data analysis, and scanner alignment optimization capabilities.

All of this allows commercial HVM fabs to push the limits of patterning resolution for complex next-generation logic ICs. “Within the lithography module, our Archer™ 500 overlay and SpectraShape™ 9000 CD advanced metrology systems identify and monitor patterning errors,” said Ahmad Khan, group vice president of KLA-Tencor’s Parametric Solutions Group. “Extending beyond the lithography cell, our new WaferSight PWG and LMS IPRO6 systems isolate additional process- or reticle-related sources of patterning errors. These fab-wide, comprehensive measurements, supported by K-T Analyzer 9.0’s flexible data analysis, expand the process window and enable improved production patterning control for our customers’ leading-edge devices.”

—E.K.

Automation of Sample Plan Creation For Process Model Calibration

Thursday, July 18th, 2013

Preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces, and environments, but also because of constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of structures, mainly due to accuracy variation across different geometries; for instance, pitch measurements are normally more accurate than corner-rounding measurements, so only certain geometrical shapes are typically considered when creating a sample plan. In addition, the time factor becomes crucial with each migration from one technology node to the next, given the increasing number of development and production nodes, and the process gets more complicated if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time.

In this context, an automated flow is proposed for sample plan creation, sketched after this paragraph. Once the illumination and film stack are defined, all errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. This approach eliminates manual selection and filtering, provides powerful tools for customizing the final plan, and greatly reduces the time needed to generate these plans.
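A hedged Python sketch of the reduction step described above, assuming a list of candidate sites with a type (1D/2D) and basic geometry (width, space): drop flagged bad sites, collapse geometrically similar sites, and enforce a predefined 1D/2D split. The field names, similarity buckets, and quotas are assumptions for illustration, not the white paper's implementation.

```python
# Illustrative reduction of a model-calibration sample plan: exclude bad
# sites, collapse geometrically similar sites, then enforce a predefined
# 1D/2D split and total sample count.

def reduce_sample_plan(sites, n_total=500, frac_1d=0.7, bucket_nm=5):
    clean = [s for s in sites if not s.get("bad", False)]

    # Collapse sites whose (type, width, space) fall in the same geometry bucket.
    seen, unique = set(), []
    for s in clean:
        key = (s["type"], s["width"] // bucket_nm, s["space"] // bucket_nm)
        if key not in seen:
            seen.add(key)
            unique.append(s)

    one_d = [s for s in unique if s["type"] == "1D"][: int(n_total * frac_1d)]
    two_d = [s for s in unique if s["type"] == "2D"][: n_total - len(one_d)]
    return one_d + two_d

candidates = [
    {"type": "1D", "width": 45, "space": 45},
    {"type": "1D", "width": 46, "space": 45},               # collapses into the site above
    {"type": "2D", "width": 60, "space": 90},
    {"type": "1D", "width": 90, "space": 90, "bad": True},  # excluded as a bad site
]
print(len(reduce_sample_plan(candidates)))                  # -> 2
```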

To view this white paper, click here.