Part of the Solid State Technology and The Confab Network


Posts Tagged ‘TSVs’

TSV Market Demand Now for Performance not Size

Wednesday, October 1st, 2014


By Ed Korczynski, Sr. Technical Editor, SST/SemiMD

Through-Silicon Vias (TSV) have finally reached mainstream commercial use for 3D ICs, though still for “high-end” high-performance applications. Although TSV allows for extreme miniaturization, demand for it now has little to do with package size, as evidenced by recent Samsung and TSMC product announcements for “enterprise servers” and “routers and other networking equipment.”

TSVs connect opposite sides of a silicon substrate to allow stacking of multiple Integrated Circuit (IC) chips in a single functional package, and the industry has used them in Micro-Electro-Mechanical Systems (MEMS) and Backside Image Sensor (BSI) manufacturing for many years now. The first announcement of a commercial FPGA product using TSV in a so-called “2.5D” interposer package also came four years ago.

(Source: Yole Développement)

However, the figure above shows that CIS and MEMS and 2.5D-FPGAs can all be categorized as “niche” applications with limited growth potential. Specialty memory and logic (and eventually photonics) applications have long been seen as the major drivers of future TSV demand.

On September 25 of this year, TSMC announced it has collaborated with HiSilicon Technologies Co., Ltd. to create an ARM-based networking processor that integrates a 16nm-node logic chip with a 28nm-node I/O chip using silicon interposer technology. This is the same 2.5D TSMC-branded Chip-on-Wafer-on-Substrate (CoWoS) technology used in the Xilinx FPGA product. “This networking processor’s performance increases threefold compared with its previous generation,” said HiSilicon President Teresa He. Package size reduction has nothing to do with the value of the products now demanding TSV.

Samsung announced last August that it has started mass producing the industry’s first 64GB DDR4 registered dual in-line memory modules (RDIMMs) using TSV. Targeting enterprise servers and “cloud” data centers, the new RDIMMs include 36 DDR4 packages, each of which consists of four 4-gigabit (Gb) DDR4 DRAM dice. The low-power chips are manufactured using Samsung’s 20nm-node process. The company claims that the new 64GB TSV module performs twice as fast as a 64GB module that uses wire-bonding, while consuming about half the power. Samsung has invested in TSV R&D since 2010, beginning with 40nm-node 8GB DRAM RDIMMs, followed in 2011 by 30nm-node 32GB DRAM RDIMMs.

The Hybrid Memory Cube (HMC) and other heterogeneous 3D-IC stacks based on TSV should be seen as long-term strategic technologies. HMC R&D led by Micron continues to serve near-term customers demanding ultra-high performance such as supercomputers and performance networking, as detailed in an SST article from last year. Micron’s Scott Graham, General Manager, Hybrid Memory Cube, commented then, “As we move forward in time, we’ll see technology evolve as costs come down for TSVs and manufacturing technology, it will enter into future space where traditional DDR type of memory has resided. Beyond DDR4, we can certainly see this technology being for mainstream memory.”

Elusive Demand for Mobile Applications

Fourteen years ago, this editor—while working for an early innovator in TSV technology—co-authored a “3D stacked wafer-level packaging” feature article in SST.

The lead paragraph of that article summarizes the advantages of using TSV to reduce package sizes:

As electronics applications shrink in size, integrated circuit (IC) packaged devices must be reduced both in footprint and in thickness. The main motivation for the development of smaller packages is the demand for portable communications devices, such as memory cards, smart cards, cellular telephones, and portable computing and gaming devices. End-users of such electronic devices are interested in greater functionality per unit volume, not relatively simplistic metrics, such as transistors per chip or circuit speed.

While that remains true, established and inherently lower-cost packaging technologies have since been extended to allow stacking of thinned silicon chips: wire-bonding can connect dozens of layers to a substrate, flip-chip combined with wire-bonding and substrate vias can easily connect four layers, and both fan-in and fan-out packages can provide ample electrical Input/Output (I/O) connections. At SEMICON West this year, in the annual Yield Forum breakfast sponsored by Entegris, Qualcomm vice president Dr. Geoffry Yu reminded attendees that, “TSV eventually will come, but the million dollar question is when. The market forces will dictate the answer.” What has become clear in the last year is that market demand for improved product performance will set the pace.

—E.K.

The Week in Review: September 19, 2014

Friday, September 19th, 2014

Extreme-ultraviolet lithography systems will be available to pattern critical layers of semiconductors at the 10-nanometer process node, and EUV will completely take over from 193nm immersion lithography equipment at 7nm, according to Martin van den Brink, president and chief technology officer of ASML Holding.

North America-based manufacturers of semiconductor equipment posted $1.35 billion in orders worldwide in August 2014 (three-month average basis) and a book-to-bill ratio of 1.04, according to the August EMDS Book-to-Bill Report published today by SEMI. A book-to-bill of 1.04 means that $104 worth of orders were received for every $100 of product billed for the month.
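The ratio arithmetic is easy to verify; a minimal sketch in Python (the implied billings figure is derived here, not stated in the SEMI report):

```python
# Book-to-bill ratio: bookings divided by billings, both on a
# three-month average basis. A ratio above 1.0 means more orders
# came in than product was billed.

def book_to_bill(bookings: float, billings: float) -> float:
    return bookings / billings

ratio = 1.04            # August 2014, per SEMI
orders = 1.35           # billions of USD, three-month average

# Implied billings (not stated in the report): orders / ratio
implied_billings = orders / ratio            # ~ $1.30B

# "$104 worth of orders for every $100 of product billed":
orders_per_100_billed = round(100 * ratio)   # 104
```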

Rudolph Technologies has introduced its new SONUS Technology for measuring thick films and film stacks used in copper pillar bumps and for detecting defects, such as voids, in through silicon vias (TSVs).

Samsung Electronics announced this week that it has begun mass producing its six gigabit (Gb) low-power double data rate 3 (LPDDR3) mobile DRAM, based on advanced 20 nanometer (nm) process technology. The new mobile memory chip will enable longer battery run-time and faster application loading on large screen mobile devices with higher resolution.

ProPlus Design Solutions, Inc. announced this week it expanded its sales operations to Europe.

Mentor Graphics this week announced the appointment of Glenn Perry to the role of vice president of the company’s Embedded Systems Division. The Mentor Graphics Embedded Systems Division enables embedded development for a variety of applications including automotive, industrial, smart energy, medical devices, and consumer electronics.

Intel Announces “New Interconnect” for 14nm

Tuesday, September 2nd, 2014

By Dr. Phil Garrou, Contributing Editor, Solid State Technology

Intel has just announced that its Embedded Multi-die Interconnect Bridge (“EMIB”) packaging technology will be available to 14nm foundry customers, claiming it is a “…lower cost and simpler 2.5D packaging approach for very high density interconnects between heterogeneous dies on a single package.”

Intel released the following description: “Instead of an expensive silicon interposer with TSV (through silicon via), a small silicon bridge chip is embedded in the package, enabling very high density die-to-die connections only where needed. Standard flip-chip assembly is used for robust power delivery and to connect high-speed signals directly from chip to the package substrate. EMIB eliminates the need for TSVs and specialized interposer silicon that add complexity and cost.”

It is highly likely that this is tied to the issuance of patent application publication US 2014/0070380 A1, published March 13, 2014.

In simplified form, interconnect bridges (“silicon, glass or ceramic”) are embedded in a laminate substrate and connected with flip chip, as shown below.

Bridge Interconnect as described in recent Intel patent.

A cross section of the package is more revealing, showing connections through the laminate and through the bridge substrate (316), which would be TSVs in the case of a silicon bridge substrate. The underside of the bridge substrate (314) may be connected to another bridge substrate for further interconnect routing, as shown below.

While there is no silicon interposer, there do appear to be TSVs in the embedded interconnect substrate, as shown below. While eliminating TSVs from the foundry process removes complexity from IC fabrication, the packaging operation becomes much more complex.

Since the 2.5D interposer has been reduced in size to the interconnect bridges, this may reduce cost, but it will increase signal length versus a true 3D stack or a 2.5D silicon interposer.

Further details will be discussed in a future IFTLE blog.

Intel EMIB Module in Cross Section

Inside the Hybrid Memory Cube

Friday, September 27th, 2013

By Thomas Kinsley and Aron Lunde

The HMC provides a breakthrough solution that delivers unmatched performance with the utmost reliability.

Since the beginning of the computing era, memory technology has struggled to keep pace with CPUs. In the mid 1970s, CPU design and semiconductor manufacturing processes began to advance rapidly. CPUs have used these advances to increase core clock frequencies and transistor counts. Conversely, DRAM manufacturers have primarily used the advancements in process technology to rapidly and consistently scale DRAM capacity. But as more transistors were added to systems to increase performance, the memory industry was unable to keep pace in terms of designing memory systems capable of supporting these new architectures. In fact, the number of memory controllers per core decreased with each passing generation, increasing the burden on memory systems.

To address this challenge, in 2006 Micron tasked internal teams to look beyond memory performance. Their goal was to consider overall system-level requirements, with the aim of creating a balanced architecture for higher system-level performance with more capable memory and I/O systems. The Hybrid Memory Cube (HMC), which blends the best of logic and DRAM processes into a heterogeneous 3D package, is the result of this effort. At its foundation is a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon vias (TSVs), as depicted in FIGURE 1. An energy-optimized DRAM array provides access to memory bits via the internal logic layer and TSVs, resulting in an intelligent memory device optimized for performance and efficiency.

By placing intelligent memory on the same substrate as the processing unit, each system can do what it’s designed to do more efficiently than previous technologies. Specifically, processors can make use of all of their computational capability without being limited by the memory channel. The logic die, with high-performance transistors, is responsible for DRAM sequencing, refresh, data routing, error correction, and high-speed interconnect to the host. HMC’s abstracted memory decouples the memory interface from the underlying memory technology and allows memory systems with different characteristics to use a common interface. Memory abstraction insulates designers from the difficult parts of memory control, such as error correction, resiliency and refresh, while allowing them to take advantage of memory features such as performance and non-volatility. Because HMC supports up to 160 GB/s of sustained memory bandwidth, the biggest question becomes, “How fast do you want to run the interface?”

The HMC Consortium

A radically new technology like HMC requires a broad ecosystem of support for mainstream adoption. To address this challenge, Micron, Samsung, Altera, Open-Silicon, and Xilinx collaborated to form the HMC Consortium (HMCC), which was officially launched in October 2011. The Consortium’s goals included pulling together a wide range of OEMs, enablers, and tool vendors to define an industry-adoptable serial interface specification for HMC. The consortium delivered on this goal within 17 months and introduced the world’s first HMC interface and protocol specification in April 2013.

The specification provides short-reach (SR), very short-reach (VSR), and ultra short-reach (USR) interconnections across physical layers (PHYs) for applications requiring tightly coupled or close-proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and computing, along with test and measurement equipment.

FIGURE 1. The HMC employs a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon-vias (TSVs).

The next goal for the consortium is to develop a second set of standards designed to increase data rate speeds. This next specification, which is expected to gain consortium agreement by 1Q14, shows SR speeds improving from 15 Gb/s to 28 Gb/s and VSR/USR interconnection speeds increasing from 10 Gb/s to 15–28 Gb/s.

Architecture and Performance

Other elements that separate HMC from traditional memories include raw performance, simplified board routing, and unmatched RAS features. Unique DRAMs within the HMC device are designed to support sixteen individual and self-supporting vaults. Each vault delivers 10 GB/s of sustained memory bandwidth, for an aggregate cube bandwidth of 160 GB/s. Within each vault there are two banks per DRAM layer, for a total of 128 banks in a 2GB device or 256 banks in a 4GB device. The impact on system performance is significant, with lower queue delays and greater availability of data responses compared to conventional memories that run banks in lock-step. Not only is there massive parallelism, but HMC also supports atomics that reduce external traffic and offload remedial tasks from the processor.
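The vault arithmetic quoted above is internally consistent, as a short sketch shows; note that the DRAM layer counts (4-high for the 2GB device, 8-high for the 4GB device) are inferred from the bank totals, not stated explicitly:

```python
# Checking the HMC vault arithmetic: 16 vaults at 10 GB/s each,
# two banks per DRAM layer per vault.
VAULTS = 16
BW_PER_VAULT_GBPS = 10
aggregate_bw = VAULTS * BW_PER_VAULT_GBPS      # 160 GB/s

BANKS_PER_LAYER_PER_VAULT = 2

def total_banks(dram_layers: int) -> int:
    # Banks scale with vaults x banks-per-layer x stack height.
    return VAULTS * BANKS_PER_LAYER_PER_VAULT * dram_layers

# 128 banks in the 2GB device implies a 4-high DRAM stack,
# and 256 banks in the 4GB device an 8-high stack.
```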

As previously mentioned, the abstracted interface is memory-agnostic and uses high-speed serial buses based on the HMCC protocol standard. Within this uncomplicated protocol, commands such as 128-byte WRITE (WR128), 64-byte READ (RD64), or dual 8-byte ADD IMMEDIATE (2ADD8), can be randomly mixed. This interface enables bandwidth and power scaling to suit practically any design—from “near memory,” mounted immediately adjacent to the CPU, to “far memory,” where HMC devices may be chained together in futuristic mesh-type networks. A near memory configuration is shown in FIGURE 2, and a far memory configuration is shown in FIGURE 3. JTAG and I2C sideband channels are also supported for optimization of device configuration, testing, and real-time monitors.
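A toy model of such a mixed request stream (illustrative only; the actual packet formats and field layouts are defined by the HMCC specification, not by this sketch):

```python
# Illustrative sketch: differently sized HMC commands can be mixed
# freely in a single request stream; the cube's logic layer routes
# each one to the vault that owns its address.
from dataclasses import dataclass

@dataclass
class Request:
    command: str        # e.g. "WR128", "RD64", "2ADD8"
    address: int
    payload_bytes: int  # bytes carried in the request packet

stream = [
    Request("WR128", 0x0000, 128),  # 128-byte write
    Request("RD64",  0x2000, 0),    # 64-byte read (data returns in response)
    Request("2ADD8", 0x4000, 16),   # dual 8-byte add immediate (atomic)
    Request("WR128", 0x8000, 128),
]

outbound_payload = sum(r.payload_bytes for r in stream)  # 272 bytes
```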

HMC board routing uses inexpensive, standard high-volume interconnect technologies, routes without complex timing relationships to other signals, and has significantly fewer signals. In fact, 160GB/s of sustained memory bandwidth is achieved using only 262 active signals (66 signals for a single link of up to 60GB/s of memory bandwidth).

FIGURE 2. The HMC communicates with the CPU using a protocol defined by the HMC consortium. A near memory configuration is shown.

FIGURE 3. A far memory communication configuration.

A single robust HMC package includes the memory, memory controller, and abstracted interface. This enables vault-controller parity and ECC correction with data scrubbing that is invisible to the user; self-correcting in-system lifetime memory repair; extensive device health-monitoring capabilities; and real-time status reporting. HMC also features a highly reliable external serializer/deserializer (SERDES) interface with exceptionally low bit error rates (BER) that supports cyclic redundancy check (CRC) and packet retry.

HMC will deliver 160 GB/s of bandwidth, a 15X improvement compared to a DDR3-1333 module running at 10.66 GB/s. With energy efficiency measured in picojoules per bit, HMC is targeted to operate in the 20 pj/b range. Compared to DDR3-1333 modules that operate at about 60 pj/b, this represents roughly a 70% improvement in efficiency. HMC also features an almost 90% pin count reduction—66 pins for HMC versus ~600 pins for a 4-channel DDR3 solution. Given these comparisons, it’s easy to see the significant gains in performance and the huge savings in both footprint and power usage.
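Those comparisons can be checked directly; note that 20 pj/b versus 60 pj/b is strictly a 67% gain, which the article rounds to about 70%:

```python
# Verifying the comparisons quoted above.
hmc_bw, ddr3_bw = 160.0, 10.66              # GB/s
speedup = hmc_bw / ddr3_bw                  # ~15X

hmc_pjb, ddr3_pjb = 20.0, 60.0              # picojoules per bit
efficiency_gain = 1 - hmc_pjb / ddr3_pjb    # ~0.67

hmc_pins, ddr3_pins = 66, 600               # single link vs. 4-channel DDR3
pin_reduction = 1 - hmc_pins / ddr3_pins    # 0.89, "almost 90%"
```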

Market Potential

HMC will enable new levels of performance in applications ranging from large-scale core and leading-edge networking systems, to high-performance computing, industrial automation, and eventually, consumer products.

Embedded applications will benefit greatly from high-bandwidth and energy-efficient HMC devices, especially applications such as testing and measurement equipment and networking equipment that utilizes ASICs, ASSPs, and FPGA devices from both Xilinx and Altera, two Developer members of the HMC Consortium. Altera announced in September that it has demonstrated interoperability of its Stratix FPGAs with HMC to benefit next-generation designs.

According to research analysts at Yole Développement Group, TSV-enabled devices are projected to account for nearly $40B by 2017—which is 10% of the global chip business. To drive that growth, this segment will rely on leading technologies like HMC.

FIGURE 4. Engineering samples are set to debut in 2013, with 4GB production following in 2014.

Production schedule

Micron is working closely with several customers to enable a variety of applications with HMC. Engineering samples in a four-link, 31x31x4mm package are expected later this year, with volume production beginning in the first half of 2014. Micron’s 4GB HMC is also targeted for production in 2014.

Future stacks, multiple memories

Moving forward, we will see HMC technology evolve as volume production reduces costs for TSVs and HMC enters markets where traditional DDR-type memory has resided. Beyond DDR4, we see this class of memory technology becoming mainstream, not only because of its extreme performance, but because of its ability to overcome the effects of process scaling as seen in the NAND industry. HMC Gen3 is on the horizon, with a performance target of 320 GB/s and an 8GB density. A packaged HMC is shown in FIGURE 4.

Among the benefits of this architectural breakthrough is the future ability to stack multiple memories onto one chip. •

**********
THOMAS KINSLEY is a Memory Development Engineer and ARON LUNDE is the Product Program Manager at Micron Technology, Inc., Boise, ID.

Paradigm Changes in 3D-IC Manufacturing

Monday, July 1st, 2013

THORSTEN MATTHIAS and PAUL LINDNER, EV Group, St. Florian, Austria

The process flows applied today for real product manufacturing are quite different from the process flows initially proposed for a universal 3D IC.

Successful 3D-IC prototypes have been demonstrated for many different devices. However, while for some applications 3D-IC architectures have been smoothly integrated into products despite their technical complexity and the omnipresent cost pressure, for other products there seems to be a long list of issues (cost, yield, thermal issues, lack of standards, lack of design tools, etc.) that prevents adoption of 3D-IC integration in the near future.

Common wisdom is that a technical innovation is first introduced for high-performance, high-margin applications, for which the performance gain can bear the additional cost. As the technology becomes more mature, costs are reduced and an increasing number of applications adopt the new technology. A good example of this in the semiconductor industry is flip chip bumping. However, if we look at 3D ICs the situation is not so clear. While it is true that some “cost does not matter” applications in the science, military or medical field use 3D stacking, many high-end devices (most notably CPUs) do not yet use 3D stacking. However, some of the lowest-cost devices in our industry such as light emitting diodes (LEDs), micro-electromechanical systems (MEMS) and image sensors have successfully implemented 3D-IC technology.

Is the adoption of 3D-IC technology in high-volume, low-cost devices evidence of its technical maturity? In his famous book “The Innovator’s Dilemma” Harvard Professor Clayton Christensen introduced the idea that any innovation and its potential for industrial adoption should be assessed in the context of their respective value networks [1]. The value network of a product (e.g. a CPU) is defined by the sales critical parameters (e.g. computing power, clock speed, on-chip cache memory and price) and by the expectations of the current customers about future requirements. Any innovation that improves the sales critical parameters within the current value network is defined as “sustainable innovation”. Innovations that do not improve the sales critical parameters within the current value network are defined as “disruptive innovations”. In his studies Prof. Christensen concluded that innovations cannot be introduced in value networks where they are considered “disruptive” no matter how technically mature, cheap or well established for other applications they are. However, in a different value network where the innovation is sustainable, the users can hone their skills and build expertise. If the expectations within one value network change, then innovations can be introduced very rapidly.

For example, for FPGAs the 2.5D interposer enables smaller die sizes, which allowed 28nm technology to be introduced at a reasonable wafer yield at an earlier point in time. The result of introducing interposers for FPGAs is that an FPGA with more and faster transistors can be manufactured earlier. Therefore, the 2.5D interposer for FPGAs is a sustainable innovation. For mainstream semiconductor devices like memory, Moore’s Law is (or at least was until recently) a good proxy of the value network. If you put a TSV on a chip, you cannot put transistors onto the same area. TSVs reduce the number of transistors on the chip and increase the price per transistor.
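The area argument can be made concrete with a toy cost model (every number below is hypothetical, chosen only to show the direction of the effect):

```python
# Toy model: die area spent on TSVs is area not available for
# transistors, so cost per transistor rises. All numbers are
# hypothetical illustrations, not real process data.

def cost_per_transistor(die_area_mm2: float,
                        density_per_mm2: float,
                        die_cost: float,
                        tsv_count: int = 0,
                        tsv_area_mm2: float = 1e-4) -> float:
    # Area left for transistors after TSV keep-out regions.
    usable_area = die_area_mm2 - tsv_count * tsv_area_mm2
    return die_cost / (usable_area * density_per_mm2)

base = cost_per_transistor(100, 1e7, 50.0)
with_tsvs = cost_per_transistor(100, 1e7, 50.0, tsv_count=10_000)
# with_tsvs > base: same die cost, ~1% fewer transistors
```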

Is 3D integration ready for volume production?

At first glance, TSV and 3D IC are disruptive innovations. Yet TSVs and die stacking have already been successfully implemented in high-volume manufacturing for CMOS image sensors, despite the fact that the combination of high technical complexity, immature technology, and low-cost/low-margin devices seems like a very unfavorable situation. From a purely technical point of view, 3D-IC image sensors seem more challenging than other devices, such as stacked memory: the TSV density is higher, the TSV diameter is smaller, the pitch is smaller, the wafers are thinner, and wafer-to-wafer stacking is necessary. However, the transition from front-side illuminated to backside illuminated image sensors, as well as the current transition to 3D-stacked image sensors (where photodiodes and digital signal processing are manufactured on separate wafers), has resulted in technical improvements within the existing value network, including better resolution, smaller pixels, better signal-to-noise ratio, higher image processing speed and higher bandwidth.

When engineers first looked at developing 3D-IC technology, they did not design a 3D-IC device from the outset, but rather started with separate technical milestones. In most cases, the first milestone was to manufacture a daisy chain with 10, 1,000 or 10,000 TSVs. The focus was primarily on the unit processes for TSV manufacturing. The second milestone was to manufacture a thin die with TSVs and bumps on both sides, which might eventually be used in a real system. The paradigm for thin-wafer processing in early 3D/TSV development was that a temporarily bonded wafer had to withstand any kind of backside process; thus, flexibility and broad process windows were most important. In theory, the idea of completely manufacturing individual thin chips is very attractive, as it enables known good die (KGD) manufacturing and fits into every possible integration scheme and business scenario. However, in practice this approach results in overly complex integration schemes, which are not optimal from a yield and cost perspective.

This paradigm changed when product groups began to adopt thin-wafer processing for specific products. Now the highest goal was to maximize profit on the product, and yield and cost of ownership were optimized for the entire process flow for chip and package — often abandoning previously considered universal one-size-fits-all solutions and resulting in completely new integration schemes. For example, whereas previous R&D efforts went into very thick films with the idea to embed C4 (flip chip) bumps, today’s 3D-IC devices primarily apply bump-last process flows, which use very thin adhesive layers. This has the advantage of reduced cost, better film thickness control and higher yield as a result of avoiding bump damage caused by post-bump processing. Transitioning from very thick to thin films also reduces the duration of the baking steps for curing the films — enabling the design of temporary bonders with more than twice the throughput. FIGURE 1 shows an exemplary process flow for 2.5D interposers with 3D-IC chips stacked by chip-to-wafer bonding [2]. Another key 3D-integration concept is overmolding prior to debonding, which allows double-side processing on ultra-thin wafers while avoiding thin-wafer handling altogether. After chip stacking the entire wafer is overmolded while the thin interposer wafer is still bonded to the carrier wafer. This overmolding compound creates a rigid film on top of the thin interposer wafer.

An analysis of the published process flows for 3D-IC manufacturing today shows that bump-last process flows and overmolding prior to debonding have already been implemented. Within TSMC’s Chip-on-wafer-on-substrate (CoWoS) process flow [3], the chip stacking on the interposer occurs before the backside of the interposer is processed. It is a complete reversal from the previous paradigm that individual chips have to be tested prior to stacking. KGD manufacturing of the interposer is not possible with this process flow. However, from the pragmatic manufacturing point of view it allows manufacturing of thin wafers while avoiding handling of thin wafers. Implementing a bump-last approach also increases flexibility and eliminates the risk of bump damage during stacking. Texas Instruments’ stacked wafer chip-scale package (WCSP) platform separates the interposer and chip manufacturing, which is more in line with the classical foundry/OSAT model [4]. However, like the CoWoS process flow, it allows the creation of ultra-thin interposers without the need to handle thin wafers at any point during manufacturing or assembly.

An ultra-thin device wafer is created by permanent wafer bonding to a silicon carrier wafer. After a series of process steps this wafer stack is bonded to a glass wafer, which then acts as a carrier wafer for further processing. This allows thinning of the initial silicon carrier wafer and creating TSVs. Essentially, the original carrier wafer now becomes an interposer wafer. An important aspect is that in this case the image sensor-interposer connection is bump-less, which allows interconnects with a fine pitch down to less than 2 micron while at the same time saving the cost to create bumps.

One paradigm of 3D-IC manufacturing was that industrial adoption would first start with chip-to-chip (C2C) stacking, later move to chip-to-wafer (C2W) stacking, and finally move to wafer-to-wafer (W2W) stacking. W2W integration has the fundamental limitations that the dies have to have the same size and that a good die might be stacked onto a defective die. However, with respect to manufacturing complexity it has many advantages, first and foremost that it allows parallel processing of all dies on the wafer. In fact, it is remarkable that W2W stacking with TSVs has already been successfully implemented for many devices, especially low-cost devices. Due to the successful implementation of backside illuminated image sensor manufacturing on large substrates, a 300mm wafer bonding infrastructure has been established in the industry. Fusion wafer bonding is the method of choice for bump-less chip-to-chip interconnects, for both via-last integration with oxide-oxide bond interfaces and via-middle integration with hybrid oxide/metal bond interfaces. Fusion wafer bonding is also a key technology for monolithic 3D integration, as it can be used to transfer thin layers of silicon on top of an already processed wafer.

The EVG GeminiFB fusion wafer bonding system integrates wafer cleaning, LowTemp plasma activation, SmartView wafer-to-wafer alignment system and wafer bonding all in one system.

One interesting aspect of W2W integration is that it enables ultra-shallow TSVs. It is possible to implement 1 µm x 10 µm or 1 µm x 5 µm TSVs without the need to deal with 5 µm or 10 µm thin wafers. As the cost of TSV manufacturing is strongly correlated to TSV depth and TSV aspect ratio, W2W integration allows significant cost reduction. W2W integration also enables much better die-to-die alignment accuracies compared to C2C and C2W, thereby enabling the usage of small TSV diameters and fine TSV pitch.

Stacked memory is a potential application for W2W stacking, as the dies have the same size. It is questionable whether die testing prior to stacking can be implemented for memory. If testing prior to stacking cannot be implemented, then W2W integration is a natural choice due to reduced TSV cost and precise alignment capability. In this way, the image sensor has paved the way for W2W integration for memory.

Conclusion

3D integration can provide many benefits, but only where it can prove to be a sustainable innovation. TSV and 3D chip stacking have been successfully implemented for devices like FPGAs and image sensors, where 3D IC was a means to improve the sales critical parameters of the devices. Its implementation in high-volume manufacturing occurred despite high technical complexity and the omnipresent cost pressure, which is compounded for low-cost devices. Innovation theory suggests that once a new technology has been established for one product, it can be adopted very rapidly by other products.
The process flows applied today for real product manufacturing are quite different from the process flows initially proposed for a universal 3D IC. Chip manufacturing and packaging process flows have since been concurrently optimized. Today, C4 bumps are generally manufactured as late as possible in the overall manufacturing process. Thin-wafer processing is a key competence, but in many cases thin-wafer handling after debonding has been eliminated by either overmolding or wafer bonding to another device wafer prior to debonding. Image sensors apply the most radical concept of W2W stacking, which allows reduced manufacturing costs due to bump-less integration and ultra-shallow TSVs. 3D ICs based on W2W integration are a reality today.

References

1. C. Christensen, “The Innovator’s Dilemma,” Harvard Business School Press, 1997
2. T. Matthias et al., “From unit processes to smart process flows – new integration schemes for 2.5D interposer,” Chip Scale Review, March/April 2013
3. P. Garrou, “Insights from the Leading Edge 135: UMC/SCP Memory on Logic; SEMI Europe 3D Summit Part 2,” Solid State Technology, February 12, 2013
4. R. Dunne et al., “Development of a stacked WCSP package platform using TSV Technology,” Proc. IEEE Electronic Components and Technology Conference (ECTC) 2012
THORSTEN MATTHIAS is business development director at EV Group, St. Florian am Inn, Austria. Tel: +43 676 84531148, e-mail: t.matthias@evgroup.com. PAUL LINDNER is executive technology director at EV Group.

