UPDATED: 2024-07-17 07:31 (Wed)
Samsung at a crossroad as HBM deal with Nvidia yet to be sealed
  • JY Han
  • Published 2024.06.20 07:03

Interposer competence means Korean chipmaker will, eventually, win deal
Image: TSMC

This week, multiple acquaintances messaged me to ask whether a certain report was true.

The report, published by South Korean media, claimed that Nvidia had asked Samsung to change the design of its high-bandwidth memory (HBM) and that this would further delay supply. Coincidentally, Samsung’s share price fell the same day. The report was deleted just hours after it was posted, and Samsung denied it was true.

Just last month, Samsung replaced the head of its chip division, a move many viewed as a reaction to the company falling behind its rivals in HBM. Since then, similar reports alleging Samsung’s failure to win Nvidia’s contract have surfaced.

Nvidia used to be a small customer for memory makers, buying modest volumes of graphics DRAM. Now it is one of their top customers, on the level of Apple.

This is due to the popularity of generative AI, which has caused seismic changes in the chip industry. For example, the market valuation of Hanmi Semiconductor, a fab equipment maker, is today similar to that of LG Electronics, a company with a hundred times its revenue. In other words, the chip industry as a whole is going through a boom. Some companies will rise, others fall.

On Tuesday, Nvidia became the world’s most valuable company, surpassing Microsoft. The GPU maker’s share price and earnings have skyrocketed since OpenAI unveiled ChatGPT on November 30, 2022.

Nvidia recorded US$26.04 billion in revenue and US$16.9 billion in operating income (an operating margin of roughly 65%) in its latest fiscal quarter, beating market expectations by a wide margin. AI GPUs accounted for 86.7% of revenue. Before ChatGPT, gaming GPUs were its bread and butter, but AI GPUs began to surpass them in revenue starting in mid-2022.

The company’s AI accelerator, the H100, which combines its GPU with SK Hynix’s HBM, costs 50 million won per board, yet demand is outpacing supply. Its successor, the H200, is also expected to face a shortage, Nvidia CEO Jensen Huang said during the earnings call for its latest quarter.

The shortage of AI accelerator boards is caused by a lack of production capacity for the parts used in their packaging. The H100 and H200 place a silicon interposer atop a large substrate, with the GPU and HBM mounted on the interposer. While HBM die supply is admittedly slow, the bigger bottleneck is TSMC’s chip-on-wafer-on-substrate (CoWoS) packaging capacity, and silicon interposers in particular.

TSMC is expanding its CoWoS package capacity aggressively. However, the situation is different for silicon interposers.

Silicon interposers integrate multiple chip dies into one package and route electrical signals between them.

And those used in the H100 are quite large. According to TSMC, the silicon interposer on the H100 is 3.3 times the size of a lithography mask (26 × 33 mm = 858 mm²), which puts its surface area at roughly 2,831 mm². On a 40-nanometer (nm) process, a 300 mm wafer can therefore deliver up to nine interposers, assuming a 100% yield rate. Since the interposers go through wiring and through-silicon via (TSV) processes while on the wafer, some units will be defective, so a single wafer realistically delivers six to seven interposers.
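The arithmetic above can be sanity-checked with a short Python sketch. The reticle size, the 3.3× multiple, and the per-wafer counts are the figures cited in the article; the area-based bound computed here is only a naive upper limit, since it ignores rectangular dicing, edge exclusion, and reticle-stitching constraints, which is why TSMC’s practical figure is far lower.

```python
import math

# Figures cited in the article (per TSMC): a full lithography mask
# (reticle) field is 26 mm x 33 mm, and the H100 interposer is
# 3.3 times that size.
RETICLE_MM2 = 26 * 33                  # 858 mm^2
interposer_mm2 = 3.3 * RETICLE_MM2     # ~2,831 mm^2

# Naive upper bound: usable area of a 300 mm wafer divided by the
# interposer area. Real gross counts are far lower (up to nine,
# per the article) because dies are rectangular, the wafer edge is
# excluded, and oversized interposers must be stitched from
# multiple exposures.
wafer_area_mm2 = math.pi * (300 / 2) ** 2   # ~70,686 mm^2
naive_bound = wafer_area_mm2 / interposer_mm2

print(f"Reticle area:     {RETICLE_MM2} mm^2")
print(f"Interposer area:  {interposer_mm2:,.0f} mm^2")
print(f"Naive area bound: {naive_bound:.1f} interposers per wafer")
```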

As GPUs become more advanced, they will grow larger and carry more HBM stacks, which means the interposer will also need to grow. According to TSMC, by 2026 the interposer will be 5.5 times the size of a lithography mask, and by 2027 this will increase to 8 times. The number of interposers that can be made from a single wafer will then drop further, to five. This limitation is one reason chipmakers are so interested in glass substrates, which can act as an interposer.
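Extending the same calculation to the roadmap figures above shows how quickly interposer area grows. The reticle multiples are the article’s cited TSMC figures; only the areas are computed here.

```python
RETICLE_MM2 = 26 * 33  # 26 mm x 33 mm lithography mask, per the article

# Reticle multiples cited in the article: 3.3x for the H100 today,
# 5.5x by 2026, and 8x by 2027.
roadmap = [("today", 3.3), ("2026", 5.5), ("2027", 8.0)]

for year, multiple in roadmap:
    area_mm2 = multiple * RETICLE_MM2
    print(f"{year:>5}: {multiple:.1f}x reticle = {area_mm2:,.0f} mm^2")
```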

TSMC is procuring interposers from other companies such as UMC due to its lack of capacity, but this is a short-term solution. New packaging technologies such as TSMC’s CoWoS-L (CoWoS with local interconnect), in which the interposer is placed only where it is needed, must be commercialized. However, this requires sophistication on par with front-end processes.

While the timing will matter for Samsung, this interposer shortage is likely why Nvidia will inevitably place some orders with the South Korean tech giant. Samsung is one of the few chipmakers that can design and produce its own interposers, and it also has HBM and 2.5D package production capacity. At its Samsung Foundry Forum earlier this month, the company stressed that it is the only one that can offer memory, interposer, and packaging on a turnkey basis.

More challenging for Samsung is catching up to SK Hynix’s HBM, which was designed with characteristics customized for Nvidia’s GPUs. This is a position that Samsung, the world’s leader in memory chips, has never found itself in before. Will Samsung’s HBM pass Nvidia’s quality test? It seems so. But it is the how of it, not the when, that will matter.


  • Copyright © 2024 THE ELEC, Korea Electronics Industry Media. All rights reserved.