According to the Science and Technology Innovation Board Daily, industry insiders have revealed that Nvidia's China-specific "special edition" AI chip, the H20, will be fully available for pre-order after this year's GTC 2024 conference (March 18-21), with deliveries beginning as soon as four weeks later.

In January, there were reports that Nvidia had begun accepting pre-orders from distributors for the H20, a new AI chip designed specifically for China, priced competitively with products from its Chinese rivals. The H20 is the most powerful of three models Nvidia developed for the Chinese market. Its computing power, however, falls well short of Nvidia's flagship H100 AI chip and of the H800 previously released for the Chinese market.

According to three sources, the H20's specifications also suggest that it trails Chinese competitors in certain key areas, such as FP32 performance, which measures how quickly the chip handles common computing tasks. The H20 appears to hold an advantage in interconnect speed, however. Nvidia has priced H20 orders from Chinese distributors at between $12,000 and $15,000 per card.

Based on previously leaked specifications, Nvidia's H20 belongs to the same series as the H100 and H200, all built on Nvidia's Hopper architecture, but with memory capacity increased to 96GB of HBM3 and GPU memory bandwidth of 4.0TB/s. In terms of computing power, the product's FP8 compute capability is 296 TFLOPS and its FP16 compute capability is 148 TFLOPS, only about one-thirteenth that of today's "strongest" AI chip, the H200.
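The "one-thirteenth" claim can be sanity-checked with simple arithmetic. A minimal sketch, assuming the H200's published FP16 Tensor figure of roughly 1,979 TFLOPS (with sparsity); that number comes from Nvidia's public H200 specifications, not from the article itself:

```python
# Sanity check of the reported H20 compute figures.
h20_fp8 = 296     # TFLOPS, FP8, from the article
h20_fp16 = 148    # TFLOPS, FP16, from the article
h200_fp16 = 1979  # TFLOPS, FP16 Tensor w/ sparsity (assumed published H200 spec)

ratio = h200_fp16 / h20_fp16
print(f"H20 FP16 is roughly 1/{ratio:.0f} of the H200's")
# Note FP8 is exactly double FP16 here, the usual 2:1 scaling on Hopper.
print(f"FP8/FP16 ratio on the H20: {h20_fp8 / h20_fp16:.1f}")
```

Under that assumption the ratio works out to about 13.4, consistent with the article's "1/13th" figure.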