In recent years, the strength of Qualcomm, ARM, and other companies in mobile and Internet-of-Things chips has left Intel squeezed in the mobile-Internet era, and with NVIDIA now joining the fray, even the home turf Intel has long been proud of is under pressure.
As GPUs are increasingly used for general-purpose computing, NVIDIA's advantages are becoming apparent. On August 12th this year, NVIDIA released its latest financial results: total revenue of 1.428 billion U.S. dollars, up 24% year-on-year, and a profit of 253 million U.S. dollars, up 873% year-on-year. This poses no small threat to Intel.
And just recently, the confrontation between the two chip makers broke out into the open.
Earlier, Intel claimed in a benchmark report that its new Xeon Phi processor line delivers higher computing performance than the GPU processors currently on the market. NVIDIA disputed the report's findings and published a blog post countering Intel with DGX-1, its new computing system built specifically for deep learning.
(The picture shows the Intel report statement)
(The picture shows DGX-1)
NVIDIA emphasized that DGX-1 was launched for the age of artificial intelligence. It is built on Tesla P100 accelerators based on NVIDIA's new Pascal architecture and comes pre-installed with a multi-layer software stack, including accelerated libraries for all the major deep learning frameworks, the NVIDIA Deep Learning SDK, the DIGITS GPU training system, drivers, and CUDA. The system also provides access to cloud management services for container creation and deployment, system updates, and an application repository. Because these software components run on Tesla GPUs, applications can be up to 12 times faster than on earlier GPU-accelerated solutions.
To underline the performance advantages of the Xeon Phi processor, Intel's report stressed how the product differs from current GPU processors. It claimed that Xeon Phi processors train 2.3 times faster than GPUs, offer 38% better scaling across multiple nodes, and can scale up to 128 nodes, something GPUs currently on the market cannot do. It also claimed that a system of 128 Xeon Phi processors is 50 times faster than a single Xeon Phi processor, demonstrating a significant advantage in scalability.
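To put the last figure in perspective (a back-of-the-envelope reading of Intel's own numbers, not a figure from either company's report): a 50-fold speedup on 128 processors corresponds to a parallel efficiency of roughly 50 / 128, or about 39%, meaning each node contributes well under half of its standalone throughput at that scale.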
However, NVIDIA strongly rebutted Intel's claim, pointing out that Intel had used 18-month-old data to compare four Maxwell GPUs against four Xeon Phi processors. With up-to-date Caffe AlexNet results, NVIDIA said, the four Maxwell GPUs are in fact 30% faster than the four Xeon Phi processors.
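The dispute turns on a standard, reproducible measurement: how many AlexNet training passes per second a system sustains under Caffe. As a rough illustration only, and not the harness either company used, the sketch below times forward and backward passes of the reference AlexNet model with pycaffe; the model path, batch size, and iteration count are assumptions, and the reference train_val.prototxt additionally expects training data (such as an ImageNet LMDB) to be prepared.

# A minimal sketch of how a Caffe AlexNet training-speed measurement is
# typically taken; paths and sizes below are illustrative assumptions,
# not the configuration used in either company's benchmark.
import time
import caffe

MODEL_DEF = "models/bvlc_alexnet/train_val.prototxt"  # assumed path to the reference AlexNet model
ITERATIONS = 50
BATCH_SIZE = 256  # reference AlexNet training batch size; real benchmarks may differ

caffe.set_mode_gpu()
caffe.set_device(0)

# TRAIN phase so that both the forward and backward passes are exercised,
# matching a training-throughput measurement rather than inference.
net = caffe.Net(MODEL_DEF, caffe.TRAIN)

# Warm-up pass so one-time initialization cost is not counted.
net.forward()
net.backward()

start = time.time()
for _ in range(ITERATIONS):
    net.forward()
    net.backward()
elapsed = time.time() - start

print("%.1f images/sec" % (ITERATIONS * BATCH_SIZE / elapsed))

Caffe also ships a built-in "caffe time" command-line tool that performs essentially the same measurement; multi-device comparisons such as the four-GPU configuration cited above are typically reported as aggregate throughput across devices.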
At a crossroads of product transitions, Intel has been pushed into a corner by its competitors. Will it keep waiting for the tide to turn, or accelerate a breakthrough in a new direction? And can NVIDIA's GPU products win both fame and fortune and rewrite the history of the technology industry? The two chip giants each face their own historical moment. In any case, the fiercely contested GPU market has never been calm.
Recommended reading:
How does Intel's latest Xeon Phi chip stack up against NVIDIA?
Interpreting Invista's radar gesture recognition: a closer look at Project Soli