Nvidia Tackles High-Performance Computing

By Scott Ferguson  |  Posted 2007-06-21

With the Tesla GPU, Nvidia is looking to bring its graphics technology into the data center for the first time.


On June 20, the Santa Clara, Calif., company introduced its Tesla processor, which executives said will allow the company to translate its graphics processor technology into high-performance computing (HPC).

The Tesla graphics processing unit (GPU) marks Nvidia's first attempt to penetrate the enterprise beyond its traditional role as a producer of graphics technology. The company's Quadro processors are mainly used for digital content creation and 3D graphics, while its GeForce graphics processors are used in video games and other entertainment products.

The Tesla GPU is considerably more powerful. It uses 128 parallel processors that can deliver up to 518 gigaflops of parallel computation in either a high-density PC or workstation. A gigaflop is a billion floating-point calculations per second.
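The 518-gigaflop figure follows from the chip's parallel design: 128 processors each retiring several floating-point operations per clock. A quick back-of-the-envelope check, noting that the 1.35 GHz shader clock and 3 flops per cycle are our assumptions for illustration, not figures from the article:

```python
# Rough peak-throughput estimate for a 128-processor GPU.
# Assumed (not from the article): 1.35 GHz shader clock and
# 3 single-precision flops per cycle per processor.
processors = 128
clock_hz = 1.35e9
flops_per_cycle = 3

peak_gflops = processors * clock_hz * flops_per_cycle / 1e9
print(peak_gflops)  # ~518.4 gigaflops
```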

This type of compute power, according to Nvidia, makes the Tesla GPU an ideal processor for a number of highly specialized fields that need HPC capabilities, such as oil and gas exploration, the geosciences, molecular biology, and medical diagnostics.

However, the Tesla GPU is not meant as a substitute for a traditional CPU, but is designed to work in conjunction with a traditional processor to provide additional computing power, said Andy Keane, the general manager of GPU computing at Nvidia.


By allowing multiple software threads to run in parallel, the processor provides higher throughput for multithreaded applications.

Nvidia also unveiled Wednesday a computing server the company is touting as an example of the cooperation between GPU and CPU using Tesla technology. This 1U (1.75-inch) system houses eight Tesla GPUs and offers more than 1,000 parallel processors. This system, when coupled with a standard server with multicore processors, will add teraflops of performance with its parallel processing ability.

The Tesla GPU also offers better performance per watt. Nvidia's Computing Server will use about 550 watts of power during peak capacity, Keane said.

In addition to the new GPU and server, Nvidia unveiled what the company calls its Deskside Supercomputer, a high-density workstation that includes two Tesla GPUs attached through a standard PCI Express connection, which offers eight teraflops of compute power.


In 2006, Nvidia also unveiled its CUDA (Compute Unified Device Architecture), software that allows for thread computing on GPUs and CPUs. Thread computing allows hundreds of on-chip processor cores to simultaneously communicate and cooperate to solve complex computing problems.

The company's CUDA development environment is supported on both Linux and Microsoft Windows XP operating systems.
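To illustrate the thread-computing model CUDA exposes (this sketch is ours, not from the article; the kernel name and sizes are hypothetical), a CUDA program launches thousands of lightweight threads, each of which handles one element of the problem:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical example: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Early CUDA copies data explicitly between host and device memory.
    float *a, *b, *c;
    cudaMalloc((void **)&a, bytes);
    cudaMalloc((void **)&b, bytes);
    cudaMalloc((void **)&c, bytes);
    cudaMemcpy(a, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(b, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element: 4,096 blocks of 256 threads.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);

    cudaMemcpy(hc, c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The CPU (host) orchestrates memory transfers and kernel launches while the GPU's processor cores execute the threads in parallel, which is the division of labor Keane describes between a traditional CPU and the Tesla GPU.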

The standard configuration of the Computing Server is $12,000, while the Deskside Supercomputer begins at $7,500. The GPU Computing processor costs $1,499. Both the Tesla processor and the supercomputer will be available in August. The server will become generally available sometime later this year.

Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
