Though virtual machines have become indispensable in the server room over the last few years, desktop virtualization has been less successful. One of the reasons has been performance, specifically graphics performance: modern virtualization products are generally pretty good at dynamically allocating CPU power, RAM, and drive space as clients need them, but graphics just hasn't been as good as it is on an actual desktop.

NVIDIA wants to solve this problem with its VGX virtualization platform, which it unveiled at its GPU Technology Conference in May. As pitched, the technology will allow virtual machines to use a graphics card installed in a server to accelerate applications, games, and video. Through NVIDIA’s VGX Hypervisor, compatible virtualization software (primarily from Citrix, though Microsoft’s RemoteFX is also partially supported) can use the GPU directly, allowing thin clients, tablets, and other devices to more closely replicate the experience of using actual desktop hardware.

NVIDIA’s VGX K1 is designed to bring basic graphics acceleration to a relatively large number of users.

When last we heard about the hardware that drives this technology, NVIDIA was talking up a board with four GPUs based on its Kepler architecture. That card, now known as the NVIDIA VGX K1, is built to provide basic 3D and video acceleration to a large number of users, up to 100 according to NVIDIA’s marketing materials. Each of this card’s four GPUs pairs 192 of NVIDIA’s graphics cores with 4GB of DDR3 RAM (for a total of 768 cores and 16GB of memory), and the card as a whole has a reasonably modest TDP of 150 watts. For reference, NVIDIA’s high-end GTX 680 desktop graphics card has a TDP of 195W, and the dual-GPU version (the GTX 690) steps that up to 300W.
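To get a rough sense of what "basic" acceleration means at that scale, it's worth dividing the K1's quoted totals across its 100-user ceiling. The figures below come straight from the specs above; the even split is an illustrative assumption, since the actual hypervisor presumably allocates GPU time and memory dynamically rather than in fixed slices.

```python
# Back-of-the-envelope split of the VGX K1's quoted resources across
# NVIDIA's stated 100-user ceiling. Assumes an even, static division,
# which the real scheduler almost certainly does not use.

TOTAL_CORES = 4 * 192      # four Kepler GPUs, 192 graphics cores each
TOTAL_MEMORY_GB = 4 * 4    # 4GB of DDR3 per GPU
MAX_USERS = 100            # NVIDIA's quoted maximum for the K1

cores_per_user = TOTAL_CORES / MAX_USERS
memory_per_user_mb = TOTAL_MEMORY_GB * 1024 / MAX_USERS

print(f"~{cores_per_user:.1f} cores and ~{memory_per_user_mb:.0f}MB of graphics memory per user")
# -> ~7.7 cores and ~164MB of graphics memory per user
```

In other words, a fully loaded K1 leaves each session only a sliver of the hardware, which is why NVIDIA positions it for light 3D and video work rather than demanding workloads.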


