The Personal Supercomputer for Leading-Edge AI Development. Your data science team depends on computing performance to gain insights and innovate faster through the power of deep learning and data analytics. Until now, AI supercomputing was confined to the data center, limiting the experimentation needed to develop and test deep neural networks prior to training at scale. Now there's a solution that offers the power to experiment with deep learning while bringing AI supercomputing performance within arm's reach.

DGX Station brings the performance of an AI supercomputer to a workstation form factor, taking advantage of innovative engineering and a water-cooled system that runs whisper-quiet. The NVIDIA DGX Station packs 480 TeraFLOPS of performance and is the first and only workstation built on four NVIDIA Tesla® V100 accelerators, with innovations such as next-generation NVLink™ and the new Tensor Core architecture.
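The headline figure follows from simple arithmetic, assuming NVIDIA's advertised peak of 120 TFLOPS of mixed-precision (FP16) Tensor Core throughput per Tesla V100 (a theoretical peak, not a sustained rate):

```python
# Peak FP16 Tensor Core throughput per Tesla V100, per NVIDIA's
# advertised figure (theoretical peak, not sustained throughput).
TFLOPS_PER_V100 = 120
NUM_GPUS = 4  # DGX Station carries four V100 accelerators

total_tflops = TFLOPS_PER_V100 * NUM_GPUS
print(total_tflops)  # 480, matching the quoted 480 TeraFLOPS
```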

DGX Station breaks through the limitations of building your own deep learning platform. You could spend a month or longer procuring, integrating, and testing hardware and software. Then additional expertise and effort are needed to optimize frameworks, libraries, and drivers. That's valuable time and money spent on systems integration and software engineering that could be spent training and experimenting.

NVIDIA DGX Station is designed to kickstart your AI initiative, with a streamlined plug-in and power-up experience that can have you training deep neural networks in just one day.

Want to kickstart deep learning within your organisation? Contact us now!

ClusterVision is a dedicated NVIDIA Preferred Solution Provider. Our close relationship with NVIDIA ensures that we can offer the best GPU and deep learning solutions to our customers. Additionally, ClusterVision is the exclusive NVIDIA Partner Network (NPN) partner for DGX Station sales in the Benelux.

DGX Station System Specifications

Spec                           NVIDIA DGX Station
GPUs                           4 × Tesla V100
TFLOPS (GPU FP16)              480
GPU Memory                     64 GB total
CPU                            20-core Intel Xeon E5-2698 v4, 2.2 GHz
NVIDIA CUDA Cores              20,480
NVIDIA Tensor Cores            2,560
Maximum Power Requirements     1,500 W
System Memory                  256 GB DDR4 LRDIMM
Storage                        Data: 3 × 1.92 TB SSD (RAID 0); OS: 1 × 1.92 TB SSD
Network                        Dual 10 GbE, 4 IB EDR
Display                        3 × DisplayPort, 4K resolution
Acoustics                      < 35 dB
Software                       Ubuntu Linux Host OS, DGX Recommended GPU Driver, CUDA Toolkit
System Weight                  88 lbs / 40 kg
System Dimensions              518 mm (D) × 256 mm (W) × 639 mm (H)
Operating Temperature Range    10–30 °C
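Several of the aggregate figures in the table follow directly from the published per-GPU Tesla V100 specifications (5,120 CUDA cores, 640 Tensor Cores, and 16 GB of HBM2 per accelerator); a short sanity-check sketch:

```python
# Published per-GPU specifications for the Tesla V100 (16 GB variant).
CUDA_CORES_PER_GPU = 5120
TENSOR_CORES_PER_GPU = 640
HBM2_GB_PER_GPU = 16
NUM_GPUS = 4  # four V100 accelerators in DGX Station

print(CUDA_CORES_PER_GPU * NUM_GPUS)    # 20480 CUDA cores
print(TENSOR_CORES_PER_GPU * NUM_GPUS)  # 2560 Tensor Cores
print(HBM2_GB_PER_GPU * NUM_GPUS)       # 64 GB total GPU memory
```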