The NVIDIA Titan V Deep Learning Deep Dive: It's All About The Tensor Cores
by Nate Oh on July 3, 2018 10:15 AM EST

The Test
For our purposes, we have utilized the full Baidu DeepBench for a single GPU, a reference benchmark from NVIDIA's Caffe2 Docker image, submissions for Stanford DAWNBench, and benchmarks from HPE DLBS. Altogether, this offers a low-level look at the Titan V, a measure of real-world performance, and a glance at NVIDIA's TensorRT inference optimizer.
Outside of DeepBench, all tests were done in Docker images. Configuring and troubleshooting ROCm/HIP/MIOpen beyond DeepBench was beyond the scope of this article, and so the Radeon RX Vega 64 only features in the DeepBench tests.
Overview of Conducted Deep Learning Tests

| Parent Suite/Test | Type | Dataset | Model | Framework | Tensor Core Aware |
|---|---|---|---|---|---|
| DeepBench Dense Matrix Multiplies | Training & Inference | N/A | N/A | N/A | Yes |
| DeepBench Convolutions | Training & Inference | N/A | N/A | N/A | Yes |
| DeepBench Recurrent Layers | Training & Inference | N/A | N/A | N/A | Yes |
| DeepBench Sparse Ops | Inference | N/A | N/A | N/A | N/A |
| NVIDIA Caffe2 Docker ImageNet Training | Training | ILSVRC2012 (ImageNet) | ResNet-50 (CNN) | Caffe2 | Yes |
| HPE DLBS Caffe2 | Training & Inference | ILSVRC2012 (ImageNet) | ResNet-50 | Caffe2 | Yes |
| HPE DLBS TensorRT | Inference | ILSVRC2012 (ImageNet) | ResNet-50 | TensorRT | Yes |
| DAWNBench CIFAR10 Image Classification | Training | CIFAR10 | Custom ResNet34 / Custom ResNet18 | PyTorch | No |
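DeepBench's dense matrix multiply tests time raw GEMM calls through the vendor BLAS libraries (cuBLAS on NVIDIA, rocBLAS on AMD) rather than training a model. As a rough illustration of what such a micro-benchmark measures, here is a minimal CPU-side sketch using NumPy; the matrix sizes and helper name are our own, not DeepBench's actual kernel configurations:

```python
import time
import numpy as np

def time_gemm(m, n, k, reps=10):
    """Time a dense matrix multiply (GEMM) and report throughput in GFLOP/s.

    A rough analogue of DeepBench's dense matmul micro-benchmark;
    DeepBench itself calls cuBLAS/rocBLAS on the GPU, while this
    sketch runs on the CPU through NumPy's BLAS backend.
    """
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b  # warm-up call so one-time initialization is excluded from timing
    start = time.perf_counter()
    for _ in range(reps):
        c = a @ b
    elapsed = time.perf_counter() - start
    # A dense m*n*k GEMM performs roughly 2*m*n*k floating-point operations
    gflops = 2.0 * m * n * k * reps / elapsed / 1e9
    return c, gflops

if __name__ == "__main__":
    _, gflops = time_gemm(512, 512, 512)
    print(f"GEMM 512x512x512: {gflops:.1f} GFLOP/s")
```

The reported figure is what the "Tensor Core Aware" column is about: on Volta, the same GEMM issued in FP16 through cuBLAS can be routed to the Tensor Cores, multiplying the achievable FLOP/s.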
Our test lineup does have limitations. For one, we are restricted to a single-node, single-GPU configuration, as well as by the need for regression testing. In that sense, multi-day training runtimes are not ideal, particularly as on older hardware this might translate into multi-week runtimes or non-convergence.
As our first foray into deep learning performance on GPUs, we do not expect this to be the most optimal test lineup, and we welcome constructive criticism on our ongoing deep learning investigations.
Software Configurations
The testbed was put in non-graphical mode when running benchmarks, so that the GPU was not additionally rendering a desktop environment. For the implementations of the two DAWNBench CIFAR10 submissions, we utilized later versions and lightly modified them for easier logging/use (models, optimizers, parameters, etc., were untouched). Docker images were pulled from NVIDIA GPU Cloud (NGC).
Deep Learning Tests Comparison

| Test | Software Versions |
|---|---|
| DeepBench (NVIDIA) | CUDA 9.1.85, cuDNN 7.1.3, NVIDIA Driver 390.30 |
| DeepBench (AMD) | ROCm 1.8.118, MIOpen-HIP 1.3.0, rocBLAS 0.13.2.1 |
| NVIDIA Caffe2 Docker ImageNet Training | NGC Docker Image: Caffe2 18.04-py2 |
| DAWNBench Image Classification Submissions | NGC Docker Image: PyTorch 18.04-py3 |
| HPE DLBS | NGC Docker Images: Caffe2 18.04-py2, PyTorch 18.04-py3 |
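Because results are only comparable when the library versions above are pinned, each run should record the environment it actually executed in. A minimal sketch of such a check, with a hypothetical helper name and an illustrative module list (the NGC images ship their own pinned framework builds):

```python
import importlib

def report_versions(modules=("torch", "caffe2", "numpy")):
    """Return the version of each importable framework, for run logs.

    Modules that import but expose no __version__ attribute are
    reported as "unknown"; modules missing from the image as None.
    """
    found = {}
    for name in modules:
        try:
            mod = importlib.import_module(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None  # not installed in this image
    return found

if __name__ == "__main__":
    for name, version in report_versions().items():
        print(f"{name}: {version}")
```

Logging this alongside each benchmark result makes it possible to spot when a regression stems from a framework update rather than the hardware under test.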
Citations
Baidu DeepBench
Baidu Research. DeepBench: Benchmarking Deep Learning operations on different hardware. https://github.com/baidu-research/DeepBench
ImageNet (ILSVRC2012)
Olga Russakovsky and Jia Deng (equal contribution), Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015, 115, 211-252. https://arxiv.org/abs/1409.0575
Stanford DAWNBench
Cody A. Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. DAWNBench: An End-to-End Deep Learning Benchmark and Competition. NIPS ML Systems Workshop 2017. https://dawn.cs.stanford.edu/benchmark/papers/nips17-dawnbench.pdf
CIFAR10
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. University of Toronto, 2009.
KervResNet
Chen Wang. https://github.com/wang-chen/KervNets
Basenet (ResNet18 with Modifications)
Ben Johnson. https://github.com/bkj/basenet/