NVIDIA Tegra X1 Preview & Architecture Analysis
by Joshua Ho & Ryan Smith on January 5, 2015 1:00 AM EST. Posted in:
- SoCs
- Arm
- Project Denver
- Mobile
- 20nm
- GPUs
- Tablets
- NVIDIA
- Cortex A57
- Tegra X1
GPU Performance Benchmarks
As part of today’s announcement of the Tegra X1, NVIDIA also gave us a short opportunity to benchmark the X1 reference platform under controlled circumstances. In this case NVIDIA had several reference platforms plugged in and running, pre-loaded with various benchmark applications. The reference platforms themselves had a simple heatspreader mounted on them, intended to replicate the ~5W heat dissipation capabilities of a tablet.
The purpose of this demonstration was two-fold: first, to show that X1 was up and running and capable of NVIDIA’s promised features; second, to showcase the platform’s strong GPU performance. Meanwhile NVIDIA also had an iPad Air 2 on hand for power testing, running Apple’s latest and greatest SoC, the A8X. NVIDIA has made it clear that they consider Apple the SoC manufacturer to beat right now, as the A8X’s PowerVR GX6850 is the fastest GPU among currently shipping SoCs.
It goes without saying that the results should be taken with an appropriate grain of salt until we can get Tegra X1 back to our labs. However, we saw all of the testing first-hand, and as best as we can tell NVIDIA’s tests were sincere.
NVIDIA Tegra X1 Controlled Benchmarks

| Benchmark | A8X (AT) | K1 (AT) | X1 (NV) |
|---|---|---|---|
| BaseMark X 1.1 Dunes (Offscreen) | 40.2 fps | 36.3 fps | 56.9 fps |
| 3DMark 1.2 Unlimited (Graphics Score) | 31781 | 36688 | 58448 |
| GFXBench 3.0 Manhattan 1080p (Offscreen) | 32.6 fps | 31.7 fps | 63.6 fps |
For benchmarking, NVIDIA had BaseMark X 1.1, 3DMark Unlimited 1.2, and GFXBench 3.0 up and running. Our X1 numbers come from the benchmarks we ran as part of NVIDIA’s controlled test, while the A8X and K1 numbers come from our Mobile Bench.
NVIDIA’s stated goal with X1 is to (roughly) double K1’s GPU performance, and while these controlled benchmarks for the most part don’t quite get there, X1 is still a significant improvement over K1. NVIDIA does meet their goal under Manhattan, where performance is almost exactly doubled; meanwhile the 3DMark and BaseMark X results increased by 59% and 57% respectively.
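As a quick sanity check, those gains can be reproduced directly from the table’s published figures; a minimal Python sketch of the arithmetic (nothing assumed beyond the numbers above):

```python
# Sanity-check the X1-over-K1 gains quoted above, using the table's published figures.
results = {
    "GFXBench 3.0 Manhattan (fps)": (31.7, 63.6),   # (K1, X1)
    "3DMark 1.2 Unlimited (score)": (36688, 58448),
    "BaseMark X 1.1 Dunes (fps)":   (36.3, 56.9),
}
for name, (k1, x1) in results.items():
    gain = x1 / k1
    print(f"{name}: {gain:.2f}x K1 ({(gain - 1) * 100:.0f}% faster)")
# Manhattan lands at ~2.01x, 3DMark at ~1.59x, and BaseMark X at ~1.57x.
```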
Finally, for power testing NVIDIA had an X1 reference platform and an iPad Air 2 rigged to measure power consumption from the devices’ respective GPU power rails. The purpose of this test was to showcase that, thanks to X1’s energy optimizations, X1 is capable of delivering the same GPU performance as the A8X’s GPU while drawing significantly less power; in other words, that X1’s GPU is more efficient than A8X’s GX6850. To be clear, these are just GPU power measurements and not total platform power measurements, so they won’t account for CPU differences (e.g. A57 versus Enhanced Cyclone) or the power impact of LPDDR4.
Top: Tegra X1 Reference Platform. Bottom: iPad Air 2
For power testing, NVIDIA ran Manhattan 1080p (offscreen) with X1’s GPU underclocked to match the performance of the A8X at roughly 33fps. The average power consumption (in watts) measured for the X1 and A8X follows.
NVIDIA’s tools show the X1’s GPU averaging 1.51W over the run of Manhattan, while the A8X’s GPU averages 2.67W, over a watt more for otherwise equal performance. This test is especially notable since both SoCs are manufactured on the same TSMC 20nm SoC process, which means the gap in power draw at equal performance comes down to the energy efficiency of the GPU architectures rather than any process advantage.
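Working those numbers through, the implied efficiency gap is sizable; a minimal sketch using the figures as reported by NVIDIA’s tools:

```python
# Performance-per-watt implied by NVIDIA's equal-performance power test.
fps = 33.0              # both GPUs tuned to roughly this Manhattan 1080p score
x1_watts, a8x_watts = 1.51, 2.67

print(f"X1:  {fps / x1_watts:.1f} fps/W")   # ~21.9 fps/W
print(f"A8X: {fps / a8x_watts:.1f} fps/W")  # ~12.4 fps/W
# At equal performance, A8X draws ~1.77x the GPU power of X1.
print(f"A8X/X1 power ratio: {a8x_watts / x1_watts:.2f}x")
```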
There are a number of other variables we’ll ultimately need to take into account here, including clockspeeds, relative die area of the GPU, and total platform power consumption. But assuming NVIDIA’s numbers hold up in final devices, X1’s GPU is looking very good out of the gate – at least when tuned for power over performance.
194 Comments
KateC - Thursday, January 8, 2015 - link
Regarding the comment on AMD having FP16 support in GCN 1.2: is this full-featured support, e.g., FP16 at double the FP32 rate?
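For context on the question: FP16 halves storage and bandwidth relative to FP32 at some cost in precision, while "double rate" refers to hardware that can issue two FP16 operations in place of one FP32 operation, as Tegra X1's GPU can. Below is a minimal NumPy sketch of the format tradeoff only; the throughput side is a hardware property that host code like this cannot demonstrate:

```python
import numpy as np

# FP16 vs FP32: half the storage/bandwidth, coarser precision.
a32 = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
a16 = a32.astype(np.float16)

print(a32.nbytes, a16.nbytes)  # 4000000 vs 2000000 bytes: FP16 halves memory traffic
err = np.abs(a32 - a16.astype(np.float32)).max()
print(err)  # worst-case rounding error on [0, 1] stays well under 1e-3
```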
Parablooper - Thursday, January 22, 2015 - link
Does anyone know if this will support 64-bit operating systems? I know for sure that the K1 only had up to 32-bit. I'm thinking of buying a Chromebook, but am torn between one with a low-end Intel processor for more productivity or an NVIDIA processor with at least some graphics capability.
Keermalec - Friday, April 17, 2015 - link
Nvidia should make a phone with an underclocked X1.

yhselp - Thursday, July 28, 2016 - link
Rereading this article after the report that Nintendo's NX - their new flagship console - would be powered by NVIDIA's Tegra is so enlightening. It's like reading a whole new preview. Many things start making sense in this new context:
- HDMI 2.0 and 4K60 support;
- 16 ROPs;
- Aggressive clockspeed;
- Conservative rasterization and MFAA.
To quote the article: "It seems obvious that this would be a great SoC to put in a gaming tablet and a variety of other mobile devices, but it remains to be seen whether NVIDIA can get the design wins necessary to make this happen."
What a conclusion! And what a gaming tablet it would be. You couldn't have known how those words would ring today - over a year later. Talk about a design win. Awesome.
P.S. Please, do an article on the Nintendo NX reports.