NVIDIA's Fermi: Architected for Tesla, 3 Billion Transistors in 2010
by Anand Lal Shimpi on September 30, 2009 12:00 AM EST - Posted in GPUs
ECC Support
AMD's Radeon HD 5870 can detect errors on the memory bus, but it can't correct them. The register file, L1 cache, L2 cache and DRAM all have full ECC support in Fermi. This is one of those Tesla-specific features.
Many Tesla customers won't even talk to NVIDIA about moving their algorithms to GPUs unless NVIDIA can deliver ECC support. The scale of their installations is so large that ECC is absolutely necessary (or at least perceived to be).
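For deployments where ECC is a requirement, it can also be confirmed in software. Below is a minimal sketch of how a CUDA application might query whether ECC is active, assuming a CUDA toolkit recent enough to expose the ECCEnabled device property (it isn't part of the toolkits shipping as of this writing):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // ECCEnabled is 1 when the board is running with ECC on
        // (register file, L1, L2 and DRAM protected on Fermi-class parts).
        printf("Device %d (%s): ECC %s\n", dev, prop.name,
               prop.ECCEnabled ? "enabled" : "disabled");
    }
    return 0;
}
```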
Unified 64-bit Memory Addressing
In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.
Fermi unifies the address space so that there's only one instruction and the address of the memory is what determines where it's stored. The lowest bits are for local memory, the next set is for shared and then the remainder of the address space is global.
The unified address space is apparently necessary to enable C++ support for NVIDIA GPUs, which Fermi is designed to do.
The other big change to memory addressability is the size of the address space. G80 and GT200 had a 32-bit address space, but next year NVIDIA expects to see Tesla boards with over 4GB of GDDR5 on board. Fermi supports 64-bit addresses, although the chip can physically address 40 bits of memory, or 1TB. That should be enough for now.
Both the unified address space and 64-bit addressing are almost exclusively for the compute space at this point. Consumer graphics cards won't need more than 4GB of memory for at least another couple of years. These changes were painful for NVIDIA to implement and ultimately contributed to Fermi's delay, but they were necessary in NVIDIA's eyes.
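To make the difference concrete, here is a hedged sketch of what the unified address space allows: a single device function that takes an ordinary pointer and works whether that pointer refers to global or shared memory. The function and kernel names are illustrative, not from NVIDIA's documentation; on pre-Fermi hardware the compiler had to know the memory space at compile time to pick the right load instruction, so a generic helper like this was awkward or impossible:

```cuda
#include <cuda_runtime.h>

// One generic helper: the hardware resolves at run time which memory
// space the pointer refers to, so a single load instruction suffices.
__device__ float sum4(const float* p) {
    return p[0] + p[1] + p[2] + p[3];
}

__global__ void sumBoth(const float* globalIn, float* out) {
    __shared__ float tile[4];
    if (threadIdx.x < 4)
        tile[threadIdx.x] = globalIn[threadIdx.x] * 2.0f;
    __syncthreads();

    if (threadIdx.x == 0) {
        // The same function is called with a global pointer and a
        // shared pointer; before Fermi these would have required
        // different load instructions chosen at compile time.
        out[0] = sum4(globalIn);  // global memory
        out[1] = sum4(tile);      // shared memory
    }
}
```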
New ISA Changes Enable DX11, OpenCL and C++, Visual Studio Support
Now this is cool. NVIDIA is announcing Nexus (no, not the thing from Star Trek Generations), a Visual Studio plugin that enables hardware debugging of CUDA code. You can treat the GPU like a CPU: step into functions and inspect the state of the GPU, all within Visual Studio. This is a huge step forward for CUDA developers.
Nexus running in Visual Studio on a CUDA GPU
Simply enabling DX11 support is a big enough change for a GPU - AMD had to go through that with RV870. Fermi implements a wide set of changes to its ISA, primarily aimed at enabling C++ support. Virtual functions, new/delete and try/catch are all parts of C++ that are now enabled on Fermi.
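As a rough illustration of what that ISA work buys, the sketch below uses virtual dispatch and device-side new/delete inside a kernel. This is an assumption-laden example: the class names are invented, device-side new/delete requires a later CUDA toolkit than was current when this article ran, and try/catch is omitted since device-side exception handling isn't exposed through CUDA:

```cuda
#include <cstdio>

// A tiny class hierarchy exercising virtual dispatch and device-side
// new/delete -- the C++ constructs Fermi's ISA changes are meant to run.
class Shape {
public:
    __device__ virtual float area() const = 0;
    __device__ virtual ~Shape() {}
};

class Square : public Shape {
    float side;
public:
    __device__ Square(float s) : side(s) {}
    __device__ float area() const { return side * side; }
};

__global__ void virtualDemo(float* out) {
    Shape* s = new Square(3.0f);  // allocation on the device heap
    *out = s->area();             // virtual call resolved on the GPU
    delete s;
}
```

Compiled with something like nvcc -arch=sm_20, both the heap allocation and the virtual call execute entirely on the GPU.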
415 Comments
PorscheRacer - Thursday, October 1, 2009 - link
I have no clue what the red rooster thing implies, and I never understood why people called nVIDIA the green goblin. Until now. You, sir, have made it clear to me. They are called the green goblin because that's where the trolls come from. Like wow. Your partisan and righteous thinking has no merit, no basis except conjecture and criticism. Save a keyboard, chill out, and let's see if you can post anything in here without using the words nVIDIA, ATI, red rooster, green goblin, or anything in ALL CAPS. It's fine to be passionate about something. But to excessive extents that push everyone else away and leave people ashamed, discouraged and embarrassed; that's not how to win hearts and minds. I can already see you getting riled up over this post telling you to chill out....
SiliconDoc - Friday, October 2, 2009 - link
Hmmmm, that's very interesting. First you go into a pretend place where you assume green goblin is something "they call" nVIDIA, but just earlier you'd never seen it in print before in your life. Along with that little fib problem, you make the rest of the paragraph a whining attack. One might think you need to settle down and take your own medicine.
And speaking of advice, your next paragraph talks about what you did in your first that you claim no one should, so I guess you're exempt in your own mind.
kirillian - Thursday, October 1, 2009 - link
Y'all...seriously...leave the poor NVidia fanboy alone. His head is probably throbbing with the fact that he found his first website (other than HardOCP) that isn't extremely NVidia biased.
SiliconDoc - Friday, October 2, 2009 - link
Gee, I find it interesting that you know all about bias at other websites... So that says what again about here?
silverblue - Thursday, October 1, 2009 - link
The 5870 is but one single GPU. The 295 is two and costs more. The 4870X2/CF is also a case of two GPUs. A 5870X2 would annihilate everything out there right now, and guess what? 5870 CF does just that. If money is no object, that would be the current option, or 5850s in CF to cut down on power usage and a fair amount of the cost without substantially decreasing performance.
By stating "if someone wants to get their next-gen performance now", of course he's going to point in the direction of ATI as they are the only people with a DX11 part, and they currently hold the single GPU speed crown. This will not be the case in a few months, but for now, they do.
SiliconDoc - Friday, October 2, 2009 - link
I kinda doubt the 5870x2 blows away GTX295 quad, don't you?
--
Now you want to whine cost, too, but then excuse it for the 5870CF. LOL.
Another big fat riotous red rooster.
Really, you people love lies, and what's bad when it's nvidia is good when it's ati, you just exactly said it!
ROFLMAO
--
Should I go get a 295 quad setup review and show you?
--
How come you were wrong, whined I should settle down, then came back blowing lies again ?
There's no DX11 ready to speak of, so that's another pale and feckless attempt at the face save, after your excited, out of control, whipped up incorrect initial post, and this follow up fibber.
You need to settle down. "I want you banned"
Finally, you try to pretend you're not full of it, with your spewing caveat of prediction, "this will not be the case in a few months" - LOL
It's NOT the case NOW, but in a few months, it sure looks like it might BE THE CASE NO MATTER WHAT, unless of course ati launches the 5870x2 along with nvidia's SC GT300, which for all I know could happen.
So, even in that, you are NOT correct to any certainty, are you...
LOL
Calm down, and think FIRST, then start on your rampage without lying.
silverblue - Friday, October 2, 2009 - link
My GOD... you're a retard of the highest order. Why would I want to compare a dual GPU setup with an 8 GPU setup? What numpty would do that when it would logically be far faster? Even a quad 5870 setup wouldn't beat a quad 295 setup, and you know what? WE KNOW! 8 cores versus 4 is no contest. Core for core, RV870 is noticeably faster than the GT200 series, but you're the only person attempting to compare a single GPU card to a dual GPU card and saying the single GPU card sucks because it doesn't win.
And where did I say "I want you banned"? As someone once said, "lay off the crack".
SiliconDoc - Friday, October 2, 2009 - link
Aren't you the one who claimed only ati for the next gen performance? Well, you really blew it, and no face save is possible. A single NVIDIA card beats the best single ati card. PERIOD.
It's true right now, and may or may not change within two months.
PERIOD.
silverblue - Friday, October 2, 2009 - link
No, I said that ATI currently has the single GPU crown. Not card - GPU. In a couple of months, ATI may have the 5870X2 out, and that WILL send the 295 the way of the dodo if it's priced correctly. No face saving necessary on my part.
Zaitsev - Wednesday, September 30, 2009 - link
^^ LOL. I don't see what all the bickering is about. If you're willing to wait a few more months, then you can buy a faster card. If you want to buy now, there are also some nice options available. Currently there are five brands of 5870s and one 5850 at the egg.