ATI Radeon HD 4890 vs. NVIDIA GeForce GTX 275
by Anand Lal Shimpi & Derek Wilson on April 2, 2009 12:00 AM EST - Posted in
- GPUs
The Widespread Support Fallacy
NVIDIA acquired Ageia, the company that wanted to sell you another card to put in your system to accelerate game physics - the PPU. That idea didn't go over too well. For starters, no one wanted another *PU in their machine. And second, there were no compelling titles that required it. At best we saw mediocre games with mildly interesting physics support, or decent games with uninteresting physics enhancements.
Ageia's true strength wasn't its PPU chip design; many companies could do that. What Ageia did that was quite smart was acquire an up-and-coming game physics API, polish it up, and give it away to developers for free. That physics engine was called PhysX.
Developers can use PhysX in their games for free. There are no strings attached, no licensing fees, nothing. If a developer wants support there are fees, of course, but it's a great way of cutting down development costs. The physics engine in a game is responsible for modeling all Newtonian forces within the game; the engine determines how objects collide, how gravity works, and so on.
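To make that concrete, here is a minimal sketch of the kind of per-frame update a physics engine performs. The names and numbers are hypothetical, purely for illustration - this is not PhysX's actual API, just a toy semi-implicit Euler integrator that applies gravity and resolves a collision with a ground plane:

#include <cstdio>

// Hypothetical rigid body, for illustration only; a real engine tracks
// full 3D position, orientation, and contact information.
struct Body {
    float y;           // height above the ground plane (m)
    float vy;          // vertical velocity (m/s)
    float restitution; // how much energy a bounce keeps (0..1)
};

// One simulation tick: apply gravity, integrate, resolve ground contact.
void step(Body &b, float dt) {
    const float g = -9.81f;  // gravitational acceleration (m/s^2)
    b.vy += g * dt;          // semi-implicit Euler: update velocity first...
    b.y  += b.vy * dt;       // ...then position
    if (b.y < 0.0f) {        // fell through the ground plane: collision
        b.y  = 0.0f;
        b.vy = -b.vy * b.restitution; // reflect and damp the velocity
    }
}

int main() {
    Body ball = {10.0f, 0.0f, 0.6f};          // drop a ball from 10 m
    for (int frame = 0; frame < 300; ++frame) // 5 seconds at 60 Hz
        step(ball, 1.0f / 60.0f);
    printf("height after 5 seconds: %.2f m\n", ball.y);
    return 0;
}

Every game with physics runs some version of this loop on the CPU; the engine's job is simply to do it for thousands of objects, with real collision geometry, every frame.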
If developers wanted to, they could enable PPU-accelerated physics in their games and do some cool effects. Very few wanted to, because there was no real installed base of Ageia cards and Ageia wasn't large enough to convince the major players to do anything.
PhysX, being free, was of course widely adopted. When NVIDIA purchased Ageia, what it really bought was the PhysX business.
NVIDIA continued offering PhysX for free, but it killed off the PPU business. Instead, NVIDIA worked to port PhysX to CUDA so that it could run on its GPUs. The same catch-22 as before still exists: developers don't have to include GPU-accelerated physics, and most don't, because they don't want to alienate their non-NVIDIA users. It's all about hitting the largest audience, and since not everyone can run GPU-accelerated PhysX, most developers don't use that aspect of the engine.
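The appeal of the CUDA port is easy to see in code. Effects physics - debris, cloth, fluid particles - means thousands of independent bodies per frame, which maps naturally onto a GPU's thread grid. Below is an illustrative CUDA version of the toy integrator from earlier, one thread per body; this is my own sketch of the general idea, not NVIDIA's actual PhysX-on-CUDA implementation:

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Same toy integration as the CPU sketch above, but one GPU thread per body.
__global__ void stepBodies(float *y, float *vy, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vy[i] += -9.81f * dt;   // gravity
    y[i]  += vy[i] * dt;    // integrate position
    if (y[i] < 0.0f) {      // bounce off the ground plane
        y[i]  = 0.0f;
        vy[i] = -vy[i] * 0.6f;
    }
}

int main() {
    const int n = 65536;    // tens of thousands of debris particles
    std::vector<float> y(n, 10.0f), vy(n, 0.0f);

    float *d_y, *d_vy;
    cudaMalloc((void**)&d_y,  n * sizeof(float));
    cudaMalloc((void**)&d_vy, n * sizeof(float));
    cudaMemcpy(d_y,  y.data(),  n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vy, vy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    for (int frame = 0; frame < 300; ++frame)   // 5 seconds at 60 Hz
        stepBodies<<<blocks, threads>>>(d_y, d_vy, n, 1.0f / 60.0f);

    cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("body 0 height after 5 seconds: %.2f m\n", y[0]);

    cudaFree(d_y);
    cudaFree(d_vy);
    return 0;
}

On an ATI card there is no CUDA runtime, so work like this has to fall back to the CPU - which is exactly why NVIDIA's PhysX benchmarks look so lopsided.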
Then we have NVIDIA publishing slides like this:
Indeed, PhysX is one of the world’s most popular physics APIs - but that does not mean that developers choose to accelerate PhysX on the GPU. Most don’t. The next slide paints a clearer picture:
These are the biggest titles NVIDIA has with GPU-accelerated PhysX support today. That's 12 titles, three of which are big ones; as for most of the rest, well, I won't go there.
A free physics API is great, and all indicators point to PhysX being liked by developers.
The next several slides in NVIDIA's presentation go into detail about how GPU-accelerated PhysX is used in these titles and how poorly ATI performs when GPU-accelerated PhysX is enabled (because ATI can't run CUDA code on its GPUs, the GPU-friendly code must run on the CPU instead).
We normally hold manufacturers accountable for their performance claims; it was about time we did the same for these other claims. Shall we?
Our goal was simple: we wanted to know whether the GPU-accelerated PhysX effects in these titles were useful. And if they were, would they be enough to make us pick an NVIDIA GPU over an ATI one if the ATI GPU was faster?
To accomplish this I had to bring in an outsider - someone who hadn't been subjected to the same NVIDIA marketing that Derek and I had. I wanted someone impartial.
Meet Ben:
I met Ben in middle school and we’ve been friends ever since. He’s a gamer of the truest form. He generally just wants to come over to my office and game while I work. The relationship is rarely harmful; I have access to lots of hardware (both PC and console) and games, and he likes to play them. He plays while I work and isn't very distracting (except when he's hungry).
These past few weeks I've been far too busy for even Ben's quiet gaming in the office. First there were SSDs, then GDC, and then this article. But when I needed someone to play a bunch of games and tell me if he noticed GPU-accelerated PhysX, Ben was the right guy for the job.
I grabbed a Dell Studio XPS I'd been working on for a while. It's a good little system - the first sub-$1000 Core i7 machine, in fact ($799 gets you a Core i7-920 and 3GB of memory). It performs similarly to my Core i7 testbeds, so if you're looking to jump on the i7 bandwagon but don't feel like building a machine, the Dell is an alternative.
I also set up its bigger brother, the Studio XPS 435. Personally I prefer this machine; it's larger than the regular Studio XPS, albeit more expensive. The larger chassis makes working inside the case and upgrading the graphics card a bit more pleasant.
My machine of choice; I couldn't let Ben have the faster computer.
Both of these systems shipped with ATI graphics; obviously that wasn't going to work. I decided to pick midrange cards to work with: a GeForce GTS 250 and a GeForce GTX 260.
294 Comments
joeysfb - Wednesday, April 15, 2009 - link
Hahaha! An eye for an eye. Guess the tables have turned. AMD used to be in a needy position... taking it from left, right, center, and back from players like NVIDIA.

joeysfb - Monday, April 13, 2009 - link
Good job AnandTech!! Really like your behind-the-scenes commentary.

araczynski - Saturday, April 11, 2009 - link
So far my overclocked 4850 CrossFire setup has been keeping me happy. I'll come back into the market when the 5000 series rolls out and I upgrade my rig in general.

ChemicalAffinity - Thursday, April 9, 2009 - link
Can someone ban this guy? I mean seriously.

SiliconDoc - Friday, April 24, 2009 - link
Are you on drugs? Is that why you don't understand or have a single counterpoint? Come on, come up with at least one that refutes my endless stream of corrections to the lies you've lived with for months.
No?
Ban the truth instead?
Yeah, that wouldn't help you.
Ananke - Thursday, April 9, 2009 - link
I had a 4850, a 4870 1GB, a 260-216, and an overclocked 280. I ran them at 1920x1200 on a 24" monitor - Crysis and Warhead, Far Cry 2, GTA4, STALKER... whatever else you can imagine. My experience:
The Radeons are hotter and noisier. You HAVE to increase the fan speed, and it is audible. Image quality in games is very good, though. Crysis especially looked better with the Radeons - bullet tracing and sunshine effects were spectacular. The GTX 280 on max everything in Crysis was also very beautiful. However, that card gets HOT, so you would be better off with a 285. I didn't like the image quality of the Radeons in movies, but maybe my settings were not good. The 4850 is definitely not worth the money - too hot for my taste.
So, a 4870 1GB or 4890 is definitely worth buying; performance is on par with the 285 at 1920x1200 - Crysis was 27-41 FPS with a standard Radeon 4870, and 31-45 with the 280 OC at 615MHz.
IF the 285's price were $250, that would be the best buy. If it costs more, it is NOT worth the money, unless you really want the bigger and quieter card. Performance-wise it is the same as the Radeon 4890, which now costs $229 and can be overclocked. I did overclock the GTX 280 and 285, which didn't show any performance change; I guess they are constrained by memory bandwidth?
So, honestly, for the money the Radeon 4890 at $229 is the better choice. IF you can find a 4870 1GB for $169, it is worth considering too. The 896MB on the NVIDIA cards is a constraint; I would not recommend anything but the 285, but that is expensive.
Truenofan - Tuesday, April 7, 2009 - link
Whoops, I meant the Arctic Cooling S1 Rev 2.

Truenofan - Tuesday, April 7, 2009 - link
I don't get what's going on with SiliconDoc, but I enjoy my 4870. It works best at my resolution (1920x1200), and with the Arctic Cooling S1 it cost less than the 275. It runs very chilly (45C idle, 57C load, overclocked). I don't need PhysX or an application for video encoding that costs extra, adding to the total cost of the video card. Gaming is its sole purpose for me, and it does that extremely well. $180 plus $80 for the video applications costs more than what my 4870 ran me, and at stock speeds it completely outclasses it, let alone a 275 (260) or 280 (270), which mine still cost less than. Now you can get a 4870 for what the 260 runs. Where's the logic in that? Just so you can run a few games with PhysX that aren't even that good? To do some video encoding? I'll stick with my lower-cost 4870.
SiliconDoc - Tuesday, April 7, 2009 - link
I see, now your 4870 completely outclasses even the 280. LOL. Your 4870 is matched with the 260, not the 275, and not the 280.
You don't have anything but another set of lies, so it's not about you determining "my problem", or you "not knowing what it is"; it is rather the obvious lies required for you to "express your opinion". Maybe you should read my responses over the past 20-some pages and tell me why any of the 20-plus solid points that destroy the lies of the reds are incorrect? You think you might try it? I mean, we have a lot more than just YOUR OPINION, false as you presented it, to determine what is correct. For instance:
http://www.fudzilla.com/index.php?option=com_conte...
Now, not even your 4870 overclocked XXX can beat the GTX 260 GLH. In your MIND, though, it does, huh...? lol
Too bad for you. I, unlike you, know what your problem is, and that is exactly what should bother you about me.