Intel's Larrabee Architecture Disclosure: A Calculated First Move
by Anand Lal Shimpi & Derek Wilson on August 4, 2008 12:00 AM EST - Posted in GPUs
The Awesome Potential of Fully Programmable Graphics
Certainly we can't judge the applicability and impact Larrabee will have until we see how it handles real-world applications. But we absolutely cannot write off a giant like Intel when they throw their chips into the pot. Some of the current graphics hardware establishment have tried to suggest to us that Intel is not in touch with the current development community, and that the only reason some developers are excited about the extensive low-level programmability of Larrabee is that they are nostalgic for the old days of graphics programming, when it was all about the software renderer.
I don't think anyone is under the illusion that DirectX and OpenGL performance are irrelevant for Larrabee. If Intel fails to deliver equivalent or greater price/performance in games and applications that use these APIs, then no matter how well the hardware could serve any software engine, it will fail. But the potential to customize every part of the rendering pipeline, and the ability to support a software renderer at the same level of performance as if the hardware had been customized for it, adds a level of value for the development community that will absolutely blow away anything NVIDIA or AMD can currently offer (or will offer for the foreseeable future).
Re-opening the door for Tim Sweeney, John Carmack, Michael Abrash, and other pioneers and visionaries in the field of 3D graphics to once again take a piece of hardware that offers the kind of data-parallel speed heretofore limited to the GPU and literally do anything they want with it is something to be excited about. Being constrained far less by the physical design of the hardware, and once again limited only by the performance of any given segment of code, could help speed up the transition of techniques from SIGGRAPH to games. Larrabee could help create a new wellspring of research, experimentation and techniques for real-time graphics, the likes of which have not been seen since the mid-to-late 1990s.
We have absolutely been seeing the current graphics hardware giants move toward more flexibility and programmability. But if Intel is able to effectively leap-frog that slow, DX-version-by-DX-version trudge toward true general purpose programmability, we will see the end of an era in which games are feature-limited by hardware. No longer will we need new hardware to handle a new DX version with new techniques and effects: we would only need a driver update to add support for the new API. The only obstacle to running games built on future APIs will be performance, and the only reason to upgrade will be speed. It will be a different world, altogether different than anything we've known or experienced before, yet incredibly similar to the roots from which the industry was born.
It is an exciting time to be in the field of computer graphics.
101 Comments
skochnet - Monday, August 4, 2008 - link
I have been a daily reader of AnandTech and a computer tech enthusiast for the past 6 years. I found this article so interesting and well written that I felt compelled to sign up for an account today to post my appreciation for it. The depth of this article was fascinating. This could be a leap-ahead technology which would change and potentially restructure the industry as it stands now if successful... or it could not make the grade, and the industry players will all continue the tug-of-war. I especially enjoy your speculation. Your group is no doubt privy to a unique vantage point that makes these thoughts even more valuable and interesting. Thank you.

DerekWilson - Monday, August 4, 2008 - link
We really appreciate the kind words. I'm glad you enjoyed the article, and this is definitely an exciting development that I think -- whether it succeeds or fails -- we will all have our eyes on.
iocedmyself - Monday, August 4, 2008 - link
Actually, Crysis is the point, or a good example anyway, since Intel has been touting that Larrabee will be something like up to 2.5-5 times faster than traditional present-day GPU solutions. I didn't think they had been working on this for 4 years, and while it may have seemed like a good idea back then, at a time when they hadn't launched a worthwhile product in 4 or 5 years, they just aren't that innovative.

HP and Intel teamed up in '94 to develop the original IA-64, spent billions in development, launched it 3 years late, and made platform sales in the triple digits... total.
During that time they also started developing Timna in '97, which was supposed to target the low-power, sub-$600 desktop system bracket and be their "first" CPU with an IMC. To achieve this they designed the IMC for use with Rambus memory... they pushed the launch back several times to nearly 2 years after projected, during which they redesigned the IMC for use with SDRAM, though it ended up having a pesky fatal design flaw and they scrapped it just before the launch of...
Pentium 4! Mmm... NetBurst, tastes like cowpie. Intel originally quoted NetBurst as being able to scale to 10 GHz, but they were close, weren't they? 3.8 GHz is close... isn't it? It's almost half-way... besides, it doubled as a hotplate!
It has taken them 5 years to reach the point of essentially copying AMD's IMC design, launching it on a chip whose x86-64 code they license from... AMD. Intel may do great work when it comes to designing new motherboard and networking standards (SATA, PCIe, and Ethernet), but a high-performance 32-bit chip launched months before the consumer 64-bit OS isn't an achievement in my view, so much as a reminder of the success they enjoyed in the P4 days.
From a performance standpoint, it just doesn't seem realistic. The HD 4870 can surpass 1 teraflop of computation on a single 55nm die, whereas Intel's $1000+ highest-performing quad-core chips clock in at 38-51 gigaflops, or 9.5-12.75 gigaflops per core.
Their 80-core Terascale, which used a 65nm fab if I'm not mistaken, hit 1 teraflop clocked at 3.2 GHz (12.5 GFLOPS/core) and 1.8 teraflops when clocked up to 5.1 GHz (22.5 GFLOPS/core).
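For reference, the per-core figures above are just the quoted totals divided evenly across cores; a quick sketch of that arithmetic, using the numbers as quoted in this thread rather than official specs:

```python
# Back-of-the-envelope per-core throughput from the totals quoted above.
# The figures are the ones cited in this comment, not official specs.

def gflops_per_core(total_gflops: float, cores: int) -> float:
    """Divide an aggregate throughput figure evenly across cores."""
    return total_gflops / cores

print(gflops_per_core(51.0, 4))      # Intel quad-core: 12.75 GFLOPS/core
print(gflops_per_core(1000.0, 80))   # Terascale @ 3.2 GHz: 12.5 GFLOPS/core
print(gflops_per_core(1800.0, 80))   # Terascale @ 5.1 GHz: 22.5 GFLOPS/core
```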
ltcommanderdata - Monday, August 4, 2008 - link
Perhaps it's not so, but it seems to me that Larrabee is quite similar to the SPEs in Cell: simplified cores based on a common ISA (x86/PPC), optimized for floating point/vector ops. It might be interesting to compare Cell's and Larrabee's architectures, and eventually performance, when the products are released. I believe Toshiba is already incorporating Cell as the SpursEngine accelerator in notebooks, so I can see it running into Larrabee. Might be a return of the x86/PPC debates.

Lux88 - Monday, August 4, 2008 - link
I'm glad Intel strongly believes they can provide a solution. Novel approaches are always welcome :)! But there are a couple of things that keep me from becoming overly optimistic:
1. Itanium also relied (relies) on very smart compilers to produce the optimal machine code. Didn't quite happen.
2. Dynamic compilers, i.e. translating DirectX to Larrabee on the fly, can't be very smart because this translation has to run in "real time".
3. Intel seems to be heavily touting raytracing. Again, I'm glad they are doing this. But it seems to confirm that they know they can't have a stellar win by just rendering "same old DirectX" through additional layers.
In addition to the hardware, they have to juggle drivers, compilers, and libraries. They also have to come up with a software renderer, and support developers in coding great apps on Larrabee (I bet Carmack can't wait to try out his octree renderer). Quite a number of moving targets to nail down...
DerekWilson - Monday, August 4, 2008 - link
1) Itanium does do certain things very, very well -- it's just not the future of the desktop.

2) Note that we did not use the word emulation or translation at any point -- Intel is NOT doing this with DirectX. It's just like any other API: the DirectX functions that must be implemented in a driver will be implemented in code written for Larrabee as opposed to code written for GT200 or RV770... Imagine some other API, or even just a DLL with some functions in it -- it's all the same no matter what hardware it's written for. In some cases we'll see the actual implementation just set registers and issue a single command to the hardware to get something done. In Larrabee's case, functions to perform any complex operation will have to be called, but they are essentially doing the same thing.
When I first got started on OpenGL, it was running in software. Hardware companies came along and implemented OpenGL functions in a way that used their hardware. These APIs are hardware independent (essentially).
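To make that concrete, here is a rough conceptual sketch of the same API entry point backed by register writes and hardware commands on a traditional GPU versus software routines on a Larrabee-style part. The class and method names are hypothetical illustrations, not any real driver API:

```python
# Conceptual sketch only: the same API call, "implemented in a driver" two different ways.

class FixedFunctionGPUDriver:
    """On a traditional GPU, the driver mostly sets registers and issues commands."""
    def draw_indexed(self, index_count):
        self.write_register("PRIMITIVE_TYPE", "TRIANGLE_LIST")  # program hardware state
        self.submit_command("DRAW_INDEXED", index_count)        # the hardware does the rest

    def write_register(self, name, value): ...
    def submit_command(self, opcode, arg): ...

class LarrabeeStyleDriver:
    """On a many-core x86 part, the same entry point calls software routines instead."""
    def draw_indexed(self, index_count):
        triangles = self.run_vertex_shader(index_count)  # runs on the x86 vector cores
        fragments = self.rasterize(triangles)            # software rasterizer
        self.run_pixel_shader(fragments)

    def run_vertex_shader(self, index_count): ...
    def rasterize(self, triangles): ...
    def run_pixel_shader(self, fragments): ...

# Either driver satisfies the same API call; the application never sees the difference.
```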
3) The raytracing talk is, to a degree, posturing. Raytracing scales well on traditional CPUs; rasterizers fit well on wide vector hardware. Larrabee happens to be both ... so ... I think they are interested in the long term. Honestly, when hybrid renderers come along that combine raytracing for specific effects inside a rasterizer, we'll see some really cool things. I think this is where Intel is going.
NVIDIA even showed off some hybrid tech that could run on their hardware.
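For illustration only, a toy sketch of that hybrid idea: rasterization decides what is visible, and rays are traced only for pixels that want a specific effect. Every name and number below is made up, not taken from any real renderer:

```python
# Toy hybrid sketch: rasterize first, then trace rays only where an effect needs them.
from dataclasses import dataclass

@dataclass
class Pixel:
    color: float
    reflective: bool

def rasterize(width: int) -> list[Pixel]:
    # Stand-in for the rasterization pass: primary visibility per pixel.
    return [Pixel(color=0.5, reflective=(x % 4 == 0)) for x in range(width)]

def trace_reflection(pixel: Pixel) -> float:
    # Stand-in for a secondary ray; a real tracer would walk an acceleration structure.
    return 0.25

def render_scanline(width: int) -> list[float]:
    pixels = rasterize(width)
    for p in pixels:
        if p.reflective:                 # rays only where the effect is wanted
            p.color += trace_reflection(p)
    return [p.color for p in pixels]

print(render_scanline(8))   # -> [0.75, 0.5, 0.5, 0.5, 0.75, 0.5, 0.5, 0.5]
```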
...
...
Intel does have a lot of hurdles to get past, and that is definitely worth pointing out.
mars777 - Monday, August 4, 2008 - link
Just one curiosity: a predicted 64-core Larrabee would contain 64x32KB of L1 cache and 64x256KB of L2 cache.
- 2048 KB of L1 + 16384 KB of L2 -
Given a checkerboard configuration on silicon, this leads to an abnormal die size with too many parallel lines, where cache coherency will be impossible; or, given a 4/8 block configuration, it leads to slower L2 cache (but that would mean these are not just cores stuck onto silicon but rather a custom core, which defeats the purpose of the project and makes this a pretty expensive solution).
IMHO Larrabee will probably work out but will be nothing to cheer about: probably a pushed-up product that will eventually die out slowly (Itanium...).
DerekWilson - Monday, August 4, 2008 - link
Itanium has what, 24MB of on-die cache? A large cache is not unreasonable for something like this -- but you are forgetting register space, and the fact that the L1 has both 32K data and 32K instruction (so 64 cores would be 4MB of L1). The L2 cache is segmented so that each core can only directly access 256KB. The arrangement can be quite flexible because of this. Cache coherency is maintained through the ring bus: if one core needs data being used by another core in the L2, it goes through the ring. At least that's my understanding.
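The arithmetic behind those totals, using the per-core sizes quoted above (a quick sketch, nothing more):

```python
# Cache totals for a hypothetical 64-core part, from the per-core numbers quoted above:
# 32 KB instruction + 32 KB data L1, plus a 256 KB L2 slice, per core.

cores = 64
l1_kb_per_core = 32 + 32   # I-cache + D-cache
l2_kb_per_core = 256       # each core's local L2 slice

print(cores * l1_kb_per_core / 1024, "MB total L1")   # 4.0 MB
print(cores * l2_kb_per_core / 1024, "MB total L2")   # 16.0 MB
```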
I apologize if we didn't do a good enough job in the article, but this isn't just a solution where Intel wants to drop stock cores on a die -- everything is custom, from the scalar and vector processors up to the internal memory bus and the added fixed-function logic.
The project has been in development for 4 years and is not meant to be cheap -- Intel is putting a lot into it.
By the way -- I still think 32 cores is the sweet spot for launch based on the data Intel provided; I don't think they'll target a larger size off the bat.
Griswold - Monday, August 4, 2008 - link
One of the few comments here that actually make sense.

FujiT - Monday, August 4, 2008 - link
Some of you just don't get it. It's not about whether or not it can play Crysis at 100 FPS, and it's not as much about whether it can compete with AMD/nVidia (although that's important too).
I see this chip as the beginning of a new revolution in computing. It reminds me a lot of the Cell processor (although I don't know that much about architecture), where a smarter CPU tells the dumber CPUs what to do. The ability to have a many-core CPU with a mixture of really smart cores and dumber, but FP-optimized, cores will really make stuff like rendering a lot faster on a CPU, and would take programs such as F@H to the next level. The added perk is the fact that it's all x86, as Anand pointed out.