Monday, July 27, 2009

On the precipice of battle, DX11 approaches

Sheesh, last time I complained it had been ten days since the previous rambling. Now here I am, damn near a month later, finally updating again.

It's almost August. Windows 7 has gone gold. In under three months, on October 22 of this year, it'll release to the hungry masses, bringing with it the next DirectX generation, DirectX 11. To correspond with the new API, new graphics card generations will be released, as has always been the case. Since Intel is still at least a year away from a hard launch of Larrabee, that leaves two graphics card companies to think about: AMD and NVIDIA. It has been a long time since the last generation first launched. The GPU designers, the public, and the press are all absolutely aching for the next battle.

Already it seems that NVIDIA is lagging behind. AMD has already shown off working DX11 silicon at this year's Computex, and the rumor mill is talking up the problems and delays NVIDIA is facing with its own hardware. The trouble seems to revolve around the new 40nm process, and while both companies have already launched 40nm hardware, it's a whole other matter when dealing with flagship products, where transistor budgets are stretched and clock speeds matter a great deal. While some are saying NVIDIA will have to push the launch into the first quarter of next year, others, particularly Fudzilla, are holding steadfast to the idea that it'll make it out by the end of this year.

The latter Fudzilla article does mention the possibility, however, that we could be seeing something more akin to a paper launch, or at least a launch in very limited quantities. This makes sense, as some are saying AMD could have their DX11 parts out over a month prior to Windows 7, and NVIDIA doesn't want to lose any mindshare in the upcoming fight. Both companies know that, as the first new generation in over a year, and one heralded by the arrival of a new OS, this launch is extremely important for setting the tone for the rest of the generation.

NVIDIA in particular knows how high the stakes are. Last year they were beaten on price and margins and, as much as they tried to downplay it, on features too with DX10.1. NVIDIA eventually backtracked by adding support for it with their first 40nm cards, arguably the first mainstream versions of their updated GT200 architecture. But everyone recognized that raw performance still went to them, and even if history repeats and the GT300 is larger, hotter, and hungrier for power than the competition, it'll still top all the benchmarks at the very least. I do think they'll play things more conservatively on the price front this time, though, because the price-drop faux pas they committed two weeks into the launch of the GTX 200 series generated bad press and sour feelings from both partners and consumers.

Fudzilla goes on to say in a later article that the RV870, AMD's high-end DX11 chip, will simply be an update of the RV770. That same article points out that the GT300 is expected to be a radical change, one that some even call the first "cGPU". That got me thinking. Even if AMD makes it to market first, NVIDIA would need only release details of their upcoming chip to motivate people to wait. Obviously, to the enthusiast crowd a totally new architecture is a lot more interesting than a simple update of a preexisting one. The last thing people are craving right now is a rehash of the same old, same old. NVIDIA is especially guilty of rebranding, and while AMD creates genuinely new GPUs to fit different price segments, their architecture hasn't changed much from the original R600 of some two years ago, which itself was built on the then-well-documented Xenos GPU used in the Xbox 360, which in turn borrowed a great deal from previous architectures all the way back to the R300, itself largely the work of the ArtX team ATI had acquired beforehand. In fact I'm not sure ATI-now-AMD has ever done anything truly, fundamentally new with their graphics architectures on their own. OK, there was the R200, I'll give them that one.

The GT300 could very well have some compelling advantages that make enthusiasts want to wait for it, as few others are really eager to upgrade: current cards can already play the most demanding stuff available on the PC now and in the foreseeable future. Aside from future Crysis installments I'm not sure there really is anything else on the horizon that'll push the envelope, so the excitement behind these cards is likely generated by a very small niche in the market, a shrunken niche compared to an age when graphical leaps were as easy to spot as the difference between night and day. The MIMD stuff is probably still only really interesting to developers, but it matters more coming from NVIDIA than it would from anyone else, because their CUDA interface, proprietary as it is, is the current king of the market. Comparisons between it and AMD's Stream initiative are often extremely lopsided, and AMD doesn't look all that aggressive about fixing that. Until OpenCL takes off, an event whose timing is anyone's guess, NVIDIA has the only real answer, and as I keep buying more seasons of favorite television shows and ripping them to my computer, a fast general-purpose GPU doing that work, rather than my CPU, only becomes more appealing.
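For the curious, here's roughly what that general-purpose angle looks like through CUDA. This is just a minimal toy sketch of my own, not anything pulled from a real transcoder: a made-up kernel that brightens a frame's worth of 8-bit luma samples on the GPU instead of the CPU.

    // Toy CUDA example: scale a frame's worth of 8-bit luma samples on the GPU.
    // Purely illustrative -- real video encoders are enormously more involved.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void scale_luma(unsigned char *luma, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per sample
        if (i < n) {
            float v = luma[i] * gain;
            luma[i] = (unsigned char)(v > 255.0f ? 255.0f : v);
        }
    }

    int main(void)
    {
        const int n = 1920 * 1080;              // one 1080p frame of luma
        unsigned char *d_luma;
        cudaMalloc((void **)&d_luma, n);
        cudaMemset(d_luma, 128, n);             // stand-in for a decoded frame

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_luma<<<blocks, threads>>>(d_luma, n, 1.1f);
        cudaThreadSynchronize();                // wait for the kernel to finish

        cudaFree(d_luma);
        printf("kernel done\n");
        return 0;
    }

The point isn't the code itself so much as the fact that tens of thousands of those threads run at once, which is exactly why video work maps so well onto these chips.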

Congratulations are in order for AMD, regardless. Rumors aside, it seems pretty evident that they will in fact beat NVIDIA to market with a new DirectX generation for the first time since the original DX9 parts. No doubt their products will also be sleek designs, priced very affordably and performing admirably (though I'm betting no more than 50% faster than current offerings). NVIDIA will again shoot for the moon, having not really learned their lesson from last time, except perhaps that they shouldn't grow too content or overconfident with where they stand. Their GPU will be huge, it will be hot, and it'll consume a lot of power, just like last time and the time before that. But unlike last time, their execution already seems doomed to be worse than the previous generation's, since at least then they still beat AMD to market. The only thing they can hope for now is to hold all the technology cards without question, and it's only a matter of time before we find out whether the GT300 ends up being the greatest thing since the dawn of GPUs or falls flat on its face. I'm not counting on there being an option in between.