
Tuesday, August 31, 2010

Tech Updates

Before I commit the entire month of August to headphone rambling, here are a few things in the tech world I sort of skipped over since the last relevant post.

The biggest, I think, is AMD ditching the ATI name. A lot of computer websites (myself included) mostly referred to ATI products as AMD products anyway, but occasionally "ATI" would slip back into consciousness and find its way into our writing whenever the question arose of who actually makes these products. Well, AMD hopes to dispel any such confusion in the future by erasing any question about the source of its graphics solutions, emphasizing that its subdivisions have been dissolved into a unified, singular company. To some, this is a startling reminder of the magnitude of the event that took place four years ago now, one that carries some pretty heavy significance in the history of computing, and in the minds of those who follow the industry. Those folks might have foreseen this eventuality from the outset, but the sudden immediacy of its arrival is no less sobering, especially for fans of ATI in years gone by. It seems hard to imagine a Radeon without an ATI, but we must be quick to remind ourselves that it isn't the name that made the Radeon, but the company, the engineers and innovators within it, who kept an underdog alive and even gave their opposition a bloody nose from time to time. Now more than ever they're on top of their game, and with NVIDIA struggling to flesh out its product line, AMD's clockwork execution and unwavering onslaught have put them in a position few would have thought possible after the stumbling first couple of years following the acquisition.

Monday, April 12, 2010

My ATI

I was never really impressed by the 5870, even at launch. Set with the simple goal of doubling everything in the previous flagship, architecturally it left little for the technology enthusiast to scrutinize, taking what we already knew and simply expanding it. This resulted in performance bound by limitations instilled in the design of a graphics pipeline dating back several years, and not a particularly forward-looking one even then. Certainly some tweaks were made, mainly to do with caches, registers, and low-level things of that nature, but the overall layout remained, itself strictly a product of a general analysis of current-day graphics engines.

Falling short of all expectations was a meager 40% gain over the 4890, a number that has since ballooned to about a 58% average under a more modern selection of games and drivers. Clearly this was a card that was more a victim of the times than anything else, its ceiling imposed by the limited demands of contemporary software.
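As an aside, a figure like that 58% average is typically worked out from per-game frame-rate ratios; here's a minimal sketch of how such an average gain could be computed as a geometric mean (the game names and frame rates below are made up for illustration, not taken from any review):

```python
# Illustrative only: hypothetical frame rates, not real benchmark data.
# An "average gain" across a game suite is often the geometric mean of per-game speedups.
from math import prod

fps_4890 = {"Game A": 52.0, "Game B": 61.0, "Game C": 38.0}
fps_5870 = {"Game A": 80.0, "Game B": 95.0, "Game C": 62.0}

ratios = [fps_5870[g] / fps_4890[g] for g in fps_4890]   # per-game speedups
geomean = prod(ratios) ** (1 / len(ratios))               # geometric mean of the ratios
print(f"average gain: {(geomean - 1) * 100:.0f}%")        # ~58% with these made-up numbers
```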

Monday, April 5, 2010

Taiwan Semi-(bad)Conduct

NVIDIA has a problem. It's got a bad chip launch on its hands: the chip is hot and uses a lot of power, it's huge and yields suck, and it hasn't come close to hitting its performance targets. Worst of all, NVIDIA is stuck with it for a whole year before the option of a die shrink shows up to fix all its woes.

TSMC has been promising that 28nm would be good and ready by this year. But as reported by The Inq and in a very good news article by Xbit Labs, it might barely make it out by the end of this year, with GPU-suitable versions only ready by early next year. Also mentioned in the Xbit article (and here as well): 32nm has been scrapped, which leaves NVIDIA no alternative but to wait, and thus so must we.

Friday, April 2, 2010

FX Reincarnated?

I held off writing another blog post for a couple of months just waiting for Fermi. I didn't have much else to write about, and as far as tech-related stuff goes, Fermi was the biggest thing on my mind. I'd already said pretty much everything I'd ever want to say about it, and there wouldn't be anything new to comment on until it finally released. (I did think about writing a rambling about Bioshock 2, but I didn't have anything to say about it that hadn't already been said elsewhere.) Once it released, it was just a matter of setting aside the time to write this.

The last six months have been agonizing. I remember the day the 5870 came out. I had WoW running in the background and a webpage sitting there in front of me with a NewEgg listing of all the 5870s available, all of them $380. I had to will myself not to pull the trigger on one, because NVIDIA might have something much better right around the corner, and it might be stupid not to at least wait and see. In the tech world that's usually the best policy, but this is one of the few times I'm kicking myself for not indulging in some impulse buying.

Friday, January 8, 2010

Updates, updates...

So CES is this week. Palm launched new stuff. Intel launched new stuff. AMD launched new stuff. More importantly, NVIDIA launched new stuff.

NVIDIA has succeeded again in releasing another SoC that everyone wants. Hopefully they succeed this time at actually delivering it to everyone who wants one. Last time, Tegra's only notable design win was the Zune HD, a largely forgettable media player that...well, everyone largely forgot about shortly after its release. But that was all it had. Back at the start of this blog I gushed about the possibilities of its use in smartbooks, only to be disappointed at the close of the year by the absence of said smartbooks.

Turns out Mobinnova (and others) were simply waiting for Tegra 2, and for good reason. Packing two out-of-order, dual-issue, FPU-enabled ARM Cortex A9s, it beats the shit out of Tegra 1. Every demo of a tablet (I guess some are calling those "slate PCs" now) or smartbook using Tegra showed a sluggish-running system. The thing was simply not meant for full-sized computing endeavours, and let's face it, we're not even talking full-sized demands here. But Tegra 2 should have no problem handling any Firefox-browsing aspirations, and hell, even some HD media and gaming on the side.

Cooler still, it's built on 40nm. Usually side products like this--chipsets, bridge chips, NVIO, whatever else NVIDIA makes that's not a GPU--get second-class manufacturing, but not this time. I guess it's a sign NVIDIA's really taking this seriously, and if worst comes to worst, I think they're banking on supporting themselves on this little "side product" if at all possible. Apparently they see the mobile SoC market as being worth billions, overshadowing any other market they've ever been in, so it could very well be the next big thing for them. Well, the only other big thing for them aside from GPUs. For now let's hope Tegra 2 makes it into some kickass products we can actually buy.

Wednesday, November 18, 2009

Graphics is my favorite subject

So the HD 5970 is out. I like the name. No suffixes whatsoever. Simple, clean, elegant, gets the point across. There's a prefix, but that's just to denote the broader range of GPUs it's a part of. Better than NVIDIA's prefixes, which are really suffixes just moved to the front.

I read a few reviews, and obviously the thing pulverizes the competition, but the competition is a dead horse anyway. Something frustrated me about most of the reviews, though: the game selection. It can't be helped, I suppose. Almost all of the games benched are console ports (some with minor enhancements) that never had any problem running in the first place. What's the point of benching those games if absolutely no one would be basing their purchasing decision on them? Nobody's thinking "oh man, I need to start doing research for a card that can play Borderlands". Fucking anything can play Borderlands. $100 cards used to be shit, but now that'll buy you a 9800GT or equivalent. That's like nothing for a video card budget, and we're talking a card just a smidge under the flagship performance of 2006 (which would normally make it pretty old, but not anymore). So yeah, anything north of toilet paper will run Borderlands, or any of the COD games... or Far Cry 2, or L4D2, or Resident Evil 5, or Batman: Arkham Asylum, or whatever the hell else.

Saturday, October 3, 2009

NVIDIA's Fermi

Fermi is the name of the architecture behind NVIDIA's next-gen (DX11) cards. Fermi was announced ahead of any actual card announcements, or even just information about gaming features. All that was talked about, in fact, was Tesla-related shit, but despite that I've read all kinds of bullcrap from people jumping to ridiculous conclusions about it.

Once again, this was an announcement for Tesla. Companies looking to make large investments in new servers and HPC systems need a lot of lead time to make decisions, and NVIDIA was trying to appeal to them, as well as to investors and stockholders, by proving that Fermi is real and that there are some really cool things to look forward to about it. AMD released their shit, so now NVIDIA wants to make some sort of response, even if it isn't actual hardware. This was an announcement to gain mindshare, nothing more.

Monday, July 27, 2009

On the precipice of battle, DX11 approaches

Sheesh, last time I complained it had been ten days since the previous rambling. Now here I am damn near a month later before I'm updating again.

It's almost August. Windows 7 has gone gold. In under three months, on October 22 of this year, it'll release to the hungry masses, bringing with it the next DirectX generation, DirectX 11. To go along with the new API, new graphics card generations will be released, as has always been the case. Since Intel is still at least a year away from a hard launch of Larrabee, that leaves two graphics card companies to think about: AMD and NVIDIA. It has been a long time since the last generation first launched. The GPU designers, the public, and the press alike are absolutely aching for the next battle.

Saturday, June 6, 2009

NVIDIA cards are overpriced

I think most people who've been following the industry will take one look at that title and think "duh". I admit I haven't been keeping up with graphics card prices lately, because I've been trying hard not to shop for one, even though the 8800GTX I'm using right now doesn't do Crysis enough justice by my standards, which is why I've yet to beat the game or buy its standalone expansion.

What brought on this observation was an investigative article on Anandtech that came across my feeds the other day about GTX 275 overclocking. I felt it was a relevant article to read at the time, because shader vs. core scaling has been an interesting issue with NVIDIA cards since the G80, and also because I was bored. The article pointed to a preceding article that investigated the same topic with the 4890, and I decided to look over that one as well since I hadn't been keeping up with that card. I was surprised to find that the 4890 actually goes toe-to-toe with NVIDIA's current fastest single-GPU card, the GTX 285, in the most intensive games (read: the only games that matter to those shopping for a new GPU right now). On top of that, it seems you can overclock the 4890 further than the GTX 285, percentage-wise. So then I got curious about where the two stand price-wise.
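For what it's worth, that "percentage-wise" comparison is just simple arithmetic over the clocks; here's a minimal sketch, using placeholder clock speeds rather than the actual figures from those articles:

```python
# Rough sketch: overclock headroom as a percentage gain over the stock clock.
# The clock speeds below are placeholders, not the numbers from the Anandtech articles.
def oc_headroom(stock_mhz: float, oc_mhz: float) -> float:
    """Return the overclock as a percentage gain over the stock clock."""
    return (oc_mhz / stock_mhz - 1) * 100

print(f"HD 4890: {oc_headroom(850, 1000):.1f}%")  # ~17.6% with these example clocks
print(f"GTX 285: {oc_headroom(648, 702):.1f}%")   # ~8.3% with these example clocks
```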