HEXUS Forums :: 14 Comments

Posted by TiG - Thu 13 Oct 2005 22:16
Just confused by the HEXUS custom demo for The Chronicles of Riddick - it was worse with the 30% speed fix at 1024x768?

Are you able to confirm this wasn't a transposition of the results?

Cheers
TiG
Posted by Xaneden - Thu 13 Oct 2005 22:41
I personally think ATi and Nvidia are relying on software too much to provide the speed boost their hardware cannot give. Maybe a little less advertising/SLi gimmicking and a little more development? :)
Posted by kempez - Thu 13 Oct 2005 23:52
Xcelsion
I personally think ATi and Nvidia are relying on software too much to provide the speed boost their hardware cannot give. Maybe a little less advertising/SLi gimmicking and a little more development? :)

I don't think that at all. The only thing this “driver war” does is increase some numbers. There are no games out there that either of the big two's top-end cards cannot play, or that even stress them.

I'm all for this: more performance for £0!!! :D:rockon: :bowdown:
Posted by Rys - Thu 13 Oct 2005 23:57
TiG
Just confused by the HEXUS custom demo for The Chronicles of Riddick - it was worse with the 30% speed fix at 1024x768?

Are you able to confirm this wasn't a transposition of the results?

Cheers
TiG
Yeah, it was worse. Repeatably so (as always with the numbers we report). The memory controller tuning going on here isn't always going to be a win. Per-application and per-resolution tuning will clean up foibles like that as things progress in their driver.
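The per-application and per-resolution tuning Rys describes can be pictured as a simple profile lookup inside the driver: tuned memory-controller settings keyed by game and resolution, with a safe default for everything else. A minimal sketch in Python for illustration only - the real driver is native code, and every name and value here is hypothetical:

```python
# Hypothetical sketch of per-application, per-resolution driver profiles.
# None of these parameter names or values come from ATI's actual driver.

DEFAULT_MC_PROFILE = {"arbitration": "balanced", "read_latency": 8}

# Tuned overrides keyed by (application, resolution).
MC_PROFILES = {
    ("doom3.exe", "1600x1200"): {"arbitration": "favour_colour", "read_latency": 6},
    ("riddick.exe", "1024x768"): {"arbitration": "balanced", "read_latency": 7},
}

def select_mc_profile(app, resolution):
    """Return the tuned profile for this app/resolution, or the default."""
    return MC_PROFILES.get((app, resolution), DEFAULT_MC_PROFILE)

print(select_mc_profile("doom3.exe", "1600x1200"))   # tuned override
print(select_mc_profile("quake4.exe", "1280x1024"))  # falls back to default
```

The point of a scheme like this is exactly the one Rys makes: a setting that wins in one title at one resolution can lose elsewhere, so untuned (app, resolution) pairs fall back to a conservative default until someone benchmarks them.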
Posted by Syn - Fri 14 Oct 2005 00:23
Seems like ATi is on the right track to having good OpenGL support. It looks like they are really going all out this year.

Now we consumers will get the benefit of two companies being close to on par and probably pushing the tech even further. Well done to the ATI team this year; now we need NVIDIA to step up. We can't afford any more huge gaps - they leave us with fewer options in what to buy and bigger holes in our wallets.
Posted by Andrzej - Fri 14 Oct 2005 11:28
Eric Demers runs the chip design team for ATI

This is what he had to say on the subject…

“This change is for the X1K family.
The X1Ks have a new programmable memory controller and gfx subsystem mapping.
A simple set of new memory controller programs gave a huge boost to memory BW limited cases, such as AA (need to test AF).
We measured 36% performance improvements on D3 @ 4xAA/high res.
This has nothing to do with the rendering (which is identical to before).
X800's also have partially programmable MC's, so we might be able to do better there too.
Basically, discovering such a large jump, we want to revisit our previous decisions.

But it's still not optimal.
The work space we have to optimize memory settings and gfx mappings is immense.
It will take us some time to really get the performance closer to maximum.
But that's why we designed a new programmable MC.
We are only at the beginning of the tuning for the X1K's.

As well, we are determined to focus a lot more energy into OGL tuning in the coming year; shame on us for not doing it earlier.”
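The “immense work space” Eric mentions is essentially a parameter search: benchmark many combinations of memory-controller settings and keep the best-scoring one. A toy illustration of that idea - the parameter names, values, and scoring function below are all made up, and a stand-in function replaces running a real benchmark:

```python
import itertools

# Toy illustration of tuning a programmable memory controller:
# exhaustively score combinations of (made-up) settings, keep the best.

PARAMETER_SPACE = {
    "page_policy": ["open", "closed"],
    "read_priority": [1, 2, 3],
    "burst_length": [4, 8],
}

def benchmark(settings):
    """Stand-in for applying these settings and timing a real benchmark."""
    score = 100
    score += 10 if settings["page_policy"] == "open" else 0
    score += settings["read_priority"] * 3
    score += settings["burst_length"]
    return score

def tune():
    keys = list(PARAMETER_SPACE)
    best_settings, best_score = None, float("-inf")
    for values in itertools.product(*(PARAMETER_SPACE[k] for k in keys)):
        settings = dict(zip(keys, values))
        score = benchmark(settings)
        if score > best_score:
            best_settings, best_score = settings, score
    return best_settings, best_score

print(tune())
```

Even this tiny space has 2 x 3 x 2 = 12 combinations, and each real "score" means running an actual benchmark; multiply that by applications and resolutions and it is easy to see why Eric says getting performance close to maximum will take time.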


It looks as though 2006 is shaping up to be ‘fun’ from the performance graphics point of view - especially if there are no more OpenGL ‘gimmies’ on the green :P


Eric's comments taken from Beyond3D
Posted by TiG - Fri 14 Oct 2005 11:33
Rys
Yeah, it was worse. Repeatably so (as always with the numbers we report). The memory controller tuning going on here isn't always going to be a win. Per-application and per-resolution tuning will clean up foibles like that as things progress in their driver.

I agree it's not always going to be a win, but I was quite surprised it was a loss of that size. Thanks for clearing it up for me though, Rys.

Cheers
TiG
Posted by merlin2001 - Fri 14 Oct 2005 12:48
Interesting stuff. One wonders if they will be able to achieve similar improvements in D3D games by tuning up the memory controllers… obviously the motivation to improve is strongest in OpenGL, where NVIDIA has had the lead for some time, but is there the same scope for increases in D3D?
Posted by Rys - Fri 14 Oct 2005 14:40
There's scope for improvement across the board, in almost everything that's not using the memory controller as well as it could be.
Posted by Andrzej - Mon 17 Oct 2005 21:47
merlin2001
…D3D…

One of the guys I spoke with earlier said that this kind of jump was unlikely to happen on a regular basis…

…but that they expected steady speed increases in D3D titles every month

One of the biggest problems with OpenGL is that there are so few titles released that it actually becomes hard to get enough data to analyse

Publications (web and print) typically run about 33% of their game tests on OpenGL - whereas the real share of the PC gaming market for OpenGL titles is closer to 10%…

…with more and more games like Call Of Duty 2 moving to the D3D platform
Posted by mike_w - Mon 17 Oct 2005 21:55
Xcelsion
I personally think ATi and Nvidia are relying on software too much to provide the speed boost their hardware cannot give.

Huh? It doesn't really matter if the card has raw power or not if the drivers aren't any good. Surely better drivers are always a good thing? If I get a new driver upgrade that, for example (and an obvious exaggeration) increases performance by 100%, I'm not going to say “Well, they should have made their card faster in the first place”.
Posted by Xaneden - Mon 17 Oct 2005 23:29
No one would doubt good drivers are an important factor, but it seems ATi's and NVIDIA's marketing machines focus on either dual-card setups or driver enhancements. The core technology seems stale (with ATi at least). In the past two years I haven't been excited by a new card the way I was with, say, the 9700 Pro; nothing seems particularly revolutionary. In my opinion at least, their strategy of doubling up is nothing but a facade to hide stagnant technology, which hasn't really had the ‘wow’ factor in a while :(

Though, you're all welcome to disagree, which I'm sure some of you will :D
Posted by Rys - Tue 18 Oct 2005 09:08
I certainly disagree. While I simply haven't had the resources to explain the new R5-series silicon yet, it's quite the leap technologically speaking. While the shader core is very similar on both VS and PS, the way they're connected and fed is almost entirely new and that's where the fun and new performance is.

This memory controller tweak is part of that. Of course if you're not bothered about the low-level details then I can see your point, but there's still a lot to like about the latest 3D hardware in terms of increased image quality (what it's all about really) across the board, even if you're not fussed about how it gets there.
Posted by Andrzej - Wed 19 Oct 2005 21:27
Rys is correct - the jump from R420 to R520 is massive

The entire rendering path - physically and conceptually - is brand new

Also, an in-depth investigation of the G70/R520 shows that the two companies have very different views of the GPU of tomorrow

ATI's focus is on massive maths capability - whereas nVidia seem to believe that texture fetching is everything

Separating the texture unit from the shader core is a revolutionary concept…

…and then adding separate technology to perform branching in the pixel shader has allowed ATI to offer performance that top developers like Crytek have labelled “f***ing amazing”
(Rys can furnish you with a precise quote !)

Overall, gamers have never had it so good :devilish: