
ATI vs Nvidia

Hector said:
That's the point. Anyway, I was comparing the 5900U with the 9800, not talking about the mid-range market, where ATI eats nVidia alive :D

Oh well. You can't really compare the GeForce FX 5900 line to the ATI cards by pixel pipelines and texture map units.


Obviously the Nvidia card only has 4 pixel pipelines. It supposedly makes up for this by using a floating-point texel device for texture/shader inputs, which is a very powerful and flexible way to render shader scenes and textures.

But the floating-point texel device really doesn't do it any justice right now, as the GPU itself is not powerful enough to make use of it. It will be a good idea once the GPU has more processing power.

Currently I'd have to say that standard pixel pipelines with single TMUs seem to be the best route for shaders.

This is the reason why Nvidia can't muster the shader performance it needs: its pipeline architecture just isn't up to par.
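To put rough numbers on that, here's a back-of-the-envelope sketch. The ~450 MHz and ~380 MHz core clocks are just the commonly quoted figures for the 5900 Ultra and 9800 Pro, so treat them as assumptions:

```python
# Rough per-clock throughput comparison (illustrative figures only).
# NV35 (FX 5900 Ultra): 4 pipelines x 2 TMUs, ~450 MHz core (assumed).
# R350 (Radeon 9800 Pro): 8 pipelines x 1 TMU, ~380 MHz core (assumed).

def fillrate(pipes, tmus_per_pipe, core_mhz):
    """Return (pixel fillrate, texel fillrate) in millions per second."""
    return pipes * core_mhz, pipes * tmus_per_pipe * core_mhz

nv35_pix, nv35_tex = fillrate(4, 2, 450)
r350_pix, r350_tex = fillrate(8, 1, 380)

print(f"NV35: {nv35_pix} Mpixels/s, {nv35_tex} Mtexels/s")  # 1800 / 3600
print(f"R350: {r350_pix} Mpixels/s, {r350_tex} Mtexels/s")  # 3040 / 3040

# The 8x1 layout finishes more pixels (and more single-textured shader
# results) per clock even at a lower clock speed, which is roughly why
# the shader numbers come out the way they do.
```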

Perhaps with the NV40 we'll truly see the power of Nvidia's new texture/shader method, since the NV30/35 don't use traditional TMUs.

I can only hope so, as from a graphics point of view this method is far superior for rendering complex scenes. It's unfortunate that current graphics aren't ready for this architecture.

Don't be confused by the term "texture map units": the NV30 doesn't use them, and neither does the NV35, so calling them 4x2 cards really isn't fair. Anywho.


Eventually Nvidia's going to have to up its pipeline architecture to get its shader performance up to par. Then its floating-point texture mapper should be more useful.

God, if that didn't make sense, I'm tired. I'll explain it better later :???:


And I really don't agree with you. The current mainstream market from both Nvidia and ATI is pretty subpar. The 9600 Pro is underperforming due to its lack of an 8-pipeline architecture and its weakened vertex shaders, and the FX 5600 Ultra is having shader issues right now, forcing 16-bit shaders when the application doesn't make a specific request.

Which seems strange to me. Why would they do that, since developers will use 16-bit/FX/32-bit precision as needed anyway?
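To show why silently forcing 16-bit matters, here's a minimal NumPy sketch of the precision gap between half and single precision (just an illustration of the number formats, not of how the driver actually behaves):

```python
import numpy as np

# fp16 keeps a 10-bit mantissa versus fp32's 23 bits, so forcing a shader
# written for full precision down to 16 bit quietly discards accuracy.
value = 1.0 / 3.0

as_fp32 = np.float32(value)
as_fp16 = np.float16(value)

print(f"fp32: {as_fp32:.8f}")           # 0.33333334
print(f"fp16: {float(as_fp16):.8f}")    # 0.33325195
print(f"fp16 error: {abs(float(as_fp16) - value):.2e}")  # ~8e-05

# An error that size is invisible in a single colour multiply, but it piles
# up in long dependent texture reads and math-heavy shaders, which is where
# precision artifacts like banding tend to show up.
```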
 
Discussion starter · #82 ·
The reason that neither of them can be declared a clear winner is why there is all this competition.

Glad to see you back Chris Ray, I enjoy learning from your wisdom. BTW, are you waiting for Final Fantasy XI too?
 
ChrisRay is knowledgeable. But. He uses. Too many. Periods. :p
I type on a message board like I type in a chat room :(



Glad to see you back Chris Ray, I enjoy learning from your wisdom. BTW, are you waiting for Final Fantasy XI too?

Not sure. Currently taking a break from EverQuest and playing Anarchy Online. I am considering FFXI, World of Warcraft, and EverQuest 2. Seems better to try as many as you can so you don't get too drawn into a specific one. Some MMORPGs can eat your life away :)
 
Yeah, the most powerful PCI card after the TNT2.
 
Discussion starter · #88 ·
Nvidia made a smart move by basing its midrange and budget GPUs on the high-end card instead of an old GPU with a new name and PCB.
That's true about the FX line, but on the GF4s, no. The MXes were basically overclocked GF2s; they don't support DX8 or DX8 shaders, which the Ti line did. The 5200 does have DX9 support and 2.0 shaders, which is a good thing. It supports all the new features, but it's too slow.
 
For further clarification on why the R300 line "might" be able to do AA in Half-Life 2 while Nvidia's cards cannot: it essentially comes down to "centroid multisampling".

Centroid multisampling is a method that, instead of working out the edges from the outside in (the normal multisampling found on R300 and NV20+ cards), works out the edges and anti-aliases/blurs them from the inside of the texture out. It's a "feature" that comes from 3.0 shaders in DirectX 9.0.

Since the R300 was not "inherently" designed for centroid anti-aliasing, I'm questioning how it's going to perform. But the hardware is capable of it, and centroid anti-aliasing would definitely hit performance more than standard multisampling.

Now, the NV30 is still using the DirectX 8.0 hybrid of multisampling introduced with the NV20. It cannot support this method of anti-aliasing.

Nvidia users shouldn't totally fret about this, as supersampling will work fine in this game, though I'm not quite sure how well it will perform.
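To make the idea a bit more concrete, here's my own rough sketch in Python (the sample pattern and names are made up for illustration, not ATI's actual hardware logic): plain multisampling interpolates shader inputs at the pixel centre even on edge pixels the triangle only partly covers, which can land the texture lookup outside the triangle, while centroid sampling moves that point to the centre of the covered samples instead.

```python
# Illustrative sketch of centroid vs. ordinary multisample evaluation.
# SAMPLE_OFFSETS is a made-up 4x pattern of offsets from the pixel centre;
# the coverage mask says which of those samples the triangle actually hits.

PIXEL_CENTER = (0.5, 0.5)
SAMPLE_OFFSETS = [(-0.25, -0.375), (0.375, -0.25), (-0.375, 0.25), (0.25, 0.375)]

def evaluation_point(coverage, centroid=False):
    """Where shader inputs (e.g. texture coordinates) get interpolated."""
    covered = [off for off, hit in zip(SAMPLE_OFFSETS, coverage) if hit]
    if not covered:
        return None  # triangle doesn't touch this pixel at all
    if not centroid or len(covered) == len(SAMPLE_OFFSETS):
        return PIXEL_CENTER  # fully covered, or plain MSAA: use the centre
    # Partially covered edge pixel: average only the covered sample positions,
    # so the lookup stays inside the triangle.
    cx = PIXEL_CENTER[0] + sum(x for x, _ in covered) / len(covered)
    cy = PIXEL_CENTER[1] + sum(y for _, y in covered) / len(covered)
    return (cx, cy)

edge_pixel = [True, False, True, False]              # triangle clips the left side
print(evaluation_point(edge_pixel))                  # (0.5, 0.5) -> may sample outside
print(evaluation_point(edge_pixel, centroid=True))   # (0.1875, 0.4375) -> pulled inward
```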
 
Nvidia made a smart move by basing its midrange and budget GPUs on the high-end card instead of an old GPU with a new name and PCB.
I agree. The GeForce FX 5200 is totally *smart* :railgun:
 
The FX 5200 costs them like no money to make. They are making a huge profit on it. What was so "not smart" about it?


It's easy to produce, has a low transistor count, and performs adequately. And face it, Nvidia is making a lot of money off of it.
 
Performs adequately? It's beaten even by the GeForce4 MX. They're making money off consumer ignorance.
 
Hector said:
Performs adequately? It's beaten even by the GeForce4 MX. They're making money off consumer ignorance.
Could you please not compare a GeForce4 MX to the 64-bit memory interface variant of the FX 5200?


There are two variants of the GeForce FX 5200: one has a 128-bit DDR memory interface and the other a 64-bit DDR memory interface. The 128-bit DDR version performs well above the GeForce4 MX.
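The gap is easy to put into numbers. A quick sketch, assuming the 400 MHz effective DDR memory clock commonly listed for retail 5200 boards (actual clocks vary by vendor):

```python
# Peak memory bandwidth = (bus width in bytes) * effective memory clock.
# The 400 MHz effective DDR clock is an assumption; board partners vary.

def bandwidth_gb_s(bus_bits, effective_mhz):
    """Theoretical peak bandwidth in GB/s."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

print(f"FX 5200 128-bit: {bandwidth_gb_s(128, 400):.1f} GB/s")  # 6.4 GB/s
print(f"FX 5200  64-bit: {bandwidth_gb_s(64, 400):.1f} GB/s")   # 3.2 GB/s

# Halving the bus halves the bandwidth, which is why the 64-bit boards
# drop back toward GeForce4 MX territory while the 128-bit boards don't.
```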

So please, at least understand the differences between the variant versions of the FX 5200, because Nvidia has no control over whether its board partners choose to use the 64-bit DDR design.

The 128-bit memory interface 5200 compares very competitively against the 9000. Since you seem so eager to flame Nvidia, I would suggest you come up with some more relevant flaming material, as there's quite a bit more out there that isn't so easily put down as this.


And if you won't take my word for it (even though it's pretty common knowledge that there is a 64-bit DDR version out there), please refer to this:

http://www.nordichardware.com/reviews/graphiccard/2003/Budget_Roundup/index.php?ez=10
 
DarkAurora said:
You can't always excuse NVidia's mistakes with "it's what the board manufacturers use." NVidia should have a certain standard set that the board manufacturers have to use, so customers don't get ripped off.

First of all, why is it Nvidia's mistake that they give board manufacturers a lot of control over the products they deliver? This is done for a reason. Products like the Albatron GeForce4 Ti 4200 Turbo would not exist if Nvidia did not give its board partners a little control over the products they deliver.

ATI does this as well, most notably with the Radeon 8500, which had clock speeds and memory ratings going as low as 6 ns memory, a.k.a. the Radeon 8500LELE, which had about a million different clock speeds and settings.

It's not about quality control, it's about cash flow, and in that respect Nvidia is right on. Nvidia's "reference design" sets a certain standard to which Nvidia recommends all the chips it sends to third-party manufacturers be built.


It is up to the third parties to deliver products according to Nvidia's reference design and specifications. If anyone is ripping the customers off, it's not Nvidia, because they simply design the chip and ship it to the third-party manufacturers; they are not gaining or losing any money on that specific order. The third-party manufacturers are the ones making money by selling a subpar PCB with a specific chip on it.
 
I could barely install an FX right now; I have 4 of 5 PCI slots used, and luckily not the one next to the AGP slot, heh. Let's see... got my Adaptec Ultra160 controller... got my SB Live! 5.1... my 56K modem (which I could take out now that I have Comcast cable)... and my DXR3 DVD decoder board.
 
All I know is that I don't really care who is better than who; it's all about the developers. Some games are optimized for one card. For example, EA Games and Nvidia have an agreement, so you can bet those games will be Nvidia-optimized. http://news.com.com/2100-1043-996423.html
All I know is when developers 'unlock the full fury' of the ATI cards, that's when I will purchase another one.
 