
· Registered · 230 Posts
Well, trying to keep my dismay out of my judgement of the PS2, I believe that the graphics from a GeForce2 are superior to the graphics on the PS2. To do an accurate comparison of power, you need to set your video card to 640x480x32bpp. What I do for a comparison is use my GeForce2's TV-out feature to see the difference. I have been able to convince my PlayStation-loving, PC-smashing friend that games do look better on a PC than on the PS2. But I guess that is mostly opinion.
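For reference, here's a quick pixel-budget sketch (Python) of what 640x480x32bpp actually demands, which is why it makes a fair playing field. The 60 fps target is my assumption, not a spec:

# Pixel budget at the comparison resolution: 640x480, 32 bits per pixel.
# The 60 frames/sec target is an assumed figure for illustration.
W, H, BPP, FPS = 640, 480, 32, 60

pixels_per_frame = W * H
bytes_per_frame = pixels_per_frame * BPP // 8
pixels_per_sec = pixels_per_frame * FPS

print(f"{pixels_per_frame:,} pixels/frame, {bytes_per_frame / 2**20:.2f} MB/frame")
print(f"{pixels_per_sec / 1e6:.1f} Mpixels/s at {FPS} fps")
# ~18.4 Mpixels/s of raw screen output -- tiny next to either chip's quoted
# fill rate, so the real difference is overdraw, effects, and texture quality.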

But for a spec by spec comparison:

PS2:
147.456 MHz RAMDAC
4 MB VRAM
48 GB per second memory bandwidth
2.4 Gpixels per second
25 million polygons per second

GeForce2:
350 MHz RAMDAC
32/64/128 MB VRAM
7.36 GB per second memory bandwidth
1 Gpixel per second
31 million polygons per second

As you can see, each has advantages over the other, but the bottom line is polygons per second, and of course, the GeForce2 takes the prize.
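For what it's worth, both bandwidth numbers fall straight out of bus width times clock. Here's a back-of-the-envelope check in Python; the 2560-bit eDRAM bus for the PS2's Graphics Synthesizer and the 128-bit 230 MHz DDR bus for the GeForce2 are my assumptions about the hardware, not taken from the lists above:

# Peak memory bandwidth = (bus width in bytes) * clock * transfers per clock.
# Bus widths and clocks below are assumed hardware details, not quoted specs.
def bandwidth_gb_s(bus_bits, clock_mhz, pumps=1):
    return (bus_bits / 8) * (clock_mhz * 1e6) * pumps / 1e9

# PS2 Graphics Synthesizer: embedded DRAM on a very wide (assumed 2560-bit) bus.
print(f"PS2 GS: {bandwidth_gb_s(2560, 147.456):.1f} GB/s")        # ~47.2, i.e. the quoted 48

# GeForce2: assumed 128-bit external DDR at 230 MHz (two transfers per clock).
print(f"GeForce2: {bandwidth_gb_s(128, 230, pumps=2):.2f} GB/s")  # 7.36, matches

Notice the PS2 only gets that monster 48 GB/s because its video memory is on the same chip as the GS; the GeForce2 has to go over an external bus.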

The Xbox, on the other hand, is supposed to have a more advanced version of the GeForce3. But I will believe it when I see it.

NINTENDO ROCKS THE HOUSE!!!!
 

· Registered · 230 Posts
GeForce3:
350 MHz RAMDAC
64/128/256 MB VRAM
7.36 GB per second memory bandwidth
5 Gpixels per second (3.2 Gpixels per second with FSAA)

Now for the polygons per second, you can't quite pin down a number. Because the GPU is programmable, the number of polygons depends on how efficiently the person programmed the GPU. It can theoretically (according to nVidia) push 125 million.
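To see why "depends on the programmer" matters, here's a toy model of how the effective triangle rate falls as the vertex program gets longer. Every number in it (clock, cycles per vertex, vertices per triangle) is an invented illustration, not a measured GeForce3 figure:

# Toy model: effective triangles/sec = vertex throughput / vertices per triangle.
# All figures are illustrative assumptions, not real GeForce3 measurements.
CLOCK_HZ = 200e6       # assumed core clock
VERTS_PER_TRI = 1.0    # ~1 vertex/triangle in a well-stripped mesh; up to 3 if unshared

def tris_per_sec(cycles_per_vertex):
    vertex_rate = CLOCK_HZ / cycles_per_vertex
    return vertex_rate / VERTS_PER_TRI

for cycles in (1, 4, 16):  # longer vertex program -> fewer polygons per second
    print(f"{cycles:2d} cycles/vertex -> {tris_per_sec(cycles) / 1e6:6.1f} M tris/s")

Same chip, a 16x spread in polygon throughput, purely from how much work each vertex does.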

Oh, and yes, the PS2 has to be more efficient because it has to swap all of its graphics in and out of a 4MB chunk of memory. It achieves this because the PS2 components were all specially designed to work with each other, unlike the PC, where the manufacturer has no idea what hardware the user will have.
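The 4MB point is easy to see with a little framebuffer arithmetic. The 640x448, double-buffered, 32-bit layout below is an assumed typical PS2 setup, not a published figure:

# How much of the GS's 4MB is left for textures after the framebuffers?
# The 640x448 double-buffered 32-bit layout is an assumed typical setup.
VRAM = 4 * 1024 * 1024
W, H = 640, 448

front = W * H * 4  # 32-bit front buffer
back  = W * H * 4  # 32-bit back buffer
zbuf  = W * H * 4  # 32-bit Z buffer
used  = front + back + zbuf

print(f"Framebuffers: {used / 2**20:.2f} MB")
print(f"Left for textures: {(VRAM - used) / 2**20:.2f} MB")
# ~3.28 MB of buffers, ~0.72 MB for textures -- hence the constant swapping.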
 

· Registered · 230 Posts
I used Deus Ex and Giants to show that guy why PC games are superior. Graphically, anyway. I also used a few others. But yes, with the latest patch on Deus Ex, on a GeForce2 with FSAA, it is absolutely amazing! (Even without FSAA.)
 

· Registered · 230 Posts
Well, so I don't get pooped on again by Lewpy, whom I respect for his awesome achievement with his GPU plugin, here is where I got my numbers:

http://www1.nvidia.com/Products/GeForce3.nsf/

Also, you talk about memory problems, but the Lightspeed Memory Architecture gives the card a nudge of speed by letting the memory controllers talk among themselves instead of going through the GPU every time...

"Lightspeed Memory Architecture implements four independent memory controllers that not only communicate with the graphics processor but with themselves as well. In theory, this "crossbar" memory architecture can be up to four times faster than previous designs by being able to move smaller amounts of data in 64-bit blocks rather than tying up the entire 256-bit capacity of the memory bus when it's not needed. " -Nvidia

This is only effective with the card's onboard memory, though. So as long as it is swapping between its own 64/128/256MB of memory, the memory argument is moot.
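A way to picture the crossbar point from that quote: on one monolithic 256-bit bus, every access moves 32 bytes even when only 8 are wanted, while a 64-bit controller only moves 8 at a time. Here's a toy efficiency comparison; the mix of request sizes is invented for illustration:

# Toy model of the "crossbar" idea: fetch granularity vs. bus efficiency.
# The request-size mix is invented for illustration.
import random
random.seed(1)

def efficiency(request_bytes, granularity_bytes):
    # Useful bytes / bytes actually moved, rounding each request up to the bus width.
    moved = ((request_bytes + granularity_bytes - 1) // granularity_bytes) * granularity_bytes
    return request_bytes / moved

requests = [random.choice((8, 16, 24, 32)) for _ in range(10_000)]  # small mixed accesses

for width_bits in (256, 64):  # one wide bus vs. one of four narrow controllers
    g = width_bits // 8
    avg = sum(efficiency(r, g) for r in requests) / len(requests)
    print(f"{width_bits}-bit granularity: {avg:.0%} of moved bytes are useful")

With small transfers the narrow controllers waste nothing, which is where the "up to four times faster" claim comes from.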

Anyway, I will give Lewpy the benefit of the doubt on his corrections. After all, I have never written a bit of code for a video card. Maybe after finals I will revisit this issue.

Again, Lewpy, AWESOME plugin. I used it for my 3500
 