Next Generation Emulation

1 - 20 of 26 Posts

·
Registered
Joined
·
48 Posts
On par with a PSX? Any current-generation video card is superior to a PSX. A PS2? I'm not really sure how it stacks up against a GeForce 2. As for on par with or superior to the Xbox, that would be the GeForce 3, as I've heard the Xbox is running a toned-down version of the GF3. Now I may be wrong on some of this, so don't take it 100% until you get more details/backup of what I said from others. :D
 

·
Guest
Joined
·
0 Posts
CDBuRnOut stated that the PS2's graphics processor is superior to even a GeForce 3. But other than that, I would say that the GeForce 3 tops all the rest.
sincerely,
sx/amiga
 

·
Registered
Joined
·
230 Posts
Well, trying to keep my dismay out of my judgement of the PS2, I believe that the graphics from a GeForce2 are superior to the graphics of the PS2. To do an accurate comparison of power, you need to set your video card to 640x480x32bpp. What I do for comparison is use my GeForce2's TV-out feature to see the difference. I have been able to convince my PlayStation-loving, PC-smashing, video-idiot friend that games do look better on a PC than on the PS2. But I guess that is mostly opinion.

But for a spec by spec comparison:

PS2:
147.456 MHz RAMDAC
4 MB VRAM
48 GB/s memory bandwidth
2.4 Gpixels/s fill rate
25 million polygons/s

GeForce2:
350 MHz RAMDAC
32/64/128 MB VRAM
7.36 GB/s memory bandwidth
1 Gpixels/s fill rate
31 million polygons/s

As you can see, each has advantages over the other, but the bottom line is polygons per second, and of course the GeForce2 takes the prize there.
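A quick way to read those two spec sheets together is bandwidth per pixel drawn, since a fill rate is only usable if memory can feed it. A throwaway Python sketch, using the numbers exactly as quoted above (not independently verified):

```python
# Bandwidth available per pixel at peak fill rate, using the figures quoted above.
specs = {
    "PS2":      {"bw_gb_s": 48.0, "fill_gpix_s": 2.4, "mpolys_s": 25},
    "GeForce2": {"bw_gb_s": 7.36, "fill_gpix_s": 1.0, "mpolys_s": 31},
}

for name, s in specs.items():
    # GB/s divided by Gpixels/s gives bytes of memory traffic per pixel drawn
    per_pixel = s["bw_gb_s"] / s["fill_gpix_s"]
    print(f"{name}: {per_pixel:.1f} bytes of bandwidth per pixel at peak fill")
```

By this crude measure the PS2 has far more memory headroom per pixel (20 bytes vs 7.36), even though the GeForce2 wins on the raw polygon rate.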

The Xbox, on the other hand, is supposed to have a more advanced version of the GeForce3. But I will believe it when I see it.

NINTENDO ROCKS THE HOUSE!!!!
 

·
&-)---|--<
Joined
·
8,586 Posts
A PlayStation is not even on par with a Voodoo 3... I can't think of a graphics card that would be merely on par with a PSX; all of the newer graphics cards are way more powerful than the aging PSX. I would like to see Metal Gear Solid 2: Sons of Liberty on a PC. It is a PS2 game and I would like to see how it would look.
 

·
Registered
Joined
·
52 Posts
Discussion Starter #6
According to Dan, the PS2's GPU is better than a GeForce 2 in terms of processor efficiency (treating the PS2's GPU as one real processor).
Can someone dig up the GeForce3's specs?
Oh yeah... what about the GameCube from Nintendo?

tek
 

·
War Games coder
Joined
·
1,926 Posts
Well, if you want to think of a vid card that's on par with a PSX, maybe, MAYBE, it's the Diamond Stealth 3D upped to 4 megs (and yes, I was stupid enough to upgrade mine to 4 megs :mad: ). Then again, if I remember correctly, that was a pitiful attempt to bring PC graphics up to N64 level (which failed), and it was only successful when OTHER companies tried it.
 

·
Registered
Joined
·
230 Posts
GeForce3:
350 MHz RAMDAC
64/128/256 MB VRAM
7.36 GB/s memory bandwidth
5 Gpixels/s (3.2 Gpixels/s with FSAA)

Now for the polygons per second, you can't quite give a number. Because the GPU is programmable, the number of polygons depends on how efficiently the person programmed the GPU. It can theoretically (according to nVidia) push 125 million.

Oh, and yes, the PS2 has to be more efficient because it has to swap all of its graphics through a 4 MB chunk of memory. It achieves this because the PS2's components were all specially designed to work with each other, unlike the PC, where the manufacturer has no idea what hardware the user will have.
 

·
Registered
Joined
·
1,830 Posts
Originally posted by Dan
I have been able to convince my PlayStation-loving, PC-smashing, video-idiot friend that games do look better on a PC than on the PS2. But I guess that is mostly opinion.
Well, I've only got an overclocked TNT2 Ultra, and I've played games on it that look better than any PS2 game I've seen, including the Metal Gear demo. Anyone played Deus Ex for PC? Try it on a GeForce 2 and you'll see what I mean. For example: in a PS2 game a wall gets more pixelated the closer you are to it. In Deus Ex, when you move closer to a wall it not only stays clear, but you can actually see more and more texture in the pattern as you move closer. You know you've got a good video card when you actually want to stop and stare at a wall for a while.
 

·
Registered
Joined
·
230 Posts
I used Deus Ex and Giants to show that guy why PC games are superior, graphically anyway. I also used a few others. But yes, with the latest patch on Deus Ex, with a GeForce2 and FSAA, it is absolutely amazing! (Even without FSAA.)
 

·
Guest
Joined
·
0 Posts
Adair, I get the same effect in the PSX version of Quake 2. When I get close to a wall or map panel, instead of pixelating, it gets sharper and clearer. I am using a GeForce 2 GTS with FSAA turned on, using Pete's OGL plugin. Now if only the Strogg troops would let me admire the graphics more. :)
sincerely,
sx/amiga
 

·
Registered
Joined
·
1,830 Posts
sx/amiga, you make me jealous of your GeForce 2, but I'm waiting for the GeForce 3 to drop in price. It's amazing that you can get that kind of performance out of ePSXe and Quake 2 on your card. But even with that, it still can't compare to Deus Ex. I mean, have you seen this game? When you walk up to a wall you actually see texture bumps popping out, and my TNT2 doesn't even support bump mapping. Of course, I'm sure I don't need to convince you that a PC game outperforms PSX emus. Anyway, you should really check Deus Ex out. It's a first-person shooter/sneaker with a little bit of RPG-style choice-based conversation incorporated. The conspiracy-filled plot is kind of like meets Syphon Filter. I played the game from start to finish (not all in one sitting; it takes at least a week to beat even if you know the game by heart) 4 times in a row. Plus the game is only $20 US. I'm extremely biased, in case you didn't notice.
 

·
Registered
Joined
·
538 Posts
Originally posted by Dan
GeForce3:
350 MHz RAMDAC
64/128/256 MB VRAM
7.36 GB/s memory bandwidth
5 Gpixels/s (3.2 Gpixels/s with FSAA)
Whoa there! Someone has been putting wacky-backy in your pipe, methinks :)
The GeForce3 is still a "4 pixels per clock" engine, which means at 200MHz it can output 800Mpixels per second. Period. It can also apply up to 2 textures per pixel, which gives a "top speed" of 1.6Gtexels per second. Pixels and texels are different things when quoting graphics processing "power".
Place that against a GeForce2 Ultra, which is also a "4 pixels per clock" engine but runs at 250MHz, and is therefore capable of 1Gpixels per second. Again, it can apply up to 2 textures per pixel, giving a "top speed" of 2Gtexels per second.
It's faster than the GeForce3?! The core rendering engine is, yes. But these figures are not the end of the story. We need to look around the main graphics engine, at the "support" chips: graphics memory.
You really need to examine the bandwidth to memory that the graphics processor has.
Both the GeForce2 and GeForce3 use 128-bit memory buses to main memory (the GeForce3 has a more efficient bus controller, allowing better use of the available memory bandwidth). Using DDR memory at 230/460MHz (standard for the GeForce2 Ultra and GeForce3), both have 7.4GB/s of memory bandwidth (rounded). Sounds like a lot, doesn't it? Not when you consider that to achieve 1Gpixel per second, drawing 32-bit colour (4 bytes) with a 32-bit Z-buffer (4 bytes), you need 1Gpixel/s * 8 bytes/pixel = 8GB/s of memory bandwidth. Oops. And that is ignoring any memory reads (Z-buffer, and in case you are alpha-blending) and any texture reads (in case you actually want to texture the pixels you are rendering).
What this means is that both the GeForce2 Ultra and the GeForce3 are limited by the speed of their memory architectures. But when both are heavily stressed, the GeForce3 comes out on top by virtue of its more efficient use of the memory bandwidth it has available. When not heavily stressed, though, the GeForce2 Ultra can beat the GeForce3 in speed, as seen in various reviews/benchmarks.
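The fill-rate and bandwidth arithmetic above can be made concrete in a few lines of Python. This is a back-of-envelope model using the clocks and bus widths as quoted in the post, not a benchmark:

```python
# Fill rate: pipeline width (pixels per clock) times core clock.
def fill_rate_mpix_s(pixels_per_clock: int, clock_mhz: float) -> float:
    return pixels_per_clock * clock_mhz  # Mpixels/s

gf3_fill = fill_rate_mpix_s(4, 200)   # GeForce3:       800 Mpixels/s
gf2u_fill = fill_rate_mpix_s(4, 250)  # GeForce2 Ultra: 1000 Mpixels/s

# Bandwidth needed to sustain 1 Gpixel/s writing 32-bit colour + 32-bit Z:
needed_gb_s = 1.0 * (4 + 4)           # 8 GB/s of writes alone

# Bandwidth available: 128-bit (16-byte) bus, DDR at 230 MHz = 460 MT/s.
available_gb_s = 16 * 0.460           # ~7.36 GB/s

print(gf3_fill, gf2u_fill, needed_gb_s, available_gb_s)
```

The point of the sketch: even ignoring texture and Z reads, the write traffic alone (8 GB/s) exceeds the bus's peak (~7.4 GB/s), so both chips are memory-limited at full fill rate.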

Now let's turn to the PS2. Where the GeForce2/3 has a 128-bit bus to main memory, the PS2 has a 2560-bit bus to main memory. That is no typing mistake; it really is 2560 bits wide. It has a 1024-bit read bus, a 1024-bit write bus and a 512-bit texture bus.
A bit of maths: 2560 bits = 320 bytes (128 bytes read, 128 bytes write, 64 bytes texture). 150MHz * 320 bytes = 48GB/s of memory bandwidth. That is a massive figure, achieved by using embedded DRAM (hence the ability to use massive bus widths).
Why is it 128 bytes read / 128 bytes write? Well, the PS2 can render "16 pixels per clock". If each pixel is 32-bit colour with 32-bit depth, that is 8 bytes per pixel (as above). 8 bytes * 16 = 128 bytes. And Sony allocated that for both reading and writing, so Z-buffering and alpha-blending have full bandwidth too.
And finally, 150MHz * "16 pixels per clock" = 2.4Gpixels/s. Note that is "pixels", not "texels". That is way beyond any PC rendering speed, and it has the bandwidth to actually achieve that speed as well.
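The same bus arithmetic, as a short sketch (all figures taken from the description above):

```python
# PS2 GS embedded-DRAM buses: 1024-bit read + 1024-bit write + 512-bit texture.
read_bits, write_bits, tex_bits = 1024, 1024, 512
bus_bytes = (read_bits + write_bits + tex_bits) // 8  # 320 bytes per clock

clock_mhz = 150
bandwidth_gb_s = bus_bytes * clock_mhz / 1000  # 320 B * 150 MHz = 48 GB/s

pixels_per_clock = 16
fill_gpix_s = pixels_per_clock * clock_mhz / 1000  # 2.4 Gpixels/s

# The 8 bytes/pixel (colour + Z) write cost exactly fills the 128-byte write bus:
assert pixels_per_clock * 8 == write_bits // 8
```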

The unfortunate thing about the PS2 GS is that when texturing is enabled, the fill rate drops to 1.2Gpixels/s (or 1.2Gtexels/s, as it can only do single texturing). They seem to have optimised the rendering read/write process, but not the texture-fetching process.

Originally posted by Dan
Now for the polygons per second, you can't quite give a number. Because the GPU is programmable, the number of polygons depends on how efficiently the person programmed the GPU. It can theoretically (according to nVidia) push 125 million.
Theory is a wonderful place: everything works in Theory ;)
Well, let's take a look at the numbers, shall we? :)
Let's assume that we are drawing an infinite triangle strip, so that on average only one vertex needs to be sent to the GPU for every triangle (the other two vertices are already in the GPU from previous triangles). The vertex consists of homogeneous coordinates (X, Y, Z and W), one set of texture coordinates (S & T, or U & V, depending on your favourite API), and one colour (ARGB). Each of those values is a 32-bit data variable (floats for everything, apart from the ARGB8888 32-bit word). 32 bits = 4 bytes.
4 bytes * 7 = 28 bytes. 28 bytes per vertex, that is. Which also means 28 bytes per triangle, based on our generous assumption.
Okay, we are drawing 125 million triangles per second, so we need 125Mtris * 28 bytes/tri = 3.5GB/s of bandwidth.
Erm, AGP 4x peaks out at 32 bits * 66MHz * 4 = 1GB/s. That means we can only get 1/3.5 = 29% of the required bandwidth. So instead of 125 million triangles, we end up with 29% of 125 million, which is about 36 million triangles/s.
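That vertex-bandwidth estimate can be checked in a few lines, under the same assumptions: one 28-byte vertex per triangle on an ideal strip, and AGP 4x rounded down to 1 GB/s as in the post:

```python
bytes_per_vertex = 7 * 4                   # X,Y,Z,W + S,T + packed ARGB, 4 bytes each
tris_per_s = 125e6                         # nVidia's theoretical figure
needed_bw = tris_per_s * bytes_per_vertex  # 3.5e9 bytes/s required

agp4x_bw = 1e9                             # AGP 4x peak, rounded as in the post
fraction = agp4x_bw / needed_bw            # ~0.29 of the required bandwidth
achievable_mtris = tris_per_s * fraction / 1e6  # ~36 million triangles/s
```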
Now then, sort out the bandwidth bottlenecks in the PC architecture and you may be able to approach that theoretical figure... did anyone mention the Xbox? ;)

Okay, this is all really a rough hack through the figures, just trying to put a perspective on things. It's really in the hands of the developers that each platform comes to life, regardless of what the platform is capable of.
 

·
Registered
Joined
·
1,830 Posts
Wow Lewpy, you really know a lot about that stuff. Thanks for all the info. So if the PS2 can do all that, then why aren't the games a lot more impressive? Is it the unoptimised texture-fetching process, or are they just not using the full capability of the hardware on purpose?
 

·
Registered
Joined
·
230 Posts
Well, so I don't get pooped on again by Lewpy, whom I respect for his awesome achievement with his GPU plugin, here is where I got my numbers.

http://www1.nvidia.com/Products/GeForce3.nsf/

Also, you talk about memory problems, but the internal memory gives it a nudge of speed by bypassing the GPU altogether...

"Lightspeed Memory Architecture implements four independent memory controllers that not only communicate with the graphics processor but with themselves as well. In theory, this "crossbar" memory architecture can be up to four times faster than previous designs by being able to move smaller amounts of data in 64-bit blocks rather than tying up the entire 256-bit capacity of the memory bus when it's not needed. " -Nvidia

This is only effective with the card's onboard memory. So as long as it is swapping between its own 64/128/256 MB of memory, the memory argument is moot.

Anyway, I will give Lewpy the benefit of the doubt on his corrections. After all, I have never written a bit of code for a video card. Maybe after finals I will revisit this issue.

Again, Lewpy, AWESOME plugin. I used it with my 3500.
 

·
Registered
Joined
·
52 Posts
Discussion Starter #18
:D ...I never would have thought Lewpy would tune in for a lecture, so... would the PS2's design turn out to be a big waste? And speaking of the Xbox, how is it going to perform against the PS2? (Focusing on their graphics engines.)
 

·
Registered
Joined
·
538 Posts
Originally posted by Adair
Wow Lewpy, you really know a lot about that stuff. Thanks for all the info. So if the PS2 can do all that, then why aren't the games a lot more impressive? Is it the unoptimised texture-fetching process, or are they just not using the full capability of the hardware on purpose?
I always check out technology/hardware when it comes out. My background is electronics, so high-speed chip technology always gets my interest ;)
Anyway, from what I understand, here is the main problem with the PS2 architecture: 4MB of VRAM.
If the game is running at 640x480 in 32-bit colour, you are looking at at least 3 buffers of that size: 2 display buffers and 1 Z-buffer. 640x480x32bit = 1.17MB. 1.17MB * 3 = 3.51MB... that leaves a whopping 0.49MB for textures... gee whiz.
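That VRAM budget is easy to verify (a sketch using the figures above, with 1 MB = 2^20 bytes; the post's 3.51/0.49 figures are truncated rather than rounded):

```python
MB = 1024 * 1024
frame_bytes = 640 * 480 * 4       # one 32-bit buffer: 1,228,800 bytes (~1.17 MB)
buffers = 3 * frame_bytes         # 2 display buffers + 1 Z-buffer (~3.52 MB)
textures_left = 4 * MB - buffers  # what remains of the 4 MB for textures (~0.48 MB)

print(frame_bytes / MB, buffers / MB, textures_left / MB)
```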
I don't know the real details of the PS2 GS, only what has been freely published on the net, but I would guess it has an interlaced mode, which means you could halve the display buffers' footprint, though obviously this would reduce image quality.
The PS2 also supports anti-aliasing, but I believe it uses super-sampling. Thus a larger display buffer is required in VRAM, which leaves practically nothing for textures.
Also, the PS2 does not support texture compression (only the pseudo-compression of paletted textures). This does not allow what little texture memory there is to be used efficiently.
This means that the programmers have to do very, very careful texture memory management, with many texture uploads per frame, in order to do anything useful with textured primitives. Using large textures is almost impossible, as they just can't fit in the available space.
There is another problem I have thought about, but have no proof as to whether it is true or not. The PS2 GS renders 16 pixels simultaneously. What I don't know is whether there are hard limits on which pixels can be rendered simultaneously. Worst case, only aligned groups of parallel pixels can be rendered at the same time. This means that if a vertical line is rendered, only one of the 16 pixel pipelines is doing any useful work while the other 15 are idle, which reduces the fill rate to 1/16 of its peak "theoretical" value. Bummer.
It just seems that the PS2 was not really designed for heavy textured-primitive usage. You could almost argue that it doesn't need to be, because it has such a large triangle-throughput capability that the texels on an object could be modelled with coloured triangles rather than textures. How feasible that is, I am not sure.
As soon as developers get to grips with the PS2 and understand how to make it perform, the impressive stuff will flow. The PS2 requires careful programming to extract the most from all the parallel engines in the unit, and the art of doing that will take time to mature. Just look at the PSX stuff still coming out: would you have thought it possible on a PSX a couple of years ago?

Originally posted by Dan
Also, you talk about memory problems, but the internal memory gives it a nudge of speed by bypassing the GPU altogether...

"Lightspeed Memory Architecture implements four independent memory controllers that not only communicate with the graphics processor but with themselves as well. In theory, this "crossbar" memory architecture can be up to four times faster than previous designs by being able to move smaller amounts of data in 64-bit blocks rather than tying up the entire 256-bit capacity of the memory bus when it's not needed. " -Nvidia

This is only effective with the card's onboard memory. So as long as it is swapping between its own 64/128/256 MB of memory, the memory argument is moot.
No, the memory argument is far from moot :) Their memory "crossbar" is a good thing, of course. But is it radically different from what Matrox implemented with their "dual independent buses", which split the bus into two smaller, independent buses? nVidia have extended it to 4 smaller buses. A logical progression, to me :)
Plus, what this really does for the system is allow it to approach the theoretical maximum memory bandwidth more often. It doesn't offer more bandwidth; it just uses the bandwidth that is available more efficiently.
One thing to note from that nVidia quote is that they talk about a 256-bit bus. Well, technically, that is not true. What they are really describing is a 128-bit bus running at DDR (double data rate). It doesn't take much for someone to look at the figures and be led to believe they have a 256-bit bus running at DDR (say, 460MHz). Marketing people suck; they deliberately skew the figures, using terms such as "effective".

Anyway, I am not trying to put nVidia down; I am trying to put things in perspective. I think they are doing some incredible design and implementation work, just not as wonderfully great as they often like to make out... quite often, sometimes! ;)

Originally posted by tekwiz99
:D ...I never would have thought Lewpy would tune in for a lecture, so... would the PS2's design turn out to be a big waste? And speaking of the Xbox, how is it going to perform against the PS2? (Focusing on their graphics engines.)
I don't think the PS2 design is a big waste; it just needs to be designed and programmed for, and that will take a few generations of games. The Xbox is more like a PC in architecture, so companies that are inherently PC-based can make the transition to console development far more easily. Good thing, bad thing? I don't know. Time will tell.
Graphically speaking, the Xbox is basically a GeForce3 chip with 2 more texture units per pixel pipeline. This allows 4 textures per pixel and doubles the peak texel throughput... of course, this is only true if 4 textures are being applied simultaneously.
I am not sure how they have tackled the memory bandwidth problem. Also, from what I remember, the Xbox GPU is the main memory controller for the entire console (including the CPU). This means some of the available bandwidth is spent looking after the other components in the system, but it does mean all display lists are inherently already in the GPU's local memory, as all main memory is local to it ;)
I guess they are using DDR SDRAM, and they have probably split it into banks. Then again, consoles tend to render at lower resolutions, and the GeForce3 has on-chip FSAA capabilities rather than super-sampling, so all these factors will help lower its memory bandwidth requirements.
 

·
Registered
Joined
·
124 Posts
Wow, Lewpy, with all the info you have on the Xbox and the PS2, can you spare some of the techno-bull you have stored up on the Dreamcast?
 