Originally posted by fivefeet8
You guys need to check this thread...
http://boards.psxemu.com/showthread.php?threadid=1353&highlight=million
Lewpy gives some great insight into the architecture of the PS2 and GF3..
True, although there are a few things you have to remember when reading this.
The first is his argument about AGP 4x's limit holding back the GeForce3. That's true if you stream everything through AGP from the CPU's main memory into the card, but most developers are going to avoid that situation by keeping as many of their textures and vertices on the videocard as possible; this avoids the insane performance hit of running through AGP, leaving only the on-card performance barriers to worry about. Seriously, when you can compress your textures, and given the size of the average mesh's vertices, you can easily fit most of a game's textures, index buffers and vertex buffers in the 64MB that's on the card.
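To put some rough numbers on that "easily fit in 64MB" claim, here's a back-of-envelope sketch. Every figure (texture count, vertex size, scene size) is a made-up illustrative assumption, not a measurement from any real game; DXT1's 4 bits per texel is the one real number in it.

```python
# Back-of-envelope VRAM budget; all scene figures are illustrative assumptions.

MB = 1024 * 1024

def texture_bytes(width, height, bits_per_texel):
    """Size of one texture at the given bit depth (no mip chain)."""
    return width * height * bits_per_texel // 8

# A 512x512 texture: 32-bit RGBA vs. DXT1-style compression (4 bits/texel).
uncompressed = texture_bytes(512, 512, 32)   # 1 MB
compressed = texture_bytes(512, 512, 4)      # 128 KB -- 8:1 vs. RGBA

# Hypothetical scene: 200 such textures, 100k vertices at 32 bytes each,
# 300k indices at 2 bytes each.
textures = 200 * compressed
vertices = 100_000 * 32
indices = 300_000 * 2
total = textures + vertices + indices

print(f"textures: {textures / MB:.1f} MB")   # 25.0 MB
print(f"vertices: {vertices / MB:.1f} MB")   # 3.1 MB
print(f"indices:  {indices / MB:.1f} MB")    # 0.6 MB
print(f"total:    {total / MB:.1f} MB of 64 MB")
```

Even a generous scene lands well under the 64MB ceiling once the textures are compressed, which is the whole point: the AGP bus only matters if you blow that budget.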
(I should point out that some meshes do need to be deformed for animations and the like. The old way was to transform them on the CPU and then stream them to the video card every frame; on the GeForce3 this can now be done on-card, thanks to its highly flexible vertex shader.)
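The kind of deformation I mean is matrix-palette skinning: blend each vertex through two or more bone matrices by per-vertex weights. Here's a pure-Python stand-in for the math a vertex shader would run per vertex (the matrices and weights are made up for illustration):

```python
# Matrix-palette skinning, the deformation a vertex shader can do on-card.

def transform(m, v):
    """Apply a 3x4 affine matrix (row-major) to a 3D position."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) + m[r][3]
                 for r in range(3))

def skin(v, bones, weights):
    """Blend the vertex through each bone matrix, weighted."""
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bones, weights):
        p = transform(m, v)
        for i in range(3):
            out[i] += w * p[i]
    return tuple(out)

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shifted  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]  # translate +2 on x

# A vertex weighted half-and-half between the two bones lands half-way:
print(skin((1.0, 0.0, 0.0), [identity, shifted], [0.5, 0.5]))  # (2.0, 0.0, 0.0)
```

Because the bone matrices are just shader constants, only a handful of matrices get uploaded per frame instead of the whole transformed mesh.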
The second issue I have with Lewpy's explanation is that he doesn't seem to give the memory streaming capabilities of the PS2 much credit. From what I understand of the memory architecture, there's no need to keep all of the textures in the 4MB embedded in the GS; you could easily stream textures in from main memory and replace the ones in video memory as you go. (The texture swap could be costly, but then again, the GeForce3 has to do the same thing when it runs out of video memory, doesn't it?) Main memory on a PS2 is 32MB; I'm sure there's more than enough room in there for a few textures.
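A toy model of that streaming scheme: treat the GS's 4MB as a cache over the 32MB of main memory, evicting the least-recently-used texture when a new one won't fit. The texture names and sizes here are hypothetical; the point is just that a stable working set only pays the upload cost once.

```python
# Toy model: the GS's 4 MB as an LRU cache over textures in main memory.

from collections import OrderedDict

class TexturePool:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.resident = OrderedDict()  # name -> size, oldest first
        self.uploads = 0               # DMA transfers from main memory

    def bind(self, name, size):
        if name in self.resident:
            self.resident.move_to_end(name)  # already on-chip: free
            return
        while self.used + size > self.capacity:
            _, evicted = self.resident.popitem(last=False)  # evict LRU
            self.used -= evicted
        self.resident[name] = size
        self.used += size
        self.uploads += 1              # cost: one transfer over the bus

pool = TexturePool(4 * 1024 * 1024)
for frame in range(3):
    for tex in ["sky", "terrain", "water"]:   # hypothetical working set
        pool.bind(tex, 1 * 1024 * 1024)
print(pool.uploads)  # 3: after the first frame, the set stays resident
```

If the per-frame working set fits in 4MB, the swaps Lewpy worries about only happen when the set changes, not every frame.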
If these assumptions hold true, then the PS2 doesn't really need to keep all its textures in video memory anyway. The system's memory bandwidth is fast enough that you could also just generate textures in a coprocessor instead of yanking them directly from main memory. And if you can do that, then it's also possible to decompress textures from main memory into video memory, if you really need the extra textures and are willing to give up some computing time on a vector unit.
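To make "decompress on a vector unit" concrete, here's about the simplest scheme there is: run-length decoding. The (count, value) pair format is purely an illustrative assumption on my part, not anything the VU actually ships with.

```python
# Minimal run-length decode, a stand-in for the kind of texture
# decompression a vector unit could do on the way to video memory.

def rle_decode(pairs):
    """Expand (count, value) pairs into a flat list of texels."""
    out = []
    for count, value in pairs:
        out.extend([value] * count)
    return out

compressed = [(4, 0x00), (2, 0xFF), (4, 0x00)]  # 3 pairs -> 10 texels
print(rle_decode(compressed))  # [0, 0, 0, 0, 255, 255, 0, 0, 0, 0]
```

Note the shape of the computation: a loop whose length depends on the data itself. That detail matters for the next point.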
Of course, the GeForce3 has a pixel shader, which can be used to generate textures too, although I suspect you can't really write a decent decompression algorithm as a pixel shader, given the branching limitations (i.e., you can't branch at all) of pixel shaders.
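The closest thing those pixel shaders have to a conditional is a straight-line select: pick one of two values by comparing against a threshold, in a fixed number of instructions per pixel. A Python sketch of that idea (the hardware evaluates it per pixel with no jump at all):

```python
# The pixel shader's only "conditional": a straight-line select
# between two values, always the same instruction count per pixel.

def cnd(r0, a, b):
    """Select a if r0 > 0.5, else b -- no loop, no data-dependent jump."""
    return a if r0 > 0.5 else b

print(cnd(1.0, 3, 4))  # 3
print(cnd(0.0, 3, 4))  # 4
```

That's fine for fixed-length math, but a decompressor needs a loop whose trip count depends on the compressed data, which is exactly the control flow a branch-free shader can't express.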