This sentence seems to lead your whole argument ad absurdum, because you are perfectly right about that. No, theoretical numbers never reflect the real power of a system; it's the technology that matters. What's a superior gfx card worth when the CPU behind it lacks performance? An RSX built into a PC would never reach the benchmark values it would reach if it were placed in a PS3, because x86 technology SUCKS! It is old and inflexible; its only advantage is its good performance in applications. Guess why almost every PC has its own little supercomputer inside called GeForce or Radeon or whatever...
Oh my god... benchmarks such as 3DMark03 are wrapped so they don't rely on the CPU subsystem at all; they avoid CPU calls altogether. This does not change my point at all. The graphics capabilities are "identical": the ROP setup (pixel fill), the pipeline ALU arrangement (dual ALUs with one shared texture address unit), and the bandwidth concerns. We are talking about the graphics capabilities of the rendering pipeline of the RSX. As far as bottlenecks go, it will share many similarities with the current GeForce 6/7 technology because it has a similar pipeline configuration. If you guys can't answer this singular point I have made, this discussion is going nowhere. It's one-sided and ridiculous.

"possible" and "very likely" but in the end, there is none to be found >_>, since PC "platform" is not a console. thus, i think if history repeats itself (i see no reason why it shud not), something very similar will happen whereby a top-end pc today will never be as efficiently used as a ps3 and especially not 5-6 years down the road.

Stuff like Unreal Championship II, Chronicles of Riddick, Conker (more in the screenshots section for this one), and especially the titles on PS2 that push huge poly numbers and effects simply have not seen PC releases that will run on a computer that came out 1 to 1.5 years after this current generation's consoles did. Running PS2 GT4 through HDTV mode gets you something very similar to this screenshot.
You know Chronicles of Riddick is available for the PC, and its minimum requirement is a 1 GHz machine. Run it at Xbox resolutions and it can be played just fine on a minimum-requirement machine. Again, you didn't answer my concerns, nor did you answer what I said. In a year's time we will have PS3-quality graphics on the computer. There are tech demos available now and in the coming months which will illustrate this. Your "can run better on a console than it does on a machine" point is moot. Completely irrelevant, because by the time the thing is actually released, the hardware will be obsolete. Which is exactly contrary to the argument presented earlier that a PS3 will provide a better visual experience than PC hardware currently available. It's fictitious and incorrect.

PCs suffer from the least-common-denominator problem: they are far more widespread, ranging from high end to low end. Given the right opportunities and the right foresight for a development platform ((a 2.6 GHz machine with a GeForce 6 or higher)), we'd have far more beautiful graphics than we do today.

Now, the PS3 is going to be a fine gaming machine. But as a piece of hardware, it's not a revolutionary set of technology. As a matter of fact, as much as it pains me to say this, the Xenon core is probably more revolutionary from a technology standpoint, though perhaps not from a practical one.

Yeah, and check how it looks compared to on the Xbox (I can understand you didn't comment about it as far as processor and memory go; you know as well as I do it wouldn't run very well on a 733 MHz Celeron with 64 MB RAM, no matter what graphics card you had... because graphics capability is not only about the graphics card, it's also about the architecture and other components, especially when it comes to games).
You missed the point of my argument. A GeForce 4 MX is an "inferior" set of technology compared to what is available in the Xbox. The Xbox GPU consists of a standard array of 1.1-capable pixel shaders with dual vertex units. The GeForce 4 MX lacks any shading capabilities, and all vertex/geometry work is offloaded to the CPU. The fact that the Xbox can run the game at low detail better than a GeForce 4 MX doesn't show me anything. It just shows that Doom 3 is a scalable engine that can run on ultra-low-end hardware.

With all of this you're just confirming what I said about consoles and developing for specific hardware.
No I'm not. I'm pointing out that Source and Doom 3 began their development back in the early GeForce 3/GeForce 2 era, and the engines were developed with those graphics capabilities in mind. The games themselves had a few additional features tacked on, but they are not a showcase of a modern computer's power either. Give a game a proper development foundation and it will exceed what you have seen today and could easily be comparable. The newer game engines being worked on now are based upon the newer technology. Non-shader-capable hardware is starting to be buried from a developer's point of view, and long programmable shaders are starting to become a minimum requirement.

What I was trying to say, ChrisRay, is that the way you compare the graphics capabilities is not very valuable, because the entire architecture makes a machine and not only the graphics processor. It doesn't matter if you have a GeForce 7800 GTX if everything is CPU limited, or the memory isn't fast enough, etc. As far as I see it, you compared your system directly to the PS3. I guess I misunderstood you, though; if you only meant as far as ROP/ALU/fillrate on the graphics card goes, your system might beat it. I don't think it will in TPS; that depends on a lot more than just the graphics processors.

If you don't believe me, try running any standard application that counts TPS and see how much you get. (For example, just open a mesh with a 3D program without any shaders or things like that to make the comparison unfair, and see how much you get.)
You keep bringing up the term "CPU bound". Whether you realise it or not (and I personally don't think you have any idea what you're talking about), this is not 4 years ago, when CPUs shared functions with the GPU. SM 3.0-capable hardware is fully capable of performing geometry, branching, tessellation, and specular effects, all without the need for CPU intervention. The games in the PS3 demos are bound by the graphics subsystem of the demo. The PS3's graphics capabilities will be bound by the capabilities of the RSX, and PS3 games properly coded to use shaders will once again be bottlenecked by the capabilities of the card, not the Cell architecture. The same goes for an x86 system properly coded to use a GeForce 6/7 graphics card.



What I was trying to say, ChrisRay, is that the way you compare the graphics capabilities is not very valuable
This is absurd. It's entirely valuable. This is how we judge the graphics capabilities of a computer system! Synthetic capabilities are a baseline for judging throughputs and theoretical maximums. We already have the capability to judge the shader output of an RSX system. The same bottlenecks will apply.
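To make "theoretical maximums" concrete, here is a rough back-of-the-envelope sketch (C++) of how those throughput ceilings are usually derived from unit counts and clock speed. The figures used (24 pixel pipes and 16 ROPs at 430 MHz for the 7800 GTX, 550 MHz for the RSX) are the commonly reported ones from the time, not confirmed final RSX specs:

[code]
// Back-of-the-envelope theoretical throughput, the kind of number a
// synthetic benchmark tries to approach. Unit counts and clocks below are
// the commonly cited G70/RSX figures and may not match final hardware.
#include <cstdio>

int main() {
    const double g70_clock_mhz = 430.0;  // GeForce 7800 GTX core clock
    const double rsx_clock_mhz = 550.0;  // reported RSX core clock
    const int rops = 16;                 // pixels written per clock
    const int pixel_pipes = 24;          // textured pixels per clock

    // Pixel fill rate = ROPs * clock; texture fill rate = pipes * clock.
    std::printf("G70 pixel fill: %.2f Gpixels/s\n", rops * g70_clock_mhz / 1000.0);
    std::printf("G70 texel fill: %.2f Gtexels/s\n", pixel_pipes * g70_clock_mhz / 1000.0);
    std::printf("RSX pixel fill: %.2f Gpixels/s\n", rops * rsx_clock_mhz / 1000.0);

    // Clock scaling alone: 550 / 430 ~= 1.28, the "28% faster" figure.
    std::printf("Clock ratio: %.2f\n", rsx_clock_mhz / g70_clock_mhz);
    return 0;
}
[/code]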


which anyone who knows anything about multi-processor systems knows will be impossible (it's the same with using multiple GPUs: you won't reach the peak values, although I realise you just might for the fill rate; not going to argue with you on that one)
No GPU is 100% efficient. Neither is any CPU. But the same graphics rendering bottlenecks will apply, because the architectures are nearly identical. Also:


Cg is an HLSL clone. It's a shading language, almost identical to the shading language used in DirectX 9.0. No shading language is bound by an API; it is, however, bound by its compile target.
 
Now, the PS3 is going to be a fine gaming machine. But as a piece of hardware, it's not a revolutionary set of technology. As a matter of fact, as much as it pains me to say this, the Xenon core is probably more revolutionary from a technology standpoint, though perhaps not from a practical one.
You might know some of your graphics, but as proven by what you've written, you have a lot to learn about other hardware. 3DMark is a very poor example because it doesn't have much to do with games; on top of that, you'll see very poor performance in it if you have a slow CPU, and the latest one does physics. Yes, your precious GPU can't do that... that's for the CPU to do; see how your AMD 64 performs against the 3x7 Cell, it'll get crushed. OOO, but why am I not targeting the processing pipeline of the graphics cards, because that's what we are discussing, right? Well, read below:

If you guys can't answer this singular point I have made, this discussion is going nowhere. It's one-sided and ridiculous.
How is asking you what kind of TPS you get with your setup not about the rendering pipeline?

Furthermore, I find it kind of strange how you say this and then in the rest of the post go ahead and bash all the hardware of the PS3, not just the graphics processor.

Now, the PS3 is going to be a fine gaming machine. But as a piece of hardware, it's not a revolutionary set of technology. As a matter of fact, as much as it pains me to say this, the Xenon core is probably more revolutionary from a technology standpoint, though perhaps not from a practical one.
Yeah sure, that's why IBM, Sony, and Toshiba spent so many years developing Cell, to make something that's less revolutionary than connecting 3 PowerPCs; and that's also why it's quickly being adopted in medical applications and other fields where a lot of power is needed. Think again about this one.
 
How is asking you what kind of TPS you get with your setup not about the rendering pipeline?

Furthermore, I find it kind of strange how you say this and then in the rest of the post go ahead and bash all the hardware of the PS3, not just the graphics processor.
You're going to have to inform me what the abbreviation TPS stands for, and I will tell you if I think it's relevant.

You might know some of your graphics, but as proven by what you've written, you have a lot to learn about other hardware. 3DMark is a very poor example because it doesn't have much to do with games; on top of that, you'll see very poor performance in it if you have a slow CPU, and the latest one does physics. Yes, your precious GPU can't do that... that's for the CPU to do; see how your AMD 64 performs against the 3x7 Cell, it'll get crushed. OOO, but why am I not targeting the processing pipeline of the graphics cards, because that's what we are discussing, right? Well, read below:
You've got to be kidding me. 3DMark is a technology designed to push the graphics rendering capabilities of a system. The point of the benchmark is to test the bottlenecks of a graphics setup.


that's for the CPU to do; see how your AMD 64 performs against the 3x7 Cell, it'll get crushed. OOO, but why am I not targeting the processing pipeline of the graphics cards, because that's what we are discussing, right? Well, read below
Irrelevant to the actual graphics rendering capabilities, which is what I am discussing. You elude topics better than anyone I know, Chankast. You are wasting my time. The physics/threads are not relevant to the visual capabilities of the PS3's graphics chip. If you are incapable of discussing the capabilities of the RSX running real-time graphics, please say so now and quit wasting my time with such irrelevant material.
 
You know Chronicles of Riddick is available for the PC, and its minimum requirement is a 1 GHz machine. Run it at Xbox resolutions and it can be played just fine on a minimum-requirement machine.
Well, that takes care of one game (provided you are able to run exactly what the Xbox runs on the PC version... but I doubt any consumers like you and myself will know exactly how that would be, now would we...). But if we compare Halo/Halo 2 running at 1080p on Xbox with Halo/Halo 2(?) running on a 1 GHz P3, 128 MB RAM, 64 MB GeForce 3 with WinXP (this is already close to 4 times as expensive as an Xbox at launch... we really should compare with a 733 Celeron with 64 MB total RAM shared with the GeForce 3 processor...)...

Again, you didn't answer my concerns, nor did you answer what I said.
? Did you direct anything at me? >_> I'm sorry, but I didn't think you did o_O. Unless you think this is an everyone-vs-ChrisRay discussion >_<

In a year's time we will have PS3-quality graphics on the computer. There are tech demos available now and in the coming months which will illustrate this. Your "can run better on a console than it does on a machine" point is moot. Completely irrelevant, because by the time the thing is actually released, the hardware will be obsolete. Which is exactly contrary to the argument presented earlier that a PS3 will provide a better visual experience than PC hardware currently available. It's fictitious and incorrect.
I don't see how a 2.6 GHz machine with a GeForce 6800 will ever produce the same graphics quality output as a PS3 or an Xbox. And I don't think my point is "moot" or "irrelevant". I believe we have already been pointing out the fact that when limited to a single machine with fixed specs, developers are able to push the hardware to close to its maximum capacity.

Given what you yourself said about the widespread specs in the PC industry (with the differences even in CPU architecture, GPU architecture, etc. ... with Athlon XPs not supporting SSE2, different motherboard chipsets, etc., it would be fun to see someone attempt to squeeze out power through hardcoding...), it would be surprising to me if the rig you have in your sig right now is able to achieve in 5 years' time anywhere close to what PS3 games are going to be efficiently pushing in their second or third round of games.

I think you didn't address what I wrote in regard to this very aspect, which was quoted :emb:. The main thing is: OK, suppose we compare a PS3 game, powered by a chip which you have proposed to be very similar to but slightly more powerful than a G70, with a PC game 5 years down the road powered by your current SLI setup. Now, what are the chances that that game will be running on your setup without wasting anything? If game developers all acted like Carmack and took, um, "X" (or "n"... whichever you prefer) years to develop games, then... MAYBE. On the other hand, a second-gen game on the PS3 will probably already be running much more efficient paths than said PC setup, and not only that, will probably incorporate a lot of hardware "hacks" which would be immediately badmouthed by "hardware gurus" at certain sites on the internet...

BTW, where do you get the info on the RSX? Aren't you another person who feeds off the speculation/specs that companies release, like a lot of other people on the internet?
 
Well, that takes care of one game (provided you are able to run exactly what the Xbox runs on the PC version... but I doubt any consumers like you and myself will know exactly how that would be, now would we...).
Well, actually, Unreal Championship is based on the same Unreal Tournament engine found in UT2003 and UT2004. So...

Given what you yourself said about the widespread specs in the PC industry (with the differences even in CPU architecture, GPU architecture, etc. ... with Athlon XPs not supporting SSE2, different motherboard chipsets, etc., it would be fun to see someone attempt to squeeze out power through hardcoding...), it would be surprising to me if the rig you have in your sig right now is able to achieve in 5 years' time anywhere close to what PS3 games are going to be efficiently pushing in their second or third round of games.
I don't claim I can predict the future, at least not 5 years into it. I can simply give you some insight into what I know about the graphics capabilities of current setups with the software coming in the next year. Consoles do have more lifetime as gaming machines, but that's usually because PC users are prone to upgrade. Right now PCs are "far" beyond the capabilities of the PS2, and produce much better graphics as well. But this is just because the PS2 is obsolete and outdated. Same with the Xbox. I completely agree that consoles are usually a better investment if you're into platform games and RPGs, and that's all. Simply put, most PC enthusiast users will not allow themselves to get that outdated. But here's some food for thought: Doom 3 can run on a Voodoo2 with the shader functions disabled.

BTW, where do you get the info on the RSX? Aren't you another person who feeds off the speculation/specs that companies release, like a lot of other people on the internet?
I can't answer your question in its entirety, as I have certain restrictions preventing me from doing so. Besides, Nvidia themselves have said the RSX was based on their upcoming graphics card ((aka the GeForce 7800 GTX, which is available now)). We have the clock rates and we know the ALU setup of the RSX; what we don't know is the actual vertex capabilities and how it will interact with Cell. Yes, there are some differences. ((There have to be, for proper Cell communication.))
 
Good lord, you're fast... I was about to edit the last post so my last comment wouldn't seem as offensive (well, it seemed offensive when I read it again, when I completely did not mean it that way).

What I meant is: isn't it entirely possible for the following to be the case? I quoted from an AnandTech source (but I don't know what the general opinion is around the "hardware gurus" these days):

The most likely explanation is attributed to nothing more than clock speed. Remember that the RSX, being built on a 90nm process, is supposed to be running at 550MHz - a 28% increase in core clock speed from the 110nm GeForce 7800 GTX. The clock speed increase alone will account for a good boost in GPU performance, which would make the RSX “more powerful” than the G70.

There is one other possibility, one that is more far fetched but worth discussing nonetheless. NVIDIA could offer a chip that featured the same transistor count as the desktop G70, but with significantly more power if the RSX features no vertex shader pipes and instead used that die space to add additional pixel shading hardware.

Remember that the Cell host processor has an array of 7 SPEs that are very well suited for a number of non-branching tasks, including geometry processing. Also keep in mind that current games favor creating realism through more pixel operations rather than creating more geometry, so GPUs aren’t very vertex shader bound these days. Then, note that the RSX has a high bandwidth 35GB/s interface between the Cell processor and the GPU itself - definitely enough to place all vertex processing on the Cell processor itself, freeing up the RSX to exclusively handle pixel shader and ROP tasks.
If this is indeed the case, then the RSX could very well have more than 24 pipelines and still have a similar transistor count to the G70, but if it isn’t, then it is highly unlikely that we’d see a GPU that looked much different than the G70.
>< THAT'S what I meant in that last part. To be honest, unless we're the actual programmers/console designers (and even then... it's questionable... who would have known GT2 would run on something with 3 megs of RAM in total?), it would be very, very difficult to gauge the potential of a console. Now, if we extend this to speculation about a machine whose full specs we don't even know and whose hardware hasn't even been completely finalized, I believe all we can do is look at history and see what has happened.

In regard to PCs being upgradable to match a console's "power" in the upcoming years (in what way do we measure this, BTW?)... well, that's inevitable.
 
I'm not sure I entirely agree with AnandTech's conclusion there. If anything, I predict consoles are more likely to take advantage of high geometry counts. I mean, definitely not in the millions, but 50,000+ polygons like Unreal is entirely possible.

If you look at how many engineers were assigned to the development of the RSX ((a fairly trivial number in comparison to the G70/NV4x)), it seems unlikely that there are any radical changes to the pipeline. Nvidia did not expend nearly as many resources on PS3 development as they did on the PC counterpart. This is actually somewhat parallel to how ATI expended its resources.

In regard to PCs being upgradable to match a console's "power" in the upcoming years (in what way do we measure this, BTW?)... well, that's inevitable.
Well, you can't really measure this. As GPUs continue to develop they will become more programmable. I was just using a historical reference in regard to GPU performance trends, as you were with console software timelines in comparison :)
 
In regard to PCs being upgradable to match a console's "power" in the upcoming years (in what way do we measure this, BTW?)... well, that's inevitable.
I would have to say at launch it would be equivalent to a high-end PC at a much lower price, and around a year after the console's release it would be around the same as a mid-range PC.
 
There's also the fact that a PC is dedicating a lot of resources to running the OS and other programs that may be running at the time, whereas a console usually has just a small BIOS and has pretty much ALL of its resources to devote to the one game it's playing at the moment. That right there, I would think, would make a load of difference in the quality of games a console can run versus what a PC can run.
 
Don't bother with ChrisRay. This is the guy who gets a 7800 GTX SLI setup but doesn't want to upgrade his CPU because he's "not made out of money." He posts threads with "The way it's meant to be played" in the title, and he can't get it into his head that consoles can be just as good as PCs. Just like every argument that happens here, neither side sees the other's point. So I think we should just wait for the damn consoles to come out.
 
First of all, Darian, the 7800 GTX cards are "review" samples. Do you understand the concept of review samples? Upgrading my CPU is not a feasible possibility right now. The idea that you tell me how to "upgrade" my system is just absurd.

Secondly, if people like Fivefeet are allowed to post threads such as "My 6800 Ultra screenshot thread" that span 20+ pages, I am perfectly within my rights to post a bragging thread in the screenshot forum.

Imbecile.





and he can't get it into his head that consoles can be just as good as PCs
If you had actually read the thread, which you didn't, you'd see that I made direct comparisons between the 7800 GTX's hardware capabilities and those of the RSX and its pipeline.

Again. Reading comprehension for the win. You should try it sometime.
 
Don't bother with ChrisRay. This is the guy who gets a 7800 GTX SLI setup but doesn't want to upgrade his CPU because he's "not made out of money." He posts threads with "The way it's meant to be played" in the title, and he can't get it into his head that consoles can be just as good as PCs. Just like every argument that happens here, neither side sees the other's point. So I think we should just wait for the damn consoles to come out.
o_O I thought we were actually getting to understand each other more, though. I can gather that a person who just spent over $1000 on a pair of peripherals can be biased, but that's the same for everyone, to be honest.

From what I understand, this is one of those threads where we can all just try to speculate, share our respective views about what could happen, etc. I mean, the thread is called "The Playstation 3 Thread!"... it's a thread about something no one really knows enough about to really say much. You have to expect a lot of BS'ing...
 
D.D. said:
o_O I thought we were actually getting to understand each other more, though. I can gather that a person who just spent over $1000 on a pair of peripherals can be biased, but that's the same for everyone, to be honest.

From what I understand, this is one of those threads where we can all just try to speculate, share our respective views about what could happen, etc. I mean, the thread is called "The Playstation 3 Thread!"... it's a thread about something no one really knows enough about to really say much. You have to expect a lot of BS'ing...
You're right, I overreacted, and I apologize. I'm just getting sick of the endless arguing where no one agrees on anything. I did read the thread, and I reacted just like everyone else does, blindly and stupidly. Hopefully more people can admit they're wrong like me. I did read it, and I did comprehend it, I just didn't agree with it. I'm surprised there are still people who sink to insulting and saying "you're wrong because I'm right." I do understand the architecture of the things we're talking about, probably better than a lot of people here, and I know that in this case the console's specs don't matter nearly as much as the PC's do. Hell, the Xbox had a 733 MHz PIII, but it's probably equivalent to my P4 1.7 with a Radeon 9600 in all of the games that I have played, and much better in certain cases, because the games were built for the Xbox. But I still think a post that I made a while ago in the Xbox 360 thread shows my honest opinion and complete bias.

Revolution murders xb0x 1.5 and ps3.14159265 it has 10 4 ghz processors with 9 cores and 2 dual core 7800gt's in sli with 2 gigs of gddr5 and redram of 8 gigs. it also has a linux os and a candy dispenser. Rev pwnz the halo3 box and psrpg.
I must go now, I'm off to "review" a hooker. :wave:
 
I apologise too. I came here earlier to explain my position and my understanding of the G70/RSX hardware, not to put the console down. But a lot of people have hyped the console beyond all recognition, and when I get told "you don't know anything about hardware" when it's my secondary job to study and work with it, I get a little annoyed.

All I ask is that people look at things from a technical point of view when comparing hardware. The beauty of software is that it's scalable, and it can be scaled down from hardware to hardware based upon its implementation.
 
Just to conclude:

Your GF7800GTX SLI setup might actually be faster than the RSX ALONE. Your GF7800GTX SLI setup in your Athlon 64 3800+ @ 2.4 GHz might also be faster than the RSX would be in your Athlon 64 3800+ @ 2.4 GHz. Everyone (or at least I) would agree with that.

But: your GF7800GTX SLI setup in your Athlon 64 3800+ @ 2.4 GHz WON'T be faster than the RSX in the PS3, because of several factors:

- The PS3 dedicates its hardware to the game only, something no PC would ever be capable of; it will only get worse considering the effort the next Windows is going to spend on DRM and other useless crap.
- PS3 developers can optimize their games for one and only one machine; PC devs will always have to consider a lot of different configurations, making special optimizations almost impossible.
- The PS3 is programmed directly; PC devs always rely on DirectX or OpenGL, so they can't use special tricks (C64, anyone? :D)
- The Cell actually DOES assist the RSX in graphics computing, because it uses vectorized computing, which the PC can't. Considering there are seven cores plus the RSX capable of graphics, you could even do raytracing in real time.
- There are still no final specifications of the RSX out (officially); who knows what it might be capable of in reality.

P.S.: Chris, you seem to be an NVidia insider ... sort of. Can you tell me how much 3dfx know-how aside from SLI is used in current NVidia GPUs?
And with benchmarks I didn't mean programs like 3dMark or something, but the real game performance if you tried to measure it.
 
Let me apologise as well, as I'm certain you do know about hardware and I can be too rude sometimes. I'm a game developer and I have a major in science, with game development as the subject, so I too get upset when someone tells me I don't know what I'm talking about.

Your graphics card setup is something to brag about, and it's a very powerful one, and it'll beat the RSX in pure graphics processing power (after all, Sony even said so themselves: the RSX is about as powerful as two GeForce 6800 Ultras).

What I'm trying to explain is that the graphics card alone doesn't make the graphics in games. I know very well how a slower CPU and a small amount of RAM with a fast graphics card will get worse performance compared to a mid-range CPU with a medium amount of RAM and a mid-range graphics card.

As far as physics goes, it's part of the quality in games as well. If you look at Half-Life 2, a lot of its quality comes from cool things related to physics, and it's very much related to graphics; I consider stuff moving, how things are animated, lip-synching and emotions as things that enhance the graphics quality too. I think this is partly where it went wrong, as your comparison was purely between the GPUs and nothing else. This is also why I said it's not very relevant for games to only compare the GPUs and not the complete system. I guess the folks at 3DMark feel the same way, and therefore included physics in 3DMark 2005 (yes, the liquid-filled tank exploding, for example, that's done by physics), calculated by the CPU but also depending on how fast it can communicate and get the GPU to react to the physics.

Better graphics also takes a toll on the CPU. If you have a forest of 20,000 trees, the CPU has to make many calculations, for example in regard to culling. It also needs to determine what shaders should be in use at a certain time and how certain polygons should get processed. Shadows are not done entirely on the GPU either: as the light moves, real-time shadows will change, and how they should be displayed by the graphics card has to be recalculated by the CPU.
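As a concrete illustration of the kind of per-frame CPU work being described, here is a minimal C++ sketch of bounding-sphere frustum culling for a large number of tree instances. The Vec3/Plane/Tree types and the counts are illustrative only, not taken from any particular engine:

[code]
// Minimal sketch: CPU-side frustum culling for many tree instances.
// Only the survivors would be submitted to the GPU as draw calls.
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };            // plane: dot(n, p) + d = 0
struct Tree  { Vec3 center; float radius; };  // bounding sphere per tree

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Runs on the CPU every frame the camera moves. Returns how many of the
// e.g. 20,000 trees are actually visible and need to be drawn.
size_t CullTrees(const std::vector<Tree>& trees, const Plane frustum[6],
                 std::vector<const Tree*>& visible) {
    visible.clear();
    for (const Tree& t : trees) {
        bool inside = true;
        for (int i = 0; i < 6; ++i) {
            if (Dot(frustum[i].n, t.center) + frustum[i].d < -t.radius) {
                inside = false;  // sphere is completely behind one frustum plane
                break;
            }
        }
        if (inside) visible.push_back(&t);
    }
    return visible.size();
}
[/code]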

Another thing to consider is write-back (yes, you did cover it some), but as I understand it this will be one of the major strengths of the PS3. A shader right now can run a program, even a long one, but it doesn't give any information back to the CPU. For example, when I wrote an advanced algorithm for breaking windows, it would have been a whole lot faster to use the vertex shader to move all the window pieces, but if I had done this, once they landed on the floor the shader couldn't update the memory, so I had to have the CPU do it. It's very slow because write-back sucks in our cards today; just making a dynamic vertex buffer is a lot slower than a write-only one. Supposedly this will change with the PS3, but we can't be sure until it's released.
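For what it's worth, here is a rough Direct3D 9-flavoured sketch of the contrast being described: a static, write-only vertex buffer versus a dynamic buffer the CPU has to re-fill every frame because the GPU's results cannot be read back cheaply. It assumes an already-initialized IDirect3DDevice9* and a hypothetical Vertex struct; it is not the poster's actual window-breaking code:

[code]
// Sketch only: contrasts a write-once static buffer with a dynamic buffer
// the CPU re-fills every frame (e.g. for CPU-simulated debris).
// Assumes an initialized IDirect3DDevice9* 'device'; error handling omitted.
#include <d3d9.h>
#include <cstring>

struct Vertex { float x, y, z; DWORD color; };
const DWORD kVertexFVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

// Static geometry: filled once, never read back. The fast path on this hardware.
IDirect3DVertexBuffer9* CreateStaticVB(IDirect3DDevice9* device, UINT count) {
    IDirect3DVertexBuffer9* vb = nullptr;
    device->CreateVertexBuffer(count * sizeof(Vertex),
                               D3DUSAGE_WRITEONLY,   // GPU-only, no CPU read-back
                               kVertexFVF, D3DPOOL_DEFAULT, &vb, nullptr);
    return vb;
}

// CPU-animated geometry: the buffer must be created with
// D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY and re-uploaded every frame,
// because the GPU result can't be written back to the CPU cheaply.
void UploadDebris(IDirect3DVertexBuffer9* dynamicVB,
                  const Vertex* cpuSimulated, UINT count) {
    void* dst = nullptr;
    dynamicVB->Lock(0, count * sizeof(Vertex), &dst, D3DLOCK_DISCARD);
    std::memcpy(dst, cpuSimulated, count * sizeof(Vertex));
    dynamicVB->Unlock();
}
[/code]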

I realise the post is quite long now, yet I've only scratched the surface of how the CPU is involved in graphics processing. Lastly, TPS is perhaps the most common way of measuring performance for us developers; it's much better than framerate for comparing two different games, for example. It stands for Triangles Per Second, so basically it shows how many polygons the game manages to render within a second. A game with twice as high a TPS as another will display twice the number of polygons within a second. These days graphics is not only about how many polygons are displayed, but it's still one of the most important factors.
 
ChankastRules said:
I realise the post is quite long now, yet I've only scratched the surface of how the CPU is involved in graphics processing. Lastly, TPS is perhaps the most common way of measuring performance for us developers; it's much better than framerate for comparing two different games, for example. It stands for Triangles Per Second, so basically it shows how many polygons the game manages to render within a second. A game with twice as high a TPS as another will display twice the number of polygons within a second. These days graphics is not only about how many polygons are displayed, but it's still one of the most important factors.
So this is much more an engine quality measuring factor, right? Like the good old "Quake3 better than Unreal" comparison?
 
It's good for that; however, it's also included in the specs for graphics cards. For example, my Radeon 9800 XT was advertised as 412M+ TPS, which is more or less false marketing; the highest I've reached in any application is about 200M TPS, and that was a very biased app. I'd see the advertised value as the absolute maximum your graphics card could reach under extreme circumstances :D It's interesting to note that they say a completely realistic scene would require about 80 million polygons; if you're to run this at 60 FPS you'll need 4800M TPS. So when graphics cards are able to process that massive number of polygons, and not just under extreme circumstances, that's when we won't see much progress anymore (at least not in the polygon area).
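The arithmetic behind those TPS figures is simple enough to sanity-check. A quick C++ sketch (the 250,000 triangles-per-frame value is a made-up example; the 80-million-polygon figure is the one quoted above):

[code]
// Quick arithmetic behind the TPS figures discussed above.
#include <cstdio>

int main() {
    // Measuring TPS in practice: triangles submitted per frame * frames per second.
    const double trisPerFrame = 250000.0;   // hypothetical scene complexity
    const double fps = 60.0;
    std::printf("Measured: %.0fM TPS\n", trisPerFrame * fps / 1e6);   // 15M TPS

    // The post's worked example: 80M polygons per frame at 60 FPS.
    const double realisticScene = 80e6;
    std::printf("Needed:   %.0fM TPS\n", realisticScene * fps / 1e6); // 4800M TPS
    return 0;
}
[/code]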
 