Originally posted by MukiSama
BTW, as long as you're here, just how hard izzit to do dem motion blur effects in CC? I'm just curious as to know what kinda data needs to be sent and why it beats most gfx cards to the ground when it's forced in certain plugins...
Note: I will only talk authoritatively about Glide
The motion-blur effects are normally achieved by using the previous frame-buffer as a texture in the current frame. Glide treats the frame-buffer and texture memory as two separate banks of data. You cannot directly reference one bank of memory from the other, which means you cannot directly use the previous frame-buffer as a texture in the next frame. If the PSX game wants to do this kind of activity, I need to do extra work to cater for it.
There are two ways to handle this:-
1) When the PSX makes a reference to a previous frame-buffer, copy that frame-buffer from the 3dfx card to CPU main memory, colour convert it to the PSX colour space (BGR1555), scale it to the correct size (hardware GPUs normally run at increased resolution), and then upload it to the 3dfx card as texture memory
2) Draw the previous frame-buffer using software routines to main memory in the correct PSX colour space, and upload that to the 3dfx card as a texture.
Although (1) only uses the hardware for rendering the image, the cost in speed of downloading the frame-buffer from the 3dfx card is prohibitive. Take a screen-shot and see how it causes the game to pause: that pause is the frame-buffer being read back from the graphics card. At 1024x768x32bit colour (as I tend to run my Voodoo5), the pause is maybe 1-2 seconds. 1-2 seconds for a frame that is supposed to take 1/60th of a second is a joke.

It just isn't a viable solution.
The solution in (2) is the compromise: you are effectively running two GPUs simultaneously, the hardware rendering GPU and a software rendering GPU. You just never directly see the output of the software GPU; its work is behind the scenes. Of course, running both GPUs at the same time means it is going to run slower than the slower of the two would individually, and the slower one is normally the software rendering GPU. This is not an ideal solution, and people complain about the speed loss.
So what is there to do?
The only thing left is some kind of compromise. Life is full of compromises.
The way I tackled it was to offer an option that detects when any of the above is necessary and then automatically switches on the code to handle it. It requires both solutions to be used in conjunction with each other.
Why both? Well, if the previous frame is required as a texture source, it is too late to just switch on software rendering: that frame has already been rendered by the hardware. So I copy that frame from the 3dfx card, convert/resize it, and use it as the texture source. Subsequent frames are handled by the software routine method.
Of course, this causes a one-off pause at the beginning of the effect, and a slow-down in the rendering of the rest of the frames, while the effect is happening.
Ideally, we would have infinitely fast software GPU routines, so that they could run all the time in parallel with the hardware rendering system, but we don't.
If someone were to write super-fast software routines (and super-stable too!!) and send me the code, I would certainly add them to my plugin.
