Next Generation Emulation

· Registered · 1 Posts · Discussion Starter · #1
ATI today announced that their next video card (Radeon II?) will include new rendering technology (http://www.ati.com/na/pages/corporate/press/2001/4377.html) called Truform. My understanding, from this article (http://www.anandtech.com/showdoc.html?i=1476), is that in DirectX 8 and OpenGL applications this technology will convert low-poly models into rounder, higher-poly models. This could have some pretty interesting implications as far as N64 and PSX emulators go. Emulators already let us play console games at higher resolutions. Would this technology let us play, say, Zelda: MM and GT2 with more detailed models? If so, cool! Does anyone with more programming knowledge than me have any idea if this is correct? BTW - the first card supporting this won't be out till September.
 

· Registered · 116 Posts
I doubt any shape smoothing algorithms will help the Playstation emulators. In order to smooth the shape, its 3D coordinates would need to be known. But the Playstation GPU only gets the 2D coordinates of the shape on the screen, not the 3D coordinates of the shape in space.
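To make that concrete, here's roughly what a flat-shaded triangle looks like by the time the GPU sees it, going by the commonly documented GP0 command layout (the struct names are mine):

    #include <cstdint>

    // One vertex as the PSX GPU receives it: a 32-bit word holding 16-bit
    // X and Y screen positions. No Z, no W, no normal.
    struct PsxVertex2D {
        int16_t x;   // screen-space X
        int16_t y;   // screen-space Y
    };

    // GP0(0x20): opaque, flat-shaded triangle.
    struct FlatTriangleCmd {
        uint32_t    colorAndCmd;  // 0x20BBGGRR -- command byte + 24-bit colour
        PsxVertex2D v[3];         // already projected by the CPU and GTE
    };

All the 3D information was consumed before this packet was built, so there's nothing left for a smoothing algorithm to recover.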

A really high-level emulator might manage to infer the 3D coordinates, but I don't think any of the current emulators do so.
 

· Premium Member · 538 Posts
... and even so, you are asking for something to be created out of nothing.
From what I understand, the ATI tech is curved-surface rendering (could be tessellation in hardware, who knows ... well, ATI do I guess :)).
Since the PSX doesn't natively use curved surfaces, this technology is of little use. Same as the T&L engine in modern graphics cards: it may be flash and whizzy, but it doesn't "fit" in with how the PSX renders. And if it doesn't fit, it's of no use.
 

· Registered · 116 Posts
Originally posted by Lewpy
...Since the PSX doesn't natively use curved surfaces, this technology is of little use...
You could assume that the polygons in an object are meant to represent a smooth shape, and attempt to infer the "true" curve. I'm thinking of something in the direction of
http://web.mit.edu/manoli/crust/www/sigcrust.pdf

But it wouldn't be easy, and would probably look downright weird if the object was supposed to have sharp edges.

An easier (but still not easy) method would be to use a simpler heuristic. For example, you could add polygons at edges to make those edges a bit less abrupt. (I'm suggesting the 3D analog of rounded rects versus normal rectangles.)

But you'd still need the 3D representation of the objects for any trick like this, and the Playstation doesn't have it.
 

· Registered · 68 Posts
It seems to me that FSAA goes a long way towards "softening" and "smoothing" the appearance of a model on screen, regardless of the nature of it (points in space projected onto a viewing plane, curves, whatever). Good FSAA (i.e. fast FSAA) would be a bigger selling point for me than hardware tessellation.
 

· Registered · 11 Posts
Seems like it wouldn't be so hard to implement in old games, though. There were a couple of articles on Truform that I read the day the Anandtech one was posted, and one of them (might've been the one at Anandtech) said that it wouldn't take more than a single line of code to incorporate the n-patching extensions into current games. Doesn't seem like such a big deal to write a little patch.
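For instance, from what the articles describe, the Direct3D 8 side really is about one render state. A hedged sketch (I don't have the hardware to test, and the OpenGL extension would look different):

    #include <d3d8.h>
    #include <cstring>

    // N-patches are switched on through a render state: the value is a
    // float (passed as its DWORD bit pattern) giving the tessellation
    // level, and anything above 1.0 turns the feature on.
    void EnableNPatches(IDirect3DDevice8* device, float segments)
    {
        DWORD bits;
        std::memcpy(&bits, &segments, sizeof(bits));
        device->SetRenderState(D3DRS_PATCHSEGMENTS, bits);
    }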
 

· Registered · 18 Posts
First of all, the only point to this thing IS older games. Why bother with newer games if they'll probably have more than enough polygons to deal with?

It should help PSX emulators, because if all they're doing is taking 2D coordinates, how do they get wireframe data? How do they send polygons to the 3D card? The thing about this new technology is that the 3D card only needs an enabling line of code and then it fixes everything else on its own.

In order to avoid odd-looking corners, though, 90-degree angles aren't enhanced. That's the way it works. =3
 

· Registered · 116 Posts
Originally posted by MukiSama
First of all, the only point to this thing IS older games. Why bother with newer games if they'll probably have more than enough polygons to deal with?
You could use it to cut down on the number of polygons you need to send to the graphics card.

Based on the Anandtech article, it looks like the inference of curvature is based on the vertex normals. I don't know if older games send the right kind of lighting information to the card for Truform to work.
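To illustrate the kind of input that needs: here's a quick sketch (names are mine, not anything Truform-specific) of deriving per-vertex normals by averaging face normals. If a game never supplies anything like this, there's nothing for the curvature inference to work from.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }
    static Vec3 sub(const Vec3& a, const Vec3& b) {
        return { a.x - b.x, a.y - b.y, a.z - b.z };
    }

    // For each vertex, sum the face normals of every triangle touching it,
    // then normalise. 'tri' holds 3 vertex indices per triangle.
    std::vector<Vec3> averageVertexNormals(const std::vector<Vec3>& pos,
                                           const std::vector<int>& tri) {
        std::vector<Vec3> n(pos.size(), Vec3{0.0f, 0.0f, 0.0f});
        for (size_t i = 0; i + 2 < tri.size(); i += 3) {
            Vec3 fn = cross(sub(pos[tri[i+1]], pos[tri[i]]),
                            sub(pos[tri[i+2]], pos[tri[i]]));
            for (int k = 0; k < 3; ++k) {
                n[tri[i+k]].x += fn.x;
                n[tri[i+k]].y += fn.y;
                n[tri[i+k]].z += fn.z;
            }
        }
        for (auto& v : n) {
            float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
            if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
        }
        return n;
    }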


It should help PSX emulators, because if all they're doing is taking 2D coordinates, how do they get wireframe data? How do they send polygons to the 3D card?
As far as the GPU is concerned, it's just a bunch of flat triangles sent to it in a particular order. Putting the triangles in the right place is handled by the CPU and GTE.


In order to avoid odd-looking corners, though, 90-degree angles aren't enhanced. That's the way it works. =3
That's a bit of a kludge, but it could work well in some games. Imagine a 3D mech game, though -- would it really look better if the mechs' edges were rounded?
 

· Registered · 18 Posts
Good point about the mechs. Games that have that Star Fox-like ship design wouldn't get helped either.

The thing about the GPU emulator is that it's a GPU *emulator*. When the info gets into the graphics card, in OpenGL, the models ARE display lists. It's from THAT data that the card can enhance models. Maybe an option to turn it on or off would be best when the technology arrives (e.g. Chrono Cross would look nice, but Armored Core would be better off without it, and who knows what kind of glitches would arise from smoothing out models that were never designed to be smoothed out?)

- It'd be hilarious if it makes Tekken 3 look better 'n Tekken Tag on PS2, though =3
 

· Premium Member · 538 Posts
Originally posted by MukiSama
It should help PSX emulators, because if all they're doing is taking 2D coordinates, how do they get wireframe data? How do they send polygons to the 3D card? The thing about this new technology is that the 3D card only needs an enabling line of code and then it fixes everything else on its own.
Your reasoning is badly flawed, I'm afraid: I produce wireframe output by simply tracing the outline of the triangle instead of filling it, and that is done from 2D screen-coordinates, not 3D world/object-coordinates.
Let me try and break this down: the 3D part of the PSX rendering pipeline is handled by the CPU and the GTE. By the time the primitive data reaches the GPU, it has been transformed into 2D screen co-ordinate space. There isn't even any depth information for the vertices, because the PSX does not use Z-buffering or perspective-correct texturing/lighting.
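In heavily simplified floating point (the real GTE works in fixed point, and these names are only illustrative), what the CPU+GTE do before the GPU sees anything is roughly:

    #include <cstdint>

    struct ScreenXY { int16_t x, y; };

    // Roughly what the GTE's RTPS (rotate, translate, perspective)
    // operation produces for one vertex. After the divide by cz the depth
    // is gone for good: only the 2D result gets packed into a GPU primitive.
    ScreenXY projectVertex(float vx, float vy, float vz,          // object space
                           const float rot[3][3], const float trans[3],
                           float h,                               // projection distance
                           float ofx, float ofy)                  // screen offset
    {
        float cx = rot[0][0]*vx + rot[0][1]*vy + rot[0][2]*vz + trans[0];
        float cy = rot[1][0]*vx + rot[1][1]*vy + rot[1][2]*vz + trans[1];
        float cz = rot[2][0]*vx + rot[2][1]*vy + rot[2][2]*vz + trans[2];
        return { (int16_t)(ofx + cx * h / cz),
                 (int16_t)(ofy + cy * h / cz) };
    }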

The thing about the GPU emulator is that it's a GPU *emulator*. When the info gets into the graphics card, in OpenGL, the models ARE display lists. It's from THAT data that the card can enhance models. Maybe an option to turn it on or off would be best when the technology arrives (e.g. Chrono Cross would look nice, but Armored Core would be better off without it, and who knows what kind of glitches would arise from smoothing out models that were never designed to be smoothed out?)
- It'd be hilarious if it makes Tekken 3 look better 'n Tekken Tag on PS2, though =3
You are correct: the GPU plugins emulate exactly the PSX GPU. This means they only get fed the data the PSX GPU gets fed, which is in 2D screen co-ordinate space. This means that, regardless of what 3D pipeline capabilities are in your 3D card, most of them are bypassed, because they cannot work with 2D screen co-ordinates: they are designed to work in 3D world/object space.
Truform does model smoothing. It does NOT add more detail. It can't magically add detail to a model, it can only smooth it to look more natural. The concept of a Tekken 3 model being increased in detail to the level of a Tekken Tag model is the kind of thing you dream up when you've been smoking something you probably shouldn't have been smoking ;)
 

· Registered · 18 Posts
That's depressin', Lewpy... I was hoping for some nice smooth Chrono Cross models... =(

BTW, as long as you're here, just how hard izzit to do dem motion blur effects in CC? I'm just curious to know what kinda data needs to be sent and why it beats most gfx cards to the ground when it's forced in certain plugins... @_@

- If you know, of course. ^^
 

· Premium Member · 538 Posts
Originally posted by MukiSama
BTW, as long as you're here, just how hard izzit to do dem motion blur effects in CC? I'm just curious as to know what kinda data needs to be sent and why it beats most gfx cards to the ground when it's forced in certain plugins...
Note: I will only talk authoritatively about Glide
The motion-blur effects are normally achieved by using the previous frame-buffer as a texture in the current frame. Glide treats the frame-buffer and texture memory as two separate banks of data. You can not directly reference one bank of memory from the other. This means you can not directly use the previous frame-buffer as a texture in the next frame. If the PSX game wants to do this kind of activity, I need to do extra work to cater for it.
There are two ways to handle this:-
1) When the PSX makes a reference to a previous frame-buffer, copy that frame-buffer from the 3dfx card to CPU main memory, colour-convert it to the PSX colour space (BGR1555), scale it to the correct size (hardware GPUs normally run at increased resolution), and then upload it to the 3dfx card as texture memory.
2) Draw the previous frame-buffer using software routines to main memory in the correct PSX colour space, and upload that to the 3dfx card as a texture.

Although (1) only uses the hardware for rendering the image, the cost of downloading the frame-buffer from the 3dfx card is prohibitive. Take a screen-shot and see how it causes the game to pause: that pause is the read-back of the frame-buffer from the graphics card. At 1024x768x32bit colour (as I tend to run my Voodoo5), the pause is maybe 1-2 seconds. Okay, 1-2 seconds when the frame is supposed to take 1/60th of a second is a joke :( It just isn't a viable solution.
The solution in (2) is the compromise: you are effectively running two GPUs simultaneously, the hardware rendering GPU and a software rendering GPU. You just never directly see the output of the software GPU, its work is behind-the-scenes. Of course, running both GPUs at the same time means it is going to run slower than the slowest of those GPUs individually: the slowest is normally the software rendering GPU. This is not an ideal solution, and people complain about the speed loss.
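For the curious, solution (1) boils down to something like this. It's a simplified sketch, not my actual plugin code: it assumes the LFB read-back comes back as RGB565 and it skips the rescaling step entirely.

    #include <cstdint>
    #include <glide.h>   // 3dfx Glide SDK

    // Read the previous frame back from the card and convert it to the
    // PSX's BGR1555 colour space, ready to be re-uploaded as a texture.
    void readBackAndConvert(uint16_t* dst, int w, int h)
    {
        static uint16_t lfb[1024 * 768];            // read-back scratch buffer
        grLfbReadRegion(GR_BUFFER_FRONTBUFFER, 0, 0, w, h,
                        w * sizeof(uint16_t), lfb);
        for (int i = 0; i < w * h; ++i) {
            uint16_t p = lfb[i];                    // RGB565
            uint16_t r = (p >> 11) & 0x1F;
            uint16_t g = (p >> 6)  & 0x1F;          // drop the extra green bit
            uint16_t b =  p        & 0x1F;
            dst[i] = (b << 10) | (g << 5) | r;      // PSX BGR1555, mask bit clear
        }
    }

Every one of those steps is pure overhead before the frame can even be used.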

So what is there to do?
The only thing left is some kind of compromise. Life is full of compromises.
The way I tackled it was to offer an option that detects when any of the above is necessary, and then automatically switches on the code to handle it. And it requires both solutions to be used in conjunction with each other.
Why both? Well, if the previous frame is required as a texture source, it is too late to just switch on software rendering: that frame has already been rendered. So, I copy that frame from the 3dfx card, convert/resize it etc. and use it as the texture source. Subsequent frames are handled by the software routine method.
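In rough outline (illustrative names only, not my actual code), the switching logic looks something like:

    #include <cstdint>

    extern uint16_t psxVramMirror[];   // hypothetical software copy of PSX VRAM
    extern int frameW, frameH;
    void readBackAndConvert(uint16_t* dst, int w, int h);  // see sketch above

    static bool softwareMirrorActive = false;

    void onFrameBufferUsedAsTexture()
    {
        if (!softwareMirrorActive) {
            // Too late to re-render the previous frame in software, so grab
            // it from the card just this once (solution 1) ...
            readBackAndConvert(psxVramMirror, frameW, frameH);
            // ... then let the software GPU (solution 2) keep a copy in
            // step with the hardware for the rest of the effect.
            softwareMirrorActive = true;
        }
    }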
Of course, this causes a one-off pause at the beginning of the effect, and a slow-down in the rendering of the rest of the frames, while the effect is happening.

Ideally, we would have infinitely fast software GPU routines, so that they could run all the time in parallel with the hardware rendering system, but we don't.

If someone would write super-fast software routines (and super-stable too!!) and would send me the code, I would certainly add them to my plugin ;)
 