
1 - 20 of 25 Posts

Registered · 6 Posts · Discussion Starter #1 (Edited)
Perspective correction!!!!

:confused: Why is it that no PSX emulator (at least of the ones I've tried) attempts perspective correction on textures? I'd say it is the most immediate and important feature for enhancing visuals after mip-mapping and increased resolution. In fact, maybe even before mip-mapping, since a game at 1 zillion x 1 zillion pixels with octolinear mip-mapping still looks CRAP if all its textures are zig-zagging.

3D accelerators can correct perspective, yet in all the screenshots I've seen of ePSXe (and any other emulator) the textures still zig-zag. Surely I am not the only one who has thought of this...

Are there some technical impossibilities or what?
 

Registered · 6 Posts · Discussion Starter #4
Pete, Galtor, sorry if I came across as a bit impertinent. That was not my intention. Of course I appreciate a lot that you are willing to give your free time to these kinds of projects so that everyone out there can enjoy PS emulation.

It's just that this question has been with me ever since I saw the first emu, and I have never seen it explained, nor even mentioned, why perspective correction is always left aside.

Of course, I am not a programmer, so maybe I am asking a question whose answer I would not be able to understand, but I tried anyway, just in case.

Is there an answer you would be willing to give that a non-programmer could understand?

Anyhow, thanks again for your efforts.
 

Premium Member · 538 Posts
I've answered this question a few times in the past, but I'll summarise quickly again.
To do perspective-correct texturing, you need to have the depth information for the primitive's vertices, so the graphics card can do the correct 1/depth interpolation calculation per rendered pixel.
Unfortunately, the PSX architecture does not pass the depth information for the primitive's vertices to the GPU. So perspective-correct texturing is not possible, nor is improved hidden-surface removal via z-buffering, nor is mip-mapping/anisotropic filtering (functions that require depth information).
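Lewpy's 1/depth point can be sketched in a few lines of Python (purely illustrative, not PSX or plugin code; all names are made up): affine interpolation works directly on screen-space texture coordinates, while perspective-correct interpolation needs the vertex depths so it can interpolate u/z and 1/z and divide.

```python
# Illustrative sketch (not PSX code): why per-vertex depth is needed
# for perspective-correct texturing.

def affine_u(u0, u1, t):
    """Affine (PSX-style) interpolation: linear in screen space."""
    return u0 + t * (u1 - u0)

def perspective_u(u0, z0, u1, z1, t):
    """Perspective-correct: interpolate u/z and 1/z across the edge,
    then divide. Requires the depths (z0, z1) of both vertices."""
    inv_z = (1 - t) * (1 / z0) + t * (1 / z1)
    u_over_z = (1 - t) * (u0 / z0) + t * (u1 / z1)
    return u_over_z / inv_z

# Edge from a near vertex (z=1) to a far vertex (z=3), texture u from 0 to 1.
# Halfway across the *screen*, the correct u is not 0.5:
print(affine_u(0.0, 1.0, 0.5))                 # 0.5 (the zig-zag look)
print(perspective_u(0.0, 1.0, 1.0, 3.0, 0.5))  # ≈ 0.25 (correct)
```

Without z0 and z1 there is simply no way to evaluate the second function, which is the PSX's situation in a nutshell.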
 

Registered · 17 Posts
A curiosity: does the PSX console itself do perspective correction on textures, or does it have the zig-zag effect? I don't have a PSX console to observe it.

(Lewpy and Pete are GREAT:D )
 

Registered · 216 Posts
Man, you don't get any z-buffer information? You could do some cool things if you got depth information to the GPU, even stuff that the PSX can't do. It also means you can't do any z optimisations, like HSR. Wouldn't emulation be much faster with those? I thought it would be cool if you could have a LOD system. Not now!
 

Premium Member · 538 Posts
Yamhead: well, it's our "jobs" to know this kind of low-level stuff ;)

Lycenhol: the PSX does not do perspective-correct texturing, so it suffers both the "zig-zag" effect and texture "swimming" on clipping.

Cairey: yup, with no z information, there is not a lot you can do :( You could argue there is "pseudo"-depth information, in that the primitives are presented to the GPU in a z-sorted list, but that is only relative-z information, and per-primitive, not per-vertex. I think Nik's D3D GPU plugin used a z-buffer, with the z-info derived from the order of the primitives. This meant you got a kind of "pseudo"-stereoscopic effect with appropriate hardware/glasses. That's about all you can do :)
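The "pseudo"-depth idea might look something like this minimal Python sketch (the function name and mapping are assumptions for illustration, not Nik's actual plugin code): draw order stands in for depth, one fake z per primitive.

```python
# Sketch of the "pseudo-depth" trick: the PSX submits primitives
# back-to-front, so an emulator could fake one z value per *primitive*
# from its position in the display list. Illustrative only.

def pseudo_z(index, total):
    """Map draw order (0 = drawn first = deepest) to a fake depth in [0, 1]."""
    return 1.0 - index / max(total - 1, 1)

prims = ["far mountain", "mid building", "near character"]
for i, name in enumerate(prims):
    print(name, pseudo_z(i, len(prims)))
# Every vertex of a primitive gets the same fake z, so this can feed a
# z-buffer or stereoscopic effect, but not per-pixel perspective
# correction, which needs depth varying *across* each primitive.
```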
 

Registered · 6 Posts · Discussion Starter #10
oh, ok :(

Now I'll get even more stupid :eyemove: I think, Lewpy, you already answered this, but since I am not sure and this came into my head, here it is anyway. If we talk about simple, pure technical drawing, not on the computer but the sort of thing we (I, at least) did at high school, one could extrapolate the z level of a vertex in a perspective drawing and use it to recover, say, the object's 3D bounding box measurements, if enough data was visible in the drawing. I remember doing this sort of thing in class exercises with simple figures such as boxes in conic perspective. Of course, that was about 6 years ago and I might be remembering just half of the truth, and of course I am talking here about a perspective drawing that does not zig-zag ;)

So, if what I just said is not bullshit, isn't it possible that the emu could do the same sort of calculations on the scene, go beyond what the PSX is supposed to do, and pass that info to the GPU? (It would not depend on that bit of "enough information in the drawing", I guess, since the emu is calculating it all.)

(miracles?)

Yes, I know I am talking about a thing I don't know, so it's up to you whether to answer ;)
 

Premium Member · 538 Posts
[I think I understand what you are getting at ...]

You could only work out the vertices' depth values if you knew what the object was supposed to be :)
i.e. with a perspectively-drawn box you could calculate its rotation/orientation from the deformation caused by the perspective projection, but only because you know what a box should look like in uniform space (non-perspectively drawn).
With the primitives that are sent to the GPU, there is no knowledge of what the primitive should look like in uniform space, so there is nothing to "compare" the rendered item to. Without this "knowledge", I can't see how you could read any more info from the primitive.
Is that what you were getting at? :)
 

Registered · 6 Posts · Discussion Starter #12
Yes, that was it. Still, since, as I said, I am at zero level in programming, I was quite shocked by your answer when you said that you do not know what the appearance of a primitive should be. I could not imagine that.

Excuse me again if I am asking too much, but how is it that one can program a GPU plugin, or an emu as a whole, where games ask the emu to do certain things (such as drawing a primitive), and get the emu to do that without knowing what that primitive is supposed to be? Is that the reason ePSXe needs external software not made by the emu programmer, such as the PS BIOS?

When imagining the theoretical way to program an emu, I thought that the programmer understood, to a certain extent, what each instruction sent by the game to the hardware was supposed to ask for, and then wrote a program capable of handling each of those possible requests, hardcoding every one of them.

Is that not the way emus are done?
 

Registered · 216 Posts
Well, I'm not quite sure either, but I see that GPUs are getting some information, as Lewpy is able to work out the triangles being rendered, as well as points and lines.
But they don't get depth information for the polygons' vertex coordinates.
 

Registered · 1,808 Posts
Perspective correction is the one thing I wish so much that someone would do in a PSX emulator...
Now many experts have come out and said it is impossible to do in a PSX emulator... I feel so sad :(
Anyway, I hope someday someone will make it possible...
 


Premium Member · 538 Posts
Originally posted by elmimmo
Yes, that was it. Still, since, as I said, I am at zero level in programming, I was quite shocked by your answer when you said that you do not know what the appearance of a primitive should be. I could not imagine that.
You missed what I was saying :)
Of course I know what the PSX is sending to the GPU. But, as already discussed, it is sending data that has already been perspective-projected. It is also limited to quads and triangles (ignoring sprites/tiles/etc., as they are really just "special-case" quads).
What I was saying was: to do a reverse-perspective projection, as I believe you were suggesting, I would need to know the dimensions of the object being drawn. This is in high-level terms. For example, I would need to know it was drawing a square box. I could then "think" along these lines: "well, all edges must be of equal length, therefore I can calculate the corners' relative depths by examining the distortion of each edge's rendered length". In technical drawing terms, I could probably extrapolate the lines to calculate the "fulcrum" [can't remember the exact term for this!] of the projection.
But I don't "know" this information in the GPU. All I have is a string of primitives (tris & quads), which may be fed in as a mixture from several models, since the list is pre-sorted on depth.
With this in mind, it is not possible to perform a reverse-perspective projection in the GPU, which would be necessary to reverse-engineer the depth of the vertices of a primitive so that perspective correction could be applied to the texturing.
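The ambiguity described above can be demonstrated with a small Python sketch (illustrative numbers, not real PSX data): a square leaning back in depth and a genuinely trapezoidal flat shape can project to exactly the same 2D points, so the 2D vertices alone cannot recover the depths.

```python
# Two different 3D quads that project to the *same* 2D trapezoid,
# showing why depth cannot be reverse-engineered from 2D data alone.

def project(x, y, z, d=1.0):
    """Simple pinhole projection onto the plane at distance d."""
    return (d * x / z, d * y / z)

# A square leaning back: bottom edge at z=1, top edge at z=2.
tilted_square = [(-1, -1, 1), (1, -1, 1), (2, 1, 2), (-2, 1, 2)]
# A flat trapezoid lying entirely at z=1.
flat_trapezoid = [(-1, -1, 1), (1, -1, 1), (1, 0.5, 1), (-1, 0.5, 1)]

print([project(*v) for v in tilted_square])
print([project(*v) for v in flat_trapezoid])
# Both print the same four 2D points: given only those points, there is
# no way to decide which 3D shape produced them.
```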
 

Registered · 216 Posts
Well, yeah, that would be impossible, but there has to be some way of calculating the depth of the vertices. I feel this is important for improving emulation, for both speed and visuals.
 

Premium Member · 538 Posts
Originally posted by cairey
Well, yeah, that would be impossible, but there has to be some way of calculating the depth of the vertices. I feel this is important for improving emulation, for both speed and visuals.
Once you have devised the cunning algorithm that will magically present the depth of the vertices, please post it and we'll use it ;)
btw, any extra calculations will slow down emulation, not speed it up. The calculation of the vertices' depths would be pure overhead for the GPU, if it were possible. So it might improve graphical quality (although most modern PSX games seem to handle the texturing problems of the PSX in their own ways already), but it would not improve speed.
 

Registered · 36 Posts
Perhaps I can make the matter just a little clearer.

As mentioned on multiple occasions, the PSX's graphics unit is incapable of manipulating pure 3D data. It was designed back in the days when the Voodoo Graphics processor was top of the line.

In this case, it was up to the game programmer to do all the 3D processing on the CPU. By the time the data reaches the GPU, it is all in 2D form, the only concession being that the deepest things go first. Other than that, all the 3D data has been processed (and thus destroyed).

So when, say, a trapezoid gets passed into the system, the GPU has no way of telling whether it's a square leaning backwards... or a real trapezoid. The same 2D object can represent two entirely different 3D objects, and THAT'S the crux of the problem. The programmers have no idea what the objects were in the first place and no way of figuring it out. It's like trying to identify a person when all you can see is his or her hand. There's just too little data to work with. At least in your maths problems, you had some additional data provided by the instructions. The coders don't. They just get triangles and quadrilaterals, textures and vertices... that's it.
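The pipeline described in this post can be sketched as a toy Python model (under stated assumptions, not actual PSX/GTE code; all names are made up): the 3D-to-2D projection and the depth sort happen on the CPU side, and only 2D coordinates survive into the display list.

```python
# Toy model of the PSX pipeline described above: the CPU projects 3D
# vertices to 2D and depth-sorts the primitives, so the display list
# the GPU receives contains no z at all.

def project_vertex(x, y, z, d=1.0):
    """Pinhole projection onto the plane at distance d.
    z is consumed here, then thrown away."""
    return (d * x / z, d * y / z)

def build_display_list(triangles):
    """triangles: list of 3D triangles, each a list of (x, y, z) tuples."""
    # Painter's algorithm: deepest primitives are emitted first ...
    ordered = sorted(triangles, key=lambda tri: -sum(v[2] for v in tri) / 3)
    # ... and only 2D coordinates are kept: depth never reaches the GPU.
    return [[project_vertex(*v) for v in tri] for tri in ordered]

near = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]
far = [(0, 0, 4), (4, 0, 4), (0, 4, 4)]   # 4x the size at 4x the distance
print(build_display_list([near, far]))
# Both triangles project to identical 2D points, echoing the trapezoid
# argument: from the display list alone, the GPU cannot tell them apart.
```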
 