
virmaior
Discussion starter · #1 ·
having seen the stack of worthless suggestions from people with equally sucky computers who imagine them to be great, I'm hesitant to ask a question of this sort...

still, i'd like to know if there's something that makes it theoretically impossible. Considering that pcsx2 has the ability to recompile code designed for the ps2 to x86, would it be possible to do something on the order of a massive pre-emptive recompile or substantial caching?

What I'm wondering is whether it would be possible to set up a kind of pre-processing / processing split:

stage 1: taking every code section from the disk/iso and recompiling it to a kind of chewed iso file.

stage 2: emulating with the chewed iso file rather than the ps2 original

I can think of a number of possible flaws with my suggestion. (1) maybe i'm misunderstanding the benefits of using recompilers. (2) maybe the recompilers depend explicitly on the data in the registers or memory. (3) maybe the recompile is strongly tied to absolute memory references (if it were only loosely tied, it could be rewritten so that the execute phase only needed to substitute in the memory-related functions).
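to make the two stages concrete, here's a toy sketch in c. both "instruction sets" are completely made up (nothing like real ps2 or x86 code); it's just the shape of the idea:

```c
/* toy sketch of the stage 1 / stage 2 split. both "instruction
 * sets" are invented for illustration, nothing like real PS2 or
 * x86 code. */
#include <stdio.h>
#include <stdlib.h>

/* source ISA: 0 = PUSH imm, 1 = ADD, 2 = PRINT
 * host ISA:  10 = PUSH imm, 11 = ADD, 12 = PRINT */

/* stage 1: one pass over the "iso", writing the "chewed" program */
static size_t pretranslate(const unsigned char *iso, size_t n,
                           unsigned char *chewed)
{
    size_t i = 0, out = 0;
    while (i < n) {
        switch (iso[i]) {
        case 0: chewed[out++] = 10; chewed[out++] = iso[i + 1]; i += 2; break;
        case 1: chewed[out++] = 11; i += 1; break;
        case 2: chewed[out++] = 12; i += 1; break;
        default: /* flaw (1)-(3) territory: is this byte data or code? */
            fprintf(stderr, "unknown byte, bailing out\n"); exit(1);
        }
    }
    return out;
}

/* stage 2: execute only the pre-translated program */
static void run_chewed(const unsigned char *p, size_t n)
{
    int stack[16], sp = 0;
    size_t i = 0;
    while (i < n) {
        switch (p[i]) {
        case 10: stack[sp++] = p[i + 1]; i += 2; break;
        case 11: sp--; stack[sp - 1] += stack[sp]; i += 1; break;
        case 12: printf("%d\n", stack[sp - 1]); i += 1; break;
        }
    }
}

int main(void)
{
    unsigned char iso[] = { 0, 2, 0, 3, 1, 2 };  /* push 2, push 3, add, print */
    unsigned char chewed[32];
    run_chewed(chewed, pretranslate(iso, sizeof iso, chewed));  /* prints 5 */
    return 0;
}
```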

anyway, i've really appreciated all the work the team has put in.

i've been starting to take a look at the code myself, but i haven't ever written anything emulation related, haven't written c in 10 years, and haven't touched assembler in 8 years.
 
It can't be done, both for the reasons you listed and many others. Most get pretty technical, but I'll write something up if there is interest.

The ps2 and x86 follow very different design philosophies. Adapting one to the other is hard enough; coming up with a universal solution would be an extraordinary achievement.

The idea of pre-emptively translating code comes up a lot in emulation, but is rarely doable. It's just too hard to account for every possible case (it's a pain to get right even at runtime); otherwise it would already have been done.
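To give a feel for why "every possible case" is so hard, here is a tiny invented C illustration (not from any real emulator) of the simplest killer, indirect control flow:

```c
/* the jump target lives in a "register" that only gets its value
 * at runtime, so an ahead-of-time pass over the disc can never
 * resolve where control will go */
#include <stdio.h>

static void block_a(void) { puts("block A"); }
static void block_b(void) { puts("block B"); }

int main(void)
{
    /* stands in for a guest 'jr $rs': the target depends on input */
    void (*target)(void) = (getchar() == 'a') ? block_a : block_b;
    target();
    return 0;
}
```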
 
Discussion starter · #3 ·
thanks echosierra.

that makes good enough sense of it to me.

strangely, i'm also glad to know that my idea is a common one, and i feel better informed knowing that (a) it's been thought of and (b) it's been a ***** to make work at any real level, so it hasn't been used.
 
It's a common idea because it seems like the simplest possible solution, but in practice it's impossible for anything but the most simplistic of examples.

I've seen a proof-of-concept developed for GameBoy emulation, but it never became available to the public. Even on such a simple system, far too much work had to be done on each instruction.

Every memory operation becomes ungodly complex if the source and target architectures are different. Every memory access must be looked up in a giant table, since differing instruction sizes between the source and destination make every memory address (both immediates and those calculated at runtime) wrong. For example:

Architecture 1 (Source):
Cell 0 contains the instruction to add three numbers. Cell 1 contains the instruction to subtract 1 from the result.

Architecture 2 (Destination):
This architecture doesn't have a single instruction to add three numbers, so it has to be replaced with equivalent code.
Cells 0-5 contain the instructions to add three numbers. Cell 6 contains the instruction to subtract 1 from the result.

Every memory access from then on is off by 5, since extra instructions must be used to implement what cannot be done natively. This continues for the entire image. There is no easy way to predict this error, so to correct for it, every single memory address (and its corrected location) must be stored in some sort of gigantic table.
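In code, that gigantic table looks something like this (toy numbers matching the example above; all of it invented for illustration):

```c
/* for every guest address, record where its translation landed */
#include <stdio.h>

int main(void)
{
    /* addr_map[guest_cell] = host_cell after translation */
    int addr_map[2] = {
        0,   /* guest cell 0 (3-way add) -> host cells 0-5        */
        6,   /* guest cell 1 (subtract)  -> host cell 6, off by 5 */
    };
    /* every guest address, immediate or computed at runtime, has
     * to round-trip through this table before it can be used */
    int guest_target = 1;
    printf("guest cell %d -> host cell %d\n",
           guest_target, addr_map[guest_target]);
    return 0;
}
```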

It's for reasons like this that I've never seen binary translators outside of academia; they become stupidly complicated very quickly.
 
Yeh, three things come to mind.

1. Lookup tables in the MIPS code will be hard to recompile.
2. Self-modifying code won't work (though I don't think PS2 code usually makes use of SMC).
3. The MIPS ELF file mixes data and code, so the recompiler needs to be able to differentiate between them somehow (see the sketch below).
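Here's a sketch of point 3. The value is a real MIPS encoding, but the example itself is invented: the same 32-bit word is valid both as an instruction and as plain data, so a static pass can't tell which words to recompile.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* as code: addiu $v0, $zero, 1  (i.e. li $v0, 1)
     * as data: just the integer 604110849 */
    uint32_t word = 0x24020001;
    printf("0x%08x: code or data? the bytes alone can't say\n",
           (unsigned)word);
    return 0;
}
```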
 
Your idea is similar to (or the same as) static recompilation. The drawbacks are:

1. You'd need silly amounts of memory, which 32-bit OSes can't address (over 100 MB is needed just to emulate the system memory, let alone a 4 GB DVD, which takes up more room as statically recompiled code than it does on the disc).
2. It's unreliable.
3. It wouldn't be miraculously faster.

As it is, PCSX2 caches a lot of the recompiled code, only swapping chunks out when the data in memory changes. Although this does happen a lot, it isn't often enough to make a massive speed impact.
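Roughly, the idea looks like this (a heavily simplified sketch, not PCSX2's actual code):

```c
/* simplified sketch of a recompiler block cache. NOT PCSX2's real
 * code, just the idea: keep translated chunks, and invalidate one
 * when the game writes over the memory it was translated from. */
#include <stdint.h>
#include <stddef.h>

#define PAGES 8192              /* toy: one entry per 4 KB guest page */

static void *translated[PAGES]; /* host code for each page, or NULL */

/* called from the memory-write path of the emulator */
static void on_guest_write(uint32_t guest_addr)
{
    uint32_t page = (guest_addr >> 12) % PAGES;
    /* a real recompiler would free or recycle the buffer too */
    translated[page] = NULL;    /* forces re-translation on next run */
}

int main(void)
{
    translated[3] = (void *)1;  /* pretend page 3 was translated */
    on_guest_write(0x00003010); /* game writes into page 3 */
    return translated[3] == NULL ? 0 : 1;
}
```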
 