
You're already dead... · 5,472 Posts · Discussion Starter · #1
I know there are a lot of NES emulators out there.
I've started my own NES emu from scratch (with help from gigaherz), and it's a lot harder than I thought.

The hardest part by far is finding bugs and figuring out why games aren't working properly.

Anyway, I made this thread so I can ask the question: "Are there any NES emus that use dynamic recompilation/JIT?"

I ask because I'm considering going in that direction if it hasn't been done before.
If it has been done before, then I probably don't have the motivation to do it.

cottonNES started out as a personal project to see how difficult creating an emu from scratch would be, to see if I had the potential to do it, and as a good learning experience.
The JIT/dynarec idea is an extra challenge and a nice motivator if it hasn't been done before.
 

Banned · 42 Posts
"Are there any NES emus that use dynamic recompilation/JIT?"
No. They all use either per-cycle granularity (Nestopia/QuickNES/RetroCopy/Schpune/Nintendulator) or per-opcode granularity (mine/Nestron/loads of others).
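To illustrate the difference, the two main-loop shapes look roughly like this (a rough sketch with made-up stub types, not code lifted from any of those emulators):
Code:
#include <cstdint>

// Hypothetical component stubs, just so the sketch compiles.
struct Cpu { int step() { return 2; } void tick() {} };  // step() returns cycles used
struct Ppu { void run(int dots) {} void tick() {} };
struct Apu { void run(int cycles) {} void tick() {} };
struct Nes { Cpu cpu; Ppu ppu; Apu apu; };

// Per-opcode granularity: execute one whole instruction, then hand the
// other chips the cycles it took in one lump.
void run_per_opcode(Nes& nes) {
    int cycles = nes.cpu.step();
    nes.ppu.run(cycles * 3);   // NTSC: 3 PPU dots per CPU cycle
    nes.apu.run(cycles);
}

// Per-cycle granularity: everything advances one master step at a time, so
// mid-instruction effects (sprite 0 hit, DMC DMA steals, $2002 reads) land
// on exactly the right cycle.
void run_per_cycle(Nes& nes) {
    nes.cpu.tick();
    nes.ppu.tick(); nes.ppu.tick(); nes.ppu.tick();
    nes.apu.tick();
}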

As for difficulty, at the moment I'm finding correct audio emulation to be the hardest part, so currently I'm looking at just using blargg's sound libs (blip/Blip_Buffer/Nes_Snd_Emu), either for true emulation or just for signal generation.

Granted, I would love to see your dynarec for it, even though I consider it a waste of time in any practical sense other than as a POC. That said, I've seen a C64 emulator with a dynarec, so seeing an NES emulator like that would be just as insanely cool :).

For my POC: I want a feature-complete NES emulator under 64KB. I'm currently at 43KB with sound, so I'm hoping that with more drastic optimizations I can get it a lot, lot smaller...
 

You're already dead... · 5,472 Posts · Discussion Starter · #4 · (Edited)
After some searching, I did find one x86 Linux NES emu that claimed to use dynamic recompilation, called "Nestra".

I haven't thought it out all the way, but an NES emu using dynamic recompilation could potentially be many times faster than one using an interpreter; or it could just be a complete waste of time.

The main problem I'm thinking of is the different mappers. I'm not sure how tricky the different mappers will get, and depending on how evil they are, the dynarec idea might not be favorable.

I think what I'll do is get further with my emu using the conventional interpreter approach, emulate a few different mappers, and then decide based on that experience whether JITing will work well.


And currently cottonNES doesn't emulate the APU at all, so I don't know the difficulty of that yet.
Gigaherz gave me the source for his W.I.P. NES APU, and I'll probably use that for the time being, until I get to the point where I can focus on the APU.
I have no experience with sound emulation, so it's going to be a big learning experience when I get to it. But for now I'm focusing on the CPU + PPU + core.

I'm thinking of working on my own NES emu using JIT, but to be honest I haven't seen many emus around using it.
Yeah, I've also considered porting the code to Java or C# and then doing the JIT idea using Java bytecode or MSIL, effectively making a 'very portable' emulator.
The main benefit I have in mind is that stuff like cellphones with support for those VMs could run the emu without any source code changes.

But since what I'm used to is C++, it might just be too much work for me to do something that complex in languages I'm not used to.

I've already written a dynarec for pcsx2, but if I do it for the NES I'll have to take a completely different approach, or else it's not going to be faster than an interpreter.
I haven't thought it through all the way though, so I'm still not sure if it's a good idea.
 

Premium Member · 17,148 Posts
Well, I think it's not worth the effort, as the results may not be what you expect (my opinion)... Now, talking about emulation-related coding: since emulation appeared back in the old days we've been using the same methods to get the job done, and to be honest nobody has really looked for a different way at all. A while ago I read an article about a university professor who claimed to have found a different way to emulate things... I don't know how it's done or whether it's true, but he claimed to get much faster results than traditional dynamic recompilation methods. Anyway, until it becomes clear how the hell he did it, or whether it's true at all, we'll have to wait and see.

I can confirm that the methods we learn aren't always the most optimized ones. For example, back when I learned how to use GDI+ I thought the methods I'd learned were the best, but then I found different ways to speed things up and got better results, at faster speeds than even traditional Windows controls doing more work, almost removing the "SLOW" label from GDI+. Of course you can't compare GDI+ with emulation, but just to give you an idea ;)
 

Emu author · 1,488 Posts
Like you said, Nestra has done it, and I've seen some output logs before and they're okay, but nothing really to write home about. It's strictly an instruction at a time recompiler and it uses a bit of an overblown pattern matching scheme to accomplish it. It looks nicely structured but it really is a lot more lines of text for something much more limited. It also doesn't really try to offer a solution for code executed from RAM and just falls back on the interpreter. Which is probably alright.

All of the mappers have pretty large paging sizes. So long as you handle branches that cross those page boundaries as indirect, it won't hurt a recompiler.
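Roughly what I mean, as a sketch (the names and the 8KB figure are just placeholders; check your mapper's actual banking granularity):
Code:
#include <cstdint>

// Hypothetical translator helper: only hard-link a branch to another
// compiled block if the target lives in the same mapper page as the branch,
// so a bank switch can never swap the target out from under the linked code.
// Everything else goes back through the indirect block-lookup dispatcher.
constexpr uint16_t kPageSize = 0x2000;   // placeholder 8KB banking granularity

bool can_link_directly(uint16_t branch_pc, uint16_t target_pc) {
    return (branch_pc / kPageSize) == (target_pc / kPageSize);
}

// Usage inside a translator (hypothetical calls):
//   if (can_link_directly(pc, target)) emit_direct_jump(target);
//   else                               emit_dispatcher_call(target);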

Like all recompilers I've ever seen it's also purely block granular, meaning that its timing has some inaccuracies that go beyond instruction boundary level. Contrary to what many may tell you, it actually isn't really that bad (so I'm told) and most things run, with some very minor glitches here and there. You'd have to see for yourself though. I'm sure that the compatibility problems are worse than with recompilation for newer platforms.

There are techniques for making a recompiler capable of handling instruction or cycle boundaries while still executing a majority of instructions in optimized blocks. I'm not aware of anyone who has implemented such a thing, or at least not purely in software. But I do think it's possible. You can predict cycle edges in advance and go through an interpreter until you hit them, then again until you get back onto a block. Or if you're really daring you can do a transactional/logging system that is capable of rewinding. In fact, for a platform like NES that doesn't have a lot of bidirectional feedback, you might be able to get full accuracy strictly from logging, although you still need to stop on interrupt boundaries precisely.
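The shape of the first of those ideas, very roughly (every hook here is hypothetical and stubbed with dummy numbers; it's nobody's actual implementation):
Code:
#include <cstdint>

// Run compiled blocks while they fit entirely before the next predicted
// event edge, then creep up to the edge with the interpreter.
struct Core {
    int64_t cycles = 0;
    int64_t predict_next_event() { return cycles + 100; } // e.g. next scanline/IRQ edge
    int     next_block_cost()    { return 25; }           // worst-case cycles of the next block
    int     run_block()          { return 25; }           // execute a compiled block
    int     interpret_one()      { return 3; }            // execute one instruction
    void    service_event()      {}                       // raise NMI/IRQ, advance PPU state, ...
};

void run_for(Core& c, int64_t budget) {
    while (c.cycles < budget) {
        int64_t edge = c.predict_next_event();
        // Fast path: whole blocks fit before the event edge.
        while (c.cycles + c.next_block_cost() <= edge)
            c.cycles += c.run_block();
        // Slow path: one instruction at a time until the edge is reached.
        while (c.cycles < edge)
            c.cycles += c.interpret_one();
        c.service_event();
    }
}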

I was pretty interested in recompiling 65xx for a while because there are some cool optimizations that present themselves more readily than in recompiling for other platforms. I made a thread about it on emutalk. Maybe you'd like to read it? Recompiling 8bit CISC CPUs (especially 6502 family) - EmuTalk.net

Ultimately I decided it wasn't worth the trouble, because 65xx also lends itself to writing pretty decent interpreters. I don't think you'd gain a several-fold performance improvement over a highly optimized interpreter written in ASM.

So you have to wonder whether the result justifies the effort.
Let's see: does it matter if it's significantly faster to begin with? Who is the potential target audience that can't run an NES emulator released for PC?

On the other hand, this is coming from someone doing a new NES emulator. How does the world benefit from yet another NES emulator? Is anyone going to actually use it? I'm skeptical. Obviously you're doing it for your own benefit and little else, so I think you shouldn't be questioning the value of expending effort for something cottonvibes would also clearly be doing for his own edification.
 

Banned · 42 Posts
How does the world benefit from yet another NES emulator? Is anyone going to actually use it? I'm skeptical. Obviously you're doing it for your own benefit and little else
1) It doesn't.
2) No. No one in their right mind will use it.
3) Indeed, as an optimization exercise.
 

Banned · 23,263 Posts
Optimisation exercises are worth the time.
 

You're already dead... · 5,472 Posts · Discussion Starter · #10 · (Edited)
There are techniques for making a recompiler capable of handling instruction or cycle boundaries while still executing a majority of instructions in optimized blocks. I'm not aware of anyone who has implemented such a thing, or at least not purely in software. But I do think it's possible. You can predict cycle edges in advance and go through an interpreter until you hit them, then again until you get back onto a block. Or if you're really daring you can do a transactional/logging system that is capable of rewinding. In fact, for a platform like NES that doesn't have a lot of bidirectional feedback, you might be able to get full accuracy strictly from logging, although you still need to stop on interrupt boundaries precisely.
On the dynarec for pcsx2, I handled cycle accuracy by doing the cycle counting at recompile time, and each recompiled 'block' was indexed by 'start PC' and by 'pipeline state'.

In the worst cases the same block would have to be recompiled multiple times since it started with different pipeline states. I also took some complex measures to limit this, by stripping out state information that could in no way affect the accuracy of the block.

The idea made indirect jumps slower because I had to check against the start PC AND the pipeline state when jumping between blocks at execution time; but I wrote a hand-optimized SSE function to do the comparisons quickly, and the end result was pretty good.
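In rough C++ terms, the lookup was conceptually something like this (a simplified sketch, not the actual pcsx2 structures, and using a plain hash map where the real thing used the hand-optimized SSE compare):
Code:
#include <cstdint>
#include <cstring>
#include <unordered_map>

// Hypothetical pipeline-state summary carried into the block key.
struct PipelineState {
    uint32_t regs_in_flight[4];   // e.g. which results are still pending
    bool operator==(const PipelineState& o) const {
        return std::memcmp(regs_in_flight, o.regs_in_flight, sizeof(regs_in_flight)) == 0;
    }
};

struct BlockKey {
    uint32_t      start_pc;
    PipelineState state;
    bool operator==(const BlockKey& o) const {
        return start_pc == o.start_pc && state == o.state;
    }
};

struct BlockKeyHash {
    size_t operator()(const BlockKey& k) const {
        size_t h = k.start_pc;
        for (uint32_t r : k.state.regs_in_flight) h = h * 31 + r;
        return h;
    }
};

using CompiledBlock = void (*)();   // pointer to emitted code

std::unordered_map<BlockKey, CompiledBlock, BlockKeyHash> block_cache;

// Indirect jump path: look up (start PC, pipeline state); recompile on a miss.
CompiledBlock lookup_or_compile(const BlockKey& key) {
    auto it = block_cache.find(key);
    if (it != block_cache.end()) return it->second;
    CompiledBlock block = nullptr;  // recompile_block(key) in a real dynarec
    block_cache.emplace(key, block);
    return block;
}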

The NES isn't pipelined, but the points where the PPU and interrupts need to be updated should probably be treated in a similar way, since they need cycle accuracy.

I was pretty interested in recompiling 65xx for a while because there are some cool optimizations that present themselves more readily than in recompiling for other platforms. I made a thread about it on emutalk. Maybe you'd like to read it? Recompiling 8bit CISC CPUs (especially 6502 family) - EmuTalk.net
Thanks for the article link and info.
I don't have time to read the whole topic now, but I will later today at work ^^

There's a possibility that after I read your article, though, I'll "decide it isn't worth the trouble" like you did :dead:

From the people I've talked to, roughly ~80% of emu coders think it's a waste of time, and the remaining ~20% think it "would be interesting" but aren't sure if it's worth it.
It's a shame my favorite console has been emulated so many times that few people would likely care about such an emu ><
Even though this NES emu project is mostly for myself, the extra motivation from other people wanting the end product does help me find the energy to keep coding through difficult projects at times when I feel like giving up.

Ultimately I decided it wasn't worth the trouble, because 65xx also lends itself to writing pretty decent interpreters. I don't think you'd gain a several-fold performance improvement over a highly optimized interpreter written in ASM.
I think that at least on the simple Mapper 0 games with 1-2 read-only program ROM banks, the idea has the potential to be at least 'noticeably' faster than an optimized ASM interpreter.

But I should also add, working with asm is so much nicer with an emitter than it is using inline asm or separate asm files.
It's also very portable.

On pcsx2, Jake and I have been working on converting most of the asm functions to be generated with the pcsx2 emitter, which is nice since currently a lot of the code is duplicated for Win VC++ and Linux GCC.
 

Emu author · 1,488 Posts
On the dynarec for pcsx2, I handled cycle accuracy by doing the cycle counting at recompile time, and each recompiled 'block' was indexed by 'start PC' and by 'pipeline state'.

In the worst cases the same block would have to be recompiled multiple times since it started with different pipeline states. I also took some complex measures to limit this, by stripping out state information that could in no way affect the accuracy of the block.

The idea made indirect jumps slower because I had to check against the start PC AND the pipeline state when jumping between blocks at execution time; but I wrote a hand-optimized SSE function to do the comparisons quickly, and the end result was pretty good.

The NES isn't pipelined, but the points where the PPU and interrupts need to be updated should probably be treated in a similar way, since they need cycle accuracy.
It's not about counting cycles appropriately but about the granularity of when events can happen. Unless a block can be executed partially, or cycles are checked every instruction (which ruins a lot of what a dynarec is good for), blocks can't be interrupted mid-way to accommodate this. For something like PS2 it wouldn't make a difference, but for NES, where cycle counting is frequently used, you need events to transition at the right times.

Also, not all cycle timing can be determined statically. This is especially true for a more complex CPU like the PS2's, which has a cache and various stalls beyond statically predictable interlocks, but even on NES, cycles vary with branches (taken or not) and with any memory accesses that can have wait states. Not that a dynarec can't handle this; it just has to be done at runtime.
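For a concrete NES case (standard 6502 branch timing, written as plain C++ for illustration rather than as emitted code): 2 cycles if not taken, 3 if taken, 4 if the taken branch crosses a page, so only the base cost is known statically.
Code:
#include <cstdint>

// Standard 6502 conditional-branch timing. A recompiled block can bake in
// the 2-cycle base cost, but the +1/+2 has to be accounted for at runtime.
// next_pc is the address of the instruction following the branch.
int branch_cycles(uint16_t next_pc, int8_t offset, bool taken) {
    int cycles = 2;                                   // base cost, known statically
    if (taken) {
        uint16_t target = static_cast<uint16_t>(next_pc + offset);
        cycles += 1;                                  // +1 for taking the branch
        if ((target & 0xFF00) != (next_pc & 0xFF00))
            cycles += 1;                              // +1 for crossing a page boundary
    }
    return cycles;
}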

Thanks for the article link and info.
I don't have time to read the whole topic now, but I will later today at work ^^

There's a possibility that after I read your article, though, I'll "decide it isn't worth the trouble" like you did :dead:

From the people I've talked to, roughly ~80% of emu coders think it's a waste of time, and the remaining ~20% think it "would be interesting" but aren't sure if it's worth it.
It's a shame my favorite console has been emulated so many times that few people would likely care about such an emu ><
Even though this NES emu project is mostly for myself, the extra motivation from other people wanting the end product does help me find the energy to keep coding through difficult projects at times when I feel like giving up.

I think that at least on the simple Mapper 0 games with 1-2 read-only program ROM banks, the idea has the potential to be at least 'noticeably' faster than an optimized ASM interpreter.
I think it'll be hard to make it noticeable because it'll be so dominated by other things like the PPU emulation and even the overhead of updating the window surface. Maybe on platforms that are so slow that they need to use frameskip in NES emulation.

Motivation is definitely a good thing and I think all emu authors could desperately use some. I think getting an emulator to work is one of the hardest things that you can do as a programmer. Debugging non-working games is very difficult, especially when you lack a lot of reference code like open source homebrew and test ROMs. Fortunately NES has plenty of those things, plus lots of other emulators with debuggers that you can compare against. Still it feels like black magic sometimes. Would be cool if more emulator authors posted some details on how they overcame specific problems.

But I should also add, working with asm is so much nicer with an emitter than it is using inline asm or separate asm files.
It's also very portable.

On pcsx2, Jake and I have been working on converting most of the asm functions to be generated with the pcsx2 emitter, which is nice since currently a lot of the code is duplicated for Win VC++ and Linux GCC.
I don't follow this; why is using an emitter preferable to external files? You don't get inlined code either way, and the emitter has runtime costs, lots of extra characters in your code for the emitter calls themselves, and probably not as much functionality as an assembler. I would definitely hesitate to consider any assembler "very portable" when it's locked to the CPU architecture, but assemblers like YASM are available anywhere you'd want to compile x86 and can link against object files created by GCC and Visual Studio.
 

You're already dead... · 5,472 Posts · Discussion Starter · #12
It's not about counting cycles appropriately but about the granularity of when events can happen. Unless a block can be executed partially, or cycles are checked every instruction (which ruins a lot of what a dynarec is good for), blocks can't be interrupted mid-way to accommodate this. For something like PS2 it wouldn't make a difference, but for NES, where cycle counting is frequently used, you need events to transition at the right times.

Also, not all cycle timing can be determined statically. This is especially true for a more complex CPU like the PS2's, which has a cache and various stalls beyond statically predictable interlocks, but even on NES, cycles vary with branches (taken or not) and with any memory accesses that can have wait states. Not that a dynarec can't handle this; it just has to be done at runtime.
Yeah, I know; in the VUs' case there's no cache (they just have their own 16KB of work RAM), so every opcode is 100% predictable at recompile time except for one of them (the xgkick instruction), which can stall on DMA transfers.

In the EE's case we can't do the same thing, since it can have cache misses and such...

Anyway, for the NES a lot is predictable ahead of time, and there's no CPU cache and no pipelining, which simplifies things.
I think it's possible to do a system similar to what I used in pcsx2, but it would be difficult.

I think it'll be hard to make it noticeable because it'll be so dominated by other things like the PPU emulation and even the overhead of updating the window surface. Maybe on platforms that are so slow that they need to use frameskip in NES emulation.
Yeah, I was mainly talking about comparing core emulation speed without the other factors involved.
On my current interpreter emu, the display renderer is definitely the speed limiter.

This is also another reason why I might decide the NES JIT idea isn't worth it.

I don't follow this; why is using an emitter preferable to external files? You don't get inlined code either way, and the emitter has runtime costs, lots of extra characters in your code for the emitter calls themselves, and probably not as much functionality as an assembler. I would definitely hesitate to consider any assembler "very portable" when it's locked to the CPU architecture, but assemblers like YASM are available anywhere you'd want to compile x86 and can link against object files created by GCC and Visual Studio.
The runtime cost is only incurred once at emu initialization.
So it's negligible.

And we currently have a lot of big & messy external *.asm and *.S files duplicated for VC++ and GCC.
That's very annoying, not to mention they're a lot harder to read than emitter code.

Furthermore, the emitter allows you to simplify code and do cool tricks in some cases that would be uglier to do without the emitter,
like using high-level loops or macros to repeat generated asm code, which can make the source nice and compact.


Anyway, the portability argument was obviously not about CPU architecture; I was talking about practically any C++ compiler being able to generate the same emitted code without any changes to ASM syntax.


P.S. I would have been more thorough with some of my points, but I have to leave for work...
 

Banned · 42 Posts
Optimisation exercises are worth the time.
No they are not. According to Exo, emulators have to be special before someone will use them...

And let's not forget:

1) Cycle accuracy pwns all; if you don't do it for an NES emu, expect to be ranted at for NOT using it
2) Speed does not matter. Cycle accuracy does

I already got ranted at by certain developers for not caring about cycle accuracy. Seems I made more enemies.
 

Emu author · 1,488 Posts
Yeah, I know; in the VUs' case there's no cache (they just have their own 16KB of work RAM), so every opcode is 100% predictable at recompile time except for one of them (the xgkick instruction), which can stall on DMA transfers.

In the EE's case we can't do the same thing, since it can have cache misses and such...
Okay, I didn't realize you were talking about the VU recompilers. This conversation seems very familiar now; I feel like we talked about this before. Do the VUs have interlocking? If they don't, then the pipeline emulation is probably important for far more than just timing. Do branches take the same amount of time whether they're taken or not? That wouldn't be surprising if their delays were 100% eaten by delay slots. Lack of interlocks and a lot of delay slots... starting to make me think of TI's C6x DSPs...

Anyway, for the NES a lot is predictable ahead of time, and there's no CPU cache and no pipelining, which simplifies things.
I think it's possible to do a system similar to what I used in pcsx2, but it would be difficult.
More predictable, but still with some variable components... Anyway, this was all just an aside. None of it solves the event-switching granularity I was talking about.



The runtime cost is only incurred once at emu initialization.
So it's negligible.
Wastes program space though. Yeah, I know, doesn't really matter, but still. I'm going to guess that the emitted code takes way more space than its compiled equivalent would.

And we currently have a lot of big & messy external *.asm and *.S files duplicated for VC++ and GCC.
It's because you're using MASM and GAS respectively, right? Is there a technical reason why you didn't want to use an unrelated assembler which is compatible with both of them?

That's very annoying, not to mention they're a lot harder to read than emitter code.
Please give me an example showing the asm being harder to read than the emitter code. I'm envisioning that things look like this:

mov eax, [ecx + 10]

VS

emit_x86_mov32_memory_reg_imm32(X86_REG_EAX, X86_REG_ECX, 10);

And I just don't see an argument for the latter being more readable. I suppose you can use function overloading and typed enums (in C++, do instances of enums take their type, instead of just taking int like in C? Otherwise you'd need constructors instead), and you can use operator overloading, but you still don't have the freedom of syntax to reproduce the conciseness an assembler provides.

Maybe some people find less conciseness to be more legible in cases like these, but it's very subjective. I feel that people working on a group-driven open source project, especially if they're not a dominating developer over everyone else, should be very careful about restructuring code based on subjectivity. Of course it's undeniably/objectively better not to have redundant source files, but is this really a solution with good universal appeal?

Furthermore, the emitter allows you to simplify code and do cool tricks in some cases that would be uglier to do without the emitter,

like using high-level loops or macros to repeat generated asm code, which can make the source nice and compact.
I'll concede that it does allow for some nice things using procedurally generated code, but I don't think this really comes up much in such a way that the same thing can't be done with macros in an assembler. I personally use CPP (C preprocessor) with assembler, but if I were doing stuff x86 only then I'd stick with YASM and its macro capability, which is pretty competent.
 

You're already dead... · 5,472 Posts · Discussion Starter · #16
Okay, I didn't realize you were talking about the VU recompilers. This conversation seems very familiar now; I feel like we talked about this before. Do the VUs have interlocking? If they don't, then the pipeline emulation is probably important for far more than just timing. Do branches take the same amount of time whether they're taken or not? That wouldn't be surprising if their delays were 100% eaten by delay slots. Lack of interlocks and a lot of delay slots... starting to make me think of TI's C6x DSPs...
The pipeline emulation was the most important part indeed,
and branches take a constant time whether they're taken or not.
Most VU instructions are interlocked, but some aren't; this complicated things, but it worked well with my idea of saving pipeline-state info with cached blocks.

The only problem was xgkick, which can stall depending on external DMA transfers from other parts of pcsx2.
This was the only instruction I couldn't handle accurately, because I couldn't determine the instruction's stalling/latency at recompile time.
Our DMA system is a complete hack anyway, so even if this instruction were coded exactly how it behaves on the PS2, it wouldn't work correctly with the rest of pcsx2.


More predictable, but still with some variable components... Anyway, this was all just an aside. None of it solves the event-switching granularity I was talking about.
The granularity problem can be solved in multiple ways; obviously, without attempting the idea I'm not going to know the best way to handle it.

At this point though, I'm leaning towards just sticking with an interpreter, since it may be more entertaining to go for a simple and accurate interpreter.

Wastes program space though. Yeah, I know, doesn't really matter, but still. I'm going to guess that the emitted code takes way more space than its compiled equivalent would..
Well, you allocate the memory for the emitted code up front. Usually we just give it some buffer room and fill it with 0xCC (INT3).
If you're really picky you can allocate the buffer to exactly the space the emitted code takes up, but it's really dumb to be that picky IMO unless your target platform has very limited RAM.
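On Windows that's basically just a VirtualAlloc'd buffer filled with INT3; a simplified sketch (not the actual pcsx2 allocator; on Linux you'd use mmap instead):
Code:
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <windows.h>

// Simplified sketch: reserve a fixed-size executable buffer for emitted code
// and fill it with 0xCC (INT3), so a stray jump into unwritten space traps
// immediately instead of executing garbage.
uint8_t* alloc_code_buffer(std::size_t size) {
    void* mem = VirtualAlloc(nullptr, size,
                             MEM_COMMIT | MEM_RESERVE,
                             PAGE_EXECUTE_READWRITE);
    if (!mem) return nullptr;
    std::memset(mem, 0xCC, size);   // INT3 fill
    return static_cast<uint8_t*>(mem);
}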

But since you need to have both the 'emitting function' AND the 'emitted code', it does take up a bit more memory than the compiled equivalent.

It's because you're using MASM and GAS respectively, right? Is there a technical reason why you didn't want to use an unrelated assembler which is compatible with both of them?
Well, I didn't write the external asm code.
An earlier team member decided to do it that way, and it's nicer to just switch that code over to the emitter, which we use for all our recs and are all familiar with...


Please give me an example showing the asm being harder to read than the emitter code. I'm envisioning that things look like this:

mov eax, [ecx + 10]

VS

emit_x86_mov32_memory_reg_imm32(X86_REG_EAX, X86_REG_ECX, 10);

And I just don't see an argument for the latter being more readable. I suppose you can use function overloading and typed enums (in C++, do instances of enums take their type, instead of just taking int like in C? Otherwise you'd need constructors instead), and you can use operator overloading, but you still don't have the freedom of syntax to reproduce the conciseness an assembler provides.
The emitter we have in pcsx2 does what you mentioned and takes advantage of operator overloading.

The above example would look like this:
xMOV(eax, ptr[ecx + 10]);

Or, to show what I meant earlier, you can essentially do something like:
Code:
for (int i = 0; i < 10; i++) {
    xADD(eax, ptr[ecx + i*4]);
}
taking advantage of the high-level loop instead of writing the ADD 10 times...
Now I think that's pretty cool :p

It's also nice writing emitter code in high-level functions and then calling them from another high-level function in specific orders to generate unique x86 machine code.
With asm, any function calls like that would at least add a few JMPs/CALLs into the mix (see the toy sketch below).
With macros it's possible to do the same thing, I guess, but my experience with VC++ inline asm and macros is pretty bad.
Does it even support macro'd inline asm functions? I just remember I couldn't get some stuff to compile, so I switched to using the emitter.
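To show the point with a self-contained toy (this is NOT the pcsx2 emitter, just raw bytes pushed into a vector): the helper calls happen while you're generating the code, but what lands in the buffer is one flat run of x86 with no extra JMPs/CALLs between the pieces.
Code:
#include <cstdint>
#include <initializer_list>
#include <vector>

std::vector<uint8_t> code;   // toy code buffer

void emit(std::initializer_list<uint8_t> bytes) { code.insert(code.end(), bytes); }

// Tiny helpers, each emitting a known x86 sequence at the current position.
void emitAddEax(uint8_t imm) { emit({0x83, 0xC0, imm}); }   // add eax, imm8
void emitIncEax()            { emit({0xFF, 0xC0}); }        // inc eax
void emitRet()               { emit({0xC3}); }              // ret

// Composing them at generation time produces one straight-line sequence;
// the C++ calls below never appear in the emitted code itself.
void emitBlockEpilogue(uint8_t blockCycles) {
    emitAddEax(blockCycles);
    emitIncEax();
    emitRet();
}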

Maybe some people find less conciseness to be more legible in cases like these, but it's very subjective. I feel that people working on a group-driven open source project, especially if they're not a dominating developer over everyone else, should be very careful about restructuring code based on subjectivity. Of course it's undeniably/objectively better not to have redundant source files, but is this really a solution with good universal appeal?
Well, if you're going to be doing anything low-level in pcsx2 emulation-wise, you're going to have to learn the emitter anyway.
It's essentially saying "you just have to learn the emitter syntax, instead of getting familiar with multiple assemblers' syntax + the emitter syntax."

I personally have worked more with emitters than with assembler syntax (I got into low-level coding when I started working on pcsx2 using the emitter), so I just like using it so much more.

I'll concede that it does allow for some nice things using procedurally generated code, but I don't think this really comes up much in such a way that the same thing can't be done with macros in an assembler. I personally use CPP (C preprocessor) with assembler, but if I were doing stuff x86 only then I'd stick with YASM and its macro capability, which is pretty competent.
The benefit of using something we already need to have in pcsx2 (for the dynarec) makes it appealing and, I think, a good idea.

Any other method is of course just as subjective.
I'm not saying you have to use emitters in your projects or the like...
I'm just saying that's what we've decided to do with pcsx2; and since Jake, pseudonym, and I are the ones who have been doing most of the low-level coding in pcsx2 for the past year-plus, we're at a point where we can decide how we want the project to handle asm.
 

You're already dead... · 5,472 Posts · Discussion Starter · #18
What does that have to do with anything?

And you quoted me out of context.
I meant that using macros to generate inline asm doesn't usually work out well with VC++.

I looked up the issue, and other people have the same problem.
Inline Assembler Macros? : assembler, inline

Unless you keep it to simple one-line asm statements, macro'd asm doesn't work well (or at all) in VC++.
 

Banned · 42 Posts
What does that have to do with anything?
It has everything to do with your competence.

You are trying to write a recompiler WITHOUT proper knowledge of ASM. You are working on recompilers WITHOUT a proper knowledge of ASM.

So, how the heck are you doing this recompiler for this emulator then? >_> Or for PCSX2, for that matter.
 

You're already dead... · 5,472 Posts · Discussion Starter · #20
It has everything to do with your competence.

You are trying to write a recompiler WITHOUT proper knowledge of ASM. You are working on recompilers WITHOUT a proper knowledge of ASM.

So, how the heck are you doing this recompiler for this emulator then? >_> Or for PCSX2, for that matter.
Uh...
I didn't make this thread to start a flame war.

But obviously you didn't understand my previous posts, or you purposely misread things in order to start an unrelated argument.

Anyway, you can believe what you want, Mr. B...
 