I just compiled desmume 0.9.2 on Ubuntu Intrepid 32-bit and noticed low performance in the game. The graphics were perfect and I haven't noticed any bugs in the gameplay, but the game was a bit laggy.
It can't be my PC, because it's an AMD Phenom 9750 Quad-Core with 2GB of DDR2-800 RAM.
Is there any known issue about this? Would I get better performance on a 64-bit system?
Last edited by fernandoc1 (2009-04-26 23:59:28)
Offline
your system isn't as good as you think. those extra cores don't help you any in desmume. at that processor range, you won't be able to run some games at 60fps.
how many fps are you getting, anyway? that's the only way to quantify your performance problem.
Offline
I'm getting around 25 fps in gameplay. It's a little strange that this emulator gets such low performance, because I've tried NO$GBA under Wine and the performance was good, despite some bugs that happen. I think there must be some optimizations that would make it faster.
Which processor would make the game run better?
Offline
more ghz. faster memory. desmume is slower than nocash
Offline
Does desmume make use of an external GPU?
I've measured the usage of my cores and none of them was 100% used.
Another question: are there any multi-core optimizations in desmume, like putting one core on the ARM7 and another on the ARM9, for example?
Last edited by fernandoc1 (2009-04-28 14:17:29)
Offline
desmume hardly uses your gpu. the 3d responsibilities are very light.
there are no multi core optimizations. everything runs in a single thread.
Offline
I'm interested in helping the desmume project, but I'd like to know whether there is any documentation about the code and how it was implemented.
We could try to think of a way to divide the NDS's physical units among threads to make the emulator run faster, like using one thread for the SPU, another for the GPU, and others for the ARM7 and ARM9. I think that would improve emulation speed.
Offline
It's not as easy as you think. It may be possible to get speedups on quad core, but not on dual core. But I don't have a quad-core system to try it on. Check the FAQ for information on svn if you want to try it.
Offline
Get the code and start reading, then
Offline
Is it possible to use gdb to see the execution flow? Which IDE do you recommend?
Offline
When you can answer those questions yourself, then you are ready to tackle those problems.
Offline
Is it possible to use gdb to see the execution flow? Which IDE do you recommend?
On which platform ?
Linux ?
Then you can attach gdb to the emulator like any other program to see the flow of the emulator. If you want to see the execution of the ARM processors on Linux, you can run desmume with --arm9gdb=PORT_NUM and --arm7gdb=PORT_NUM, but it's not been tested in a while. See FAQ: http://wiki.desmume.org/index.php?title … ll_work.3F
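To illustrate (the ROM name and port numbers here are placeholders, not anything the project prescribes):

```
# Debug the host-side emulator process like any other program:
gdb --args desmume game.nds

# Expose the emulated ARM9/ARM7 to a remote gdb stub:
desmume --arm9gdb=20000 --arm7gdb=20001 game.nds
# then, from a gdb built for an ARM target:
# (gdb) target remote localhost:20000
```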
Windows?
Use MSVC 2008 Express. I'm guessing gdb is not, or is no longer, supported on Windows, but you can do a developer build and go to 'Tools | Disassembler' to see the ARM CPUs' execution, and use MSVC to step through the emulator's code.
Last edited by lbalbalba (2009-04-29 23:25:17)
Offline
I guess there are two sides to the multithread debate. One side says that if you get a decently powered (2.4+ GHz?) processor, one core should be about good enough, so why bother with multithreading.
But I think in the near future we are going to keep seeing processors at about the same clock speed, just with more cores. I think offloading the GPU to a separate thread is a good idea that you should at least consider. I'll grant you that it might not be as simple as forking off a chunk of code, but I know there were some unauthorized zoom plugins for no$gba that actually increased my FPS, because they moved a chunk of the graphics processing to my other core.
Now, how the author managed to intercept the video output from no$gba and gain me a few FPS on top of it is beyond my skill level, but it makes me think it's at least reasonably possible.
Either way, great job, just food for thought.
Offline
Now, how the author managed to intercept the video output from no$gba and gain me a few FPS on top of it is beyond my skill level, but it makes me think it's at least reasonably possible.
If you're not part of the solution, then you're part of the problem
Offline
Meh, well, it wouldn't be the best approach if you have the source code. My main point is that improvements can be made by kicking something CPU-intensive off to another core.
It actually sounds interesting to me, but I've looked at the code and... I think someone who is already intimate with the code would be more successful than I would. Threading is a total pain, but the GPU sounds to me like something (more or less) straightforward to thread.
Anyway, I was just saying it COULD help a little. Nobody's going to do it unless they find it interesting, I know.
Offline
There is no side to the argument that says 'why bother'. The only counter-arguments are: it is hard, and there are better things to do. I'm glad you mentioned the speedup from the final display presentation running on another core, though. That is a useful statistic to have, considering that of all the possible things to multithread, it is likely the easiest.
Offline
Does desmume use pipelining to execute its instructions?
I don't know how the emulator was implemented, but I imagine it is some kind of infinite loop that fetches the ROM's instructions and dispatches each one to a module that can handle that kind of instruction. So I think we could prefetch instructions and execute them in a pipeline, so that when they are needed they would already be ready. Each stage of the pipeline could run on its own core, which would increase emulation speed as more cores become available.
Offline
Too complicated. How about we get all the bugs out of the current non-pipelined cpu engine first.
Offline