Understanding the Source Code (or trying to)
05-20-2014, 10:13 AM (This post was last modified: 05-20-2014 10:19 AM by mtmarco.)
Post: #1
Hello. Since yesterday I've been trying to read and understand the source code, to get into the subject and figure out how the emulator works.

But I need more info about the architecture of the emulator:

(For now I'm following the Windows source files.)
1 -> Starting from Windows\main.cpp I found the WinMain entry point, plus various calls to the INI config load/save class, the OS version check, etc., ending in the message pump, OK; but basically from there we call MainWindow::Init;

2 -> So we go to Windows\WndMainWindow.cpp; here (looking quickly) I can see the main window initialization and the WndProc callback that handles all events (menu, commands, resize, etc.). I focused on the BrowseAndBoot()/BrowseAndBootDone() functions, which load a ROM and start emulation, ending in NativeMessageReceived("boot", filename.c_str()); so are we sending a message to the Native framework to start emulating that file?

3 -> However hard I searched for where exactly we end up reading this message and starting emulation, I could not find an exact point; but from what I can understand of the whole solution, I think we end up in UI\EmuScreen.cpp, in EmuScreen::sendMessage, and from there, for the "boot" message, in bootGame(string filename):
here we configure the coreParam object and then launch PSP_InitStart(coreparms, err_string) and the PSP_InitUpdate function;
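The boot flow in points 2-3 boils down to a string-keyed message passed from the platform layer to a screen object. Here is a hedged sketch of that pattern; all names below are illustrative stand-ins, not PPSSPP's real signatures:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of a NativeMessageReceived() -> EmuScreen::sendMessage()
// style dispatch. PPSSPP's real code differs in details.
struct Screen {
    virtual void sendMessage(const std::string &msg, const std::string &value) = 0;
    virtual ~Screen() {}
};

struct EmuScreenSketch : Screen {
    std::string bootedFile;
    void sendMessage(const std::string &msg, const std::string &value) override {
        if (msg == "boot")   // the "boot" message carries the ROM path
            bootGame(value);
    }
    void bootGame(const std::string &filename) { bootedFile = filename; }
};

// Stand-in for NativeMessageReceived(): forwards the message to all live screens.
void nativeMessageReceived(std::vector<Screen *> &screens,
                           const std::string &msg, const std::string &value) {
    for (Screen *s : screens)
        s->sendMessage(msg, value);
}
```

The point of the pattern is that the Windows UI layer does not need to know which screen (menu, game, settings) will act on the message; whichever screen recognizes "boot" handles it.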

4 -> This ends up in Core\System.cpp, in PSP_InitStart() and PSP_InitUpdate(); in these methods I can see we call CPU_Init() and GPU_Init();

5 -> Now we go into two different source files:
-> For CPU_Init() we stay in System.cpp; in that method, as far as I can see, we basically start up CPU emulation: we set some parameters, then several modules are started: Replacement_Init(); Memory::Init(); host->AttemptLoadSymbolMap(); Audio_Init(); CoreTiming::Init(); HLEInit(); if (!LoadFile(filename, &coreParameter.errorString)) {...}, and we add the currently executed ROM to the recent files;

-> For GPU_Init() we end up in GPU\GPUState.cpp; in this method a switch picks the GPU type among NullGPU, GlesGPU, SoftGPU, or DX9GPU, calling a SetGPU(T *obj) method that binds the generic GPU interface to a more specific GPU class.
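The switch described here is essentially a factory over a common GPU interface. A minimal sketch of the pattern, with made-up class names (PPSSPP's real SetGPU and backend classes differ):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical sketch of backend selection: a switch picks one concrete
// implementation of a shared interface, as GPU_Init()/SetGPU() is described.
enum class GPUBackend { Null, GLES, Software, DX9 };

struct GPUInterface {
    virtual std::string name() const = 0;
    virtual ~GPUInterface() {}
};
struct NullGPUSketch : GPUInterface { std::string name() const override { return "null"; } };
struct GLESGPUSketch : GPUInterface { std::string name() const override { return "gles"; } };
struct SoftGPUSketch : GPUInterface { std::string name() const override { return "soft"; } };

std::unique_ptr<GPUInterface> createGPU(GPUBackend backend) {
    switch (backend) {
    case GPUBackend::GLES:     return std::make_unique<GLESGPUSketch>();
    case GPUBackend::Software: return std::make_unique<SoftGPUSketch>();
    default:                   return std::make_unique<NullGPUSketch>();
    }
}
```

Once the pointer is set, the rest of the emulator only talks to the interface, which is why the same call sites serve every backend.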

6 -> Here I'm getting lost: I tried to manipulate the output by randomly messing up PixelShaderGeneratorDX9.cpp, thinking this was the display output generator for a Windows PC with a dedicated GPU (assuming the switch would choose DX9GPU), but in my tests I saw no result in games after modifying boolean values in ComputeFragmentShaderIDDX9(id); so I tried changing the same code in FragmentShaderGenerator.cpp from the GLES GPU class, but again no result.
So I understand that our software, when executed on a Windows PC, is not going through those portions of code. I'm sure the software goes through all the classes/methods/functions up to point 5; after that I get a bit lost. It's just difficult to understand such a big source from zero, especially with all these indirect calls and messages being exchanged.

7 -> In the end I landed in GPUCommon.cpp, and I think our emulator resolves the GPU class to the Common one (not DX9 nor GLES, which is obviously for mobile). Unfortunately, if I want to test my presence in a portion of code, up to EmuScreen I could use OnScreenMessages.h (osm.show) or Windows.h (MessageBox), but from System onward I can't (those headers are not included); so to verify whether I'm in those methods I used a little ugly trick: fopen(file, "w") on a new empty file (thinking the debug console doesn't work with printf since we are in a Win32 app), and I discovered that the software does go into GPUCommon.


This is for the GPU part. For the CPU part I haven't read much more code, but I suppose that there we have the instruction translation (ELF reading, relocation, or whatever has to be done to get the instructions disassembled and re-executed), and at the end of the process the execution of the correct task (audio/video/etc.). The GPU part comes in only after the translation, when we have a minimal GPU instruction to render; as far as I can see/guess, the specific GPU classes have specific methods for the particular rendering system (DX, OpenGL, soft, null) that render lights, fog, polygons, blending, and all other effects/models, or at least generate parameters for the following rendering.

OK, this last part is mostly my assumption, so my question is: from point 5 onward, what really happens?
Which classes/source files does CPU_Init() connect to, and what does it practically do? And where does GPU_Init() hook in / end up?

Where, and with what logic, after we boot a game, does the system start the ISO reading and the ELF/BIN decoding/disassembly? In which methods are the instructions reinterpreted and redirected to the correct functions? From this CPU-side elaboration, how do we reconnect to the GPU side, and where do we actually transfer the output to the DirectX device (in the Windows case at least, using the d3d*.h headers and D3D rendering classes)?

Thank you in advance :) Sorry if I've been too long, but I also want to "contribute" if I can. I really wanted to play Silent Hill Origins, only to find annoying flickering when starting the game, so I wanted to check in the source which call was causing this... and also I have a Snapdragon 800 smartphone with a GLES 3 Adreno 330 GPU (Samsung Galaxy Note 3), with not as high performance as expected with PPSSPP on Android, so I want to look into this too. But apart from these specific cases, I want to understand the code/architecture, and on the forum/site I can't find much information; so I'm asking you for some help :)

It would be very useful if we could get more info, to work all together and better understand where to put our hands :) Sorry for asking all this of you, but it's very, very difficult to understand the whole code without the help of those who have dealt with it from the very start; anyway, thanks for any help. (Also, a basic scheme/diagram would be good.)
One more thing: sorry for my (very) bad English, I'm Italian :)

Bye, Marco
05-21-2014, 02:04 AM (This post was last modified: 05-21-2014 02:06 AM by [Unknown].)
Post: #2
RE: Understanding the Source Code (or trying to)
(05-20-2014 10:13 AM)mtmarco Wrote:  here we configure coreParam object, and then we launch PSP_InitStart(coreparms,err_string) and PSP_InitUpdate function;

For all intents and purposes, this is where it "enters" the emulator. All the stuff up to this point has just been platform-specific / UI-specific stuff.

(05-20-2014 10:13 AM)mtmarco Wrote:  6 -> here i'm getting lost: i've tried to manipulate output randomly messing up the PixelShaderGeneratorDX9.cpp, thinking this was the display output generator for a Windows PC with dedicated GPU (thinking the switch will chose DX9GPU), but from some test i saw no result in games, modifying boolean values into ComputeFragmentShaderIDDX9(id); so i tried with changing the same code in the FragmentShaderGenerator.cpp from GLES GPU class, but no result the same as before;

So, as far as the CPU goes, it's a bit more complex, but that's mainly handled in Core/MIPS/. I recommend not worrying too much about that, because it's believed to be mostly correct (except some lingering issues in the armjit, NaN handling, etc.)

For most CPU stuff that's buggy, look in Core/HLE/. This is where we emulate all of the library functions that ship with the OS/firmware updates.

As far as the GPU goes, you're gonna want to stick to the GLES stuff, yes (or Common). Look at GLES_GPU.cpp: everything runs through FastRunLoop(), which calls the Execute_XYZ() functions. These set state, which eventually pumps through StateMapping and is flushed in TransformAndDrawEngine.

If you're changing FragmentShaderGenerator, note that it CACHES shaders. There's a function GenerateFragmentShaderID(), which needs to generate something unique per shader. If the shader id matches, it won't go into the generate function.

The generation of the shader source code is done only one time per shader id.
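That caching behavior can be sketched like this. The names and the two state bits are hypothetical; the real GenerateFragmentShaderID() hashes far more render state:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of shader caching: shaders are keyed by an ID computed
// from render state, and source generation runs once per distinct ID.
using ShaderID = unsigned int;

struct ShaderCacheSketch {
    std::map<ShaderID, std::string> cache;
    int generateCalls = 0;

    // Stand-in for GenerateFragmentShaderID(): pack the state bits that
    // would change the generated source into an integer key.
    ShaderID computeID(bool alphaTest, bool fogEnabled) {
        return (alphaTest ? 1u : 0u) | (fogEnabled ? 2u : 0u);
    }

    const std::string &getShader(bool alphaTest, bool fogEnabled) {
        ShaderID id = computeID(alphaTest, fogEnabled);
        auto it = cache.find(id);
        if (it == cache.end()) {   // generate only on a cache miss
            ++generateCalls;
            it = cache.emplace(id, generateSource(alphaTest, fogEnabled)).first;
        }
        return it->second;
    }

    std::string generateSource(bool alphaTest, bool fogEnabled) {
        std::string src = "// fragment shader\n";
        if (alphaTest)  src += "alpha_test();\n";
        if (fogEnabled) src += "apply_fog();\n";
        return src;
    }
};
```

This is why editing the generator without also changing the ID computation can show no effect: an ID that matches a cached shader never reaches the generate function.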

(05-20-2014 10:13 AM)mtmarco Wrote:  7 -> in the end I've ended in GPUCommon.cpp, and i think our emulator is specifing the GPU class into the Common one (not the DX9 nor the GLES, that obviously is for mobile);

GPUCommon is definitely used, but even on desktop GLES is used.

(05-20-2014 10:13 AM)mtmarco Wrote:  fopen(file,"w") of a new empty specific file(thinking the debug console isn't working with printf cause we are in a win32 app).... and ive discovered that the software go into gpucommon.

I recommend you open Debug -> Log Console and instead use:

NOTICE_LOG(G3D, "Hello.");

It uses printf() syntax so you can log variables. You'll see LOG statements in various places of the code.
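A rough sketch of how a printf-style log macro like this can be built (assumed structure only; the real NOTICE_LOG also routes through a log channel/severity filter before reaching the console):

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <string>

// Hypothetical sketch: format a message with printf() syntax and tag it
// with a channel name, roughly like a NOTICE_LOG(G3D, ...) call.
std::string formatLog(const char *channel, const char *fmt, ...) {
    char buf[512];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);  // printf-style formatting
    va_end(args);
    return std::string("[") + channel + "] " + buf;
}

// The macro stringizes the channel token, so the call site stays terse.
#define NOTICE_LOG_SKETCH(channel, ...) formatLog(#channel, __VA_ARGS__)
```

The macro form lets the channel be written as a bare token (G3D, HLE, ...) at every call site while the formatting itself is plain vsnprintf.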

(05-20-2014 10:13 AM)mtmarco Wrote:  (elf reading, relocation or what has to be done to get the instruction disassembled and reexecuted)

ELF is in Core/ELF, as is relocation. Mostly it's triggered in Core/HLE/sceKernelModule.cpp. Jit is per arch, e.g. in Core/MIPS/x86/.

(05-20-2014 10:13 AM)mtmarco Wrote:  or at least generate parameters for the following rendering....

Yes, these parameters then affect primitives that are drawn when it flushes (these use the vertex and fragment shaders.)

FYI, Software can also be used and obviously does not use any shaders. DirectX doesn't work on Windows yet.

(05-20-2014 10:13 AM)mtmarco Wrote:  Ok now this last part is mostly my assumption, so my question is: from the point 5 in advance what does really happens??:
CPU_init() basically with which class/sourcefiles connects and what pactically does? And where does GPU_init() hook/end up?

The GPU is initialized, potentially running on a separate thread from the CPU (depending on settings.) Then the CPU runs. All GPU work is triggered by the CPU in some way.

Normally, this is via Core/HLE/sceGe.cpp. It will submit displaylists or what have you. Also, frames are displayed in Core/HLE/sceDisplay.cpp. These functions are called by game code or on a timer in a couple cases (like vblanks.)
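The CPU-drives-GPU relationship can be sketched as a queue that HLE calls fill and the GPU side drains. This is illustrative only: real display lists contain GE command words and are interpreted by a state machine, not summed:

```cpp
#include <cassert>
#include <deque>

// Hypothetical sketch: "CPU" code (like sceGe HLE calls) enqueues display
// lists; the GPU side drains them, possibly on another thread in PPSSPP.
struct DisplayList { int numCommands; };

struct GeSketch {
    std::deque<DisplayList> pending;
    int executedCommands = 0;

    // Stand-in for an sceGeListEnQueue-style call made by game code.
    void enqueue(DisplayList list) { pending.push_back(list); }

    // GPU side: drain the queue in submission order.
    void processQueue() {
        while (!pending.empty()) {
            executedCommands += pending.front().numCommands;
            pending.pop_front();
        }
    }
};
```

The key property is the one stated above: the GPU never acts on its own; every list it executes was put there by CPU-side code.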

(05-20-2014 10:13 AM)mtmarco Wrote:  and also i have a snapdragon 800 smartphone with GLES 3 adreno 330 gpu (samsung galaxy Note 3), with not so high performance as expected with ppsspp on android, and so i want to see also this...

Well, it might be useful to profile your device to see why it does not have good performance. My phone does not even support GLES3 and is Adreno as well, and I get okay performance. I know that larger screen resolution has a big impact (try e.g. "small display" to see if this is a problem.) But, different games have different issues.

-[Unknown]
05-21-2014, 04:54 PM (This post was last modified: 05-21-2014 05:01 PM by mtmarco.)
Post: #3
RE: Understanding the Source Code (or trying to)
Fantastic. Thank you very, very much, [Unknown]... all this was really enlightening.
I'll try to concentrate into Core/HLE modules, GPU_GLES, GPUCommon and try to understand something more.

My weakness is that I have no knowledge of the low-level SCE kernel functions or of the PSP MIPS architecture, so I can't understand what each address stands for.

Do you have some reference? Did you reverse-engineer games on a real PSP, or was your own experience enough?

I think that from kernel/firmware execution and ELF reading/relocation we'll basically get three main types of instructions: input control polling, display rendering, audio rendering... obviously there will also be power management, Wi-Fi management, and a few other things.

In HLE/sceGe and sceDisplay I think we manage display rendering. But there we are already at the point where we know which function to trigger, because if I've understood correctly, you said that relocation is done in Core/ELF, triggered in Core/HLE/sceKernelModule and Core/MIPS.

OK, now I'll read the code. Excuse me for all these questions... I'm trying to get into this complex thing.

(05-21-2014 02:04 AM)[Unknown] Wrote:  
(05-20-2014 10:13 AM)mtmarco Wrote:  and also i have a snapdragon 800 smartphone with GLES 3 adreno 330 gpu (samsung galaxy Note 3), with not so high performance as expected with ppsspp on android, and so i want to see also this...

Well, it might be useful to profile your device to see why it does not have good performance. My phone does not even support GLES3 and is Adreno as well, and I get okay performance. I know that larger screen resolution has a big impact (try e.g. "small display" to see if this is a problem.) But, different games have different issues.

-[Unknown]

I would not say that I'm getting bad performance... but not as much as I would expect from what is currently the best-performing smartphone, a 2.3 GHz quad-core Snapdragon 801 CPU with an Adreno 330 ES 3.0 GPU. OK, I'm not expecting the performance of my quad-core i7 notebook with an NVIDIA GT 650M, but for some games I expected something more.

Obviously I've tested the same games on a Galaxy S2 (Exynos 4210, 1.2 GHz dual-core ARM Cortex-A9 with a Mali 400 GPU), a Galaxy Note 3 (quad-core Snapdragon 801), and a notebook (i7-3630QM 2.4 GHz / 3.3 GHz turbo, NVIDIA GT 650M GPU).
I can say that the notebook obviously beat the Android smartphones in all games, always getting the same or much higher performance (and while on the phones I managed to get smooth graphics only with everything off, some hacks, and many lowered settings, on the PC I could max everything from anisotropic filtering to every optimization, FXAA filters, etc.). The strange thing is that in some games I got the same performance on the Galaxy S2 and the Galaxy Note 3 (as in Obscure), while in other titles (Resistance) I got very different speeds, but the same unplayability on both devices.

God of War runs very smoothly on the PC; on the Note 3 you get decent fluidity only with a bunch of hacks, clocking to 45 MHz, and with non-buffered rendering (losing many effects).

That's what I mean by not so high performance... I think we can probably get something more out of those devices, even just a bit. Obviously comparing against an i7 plus an NVIDIA GPU isn't a fair comparison, but between the two smartphones (three years apart) I expected a bigger difference.