
(in a proxy d3d9.dll) forcing a game to use fixed pipeline instead of shader?? HELP!

Started by April 07, 2018 10:37 PM
10 comments, last by turanszkij 6 years, 9 months ago

Hello.

Could anyone please help me with this:

I'm playing around with a proxy d3d9.dll to make games run on my slow system. If I've understood correctly, a lot of the heavy and costly things a game does (post-processing effects, fog, etc.) happen in the shaders. What I'm struggling with is intercepting the game's calls to SetVertexShader and SetPixelShader and telling DirectX to simply use the fixed pipeline instead of the custom shaders, hoping that would make the game run smoother (I don't care about losing all the fancy eye-candy shader effects!). The interception is easy, but I have no idea how to pass the fixed pipeline as a shader to the original d3d9.dll. I've done a lot of digging about shaders, but I'm still totally confused.

Can anyone point me in the right direction please?

Thank you so much in advance.

 

There's no way to do this without knowing what's going on in the game's shader programs. Shaders don't just add "fancy eye candy"; they're completely responsible for computing vertex positions in screen space and the final value of every pixel written to a render target. It's possible that the game's shaders only do straightforward things that could mostly be replicated in the old fixed-function pipeline, or they could be doing things in a complex, unorthodox way. Even common techniques like skeletal animation will trip you up if you don't know what's going on in the shader, since you would have to know that the vertex shader is doing joint skinning and set up the equivalent functionality with fixed-function states (if it's even possible to do so).


Modern Windows emulates the fixed-function pipeline with a set of shaders anyway, so a modification like this wouldn't really help.

Thanks a lot. I was afraid that would be the case!!

So, does a game do all of the GPU-intensive stuff (fog, post-processing, etc.) using shaders?

2 hours ago, megatenfreak said:

Thanks a lot. I was afraid that would be the case!!

So, does a game do all of the GPU-intensive stuff (fog, post-processing, etc.) using shaders?

Yup! Post-processing is usually done by drawing a triangle that covers the entire screen, and using a pixel shader to sample textures and perform image processing operations on them. 

Thanks again for the help.

Could you please help me with one more question? I'd really appreciate it:

I'm working on Life Is Strange: Before the Storm as a test ground for my project, and the initial logos, before I've even gotten to the menu, run at 15 FPS. I don't understand. Isn't it supposed to be a single 2D texture on the screen? Why would a black screen with a single image run at only 15 FPS? Where does the heavy load come from?

Getting the answer to this question could mean a breakthrough for me.

Thank you sooo much in advance.

p.s. my digging seems to show that even on a black screen, tons of stuff is being rendered... but why?


It's impossible to say for sure without digging in with some tools to see what the game is doing during those initial logos. I shipped a game where we would load up the player's current save file and start streaming in the level data in the background while the logos were showing, which meant we were doing a lot of stuff in the background! It's possible that this game is doing something similar. Or, it could just be that it's rendering the logos in a rather inefficient way, possibly because that was the simplest way to get things working. Performance analyzers could help give you an idea of what's going on, but ideally you would want symbols or even source code to really understand what's happening.

2 hours ago, MJP said:

It's impossible to say for sure without digging in with some tools to see what the game is doing during those initial logos. I shipped a game where we would load up the player's current save file and start streaming in the level data in the background while the logos were showing, which meant we were doing a lot of stuff in the background! It's possible that this game is doing something similar. Or, it could just be that it's rendering the logos in a rather inefficient way, possibly because that was the simplest way to get things working. Performance analyzers could help give you an idea of what's going on, but ideally you would want symbols or even source code to really understand what's happening.

Thanks for the quick reply! What you mentioned about loading resources and stuff while the logos are being rendered actually makes a lot of sense! I'll probably find out more if I look at it from that perspective.

Thank you so much;)

p.s. In my proxy d3d11.dll, I hook into the DeviceContext's End method as something that runs once per frame, since hooking the Present method in DXGI requires a completely separate file and code. I wonder if I'm messing up by doing so!

31 minutes ago, megatenfreak said:

p.s. In my proxy d3d11.dll, I hook into the DeviceContext's End method as something that runs once per frame, since hooking the Present method in DXGI requires a completely separate file and code. I wonder if I'm messing up by doing so!

Yes, you are. Begin and End are used for queries, not as frame delimiters like BeginScene/EndScene in D3D9. You can have any number of queries per frame, including zero.

I see. Funny thing is, the Begin method never gets called by the game. It handles every frame with a long series of DrawIndexed calls and then an End call.

On a side note, if I were to intercept the game's textures and intensive post-processing effects, I should target the Pixel Shader stage, right? It seems to be the last programmable stage before the Output Merger.

