
Low-level drawing on screen with the GPU: some kind of secret knowledge?

Started by foxo, August 07, 2022 09:17 AM
14 comments, last by Hodgman

Hello.

I have been working really hard for months to find something as low-level as possible to draw on screen. Due to limitations with the Windows OS, all I could find was Win32 with a bitmap and the SetPixel/SetPixelV functions; the issue is that these functions are extremely slow because they run on one core. So I was wondering how old game consoles could do such a thing at a decent resolution with not even 1/1000 of today's processing power (were there GPU instructions to draw on screen?). I also realised that GPU drivers and extremely low-level GPU programming are some kind of 'secret' knowledge that nobody knows except the elite at Vulkan and other specialised companies that won't tell us their 'secrets'. That's why I am asking: how do you draw from the GPU to the screen, or into a winuser window, without any fancy specialised API, only low-level programming?
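For reference, the per-pixel approach described above presumably looks something like the minimal Win32 sketch below (window-creation boilerplate omitted; `hwnd` is assumed to be an existing window handle). Every pixel costs a full GDI call, which is why filling a whole frame this way is so slow:

```cpp
// Minimal sketch of the pixel-by-pixel GDI approach described above.
// 'hwnd' is assumed to be a valid window handle created elsewhere.
#include <windows.h>

void FillWindowSlowly(HWND hwnd, int width, int height)
{
    HDC dc = GetDC(hwnd);                  // device context for the client area
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            SetPixel(dc, x, y, RGB(x & 255, y & 255, 0));  // one GDI call per pixel
    ReleaseDC(hwnd, dc);
}
```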

foxo said:
So I was wondering how old game consoles could do such a thing at a decent resolution with not even 1/1000 of today's processing power (were there GPU instructions to draw on screen?)

Old computers didn't have a GPU (for sufficiently "retro" values of old). Instead, video memory was directly accessible from the CPU: with memory-mapped video memory, writing a value to it directly changes the pixel(s).

Do note that current screens are nowhere near the "old" screens. You'd typically have 1 byte for 1 or more pixels (e.g. a 2-colour video mode packs 8 pixels into a byte). Often (but not always) there was hardware palette support to convert bytes in video memory to RGB colours. Resolution was also 640x400 or (much) less, rather than the 2K or 4K screens we have now, and video memory was 16-20 KB rather than the many megabytes a single modern frame takes.
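To make the memory-mapped idea concrete, here is a sketch of how a DOS-era program plotted pixels in VGA mode 13h (320x200, 256 colours, one byte per pixel). It assumes a 16-bit real-mode DOS compiler such as Borland Turbo C++ or Open Watcom; `far`, `MK_FP` and `int86` are that era's extensions (exact header varies by compiler), and it will not build with a modern compiler:

```cpp
// DOS-era sketch: VGA mode 13h, video memory mapped at segment 0xA000.
// Writing a byte IS the pixel write; there is no API call at all.
#include <dos.h>

int main(void)
{
    union REGS r;
    unsigned char far* vram;
    int x, y;

    r.x.ax = 0x0013;                 // BIOS int 10h: set mode 13h (320x200x256)
    int86(0x10, &r, &r);

    vram = (unsigned char far*)MK_FP(0xA000, 0);
    for (y = 0; y < 200; ++y)
        for (x = 0; x < 320; ++x)
            vram[y * 320 + x] = (unsigned char)(x ^ y);   // palette index per pixel

    r.x.ax = 0x0003;                 // restore 80x25 text mode before exiting
    int86(0x10, &r, &r);
    return 0;
}
```

The hardware palette mentioned above is what maps those byte values to RGB; on VGA it was reprogrammed through I/O ports 0x3C8/0x3C9.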

So what is basically lacking in the above setup is "proper" 3D. A CPU isn't designed for computing such scenes; you don't have enough parallel cores to get fluid movement in 3D. This is where the video card manufacturers jumped in, adding GPU processing aimed at massive parallelism in computing pixels, and slowly the current standards evolved. People also found they could do other massively parallel computing tasks if they could convert the problem into a 3D scene to render, which led to the current GPU-computing branch.


@Alberth Yes, they didn't have a GPU. So nowadays, how do you directly write into video memory, or into an HWND window's memory?

The GPU is on the video card itself, so it can directly access video memory. I have no idea how it does that, since it's an internal interface from the video card manufacturer. The OS manages access to devices, so GPU access is something it controls, but the OS doesn't sit between the GPU and the actual video memory as far as I know. Note that not having direct access to video memory is a good thing in general; it makes current desktops much more reliable.

Your slow behaviour comes from the connection between the CPU and the GPU, over which you need to transfer instructions and data to tell the GPU what to paint. A common assumption is that this connection isn't heavily used: in normal use you tell the GPU once what to draw, and after that the amount of data to transfer is very small, so from a competitive point of view there isn't much reason for video card manufacturers to make that connection fast (= much more expensive).

I am not even sure you can access video memory from the CPU directly in today's computers. Even if you can, I am guessing the Windows OS won't allow it. (But this is all quite speculative; you'd need to ask someone who knows today's hardware for the first guess, and someone deep in the Windows OS for the second, although my money is on some non-Windows alternative.)

I think the closest things that currently exist are video libraries like SDL2, which (I think) have a mode where you transfer whole images; you may want to check how they achieve that.
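For what it's worth, the SDL2 route boils down to filling a pixel buffer in ordinary memory and handing it to a streaming texture every frame. A minimal sketch (error handling mostly omitted; the window size and pixel pattern are arbitrary):

```cpp
// Minimal SDL2 sketch: draw into a CPU-side pixel buffer, upload it to a
// streaming texture, and present it each frame.
#include <SDL.h>
#include <cstdint>
#include <vector>

int main(int argc, char* argv[])
{
    const int W = 640, H = 400;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   win = SDL_CreateWindow("software framebuffer",
                          SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture*  tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                          SDL_TEXTUREACCESS_STREAMING, W, H);

    std::vector<std::uint32_t> pixels(W * H);   // the "framebuffer" you write to

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // Plot whatever you like into the CPU buffer (0xAARRGGBB).
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                pixels[y * W + x] = 0xFF000000u | ((x & 255) << 16) | ((y & 255) << 8);

        SDL_UpdateTexture(tex, nullptr, pixels.data(), W * 4);  // pitch in bytes
        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, nullptr, nullptr);
        SDL_RenderPresent(ren);
    }

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```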

@Alberth I really don't want to learn Vulkan; I tried and it's inhuman. OpenGL is easier but it's dead, and both of them are overkill to render a bitmap 100 times a second.

Unfortunately for you, the days when your program was the one and only thing running, with full control, so you could write a bitmap to video memory directly, are gone on modern computers as far as I know. Basically you'd need DOS; no idea if that still exists.

I think that mostly leaves you the options of either using some simple video library such as SDL2 (others exist too), or switching to something less modern and/or smaller. Maybe a Raspberry Pi or a modern retro computer (a remake of an old computer with more modern hardware) is more your cup of tea.


You have used the win32 tag so…

https://docs.microsoft.com/en-us/windows/win32/gdi/windows-gdi

https://docs.microsoft.com/en-us/windows/win32/gdiplus/-gdiplus-gdi-start

GDI+ is OOP and has more functionality than GDI.

For videos, Google "Handmade Hero Day 003" and also Day 004. I think it uses GDI, not GDI+, but I'm not sure.
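For reference, the technique those episodes demonstrate is plain GDI: keep a pixel buffer in ordinary memory and blit it to the window in one call with StretchDIBits. A minimal sketch of the idea (window creation and the message loop are omitted; the buffer contents are assumed to be filled elsewhere):

```cpp
// Sketch of the GDI "software framebuffer" approach: keep W*H 32-bit pixels
// in ordinary memory and blit them with a single StretchDIBits call.
#include <windows.h>
#include <cstdint>

void PresentBuffer(HDC dc, const std::uint32_t* backBuffer, int W, int H,
                   int windowW, int windowH)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = W;
    bmi.bmiHeader.biHeight      = -H;          // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;          // 0x00RRGGBB per pixel
    bmi.bmiHeader.biCompression = BI_RGB;

    // One call copies (and stretches) the whole frame instead of per-pixel SetPixel.
    StretchDIBits(dc,
                  0, 0, windowW, windowH,      // destination rectangle
                  0, 0, W, H,                  // source rectangle
                  backBuffer, &bmi,
                  DIB_RGB_COLORS, SRCCOPY);
}
```

Called from WM_PAINT (between BeginPaint/EndPaint) or with a DC from GetDC, this replaces millions of SetPixel calls with a single blit, which is usually fast enough for a CPU-rendered frame at modest resolutions.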

🙂🙂🙂🙂🙂 ← The tone posse, ready for action.

GDI is slow for complex graphics, and it does not have built-in anti-aliasing, but for simple 2D graphics it is sufficient and has no dependencies. GDI+ has more features, but it is also significantly slower than GDI.

For real-time graphics on Windows it is better to use a newer API such as DirectX. This is as low-level as Microsoft supports outside of specialized companies.

@DirectX9 DirectX, just to display a bitmap…

