Byter said:
on a programming level. But in the end the PC/GPU does it. It's the lowest level of graphical interaction I can think of (and that is what I want).
That is not how it works on any recent PC. There is no user code on Windows that can do a `byte *pixels = GetScreenBuffer();` and directly manipulate the display.
You can find emulators if you want to do that sort of thing (or, I suppose, you could boot a much older OS). On some of that old hardware the display is literally a block of bytes at a fixed memory address, and you can write to that memory as the display chip scans through it.
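For example, in DOS-era VGA mode 13h the 320x200, 256-colour framebuffer really was just bytes starting at segment A000. A rough sketch of what that looked like (old real-mode compiler syntax such as Turbo C; it will not run on any modern protected-memory OS, only in real mode or an emulator like DOSBox):

/* VGA mode 13h sketch: one byte per pixel, palette-indexed.
   Real mode only; nothing like this exists on modern Windows. */
unsigned char far *vga = (unsigned char far *)0xA0000000L; /* A000:0000 */
vga[100 * 320 + 160] = 15; /* plot a white pixel at (160, 100) */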
Byter said:
That's why I ask for a “virtual” screen (and of course because I couldn't find anything on the internet).
Creating an HBITMAP that you “treat like the display”, or a buffer you copy to OpenGL or Direct3D, or the equivalent through some other library (`SDL_UpdateTexture` etc.; a short SDL sketch follows the OpenGL example below), is how to get a “virtual” screen.
Consider that any time you take, say, a .png file from disk and display it, you are providing pixels; instead of loading them from an image, you can write whatever you want:
#include <cstring> // std::memset
#include <memory>  // std::make_unique

unsigned width = 800;
unsigned height = 640;
auto pixels = std::make_unique<unsigned char[]>(width * height * 3); // RGB, 3 bytes per pixel
std::memset(pixels.get(), 0x00, width * height * 3); // clear to black
for (unsigned x = 0; x < width; ++x)
    pixels[300 * width * 3 + x * 3 + 0] = 0xFF; // red horizontal line on row 300
// Upload into the currently bound GL_TEXTURE_2D (assumes a GL context and a bound texture)
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.get());
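For the SDL route mentioned above, a minimal sketch might look like this (assuming SDL2 and an already-created `SDL_Renderer` named `renderer`; a streaming texture plays the role of the “virtual” screen):

#include <SDL.h>

// Create a streaming texture matching the pixel buffer's RGB24 layout.
SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STREAMING, width, height);
SDL_UpdateTexture(tex, nullptr, pixels.get(), width * 3); // whole texture, pitch in bytes
SDL_RenderCopy(renderer, tex, nullptr, nullptr);          // stretch to fill the window
SDL_RenderPresent(renderer);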
There is also `glDrawPixels`, which is perhaps a little more direct. You need to go back to an older OpenGL version (before the 3.x core profile) to get it, but I believe drivers/implementations still provide it for backward compatibility. It writes a pixel array straight into the current framebuffer (render target).
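A minimal sketch, assuming a compatibility-profile context (and reusing the `pixels` buffer from above):

// glDrawPixels writes into the current framebuffer at the raster position.
glRasterPos2i(-1, -1); // bottom-left corner with the default (identity) matrices
glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.get());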
EDIT:
If you want to know how GPUs do it, then Direct3D 11 or OpenGL is probably where to look. It's not direct pixel access, because the hardware just doesn't work like that these days. Instead you will generally draw triangles defined by 3 points (and 2 triangles can make a rectangle). Then you have a “pixel shader” (“fragment shader” in OpenGL terms), which is a small program you write that processes one pixel at a time as the hardware scans through that triangle. (The “output merger” actually puts the output into the result; you can control its configuration, e.g. tell it to overwrite or do alpha blending or additive blending etc., but it is not a software component you can code.)
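To make that concrete, here is a minimal GLSL fragment shader (embedded as a C++ string for illustration; names are my own) plus the kind of fixed-function blend configuration the output-merger part refers to:

// Fragment shader: runs once for every pixel the rasterizer covers in a triangle.
const char* fragSrc = R"(
#version 330 core
out vec4 fragColor;
void main() { fragColor = vec4(1.0, 0.0, 0.0, 1.0); } // opaque red
)";
// The blend stage (“output merger”) is configured, not programmed, e.g.:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending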