
2d rendering?

Started by August 23, 2019 04:48 AM
24 comments, last by tolazytbh 5 years, 4 months ago
13 hours ago, Steven Ford said:

Hi @tolazytbh, it's perfectly possible to do what you want (I think); there's nothing to stop you from generating the image in CPU memory and then copying it up to the GPU for display purposes. However, by doing so, you will be throwing away almost all of the capabilities of the GPU and probably backing yourself into a corner (which is why people above are recommending not doing so).

To simplify the 2D graphics, depending on what you're using, there will be a load of libraries which can help. Personally, because we're using DX, I use the DirectX Tool Kit, which provides things like 'SpriteBatch' and 'SpriteFont'; these simplify writing the game so we can focus on the gameplay rather than boilerplate code.
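For a flavour of what that looks like, here's a minimal sketch of SpriteBatch/SpriteFont usage from the DirectX Tool Kit (the device, context, texture and font file names are placeholders, not anything from this thread):

```cpp
#include <memory>
#include <d3d11.h>
#include <SpriteBatch.h>   // DirectX Tool Kit
#include <SpriteFont.h>

std::unique_ptr<DirectX::SpriteBatch> spriteBatch;
std::unique_ptr<DirectX::SpriteFont>  spriteFont;

// Created once, after the D3D11 device and context exist.
void CreateSprites(ID3D11Device* device, ID3D11DeviceContext* context)
{
    spriteBatch = std::make_unique<DirectX::SpriteBatch>(context);
    spriteFont  = std::make_unique<DirectX::SpriteFont>(device, L"assets/game.spritefont");
}

// Called every frame: queue 2D draws between Begin/End and let the toolkit batch them.
void DrawHud(ID3D11ShaderResourceView* playerSprite)
{
    spriteBatch->Begin();
    spriteBatch->Draw(playerSprite, DirectX::XMFLOAT2(100.f, 200.f));   // position in pixels
    spriteFont->DrawString(spriteBatch.get(), L"Score: 0", DirectX::XMFLOAT2(10.f, 10.f));
    spriteBatch->End();
}
```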

Do you have any specific questions with which you need assistance?

Well, either way I have to write the objects/animations from RAM to the GPU, and I had mentioned above that I'd use the GPU for addition/subtraction (shaders), but I want to work in 2D (a pixel coordinate, then a pixel color) instead of using vertices/triangles.

Pretty much: write the pixel to the GPU, its coordinate and color, and then I'd just shift it if there's shading...

I thought it was ridiculous: I went to 3 different game engines / 3D engines and they all require you to build the area as a geometrical pattern / 3D scene, with no methods to do that yourself (Vulkan, Direct3D, OpenGL).

Also, right now it seems like I get 4 operations on the CPU before I'm at 120 frames per second (2 milliseconds, 4 milliseconds, 6 milliseconds), and that's before I multithreaded it (I can't write to the screen that fast without the GPU, though).

 

Anything you can provide about sending 2D images at a given framerate would help (I don't know if video players do that or if they use vertices as well), or about the case where you have your own rasterization method and just want to handle the coordinates yourself.
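To make it concrete, something like this is what I have in mind on the CPU side (just a rough sketch, the names are made up):

```cpp
#include <cstdint>
#include <vector>

// CPU-side framebuffer: one 32-bit color value per pixel, already rasterized.
struct Framebuffer
{
    int width, height;
    std::vector<uint32_t> pixels;

    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}

    // Write one pixel by coordinate and color - that's the whole interface I want.
    void putPixel(int x, int y, uint32_t color)
    {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;
    }
};
```

The open question is then just how to get that pixels array onto the screen every frame.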

5 hours ago, tolazytbh said:

Well, either way I have to write the objects/animations from RAM to the GPU, and I had mentioned above that I'd use the GPU for addition/subtraction (shaders), but I want to work in 2D (a pixel coordinate, then a pixel color) instead of using vertices/triangles.

Pretty much: write the pixel to the GPU, its coordinate and color, and then I'd just shift it if there's shading...

 

This is the part I can't follow.

You can write to coordinates. You can calculate where such a coordinate (I'm thinking of NDC, but you can probably do it in world coordinates as well) lands on screen by querying (or setting) the resolution and doing some gymnastics that you have probably already figured out. You can use colour formats with a bit depth per channel per fragment that no monitor can actually display. And all of that is largely platform/monitor/windowing-system independent, with a few lines of code using graphics and windowing APIs.
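As a rough sketch of that gymnastics (assuming a viewport of width x height pixels with the window origin at the top-left, which is the usual convention):

```cpp
#include <utility>

// Map an integer pixel coordinate to normalized device coordinates (NDC),
// where both axes run from -1 to +1 and NDC y points up while pixel y points down.
std::pair<float, float> pixelToNdc(int px, int py, int width, int height)
{
    float ndcX = (px + 0.5f) / width  * 2.0f - 1.0f;   // sample at the pixel centre
    float ndcY = 1.0f - (py + 0.5f) / height * 2.0f;   // flip the vertical axis
    return { ndcX, ndcY };
}
```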

A ray tracer, for example, can have no triangles (the objects' models may have them, but they could also be purely implicit); it traces into a texture that is then displayed. The act of displaying would need two triangles for the texture coordinates, because that is - to my limited knowledge - how stuff works. Edit: well, the texture could be displayed without any transformations or monitor adjustments as well, of course, though it may look somewhat distorted ... ?

I would say that it is very tedious and error-prone to write directly to the screen without the use of a library, more so in client/server based environments.

Use a library, Luke ? Preferably one that can handle different environments.

Sure, I may have a total lack of understanding here, or maybe it is just a sporty challenge to go the whole way on foot ... I am prepared to be corrected. Or maybe you could write down what exactly you are doing; maybe people can come up with more specific ideas then ... if I understand it correctly, others here have done renderers without a graphics API ... search a bit ?

 

4 hours ago, Green_Baron said:

This is the part I can't follow.

You can write to coordinates. You can calculate where such a coordinate (I'm thinking of NDC, but you can probably do it in world coordinates as well) lands on screen by querying (or setting) the resolution and doing some gymnastics that you have probably already figured out. You can use colour formats with a bit depth per channel per fragment that no monitor can actually display. And all of that is largely platform/monitor/windowing-system independent, with a few lines of code using graphics and windowing APIs.

A ray tracer, for example, can have no triangles (the objects' models may have them, but they could also be purely implicit); it traces into a texture that is then displayed. The act of displaying would need two triangles for the texture coordinates, because that is - to my limited knowledge - how stuff works. Edit: well, the texture could be displayed without any transformations or monitor adjustments as well, of course, though it may look somewhat distorted ... ?

I would say that it is very tedious and error-prone to write directly to the screen without the use of a library, more so in client/server based environments.

Use a library, Luke ? Preferably one that can handle different environments.

Sure, I may have a total lack of understanding here, or maybe it is just a sporty challenge to go the whole way on foot ... I am prepared to be corrected. Or maybe you could write down what exactly you are doing; maybe people can come up with more specific ideas then ... if I understand it correctly, others here have done renderers without a graphics API ... search a bit ?

 

I'm pretty sure you can display stuff without using triangles

 

This is some serious 90s thinking right here. If you insist on doing CPU rendering, just upload to a texture and render a full screen quad.
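A minimal sketch of that upload in D3D11, assuming the texture was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE (the names here are illustrative):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>
#include <d3d11.h>

// Copy a CPU framebuffer into a dynamic texture once per frame,
// honouring the row pitch the driver hands back.
void UploadFrame(ID3D11DeviceContext* context, ID3D11Texture2D* frameTex,
                 const std::vector<uint32_t>& pixels, int width, int height)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(frameTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        for (int y = 0; y < height; ++y)
            std::memcpy(static_cast<uint8_t*>(mapped.pData) + y * mapped.RowPitch,
                        pixels.data() + y * width,
                        width * sizeof(uint32_t));
        context->Unmap(frameTex, 0);
    }
}
```

After that, the texture's shader resource view gets bound and drawn onto the full screen quad like any other texture.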

https://en.wikipedia.org/wiki/Raster_scan

I hate being served Wikipedia links without a comment. Revenge ?

We're back on page 1. A quad is two triangles. Maybe you can get around those and use a geometry shader to emit four vertices with texture coords, since you already use shaders for other stuff as you wrote upthread, but I never tried it.

Apart from that, any widget API or whatever can display a texture. Though maybe not at screen resolution at 500/s if it must go over the bus each frame ...
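Back-of-the-envelope (numbers illustrative, not measured in this thread):

```cpp
// Rough cost of pushing a full 1920x1080, 32-bit frame over the bus 500 times a second.
constexpr double bytesPerFrame  = 1920.0 * 1080.0 * 4.0;   // ~8.3 MB per frame
constexpr double bytesPerSecond = bytesPerFrame * 500.0;   // ~4.1 GB/s sustained
```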

On 8/22/2019 at 10:48 PM, tolazytbh said:

I had tried Direct3D, and after I implemented it, everything uses a vertex/triangle setup. I want to load everything as a 2D image (already rasterized), and I can't seem to find anything in Direct3D that doesn't have to do with triangles...

Blasting pixels onto a texture and drawing it as two full screen triangles is the most efficient way.  This is how you communicate your intent to the GPU: a simple vertex shader, a simple pixel shader, and a simple pair of full screen triangles.  Let the GPU and its tens of thousands of workers decide how best to take your texture and map it onto a display.

It should be a win-win scenario:  You get to blast pixels to a buffer somewhere, and the GPU does its thing and puts it somewhere.
 

You could size one triangle to cover the entire screen, but then the GPU will need to divide it into multiple triangles to fit the display.  You may as well have sent it two full screen triangles.

If you are against drawing triangles, then perhaps you can look at using Direct2D.  You get the benefit of a simple (yet slow*) `setPixel(x,y,color)` style of interface, and images are rectangles.
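Coming back to the triangle route: the "simple pair of full screen triangles" really is just six vertices in clip space with texture coordinates, something along these lines (D3D conventions assumed, v = 0 at the top of the texture):

```cpp
// Two clip-space triangles covering the whole screen, with UVs for the blasted texture.
// The vertex shader can pass these through untouched; the pixel shader just samples.
struct Vertex { float x, y, u, v; };

const Vertex fullscreenQuad[6] =
{
    { -1.f, -1.f, 0.f, 1.f },   // bottom-left
    { -1.f,  1.f, 0.f, 0.f },   // top-left
    {  1.f,  1.f, 1.f, 0.f },   // top-right

    { -1.f, -1.f, 0.f, 1.f },   // bottom-left
    {  1.f,  1.f, 1.f, 0.f },   // top-right
    {  1.f, -1.f, 1.f, 1.f },   // bottom-right
};
```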

3 hours ago, tolazytbh said:

I'm pretty sure you can display stuff without using triangles

 

I'm pretty sure you can't. Even the window you are looking at right now is a textured quad. That's just how things work nowadays with the GPU in charge. Even old APIs like DirectDraw are rerouted to conform to this.

Edit: Better rename it to IndirectDraw :D 
 

8 hours ago, Prototype said:

I'm pretty sure you can't. Even the window you are looking at right now is a textured quad. That's just how things work nowadays with the GPU in charge. Even old APIs like DirectDraw are rerouted to conform to this.

Edit: Better rename it to IndirectDraw :D 
 

I was thinking I'd have to program the GPU myself or write my own engine. I had watched a couple of videos about making your own graphics card, and the protocol is: there's a horizontal scanline and a vertical scanline, you have to delay a couple of things before you write, and you can't send anything faster than the monitor can handle, otherwise you'd fry/break the monitor (for example, mine can't take anything faster than 60 frames a second (60 Hz)). Through that method you send exact coordinates for the pixels and then the color.

It just feels a lot more organized to me, plus I have my own method for handling depth (for example, a dungeon/underground map in a game).

EDIT:

Also, if you were only going to use 300x300 pixels of the screen, you'd have to be able to send the entire screen anyway (sending a blank pixel for the rest). I'd assume that's what is at the core of the GPU.

It simply does not work that way any more.  There is no electron gun with which to stream color data.  We have since transitioned from streaming color data with an electron gun to streaming digital image data.

CRTs are obsolete.  With HDMI and the advent of DisplayPort, monitors are no longer constrained to the physical limitations of an electron gun and its exact analog timings.  Images are now compressed and uploaded to multiple monitors at irregular intervals.  The monitor itself may even subdivide the image and update sections of itself in parallel.  This is how we can achieve outrageous 4K resolutions at 60 fps, or 1440p resolutions at 144 fps - resolutions and framerates that would otherwise be physically impossible or infeasible using traditional timed analog (VGA) or digital (DVI) signals.

