
Is this game actually 3D?

Started by May 27, 2020 03:19 PM
7 comments, last by Dawoodoz 4 years, 8 months ago

The game I'm talking about is Sea of Stars. The game isn't out yet, but they put out a trailer:

Since I'm trying to make a game with similar graphics, I can't help but wonder how they made this.

Is it 2D, but with objects pre-rendered in 3D to get data such as height maps, and then the pre-rendered diffuse map finished by hand, for example?

But what bothers me is that there are some scenes that make me believe they made it in 3D (the ones from 0:11 to 0:18). I know it is possible to make 3D games look like pixel art; I'm even trying to do it right now in Unity, and although there are some problems like pixel dancing during character movement, it does look like pixel art. But how did they manage to create objects like the big tree we see at 0:17 in the clip, as well as the other trees? Everything about the object looks exactly like pixel art, specifically the shape and outline.

Things like floors and wooden bridges feel conceptually within reach for me to recreate, but I have no idea how to make such things feel so good and natural as 3D pixel art without them being flat billboards with the pixel art as a texture. How could I create such perfect pixel art in a 3D environment (and in Unity, if possible)?

Looks like isometric to me (or rather, dimetric): https://en.wikipedia.org/wiki/Isometric_video_game_graphics

The trick is to stop thinking in 3D. In the end everything is just pixels on the screen, so by having multiple sprites and drawing them in the right order, possibly on top of each other (so you can ‘walk under’ a tree), you can achieve anything.

The goal is to create the illusion of depth; any way you can achieve that is good. For example, you could use layers.

Good pixel art goes a long way if you don't scale graphics, use an orthographic projection, and set up the GPU to draw textures at a 1:1 ratio with the screen pixels.
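
To make the draw-order and 1:1 pixel ideas concrete, here is a minimal C++ sketch (the Sprite struct and buffer layout are my own, just for illustration): sprites are sorted by their baseline Y so things lower on the screen draw later and cover what is behind them, and each sprite's pixels are copied unscaled into the frame.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sprite: an unscaled rectangle of pixels placed at a screen position.
struct Sprite {
    int x = 0, y = 0;             // top-left corner in screen pixels
    int width = 0, height = 0;    // size in pixels (always drawn 1:1, never scaled)
    int baselineY = 0;            // the row where the sprite "touches the ground"
    std::vector<uint32_t> pixels; // RGBA, alpha 0 = transparent
};

// Painter's algorithm: draw back-to-front so nearer sprites cover farther ones,
// which is what lets a character walk "behind" or "in front of" a tree.
void drawScene(std::vector<Sprite>& sprites,
               std::vector<uint32_t>& frame, int frameWidth, int frameHeight) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) { return a.baselineY < b.baselineY; });
    for (const Sprite& s : sprites) {
        for (int sy = 0; sy < s.height; ++sy) {
            int ty = s.y + sy;
            if (ty < 0 || ty >= frameHeight) continue;
            for (int sx = 0; sx < s.width; ++sx) {
                int tx = s.x + sx;
                if (tx < 0 || tx >= frameWidth) continue;
                uint32_t color = s.pixels[sy * s.width + sx];
                if (color >> 24)  // skip fully transparent pixels
                    frame[ty * frameWidth + tx] = color;
            }
        }
    }
}
```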

EDIT: Another option is a more side-on view, like in the SimCity games, rather than the dimetric view where one corner points towards the player.


Alberth said:
rather than the dimetric view where one corner points towards the player

@Alberth That seems fair, but I also didn't mention the shadows in the OP. They are a big reason I think it's 3D. I'm still studying GPU programming, but I don't know how it would be possible to create those dynamic lights with "perfect" shadows in a game that is entirely 2D, with the sense of depth coming only from sorting order.
I think it's entirely possible with pre-rendering in 3D to get occlusion maps, normal maps and other 3D data, like they did with Pillars of Eternity, but I don't know if Pillars of Eternity has those dynamic lights varying with the in-game time. I have never played it; I just found a devlog article where they showed, in very little detail, the process of making the game 2D.

IvanNeves said:
Since I'm trying to make a game with similar graphics, I can't help but wonder how they made this.

You might find these videos inspiring/helpful.

https://www.youtube.com/watch?v=cQCQd0gFtqs

https://www.youtube.com/watch?v=P2zMHMBqbdo

🙂🙂🙂🙂🙂 ← The tone posse, ready for action.

As fleabay noted above, it is possible to do this in 2D, but it's not trivial. Unity allows normal maps on 2D sprites, but this is only part of the job, as you will probably also need occlusion and dynamic shadow casting, which might require custom shaders. You will still need good pixel art, and you'll spend some time making these maps. Sprite Lamp and CrazyBump can help with that, but you can also try image editors like Photoshop.

Check Pathway for another example of what you can do (they use a custom engine, based on libGDX, as far as I know). There are some articles on their process, for example https://steamcommunity.com/games/546430/announcements/detail/1700601856676355706

If they use per-pixel depth buffering, then by my subjective definition it's isometric 3D using 2D draw calls.

These old techniques are heavily optimized for CPUs, avoiding cache misses with linear memory read patterns, and there they still outperform 3D graphics cards. So emulating 2D on a resource-hungry 3D rendering pipeline, without making it any faster, would be wasteful to say the least. I can explain the underlying math for implementing this kind of rendering, with pre-rendered triangles, dynamic light, normal mapping and depth-based shadows, as pure math if you want. It's easier than maintaining an OpenGL dependency and won't stop working after a few years.

https://github.com/dawoodoz/dfpsr


@Dawoodoz If you could explain it, that would be the best, really. I'm not great at the math, but my CG background is pretty much 80% math from introductory college courses, so I think it's good enough not to get lost, at least. I started studying GPU programming with OpenGL about a week ago and I just finished the basics (you know, shading, cameras, etc.; the things I saw in college), so I can fall a little behind with OpenGL explanations and such.

You don't need to bother explaining all the details in a post here; if you have articles or papers explaining the process, linking them would be great as well.

@undefined Sure, a short summary then.

The first step is to generate sprites with diffuse, normal and height maps. This is painful to do by hand if you want normal maps for a more realistic look. In my Sandbox example, I made a program for generating 3D models from high-resolution images, using two triangles per pixel. That's the same detail level that Unreal Engine 5 will use, but on the CPU at higher frame rates, by pre-rasterizing into deep sprites.

By assigning the texture pixels as vertex colors, random memory access can be avoided because vertex colors are stored in triangle draw order. Isometric triangle rendering with only vertex colors also saves us the expensive depth division in each pixel, because we can interpolate colors by pre-generating DX and DY color offsets for each triangle. Rendering a few million triangles on the CPU is then done in an instant and saved to image files for the game to load.
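
As a hedged aside, here is roughly what per-triangle color gradients look like as plain C++. This is the standard plane-equation formulation for a linearly varying attribute, not necessarily the exact code used in the renderer described here.

```cpp
#include <cstdint>

// One vertex of a pre-rendered isometric triangle: screen position plus one color channel.
struct Vertex { float x, y, c; };

// Per-triangle gradients, so the color can be stepped with additions only (no per-pixel divide).
struct Gradients { float dcdx, dcdy; };

// Plane-equation gradients: how much the attribute changes per pixel in X and in Y.
Gradients computeGradients(const Vertex& v0, const Vertex& v1, const Vertex& v2) {
    float area2 = (v1.x - v0.x) * (v2.y - v0.y) - (v2.x - v0.x) * (v1.y - v0.y);
    Gradients g;
    g.dcdx = ((v1.c - v0.c) * (v2.y - v0.y) - (v2.c - v0.c) * (v1.y - v0.y)) / area2;
    g.dcdy = ((v2.c - v0.c) * (v1.x - v0.x) - (v1.c - v0.c) * (v2.x - v0.x)) / area2;
    return g;
}

// Filling one scanline then only needs one addition per pixel per channel.
void fillScanline(float* row, int xStart, int xEnd, float cStart, const Gradients& g) {
    float c = cStart;
    for (int x = xStart; x < xEnd; ++x) {
        row[x] = c;   // write the interpolated channel (repeat per R/G/B in practice)
        c += g.dcdx;  // step the color horizontally, no division involved
    }
}
```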

Then you keep a full-screen image for each property. The actual height of the sprite in 3D is then subtracted from the Y location on the screen and added to the sprite's sampled depth pixel before comparing and writing. If the sprite's new pixel is closer than the value already in the screen's depth buffer, you write to the diffuse, normal and depth buffers.
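
A small sketch of how that compare-and-write blit could look in C++; the buffer layout and field names below are my own reading of the step above, not taken from any particular library.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-property full-screen buffers (a tiny G-buffer kept in system memory).
struct FrameBuffers {
    int width, height;
    std::vector<uint32_t> diffuse;
    std::vector<uint32_t> normal;   // packed normal per pixel
    std::vector<float>    depth;    // larger = farther away
};

// A pre-rendered "deep sprite": diffuse, normal and depth stored per pixel.
struct DeepSprite {
    int width, height;
    std::vector<uint32_t> diffuse;
    std::vector<uint32_t> normal;
    std::vector<float>    depthMap; // pre-rasterized depth per pixel
    std::vector<uint8_t>  opaque;   // 1 where the sprite has content
};

// Draw the sprite at a screen position, lifted by its 3D height and depth tested per pixel.
void drawDeepSprite(FrameBuffers& fb, const DeepSprite& s,
                    int screenX, int screenY, float worldHeight) {
    int liftedY = screenY - static_cast<int>(worldHeight); // height moves the sprite up on screen
    for (int sy = 0; sy < s.height; ++sy) {
        int ty = liftedY + sy;
        if (ty < 0 || ty >= fb.height) continue;
        for (int sx = 0; sx < s.width; ++sx) {
            int i = sy * s.width + sx;
            if (!s.opaque[i]) continue;
            int tx = screenX + sx;
            if (tx < 0 || tx >= fb.width) continue;
            int t = ty * fb.width + tx;
            float d = s.depthMap[i] + worldHeight; // sampled depth adjusted by the sprite's height
            if (d < fb.depth[t]) {                 // closer than what is already on screen?
                fb.depth[t]   = d;
                fb.diffuse[t] = s.diffuse[i];
                fb.normal[t]  = s.normal[i];
            }
        }
    }
}
```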

To speed up rendering of rarely moved items, the background has a set of pre-drawn blocks with all static items in that region. When the camera moves, just use memcpy calls to draw the visible background blocks. Render them when they become visible and recycle them when they are too far away from the camera. Knowing which static items to draw for a region can be solved using either a 2D grid with a maximum height (easy to manage), or an octree structure (more reusable).
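
For the block caching, the per-row memcpy part might look something like this (the block size and structure are assumptions for the sketch; the normal and depth images of a block would be copied the same way).

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

constexpr int BLOCK_SIZE = 256;

// A hypothetical pre-rendered background block: all static sprites of one region,
// already composited into a BLOCK_SIZE x BLOCK_SIZE image.
struct BackgroundBlock {
    int worldX, worldY;            // top-left corner in world pixels
    std::vector<uint32_t> diffuse; // BLOCK_SIZE * BLOCK_SIZE pixels
};

// Copy the visible part of one cached block into the frame with one linear memcpy per row.
void blitBlock(uint32_t* frame, int frameWidth, int frameHeight,
               const BackgroundBlock& block, int cameraX, int cameraY) {
    int dstX = block.worldX - cameraX;
    int dstY = block.worldY - cameraY;
    for (int row = 0; row < BLOCK_SIZE; ++row) {
        int y = dstY + row;
        if (y < 0 || y >= frameHeight) continue;
        int srcX = 0, x = dstX, width = BLOCK_SIZE;
        if (x < 0) { srcX = -x; width += x; x = 0; }        // clip against the left edge
        if (x + width > frameWidth) width = frameWidth - x; // clip against the right edge
        if (width <= 0) continue;
        std::memcpy(&frame[y * frameWidth + x],
                    &block.diffuse[row * BLOCK_SIZE + srcX],
                    width * sizeof(uint32_t));              // linear read, linear write
    }
}
```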

When you have your diffuse, normal and height images for the frame, you add a light image and fill it with black using memset. Then you make a draw call onto the light image for each visible light source. The height image solves the position equation: starting from a flat zero plane, each pixel is extruded by its height multiplied by an up vector. Then you unpack the normals, and you know the scene pixel's relation to the light source. If it is close enough and not occluded in the shadow depth-map, you can add that light to the light image.
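
Here is a simplified C++ sketch of accumulating one light into the light image. The position reconstruction follows the description above, while the attenuation and the shadow test are deliberately left as simple placeholders, and the normals are assumed to be already unpacked.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal 3-component vector for the lighting math.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct PointLight {
    Vec3  position;  // in world units
    float radius;    // no contribution beyond this distance
    float intensity;
};

// Accumulate one light into the light image. The 3D position of each pixel is the flat
// zero plane extruded by the stored height (up axis chosen arbitrarily for the sketch).
void accumulateLight(std::vector<float>& lightImage,
                     const std::vector<float>& heightImage,
                     const std::vector<Vec3>& normalImage,
                     int width, int height,
                     const PointLight& light,
                     bool (*occluded)(Vec3 worldPos, const PointLight& light)) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            Vec3 p = { float(x), heightImage[i], float(y) };          // extruded position
            Vec3 toLight = { light.position.x - p.x,
                             light.position.y - p.y,
                             light.position.z - p.z };
            float dist = std::sqrt(dot(toLight, toLight));
            if (dist >= light.radius) continue;                       // too far away to matter
            if (occluded(p, light)) continue;                         // blocked in the shadow depth-map
            Vec3 l = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
            float diffuse = std::max(0.0f, dot(normalImage[i], l));   // Lambert term
            float falloff = 1.0f - dist / light.radius;               // simple linear attenuation
            lightImage[i] += light.intensity * diffuse * falloff;
        }
    }
}
```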

Then you multiply diffuse by light to get the final image, upscale it, and tell a background thread to upload the result to the window while you move on with game logic for the next frame.

Dynamic light equations in isometric CPU rendering are pretty much like deferred lighting using post effects on the GPU. You just go through all pixels in memory using SIMD intrinsics and multi-threading. Intel and ARM have reference manuals for SSE/NEON vectorization. Any article about “deferred rendering” will explain the theory, and you just do the same without the 3D rendering pipeline. You can begin with a basic pixel loop until you learn CPU optimization, because it's still fast at retro resolutions.
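
The basic pixel loop mentioned above could start out as plain scalar code like this; SIMD intrinsics and splitting the rows across threads are later optimizations of the same loop.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Final composite: multiply the diffuse color by the accumulated light, per channel.
// Packing here assumes 0xAABBGGRR, which is an arbitrary choice for the sketch.
void composite(const std::vector<uint32_t>& diffuse,
               const std::vector<float>& lightImage,
               std::vector<uint32_t>& output) {
    for (size_t i = 0; i < diffuse.size(); ++i) {
        uint32_t c = diffuse[i];
        float l = lightImage[i];
        uint32_t r = std::min<uint32_t>(255, uint32_t((c         & 0xFF) * l));
        uint32_t g = std::min<uint32_t>(255, uint32_t(((c >> 8)  & 0xFF) * l));
        uint32_t b = std::min<uint32_t>(255, uint32_t(((c >> 16) & 0xFF) * l));
        output[i] = r | (g << 8) | (b << 16) | 0xFF000000u; // keep alpha opaque
    }
}
```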

This topic is closed to new replies.
