How to create a prerendered environment (similar to PS1-era FF games)

I'm designing a game in the vein of the old PS1-era Final Fantasy games, but with some modern twists. I want the camera angles and visuals to look similar to those games, specifically FF9. I know these use prerendered backgrounds with a 3D character model, but how does this work exactly? Are the backgrounds multiple 2D images arranged in 3D space so that the character can move between these objects? I also understand that the backgrounds were originally 3D scenes that were arranged and essentially photographed, creating the prerendered image. I'd like to find more information on that process and on how things can interact with these images/backgrounds, such as collisions. I can't seem to find any thorough tutorials on it, so if anyone knows of any good tutorials or can give me any info, I'd be so grateful! Thank you!
Does it have to be the exact same tech?
I'd just render it at a really high resolution, along with a normalized depth buffer and any other deferred-rendering buffers needed.
Then I'd associate them with scenes and a walkable area, render them, and test against the normalized depths when rendering the 3D models.
Today it should be easy to do. I reckon most of the challenge will come from picking good camera angles and covering the walkable area: it might be beneficial to create a tool that lets you place and preview cameras, as well as export the needed assets.
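To make the depth-testing idea above concrete, here is a minimal sketch in plain Python (all names invented for illustration, not engine code): the prerendered background ships with a per-pixel depth value, and the real-time character only wins a pixel where its depth is closer to the camera than the scenery's.

```python
# Sketch only: per-pixel compositing of a real-time character over a
# prerendered background, using the background's stored depth.
# Depths are assumed normalized to [0, 1]; smaller = closer to the camera.

def composite_pixel(bg_color, bg_depth, char_color, char_depth):
    """The character shows only where it is in front of the scenery."""
    if char_depth is not None and char_depth < bg_depth:
        return char_color
    return bg_color

# A background pillar at depth 0.3 hides a character standing behind it
# (depth 0.5), but not one standing in front of it (depth 0.2).
print(composite_pixel((90, 90, 90), 0.3, (255, 200, 150), 0.5))  # background wins
print(composite_pixel((90, 90, 90), 0.3, (255, 200, 150), 0.2))  # character wins
```

In a real engine this comparison happens in the depth test when the character is drawn, after the background's depth image has been written into the depth buffer.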
@SuperVGA nah, it doesn't necessarily need to be the same tech. Just the same end experience.
I'll have to do a bit of research based on what you've said here. I don't know what some of this stuff means. Thanks for the response!
ethancodes said:
I'll have to do a bit of research based on what you've said here. I don't know what some of this stuff means. Thanks for the response!
Hey, no problem! The normalized depth buffer could just be a copy of the scene's depth buffer, but it might be more beneficial to define the ranges yourself.
It's called the Z-buffer in DirectX and the depth buffer in OpenGL.
If there were other terms you'd like us to help explain, feel free to ask (but it's a good idea to search first!) ;-)
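"Defining the ranges" just means mapping eye-space distance into [0, 1] over a near/far range you pick per scene, instead of reusing the renderer's non-linear Z values. A hedged sketch (hypothetical names, assuming a linear mapping):

```python
def normalize_depth(z_eye, near, far):
    """Map an eye-space distance linearly into [0, 1] over a chosen range.
    Picking near/far per scene keeps depth precision spread evenly,
    unlike the renderer's non-linear Z-buffer values."""
    t = (z_eye - near) / (far - near)
    return min(max(t, 0.0), 1.0)  # clamp so out-of-range geometry stays valid

print(normalize_depth(5.0, 1.0, 9.0))  # 0.5 (halfway between near and far)
```

You would bake this normalized depth out alongside the high-resolution color image when rendering the background.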
When I worked at an adventure studio, they used Unity for it and created a nav-mesh which covered the walkable area, and placed some simple trigger volumes where the interaction points were expected. The whole thing was then blended with a shader so that the white nav-mesh wasn't seen anymore.
There were however some obstacles which had to be placed in the 3D scene, but those were just simple geometry with a rendered texture. In Unity, you could just use a plane and increase the bounding volume. The camera angle chosen for some scenes also made it…
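The nav-mesh idea above boils down to "is this point inside one of the walkable triangles?" Here is a minimal, engine-agnostic sketch of that test in Python (names invented for illustration; a real engine's nav-mesh also handles pathfinding, not just containment):

```python
def point_in_triangle(p, a, b, c):
    """2D same-side test (top-down view): True if p lies inside triangle abc."""
    def cross(o, u, v):
        # Z component of (u - o) x (v - o): sign tells which side v is on.
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all on one side (or on an edge)

def is_walkable(p, navmesh_triangles):
    """True if the point lies on any triangle of the walkable area."""
    return any(point_in_triangle(p, *tri) for tri in navmesh_triangles)
```

The trigger volumes for interaction points work the same way: simple shapes tested against the character's position each frame.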
Shaarigan said:
…blended with a shader so that the white nav-mesh wasn't seen anymore.
@shaarigan That sounds odd; was it necessary to hide the nav-mesh? Why was the nav-mesh rendered in the first place?
It sounds like a good approach otherwise - were your scenes then all 3D? That would also allow you to do some fancy things with your cameras in real time, without quitting the "static camera" genre entirely…
Indie company, don't ask why they did it that way 🙂
I think it was because they used the default player controller, which adds force to a rigidbody object and therefore needs some kind of plane for the character to walk on, or else it'll fall through the world. We did have bugs related to exactly that issue (the character falling through the world).
The final game scenes were 3D, at least in that the character was rendered in 3D, but the smaller details were more often part of the scenes themselves.
Unfortunately the game ran into some legal issues between the company owner and the publisher, so it was removed from Steam.
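As an aside, the falling-through-the-world bug mentioned above can be sidestepped by moving the character kinematically and clamping it to a sampled floor height, instead of relying on a physics plane. A minimal single-axis sketch in Python (all names hypothetical, not Unity API):

```python
GRAVITY = -9.81  # m/s^2, assumed constant

def step_character(pos_y, vel_y, ground_y, dt):
    """One frame of a kinematic fall: integrate gravity, then clamp the
    character to the floor height sampled from the walkable area, so it
    can never tunnel through the world even without a collision plane."""
    vel_y += GRAVITY * dt
    pos_y += vel_y * dt
    if pos_y <= ground_y:
        pos_y, vel_y = ground_y, 0.0  # landed on the floor
    return pos_y, vel_y
```

In Unity terms this is roughly what you get from a CharacterController instead of a force-driven Rigidbody; either way the walkable area supplies the floor height.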