I am assuming bit block transfer, a method of rasterizing sprites, originated because 3D accelerators did not exist? I wonder if using 3D libraries to develop 2D sprites became popular shortly after the first full 3D GPUs from ATI and Nvidia in the early 2000s? Also, monitor manufacturers did away with legacy video modes during the Y2K days. The Unreal Game Engine was around, but Unity did not show up until '04. I wonder if my question is not well covered because the relevant code lived in in-house libraries.
When did game developers start using 3D libraries to develop 2D games?
It's not about libraries, it's about hardware and the APIs to access this hardware. If the best way to get 2D performance is through a 3D API, developers are going to start using 3D APIs for 2D games.
I was wondering when? I did mention hardware. The hardware I spoke of was what could be called a VGA core, and in the past bit block transfer was used to copy bytes from image files, the bitmaps, into the graphics card's frame buffer, itself an array of bytes. A library handled that. In the 2000s DX9 was a 3D API. Graphics cards around Y2K had shaders and were deemed full 3D GPUs. The Unreal Game Engine was available. When, around this time, was there a push by the industry against 2D, making 2D developers embrace the new 3D infrastructure?
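In concrete terms, the blitting I am referring to amounts to something like this (a minimal C++ sketch, assuming a 32-bit frame buffer; the names and layout are made up for illustration, not any particular library's API):

```cpp
#include <cstdint>
#include <cstring>

// Copy a rectangular block of sprite pixels into the frame buffer, one row at
// a time. Real blitters also handled clipping and color keying; omitted here.
void BlitSprite(uint32_t* frameBuffer, int fbWidth,
                const uint32_t* sprite, int spriteWidth, int spriteHeight,
                int destX, int destY) {
    for (int row = 0; row < spriteHeight; ++row) {
        std::memcpy(frameBuffer + (destY + row) * fbWidth + destX,
                    sprite + row * spriteWidth,
                    spriteWidth * sizeof(uint32_t));
    }
}
```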
rei4 said:
When, around this time, was there a push by the industry against 2D?
It started with Doom, so already in 1993. The first consumer GPUs came out in 1996 (3Dfx).
I also remember arcade games in 3D, e.g. Daytona USA 1994.
Playstation was 1995.
So that time was the big change towards a focus on 3D.
Regarding 2D games, well even the C64 had hardware accelerated sprite rendering and collision detection.
So there was not so much of a change overall, I think. On some platforms more than on others, of course.
rei4 said:
I wonder if using 3D libraries to develop 2D sprites became popular shortly after the first full 3D GPUs from ATI and Nvidia in the early 2000s?
It worked both ways mixing 2D into 3D, and also mixing 3D into 2D.
Non-game animation has used it since the earliest animated motion pictures, rotoscoping 3D projected elements into 2D animation. The concept there predated video games by decades.
The cartoon South Park launched in 1997; they used 3D modeling and rendering to produce the look of paper cutouts. The rendering was slow, far too slow for games, but the idea was in people's minds for implementation.
I remember the Layered Depth Image (LDI) paper from 1998, followed by the LDI tree paper the following year. Many games implemented the concepts over the next decade as hardware became more common, and I used it in a few Nintendo DS titles.
With LDI you can encode 3D data like depth and record it already projected into 2D, then use it as textures or write it directly to depth buffers. That lets you draw 2D images in the world like a hand-drawn scene, with animated 2D objects, write the recorded depth information into the depth buffer, and then render 3D objects in the world that take advantage of that depth data for clipping and depth tests. Leveraging the stencil buffer allows even more manipulation.
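To make the depth-buffer trick concrete, here is a minimal sketch of a fragment shader for it, assuming an OpenGL 3.3 style pipeline where the hand-drawn layer ships with a matching depth texture (the uniform and variable names are hypothetical):

```cpp
// GLSL 330 fragment shader, embedded as a C++ raw string literal.
// It textures a layer-sized quad with the 2D artwork and writes the
// pre-recorded depth, so later 3D draws depth-test against the 2D scene.
const char* kLayerFragmentShader = R"GLSL(
#version 330 core
uniform sampler2D uColor;  // hand-drawn 2D layer
uniform sampler2D uDepth;  // depth recorded with the layer, LDI-style
in vec2 vUV;
out vec4 fragColor;
void main() {
    fragColor    = texture(uColor, vUV);
    gl_FragDepth = texture(uDepth, vUV).r;  // projected depth, not live geometry
}
)GLSL";
```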
You could build a 2D game that incorporates 3D elements, or a game that feels like a 3D world but was almost entirely 2D assets. Both directions worked. Since the hardware supported relatively few polygons, limited texture space (if any), and fixed functions for lighting and shading, it usually worked better to build up with 2D and augment with 3D than to build what looked like 2D out of 3D. The early hardware looked terrible for 3D rendering: the PS1 allowed a few 256x256 textures, but many made 128x128 their max, and other hardware was color only. Both directions were discussed and noted, but the better visual results came from only one of them.
As hardware advanced, many of the 2D acceleration elements were removed because the 3D methods performed better. Starting around 2005 it was becoming faster to build 2D elements on the screen using 3D primitives. We have moved even farther along that line with hardware optimized for complex operations on point clouds, so for 2D rendering the better fit for the hardware is to just set an orthographic projection and render a pile of textured quads.
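As a rough sketch of what "orthographic projection plus textured quads" looks like in code (assuming a column-major matrix convention and pixel coordinates with the origin at the top left; the helper names are hypothetical):

```cpp
#include <array>

using Mat4 = std::array<float, 16>;  // column-major 4x4 matrix

// Map pixel coordinates (0..width, 0..height, top-left origin) to clip space.
Mat4 MakeOrtho2D(float width, float height) {
    return {
        2.0f / width,  0.0f,           0.0f, 0.0f,   // column 0
        0.0f,         -2.0f / height,  0.0f, 0.0f,   // column 1
        0.0f,          0.0f,          -1.0f, 0.0f,   // column 2
       -1.0f,          1.0f,           0.0f, 1.0f,   // column 3 (translation)
    };
}

struct SpriteVertex { float x, y, u, v; };

// A sprite is just two textured triangles (one quad) in screen space.
std::array<SpriteVertex, 6> MakeSpriteQuad(float x, float y, float w, float h) {
    return {{
        { x,     y,     0.0f, 0.0f },
        { x + w, y,     1.0f, 0.0f },
        { x + w, y + h, 1.0f, 1.0f },
        { x,     y,     0.0f, 0.0f },
        { x + w, y + h, 1.0f, 1.0f },
        { x,     y + h, 0.0f, 1.0f },
    }};
}
```

Feed the matrix to the vertex shader as the projection, batch the quads into a single vertex buffer, and that pile of textured quads is the whole 2D renderer.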
IIRC it was around 2009 or 2010 that both Unity and Unreal created a bunch of interfaces around explicitly locking the view into a 2D perspective, physics constraints on an axis, etc, but some games had already done it. The announcements are probably still in archives, including discussion here on the site. Adding engine support increased popularity.
So the direct answer is: always, even since before the 3D hardware was widely adopted. Use has grown as hardware changed, but it was always something people realized.
Thanks Frob. I was thinking that people who actually develop computer graphics applications would see things very differently than I do. Then I wondered what the reasons were for the switch from bit block transfer (bit blitting) to using textures and 2D polygons. You were very forthcoming with that information. Thank you for taking the time to share your experiences.
rei4 said:
Then I wondered what the reasons were for the switch from bit block transfer (bit blitting) to using textures and 2D polygons.
Textured triangles are a generalization of the older HW features for sprites and tile maps. You can implement all the older features with the newer 3D HW, so there was no longer any point in wasting die area on functionality that had been replaced and extended by the newer functionality.
From the software perspective it's the same. We can still do all the 2D stuff with the 3D HW, so we don't miss the older, more restricted 2D features, even when we make 2D games.
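For example, a classic hardware tile map maps directly onto the general pipeline as a batch of textured quads sampling a texture atlas. A minimal sketch, assuming a square atlas of atlasCols x atlasRows tiles (all names here are hypothetical):

```cpp
#include <array>
#include <vector>

struct TileVertex { float x, y, u, v; };

// Expand a tile-index map into two textured triangles per tile, with UVs
// pointing into a texture atlas. This reproduces the old tile-map hardware
// output using nothing but the general-purpose 3D pipeline.
std::vector<TileVertex> BuildTileMapQuads(const std::vector<int>& tiles,
                                          int mapWidth, int mapHeight,
                                          float tileSize,
                                          int atlasCols, int atlasRows) {
    std::vector<TileVertex> out;
    out.reserve(tiles.size() * 6);
    const float du = 1.0f / atlasCols;
    const float dv = 1.0f / atlasRows;
    for (int ty = 0; ty < mapHeight; ++ty) {
        for (int tx = 0; tx < mapWidth; ++tx) {
            const int id = tiles[ty * mapWidth + tx];
            const float u0 = (id % atlasCols) * du;
            const float v0 = (id / atlasCols) * dv;
            const float x0 = tx * tileSize;
            const float y0 = ty * tileSize;
            const std::array<TileVertex, 6> quad = {{
                { x0,            y0,            u0,      v0      },
                { x0 + tileSize, y0,            u0 + du, v0      },
                { x0 + tileSize, y0 + tileSize, u0 + du, v0 + dv },
                { x0,            y0,            u0,      v0      },
                { x0 + tileSize, y0 + tileSize, u0 + du, v0 + dv },
                { x0,            y0 + tileSize, u0,      v0 + dv },
            }};
            out.insert(out.end(), quad.begin(), quad.end());
        }
    }
    return out;
}
```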
I think it began around the mid to late 1990s, blending the depth of 3D environments with the simplicity of 2D gameplay.