The Riva TNT is a relatively mature development: earlier 3D accelerators for PCs, like the 3dfx Voodoo series, were cards that were used in addition to a regular graphics card.
[History] When did 24-bit color become a thing in games?
Historically 2D and 3D were separate, yes.
Early 3D cards like the Voodoo were add-on cards that connected to your 2D card and were only activated when running a 3D game. The Voodoo only supported 16-bit colour, could only be used in fullscreen, and its limited framebuffer memory restricted resolutions to 640x480.
2D cards must surely have existed but I have no memory whatsoever of even seeing, let alone buying, one. By the time I came in, this was integrated on the motherboard as standard.
Combined 2D/3D did exist in the high-end workstation market, and in niche consumer products like the Voodoo Rush. Together with 24-bit (32-bit really) colour, these didn't really go mainstream in the consumer market until the late 1990s. Even then, 3dfx and 16-bit colour were still a thing for a while, and 3dfx strongly advertised the Voodoo 3 as having a performance advantage as a tradeoff for being 16-bit only. That didn't last long: the TNT2, IIRC, was the card that caught up with the Voodoo 3 on performance while offering 32-bit colour, a 32-bit depth buffer, more memory, and proper combined 2D/3D allowing hardware-accelerated windowed modes. This was around the time of Quake 3, which was widely used as a benchmarking tool, so the benefits were very visible to everyone.
The Voodoo 3 though was a combined 2D/3D card as well, just somewhat limited in features compared to the TNT, and ultimately overtaken in power by the TNT2. 3dfx failed to deliver a full OpenGL ICD in a timely manner, so Voodoo 3 couldn't be used for windowed rendering. A total pain to debug on one, as well.
So the story of 24-bit colour going mainstream in the consumer market really does overlap with the switch from separate to combined 2D/3D and the collapse of 3dfx, with the Quake 3 game and engine being roughly contemporary.
@21st century moose Nice! Thank you for the comprehensive explanation. I'm still trying to wrap my head around this 2D/3D separation. I can understand 3D being a separate hardware component: this is where textures and meshes would be stored, and it would execute the fragment and vertex shaders and perform any blending effects. So where does color depth come into play? I understand textures can have a different depth depending on the compression method, but that's different. I mean, doesn't the 3D accelerator render the scene and write the colors directly into its own frame buffer, or into the 2D accelerator's?
I assume SGI would be a good place to look for the first 24-bit ‘3D acceleration’, e.g. https://en.wikipedia.org/wiki/SGI_Indy
I remember visible dithering from 16-bit GPUs like the Voodoo, but with not much blending going on back then, the difference wasn't enough to say the move to 32 bits was important for games.
Also, there was no need to develop for that specifically. It was not really 'a new generation of graphics'.
Acceleration in PCs was not a thing until the 90s.
Graphics was still done, without any acceleration, by the CPU writing pixels into the framebuffer.
The first official VESA standard for 24-bit (truecolor) modes was in 1994.
However, IBM's SXGA was already doing this in 1990.
That said, I don't think any full-screen animated apps were using 24-bit color (or even 16-bit color) when those modes came out. Most animated games were using 8-bit color until very late, because blitting speed was always the bottleneck for pushing pixels on unaccelerated cards.
There were some games with non-full-screen animations that did use 16-bit color (but not many).
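To put rough numbers on the blitting bottleneck, here is a back-of-the-envelope sketch, assuming full-screen frames at period-typical resolutions and a plain CPU copy to VRAM (illustrative figures only):

```cpp
#include <cstdio>

// Rough frame sizes for full-screen blits at period-typical resolutions.
// An unaccelerated card means the CPU/bus has to move this much data every
// frame, which is why 8-bit colour stayed attractive for animated games.
int main() {
    struct Mode { const char* name; int w, h, bytesPerPixel; };
    const Mode modes[] = {
        { "320x200 @ 8-bit",  320, 200, 1 },
        { "320x200 @ 24-bit", 320, 200, 3 },
        { "640x480 @ 8-bit",  640, 480, 1 },
        { "640x480 @ 24-bit", 640, 480, 3 },
    };
    for (const Mode& m : modes) {
        const double kibPerFrame = m.w * m.h * m.bytesPerPixel / 1024.0;
        // At 60 frames per second, the required copy bandwidth scales linearly.
        printf("%-18s %7.0f KiB/frame, %6.1f MiB/s at 60 fps\n",
               m.name, kibPerFrame, kibPerFrame * 60.0 / 1024.0);
    }
    return 0;
}
```

Tripling the bytes per pixel triples the data the CPU has to push, which is exactly the bottleneck described above.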
What does acceleration mean in 2D? Back then the sprites were loaded into RAM and the CPU built an array representation of the frame, which was in turn blitted to VRAM only to be displayed, nothing more. At most the graphics chip would have 768 reserved bytes for a CLUT (a 256-entry palette at 3 bytes per entry). Is that considered software rendering?
Software rendering means the CPU puts all the pixels in itself.
Acceleration in 2D means something like DirectDraw support: actually uploading images to the card and having it blit them, with mirroring and scaling, and some cards could even rotate them.
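As a minimal sketch of the difference, assuming a 320x200, 8-bit indexed display in the style of VGA mode 13h: the CPU composes the frame in system RAM from palette indices and then copies the whole thing to VRAM; a 2D accelerator essentially offloads that blit (plus mirroring, scaling, and so on).

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical 320x200, 8-bit indexed framebuffer (VGA mode 13h style).
constexpr int kWidth  = 320;
constexpr int kHeight = 200;

// CPU-side "software rendering": copy a sprite's palette indices into a
// system-RAM back buffer, treating index 0 as a transparent colour key.
// No clipping is done here; the sprite is assumed to fit on screen.
void BlitSprite(std::vector<uint8_t>& backBuffer,
                const uint8_t* sprite, int sw, int sh, int dx, int dy) {
    for (int y = 0; y < sh; ++y) {
        for (int x = 0; x < sw; ++x) {
            const uint8_t index = sprite[y * sw + x];
            if (index != 0)  // colour key: 0 means "transparent"
                backBuffer[(dy + y) * kWidth + (dx + x)] = index;
        }
    }
}

// The final step is a straight copy of the composed frame into VRAM.
// On an unaccelerated card this copy is pure CPU/bus work; with 2D
// acceleration the card performs the blit itself.
void PresentFrame(uint8_t* vram, const std::vector<uint8_t>& backBuffer) {
    std::memcpy(vram, backBuffer.data(), backBuffer.size());
}
```

A 2D-accelerated path would replace both loops with a single call into the driver (a DirectDraw Blt with a source colour key, for example), letting the card do the copying and colour keying.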
First of all, thank you for plugging my (now ancient) book, Encyclopedia of Graphics File Formats by James D. Murray. You might also be interested in the companion Graphics File Formats FAQ I published on USENET circa 1994-97.
Indeed there were 24-bit graphics cards back in the 1980s. In the late 80s I wrote software that supported the AT&T Targa card, which had 16- and 24-bit variants. (The Targa 32 had 24-bit color and 8 bits of transparency; a quick sketch of that packing follows after this post.) These were graphics cards for professional applications, not affordable for the consumer.
In fact, the single greatest problem I remember with producing 24-bit graphics cards for the consumer market was the enormously high cost of RAM (VRAM) in the late 80s and early 90s. I remember how expensive it was for me to upgrade my Video 7 card to a full 1MB of VRAM. I felt it was worth it because that was the “hot card” that all the gamers had to have back in 1988-89. I'm not sure that I ever ran a program that could use the full 1MB, but the capability was there if I needed it.
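For illustration, a "Targa 32"-style pixel is just 24 bits of color plus an 8-bit alpha channel packed into one 32-bit value. A minimal sketch, using the blue-green-red-alpha byte order of the TGA file format (the card's internal layout may have differed):

```cpp
#include <cstdint>

// One 32-bit "truecolor plus alpha" pixel: 24 bits of colour, 8 bits of
// transparency. Byte order here follows the TGA file format (blue in the
// low byte, then green, red, alpha).
constexpr uint32_t PackPixel32(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return static_cast<uint32_t>(b)
         | static_cast<uint32_t>(g) << 8
         | static_cast<uint32_t>(r) << 16
         | static_cast<uint32_t>(a) << 24;
}

static_assert(PackPixel32(0xFF, 0x00, 0x00, 0xFF) == 0xFFFF0000u,
              "opaque pure red");
```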
jdmurray said:
First of all, thank you for plugging my (now ancient) book, Encyclopedia of Graphics File Formats by James D. Murray.
You are welcome! And thank you for writing it!
24-bit seems less elusive now that you've mentioned all of those examples from the early 90s. The way I see it, all the card needs is enough VRAM to fit the frame buffer and the memory bandwidth to fill it in time. Does that mean Voodoo 1 to 3 set some artificial limitations? Why would they do that? Is it simply to protect devs from themselves? Something like “Yes, of course you can fit 640x480 pixels at 24 bits each in the frame buffer, but you wouldn't be able to blit it 60 times per second, so why let you shoot yourself in the foot like that?”
Or was there a technical challenge? Or a design consideration?
Voodoo 1 and Voodoo 2 had memory architectures that might seem a little weird today. Rather than a single pool of memory, they had different chunks of memory dedicated to storing the framebuffer and textures (nothing else was stored in video RAM back then). A Voodoo 1 would have 2 MB of framebuffer memory, which was just about enough for 640x480 16-bit double-buffered plus a 16-bit depth buffer. Voodoo 2 had more framebuffer memory, but that was used for higher resolution (800x600, or 1024x768 in SLI mode) rather than higher colour depth.
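Back-of-the-envelope, that framebuffer budget works out as follows, assuming three 16-bit buffers (front, back, depth) at 640x480 and ignoring any tiling or alignment overhead:

```cpp
#include <cstdio>

int main() {
    // Voodoo 1-style budget: front buffer + back buffer + depth buffer,
    // all 16-bit, at 640x480.
    const int w = 640, h = 480, bytesPerPixel = 2, buffers = 3;
    const double mib = double(w) * h * bytesPerPixel * buffers / (1024.0 * 1024.0);
    printf("640x480, three 16-bit buffers: %.2f MiB of the 2 MiB pool\n", mib);

    // The same layout at 32 bits per pixel for colour and depth would need
    // twice as much, which is why higher colour depth didn't fit the budget.
    printf("Same layout at 32 bits per pixel: %.2f MiB\n", mib * 2.0);
    return 0;
}
```

Three 16-bit buffers come to about 1.76 MiB, just inside the 2 MiB pool; doubling the depth to 32 bits blows straight past it.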
3dfx took a gamble here, betting that higher performance was more important than higher colour depth, so their hardware was primarily sold on performance rather than on features. That worked for them until the competition caught up and they weren't able to respond.
Vertex and fragment shaders didn't exist back then, not on consumer hardware. The per-vertex stages of the pipeline were done in software on the CPU, the per-fragment stages on the video card (the term “GPU” didn't exist either). Everything was fixed-function via a limited set of selectable options.
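To make the "per-vertex on the CPU" point concrete, here is a rough sketch of the kind of transform work done in software before the triangles were handed to the card; real engines of the era used hand-tuned fixed-point or assembly rather than anything like this:

```cpp
#include <array>
#include <vector>

// Minimal 4x4 matrix and vertex types for a CPU-side transform stage.
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>;  // column-major, as in OpenGL

Vec4 Transform(const Mat4& m, const Vec4& v) {
    return {
        m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
        m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
        m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
        m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w,
    };
}

// "Software T&L": project every vertex on the CPU, then submit the results;
// the card only rasterises, textures, and blends them (the fixed-function
// per-fragment work).
std::vector<Vec4> TransformVertices(const Mat4& modelViewProjection,
                                    const std::vector<Vec4>& in) {
    std::vector<Vec4> out;
    out.reserve(in.size());
    for (const Vec4& v : in) {
        Vec4 clip = Transform(modelViewProjection, v);
        // Perspective divide; real code would clip against w first.
        out.push_back({clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f});
    }
    return out;
}
```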