
paletted texture on today's hardware?

Started by August 21, 2002 02:01 AM
7 comments, last by qingrui 22 years, 6 months ago
The old palette tricks are still good, but it seems that today's hardware doesn't accelerate paletted textures. Is that true? Should I forget about palettes?
nVidia chips fully support paletted textures in hardware. ATi does not, though.

You can integrate it as an option into your 3D engine, but you shouldn't rely on it being available (especially since it's only exposed through an extension).
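For reference, checking for and using the extension might look roughly like this. This is only a sketch assuming GL_EXT_paletted_texture; the extension function pointer for glColorTableEXT, error handling, and the palette/indices buffers are assumed.

const char *ext = (const char *) glGetString(GL_EXTENSIONS);
if (ext && strstr(ext, "GL_EXT_paletted_texture"))
{
    // 256-entry RGBA palette (1024 bytes), then 8-bit indices as the image.
    // glColorTableEXT must be fetched as an extension function pointer
    // (e.g. via wglGetProcAddress) on most platforms.
    glColorTableEXT(GL_TEXTURE_2D, GL_RGBA8, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, palette);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COLOR_INDEX8_EXT, width, height, 0,
                 GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices);
}
else
{
    // No paletted-texture support: expand the indices to full RGBA instead.
}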

About its usefulness: I have always found it to be an excellent texture compression method. It only takes approximately 25% of the space of a conventional 32-bit RGBA texture, but (depending on the image and the conversion algorithm) can yield almost identical quality. And it is considerably faster than a full-colour texture (less memory traffic).

Comparing it to other compression methods: if a high-quality quantization system is used (for the 24-bit -> 8-bit indexed conversion), the quality will be far better than that of s3tc/dxtc-compressed textures.

/ Yann
It is extremely easy to go from paletted to full colour, so why don't you just have an option to expand textures to full colour when they get transferred into gfx memory?

Also, if you do that, explain what is going on with that option, so people don't think they are turning on high-quality textures when they are just wasting speed.
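Something like this at texture-upload time is all it takes. A rough sketch; width, height, indices and palette stand in for whatever the image loader returns.

// Expand an 8-bit indexed image to 32-bit RGBA before handing it to GL.
unsigned char *rgba = new unsigned char[width * height * 4];
for (int i = 0; i < width * height; ++i)
{
    const unsigned char *entry = palette + indices[i] * 4;  // RGBA palette entry
    rgba[i * 4 + 0] = entry[0];
    rgba[i * 4 + 1] = entry[1];
    rgba[i * 4 + 2] = entry[2];
    rgba[i * 4 + 3] = entry[3];
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);
delete[] rgba;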


quote:
Original post by Yann L
Comparing it to other compression methods: if a high-quality quantization system is used (for the 24-bit -> 8-bit indexed conversion), the quality will be far better than that of s3tc/dxtc-compressed textures.



I personally use ONLY dxtc .dds textures for everything, and I'd disagree with that, sort of. A lot of people judge s3tc/dxtc (same thing) by what it looks like on nVidia hardware, which utterly screws up the non-alpha formats and adds very ugly banding. But the point is that the compression ratio is 1/6 on the hard drive, and since that includes mipmaps, it's effectively 1/12, which is obviously not something to dismiss compared to the 1/3 ratio of a paletted texture (and they usually zip just as well as .bmps). Yet if they are decompressed, or the hardware draws them correctly, you can't really see the compression artifacts unless you really look (which is also true of well-paletted textures).

In the past I've also found a surprising thing: when running in 16-bit, a 24-bit uncompressed texture would look worse due to banding than the same texture compressed, which was odd indeed. (The compressed textures rendered very similarly in both 16- and 32-bit.)

I've read in places that palette swapping can be quite slow... but I don't know.

Anyway, just my thoughts.

[edit]

Seems the current batch of nVidia drivers fixes this problem. (I was going to make a couple of comparison shots to show the -significant- difference.)


[edited by - RipTorn on August 22, 2002 8:30:29 PM]
here:



Note the image itself is a 256-colour png... the magnified areas are as they were, though.

Pretty good for 1/12, imo.

(the magnified area is the worst area of the image too)

The 256-colour image was reduced in Photoshop using perceptual reduction and 50% noise (which looked the best).

[edited by - RipTorn on August 22, 2002 9:00:05 PM]
Well, I agree that full-colour and compressed images are better most of the time.
The reason I'm still thinking about palettes is the palette tricks: e.g. for a sprite image, I can use a blue palette to make it look frozen, or a red palette for burned. I can also use palettes to make sprites look different from each other.
But with full-colour images I have to use many more images, or use a fragment shader, which is still not commonly supported.
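For illustration, the palette trick meant here is roughly the following. The tint values are made up; the indexed sprite data itself never changes, only the palette does.

// Build a "frozen" variant of a 256-entry RGBA sprite palette by pushing
// every entry towards blue; a red-shifted "burned" version works the same way.
void makeFrozenPalette(const unsigned char *src, unsigned char *dst)
{
    for (int i = 0; i < 256; ++i)
    {
        int b = src[i * 4 + 2] + 80;               // boost blue
        dst[i * 4 + 0] = src[i * 4 + 0] / 2;       // damp red
        dst[i * 4 + 1] = src[i * 4 + 1] / 2;       // damp green
        dst[i * 4 + 2] = (b > 255) ? 255 : b;
        dst[i * 4 + 3] = src[i * 4 + 3];           // keep alpha
    }
}

Without hardware palette support you would need a separate baked RGBA copy per tint, or a fragment shader, which is the point being made above.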

About s3tc: I had a big problem the first time I used it. I used a big sphere and a big texture for the environment, a 1024x1024 sky picture, big enough, but it looked very bad with s3tc while looking just fine uncompressed. The conclusion is that s3tc looks bad when magnified.
RipTorn:
LOL, you have a problem there. You posted three 256-colour images to demonstrate the difference between 32-bit / s3tc / 256 colour? Take your original 32-bit image. Process it through a neural-net quantization system (eg. NeuQuant) with adaptive diffusion dithering (not that low-quality crap Photoshop uses). You will see almost no difference between the two, even when magnified (esp. on such input material as you posted). If you want, post your 32-bit image, I'll compress it to 256 colours for you, and post it for comparison.
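As a side note: NeuQuant is Anthony Dekker's neural-net quantizer; the dithering half is just error diffusion. A stripped-down sketch using plain Floyd-Steinberg weights (not the adaptive scheme mentioned above), assuming a 256-entry RGB palette and a hypothetical brute-force nearestIndex() lookup:

// Quantize a float RGB image (0..255 range, modified in place) down to a
// fixed 256-colour palette (pal = 256 * 3 bytes), writing indices to out.
void ditherToPalette(float *img, int w, int h,
                     const unsigned char *pal, unsigned char *out)
{
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
    {
        float *p = img + (y * w + x) * 3;
        int idx = nearestIndex(p, pal);              // closest palette entry
        out[y * w + x] = (unsigned char) idx;

        for (int c = 0; c < 3; ++c)                  // push the error onto neighbours
        {
            float err = p[c] - pal[idx * 3 + c];
            if (x + 1 < w)              img[(y * w + x + 1) * 3 + c]         += err * 7 / 16;
            if (y + 1 < h && x > 0)     img[((y + 1) * w + x - 1) * 3 + c]   += err * 3 / 16;
            if (y + 1 < h)              img[((y + 1) * w + x) * 3 + c]       += err * 5 / 16;
            if (y + 1 < h && x + 1 < w) img[((y + 1) * w + x + 1) * 3 + c]   += err * 1 / 16;
        }
    }
}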

Fact is, s3tc/dxtc does not look good on photorealistic textures. It creates 'coloured spots' and banding. That's not driver-dependent, that's because of the way it compresses colours. We've done a lot of visual tests for our game, and paletted textures were almost always of far better quality.

About the compression ratios: paletted textures are 1:4 (because no modern 3D card does a 24-bit RGB format; they always pad to 32-bit RGBA). s3tc/dxtc compresses 1:8 (not 1:12). Mipmaps are not specially handled by paletted textures, nor by s3tc; you need to store them separately with both formats. You can of course generate them at runtime when using s3tc, but that's a very bad idea: it will decompress the s3tc texture, downsample the already corrupted material, and recompress it for each mipmap level. That's where the last bit of quality definitely goes down the drain.

If you want to use s3tc (which is not a bad format, and since it's the only texture compression ATI supports it has its uses), then you should either supply 32-bit data and let the driver do the compression, or supply s3tc-compressed textures with mipmaps (that were filtered from the original uncompressed image and compressed afterwards).
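In OpenGL terms the two options look roughly like this. A sketch assuming the ARB_texture_compression and EXT_texture_compression_s3tc extensions are present; rgbaData and dxtData are placeholders for your image data.

// Option 1: give the driver uncompressed 32-bit data and ask it to store S3TC.
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgbaData);

// Option 2: upload blocks that were compressed (and mipmapped) offline,
// e.g. straight out of a .dds file. DXT1 blocks are 8 bytes, DXT3/5 are 16.
int size = ((width + 3) / 4) * ((height + 3) / 4) * 16;   // DXT5
glCompressedTexImage2DARB(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                          width, height, 0, size, dxtData);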

Where s3tc is really better than paletted textures is on very colourful images, i.e. when they use a wide range of very different colour gradients. Paletted systems will quickly run out of indices, and you'll get banding; s3tc does not have that problem. But if your textures use a narrow spectrum (e.g. photographic material textures), then paletted textures will be of higher quality (since, basically, they use more bits to represent each colour component).

/ Yann

[edited by - Yann L on August 23, 2002 9:36:59 AM]
I mentioned it was a 256-colour png. The point was the magnified section; I thought this was obvious. I wasn't going to post a multi-MB bmp, or a jpg.


I was quoting the dxtc .dds format. It has the mipmaps in the texture. The final size of the file is 1/6 of the original bitmap; counting the mipmaps, that's 1/12, although this does vary slightly between DXT1-5. Considering this makes it 1/3 the size of a paletted texture, I feel it is a significantly better way to keep texture size on the video card down.

And I was wrong: nVidia drivers, at least, still band the images and generally cause very ugly artifacts.

quote:

If you want to use s3tc (which is not a bad format, and since it's the only texture compression ATI supports it has its uses), then you should either supply 32-bit data and let the driver do the compression, or supply s3tc-compressed textures with mipmaps (that were filtered from the original uncompressed image and compressed afterwards).



I do use pre-compressed images; I don't trust the driver compressors one bit. As I say, the .dds format, which is an awesome format. And they do have the mipmaps in the image, which makes them impossibly fast to load.
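Once the .dds header has been parsed, the upload loop is roughly this. A sketch only; dxtFormat, blockSize (8 for DXT1, 16 for DXT3/5), mipCount, width, height and the data pointer are assumed to come from the header.

// Feed every mipmap level stored in the .dds file to GL, one after another.
const unsigned char *src = ddsDataAfterHeader;
int w = width, h = height;
for (int level = 0; level < mipCount; ++level)
{
    int size = ((w + 3) / 4) * ((h + 3) / 4) * blockSize;
    glCompressedTexImage2DARB(GL_TEXTURE_2D, level, dxtFormat,
                              w, h, 0, size, src);
    src += size;
    if (w > 1) w /= 2;
    if (h > 1) h /= 2;
}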


I took a shot of my Lord of the Rings demo, which used only s3tc pre-compressed .dds textures. This is the difference that nVidia cards make:

On the left is what the actual compressed image looks like when decompressed correctly (i.e. NOT using the original bmp). On the right is what it looks like when loaded as a compressed texture on my GeForce. It's a pretty well-known bug, but they never seem to fix it (it doesn't occur with DXT3 or 5, I think). Note the sphere map is an image of a lens flare... (the circles are meant to be there)



[edited by - RipTorn on August 23, 2002 11:46:15 PM]
This is getting pretty off-topic.

Let's settle on a compromise: both paletted and s3tc compression have their benefits and drawbacks, depending on the compressor, the image data, the 3D decompression hardware and personal preference. OK?

Just a last little thing: if we consider the memory a texture takes in video memory on the 3D card, then the compression ratio of an s3tc texture is far from 1:12. Actually, it's very simple: s3tc does not handle mipmapping in any special way; mipmaps are stored as sequential compressed images of lower resolution in VRAM, just as with any other format.

32-bit and 24-bit raw RGB(A) textures always take 32 bits per texel in VRAM. Paletted textures take 8 bits per texel (plus a constant overhead of 1024 bytes for the palette in system RAM). S3TC textures take 4 bits per texel (DXT1) or 8 bits per texel (DXT3/5).

So if you don't use alpha, an s3tc texture will take half the size of a paletted one. If you use alpha, it will be exactly the same size.
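To put concrete numbers on that, take a 256x256 texture and ignore mipmaps for a moment: raw RGBA at 32 bits per texel is 256*256*4 = 256 KB; paletted at 8 bits per texel is 64 KB (plus the 1 KB palette); DXT1 at 4 bits per texel is 32 KB; DXT3/5 at 8 bits per texel is 64 KB.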

/ Yann

This topic is closed to new replies.
