
MP3-Beating Compression

Started April 06, 2000 01:58 PM
494 comments, last by kieren_j 24 years, 8 months ago
I have a question, kieren (be honest):

Are you just joking, or do you really believe your compression algorithm works?

Visit our homepage: www.rarebyte.de.st

GA
Maybe some code then.
Stage 4, which replaces the most common byte with one bit:
	memset(&rle_occurance, 0, sizeof(rle_occurance));
	for (sourcebyte = 0; sourcebyte < size; sourcebyte++)
		rle_occurance[source_buffer[sourcebyte]]++;
	rle_code = 0x00;
	rle_length = 0x00000000;
	for (sourcebit = 0; sourcebit < 256; sourcebit++)
	{
		if (rle_occurance[sourcebit] > rle_length)
		{
			rle_length = rle_occurance[sourcebit];
			rle_code = (unsigned char)sourcebit;
		}
	}
	memset(target_buffer, 0, size);
	target_bitstream.ptr = target_buffer;
	target_bitstream.byte = 0;
	target_bitstream.bit = 0;
	for (sourcebyte = 0; sourcebyte < size; sourcebyte++)
	{
		l = source_buffer[sourcebyte];
		if (l == rle_code && sourcebyte < (size-2))
		{
			write_bit(&target_bitstream, 0);
		}
		else
		{
			write_bit(&target_bitstream, 1);
			write_byte(&target_bitstream, l);
		}
	}


If you all think this is worthless crap, or maybe you think it's worth keeping: either way, we'll all vote on whether to keep this thread.
Perhaps I've just missed something--5 pages is a lot to read--but when did kieren_j say he was trying to compress random data? It seemed to me that his reordering scheme was taking advantage of some property inherent in a couple of file formats (which I know nothing of). To me, that seems quite _possible_, although it certainly isn't proven to me yet.
did you understand anything of what i said, kieren?
it won't lead to anything if you keep throwing code snippets at us that don't prove anything. (well, at least it won't prove anything to the less influenceable of us)

i'll make you an offer: you try to compress the following and tell me how big your results are:

- a 3 MB MP3
- a 30 MB WAV
- a 1 MB BITMAP
- a 1 MB ZIPPED Bitmap
- a file with 100k contents of RANDOM data

then we can talk again and we'll tell you if you found out something cool or if you are dead wrong.

if you don't take the offer, well, i'd say we leave this thread as it is, because this discussion leads to NOTHING.
anonymous poster, kieren said he was compressing zips and mp3s. these files are already essentially random data. think about it. if they weren't, why couldn't zips be re-zipped?
Everybody vote then, mixed opinions!!
In your last post it seems you're trying to prove that you can do Huffman-style compression (sorta, I don't think replacing the most common byte with a bit would decompress). What we want to see is your actual (nonexistent) "rearranging" code.
I already explained it!
Code:
	sourcebyte8 = 0;
	for (sourcebyte = 0; sourcebyte < size; sourcebyte++)
	{
		l = source_buffer[sourcebyte];
		sourcebyte_d8 = sourcebyte / 8;
		if (l)
		{
			////// if not zero
			if (l & 1)
			{
				sourcebit = (sourcebyte8);
				targetbit = (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 2)
			{
				sourcebit = (sourcebyte8) + 1;
				targetbit = (size) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 4)
			{
				sourcebit = (sourcebyte8) + 2;
				targetbit = (size * 2) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 8)
			{
				sourcebit = (sourcebyte8) + 3;
				targetbit = (size * 3) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 16)
			{
				sourcebit = (sourcebyte8) + 4;
				targetbit = (size * 4) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 32)
			{
				sourcebit = (sourcebyte8) + 5;
				targetbit = (size * 5) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 64)
			{
				sourcebit = (sourcebyte8) + 6;
				targetbit = (size * 6) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			if (l & 128)
			{
				sourcebit = (sourcebyte8) + 7;
				targetbit = (size * 7) + (sourcebyte);
				targetbyte = (targetbit / 8);
				targetbit = targetbit % 8;
				set_bit(&target_buffer[targetbyte], targetbit);
			}
			/////// end if not zero
		}
		sourcebyte8 += 8;
	}

I can't really expand on this any more!

Please vote!
Of course it decompresses!
It's really simple:-

(1) Find most common char, store in P
(2) For all source bytes:
(2.a) If byte is P, write a ZERO
(2.b) else, write a ONE then the byte
(3) Repeat for all bytes

It's really quite simple!
Besides, it's not Huffman - it's one of my own invented schemes, I think (I got the idea from RLE2).


Edited by - kieren_j on 4/14/00 3:16:52 PM
Yeah, keep pushing him. It's almost open source.

kieren_j, can you inform us as to what you are doing? Are you still working on it? Testing? Looking for a patent? What?


Lack

Christianity, Creation, metric, Dvorak, and BeOS for all!

This topic is closed to new replies.
