MP3-Beating Compression
Actually, now that I think about it (this is the logical side of me, ppl), how WOULD you convert a 100-meg file to 3 bytes w/o losing information? It just doesn't make [logical] sense!!
a byte has a finite number of possible values
any file is a long string of bytes
a 100-meg file has a finite number of possible byte combinations
if you start trying all possible combinations in some predefined order, eventually (after a VERY long time though) you reach that particular one. Now you just have to store the index (order number) of that combination.
(just kidding, there's one big flaw with this scheme ... guess)
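A quick back-of-the-envelope C sketch (illustrative only, not code from this thread) of what storing that "index" would actually cost:

#include <stdio.h>
#include <stdint.h>

/* How many bits does it take to store an index that can select any one of
   the 256^n possible n-byte files?  log2(256^n) = 8*n bits -- exactly the
   size of the original file, so nothing is saved. */
int main(void)
{
    const uint64_t file_bytes = 100ULL * 1024 * 1024;   /* the "100 meg" file */
    const uint64_t index_bits = 8ULL * file_bytes;      /* bits in the index  */

    printf("distinct 100MB files: 256^%llu\n", (unsigned long long)file_bytes);
    printf("bits needed for the index:  %llu\n", (unsigned long long)index_bits);
    printf("bytes needed for the index: %llu\n", (unsigned long long)(index_bits / 8u));
    return 0;
}

The index into all 256^N possible files needs 8*N bits, i.e. exactly as many bytes as the file it is supposed to replace.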
-kertropp
C:\Projects\rg_clue\ph_opt.c(185) : error C3142: 'PushAll' :bad idea
C:\Projects\rg_clue\ph_opt.c(207) : error C324: 'TryCnt': missing point
I have web sites with information about Huffman, LZW, and RLE compression routines. But they are a secret!
Ok... listen. The numbers he quoted are NOT concrete numbers. I bet everyone does that... exaggerates when talking about something. I DO NOT believe for a second that he compressed a 100Mb file to 3 bytes. That IS mathematically impossible unless you have a file with just repeating values. E.g.:
Byte 1   Bytes 2-3
------   ----------------------
Value    No. of times repeated
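For concreteness, a minimal run-length-encoding sketch along the lines of that table (hypothetical helper name rle_encode, one value byte plus a two-byte repeat count; not anyone's actual code from the thread):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Minimal run-length encoder: emit (value, 16-bit count) records.
   Only wins when the input really is long runs of repeated bytes. */
static size_t rle_encode(const uint8_t *in, size_t in_len, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < in_len; ) {
        uint8_t value = in[i];
        uint16_t count = 0;
        while (i < in_len && in[i] == value && count < 0xFFFF) {
            ++i;
            ++count;
        }
        out[o++] = value;                    /* Byte 1:    the value         */
        out[o++] = (uint8_t)(count >> 8);    /* Bytes 2-3: how many repeats  */
        out[o++] = (uint8_t)(count & 0xFF);
    }
    return o;
}

int main(void)
{
    uint8_t in[1000], out[16];
    for (size_t i = 0; i < sizeof in; ++i) in[i] = 0xAA;   /* 1000 identical bytes */
    printf("1000 bytes -> %zu bytes\n", rle_encode(in, sizeof in, out));
    return 0;
}

It only wins on inputs that really are long runs; on data with no repeats it triples the size.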
But hey, that's not the point. I am sure he will give us a demo (I haven't seen anything yet). But think about it: 100Mb to 5Mb. That IS mathematically possible.
I am not saying he is right or wrong. I am saying: stop being so single-minded about it, 'cos if he is right, then you will be fu^$$d!!!
OK people, there's this small branch of mathematics called information theory. In other words, the study of how much information is needed to represent something. Compression absolutely cannot overcome that boundary. Think of it this way... there are 2^N possible files that are N bits long. To compress any such file to a shorter length means that the same algorithm must also "compress" other files of that length to some longer length ... decompression MUST be deterministic, after all.
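The counting argument can be brute-checked for tiny sizes; a small illustrative C program (not from the post):

#include <stdio.h>

/* Pigeonhole check: there are 2^n files of exactly n bits, but only
   2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 bit strings that are strictly shorter.
   So no lossless compressor can shrink every n-bit file. */
int main(void)
{
    for (int n = 1; n <= 16; ++n) {
        unsigned long files   = 1UL << n;        /* files of exactly n bits         */
        unsigned long shorter = (1UL << n) - 1;  /* all outputs shorter than n bits */
        printf("n=%2d: %8lu inputs, only %8lu shorter outputs\n",
               n, files, shorter);
    }
    return 0;
}

For every n there is one more n-bit input than there are strictly shorter outputs, so at least one input has to map to something at least as long as itself.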
Thus, even if kieren were to say that he could reduce the size of any file by 50%, I'd still know he had been hitting the crack pipe a little too often. Decompressing all possible files of length N/2 can only yield, at most, 2^(N/2) files, which is still much less than 2^N. Give it up, kieren (and anyone else hoping that he's right), this has been done many times before. Anyone heard of gzus? (sp?) It was a "miracle" compression program on an older system that just dumped huge pieces of the file to a protected part of the disk where the user wouldn't notice.
Have a nice night. Drive home safely.
-Brian
April 06, 2000 08:55 PM
Hey guys, not that I know anything, but taking 100MB of information and making it that small seems a little far off. Unless you could describe all the possible patterns inside the 100MB and then create a decoder for that... but geez, how long could that take? And how big of an information database would you need to store all those different patterns?
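For a rough sense of how big that pattern database gets, here is a tiny illustrative calculation (assumed numbers, not anything from the thread):

#include <stdio.h>
#include <stdint.h>

/* A codebook that can represent EVERY possible k-byte pattern needs
   256^k entries -- it explodes long before the patterns get interesting. */
int main(void)
{
    uint64_t entries = 1;
    for (int k = 1; k <= 7; ++k) {
        entries *= 256;
        printf("all %d-byte patterns: %20llu entries\n",
               k, (unsigned long long)entries);
    }
    return 0;
}

A codebook covering every possible 7-byte pattern already needs about 7.2 * 10^16 entries, far bigger than the file it is meant to describe.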
Then again I could be totally wrong... I'm merely working on my first 3D game engine... and there's so little information you can find for someone new to that field that explains the processes right.
Anyways, more power to yas... if you get it to work, let me know, 'cause I got about 700 CDs I wanna get rid of.
-SillyClown
April 06, 2000 08:59 PM
Wow. I can't believe this thread has actually received as much discussion as it has! Are we so wanting for something to do online that we would bother posting anything under this obviously BS thread other than "sure, whatever you fucking putz"?
I was going to post a rant here, but osmanb covered my points for me. My guess is that kieren really believes he has found something, but he's just lacking basic information theory knowledge.
People invent extremely-high compression algorithms all the time. One is even patented. None of them have been shown to work in practice. There's only so much information you can fit into 16kb. Assuming you can compress any 5Meg file down to 16kb is just wrong. There's not nearly enough information there to guarantee uniqueness of each compressed file from so large a source.
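The "not enough information" point comes down to two exponents; a small illustrative C sketch (assumed sizes: 5MB in, 16kb out):

#include <stdio.h>
#include <stdint.h>

/* Compare the number of distinct 5MB inputs against distinct 16kb outputs
   by comparing exponents: 2^(input bits) vs 2^(output bits). */
int main(void)
{
    const uint64_t in_bits  = 5ULL * 1024 * 1024 * 8;   /* 5 MB source   */
    const uint64_t out_bits = 16ULL * 1024 * 8;         /* 16 kb output  */

    printf("distinct 5MB files : 2^%llu\n", (unsigned long long)in_bits);
    printf("distinct 16kb files: 2^%llu\n", (unsigned long long)out_bits);
    printf("inputs per output on average: 2^%llu\n",
           (unsigned long long)(in_bits - out_bits));
    return 0;
}

On average each 16kb output would have to stand in for 2^41811968 different 5MB inputs, so almost none of them could ever be recovered.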
So, it's a bit harsh to attack kieren. Better to let him figure out what he's missing on his own, if he's unwilling to share his insight with us.
OK, well first of all 3 bytes was me giving an example and just showing off! 100mb would actually come to about 400kb.
Secondly, I've just fixed a bug in the bitstream code, and now it only works half as well as it did - but it's still pretty amazing!
I thought I'd give you a clue to how it works; so..
It uses 6 stages of compression, using bitstreams; stage 1 re-orders all of the bits in a file, making new, more "compressible" bytes.
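Nothing in the post says what stage 1 actually does; one common way to "re-order bits into more compressible bytes" is a bit-plane transpose, so here is a purely hypothetical sketch of that idea - a guess, not kieren's algorithm:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical "stage 1": bit-plane transpose.  For each 8-byte block,
   gather bit 7 of every byte into the first output byte, bit 6 into the
   second, and so on.  Runs of similar bytes become long runs of 0s/1s,
   which a later stage could model more easily.  This is a guess at what
   "re-ordering the bits" might mean, NOT the poster's actual algorithm. */
static void bitplane_transpose8(const uint8_t in[8], uint8_t out[8])
{
    memset(out, 0, 8);
    for (int bit = 0; bit < 8; ++bit)            /* which bit plane   */
        for (int byte = 0; byte < 8; ++byte)     /* which source byte */
            out[bit] |= (uint8_t)(((in[byte] >> (7 - bit)) & 1u) << (7 - byte));
}

int main(void)
{
    const uint8_t block[8] = { 0x81, 0x81, 0x81, 0x81, 0x80, 0x80, 0x80, 0x80 };
    uint8_t t[8];
    bitplane_transpose8(block, t);
    for (int i = 0; i < 8; ++i) printf("%02X ", t[i]);
    printf("\n");
    return 0;
}

A transpose like this is just a permutation (fully reversible), so by itself it saves nothing; at best it rearranges redundancy for whatever modelling stage comes next.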
In case you're interested.