
Delta compressing

Started by CoMaNdore, May 20, 2005 04:46 PM
Last reply by hplus0603
Okay, I have some questions about delta compression; I just want to make sure I understand it right, and some constructive feedback on my current implementation would be good. Delta compression works by only sending the part of the data that has changed, right? I have tried to implement a generic delta compression scheme, so that new objects don't have to think about adding delta compression themselves. In my current implementation, every net_object serializes itself into a binary buffer that stores its members in order, so I can just unserialize them on the other end. Like:

void SomeNetObject::Serialize( NetBuffer& buff )
{
     // write members into the buffer, in a fixed order
     buff << m_hp << m_ammo << m_pos;

     ParentObj::Serialize( buff );
}

void SomeNetObject::UnSerialize( NetBuffer& buff )
{
     // must read in the same order as it was serialized
     buff >> m_hp >> m_ammo >> m_pos;

     ParentObj::UnSerialize( buff );
}

// NetBuffer can be treated as a BYTE[] when it's sent across the wire
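For what it's worth, here is a minimal sketch of what a NetBuffer along those lines might look like. The class name and the << / >> operators come from the snippet above; everything else (the std::vector storage, the member names) is an assumption, and it only handles plain-old-data members and ignores endianness:

#include <cstddef>
#include <cstring>
#include <vector>

typedef unsigned char BYTE;

class NetBuffer
{
public:
    NetBuffer() : m_readPos( 0 ) {}

    // append the raw bytes of a POD value to the buffer
    template< typename T >
    NetBuffer& operator<<( const T& value )
    {
        const BYTE* p = reinterpret_cast< const BYTE* >( &value );
        m_data.insert( m_data.end(), p, p + sizeof( T ) );
        return *this;
    }

    // read the next sizeof(T) bytes back into a POD value
    template< typename T >
    NetBuffer& operator>>( T& value )
    {
        std::memcpy( &value, &m_data[ m_readPos ], sizeof( T ) );
        m_readPos += sizeof( T );
        return *this;
    }

    // raw view of the buffer, for sending it across the wire
    const BYTE* Data() const { return m_data.empty() ? 0 : &m_data[ 0 ]; }
    std::size_t Size() const { return m_data.size(); }

private:
    std::vector< BYTE > m_data;
    std::size_t         m_readPos;
};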
I have this working at the moment. Now I want to delta compress the buffer. This is somewhat inspired by the Zen of Network Programming (was that the name?). I know the last state the object was buffered in, so I delta compress between the LastState and the NewState. My current implementation works at the byte level and creates chunks out of the data that has changed.

struct Chunk
{
        BYTE  start_pos_in_buffer;  // offset of the changed run in the state buffer
        BYTE  chunk_size;           // number of changed bytes
        BYTE* data;                 // the changed bytes themselves
};
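A minimal sketch of how such chunks could be generated by walking the two buffers; the function and variable names are mine, not from the original post, the chunk data is held in a std::vector instead of a raw pointer, and it assumes both buffers have the same size (no larger than 0xFF bytes, matching the struct above):

#include <cstddef>
#include <vector>

typedef unsigned char BYTE;

struct Chunk
{
    BYTE              start_pos_in_buffer;
    BYTE              chunk_size;
    std::vector<BYTE> data;
};

// Walk the old and new state byte by byte and emit one Chunk per run of changed bytes.
std::vector<Chunk> BuildChunks( const BYTE* lastState, const BYTE* newState, std::size_t size )
{
    std::vector<Chunk> chunks;

    std::size_t i = 0;
    while( i < size )
    {
        if( lastState[ i ] == newState[ i ] )
        {
            ++i;
            continue;
        }

        // found the start of a changed run; extend it as far as it goes
        std::size_t runStart = i;
        while( i < size && lastState[ i ] != newState[ i ] )
            ++i;

        Chunk c;
        c.start_pos_in_buffer = static_cast<BYTE>( runStart );
        c.chunk_size          = static_cast<BYTE>( i - runStart );
        c.data.assign( newState + runStart, newState + i );
        chunks.push_back( c );
    }

    return chunks;
}

Applying the chunks on the receiving side is just the mirror image: start from the last acknowledged state and copy each chunk's data back in at start_pos_in_buffer.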
But there must be some smarter way to do this, as this scheme limits the buffer to 0xFF bytes in size, and it scales badly when the changed data isn't laid out nicely. A pattern like changed - not_changed - changed - not_changed will generate two chunks, each with a minimum overhead of 2 bytes plus the changed data.

Pros: it automatically updates every variable that you serialize, and only when it has changed. It also supports the OOP scheme nicely, since every object acts in layers and you don't have to write any special code to support this.

Cons: it can generate a lot of chunks.

-------------------------------------------

I really like the Serialize/UnSerialize system; it's mostly the delta compression scheme and its implementation that I want to discuss. Thanks for all help, links and constructive feedback.

- CoMaNdore

edit: changed some formatting, and some spelling.

[Edited by - CoMaNdore on May 20, 2005 8:13:44 PM]
- Me
Dynamically discovering and transmitting the format of your data is likely always going to require more bytes of representation than an implementation that takes advantage of static knowledge. That's the price you pay for dynamism.

However, the good thing about dynamic solutions is that they adapt to do the best they can with whatever input data is available; this means that a dynamic algorithm is likely to out-perform a poorly tuned static algorithm, and it will likely do a good job on new data without needing re-tuning.

When it comes to specific representation, how about a byte stream of edit commands? The decoder would work something like:


while( data ) {
  b = nextbyte;
  if( b < 0 ) {
    copy 1-b bytes from source
    increment pointers by 1-b
  }
  else {
    copy b+2 bytes from the next data in the packet
    increment pointers by b+2
  }
}


This allows you to send as much data as you want, where the overhead is, worst case, one byte for every 129 bytes of state, and the best possible savings is sending only a single byte per 129 bytes of state. The adjustment-by-2 comes from the fact that stopping an emitted data stream just to copy a single byte is actually a loss; you only want to copy chunks greater than one byte (or, depending, two bytes).
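A minimal sketch of that decoder in C++; the function name, the signature, and the signed-char interpretation of the command byte are my reading of the pseudocode above, not code from the post itself:

#include <cstddef>

typedef unsigned char BYTE;

// Rebuild the new state from the last acknowledged state plus the edit-command stream.
// packet points at the command/literal bytes, source at the old state, dest receives the new state.
void ApplyDelta( const BYTE* packet, const BYTE* source, BYTE* dest, std::size_t destSize )
{
    std::size_t written = 0;

    while( written < destSize )
    {
        signed char b = static_cast<signed char>( *packet++ );

        if( b < 0 )
        {
            // negative command: the next 1-b bytes are unchanged, copy them from the old state
            std::size_t count = static_cast<std::size_t>( 1 - b );      // 2..129
            for( std::size_t i = 0; i < count; ++i )
                dest[ written + i ] = source[ written + i ];
            written += count;
        }
        else
        {
            // non-negative command: the next b+2 bytes in the packet are literal new data
            std::size_t count = static_cast<std::size_t>( b ) + 2;      // 2..129
            for( std::size_t i = 0; i < count; ++i )
                dest[ written + i ] = *packet++;
            written += count;
        }
    }
}

The matching encoder would walk LastState and NewState, emit one negative command byte per run of unchanged bytes and one non-negative command byte followed by the literal bytes per run of changed bytes, splitting any run longer than 129 bytes into several commands.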
enum Bool { True, False, FileNotFound };

