
TCP send time

Started by March 25, 2003 05:12 AM
4 comments, last by skeitaridaudans 21 years, 10 months ago
I'm wondering how long it's normal to have to wait from the time I send a packet from one computer until it's ready to be received by another computer (actually after having been received by a server and sent on again). This is how it goes: every time I send a message I start by sending a packet of a fixed size (4 bytes) which contains the size of the next packet. I then send a packet of that size, usually between 30 and 40 bytes. (This two-packet thing might be a problem.) When the server receives these packets it sends them on to the connected computers (one other computer in my tests).

I was using blocking TCP sockets and calling select() to check if they were ready to be read from, but I noticed that even when select() said there was something ready to be read, the sockets still had to wait for a while, which resulted in stops of a few milliseconds that ruined the gameplay. I am now using non-blocking sockets so I don't wait to receive until the whole packet is ready to be read. I still notice in my log that when the first packet has been received (the 4-byte one), the other one still isn't ready for a few frames (100 - 400 milliseconds). So even though the rendering is now smooth, updates from the other player come in too late, and all the movements start with a jump before they even out (I'm taking the delay into account, hence the jump, but it's still not nice).

Does anyone see an obvious problem in the way I'm doing this, or is it just normal to have to wait 1/3 of a second for data to arrive using TCP? Could UDP possibly shorten this long wait? Thanks a lot in advance!

[edited by - skeitaridaudans on March 25, 2003 6:17:25 AM]
quote:
Original post by skeitaridaudans
Every time I send a message I start by sending a packet of a fixed size (4 bytes) which contains the size of the next packet. I then send a packet of that size, usually between 30 and 40 bytes. (This two-packet thing might be a problem.)



There's no reason for it. Just send it in one go. Once you have received at least 4 bytes, read how long the packet is, and when you have received enough data, parse it.
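The advice above can be sketched in C. This is a minimal illustration, not code from the thread: the helper names (pack_message, try_parse) and the 4-byte network-order length prefix are my assumptions about how one might frame messages so a single send() carries both the size and the payload, while the receiver only parses once the whole message has arrived.

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl / ntohl */

/* Pack a message into one buffer: a 4-byte length prefix (network byte
   order) followed by the payload, so one send() call carries both.
   Returns the total size, or 0 if the output buffer is too small. */
static size_t pack_message(const char *payload, uint32_t len,
                           char *out, size_t outcap)
{
    if (outcap < 4 + (size_t)len)
        return 0;
    uint32_t netlen = htonl(len);
    memcpy(out, &netlen, 4);
    memcpy(out + 4, payload, len);
    return 4 + (size_t)len;
}

/* On the receiving side, append whatever recv() returned to a buffer
   and only parse once the whole message is buffered. Returns the
   payload length if a complete message is present, or -1 if more data
   is still needed. */
static int try_parse(const char *buf, size_t have, const char **payload)
{
    uint32_t netlen, len;
    if (have < 4)
        return -1;                      /* length prefix not here yet */
    memcpy(&netlen, buf, 4);
    len = ntohl(netlen);
    if (have < 4 + (size_t)len)
        return -1;                      /* payload still incomplete */
    *payload = buf + 4;
    return (int)len;
}
```

The point is that TCP is a byte stream, not a packet protocol: the receiver may get the prefix and payload together, split, or in fragments, so it should accumulate bytes and parse only when enough have arrived.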

quote:
Original post by skeitaridaudans
Does anyone see an obvious problem in the way I'm doing this or is it just normal to have to wait 1/3 of a second for data to arrive using TCP? Could UDP possibly shorten this long wait?



UDP should send faster (or more correctly, it sends right away, unlike TCP). But UDP packets might not reach the destination (it's unreliable).
I think you can configure TCP so that it does not wait before sending, but I don't know how.



-------------Ban KalvinB !
quote:
Original post by granat
There's no reason for it. Just send it in one go. Once you have received at least 4 bytes, read how long the packet is, and when you have received enough data, parse it.



Yeah! I figured as much and was planning to do that, but the wait time is so long I didn't think it would change that much. If most of the wait time is some TCP ACK/NACK overhead, then it could still actually cut the wait time in half.
Also, I'm still using blocking sockets and select() on the server, but I don't think that matters much, as the server doesn't (for now) have much else to do than recv() and send().
Maybe fixing this stuff will reduce the problem, but if anyone has an answer to the first question it would be nice to hear.
That first question being:
quote:
Original post by skeitaridaudans
"how long it's normal to have to wait from the time I send a packet from one computer until it's ready to be received by another computer"



Thanks a lot, granat!
If anybody's interested:
I changed the program so that it sends only one buffer, which starts with the buffer size, rather than sending the size separately. This solved the problem almost completely. It seems to me that the socket needs a minimum time between packets, so that if two are sent with a very short interval the later one will be a bit "late". I still get short delays if changes occur fast in one client, which makes it send out packets with little time between them; then some of those updates arrive late in the other clients. But this delay is still a lot less than it was before.

enough said.
Try turning off Nagle'ing. See setsockopt() and the TCP_NODELAY option.

Basically, by default TCP uses what is called the Nagle algorithm. This algorithm is designed to improve bandwidth for applications that send lots of tiny packets (e.g. telnet): it delays sending in order to bunch several small writes together instead of sending them all individually. It works great if your app is a dumb terminal like telnet, but it just gets in the way for everything else.

-Mike
Thanks a lot, Anon Mike.
That TCP_NODELAY thing really sealed the deal.
I am now network playing like it ain't no thang.
woo!

This topic is closed to new replies.
