
Detecting when bandwidth is fully used

Started by December 16, 2002 01:44 PM
6 comments, last by LonelyStar 22 years, 1 month ago
Hi everybody, in an action game where I send "state updates", I want to send messages as often as possible, but not so often that the bandwidth is no longer sufficient and latency rises. How do I detect how many packets a second I can send, and when the bandwidth is fully used? thx
I don't think you can detect that at the application level. I'm not sure that any of the network protocols related to TCP/IP or UDP specify bandwidth detection. I could be wrong (though I'm pretty sure I'm right).

You basically just have to go by a general rule of thumb:

Cable - 30-300 KB / sec upload (REALLY depends on provider)
Dial-up - 2-3 KB / sec

Dire Wolf (direwolf@digitalfiends.com)
www.digitalfiends.com
I don't know if you can do this. I don't think DirectPlay supports this feature either. I think you'd have to implement it yourself.
---------------------
http://www.stodge.net
One way to do this is to keep track of the ping between the server and each of the clients. Then you can send updates to each client more or less often depending on its ping. I'm not sure if this would present any problems, but it seems like it could work (rough sketch below).

-John
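What John describes might look something like the C++ sketch below: scale each client's update interval by its measured ping. This is only my own illustration; the Client fields, constants, and function names are hypothetical, not from the thread (and note the caveat in the next reply that ping measures latency, not bandwidth).

#include <algorithm>
#include <cstdint>

struct Client {
    uint32_t pingMs;        // smoothed round-trip time, measured elsewhere
    uint32_t lastUpdateMs;  // when we last sent this client a state update
};

// Grow the update interval as ping rises: a 50 ms client gets ~13 updates/sec,
// a 400 ms client gets 4/sec. The base and cap are arbitrary tuning values.
uint32_t UpdateIntervalMs(const Client& c) {
    const uint32_t base = 50;
    return std::min(base + c.pingMs / 2, uint32_t(250));
}

bool ShouldSendUpdate(const Client& c, uint32_t nowMs) {
    return nowMs - c.lastUpdateMs >= UpdateIntervalMs(c);
}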
A ping will give you some idea what the connection's latency is like, but it won't help you determine available bandwidth. Imagine a Quake server in Texas. A dial-up user in Arizona might have a better ping than a cable user in Australia, even though the cable connection offers much more bandwidth.

Latency and bandwidth are two different things.

Most games just ask the user to select the type of connection they're on (28.8, 56k, ISDN, xDSL, cable, etc.) and then make assumptions about the available bandwidth based on that selection.
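That "ask the user" approach can be as simple as a lookup table from the selected connection type to a send budget. A minimal sketch, with ballpark figures in the spirit of the numbers quoted earlier in the thread (the enum and values are my assumptions):

#include <cstdint>

enum class Connection { Modem288, Modem56k, ISDN, DSL, Cable };

// Conservative upload budgets per connection type, in bytes per second.
uint32_t UploadBudgetBytesPerSec(Connection c) {
    switch (c) {
        case Connection::Modem288: return 2 * 1024;   // ~2 KB/s dial-up
        case Connection::Modem56k: return 3 * 1024;   // ~3 KB/s dial-up
        case Connection::ISDN:     return 7 * 1024;
        case Connection::DSL:      return 16 * 1024;
        case Connection::Cable:    return 30 * 1024;  // low end of the cable range
    }
    return 2 * 1024;  // unknown: fall back to the safest assumption
}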
Here's an idea which works (I just tested it): basically, just send a whole lot of data and see how long it takes. I don't know if there is a better 'known' way to do it...

This is what I did, and then I realised it was over-elaborate; a simple blocking send-and-time would be sufficient. But since I have already typed it out:

---------------------------------
(the sender must be able to produce data faster than the link can carry it!)
set the sending socket to non-blocking
send small messages to the receiver as fast as you can until you get a WOULDBLOCK
(the send buffer is now full)
loop {
    start timing
    retry the send until you DON'T get a WOULDBLOCK
    (the buffer has drained by one message, and the time that took reflects the link speed)
    stop timing
    totalTime += thisTime;
    bytesSent += msgSize;
}
You can watch bytesSent/totalTime until it converges to a steady value.
I found 1 KB was a good size for the messages, although (WARNING) it sometimes took up to 30 seconds to get an accurate convergence, and you don't want to make people wait for that, do you... I also found that the timing of the initial messages fluctuates greatly, so sending the first 100 or so messages without timing gives a better average. I don't think such exact measurements are really necessary though; as noted above, using the norm for 28.8k, 56k, etc. and making it a user setting would probably be good enough.
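For reference, the simpler blocking send-and-time variant mentioned above might look like this in C++ with POSIX sockets. The message size, warm-up count, and the already-connected blocking TCP socket are my assumptions; the receiver just has to keep reading:

#include <sys/types.h>
#include <sys/socket.h>
#include <chrono>

// Returns an estimate of upload bytes/sec over 'sock', or -1.0 on error.
double MeasureUploadBytesPerSec(int sock) {
    char msg[1024] = {};      // 1 KB payloads, as suggested above
    const int warmup = 100;   // untimed sends so early fluctuations settle
    const int timed  = 1000;

    for (int i = 0; i < warmup; ++i)
        if (send(sock, msg, sizeof msg, 0) < 0) return -1.0;

    auto start = std::chrono::steady_clock::now();
    long long bytes = 0;
    for (int i = 0; i < timed; ++i) {
        ssize_t n = send(sock, msg, sizeof msg, 0);
        if (n < 0) return -1.0;
        bytes += n;
    }
    double elapsed = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    return elapsed > 0.0 ? bytes / elapsed : -1.0;
}

Once the send buffer fills, each blocking send only completes as the network drains the buffer, so bytes/elapsed approximates the link's upload rate.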


Ahh, good point MattB. I obviously hadn't thought it through too far.

However, that aside, do you think doing something like I was talking about would be helpful anyway? That is, re-calibrating the number of updates you're sending based on the latency. It seems like this could help a bit when things start to lag, since you wouldn't be clogging things up with more updates than the connection can handle.

...just thinking aloud for my own stuff.

-John
The easiest way may be to examine the change in packet loss and adjust your rate based on that. If you are sending out sequenced UDP packets and notice unacceptable loss, try lowering the rate. If the loss then decreases, the problem was probably bandwidth related rather than general network trouble. On the flip side, you could increase the send rate until you notice an increase in packet loss, then go back to the previous rate and continue to monitor packet loss.
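A sketch of that loss-driven idea, assuming the receiver reports once a second how many of our sequenced UDP packets arrived (the RateController name, thresholds, and step sizes are all my own guesses, not from this post):

#include <algorithm>
#include <cstdint>

struct RateController {
    uint32_t packetsPerSec = 10;          // current send rate
    uint32_t minRate = 5, maxRate = 40;   // clamp range

    // Call once per second with how many packets we sent and how many
    // the peer reported receiving over that interval.
    void OnLossSample(uint32_t sent, uint32_t received) {
        if (sent == 0) return;
        float loss = 1.0f - float(received) / float(sent);
        if (loss > 0.05f) {
            // Unacceptable loss: back off and see if the loss decreases.
            packetsPerSec = std::max(minRate, packetsPerSec / 2);
        } else if (loss < 0.01f) {
            // Clean link: probe upward and keep monitoring packet loss.
            packetsPerSec = std::min(maxRate, packetsPerSec + 1);
        }
    }
};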
