Hi
Do you use QueryPerformanceCounter to measure the time? The standard Windows timer only has a resolution of 10ms, so it isn't good for measuring ping!
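For context, here is a tiny sketch (illustrative only) that prints the resolution QueryPerformanceCounter gives you, which on most machines is well under a millisecond versus the ~10ms granularity of the standard timer:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq;
    if (!QueryPerformanceFrequency(&freq))   // counter not supported on this hardware
        return 1;

    // Resolution in milliseconds per tick; typically far below 1 ms
    double msPerTick = 1000.0 / (double)freq.QuadPart;
    printf("QueryPerformanceCounter resolution: %f ms\n", msPerTick);
    return 0;
}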
Seahawk
unusual ping times
yeah, I'm using the performance timer.
the ping times are REALLY bad, I mean 40 - 100ms. I put the server code in a barebones console application, so the only thing being run is the netcode.
I am thinking the time it takes to render the scene in the client affects the lag, because the client application is single-threaded, which is why I'm gonna try making it multithreaded.
Edited by - evilchicken on November 26, 2001 11:27:30 PM
I already have it multithreaded. I'm not using any timer stuff, just GetTickCount(), which returns a DWORD in milliseconds.
Any other ideas?
Yes, I have used the built-in ping command and got a 0ms average.
I'm using the TCP/IP protocol, not UDP. That probably makes a difference.
Also, you don't divide the time it took between the client's send & receive by 2, right?
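Side note on the TCP point: on a link that pings at 0ms, a good chunk of a 40-100ms echo time can come from TCP's Nagle algorithm buffering small sends, so it's worth disabling it while measuring ping. A minimal sketch, assuming Winsock and an already-created SOCKET named sock (the name is made up):

// Disable Nagle's algorithm so small packets go out immediately
// instead of being coalesced with later data.
#include <winsock2.h>

void DisableNagle(SOCKET sock)
{
    BOOL flag = TRUE;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (const char *)&flag, sizeof(flag));
}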
GetTickCount() is not very reliable... you need the high-performance timer!
-------------Ban KalvinB !
Are you sure? Do you have a link to support that? Also, where can I learn about the high-performance counter?
class C_TIMER
{
private:
    C_SYS_LOG SYS_LOG;
    __int64   frequency;
    float     resolution;
    __int64   start;
    __int64   elapsed;

public:
    float timeElapsed(void);
    C_TIMER();
};

C_TIMER::C_TIMER(void)
{
    if (!QueryPerformanceFrequency((LARGE_INTEGER *) &frequency))
    {
        SYS_LOG.Print(" ERROR: PerformanceTimer not available");
        PostQuitMessage(0);
    }
    else
    {
        // Performance counter is available, use it instead of the multimedia timer.
        // Get the current time and store it as the start time.
        QueryPerformanceCounter((LARGE_INTEGER *) &start);

        // Calculate the timer resolution using the timer frequency
        resolution = (float) (1.0 / (double) frequency);

        // Set the elapsed time to the current time
        elapsed = start;
    }
}

float C_TIMER::timeElapsed(void)
{
    __int64 time;

    // Grab the current performance counter value
    QueryPerformanceCounter((LARGE_INTEGER *) &time);

    // Return (current time - start time) * resolution * 1000 to get milliseconds
    return ((float) (time - start) * resolution) * 1000.0f;
}
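To hook it up, something like this would do (a usage sketch; it assumes the class compiles as posted and the variable names are made up):

// Measure a round trip in milliseconds with the class above.
C_TIMER timer;                          // starts counting in the constructor

float before = timer.timeElapsed();
// ... send the ping and wait for the reply ...
float pingMs = timer.timeElapsed() - before;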
I know from my own experience that timeGetTime() makes animation and movement jumpy because it is not very precise.
I have read here on the boards that GetTickCount() is just as bad. When I wrote the values from timeGetTime() to a file it looked like this:
10
10
20
10
10
20
..
..
As you can see, it makes occasional jumps, which is bad.
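One thing worth knowing here: the granularity of timeGetTime() can usually be tightened to about 1ms with timeBeginPeriod() from the multimedia timer API. A minimal sketch (link against winmm.lib; the function name is made up):

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod / timeGetTime

void TimedSection(void)
{
    timeBeginPeriod(1);                    // ask for 1 ms timer granularity
    DWORD t0 = timeGetTime();
    // ... timed work here ...
    DWORD elapsedMs = timeGetTime() - t0;
    timeEndPeriod(1);                      // always pair with timeBeginPeriod
    (void)elapsedMs;
}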
-------------Ban KalvinB !
November 27, 2001 01:51 PM
You overlooked the obvious (or I misread, heh). It sounds like you are sending the tickcount to the server and the server responds with its own tickcount. VERY VERY bad, since the clocks are not synced, will definitely be off, and will give poor ping times. The algorithm you should use is as follows (there's a sketch after these steps):
1. Client sends a ping packet (PING) and stores the start clock value locally.
2. Server receives PING and responds (ACK PING).
3. Client receives ACK PING and calculates the elapsed time (the end time is the clock value when you received the ACK packet). That gives you the round-trip time; divide by 2 only if you want the one-way time instead.
Go back to step one if you need more ping times for an average (don't do more than 3-5, otherwise you are just flooding).
4. Don't send the same server multiple packets at once in a flurry. Instead, send one, wait for the reply, then send another if needed.
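Here is a minimal client-side sketch of those steps, assuming an already-connected Winsock SOCKET named sock and a one-byte PING/ACK scheme (those names and the packet format are illustrative, not from the thread):

// 1. send PING and remember the local start time
// 2./3. wait for the server's ACK and compute the elapsed time locally
#include <winsock2.h>
#include <windows.h>

float MeasurePingMs(SOCKET sock)
{
    LARGE_INTEGER freq, startTime, endTime;
    QueryPerformanceFrequency(&freq);

    char ping = 'P';
    QueryPerformanceCounter(&startTime);
    send(sock, &ping, 1, 0);

    char ack;
    recv(sock, &ack, 1, 0);
    QueryPerformanceCounter(&endTime);

    // Round-trip time in milliseconds; divide by 2 if you want one-way latency.
    return (float)(endTime.QuadPart - startTime.QuadPart) * 1000.0f
           / (float)freq.QuadPart;
}

Note that only the client's clock is ever read, so the two machines' clocks never need to agree.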
I'm not sending back the server's tickcount (yeah, you misread =) ).
I put my tickcount into a packet and send the packet.
The server gets the packet & immediately returns it (doesn't change it).
I read the tickcount from the packet & calculate the ping.
I think my problem is the 10ms resolution of GetTickCount().
I'm changing that now.
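For reference, the echo side of that scheme can be as simple as the sketch below (assuming an already-accepted TCP SOCKET named client; the buffer size and names are made up):

// Bounce whatever arrives straight back unchanged, so the client can
// compare the echoed timestamp against its own clock.
#include <winsock2.h>

void EchoLoop(SOCKET client)
{
    char buffer[64];
    for (;;)
    {
        int received = recv(client, buffer, sizeof(buffer), 0);
        if (received <= 0)
            break;                            // connection closed or error
        send(client, buffer, received, 0);    // return the packet untouched
    }
}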