
Is Something Wrong With "gettimeofday"?

Started by September 05, 2005 06:26 AM
7 comments, last by _winterdyne_ 19 years, 5 months ago
I use gettimeofday to get the current time, and if it satisfies the condition, I send the packet to the client.

#include <stdio.h>
#include <sys/time.h>

void Test()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* tv_sec is seconds since the epoch, tv_usec is microseconds */
    printf("Time:%ld,%ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    sendPacket();
}

The differences between the printed times under Linux are correct, but when I recvPacket() on the client (which runs under Windows 2000 Pro), the difference between the packet receive times is smaller than the difference printed under Linux. I also used tcpdump to capture the outgoing packets under Linux, and there the time difference is also smaller than the real one. Linux OS: AS4. I use ACE to send the packets. Can you tell me why? Thank you in advance!
Gonna need more information.

If the printed value on Linux is correct, then why would there be a problem with gettimeofday? The point at which you have a problem is only after sending on Linux and receiving on Windows.

-=[ Megahertz ]=-
It's quite possible that scheduler jitter is such that you receive two packets closer together in time than they were sent.

If you need a specific interval between packets, you need a de-jitter buffer, which receives packets as soon as they come in and schedules them for delivery at the predictable interval. Ideally, put a time-stamp in the packets on send, too.
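Roughly, such a buffer holds each packet until a fixed playout delay after its send time has elapsed on the local clock. A sketch of the idea (the names DejitterBuffer and Packet, and the millisecond timestamps, are assumptions for illustration, not code from this thread):

// A rough de-jitter buffer sketch. The first packet is used to map sender time
// onto local time; every packet is then released at sender spacing plus a fixed
// playout delay, so receive-side spacing matches send-side spacing.
#include <cstdint>
#include <map>
#include <vector>

struct Packet
{
    int64_t sendTimeMs;          // sender timestamp carried in the packet
    std::vector<char> payload;
};

class DejitterBuffer
{
public:
    explicit DejitterBuffer(int64_t playoutDelayMs)
        : delayMs(playoutDelayMs), haveOffset(false), offsetMs(0) {}

    // Call as soon as a packet arrives from the socket.
    void onReceive(const Packet &p, int64_t localNowMs)
    {
        if (!haveOffset)
        {
            // Map sender time onto local time using the first packet seen.
            offsetMs = localNowMs - p.sendTimeMs;
            haveOffset = true;
        }
        int64_t dueMs = p.sendTimeMs + offsetMs + delayMs;
        pending.insert(std::make_pair(dueMs, p));
    }

    // Call regularly (e.g. once per frame); delivers packets whose due time has
    // passed, in send order.
    template <class Handler>
    void update(int64_t localNowMs, Handler deliver)
    {
        while (!pending.empty() && pending.begin()->first <= localNowMs)
        {
            deliver(pending.begin()->second);
            pending.erase(pending.begin());
        }
    }

private:
    int64_t delayMs;
    bool haveOffset;
    int64_t offsetMs;                        // localNow - sendTime at first packet
    std::multimap<int64_t, Packet> pending;  // keyed by local due time
};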
enum Bool { True, False, FileNotFound };
I tried all the methods!

Preconditions:
1. The send buffer is 16 MB.
2. The time difference between server and client (i.e. between sending a packet and receiving it) is usually about 100 ms.
3. I wrote a *Test* program to test the ACE framework, and it runs correctly.
4. I moved all the code from Linux to Windows and replaced gettimeofday with GetTickCount, and that also runs correctly.


I also searched for gettimeofday in Google Groups, and it's notorious! Can you tell me which function can replace gettimeofday? I will use milliseconds.


Thank you in advance!
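For reference, one common Linux option when only millisecond resolution is needed is clock_gettime with CLOCK_MONOTONIC, which, unlike gettimeofday, is not affected when the wall-clock time is stepped. A minimal sketch (link with -lrt on older glibc):

#include <stdio.h>
#include <time.h>

// Sketch: millisecond timestamp from the monotonic clock. CLOCK_MONOTONIC is
// not affected by NTP steps or manual changes to the system time.
long long nowMilliseconds()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000LL + ts.tv_nsec / 1000000L;
}

int main()
{
    printf("Time: %lld ms\n", nowMilliseconds());
    return 0;
}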
It's not something silly, is it?

Remember that tv_usec is MICROSECONDS on Linux... This is from my timer class; it works OK on both Linux and windoze.

float WSTimer::getSeconds()
{
#ifdef WSCORE_WIN32
    // timeGetTime is MILLISECOND PRECISION
    m_iEndClock = timeGetTime();
#endif
#ifdef WSCORE_LINUX
    // gettimeofday is MICROSECOND PRECISION
    gettimeofday(&tv, 0);
    m_iEndClock = tv.tv_sec * 1000000;
    m_iEndClock += tv.tv_usec;
#endif
    // m_fRate_inv on Windows = 1/1000, on Linux = 1/1000000
    m_iDeltaTime = m_iEndClock - m_iStartClock;
    return (float)((double)(m_iDeltaTime) * m_fRate_inv);
}



Does this help?
Winterdyne Solutions Ltd is recruiting - this thread for details!
Thank you, _winterdyne_!

The problem I've run into is an unusual one!
I know how to use gettimeofday. The point is: when I send a packet I print the time, and when the packet is received I also record the time (using tcpdump to capture the packets, or using a short Winsock program to capture the packets and print the time).

For example:

[Server]
// the difference is 1120ms
1222222222,123456 Sending the packet 1
1222222223,243456 Sending the packet 2

[Client]
// the difference is 1000ms
26555555,123 Receiving the packet 1
26555556,123 Receiving the packet 2


This is not the actual data, but the differences are representative.

I don't know how to explain it.
Can you explain it for me?

BTW: I traced the program, and it sends the packet to the client immediately!
I see.

You're saying the delay between packets on the server is longer than on the client.

What is the size of the packets? Is packet 2 larger than the MTU on the server but smaller on the client?

I think you'll have to do what hplus has suggested and implement a receiving buffer to schedule delivery at a predictable time, if you really need to do so.


Winterdyne Solutions Ltd is recruiting - this thread for details!
Thank you, _winterdyne_!

My send buffer is 16 MB, and my packets are about 0x39 bytes each!
If there were something wrong with transferring the packets, why would the interval between sending two packets often be about 100 ms longer than the interval between the client receiving them?

I can't explain it. Can you?

Where are you printing the time difference? What is happening between the sends?
What socket implementation are you using to send and receive?
Is nagling (Nagle's algorithm) occurring on send? Try setting TCP_NODELAY if you're using TCP.
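For reference, a minimal sketch of setting TCP_NODELAY on a plain BSD socket (ACE provides its own wrappers for socket options; the helper name disableNagle here is just for illustration):

#include <cstdio>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Sketch: disable Nagle's algorithm on an already-connected TCP socket so
// small packets are sent immediately instead of being coalesced.
bool disableNagle(int sock)
{
    int flag = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<const char *>(&flag), sizeof(flag)) != 0)
    {
        perror("setsockopt(TCP_NODELAY)");
        return false;
    }
    return true;
}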
Winterdyne Solutions Ltd is recruiting - this thread for details!

This topic is closed to new replies.
