quote:but it didn''t mention that, what if I return from the message callback with something else, something like E_FAIL. I was wondering, if on receving the corrupted messages, I simply return with E_FAIL and in case of guarenteed messeging, will the message be sent again (by the underlying layers)??? Any help/suggestion/link/reference in this matter will be highly appriciated & thanx in advance. Regards, Ejaz.
Return from the message callback function with DPN_OK.
What if I return something else then DPN_OK from the message callback of DirectPlay?
Hi guys,
I''ve develop my lib based on DirectPlay and its working fine, but I''m facing a problem while testing it.
If I send data very fast, then some of the data messages got corrupted.
I understand that there must be some logical problem with that and I''m working on it, but I would like to ask one question.
In the SDK it is mentioned that
> If I send data very fast, then some of the data messages got corrupted.
How do you send it? Using DPNSEND_GUARANTEED or DPNSEND_SYNC or something else? My guess is that you are overflowing the internal buffers while sending the data asynchronously. The return error code from Send() should give you a hint as to what happened to your outgoing message.
On the receiving end, make sure you protect common data areas with a critical section (lock). Otherwise, more worker threads will be recruited by DirectPlay and more than one thread will end up modifying the same data areas, giving rise to apparent receiver-end data corruption.
-cb
Hey cbenoi1,
Long time no see (of course it's my fault).
Anyway, as usual your guess appears to be correct. I'm running some more test cases to verify exactly where the culprit is lying.
One thing I would like to ask (since I've been out of touch with DP for quite a while): in the case of non-guaranteed messages, will the return value also tell what happened to my message? (Sounds stupid; I'm trying to find out myself, but if you do know, please pass on the details.)
By the way, do you know any good book about DirectPlay?
> in the case of non-guaranteed messages, will the return value
> also tell what happened to my message?
No. Two scenarios here. First, your message is queued and eventually sent without any guarantee it will ever reach its destination; it may or may not arrive depending on network conditions. Second, if there are too many outgoing messages already queued, DirectPlay will silently drop it without ever sending it; that behaviour is not documented, but it is based on my observations of DP8 (DirectX 9.0a).
That undocumented behaviour doesn't explain the corruption you are observing. Here are a few areas you may want to check more systematically. On the sending end, either you are sending with DPNSEND_NOCOPY without managing the buffers coherently, or you are sending a slew of non-guaranteed messages and missing a few of them on the receiver end. On the server side, either the code is not thread-safe and threads are colliding in critical data areas, or you are receiving the packets in a different order than you sent them and failing to reassemble them correctly.
... or you have a systemic corruption you can only observe through the DirectPlay buffers (i.e. it has nothing to do with your DirectPlay integration).
> do you know any good book about DirectPlay?
None that are worth the asking price. Most of the good bits & pieces come from either MSDN or GDC conference briefings, or from talking to other game developers. It is unlikely documentation will be forthcoming in the future, as XNA is replacing much of what we know now as "DirectX".
Hope this helps.
-cb
Dear cbenoi1,
I followed your guidelines and got the following results:
* Non-guaranteed messaging: no problems (cool)
* Guaranteed messaging + DPNSEND_SYNC: no problems
* Guaranteed messaging without DPNSEND_SYNC: here comes the problem.
The problem is, when these messages are sent in bulk (as in a very large loop), my memory usage shoots up (like zzzzzZZZZZZZZZZ :D).
The sending process is much faster than the receiving end, and the DP buffer overflows (that's what I've concluded).
I haven't implemented throttling yet; I think it's time to put that into action to manage guaranteed async messaging.
Besides that, I would like to have your suggestions on another idea.
Is it possible to use a P2P solution over the internet... err... I know it sounds stupid: when you have a client/server architecture for this purpose, why use P2P?
Well, actually I want to develop something more robust, where the game state is managed at the server end (which the server of course sends to the clients), but for exchanging minor details, or messages that need to be broadcast, peers send them directly to each other rather than sending them first to the server and so on.
It's something of a combination of both P2P & CS. So, what are your suggestions?
By the way, thanks a lot for the hints; they really saved a lot of time & effort (& frustration too).
Thanks & Regards,
Ejaz.
> * Guarenteed Messaging without DPNSEND_SYNC:
> Now, there comes the problem.
How many MB are you trying to send?
> I haven''t implemented throtteling yet
DP does that.
> I would like to have your suggestions on another idea
Try increasing the number of threads on both the server and client sides ( IDirectPlay8ThreadPool::SetThreadCount() ). Very heavy loads require something like 40 threads.
> Its something the combination of both P2P & CS.
P2P requires an exact duplicate of the game simulation on each peer along with precise command timings. Check this article for more info:
http://www.gamasutra.com/features/20010322/terrano_pfv.htm
-cb
Dear cbenoi1,
Sorry for the late response; it was the weekend.
I'm sending about 4K (at max) of data, but I'm generating the string randomly, so the data varies up to 4K.
Currently, I'm simulating without multiple threads; in the main message loop, I'm sending messages like:

while( !g_bExitApp )
{
    if( PeekMessage( &msg, 0, NULL, NULL, PM_REMOVE ) )
    {
        if( msg.message == WM_QUIT )
            break;
        //if( !g_cDxSmokeWin.GetSmokeMgrDlg()->IsMessageUsed( &msg ) )
        {
            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }
    }
    else
    {
        // Be friendly to other apps and return some cpu cycles.
        Sleep( 1000/70 );
        QueryPerformanceCounter( (LARGE_INTEGER*)&end );
        //if( (end - start) >= update_ticks_pc )
        {
            s_uFrameCount++;
            uint32 uTick = GetTickCount();
            static uint32 s_uPrevTick = uTick;
            ProcessSceneTick( uTick );
            start = end;
        }
    }
}

and in ProcessSceneTick(...), I'm sending the messages. One thing that I spotted just a few minutes back while debugging... I'm sending messages like:

hr = m_pDP->SendTo( nPlayerDPNID,                    // dpnid
                    &pBufferDesc,                    // pBufferDesc
                    1,                               // cBufferDesc
                    0,                               // dwTimeOut
                    NULL,                            // pvAsyncContext
                    nSyncMsg ? NULL : &hAsyncHandle, // pvAsyncHandle
                    dwMsgFlag );                     // dwFlags

Now, the docs state about DPNHANDLE (hAsyncHandle): "This parameter must be set to NULL if you set the DPNSEND_SYNC flag in dwFlags."
If I send guaranteed messages with DPNSEND_SYNC, it goes smoothly, but when I send without it, the memory consumption starts to increase rapidly.
I'm trying to figure out where the problem lies. So far, I haven't used hAsyncHandle to cancel an async operation. The only operation I have used cancellation for is the enumeration of hosts, if required.
> I''m sending about 4k (at max) data {...}
Hmmmmmm..... Assuming a 50ms(*) tick time and 4K per tick, you are trying to shove ~80Kbytes/s, or ~800Kbps, into the Ethernet pipe. That's a tad close to the theoretical 1Mbps limit of most ADSL modems. And those are guaranteed packets, btw, so there would be some resends given uncertain network conditions.
No wonder data keeps accumulating in buffers.
-cb
(*) Closer to 15ms or 2.8Mbps given the Sleep() in your code. It''s actually worse if the tick counter is faster than 15ms.
Dear cbenoi1,
Thanks for the suggestions. I followed your guidelines and found that you are correct. I tested my code with different values for the sleep, like Sleep( 1500 / 70 ), Sleep( 1000 / 70 ) & Sleep( 500 / 70 ).
At Sleep( 1500 / 70 ) I found that you are right. More data is being sent over the Ethernet, and especially on the LAN, if conditions vary (which they do most of the time, especially in my case, a lot), messages start to pile up on one another.
As you suggested, I'm trying to calculate the workload and an appropriate value for the tick.
Any more suggestions will be more than welcome. Thanks for the help anyway.
Regards,
Ejaz.
One more question ...... :)
Is it possible to increase the size of the DirectPlay buffer? I mean, before being sent down the network pipe, a message first goes to DirectPlay, and for a guaranteed message it remains there until confirmation is received.
So, if I continuously send guaranteed messages of large size and increase network traffic on the LAN (by artificial means, like copying large files from here to there), then my messages have to wait longer in the buffer and eventually get corrupted.
I didn't find anything on this; is it possible to somehow tell DirectPlay to use a larger buffer than normal, so that my guaranteed messages can stay there for a while? (Sounds silly, doesn't it? :) )
Or how about, for larger messages, should I use some compression to reduce the network traffic? Of course this will increase processing as well.
Any comments will be highly appreciated & thanks in advance.
Regards,
Ejaz.
This topic is closed to new replies.