Hi everyone,
I'm building my MMORPG from scratch, and it's going pretty well, I would say. (I will surely open a development blog soon.)
My network protocol is built on top of TCP, and a couple of days ago I implemented a way to spawn other players in the world, each of which opens its own connection to the server. (For now I'm running both client and server on localhost.)
Everything is fine (indeed, I'm surprised to see 300 players handled easily under these conditions), except for a subtle bug I've been hitting since yesterday.
My world is subdivided into islands, and when a player changes island, the server sends the player information about the new island so that the client can open a new connection to the other server. (Yes, I'm running two servers on my PC.)
When the number of players increases, sometimes the client doesn't receive that "critical" message, and so the connection to the other island never happens, even though the server has sent it (or at least the call to send() returned the right number of bytes).
So my question is: could it be that even if the call to send() returns the right number of bytes, in reality the clients don't receive them because their buffer is too full? That would be explained by the fact that everything is running locally, so I guess both client and server are sharing the same underlying TCP receive buffers.
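To make the question concrete, this is roughly the kind of send path I mean (a plain BSD sockets sketch; the length prefix and helper names are just for illustration, not my actual code):

```c
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* send() may accept fewer bytes than requested, so keep calling it
 * until the whole buffer has been handed to the kernel. */
static int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n < 0)
            return -1;           /* real error, check errno */
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}

/* One "critical" message as I picture it: a 4-byte length prefix followed
 * by the payload, so the receiver can tell where each message ends in the
 * TCP stream. */
static int send_critical(int fd, const void *payload, uint32_t len)
{
    uint32_t netlen = htonl(len);   /* length in network byte order */
    if (send_all(fd, &netlen, sizeof netlen) < 0)
        return -1;
    return send_all(fd, payload, len);
}
```

The loop is only there because, as far as I understand, send() may accept fewer bytes than requested and only reports what was copied into the kernel's send buffer.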
Should I implement an "acknowledge" for those critical packets, so that until the server receives the acknowledgement it keeps sending the critical packet again and again?
That would sound a bit odd, because that's basically the principle behind TCP, right? But I'm pretty sure the bug is not in the game logic; it's just that the server's call to send() returns a "fake" value.
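Something like the sketch below is what I have in mind for the acknowledge idea (the sequence numbers, timeout value, and helper names are all placeholders, not real code from my server):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>

/* Bookkeeping for one critical packet that still needs an ACK. */
typedef struct {
    uint32_t seq;        /* sequence number the client echoes back in its ACK */
    double   last_sent;  /* timestamp (seconds) of the last transmission      */
    bool     acked;      /* set once the matching ACK arrives                 */
} critical_packet;

#define RETRY_INTERVAL 0.5   /* seconds between retries, placeholder value */

/* Called every server tick: resend the payload until it has been acked. */
static void retry_critical(int fd, critical_packet *pkt,
                           const void *payload, size_t len, double now)
{
    if (pkt->acked || now - pkt->last_sent < RETRY_INTERVAL)
        return;
    send(fd, payload, len, 0);   /* simplified; a real send should loop on partial sends */
    pkt->last_sent = now;
}

/* Called when the client's ACK for a given sequence number is parsed. */
static void on_ack(critical_packet *pkt, uint32_t acked_seq)
{
    if (acked_seq == pkt->seq)
        pkt->acked = true;
}
```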
Maybe in a real-world scenario this would never happen?
Thank you all, Leonardo.