
Are sockets full duplex?

Started by July 14, 2003 10:42 PM
30 comments, last by WebsiteWill 21 years, 6 months ago
quote:
Original post by WebsiteWill
ThreadA: This thread is passed a function that does nothing but continuously read the socket for an incoming packet. When a packet arrives, this thread/function reads in the packet and stores it in a serverside data structure.

ThreadB: This thread pops a packet from the top of my serverside packet holder and processes it. By processing it I mean that it will first determine what kind of packet it is and then send that packet on to the actual thread that handles that kind of packet.

ThreadC: This thread executes a function that simply waits for a packet to be sent. When received, the thread will send the packet through the socket to the other side. It does nothing more than wait for a packet and then send it when it gets one.

ThreadsD-XXX: These will handle packet types. Not individual packet types but more like one thread to handle all chat packets. One thread to handle all movement_request packets. One thread to handle all inventory_specific packets.
This gets broken down however far I feel necessary but probably not very deep.


In one thread you recv(), lock queue, enqueue request, unlock queue. In another thread you lock queue, pop data off the queue, unlock queue, check the type, then lock another queue, queue data, unlock the second queue, and let it sit until sometime later for processing in yet another thread? Keep in mind that there will be context switches, and periods of time when your threads are waiting for their time slice to run. So this doesn't necessarily happen in an immediate manner, even with multiple CPUs. I think this is overcomplicated to the extent that it will introduce more overhead than anything else.
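To make the cost concrete, here is a minimal sketch of that handoff chain, assuming C++ with std::mutex as a stand-in for whatever locking primitive would actually be used; the Packet type, queue names, and the CHAT type value are hypothetical. Every packet pays for three lock/unlock cycles before any game logic ever touches it.

```cpp
// Minimal sketch of the handoff chain described above (hypothetical names).
// Each packet crosses two locked queues before any game logic sees it.
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

struct Packet { std::uint16_t type; std::vector<std::uint8_t> payload; };

std::mutex inboundLock, chatLock;
std::queue<Packet> inboundQueue;   // filled by the recv() thread (ThreadA)
std::queue<Packet> chatQueue;      // one of the per-type queues (ThreadsD-XXX)

// ThreadA: recv(), then lock/enqueue/unlock.
void onPacketReceived(Packet p) {
    std::lock_guard<std::mutex> g(inboundLock);     // lock #1
    inboundQueue.push(std::move(p));
}

// ThreadB: lock/pop/unlock, inspect the type, then lock/enqueue/unlock again.
void dispatchOnePacket() {
    Packet p;
    {
        std::lock_guard<std::mutex> g(inboundLock); // lock #2
        if (inboundQueue.empty()) return;
        p = std::move(inboundQueue.front());
        inboundQueue.pop();
    }
    if (p.type == /* CHAT, hypothetical */ 1) {
        std::lock_guard<std::mutex> g(chatLock);    // lock #3
        chatQueue.push(std::move(p));               // still not processed yet
    }
}
```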

ThreadsD-XXX: If you insist on using separate threads for handling requests, I'd suggest just using a pool of generic worker threads rather than having a specific thread for each request type. That way all of the threads in the pool can spread the load a little better. Example: imagine sitting in town where there is no combat, only tons of chat and tons of position updates. The thread handling combat requests is just sitting there while the chat thread appears to be lagging, and the location updates are making the client 'pop' all over.
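For illustration, a minimal sketch of such a generic worker pool, assuming C++11 std::thread and a single shared request queue; the Request type and the handler names are hypothetical placeholders. Because every worker pulls from the same queue, a burst of chat traffic simply means more of the pool spends its time on chat.

```cpp
// Sketch of a generic worker pool: all request types share one queue,
// so idle workers absorb whatever load exists at the moment.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Request { int type; /* payload ... */ };

std::mutex qLock;
std::condition_variable qCond;
std::queue<Request> requests;
bool shuttingDown = false;

void workerLoop() {
    for (;;) {
        Request r;
        {
            std::unique_lock<std::mutex> lk(qLock);
            qCond.wait(lk, [] { return shuttingDown || !requests.empty(); });
            if (shuttingDown && requests.empty()) return;
            r = requests.front();
            requests.pop();
        }
        switch (r.type) {           // any worker can handle any request type
            case 0: /* handleChat(r)     -- hypothetical */ break;
            case 1: /* handleMovement(r) -- hypothetical */ break;
            default: /* handleOther(r)   -- hypothetical */ break;
        }
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(workerLoop);
    // ... producer threads would push Requests and call qCond.notify_one() ...
    { std::lock_guard<std::mutex> lk(qLock); shuttingDown = true; }
    qCond.notify_all();
    for (std::thread& t : pool) t.join();
}
```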

quote:
1) I am planning for quad processor machines minimum for the world servers so 4 threads minimum to utilize all processors.

Reality check: that is cost-prohibitive even for companies like SOE and Blizzard. They use a larger number of budget servers (EQ zone servers use single 500 MHz systems, IIRC). You might want ONE high-end server for your production database server, but you will not get funding for quad processors across the whole backend.

quote:
Please keep poking holes...
Please don't take this as a flame, it's just more of what you asked for... Good luck. And if you are a self-funding developer that just inherited a billion dollars or something crazy, then disregard my comments about quad systems.
Heh,

As you can see, Will, there are many opinions and models to choose from. The trick is to do the work and see for yourself what works for your particular situation. This includes hardware capacity planning and purchasing decisions, in addition to application code.


.zfod
A little uninformed on my end. I was under the assumption that games like EQ or DAoC were actually running multi-processor servers for their worlds. But heck, if a single-processor machine can do EQ or DAoC, then they will work just fine for me.
Considering that EQ came out in, what, 1999, when processors were in the PII 450 range, and DAoC is a 2001(?) creation, so probably around 1 GHz, this gives me good hope. Considering no server purchases/leases will be made for years, if ever, I'll be dealing with quite nice computers. But realistically, if I can achieve even the quality of DAoC at some point in the future then I am pleased.

Everything I am working on now is pretty much machine independent. The only dependence will be that, obviously, a faster machine will work even better. I'll have to dig around and see if I can find any actual details about recent MMORPGs and the server hardware they are using. This would be really helpful. It does indeed make the design a lot simpler if working for a single-processor machine. Thanks fingh, I simply wasn't aware of that being the case. I might be reworking my design in totality. No harm done, that's why they call it a "design phase".

Thanks,
Webby
I know of several that use single-threaded servers. You just can't beat the price/performance of single-CPU, rackable Intel Linux boxes (or XxxBSD, if that's your fantasy).

Locking overhead would kill most MMOG servers, because the number of objects interacting can be pretty dramatic.
Heh,

Like anything else it depends on how your application is designed.

To be general, in most cases a well-written single-threaded application will outdo a threaded application (being very general here).

However, like anything that is worth a shit, if your application and hardware are designed to exploit threads, it can far exceed a single-threaded application. It all depends.

Just because EQ or DAOC does 'X' doesn't mean anything. Your application and architecture can have very different needs than the aforementioned products. Also, if you're looking to do new things, I wouldn't just accept a mentioned paradigm on a forum for hobbyists, nor would I accept a company like SoE or Mythic's way of doing things as gospel. Don't assume all of the best and brightest people in the world are designing MMOs, because they aren't.

Give people the benefit of the doubt, but don't act blindly.


.zfod
There is an interesting article to help you make a decision about what I/O strategy you can or should use in your case:

http://tangentsoft.net/wskfaq/articles/io-strategies.html

-cb

PS: Hi Todd, there should be a 'list of interesting links' in the forum FAQ, and this one should be on that list. (I assume you have time on your hands now that the book is out to the publisher...)

Got any websites like that aimed at Unix? According to that site, my options are limited to blocking sockets, non-blocking sockets and threads. Blocking sockets without threads is completely out of the question. Non-blocking sockets are a possibility. Threads according to that site and the book "Unix Network Programming" are the best way to go.
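For reference, this is roughly what the non-blocking option looks like on Unix: a sketch (not taken from any post here) that marks a UDP socket O_NONBLOCK and checks it once per pass through the main loop instead of blocking in recvfrom().

```cpp
// Minimal sketch of the non-blocking option on Unix: the socket is marked
// O_NONBLOCK, so recvfrom() returns immediately when no datagram is waiting.
#include <cerrno>
#include <cstdio>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int makeNonBlockingUdpSocket(unsigned short port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);  // non-blocking
    return fd;
}

// Called once per pass through the main loop; returns immediately if no data.
void pollOnce(int fd) {
    char buf[1500];
    sockaddr_in from{};
    socklen_t fromLen = sizeof(from);
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&from), &fromLen);
    if (n < 0) {
        if (errno == EWOULDBLOCK || errno == EAGAIN) return;  // nothing waiting
        perror("recvfrom");
        return;
    }
    // ... hand the n-byte datagram to the packet queue here ...
}
```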

I do see your issues with my design as I presented it: one thread locks the queue to put in a packet while another thread has to wait on that same lock before it can touch the queue itself.

But since this is a queue, FIFO, would it be possible to have one thread push a packet onto the queue without having to lock the entire queue, and have the other side read from the queue without locking it either? I could use a simple counter, like if (queue.size > 1) then pop. That way I won't pop the queue unless I know there are at least 2 entries, so the two threads won't be using the same memory at the same time.
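The counter idea only works if the two threads are guaranteed to see each other's updates in order, which plain variables do not promise. The usual safe form of this is a single-producer/single-consumer ring buffer with atomic indices; the sketch below uses std::atomic, which is C++11 and therefore a modern restatement of the idea rather than something available at the time of this thread.

```cpp
// Sketch of the "no lock" idea done safely: a single-producer/single-consumer
// ring buffer. Plain ints are not enough; the indices must be atomic so the
// two threads agree on which slots have been published.
#include <array>
#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class SpscQueue {
    std::array<T, N> slots;
    std::atomic<std::size_t> head{0};  // advanced only by the consumer
    std::atomic<std::size_t> tail{0};  // advanced only by the producer
public:
    bool push(const T& v) {            // called from the recvfrom thread only
        std::size_t t = tail.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % N;
        if (next == head.load(std::memory_order_acquire)) return false;  // full
        slots[t] = v;
        tail.store(next, std::memory_order_release);  // publish the element
        return true;
    }
    bool pop(T& out) {                 // called from the processing thread only
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire)) return false;     // empty
        out = slots[h];
        head.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};
```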

The sending and handling threads don't use the queue at all. Only the accessing thread does, and it sends the packet directly to the necessary thread for processing, so no locking is needed there.

The sending thread simply receives the packet to send and sends it. I'm using a thread here because the call to send might not happen immediately. I probably won't need a thread for this in the end, but I'm still working with it.

All in all, the number of threads will probably be fewer than 10, and each will encompass a specific task that in general won't affect the flow of the rest of the program. This is the method that I see as providing the smoothest I/O possible, which is what the servers will need.

One design alternative would be to fold the packet handling into the main processing thread.
So I'd have one thread simply blocking on recvfrom() until it gets a packet. Once a packet is received, it adds it to the back of the packet queue and goes back to blocking on recvfrom().
The main processing thread can be a single thread that does all game logic:
ComputeAI
DetermineCollisionsAsNecessary
ReadInputFromQueue
ProcessInputPacket
DoOutput
etc

So that would narrow it down to three threads: one receiving, one processing everything the way a single-threaded program would, and one sending.
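A rough sketch of that three-thread layout, with mutex-protected queues for brevity (the lock-free queue sketched earlier could replace them); the function names mirror the steps listed above and are hypothetical.

```cpp
// Sketch of the three-thread layout: a blocking receiver, one game-logic
// loop that owns all game state, and a sender that drains the outbound queue.
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Packet { std::vector<std::uint8_t> bytes; };

std::mutex inLock, outLock;
std::queue<Packet> inbound, outbound;

void receiverThread() {                 // blocks in recvfrom(), then queues
    for (;;) {
        Packet p;
        // p.bytes = ... result of a blocking recvfrom() call ...
        std::lock_guard<std::mutex> g(inLock);
        inbound.push(std::move(p));
    }
}

void senderThread() {                   // drains outbound and calls sendto()
    for (;;) {
        std::queue<Packet> batch;
        {
            std::lock_guard<std::mutex> g(outLock);
            batch.swap(outbound);       // take everything queued so far
        }
        while (!batch.empty()) {
            // sendto(sock, batch.front().bytes.data(), batch.front().bytes.size(), ...);
            batch.pop();
        }
        // a real server would block on a semaphore here instead of spinning
    }
}

void mainLoop() {                       // single thread does all game logic
    for (;;) {
        // ComputeAI();
        // DetermineCollisionsAsNecessary();
        {   // ReadInputFromQueue + ProcessInputPacket
            std::lock_guard<std::mutex> g(inLock);
            while (!inbound.empty()) { /* ProcessInputPacket(inbound.front()); */ inbound.pop(); }
        }
        {   // DoOutput
            std::lock_guard<std::mutex> g(outLock);
            // outbound.push(...);
        }
    }
}

int main() {
    std::thread r(receiverThread), s(senderThread);
    mainLoop();                         // runs forever; r and s never join
}
```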

This might be a more efficient method, as I can certainly see hosting on dual-processor machines. Quads and octets may be out of my range, but surely not duals, hehe. But even on a single-processor machine, this model would work pretty much the same, because if my theory about the queue is correct, there will be no locking to be done at all on this one.

Thoughts?
Very helpful website BTW.

Webby
> I can use a simple counter

You can protect the 'send' queue with a pair of semaphores. One counts the number of outstanding messages the 'send' thread needs to process, making it block when there is nothing to process. The other counts the number of bytes queued up and blocks any thread that queues a request exceeding a preset limit (say 4 MB); otherwise you risk a queue overrun during a network bolus, where things accumulate to the point of chewing up swap space and making your server fall over.
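A minimal sketch of that bounded 'send' queue. The message count is a real POSIX counting semaphore, so the send thread blocks at zero; the byte budget is enforced here with a mutex and condition variable rather than a second semaphore, since a POSIX semaphore can only be decremented one unit at a time. Class and member names are made up for the example.

```cpp
// Bounded 'send' queue: sem_t counts queued messages, a condition variable
// enforces the ~4 MB byte budget on producers.
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <queue>
#include <semaphore.h>
#include <vector>

class SendQueue {
    static constexpr std::size_t kByteLimit = 4u * 1024 * 1024;  // ~4 MB cap
    sem_t messages;                       // counts queued messages
    std::mutex lock;
    std::condition_variable spaceFreed;
    std::queue<std::vector<std::uint8_t>> q;
    std::size_t bytesQueued = 0;
public:
    SendQueue()  { sem_init(&messages, 0, 0); }
    ~SendQueue() { sem_destroy(&messages); }

    // Called by game-logic threads; blocks while the byte budget is exhausted.
    // (A single message larger than the limit would block forever; a real
    // queue would reject such a message outright.)
    void push(std::vector<std::uint8_t> msg) {
        std::unique_lock<std::mutex> lk(lock);
        spaceFreed.wait(lk, [&] { return bytesQueued + msg.size() <= kByteLimit; });
        bytesQueued += msg.size();
        q.push(std::move(msg));
        lk.unlock();
        sem_post(&messages);              // wake the send thread
    }

    // Called by the send thread; blocks while there is nothing to send.
    std::vector<std::uint8_t> pop() {
        sem_wait(&messages);
        std::unique_lock<std::mutex> lk(lock);
        std::vector<std::uint8_t> msg = std::move(q.front());
        q.pop();
        bytesQueued -= msg.size();
        lk.unlock();
        spaceFreed.notify_all();          // producers may now have room
        return msg;
    }
};
```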

-cb
quote:
Just because EQ or DAOC does 'X' doesn't mean anything. Your application and architecture can have very different needs than the aforementioned products.

Well, I mostly agree with you. There's one thing that is missing from the above discussion, and that is the question of design validation. Those of us who have played either of the hit games mentioned know that, despite the flaws the games have in actual gameplay, technically THEY WORK, and both of them work pretty darn well. Experience is a wonderful thing; never discount it. Nor would I blanketly reject an idea that is 'new' (see below).

quote:
Also, if you're looking to do new things, I wouldn't just accept a mentioned paradigm on a forum for hobbyists, nor would I accept a company like SoE or Mythic's way of doing things as gospel. Don't assume all of the best and brightest people in the world are designing MMOs, because they aren't.


zfod, keep in mind that although this forum consists mostly of 'hobbyists', there are professional developers (games and otherwise) that frequently take part in discussions here, including people from the aforementioned companies. Does that mean their word is gospel? Uh, no. But usually getting something to work well includes going through several iterations of things that -don't- work well (which brings us back to never discounting the experiences of those that have gone before us).

quote:
I'll have to dig around and see if I can find any actual details about recent MMORPGs and the server hardware they are using.

Webby, you probably won't find much info about the technical specs of commercial MMOs (maybe some middleware, though!). The only thing I've seen published is the fact that DAoC uses dual-CPU servers... good luck at any rate.
cb,

You mention a queue overrun. Do you mean what would happen if, say, messages were coming into the queue faster than they are being processed?

If this is the case, then I can set some limits on it. Maybe leave the queue dynamic, in the sense that it uses STL and only takes up as much memory as necessary, but limit the upper size of the queue, so that if, say, 300 (random number) packets are on the queue, the recvfrom thread would not be allowed to add another. That could get difficult but could work. Semaphores... more stuff to hunt up and read about.
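As a sketch of that cap: the recvfrom thread checks the queue size under the lock and simply drops the datagram when the limit is hit (something UDP already tolerates, since datagrams can be lost anyway); blocking the receiver until space frees up is the other option. The names are hypothetical and the 300 limit is just the number from the post.

```cpp
// Fixed cap on the inbound packet queue: refuse to queue past kMaxQueued.
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

static const std::size_t kMaxQueued = 300;   // the "300 (random number)" limit

std::mutex packetLock;
std::queue<std::vector<std::uint8_t>> packetQueue;

// Returns false (and discards the packet) when the queue is already full.
bool tryEnqueue(std::vector<std::uint8_t> datagram) {
    std::lock_guard<std::mutex> g(packetLock);
    if (packetQueue.size() >= kMaxQueued) return false;  // overrun: drop it
    packetQueue.push(std::move(datagram));
    return true;
}
```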

I'm just not seeing many ways to implement this so that it is
1) scalable to servers with varying #s of processors.
2) fast enough to accommodate XXX clients (probably on the order of a few hundred).

Again, blocking is out unless the blocking function call is isolated to a thread so that other things can still occur.

Asynchronous I/O in Unix is not very good according to "Unix Network Programming", leaving me with non-blocking and threads.
Plain non-blocking in a program with only one process would not scale at all, even if I could afford better servers to run it on.
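For contrast, this is roughly what that single-process, plain non-blocking model looks like: one poll() loop servicing every socket from one thread. It is only a sketch, assuming per-client TCP sockets, and it illustrates the scaling complaint, since one loop can never use more than one CPU.

```cpp
// Single-process, non-blocking model: one poll() loop over all client sockets.
// It works, but it can never spread across multiple CPUs.
#include <cstdio>
#include <poll.h>
#include <unistd.h>
#include <vector>

void eventLoop(const std::vector<int>& clientFds) {
    std::vector<pollfd> fds;
    for (int fd : clientFds) fds.push_back(pollfd{fd, POLLIN, 0});

    for (;;) {
        int ready = poll(fds.data(), fds.size(), 50 /* ms timeout */);
        if (ready < 0) { perror("poll"); return; }

        for (pollfd& p : fds) {
            if (p.revents & POLLIN) {
                char buf[1500];
                ssize_t n = read(p.fd, buf, sizeof(buf));
                if (n > 0) { /* queue the n bytes for the game logic */ }
            }
        }
        // ... run one slice of game logic here, then poll again ...
    }
}
```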

That limits me to threads, so now it's just a matter of coming up with the best design I can. I'll keep spitting out design options with threads for you all to destroy. I don't mind the criticism one bit. Sooner or later, I'll hit something that works well enough and voila, gamedev will have some nice networking info to view.

Going to take a while now to work on the design. Will report back when I come up with something different, or at least with reasons to defend the old way.

Webby

