Quote:
Original post by hplus0603
If you're reading from a single socket, using select() just means you make a useless kernel call. It's more efficient to just set the socket to non-blocking mode, read from the socket until it returns no more data, and then go through your regular game loop.
Agreed. If you're really concerned about optimizing your main game loop, you need to evaluate and weigh every single kernel transition your loop causes.
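For reference, that approach looks roughly like this with Winsock on a UDP socket (a rough sketch only; HandlePacket is a hypothetical stand-in for whatever your packet dispatch looks like):

#include <winsock2.h>

void HandlePacket(const char* data, int len, const sockaddr_in& from); // hypothetical, lives elsewhere

// Call once after creating the socket: switch it to non-blocking mode.
void SetNonBlocking(SOCKET s)
{
    u_long enabled = 1;
    ioctlsocket(s, FIONBIO, &enabled);
}

// Call once per game tick: drain every pending datagram, then move on.
void DrainSocket(SOCKET s)
{
    char buffer[1500];
    for (;;)
    {
        sockaddr_in from;
        int fromLen = sizeof(from);
        int bytes = recvfrom(s, buffer, sizeof(buffer), 0, (sockaddr*)&from, &fromLen);
        if (bytes == SOCKET_ERROR)
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
            {
                // a real error -- log or handle it here
            }
            break; // either way, nothing more to read this tick
        }
        HandlePacket(buffer, bytes, from);
    }
}

No select(), no blocking: you pay one recvfrom() per pending packet plus one that returns WSAEWOULDBLOCK, and then you're back in the game loop.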
Quote:
Non-threaded game loops mean that you don't have to worry about locking your world state, which is a performance improvement. However, on a multi-core CPU, you'll probably do OK by doing packet receipt in one thread and putting the data on a queue for the main world-processing thread to pick up; you only need to lock that queue once per game tick, which is a negligible cost. You can even double-buffer the queue if you're so inclined, although that may incur a latency penalty.
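The way I read that suggestion is something like the following (a minimal sketch of my own, not hplus0603's code; the queue just holds raw packet payloads and the names are made up):

#include <windows.h>
#include <vector>
#include <string>

struct PacketQueue
{
    CRITICAL_SECTION lock;
    std::vector<std::string> incoming; // packets waiting for the game thread

    PacketQueue()  { InitializeCriticalSection(&lock); }
    ~PacketQueue() { DeleteCriticalSection(&lock); }

    // Network thread: append each received packet.
    void Push(const char* data, int len)
    {
        EnterCriticalSection(&lock);
        incoming.push_back(std::string(data, len));
        LeaveCriticalSection(&lock);
    }

    // Game thread, once per tick: one lock, one swap, then process 'out' unlocked.
    void Drain(std::vector<std::string>& out)
    {
        out.clear();
        EnterCriticalSection(&lock);
        incoming.swap(out); // the double-buffering: O(1) swap, no copying under the lock
        LeaveCriticalSection(&lock);
    }
};

The single swap per tick is the whole point: the world thread never holds the lock while it actually processes the packets.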
I do most of my programming on a dual Xeon, and all of it in Windows. When it came time to write my octree code and run through all the entities to determine which entities can "see" the others around them, I found that doing this across multiple threads was both easy and yielded phenomenal performance gains. In my simple, unoptimized test octree app I could run this computation on 500,000 entities in about 750 ms using both CPUs (the speed depends on the density of the entities and other factors, of course).
It ran with two threads working side by side, each calling InterlockedIncrement() on a shared index into the entity array to claim the next entity. There is one SWMR (single writer, multiple readers) lock class I wrote that guards the array as a whole, but that lock is only acquired and released at the beginning and end of each walk.
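Stripped down, the shared-index pattern looks roughly like this (just the skeleton, not my actual octree code; Entity and ComputeVisibility are placeholders):

#include <windows.h>

struct Entity;                      // defined elsewhere
void ComputeVisibility(Entity* e);  // placeholder for the per-entity visibility query

static Entity**      g_entities  = 0;   // array shared by both threads
static LONG          g_count     = 0;   // number of entities in the array
static volatile LONG g_nextIndex = -1;  // shared cursor both threads bump

// Each worker thread: atomically claim the next index, do the work, repeat.
DWORD WINAPI VisibilityWorker(LPVOID)
{
    for (;;)
    {
        LONG i = InterlockedIncrement(&g_nextIndex); // returns the incremented value
        if (i >= g_count)
            break;
        ComputeVisibility(g_entities[i]);
    }
    return 0;
}

// Main thread: take the SWMR lock around the whole walk (omitted here),
// launch one worker per CPU, and wait for both to finish.
void RunVisibilityPass()
{
    g_nextIndex = -1;
    HANDLE threads[2];
    threads[0] = CreateThread(0, 0, VisibilityWorker, 0, 0, 0);
    threads[1] = CreateThread(0, 0, VisibilityWorker, 0, 0, 0);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    CloseHandle(threads[0]);
    CloseHandle(threads[1]);
}

Since the workers never write any shared state except that one counter, the only contention is on the interlocked increment itself.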
Y'all can pry multithreading from my cold, dead fingers! Until then, I ain't givin' it up!! :)
Robert