Original post by xJOKERx • 3,000GB Monthly Transfer • 100Mbps Non-Capped Network Connection
What does the non-capped network connection mean? It says 3TB monthly transfer above. Isn't that a "cap"? Or does that refer to transfer between the server and the server owner (for uploading content and such)?
I suspect the "cap" referred to is the maximum bandwidth - that is, your connection will not be capped at, say, 80Mbps; you can use the full 100Mbps if the need arises. The 3TB/month limit may also not be a hard cap, but simply the amount of transfer included in the base plan, after which you are charged extra for exceeding it.
As best I remember, a server unit hosts a system, or more likely several, and they all plug into the database component(s). IIRC one of the early upgrades was to the interconnectivity between the servers and the database.
EDIT: Thinking about it, this is probably one of the earliest decisions you might want to consider. A large number of players with an uncertain load per server is probably easier to plan for if the database component is separated from each physical server.
Does anyone know what servers Halo for PC uses? Or how many it would use? When you download Halo, GameSpy Arcade downloads with it - is that what hosts the game online? Another feature I like is creating dedicated servers from your own computer: it will automatically create the server and run it until you turn your PC off, of course, but I don't know how hard that would be to implement in a server.
I've been working for the past couple of months at a small indie studio developing an MMORPG called Star Sonata. I wasn't part of any of the initial design and coding; I joined the team mostly to do post-release maintenance, upgrades, etc. As someone said, as an indie developer, just hope you'll have some users to begin with when you start asking players to pay. At first, we barely had enough users to pay for the server costs. Now we're actually doing pretty well as far as user-base growth goes, but a growing player base is a problem in itself, which brings me to my advice: plan ahead when you design your server code.
I've mostly been fighting a constant battle to keep our server chugging along at a decent speed so that the game experience never gets too bad. Our lag problems have never been network related - our net code is actually pretty good - it's always been the server crumbling under its own weight. We went from about 200-250 separate solar systems, which the server could barely support, to 800 solar systems with a decent player experience. A lot of the problems were lack of foresight, like loops going through all the players until they found the one they were looking for. Another went through all the items, in all the ships, of all the players who had ever logged on at some point. Some were similar in cause but MUCH more subtle, and they were hard to find when you're air-dropped into a 120k-line code base.
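To make the "loops going through all the players" problem concrete, here is a hypothetical sketch (not Star Sonata's actual code): the slow version scans every player to find one by id, while keeping a dictionary keyed by player id alongside the list makes the lookup O(1) instead of O(n).

```python
class Player:
    def __init__(self, player_id, name):
        self.player_id = player_id
        self.name = name

class World:
    def __init__(self):
        self.players = []        # the original style: a flat list, scanned linearly
        self.players_by_id = {}  # the fix: an index maintained alongside the list

    def add_player(self, player):
        self.players.append(player)
        self.players_by_id[player.player_id] = player

    def find_player_slow(self, player_id):
        # O(n): walks every player until it finds a match
        for p in self.players:
            if p.player_id == player_id:
                return p
        return None

    def find_player_fast(self, player_id):
        # O(1): direct hash lookup
        return self.players_by_id.get(player_id)
```

With 60 players the two are indistinguishable; with thousands of players being scanned inside an inner loop every tick, the linear version is exactly the kind of thing that only shows up once the game grows.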
Sure, it worked great when we had 60 players and a few galaxies, but it was never meant to be extended to a much larger player base, and the game almost died under its own weight. We had periods where I was scrambling to fix our server lag problems and user subscriptions just stopped coming in because the game was so unplayable due to lag. So it's a pretty big issue, especially when you're an indie studio on a shoestring budget.
Probably the biggest mistake, in my personal opinion, was completely ignoring clustering when creating the server architecture. We're going to be in deep trouble if the user base grows too fast for me to be able to finish building a clustering system. Actually, it's more a question of how much space we want to give players to build in (they can create bases, build colonies, deploy drones and slave ships that do their bidding while they're AFK, etc.). Running all that real estate is where all the juice goes; having a lot of users online isn't much of a bother. Anyway, players need space to have fun, since you can win the game by conquering the galaxy, and it's boring if you don't have enough space to move around ^_^.
Right now, I've almost lost the battle. The server became downright unplayable for 4 days because some unforeseen O(n²) function really started to kick in. It took a 40-hour straight marathon to fix it, and a bit of foresight could've saved me a lot of pain. It's back to running smoothly again, and we're moving to a dual Opteron in under a week, so we're not going to be worried about performance for a long time. I've fine-tuned a lot of the code to run under a 64-bit architecture; we have the fastest performance we've ever achieved right now, and we have a lot of room to grow too. It's exploiting both CPUs now, with threads - not much, but a bit - and I'm going to expand it to be fully parallel for its update cycle as a first step toward clustering.
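The post doesn't say what its O(n²) function was, but in a space game the classic offender is an all-pairs proximity or collision check. A common cure (shown here as an assumed technique, not their actual fix) is a spatial hash: bucket objects by grid cell so each object is only tested against neighbours in adjacent cells instead of against every other object.

```python
from collections import defaultdict

CELL = 100.0  # grid cell size; tune to the game's interaction radius

def cell_of(pos):
    """Map a 2D position to its integer grid cell."""
    return (int(pos[0] // CELL), int(pos[1] // CELL))

def near_pairs(objects):
    """objects: list of (id, (x, y)) tuples. Yields candidate id pairs whose
    cells are the same or adjacent, instead of testing all n*(n-1)/2 pairs."""
    grid = defaultdict(list)
    for oid, pos in objects:
        grid[cell_of(pos)].append((oid, pos))
    seen = set()
    for (cx, cy), bucket in grid.items():
        # only check this cell and its 8 neighbours
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for oid2, _ in grid.get((cx + dx, cy + dy), []):
                    for oid1, _ in bucket:
                        if oid1 < oid2 and (oid1, oid2) not in seen:
                            seen.add((oid1, oid2))
                            yield (oid1, oid2)
```

The quadratic cost only "kicks in" once n gets large, which matches the failure mode described above: the code seems fine for months, then suddenly the server is unplayable.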
In our defense, I could add that it's a twitch space-fighting game, which compounds our problems: any amount of server lag shows. Had we been an RPG, we could've gotten away with much slower server update cycles.
Some parts were well planned, like the networking. We've never had any problems at all with bandwidth or net lag. There's a pretty decent dead reckoning system which, while it could of course still be improved, does a great job.
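For readers unfamiliar with the term, dead reckoning means the client keeps entities moving between server updates by extrapolating from the last authoritative state. A bare-bones linear sketch (illustrative only, not Star Sonata's actual net code):

```python
def extrapolate(last_pos, last_vel, last_time, now):
    """Linear dead reckoning: predicted position = last known position
    plus last known velocity times the time elapsed since that update."""
    dt = now - last_time
    return (last_pos[0] + last_vel[0] * dt,
            last_pos[1] + last_vel[1] * dt)
```

When the next real update arrives, the client corrects toward the authoritative position (usually smoothed rather than snapped). This is why a game can feel responsive even when state updates arrive only a few times per second.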
I'm rambling, I know - I haven't slept yet and I'm beginning to be slightly incoherent - but my point still stands: plan ahead for future expansion. It isn't much harder, and it'll save you an incredible amount of pain and time you could put into new features and content. Also, IMHO, if you can cluster, you can just throw another machine at the problem. It reduces the profit margin, but it buys you time to solve your server lag issues properly; then you can go back to one machine, or keep two just in case and have more room to grow thanks to your fix. Either way, you don't have your back against the wall.
If you want to check it out, it's www.starsonata.com . We still have some minor lag issues, but they're mostly under control and will be immaterial once the move is completed.
If you're having significant steady-state server load problems, VTune, gprof, or oprofile will tell you why.
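The tools named above are for native C/C++ servers; the same steady-state profiling idea in a Python codebase would use the standard library's cProfile (shown here as an analogous sketch, with a toy `update_world` hot loop standing in for real server work):

```python
import cProfile
import io
import pstats

def update_world(n):
    # stand-in for the server's per-tick update work
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
update_world(100_000)
profiler.disable()

# Sort by cumulative time so the expensive call paths float to the top,
# which is the same view gprof's flat profile + call graph gives you.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

Whatever the tool, the point is the same: measure before optimizing, because steady-state load problems almost always concentrate in a handful of functions.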
In my opinion, it's the occasional big stutters that halt processing for a few seconds - only once in an unpredictable while - that will keep you up at night... and, yes, they are usually the fault of some programmer not thinking ahead in one way or another.
Original post by xJOKERx • Intel P4, 3.2GHz, HT, 800MHz Bus, 1MB Cache • 160GB RAID rated Server-Class Hard Drive • 1,024MB Dual-Channel DDR400 SDRAM • 3,000GB Monthly Transfer • 100Mbps Non-Capped Network Connection • 5 IP Addresses
Sorry for the OT question:
What does the non-capped network connection mean? It says 3TB monthly transfer above. Isn't that a "cap"? Or does that refer to transfer between the server and the server owner (for uploading content and such)?
Good luck with your game xJOKERx. They're hard to pull off :)
It can mean a couple of things. The first is that your 100Mbps connection (the Ethernet LAN connection on your dedicated server) is not going to be plugged into a smaller pipe (say, 50Mbps) before going out the door.
The second thing it can mean is that your connection is guaranteed to be 100Mbps all the way out the door (all the way to an OC-class backbone pipe) - the data center will ensure 100Mbps is dedicated to your connection even when other servers are using peak bandwidth (remember that all the center's servers basically connect to the same trunk to reach the backbone). In some situations your 100Mbps would only be available if other servers were not using the bandwidth (like home cable internet).
Yes, I'm a big fan of gprof for general optimisations.
But lately it has mostly been stutter bugs that happen at unpredictable times, random peaks, etc.
Plus, yes, it was a constant battle, but I'll admit I was rather tired and painted a grim portrait. I do spend quite a lot of time adding features and fixing bugs; it's the combination of doing all that stuff and meeting deadlines that can make it challenging.
it's the combination of doing all that stuff and meeting deadlines that can make it challenging.
We're drifting into software engineering process here, but a project cannot have a fixed manpower pool, a fixed feature set, and a fixed deadline all at once. One of the three has to be flexible (typically, the feature set).
Thus, when you have many bugs during a certain iteration, you will get fewer (if any) features done. Working longer hours to meet all the different requirements may work for a single deadline, but quickly wears you down over time.
The nice thing with having the same people fixing bugs and adding features is that the bug/feature balance is self-regulating. If you add too many features for the given timeline, they will be buggy, and you won't be able to add more features until you've fixed the bugs ;-)
Alright, I admit I didn't read every post. So bear that in mind when I write this response.
The number of servers you'll need is directly influenced by your server software developers. Asking "how many servers would you need for game X" won't give you enough information by any means. For example, from what I've come to know, World of Warcraft is heavily based on real-time database queries (which I wouldn't have done personally), and on top of that they don't have people very experienced with optimizing database flow. So they need to throw more CPU power at their servers than some other team of skillful developers would have. The client isn't what decides the load (well, not all of it); it's the back end. So I guess the real question is: how skilled is your team at writing servers? Are they the kind of people who can be efficient on every level? Or are they the type who multithread everything because they don't know how to implement a non-blocking single-threaded program?
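The "non-blocking single-threaded" style mentioned above can be sketched with Python's `selectors` module: one event loop services every socket, rather than dedicating a thread to each connection. This is an illustrative echo loop only - real game traffic would replace the uppercase echo - and a `socketpair` stands in for a connected client:

```python
import selectors
import socket

def echo_loop(sel, max_events):
    """Single-threaded event loop: one select() call services every
    registered socket; no thread per connection, no blocking reads."""
    handled = 0
    while handled < max_events:
        for key, _ in sel.select(timeout=1.0):
            data = key.fileobj.recv(4096)
            if data:
                # toy "game logic": echo the message back uppercased
                key.fileobj.sendall(data.upper())
            handled += 1
    return handled

# Demo: a socketpair stands in for a client connected to the server.
sel = selectors.DefaultSelector()
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"ping")
echo_loop(sel, max_events=1)
reply = client_side.recv(4096)  # b"PING"

client_side.close()
server_side.close()
sel.close()
```

A real server would register a listening socket too and accept new connections inside the same loop; the point is that one thread can multiplex thousands of mostly-idle game connections without the synchronization overhead of thread-per-client.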