You'd better count on spending $10K+ per "world" server. Even then, EQ can barely handle 2,000 players, much less 10,000.
hardware requirements for MMPOG
I have a question for those who've played it. Play the game and then report how much data was transmitted over your modem while you were playing and for how long. I'm just curious.
Ben
EQ only supports 2,000 active players per world because there's a finite number of in-game places to go. If there are 100+ people in one zone, there isn't enough stuff to fight.
I too would like to know about the modem bytes rx & tx...
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted. -- Tajima & Matsubara
January 27, 2001 11:17 PM
I'm getting the impression that everyone in this forum makes multiplayer games but no one plays them! 8^)
-ddn
10,000 simultaneous users is likely to be quite a challenging project, to put it gently! As several readers have pointed out, none of the current MMORPGs manages anywhere close to that number of simultaneous users.
BTW, some OSs have trouble scaling to 20k simultaneous socket connections (see the recent Slashdot discussion of Linux 2.4's scalability; Win2K and BSD do better here), so careful design is called for to keep TCP connections to a minimum (UDP is the norm for actual gameplay, but TCP can be really handy for world updates and similar - predictable - data). Hopefully not an issue for you, but it came close to biting my *ss in a non-gaming project recently!
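To make the "keep TCP connections to a minimum" point concrete: a common approach is to push all gameplay traffic through one UDP socket and demultiplex by sender address, so the per-player connection count never becomes the bottleneck, while TCP is reserved for the occasional world update or patch download. A minimal Winsock sketch of the idea - the port number, packet size and player lookup are placeholders, not anyone's actual server code:

// Sketch: a single UDP socket handles gameplay packets from every client,
// so the server never holds thousands of per-player TCP connections.
// Port 4000 and the 512-byte packet size are arbitrary assumptions.
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET udp = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in local = {};
    local.sin_family      = AF_INET;
    local.sin_port        = htons(4000);
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(udp, (sockaddr*)&local, sizeof(local));

    char packet[512];
    for (;;)
    {
        sockaddr_in from;
        int fromLen = sizeof(from);
        int n = recvfrom(udp, packet, sizeof(packet), 0,
                         (sockaddr*)&from, &fromLen);
        if (n <= 0)
            continue;
        // Look up the player by (from.sin_addr, from.sin_port), apply the
        // input, and queue a state update back to them with sendto().
    }

    // Never reached in this sketch.
    closesocket(udp);
    WSACleanup();
    return 0;
}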
Anyway, for this sort of project you are looking at some serious costs. If you want to self-host, in addition to server costs you need to consider:
* Find a location as close as possible to an Internet backbone; zero hops is ideal, but typically REALLY expensive to arrange.
* Arrange for redundant connections from more than one provider - otherwise, anything from a DNS glitch to an upstream (and therefore utterly beyond your control) router glitch can dump all 20k users out of your game.
* Keep several more servers than you need "warm swappable" - ready to take over when things go wrong.
* Arrange backup power for all routers, switches and servers required for this project.
A better option may be colocation, because you'll have dedicated staff (who generally know a lot about keeping their data center running), good physical security and someone to shout at when it breaks. On the other hand, you start having to consider:
* Rack-mountable servers - the fewer units used the better, since rack space in colocation arrangements isn't cheap.
* Heat efficiency. Don't get Athlons! In my experience, they are really prone to heat problems in a rack environment.
* Remote management, including full remote boot capability.
* Local staff training. Many colocation services will "let you" (i.e., you pay for it!) train some of their staff to handle your project's needs.
For colocation, you really do get what you pay for. The better sites cost more, and every additional service (backups, reboots, etc.) adds to your cost. Not fun.
In either case, your bandwidth costs are going to be pretty high - high enough to dwarf the server cost! That said, you will want some pretty impressive servers for that kind of load. If you can manage it, go for a "horizontally scalable" system - i.e., one that lets you throw more servers at the problem if and when load increases (a rough sketch of this follows below). You'll probably want a good back-end database server (or two - failover clustering can't hurt), a SOLID firewall (ideally featuring traffic shaping - Free/OpenBSD is great for this), and however many servers you need for the actual game.
Multiple CPUs can help a LOT with load, but only if you have a good threading model with a minimum of locks. You do hit diminishing returns pretty quickly, though, especially in setups where more than one thread can be looking at similar data at a time. Your servers will probably also need GOOD network cards (the Intel DualPort server adapters are excellent performers under extreme loads). Also, don't skimp on the internal network; you'll probably be looking at gigabit Ethernet if you want to support a world with 10k players - and even that may be overtaxed if you have a lot of database updates!
One more thing: make sure your website doesn't share a server with your games. In fact, if you can manage it, put it on a different connection. From experience, it sucks when a game goes downhill because of web traffic!
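To make "horizontally scalable" a little more concrete: the usual trick is to partition players across game-server processes - by zone, for instance - so adding capacity mostly means adding boxes to a list. A rough sketch, with invented host names and a naive modulo split rather than any shipping game's real design:

// Sketch: choose a game server for a player by hashing the zone id, so load
// spreads across however many boxes are in the list. The host names and the
// zone-based split are assumptions for illustration only.
#include <cstdint>
#include <string>
#include <vector>

struct GameServer { std::string host; uint16_t port; };

// Hypothetical cluster; scaling out means adding entries (and rebalancing).
static const std::vector<GameServer> kServers = {
    {"game1.example.net", 4000},
    {"game2.example.net", 4000},
    {"game3.example.net", 4000},
};

const GameServer& serverForZone(uint32_t zoneId)
{
    // Everyone in a zone lands on the same process, which keeps most
    // interactions on one machine and cuts down on cross-server locking.
    return kServers[zoneId % kServers.size()];
}

Keeping a whole zone on one process also ties in with the locking point above: most player-to-player interaction never has to leave a single machine.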
Lastly, on bandwidth... whatever you do, don't sign up for a "virtual (insert connection name here)". The big selling point of a virtual T1/T3/OC3/DS3 (etc.) is that you never lose a customer because the connection grows as needed. That's also where it can hurt you badly: suppose you have a massive spike in demand, for example because you get the equivalent of being "slashdotted", or because some k1dd13 decides to DDoS you. Sure, the connection keeps going - and your bill goes through the roof.
If you need more detailed answers on any of this, feel free to email me. I'm a networking/complex-issues consultant by day, and this is definitely more interesting than most of the questions I get asked!
Question for you - how much server-side (back-end) world logic do you hope to have with 10k players?
OK, I know this is a bit off topic, but if you want to get more out of your bandwidth, you could sell your game to countries in different time zones - say, 5k players in the USA and 5k players in Sweden, or something like that. That way you'll have 10k players per server who play at different times, which works out to roughly 5k per server at any given time. Yeah, I know this is sort of cheating, since your server won't actually support 10k at the same time, but I just wanted to point out how you can make your bandwidth use cost-effective.
Kressilac and I are designing our MPOW to accommodate 50,000 users at once... we have been researching the feasibility and found this information to be enlightening...
Winsock Programmer's FAQ
On Win9x machines, there's a quite low limit imposed by the kernel: 100 connections. You can increase this limit by editing the registry key HKLM\System\CurrentControlSet\Services\VxD\MSTCP\MaxConnections. On Windows 95, the key is a DWORD; on Windows 98, it's a string. I've seen some reports of instability when this value is increased to more than a few times its default value.
The rest of this discussion will cover only Windows NT and Windows 2000. These systems have much higher intrinsic capabilities, and thus allow you to use many more sockets. But, the Winsock specification does not set a particular limit, so the only sure way to tell is to try it on all the Winsock stacks you plan on supporting.
Beyond that vague advice, things get more complicated. The simplistic test is to write a program that just opens sockets, to see where it stops running: [C++ Example].
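(The FAQ's example code isn't reproduced in this post; a test of that sort is little more than a loop that calls socket() until it fails, roughly along these lines - my own sketch, with error handling kept minimal:)

// Sketch of a socket-exhaustion test: grab socket handles in a loop and
// report how many were created before the stack refused. This approximates
// the FAQ's example; it is not the FAQ's own code.
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    unsigned long count = 0;
    for (;;)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            break;                  // stack refused: out of handles/resources
        ++count;                    // handles are deliberately never closed
    }

    printf("Created %lu sockets before failure (error %d)\n",
           count, WSAGetLastError());
    WSACleanup();
    return 0;
}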
The above program isn't terribly realistic. I've seen it grab more than 30,000 sockets before failing on Windows NT 4.0. Anecdotal evidence from the Winsock 2 mailing list puts the real limit much lower, typically 4,000 to 16,000 sockets, even on NT systems with hundreds of megabytes of physical memory. The difference is that the example program just grabs socket handles, but does not actually create connections with them or tie up any network stack buffers.
According to people at Microsoft, the WinNT/Win2K kernel allocates sockets out of the non-paged memory pool. (That is, memory that cannot be swapped to the page file by the virtual memory subsystem.) The size of this pool is necessarily fixed, and is dependent on the amount of physical memory in the system.
On Intel x86 machines, the non-paged memory pool stops growing at 1/8 the size of physical RAM, with a hard maximum of 128 megabytes. The hard limit is 256 megabytes on Windows 2000. Thus for NT 4, the size of the non-paged pool stops increasing once the machine has 1 GB of RAM. On Win2K, you hit the wall at 2 GB.
The amount of data associated with each socket varies depending on how that socket's used, but the minimum size is around 2 KB. Overlapped I/O buffers also eat into the non-paged pool, in blocks of 4 KB. (4 KB is the x86's memory management unit's page size.) Thus a simplistic application that's regularly sending and receiving on a socket will tie up at least 10 KB of non-pageable memory.
Assuming that simple case of 10 KB of data per connection, the theoretical maximum number of sockets on NT 4.0 is about 12,800, and on Win2K about 25,600.
I have seen reports of a 64 MB Windows NT 4.0 machine hitting the wall at 1,500 connections, a 128 MB machine at around 4,000 connections, and a 192 MB machine maxing out at 4,700 connections. It would appear that on these machines, each connection is using between 4 KB and 6 KB. The discrepancy between these numbers and the 10 KB number above is probably due to the fact that in these servers, not all connections were sending and receiving all the time. The idle connections will only be using about 2 KB each.
So, adjusting our "average" size down to 6 KB per socket, NT 4.0 could handle about 21,800 sockets and Win2K about 43,700 sockets. The largest value I've seen reported is 16,000 sockets on Windows NT 4.0.
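(The arithmetic behind those figures is just the non-paged pool ceiling divided by the per-socket estimate; whether you land on 12,800 or closer to 13,100, and so on, depends on whether a kilobyte is counted as 1,000 or 1,024 bytes, which is where most of the rounding differences come from. A back-of-the-envelope version:)

// Back-of-the-envelope: sockets ~= non-paged pool ceiling / bytes per socket.
// Uses 1024-based units, so the results land near (not exactly on) the
// figures quoted in the FAQ text above.
#include <cstdio>

int main()
{
    const double KB = 1024.0;
    const double MB = 1024.0 * KB;

    const double pools[]     = {128 * MB, 256 * MB};  // NT 4.0 / Win2K ceilings
    const double perSocket[] = {10 * KB, 6 * KB};     // busy vs. mixed estimate

    for (double pool : pools)
        for (double per : perSocket)
            printf("pool %3.0f MB, %2.0f KB/socket -> ~%6.0f sockets\n",
                   pool / MB, per / KB, pool / per);
    return 0;
}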
There''s one more complication to keep in mind: your server program will not be the only thing running on the machine. If nothing else, there will be core OS services running. These other programs will be competing with yours for space in the non-paged memory pool.
HTH,
Game On,
David "Dak Lozar" Loeser, Elysian Productions, Inc.
Dave Dak Lozar Loeser
"Software Engineering is a race between the programmers, trying to make bigger and better fool-proof software, and the universe trying to make bigger fools. So far the Universe in winning."--anonymous
"Software Engineering is a race between the programmers, trying to make bigger and better fool-proof software, and the universe trying to make bigger fools. So far the Universe in winning."--anonymous
February 26, 2001 12:02 PM
I think your packet calculation is off. 20 bytes/sec? You haven't taken into account the UDP/IP headers themselves - that's usually around 30 bytes per packet. Typically you will get 5-10 packet updates per second, which is 150-300 bytes/sec for headers alone, not including the data you're trying to send.
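For a rough feel for where that number comes from: an IPv4 header is 20 bytes and a UDP header is 8, so each small update packet carries about 28 bytes of headers before any game data. At 5-10 updates per second, that lands right around the 150-300 bytes/sec ballpark:

// Per-client header overhead, assuming IPv4 (20 bytes) + UDP (8 bytes)
// headers and the 5-10 state updates per second mentioned above.
#include <cstdio>

int main()
{
    const int headerBytes = 20 + 8;       // ~28 bytes of headers per packet
    const int rates[]     = {5, 10};      // updates per second (assumed)

    for (int packetsPerSec : rates)
        printf("%2d packets/sec -> %3d bytes/sec of header overhead\n",
               packetsPerSec, packetsPerSec * headerBytes);
    return 0;
}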
If you want to run an MMORPG with the server running on a Windows machine and 10,000 clients, there are a few things to consider.
1) Windows 2000 Advanced Server with server clustering and load balancing. Writing code that takes advantage of both of those is a challenge in itself. I don't know if the necessary interfaces are available in VB, if you were thinking of writing it in that - which you wouldn't.
2) You would probably want to host it on a Win2K Datacenter-class server cluster. We are talking BIG bucks ($100K+), but it works a lot like Beowulf clusters on Unix systems. Up to 128 processors are supported, with 64 GB of RAM. Yes, you will need SHIT loads of RAM, especially given the size of database you will need with 10k people connected at once.
3) Screw gigabit - you will want 10 Gigabit. It is not a finalized standard yet, but you can already get adapters and switches.
4) The database you will probably want is SQL Server. Why SQL Server? Because it WILL take advantage of clustering and load balancing.
Then there is the matter of Internet access. You will not want one pipe but multiple, for redundancy. Say each client transfers at 28.8 kbps - I'll bet they will all use that full 28.8 kbps (about 3-4 KB of actual transfer per second). What about 33.6 users? Or DSL and cable users? Will they be bandwidth-limited?
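Just to put a number on that: if all 10,000 clients really do saturate a 28.8 kbps modem (about 3.6 KB/s each), the aggregate is roughly 288 Mbps, or about 36 MB/s, before any server-side overhead - which is exactly why one pipe won't cut it:

// Rough aggregate bandwidth if 10,000 clients each saturate a 28.8 kbps
// modem, ignoring protocol and server-side overhead.
#include <cstdio>

int main()
{
    const double perClientKbps = 28.8;
    const int    clients       = 10000;

    const double totalMbps = perClientKbps * clients / 1000.0;  // ~288 Mbps
    const double totalMBps = totalMbps / 8.0;                   // ~36 MB/s

    printf("~%.0f Mbps (~%.0f MB/s) of game traffic at peak\n",
           totalMbps, totalMBps);
    return 0;
}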
Oh, and speaking of redundancy, you will want a Fibre Channel SAN (Storage Area Network).
Anyway, it will be no small feat. Oh, also, someone was talking about Ultima Online servers. I do know that when they first released the game, each "shard" was made up of 9 SunOS servers. That was what, three years ago? Maybe four. They got laggy with 1,500 people on them. Good luck.
-Dazz