Quote:
The clients currently use a 'change timestamp' to see if they have to request a new copy of an area which may have deformed/changed since the client last had that area in a LOD3 context. The client only keeps a connection to 9-16 LOD3 areas to receive their update event streams, but may pass through a succession of hundreds if the player moves in a straight line (these get cached to disk on the client).
This implies visual range is less than 1 area. I use a similar method for terrain chunks, using specialised fixed PORs to do so. I don't consider it necessary to use a connection per area (I'm actually using UDP, so 'connection' is an abstract term), and can use a quadtree to rapidly determine relevant areas for a client, since fixed PORs have a unified orientation: a common coordinate system is derived from the offsets of the PORs involved. The quadtree tests are expandable to non-uniform zones, since there is a policy of containment for parent-child relations (if a parent is completely enclosed by a quadtree test, we can assert that it and its entire sub-hierarchy are relevant). Then relevance layer culling can be performed to narrow the required set.
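For illustration, a minimal version of that containment early-out might look like the following (all names here are invented for the sketch; the range query is a circle around the client):

    #include <vector>

    struct AABB   { float x0, y0, x1, y1; };
    struct Circle { float cx, cy, r; };     // client's relevance range

    // Closest point on the box to the circle centre within r => overlap.
    static bool intersects(const AABB& b, const Circle& c) {
        float px = c.cx < b.x0 ? b.x0 : (c.cx > b.x1 ? b.x1 : c.cx);
        float py = c.cy < b.y0 ? b.y0 : (c.cy > b.y1 ? b.y1 : c.cy);
        float dx = px - c.cx, dy = py - c.cy;
        return dx * dx + dy * dy <= c.r * c.r;
    }

    static bool inside(float x, float y, const Circle& c) {
        float dx = x - c.cx, dy = y - c.cy;
        return dx * dx + dy * dy <= c.r * c.r;
    }

    // All four corners inside => the box is completely enclosed.
    static bool contained(const AABB& b, const Circle& c) {
        return inside(b.x0, b.y0, c) && inside(b.x1, b.y0, c) &&
               inside(b.x0, b.y1, c) && inside(b.x1, b.y1, c);
    }

    struct Node {
        AABB bounds;
        std::vector<Node*> children;  // empty at the leaves
        int areaId = -1;              // payload at the leaves
    };

    static void gatherSubtree(const Node* n, std::vector<int>& out) {
        if (n->children.empty()) { out.push_back(n->areaId); return; }
        for (const Node* c : n->children) gatherSubtree(c, out);
    }

    // Containment policy: a fully enclosed parent asserts its whole
    // sub-hierarchy as relevant, so recursion stops early.
    static void collectRelevant(const Node* n, const Circle& range,
                                std::vector<int>& out) {
        if (!intersects(n->bounds, range)) return;
        if (contained(n->bounds, range)) { gatherSubtree(n, out); return; }
        if (n->children.empty()) { out.push_back(n->areaId); return; }
        for (const Node* c : n->children) collectRelevant(c, range, out);
    }

Relevance layer culling then runs only over the ids that survive the tree walk.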
Quote:
I don't know if there is much difference in anybody's 'abstract' execution without getting down to specific details. The interactions are simplified/generalized and based on logic/equations to approximate what goes on far from the client/players and enact some kind of reasonable/likely patterns of behavior. There can still be more than a few details/factors/coefficients/states to represent the 'minimized' object (and they require scripting complexity that rivals what current games have for the 'full' detail AI).
I find that in general abstracted systems are difficult to generalise, and can vary wildly depending on the kind of behaviours you're trying to simulate. I had the privilege of attending an audience with Jeff Minter demonstrating his game space principles, which was fascinating in this regard.
Quote:
The disk use isn't quite as bad as you might think, as zone transitions don't happen too frequently for each client (area size being reasonably large: I projected that crossing an area at a run took 30 seconds) and the preloader gives adequate time for data to move 'in_mem' ahead of need (normal disk requests are serviced in a separate thread...).
Although threading only buys extra compute on multi-processor hardware, it still helps hide blocking I/O, and I operate a similar method (I've segregated network transmission, which is my presumed bottleneck; most of my simulation is held in RAM).
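The usual shape of that segregation is a worker thread draining a request queue so the simulation never blocks on the device. Something like this (names are illustrative, not from either engine):

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct LoadRequest { int areaId; std::string path; };

    class DiskWorker {
    public:
        DiskWorker() : worker_(&DiskWorker::run, this) {}
        ~DiskWorker() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        // Called by the simulation thread; returns immediately.
        void request(LoadRequest r) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(r)); }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                if (done_ && q_.empty()) return;
                LoadRequest r = std::move(q_.front());
                q_.pop();
                lk.unlock();
                // ...read r.path into a buffer here, then hand the
                // result back to the simulation via a completion queue.
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<LoadRequest> q_;
        bool done_ = false;
        std::thread worker_;
    };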
Quote:
Currently the design is to have the client talk directly to the server which owns each 'area' (client gets notified of any change of ownership). The master server owns the DB and distributes copies when 'areas' get farmed out to other slave-servers (eventually it will probably do nothing but this distribution and assignment). There is only one active copy of the data allowed on the servers.
Similar to the discussed method of farming off 'domains' above, then.
Quote:
Events will be transferred across boundaries (to a different server if needed), and most events won't travel more than one area away from the area they occurred in (again because of the large size of the areas); like what you described, there will be an event magnitude filter radius to cull out unneeded inter-area event transfers. Each area has its own event accumulation queues (possibly with a prefilter on insert). 'Realized' objects have their own event queues and events directly affecting them are added; otherwise the object may monitor the event queue of the area it is in and those of any appropriate adjacent areas.
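That magnitude filter maps naturally onto the prefilter-on-insert you mention: cull at the boundary by checking whether the event's effect radius actually reaches the neighbouring area. A rough sketch (the falloff rule here is an assumption):

    #include <cmath>
    #include <vector>

    struct Event {
        float x, y;       // origin, in the shared coordinate system
        float magnitude;  // e.g. explosion size, shout volume
    };

    // Assumed falloff: effective radius scales with magnitude.
    static float effectRadius(const Event& e) { return e.magnitude * 10.0f; }

    struct Area {
        float x0, y0, x1, y1;      // area bounds
        std::vector<Event> queue;  // per-area accumulation queue
    };

    static float distanceToArea(const Event& e, const Area& a) {
        float px = e.x < a.x0 ? a.x0 : (e.x > a.x1 ? a.x1 : e.x);
        float py = e.y < a.y0 ? a.y0 : (e.y > a.y1 ? a.y1 : e.y);
        return std::sqrt((px - e.x) * (px - e.x) + (py - e.y) * (py - e.y));
    }

    // Prefilter on insert: an adjacent area only receives the event if
    // the effect radius reaches it, so most events never cross over.
    static void distribute(const Event& e, Area& source,
                           std::vector<Area*>& adjacents) {
        source.queue.push_back(e);
        for (Area* a : adjacents)
            if (distanceToArea(e, *a) <= effectRadius(e))
                a->queue.push_back(e);
    }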
Quote:
The regular structure (grid) of the primary terrain makes boundary checks easy and unitizes a lot of operations, BUT the 'structures', 'tunnels', and 'mobiles' offer many of the complications of your irregular areas (and I'm sure to face the same problems, which I hope can be localized).
These aren't as hard as you might think, providing you have a fixed hierarchy. A structure placed directly on a terrain area would pass its event to its parent for distribution, and that parent would, if necessary, forward it on to adjacent terrains. It's important to be aware that there is an additional lag in event forwarding, and a certain amount of synchronisation and timeout checking has to be expected. Ensuring that interesting content is easily contained by a particular domain is the caveat: people won't hang around (in great numbers) the edges of your areas if there isn't anything to do there. You might get some griefers hoping to catch someone with load-lag, but given a good prefetching scheme, this shouldn't be too much of an issue.
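In sketch form, the forwarding rule is just 'bubble up, then fan out' (the range test and all names here are illustrative):

    #include <vector>

    struct Event { int type; float x, y, radius; };

    struct Domain {
        float x0, y0, x1, y1;            // bounds, for top-level terrains
        Domain* parent = nullptr;        // fixed hierarchy, set once
        std::vector<Domain*> adjacents;  // filled only for terrains
        std::vector<Event> inbox;

        // Structures/tunnels don't track neighbours; they pass the
        // event up, and the terrain decides whether it crosses over.
        void raise(const Event& e) {
            inbox.push_back(e);
            if (parent) { parent->raise(e); return; }
            for (Domain* n : adjacents)
                if (reaches(e, *n)) n->inbox.push_back(e);
        }

        static bool reaches(const Event& e, const Domain& n) {
            float px = e.x < n.x0 ? n.x0 : (e.x > n.x1 ? n.x1 : e.x);
            float py = e.y < n.y0 ? n.y0 : (e.y > n.y1 ? n.y1 : e.y);
            float dx = px - e.x, dy = py - e.y;
            return dx * dx + dy * dy <= e.radius * e.radius;
        }
    };

The extra hop per hierarchy level is where the forwarding lag comes from, which is why deep nesting right at a boundary is worth avoiding.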
Quote:
I have yet to test to see if a broadcast of events on the LAN will have any advantage. They will most likely be marshaled into groups of events to hold down the packet count.
I really wouldn't broadcast to all areas unless absolutely necessary. System alerts ('The server will shut down in 10 mins') are one thing, but whatever you do, localise chat. An MMOG is just a chatroom with toys.
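On the marshalling side, grouping events up to the packet size limit is cheap to do and does hold the packet count down. A rough sketch (the wire format is invented for illustration):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct WireEvent { uint16_t type; uint16_t size; };  // payload follows

    class PacketBatcher {
    public:
        explicit PacketBatcher(size_t mtu = 1400) : mtu_(mtu) {}

        // Returns false when the batch is full; flush and retry.
        bool add(uint16_t type, const void* payload, uint16_t size) {
            if (buf_.size() + sizeof(WireEvent) + size > mtu_) return false;
            WireEvent h{type, size};
            const uint8_t* hp = reinterpret_cast<const uint8_t*>(&h);
            const uint8_t* pp = static_cast<const uint8_t*>(payload);
            buf_.insert(buf_.end(), hp, hp + sizeof h);
            buf_.insert(buf_.end(), pp, pp + size);
            return true;
        }

        // One UDP send per flush instead of one per event.
        std::vector<uint8_t> flush() {
            std::vector<uint8_t> out;
            out.swap(buf_);
            return out;
        }
    private:
        size_t mtu_;
        std::vector<uint8_t> buf_;
    };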
Quote:
My data for each area is packaged as a contiguous block with internal offsets for pointers and its own heap/freechains. This makes it easy to relocate without a lot of serialization overhead. 'Lackey' objects contract into small sets of coefficients, and 'significant' objects are only markers, as they live in an AI process and don't contract: they interact as if they were clients and maintain a much more complex world representation.
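That's essentially the self-relative pointer trick, and it's worth spelling out since it's what makes the block safe to move wholesale. A minimal sketch (not your implementation, obviously):

    #include <cstddef>
    #include <cstdint>

    // Stored as an offset from its own address, not an absolute pointer,
    // so it stays valid when the whole containing block is relocated.
    template <typename T>
    struct OffsetPtr {
        ptrdiff_t offset = 0;

        void set(T* target) {
            offset = reinterpret_cast<intptr_t>(target) -
                     reinterpret_cast<intptr_t>(this);
        }
        T* get() {
            return reinterpret_cast<T*>(
                reinterpret_cast<intptr_t>(this) + offset);
        }
    };

As long as pointer and pointee live in the same block, both move by the same amount, so get() resolves correctly after relocation with no fix-up pass.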
So you transfer ownership without committing to the DB / central store? I assume you assert the completion of such a transfer in some way? In my case the hierarchy helps, since the parent can cache events for transferring objects and then forward them to the appropriate new domain once relocation is complete. A lag is incurred for events directly related to the transferring object, but other event targets can proceed 'normally', allowing for access lag on the machines involved in the transfer.
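The caching half of that is small; something along these lines (names are illustrative):

    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Event { int targetId; /* payload... */ };

    class ParentDomain {
    public:
        // Mark the object as in flight (creates an empty event cache).
        void beginTransfer(int objectId) { inFlight_[objectId]; }

        // Events for a transferring object are cached, not delivered;
        // returns false if the event should be routed normally.
        bool cacheIfInFlight(const Event& e) {
            auto it = inFlight_.find(e.targetId);
            if (it == inFlight_.end()) return false;
            it->second.push_back(e);
            return true;
        }

        // Called when the new domain asserts ownership: hand it the
        // cached events, in order, and forget the transfer.
        std::vector<Event> completeTransfer(int objectId) {
            std::vector<Event> cached = std::move(inFlight_[objectId]);
            inFlight_.erase(objectId);
            return cached;
        }
    private:
        std::unordered_map<int, std::vector<Event>> inFlight_;
    };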
Quote:
It's definitely a persistent system. The areas, once built, are allocated a slot in the DB files and then can roll in and out of memory quickly -- simple random access, with 'synchronization' files written at shutdown from the world state held in memory (area index web, etc.). Checkpoint copies of all the data files will probably be needed to minimize the impact of a server abort (copying of Gigs may require some copy-while-updating methodology).
So you're committing to DB when areas are LOD'd out, or at shutdown?
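Either way, the fixed-slot layout is what keeps the roll-in/out cheap: one seek plus one read or write per area. Roughly (the slot size is an assumed figure):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    const long kSlotSize = 256 * 1024;  // assumed fixed per-area footprint

    bool readArea(std::FILE* db, int slot, std::vector<unsigned char>& out) {
        out.resize(kSlotSize);
        if (std::fseek(db, static_cast<long>(slot) * kSlotSize, SEEK_SET) != 0)
            return false;
        return std::fread(out.data(), 1, kSlotSize, db)
               == static_cast<size_t>(kSlotSize);
    }

    bool writeArea(std::FILE* db, int slot,
                   const std::vector<unsigned char>& in) {
        if (std::fseek(db, static_cast<long>(slot) * kSlotSize, SEEK_SET) != 0)
            return false;
        return std::fwrite(in.data(), 1, in.size(), db) == in.size()
               && std::fflush(db) == 0;
    }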
Quote:
The AI data and processing for the 'significant' objects is going to dwarf the rest of the system.
The AI is going to (I've only done preliminary work for this part) do planning using a hierarchical/inherited set of behaviors (goal solutions) and a preference system to decide 'best' solution selection and priority.
I want (plan) to have a self-adjusting system for the 'preferences' that will analyze logs of previous actions/results (probably offline) and try to determine better solution selections for particular situations.
This part of the project is probably going to take years (in typical AI, the data (scripted logic) usually takes 90% of the effort and the engine the remainder).
The simulation engine is really only the visualization/simulation needed to facilitate my interest in AI (someday I actually want to get back to that part of it).
Sounds interesting! I've done preliminary work on the AI for Bloodspear (the AI included in the libraries will basically comprise A* and some requirement/supply/risk-assessment heuristics), but similarly, that part of the project is beastly, even though I'm not implementing any form of learning system.
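Incidentally, the 'preference' selection you describe can start out as nothing more than a weighted score per candidate behaviour, with the offline log analysis adjusting the weights later. A sketch (the scoring model here is my own assumption, not your design):

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Situation { float hunger, danger, wealth; };

    struct Behaviour {
        std::string name;
        // Preference weights per situation factor; these are what the
        // offline analysis of action/result logs would tune.
        float wHunger, wDanger, wWealth, bias;

        float score(const Situation& s) const {
            return wHunger * s.hunger + wDanger * s.danger +
                   wWealth * s.wealth + bias;
        }
    };

    // 'Best' solution selection: highest preference score wins.
    // Assumes at least one candidate behaviour.
    const Behaviour& select(const std::vector<Behaviour>& options,
                            const Situation& s) {
        return *std::max_element(options.begin(), options.end(),
            [&](const Behaviour& a, const Behaviour& b) {
                return a.score(s) < b.score(s);
            });
    }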