
Networked Physics and an Optimal Bounding Volume Hierarchy

Started by July 09, 2015 03:47 PM
2 comments, last by arnero 9 years, 6 months ago

Hey there,

I'm putting together a physics library for a 2D networked game, and I'm starting to tackle the problem of spatial decomposition. With a networked game, there are two use cases for the engine:

1) Clients have one dynamic object (their own simulated/predicted player pawn) and all the static geometry for the level. They roll back and roll forward each frame for input prediction; because of this, they have potentially low frame coherence (i.e., large unexpected movements between frames when a correction comes in).

2) Servers have all the dynamic objects and all the static geometry for the level. Dynamic objects do not collide with one another (they pass through). They do not roll back and simply simulate the world as any other physics engine would.

Because clients have low frame coherence, I'm treating the server as if it does too (not doing warm starting etc. for collision resolution) to stay as consistent as possible between the simulations. In fact, the only information I maintain between simulation ticks is the position, orientation, velocity, and angular velocity of each object. I don't keep anything else -- manifolds, contact points, bias forces, nothing.
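
Concretely, the complete per-object state I carry between ticks is nothing more than this (type names here are just illustrative):

```cpp
// The complete per-object state carried between ticks -- no manifolds,
// contact points, or bias forces. Names are illustrative, not a library API.
struct Vec2 { float x, y; };

struct StateSnapshot {
    Vec2  position;
    float orientation;      // radians
    Vec2  velocity;
    float angularVelocity;
};

// A network correction or rollback is then just a wholesale overwrite:
void ApplyCorrection(StateSnapshot& body, const StateSnapshot& authoritative) {
    body = authoritative;
}
```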

To me it doesn't make sense to add dynamic objects to a spatial hierarchy. They only collide with static geometry anyway, and if I move them a lot for network corrections, I'd have to update the hierarchy each time (which is expensive). Because of this, I want to build a bounding volume hierarchy of just the static geometry to do collision checks against. What I would like to do is pick an optimal representation and precompute it as an offline process during the level design stage (I don't care if it takes a minute to compute, for example, if it means that tests at runtime are faster).
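
As a sketch of the kind of offline build I have in mind: a simple median-split over the static shapes' AABBs. A smarter builder would use the surface area heuristic for a better tree, but this shows the shape of it (all names are illustrative):

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };

struct AABB {
    Vec2 min, max;
    Vec2 Center() const { return { (min.x + max.x) * 0.5f,
                                   (min.y + max.y) * 0.5f }; }
};

AABB Union(const AABB& a, const AABB& b) {
    return { { std::min(a.min.x, b.min.x), std::min(a.min.y, b.min.y) },
             { std::max(a.max.x, b.max.x), std::max(a.max.y, b.max.y) } };
}

struct Node {
    AABB bounds;
    int  left = -1, right = -1;   // -1 marks a leaf
    int  first = 0, count = 0;    // leaf: range into the shape array
};

// Recursively split the shape range at the median of the wider axis.
int Build(std::vector<Node>& nodes, std::vector<AABB>& shapes,
          int first, int count) {
    AABB bounds = shapes[first];
    for (int i = 1; i < count; ++i)
        bounds = Union(bounds, shapes[first + i]);

    int index = (int)nodes.size();
    nodes.push_back({ bounds, -1, -1, first, count });
    if (count <= 4) return index;   // leaf size is a tuning knob

    bool splitX = (bounds.max.x - bounds.min.x) > (bounds.max.y - bounds.min.y);
    int  mid    = first + count / 2;
    std::nth_element(shapes.begin() + first, shapes.begin() + mid,
                     shapes.begin() + first + count,
                     [splitX](const AABB& a, const AABB& b) {
                         return splitX ? a.Center().x < b.Center().x
                                       : a.Center().y < b.Center().y;
                     });

    nodes[index].left  = Build(nodes, shapes, first, mid - first);
    nodes[index].right = Build(nodes, shapes, mid, first + count - mid);
    nodes[index].count = 0;   // now an internal node
    return index;
}
```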

The one caveat is raycasts, which also need to test against dynamic objects. For this, though, I could keep a separate dynamic AABB tree or something, and when I perform a raycast I can traverse both the dynamic object tree and the static object tree to check for hits.
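
Roughly what I picture for that raycast path, with the per-tree queries left abstract since the two trees can be entirely different structures (names illustrative):

```cpp
#include <optional>

struct RayHit { float t; int shapeId; };   // t = hit fraction along the ray

// Raycast both trees and keep the nearer hit. The per-tree raycasts are
// passed in as callables so this doesn't care what the trees look like.
template <class QueryStatic, class QueryDynamic>
std::optional<RayHit> RaycastWorld(QueryStatic  raycastStatic,
                                   QueryDynamic raycastDynamic) {
    std::optional<RayHit> a = raycastStatic();
    std::optional<RayHit> b = raycastDynamic();
    if (a && b) return (a->t < b->t) ? a : b;
    return a ? a : b;
}
```

Since the dynamic tree only ever holds a handful of objects, the second traversal should be cheap.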

Has anyone here ever done something like this before? Any advice?

> hierarchy each time (which is expensive).

Some people use B*-trees, others do not. But only trees scale. Just put ~100 objects into each leaf to justify the overhead.
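
As a sketch, all I mean by wide leaves is a split threshold (the 100 is a tuning knob, not a magic number):

```cpp
// Stop splitting once a node holds ~100 objects and brute-force scan the
// leaf; the wide leaf amortizes the per-node overhead of the tree.
constexpr int kLeafCapacity = 100;

bool ShouldSplit(int objectCount) {
    return objectCount > kLeafCapacity;
}
```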

I do not understand why you make so many assumptions, or why the server simulation has to suffer from client-only concerns. Just recently I read that fully consistent simulation is almost impossible: "almost consistent" is baked into the design of the FPU, whose designers never anticipated that people would want replays.

I hate AABBs. As physics tells us, space is isotropic. I like spheres and BSPs.


I'm worried that if I do warm starting on the server and not the client, then the server will resolve collisions with significantly faster convergence than the client can, which would cause a desync and a snappy correction. I understand that I won't have perfect determinism without, say, fixed-point arithmetic, but I'd like to stay as consistent as possible. Currently the engine performs just fine without warm starting anyway.

Both the client and the server need, at times, to look back in time for things like movement prediction and historical raycasts for lag compensation. When this happens I need to roll back every relevant dynamic object to its position at a given past simulation tick. If those objects lived in the BVH, updating all of them would be too expensive, as I'll be doing this multiple times per frame.
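
As a sketch, what I have in mind is a per-object ring buffer of past snapshots, so a historical query is just a read and never touches the BVH (sizes and names illustrative):

```cpp
#include <array>
#include <cstdint>

struct Vec2 { float x, y; };

struct StateSnapshot {
    Vec2  position;
    float orientation;
    Vec2  velocity;
    float angularVelocity;
};

constexpr uint32_t kHistoryTicks = 64;   // ~1 second at 60 Hz; tune to taste

// Per-object history; answering "where was this body at tick T?" is a read.
struct BodyHistory {
    std::array<StateSnapshot, kHistoryTicks> snapshots;

    void Record(uint32_t tick, const StateSnapshot& s) {
        snapshots[tick % kHistoryTicks] = s;
    }

    // Only valid for ticks no older than kHistoryTicks.
    const StateSnapshot& At(uint32_t tick) const {
        return snapshots[tick % kHistoryTicks];
    }
};
```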

I guess my problem is that I never understood why I should use constant time steps in simulations (see my other posts). I read about swept volumes here on GameDev all the time, so apparently detecting collisions in between time steps is known to the skilled programmer ;-). With sufficient accuracy the user should not be able to see the difference between server and client simulation. I think I can get a correct simulation, not one which merely looks consistent, with little more effort. Sorta like "in the end, honesty pays off". Right now I am thinking about "stiff" systems and implicit integration (e.g. Verlet). This gives energy conservation. For implicit integration one needs to solve a system of nonlinear equations... which cannot be done exactly. I would like to see some text which states: for double the CPU time, energy is conserved better by a factor of 10 by adding one Newton step to damp the energy drift.
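
For reference, a bare sketch of the Verlet step I mean, for a single particle (strictly, velocity Verlet is a symplectic rather than implicit method, but this is where the good energy behaviour comes from):

```cpp
struct Vec2 { float x, y; };

// Plain velocity Verlet for one particle. accel(x) is force/mass at x.
void VelocityVerletStep(Vec2& x, Vec2& v, Vec2 (*accel)(Vec2), float dt) {
    Vec2 a0 = accel(x);
    x.x += v.x * dt + 0.5f * a0.x * dt * dt;
    x.y += v.y * dt + 0.5f * a0.y * dt * dt;
    Vec2 a1 = accel(x);               // re-evaluate at the new position
    v.x += 0.5f * (a0.x + a1.x) * dt;
    v.y += 0.5f * (a0.y + a1.y) * dt;
}
```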

I think there is one single source of client/server desync, and that is lag in client/server communication, or variation in that lag. I still do not know why high-speed traders can communicate over glass fibre with almost no lag overhead while we mere mortals have endless lag even within one country. Hey, admins and telcos: throttle big downloads and streams all you want, but let my small UDP packets pass without paying a lag toll!

This topic is closed to new replies.
