
How to structure my pure client/server model properly?

Started September 05, 2016 11:49 AM; 12 comments, last by deftware

Guys, I'm kind of a newbie at networking, especially when it comes to real-time stuff like games.

I'm making a first-person shooter, and I decided to use the simplest networking model for starters. From the research I've done, that seems to be the pure client/server model that the old Quake used.

But I'm having a lot of problems, because nothing on the internet is easy enough to grasp, or maybe I'm just not good enough to figure it out on my own. I don't know.

Whatever the reason, my problems are the following.

1st: I'm wondering how to deal with clients that run at different frame rates.

For example, one client runs at 1 fps and another at 5 fps. That means the second one processes all 5 of the packets the server sends in a second, while the first one processes only 1 of the 5.

How do I solve this?

One way is to somehow tell RakNet: drop all the older packets and give me only the newest one. The other way is to send the player's fps to the server and base the number of packets sent per second on the current client fps.

And the second question: which is better to send, game states or key states?

I have other questions as well, but I'll ask them one at a time. :lol: Thanks for reading, of course. :rolleyes:

Why is there even a link between CRT refresh rates and network transfer speeds? Are you making the mistake of having a fixed timestep?

Stephen M. Webb
Professional Free Software Developer

Simple answer:
Don't allow different simulation rates. Update the simulation at the same frequency for all players, and render at a different framerate to suit each player.
Complex answer:
Use timestamps instead of ticks, and introduce time into all input values.
The simple answer is simpler, more predictable, and a whole lot more reliable. It's also quicker, as you're not introducing lerps and other tolerance-based comparisons where timestamps don't match exactly.
The important point is that you need to decouple the rendering framerate from the simulation tick rate. You can render the game at a different framerate from the update.
If you set a baseline minimum for the update frequency, clients who can render at a higher framerate than the simulation rate can interpolate between the two most recent states (see the sketch below). This does mean you're rendering one to two simulation ticks in the past, but that's usually acceptable, as you would otherwise be rendering the old frame anyway, and the effect is to reduce jagged movement, which improves the UX.
I tried doing things with timestamps and variable simulation rates. It doesn't make any sense: you gain nothing, and get a whole host of problems. Something about your game needs to be predictable and constant, and that is the simulation. The rendering can happen at any rate, and the client will still move predictably and have a consistent experience across hardware.
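As a rough sketch of that interpolation (the type and function names here are made up for illustration, not from any particular engine): keep the two most recent simulation states and blend between them based on how far the renderer is between ticks:

struct State
{
  float x, y, z;  // position; a real game would also carry orientation, etc.
};

static float lerp(float a, float b, float t)
{
  return a + (b - a) * t;
}

// alpha in [0, 1]: how far we are between the previous and the latest
// simulation tick. Rendering this blended state means drawing slightly
// in the past, which is what smooths the movement out.
State interpolate(const State& previous, const State& latest, float alpha)
{
  State out;
  out.x = lerp(previous.x, latest.x, alpha);
  out.y = lerp(previous.y, latest.y, alpha);
  out.z = lerp(previous.z, latest.z, alpha);
  return out;
}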

Use timestamps instead of ticks, and introduce time into all input values


And then take that to the point of counting all events in simulation timesteps, where each timestep is fixed: 60 per second, or 144 per second, or whatever.
I documented the canonical way to implement this a long time ago: http://www.mindcontrol.org/~hplus/graphics/game_loop.html
It still works very well!
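For illustration, "counting events in simulation timesteps" might look something like this (a minimal sketch; the names are mine, not from the article above):

#include <cstdint>

const int TICKS_PER_SECOND = 60;  // fixed simulation rate
const double SECONDS_PER_TICK = 1.0 / TICKS_PER_SECOND;

// Hypothetical input event, stamped with the simulation tick it applies to,
// so client and server can agree on exactly when it happened.
struct InputEvent
{
  uint32_t tick;  // which fixed timestep this input belongs to
  uint8_t keys;   // e.g. a bitmask of pressed movement keys
};

// Convert elapsed wall-clock seconds into a whole number of simulation ticks.
uint32_t tickForTime(double secondsSinceStart)
{
  return static_cast<uint32_t>(secondsSinceStart / SECONDS_PER_TICK);
}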
enum Bool { True, False, FileNotFound };

Figured out all the stuff I asked here.

:lol:

Guys, I didn't understand anything from what you said at first, but then I found the best and most wonderful article on the internet (about game loops): http://gameprogrammingpatterns.com/game-loop.html

Now that I've read it, I've re-read all your posts, and they make perfect sense. :cool:

OK, in order to keep things simple and not run into very-hard-to-find bugs and floating-point rounding errors, I've chosen to run the logic with a fixed timestep.

The rendering, on the other hand, can be done perfectly fine with a variable timestep, since rendering doesn't suffer any problems from it (except motion blur and maybe something else I don't care about).

I just need to figure out 3 things.

1st: how do I run two loops concurrently on one processor?

2nd: what fixed timestep should I use for the logic (30 fps, 60 fps, 120 fps?)?

3rd: how do I deal with different render and logic ticks/frames? But I'll worry about that once I've finished the previous steps.

EDIT: These questions have drifted quite far from multiplayer programming, but that is the final goal, so I guess it's fine.


1. You don't run 2 loops concurrently on 1 processor. Re-read the article you linked, paying special attention to the bit near the end where processInput is called every frame but update is only called every MS_PER_UPDATE milliseconds. That lets you have 1 loop that updates 2 systems at different rates, and you can extend it to 3 or more systems (see the sketch after this list).

2. Whatever you like, depending on what you mean by 'logic'. Try them and see; it's only one or two lines of code to change in each case.

3. Ideally, what you render graphically is based on interpolation or extrapolation of the logical data, so that you don't get jerky movement. The 'Fix Your Timestep' article talks about this.
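Here's a minimal sketch of that extension to a third system (the 20Hz network rate and the function names are my own assumptions, purely for illustration): one loop, one clock, and a separate time accumulator per fixed-rate system:

#include <chrono>

// Hypothetical hooks; a real game supplies these.
void processInput() {}
void update() {}       // one fixed simulation step
void sendNetwork() {}  // one fixed network-send step
void render() {}

double now()
{
  using namespace std::chrono;
  return duration<double>(steady_clock::now().time_since_epoch()).count();
}

int main()
{
  const double UPDATE_STEP = 1.0 / 60.0;   // simulate at 60 Hz
  const double NETWORK_STEP = 1.0 / 20.0;  // send packets at 20 Hz

  double previous = now();
  double updateLag = 0.0;
  double networkLag = 0.0;

  while (true)
  {
    double current = now();
    double elapsed = current - previous;
    previous = current;
    updateLag += elapsed;
    networkLag += elapsed;

    processInput();  // every frame

    while (updateLag >= UPDATE_STEP)  // fixed-rate simulation
    {
      update();
      updateLag -= UPDATE_STEP;
    }

    while (networkLag >= NETWORK_STEP)  // fixed-rate networking
    {
      sendNetwork();
      networkLag -= NETWORK_STEP;
    }

    render();  // as often as the machine allows
  }
}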

So you're saying that every mainstream engine, like Unity and Unreal and who knows what else, uses that same game loop hplus0603 posted, does the same interpolation and extrapolation stuff for rendering, and there's no other way around it? :blink:

This code here doesn't use any interpolation or extrapolation, so I guess this is how I get the jerky movement you talked about.


double previous = getCurrentTime();
double lag = 0.0;  // how far real time has run ahead of the simulation
while (true)
{
  double current = getCurrentTime();
  double elapsed = current - previous;  // real time spent on the last frame
  previous = current;
  lag += elapsed;

  processInput();  // once per rendered frame

  // Catch the simulation up to real time in fixed-size steps.
  while (lag >= MS_PER_UPDATE)
  {
    update();
    lag -= MS_PER_UPDATE;
  }

  render();  // note: no interpolation factor is passed in
}

Well, no, I didn't say that. But it's very likely that they use some variation of it. You can see from Unity's docs that it has an Update function called once per frame, and a FixedUpdate called every N milliseconds (where you can configure the value of N). There's also an optional interpolation setting on Rigidbodies that makes movement smoother. You don't strictly need it, but it helps, which is why Unity recommends enabling it for the player characters at least.

Some engines, especially on console games, just pick a frame rate they're targeting and do everything based on that. If they're rendering at 60Hz then they can update at 60Hz as well. If they have to drop down to rendering at 30Hz, 60Hz updates still work fine; you just get 2 of them per frame. But if you need a rendering rate that isn't a factor of the update rate, movement will need to be smoothed somehow.

Getting back on topic: the network update rate is quite different, and matching it to rendering doesn't matter so much, but extrapolation and interpolation are arguably even more important there, since the actual rate of updates will fluctuate. I usually like to go for a very simple system that just interpolates between the previous location and the most recent location; while that's fine for an MMO, it won't be fine for most FPS players, where an awful lot can happen in a tenth of a second.
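A minimal sketch of that simple approach, assuming made-up types and a roughly known interval between server updates (each remote entity keeps its two most recent positions and the time the latest one arrived):

struct Vec3 { float x, y, z; };

struct RemoteEntity
{
  Vec3 previousPos;       // position from the packet before last
  Vec3 latestPos;         // position from the most recent packet
  double latestTime;      // when the most recent packet arrived (seconds)
  double packetInterval;  // expected time between server updates (seconds)
};

// Blend from the previous position toward the latest one over
// one expected packet interval.
Vec3 displayPosition(const RemoteEntity& e, double nowSeconds)
{
  double t = (nowSeconds - e.latestTime) / e.packetInterval;
  if (t > 1.0) t = 1.0;  // beyond this you'd have to extrapolate instead
  float ft = static_cast<float>(t);
  Vec3 out;
  out.x = e.previousPos.x + (e.latestPos.x - e.previousPos.x) * ft;
  out.y = e.previousPos.y + (e.latestPos.y - e.previousPos.y) * ft;
  out.z = e.previousPos.z + (e.latestPos.z - e.previousPos.z) * ft;
  return out;
}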

I can say with some degree of certainty, though, that they aren't running a bunch of different game loops on different threads. That would be very hard to coordinate correctly, with no real benefit, since the locking of game state that you'd have to do would break the timing.

But if you need a rendering rate that isn't a factor of the update rate, movement will need to be smoothed somehow.

True. There really is no simple solution to this game loop stuff.
