The problem on the client is that running more simulation steps at once means more user commands are sent at once, which will add latency on the server.
Yes, a slower frame rate and larger batches of simulation steps lead to more buffered simulation steps.
What I'm trying to say is that this should look, to your system, almost exactly as if the client simply had a slower and more jittery network connection.
If you adjust clocks correctly, you don't need to do any other work (as long as the server does, indeed, buffer commands and apply timesteps correctly).
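For concreteness, here is a minimal sketch of the kind of clock adjustment I mean, assuming a fixed 60 Hz tick rate; ClockSync, onServerFeedback, and the other names are placeholders, not any particular engine's API. The server reports how early or late each command arrived, and the client nudges its tick estimate for small errors and snaps it for large ones.

```cpp
// Minimal sketch of client-side clock adjustment, assuming a fixed 60 Hz tick
// rate. ClockSync, onServerFeedback, etc. are placeholder names, not any
// particular engine's API.
#include <cstdint>

class ClockSync {
public:
    // Server feedback: how many ticks early our command arrived for the tick
    // the server just simulated (negative means it arrived late).
    void onServerFeedback(int64_t arrivalSlackTicks) {
        // Aim to arrive about one tick early (an assumed safety margin).
        int64_t error = arrivalSlackTicks - 1;
        int64_t magnitude = error < 0 ? -error : error;
        if (magnitude > kSnapThreshold) {
            estimatedTick_ -= error;              // large error: snap the clock
        } else if (error != 0) {
            pendingNudge_ = error > 0 ? -1 : 1;   // small error: drift by one tick
        }
    }

    // Called once per locally simulated step: returns the tick to stamp on that
    // step's user command. A backward adjustment repeats or rewinds ticks,
    // which is where the duplicate time stamps discussed below come from.
    int64_t nextCommandTick() {
        estimatedTick_ += 1 + pendingNudge_;
        pendingNudge_ = 0;
        return estimatedTick_;
    }

private:
    static constexpr int64_t kSnapThreshold = 8;  // ~133 ms at 60 Hz
    int64_t estimatedTick_ = 0;
    int64_t pendingNudge_ = 0;
};
```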
So when the server buffer is too full because of extra latency, I will discard a certain number of user commands so the client can skip ahead.
Are you not time stamping each command for the intended server simulation tick? You should be.
You only need to throw away user commands when you receive two separate commands time stamped with the same tick, which should only happen if the clock is adjusted backwards in a snap, or if the network/UDP duplicates a packet.
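Roughly, the server side can look like this (the type names and payload fields here are invented for illustration): a per-client buffer keyed by tick that keeps the first command it sees for each tick and ignores anything stamped for a tick it has already simulated.

```cpp
// Rough sketch of a per-client server command buffer, assuming every command
// arrives stamped with the server tick it is intended for. Names are made up.
#include <cstdint>
#include <map>
#include <optional>

struct UserCommand {
    int64_t  tick = 0;
    float    moveX = 0, moveY = 0;   // example payload
    uint32_t buttons = 0;
};

class CommandBuffer {
public:
    // Called for every command received from the network (possibly many per
    // packet when a low-frame-rate client batches several steps).
    void receive(const UserCommand& cmd) {
        // If a command for this tick is already buffered (backwards clock snap
        // or a duplicated UDP packet), keep the first one and drop this one.
        buffered_.emplace(cmd.tick, cmd);
    }

    // Called once per server simulation tick: the command to apply this tick,
    // or nothing if the client's commands haven't arrived yet.
    std::optional<UserCommand> consume(int64_t serverTick) {
        // Commands stamped for ticks we've already simulated arrived too late.
        buffered_.erase(buffered_.begin(), buffered_.lower_bound(serverTick));
        auto it = buffered_.find(serverTick);
        if (it == buffered_.end()) return std::nullopt;
        UserCommand cmd = it->second;
        buffered_.erase(it);
        return cmd;
    }

private:
    std::map<int64_t, UserCommand> buffered_;
};
```

A 5 fps client will deposit a dozen commands at once; they simply sit in the buffer until their ticks come due.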
And, yes, if the client runs at 5 fps, then you need to buffer 200 ms worth of commands on the server. There is no way around that.
If you throw away any of that data, then you will just be introducing more (artificial) packet loss to the client, and the experience will be even worse.
You know that, because the client won't send another batch of commands until 200 ms later, you actually need that amount of buffering.
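To put numbers on it, assuming 60 Hz simulation ticks (sampleInput, simulateTick, and sendPacket below are placeholders for whatever your engine actually does): a 5 fps client accumulates 200 ms per frame, which is 12 ticks, so each send carries about 12 commands that the server must hold until their ticks come due.

```cpp
// Sketch of the client loop at a low frame rate: a 5 fps frame spans 12 ticks
// at 60 Hz, so roughly 12 commands go out in each packet.
#include <cstdint>
#include <vector>

struct UserCommand { int64_t tick = 0; float moveX = 0, moveY = 0; };

UserCommand sampleInput();                         // placeholder: read controls
void simulateTick(const UserCommand&);             // placeholder: local prediction
void sendPacket(const std::vector<UserCommand>&);  // placeholder: ship to server

constexpr double kTickDelta = 1.0 / 60.0;          // ~16.7 ms per simulation tick

void clientFrame(double frameDeltaSeconds, int64_t& nextTick, double& accumulator) {
    std::vector<UserCommand> batch;
    accumulator += frameDeltaSeconds;              // 0.2 s per frame at 5 fps
    while (accumulator >= kTickDelta) {            // run the ticks the frame missed
        UserCommand cmd = sampleInput();
        cmd.tick = nextTick++;                     // stamp with the intended server tick
        simulateTick(cmd);
        batch.push_back(cmd);
        accumulator -= kTickDelta;
    }
    sendPacket(batch);                             // ~12 commands -> ~200 ms of buffering
}
```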
Separately, if you really want to support clients with terrible frame rates, you may want to put rendering in one thread, and input reading, simulation, and networking in another.
When it's time to render to the screen, take a snapshot of the simulation state and render that. That way, the user's control input won't be bunched up during the frame render, so at least the controls can respond to input at 60 Hz, even if the game can't render at that rate.
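A rough shape for that split, assuming a 60 Hz fixed-step simulation (SimState, pollInput, stepSimulation, sendAndReceive, and renderFrame are stand-ins for whatever your engine provides):

```cpp
// Simulation and networking run at a fixed 60 Hz in one thread; the render
// thread copies the latest snapshot and draws it at whatever rate it manages.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct SimState { /* positions, orientations, ... */ };

void pollInput();                       // placeholder: sample controls
void stepSimulation(SimState&);         // placeholder: advance one fixed tick
void sendAndReceive();                  // placeholder: network send/receive
void renderFrame(const SimState&);      // placeholder: draw a snapshot

std::mutex        gSnapshotMutex;
SimState          gSnapshot;            // last completed simulation state
std::atomic<bool> gRunning{true};

void simulationThread() {
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::microseconds(16667);   // one 60 Hz tick
    SimState state;
    auto next = clock::now();
    while (gRunning) {
        pollInput();                    // input is read every 16.7 ms...
        stepSimulation(state);
        sendAndReceive();
        {
            std::lock_guard<std::mutex> lock(gSnapshotMutex);
            gSnapshot = state;          // publish the finished tick for the renderer
        }
        next += step;
        std::this_thread::sleep_until(next);
    }
}

void renderThread() {
    while (gRunning) {
        SimState copy;
        {
            std::lock_guard<std::mutex> lock(gSnapshotMutex);
            copy = gSnapshot;           // grab the latest snapshot
        }
        renderFrame(copy);              // ...even if this takes 200 ms
    }
}
```

A slow render then only delays the picture; it doesn't bunch up input sampling or the outgoing command stream.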