The point of this sort of system is that a state update takes some length of time (say, 'M') to reach the client from the server, and the client then takes a further length of time (say, 'N') before it fully reflects that change. This helps ensure that by the time the client reaches the current state update, it has already received the next one from the server.
You can adjust N slightly on a per-message basis to account for fluctuations in M. In other words, if a message arrives sooner than expected, you might want to take longer to interpolate towards it, and vice versa. This keeps movement smooth.
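To make the mechanism concrete, here's a minimal sketch of a client-side interpolation buffer. Everything here (the `Snapshot` and `InterpolatedEntity` names, the 1-D position, the assumption of in-order delivery) is hypothetical illustration, not a specific engine's API; the key idea is that rendering happens N seconds behind the newest known server time, so there is almost always a pair of snapshots to blend between.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    server_time: float  # timestamp the server stamped on this state
    position: float     # 1-D position, for simplicity

class InterpolatedEntity:
    """Renders an entity N seconds in the past so a pair of snapshots
    is (almost) always available to interpolate between."""

    def __init__(self, interp_delay: float):
        self.interp_delay = interp_delay  # this is 'N'
        self.snapshots: list[Snapshot] = []

    def on_snapshot(self, snap: Snapshot) -> None:
        # Sketch assumes snapshots arrive in order; a real client
        # would insert by server_time and discard stale entries.
        self.snapshots.append(snap)

    def sample(self, client_time: float) -> float:
        render_time = client_time - self.interp_delay
        # Find the pair of snapshots straddling render_time and blend.
        for a, b in zip(self.snapshots, self.snapshots[1:]):
            if a.server_time <= render_time <= b.server_time:
                t = (render_time - a.server_time) / (b.server_time - a.server_time)
                return a.position + t * (b.position - a.position)
        # Ran out of data: hold the newest known position.
        return self.snapshots[-1].position
```

The per-message adjustment described above would amount to nudging `interp_delay` up or down a little each time `on_snapshot` fires early or late, rather than leaving it fixed.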
It is reasonable to consider decreasing N during play so that it's not too long, imposing unnecessary client-side latency. How far you can reduce it is determined by M and your send rate.
It's also reasonable to consider increasing N during play so that it's not too short, leaving entities stuttering around: pausing in the time gaps between reaching their previous snapshot position and receiving the next snapshot from the server. This is determined by the variance of M (jitter) and your send rate.
Often N is left at a fixed value (perhaps set in the configuration), picked by developers as a tradeoff between the amount of network jitter they expect to contend with and the degree of responsiveness the players need to have. And if you're happy occasionally extrapolating instead of just interpolating, i.e. you're willing to trade some accuracy for lower latency, then you can reduce N even further.
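The interpolate-or-extrapolate choice can be sketched in a few lines. This is an illustrative snippet (the `Snapshot` fields and linear-velocity assumption are mine, not from any particular engine): when the render time has overtaken the newest snapshot, continue along the last known velocity instead of freezing in place.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    server_time: float
    position: float

def sample(a: Snapshot, b: Snapshot, render_time: float) -> float:
    """Interpolate between snapshots a and b; extrapolate past b if
    render_time has overtaken the newest snapshot (less latency,
    at the cost of possible mispredictions)."""
    velocity = (b.position - a.position) / (b.server_time - a.server_time)
    if render_time <= b.server_time:
        # Normal case: blend between the two known states.
        return a.position + velocity * (render_time - a.server_time)
    # No newer snapshot yet: guess by continuing at constant velocity.
    return b.position + velocity * (render_time - b.server_time)
```

The cost of extrapolation is that the guess can be wrong when the entity changes direction, which then has to be corrected (ideally smoothly) once the real snapshot arrives.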
The idea of "catching up earlier" doesn't make sense in isolation. Earlier than what? The idea is that you always allow yourself a certain amount of time to interpolate smoothly towards the next snapshot, because you still want to be interpolating when the subsequent one comes in. You don't want to be decreasing that delay over time, because of the stuttering problem above, unless you have sufficient information to be able to do so.