Building on what hplus described, there are several things you can do about those problems.
You also didn't mention the kind of application they were developing. Games, business software, banking: different kinds of programs have very different needs.
Geolocation is quite common for game matchmaking servers. You want to put people in the same location in the same game. While it isn't precise, if you've got four IP addresses showing as registered in Sydney, Moscow, London, and South Africa, those four should never be automatically placed in a game together. If for some reason you do have people who intentionally join their long-distance friends through a direct invitation, be prepared to radically increase the timeouts.
Fallbacks for location are important. For any client that doesn't have geolocation information, consider their network route, or anything else you're able to use. If you still have no clue where they are, look at ping times to various locations around the globe and guess. If you still have no idea where they're located, blindly bump the timeouts.
If you've got approximate location, try to matchmake within a thousand kilometers or so, then 1500, 2000, 3000, progressively falling back with an ever-increasing penalty. 5000 km is a long way, and anything over that should probably never get connected automatically.
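That progressive fallback can be sketched roughly as follows. This is a minimal illustration, not production matchmaking code: the tier thresholds match the distances above, but the penalty scheme and function names are my own assumptions.

```python
import math

# Radius tiers from the text; anything beyond 5000 km never auto-matches.
RADIUS_TIERS_KM = [1000, 1500, 2000, 3000, 5000]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def match_penalty(a, b):
    """Return a penalty tier for pairing two players (0 is best),
    or None if they are too far apart to be matched automatically."""
    d = haversine_km(a[0], a[1], b[0], b[1])
    for tier, radius in enumerate(RADIUS_TIERS_KM):
        if d <= radius:
            return tier
    return None  # > 5000 km: require an explicit invite instead

sydney = (-33.87, 151.21)
melbourne = (-37.81, 144.96)
london = (51.51, -0.13)
print(match_penalty(sydney, melbourne))  # ~715 km apart: tier 0
print(match_penalty(sydney, london))     # ~17000 km apart: None
```

A real matchmaker would feed the penalty into its scoring alongside skill and queue time, but the shape is the same: widen the search ring, and charge more for each widening.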
For business software, if you know their IP address is located in a problematic country then yes, you should increase their timeouts. You don't need to do it for everyone, if you know they're on the US east coast and you've got servers hosted on Amazon's US east coast, keep those times down. But if they're coming from South Africa or Sydney or Tokyo or Calcutta and connecting to those same US east coast servers, bump the timeout.
Another option is dynamically adjusting timeouts based on round-trip times. Allowing 2x or 3x the RTT can help, with an appropriate minimum to compensate for people on the same LAN. Thus if machines normally wait about 2 ms for data, you might wait 4 or 6 ms before retransmitting; two machines communicating at 80 ms might delay 160 ms or 240 ms before retransmitting, and two machines that are normally at 120 ms might delay 240 ms or 360 ms.
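In code, that rule is a one-liner; the multiplier and the 5 ms floor below are illustrative assumptions, chosen so the same-LAN case from the numbers above doesn't collapse to an instant timeout.

```python
def retransmit_timeout_ms(rtt_ms, multiplier=3, floor_ms=5):
    """Scale the retransmission timeout off the measured round-trip time.
    The floor keeps same-LAN peers (sub-millisecond RTT) from timing
    out the moment a packet is slightly late."""
    return max(rtt_ms * multiplier, floor_ms)

print(retransmit_timeout_ms(2))    # LAN-ish peer: 6 ms
print(retransmit_timeout_ms(80))   # 240 ms
print(retransmit_timeout_ms(120))  # 360 ms
```

Real transports smooth the RTT estimate over many samples (as TCP does) rather than using the latest raw measurement, but the scaling idea is the same.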
Consider having a cutoff, particularly for games. You might decide that it doesn't matter if the players are best friends separated by distance; if the RTT exceeds some specific time, they're not allowed to play. What that value is depends on the game. If you've got an action brawler, the game will be unplayable once round-trip times climb much past 150 ms or so.
Similar to round-trip latency, gameplay imposes bandwidth requirements. If you've got a game that requires about 1 MB per minute, or 250 KB/s up and down, whatever the value is, that is what the game requires. It doesn't matter why the required rate cannot be maintained; whether they've got a modem from the 1990s, or somebody in their home is torrenting the newest Linux kernel drop or streaming 4K movies, the cause is irrelevant. If they don't have the bandwidth to play, tell them so.
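The gate itself is deliberately dumb, which is the point: the check below uses an assumed 250 KB/s example requirement and made-up function names, and it never asks why the measurement is low.

```python
def can_sustain(measured_kbps_up, measured_kbps_down, required_kbps=250):
    """Hard bandwidth gate. The 250 KB/s figure is an example
    requirement; the cause of a shortfall (old modem, household
    torrent, 4K streaming) is deliberately not considered."""
    return (measured_kbps_up >= required_kbps
            and measured_kbps_down >= required_kbps)

print(can_sustain(300, 280))  # True: let them play
print(can_sustain(300, 120))  # False: tell the player, don't just lag
```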
The cutoff can be hard or soft, and games have implemented both. Voting to drop players based on connectivity is somewhat common, meaning you can still play with your friend on the opposite side of the world; when the prompt shows up to boot the player, you can indicate that you want to keep playing despite their 400 ms latency and 0.5 kbps bandwidth.
That kind of vote is basically the same as increasing the timeouts as you described, but allowing players the choice instead of making it for them.
Caching and queuing systems can potentially cause problems, particularly when services become completely unreachable and connections are lost. How you handle that depends tremendously on what the data is. You don't want half-completed transactions when players are selling items to one another, and when someone is trying to pick up an ultra-rare or unique item that completes their collection, you need to ensure it is handled well; on the other hand, aborting a quick match might be done with no player-facing repercussions. Outside of games, filling out a long sequence of government forms and then having the caching/queuing system fail on submission can cause major discontent, while dropping forum posts from such a system is an annoyance but less of a problem. Similarly, if your banking software times out and won't let you transfer money, a long timeout is far less of a problem than a failed transaction.
Most systems need idempotent solutions. You don't want to duplicate a transaction because it came across the wire twice, and as connectivity problems mount, the risk of duplicate communications increases tremendously.
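The usual pattern is an idempotency key: the client attaches a unique token to each logical transaction, and the server replays the stored result when it sees the same token again. A minimal sketch, with all names my own invention:

```python
import uuid

class TransactionLedger:
    """Minimal idempotency sketch: replays of a key return the
    original result instead of applying the operation twice."""

    def __init__(self):
        self._results = {}  # idempotency key -> stored result

    def apply(self, key, operation):
        if key in self._results:       # duplicate arrived over the wire
            return self._results[key]  # replay the stored result
        result = operation()           # first arrival: actually execute
        self._results[key] = result
        return result

# A toy "sell item for 50 gold" transaction.
balance = {"gold": 100}
def sell_item():
    balance["gold"] += 50
    return balance["gold"]

key = str(uuid.uuid4())  # generated client-side, once per transaction
ledger = TransactionLedger()
print(ledger.apply(key, sell_item))  # 150
print(ledger.apply(key, sell_item))  # still 150, not 200
```

In a real system the ledger lives in durable storage and entries eventually expire, but the contract is the same: retrying a request must be safe, because on a flaky connection the client will retry.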