Assuming we are dealing with authoritative servers, what other approaches could there be (if any)?
Most of what we know about the more common techniques comes from public post-mortems or public writeups of how the networking was accomplished.
The one described in no-bug's article is typically called the Counter-Strike / Valve model, since that is where it was widely publicized. The innovation at the time was that the server would effectively rewind time, validate that the client's actions could be properly inserted at that moment, then re-simulate forward from that point. Most of the rest of the model was based on existing networking models.
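A minimal sketch of that rewind-and-validate idea: the server keeps a short history of where each entity was, rewinds to the shooter's reported timestamp to test the hit, then carries on from the present. All names here (`LagCompensator`, `validate_hit`, the history window, the hit tolerance) are illustrative assumptions, not anyone's actual implementation.

```python
# Hedged sketch of server-side lag compensation ("rewind time"):
# keep recent snapshots per entity, rewind to the shooter's timestamp,
# test the shot there, then continue simulating from the present.
from collections import deque

HISTORY_SECONDS = 1.0  # how far back the server is willing to rewind


class LagCompensator:
    def __init__(self):
        # per-entity buffer of (timestamp, position) snapshots
        self.history = {}

    def record(self, entity_id, timestamp, position):
        snaps = self.history.setdefault(entity_id, deque())
        snaps.append((timestamp, position))
        # drop snapshots that have fallen outside the history window
        while snaps and timestamp - snaps[0][0] > HISTORY_SECONDS:
            snaps.popleft()

    def position_at(self, entity_id, timestamp):
        """Return the recorded position closest to `timestamp`."""
        snaps = self.history.get(entity_id)
        if not snaps:
            return None
        return min(snaps, key=lambda s: abs(s[0] - timestamp))[1]


def validate_hit(comp, shooter_time, target_id, aim_point, tolerance=0.5):
    """Rewind the target to the shooter's view of time and test the shot."""
    past_pos = comp.position_at(target_id, shooter_time)
    if past_pos is None:
        return False
    dx = aim_point[0] - past_pos[0]
    dy = aim_point[1] - past_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance
```

A real implementation would interpolate between snapshots and rewind full hitboxes rather than point positions, but the shape is the same: history, rewind, validate, resume.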
The Doom model was only used for a short time, but it was widely published at the time in the PCGPE (the PC Game Programmer's Encyclopedia). There wasn't anything fancy going on, and relative to what you see today the network code was laggy and terrible at anything except local-network play. It was enough to get the job done.
The Quake model, common in late-1990s action games, was to simulate with what you had and then correct positions when new data arrived. The client still did dead reckoning, predicting that nothing would change. (Note that it did not rewind time the way the Counter-Strike model would eventually introduce; it only ran the simulation and snapped/slid entities to the corrected locations on each update.) These models also tended to use a data compression scheme of sending large block deltas, which had sequencing difficulties. On highly reliable networks they worked well enough, but on unreliable networks performance became terrible.
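The dead-reckoning-plus-correction loop can be sketched roughly like this. The class name, the blend factor, and the snap threshold are all made-up illustration values, not anything from the actual Quake source.

```python
# Sketch of Quake-style client extrapolation: between server updates
# the client dead-reckons with the last known velocity, then slides
# the entity toward the corrected position (or snaps, if the error is
# large) when a new authoritative snapshot arrives.

class PredictedEntity:
    def __init__(self, pos, vel):
        self.pos = list(pos)      # currently displayed position
        self.vel = list(vel)      # last known velocity
        self.target = list(pos)   # latest authoritative position

    def extrapolate(self, dt):
        # dead reckoning: assume nothing changed since the last update
        for i in range(len(self.pos)):
            self.pos[i] += self.vel[i] * dt

    def server_update(self, pos, vel):
        # new authoritative data: retarget, but do not teleport yet
        self.target = list(pos)
        self.vel = list(vel)

    def correct(self, blend=0.3, snap_distance=2.0):
        # slide toward the target; snap outright when the error is big
        err = [t - p for t, p in zip(self.target, self.pos)]
        dist = sum(e * e for e in err) ** 0.5
        if dist > snap_distance:
            self.pos = list(self.target)  # snap
        else:
            self.pos = [p + e * blend for p, e in zip(self.pos, err)]
```

The snap-versus-slide split is why players on bad connections appeared to teleport: small errors were smoothed, large ones were not.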
The Quake 3 model was picked up by a lot of other networking systems of the era, and it improved on the Quake model. Instead of large deltas over assumed-reliable packets, a sequence of small deltas was combined and compressed together. Every packet sent included the sequence number of the last packet the sender had handled from the other side. So instead of a sliding window preserved for selective retransmission, the entire unacknowledged window was retransmitted in every packet until acknowledged, at which point the acknowledged tail of the window was dropped. This used more bandwidth, but with the widespread adoption of faster internet access that wasn't a problem. Again, clients used dead reckoning and estimation, but they did not rewind time to insert data the way the Counter-Strike method does, since that hadn't been invented yet.
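The ack scheme reduces to a very small data structure: queue deltas, resend everything unacknowledged in every packet, and drop the window tail on ack. This is a simplified sketch of that bookkeeping under assumed names (`DeltaWindow`, dict-shaped packets), not the actual Quake 3 wire format.

```python
# Sketch of the Quake3-style acknowledgement scheme: every outgoing
# packet carries ALL unacknowledged deltas plus the current sequence
# number; when an ack arrives, everything up to that sequence is
# dropped. No per-packet retransmission timers are needed.

class DeltaWindow:
    def __init__(self):
        self.sequence = 0
        self.pending = []  # list of (sequence, delta) not yet acked

    def queue_delta(self, delta):
        self.sequence += 1
        self.pending.append((self.sequence, delta))

    def build_packet(self):
        # resend the whole unacknowledged window every time; costs
        # bandwidth, but any single packet getting through is enough
        return {"sequence": self.sequence, "deltas": list(self.pending)}

    def on_ack(self, acked_sequence):
        # the peer confirmed everything up to acked_sequence; drop it
        self.pending = [(s, d) for s, d in self.pending
                        if s > acked_sequence]
```

The trade is explicit: redundant bytes in exchange for never having to detect or recover a lost packet.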
Another historic model was used in the RTS genre and bears mentioning here. It was used by Westwood in the Dune games and Command & Conquer, by Blizzard in Warcraft and Warcraft 2, and by the Age of Empires folks. It is best known from the 1500 Archers paper. Some systems went full lockstep, others partial lockstep with partially independent simulations. Even though what you saw on screen was fluid animation, the simulation ran at only 4 or 5 updates per second. All the events of a simulation step were forwarded for execution two simulation steps in the future. So if you issued a command by clicking at simulation step 2000, it would be queued for execution at simulation step 2002, optionally showing a pre-animation at simulation step 2001. That gave plenty of time for all clients to receive and process the command before executing it. You would hear a response immediately on your side so you knew the command was issued, e.g. "Yes sir!", followed by a pre-execution animation of turning or dust or whatever, followed by motion a half second later. Combined, this meant the simulation could be slow, just four updates per second, but the feedback could be instant to the player, and animation variations could give a visually pleasing experience.
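The step-2000 / step-2002 scheduling described above can be sketched as a tiny queue. The class and the fixed two-step delay constant are illustrative; real lockstep engines also have to agree on the delay and stall if a peer falls behind.

```python
# Sketch of lockstep command scheduling as described for the RTS
# model: a command issued during simulation step N is queued for
# execution at step N + 2, leaving a full step for every peer to
# receive and process it before anyone executes it.

EXECUTION_DELAY = 2  # simulation steps between issuing and executing


class LockstepQueue:
    def __init__(self):
        self.step = 0
        self.scheduled = {}  # step -> list of commands due that step

    def issue(self, command):
        # the local client plays its acknowledgement ("Yes sir!")
        # immediately; the state change happens two steps later
        execute_at = self.step + EXECUTION_DELAY
        self.scheduled.setdefault(execute_at, []).append(command)
        return execute_at

    def advance(self):
        """Run one simulation step; returns the commands due now."""
        self.step += 1
        return self.scheduled.pop(self.step, [])
```

At 4 updates per second, those two steps of delay are roughly half a second, which is exactly the gap the voice response and pre-animation are covering.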
Some of the effects of the 1500 Archers model are still useful and common in today's shooters. Many games that employ the Counter-Strike model for lag compensation also use the Archers model to help mask communication times. They can enqueue and broadcast the command while showing a 45+ms trigger-pull animation. They can communicate to the player that bullets take time to fly through the air by showing a tracer line for another 3 or more frames and a small puff of smoke, buying another 45-60ms. The local animations give the player a feeling of immediate responsiveness while also allowing some time for the machines to communicate.
Every game tends to incorporate different techniques; those are just a few of them. Generally they are not publicly discussed, but you will see the occasional conference talk or public post-mortem about features a game added to its networking toolkit. Most are fairly standard, commonplace techniques, but occasionally someone will document a groundbreaking one.