If power failure or outright hardware failure is not an issue (and oh boy... believe me, it is!), then you can protect against the "data loss because the process died" problem rather easily.
Note, however, that a runaway process can still very easily wreck your entire dataset even without aborting (runaway code could, e.g., just overwrite random memory locations!). If you haven't saved your data somewhere, you're in trouble.
I'm assuming something POSIX-like here, but you can do it on any other system, too. Spawn a "launcher" process that creates a large shared mapping, and have the launcher fork/exec the actual server thereafter. Whenever waitpid tells the launcher that the server exited in a non-clean way (WIFEXITED(status) == false), it fork/execs again right away, restarting the server (which reads its data from the same still-existing mapping). Otherwise it's a regular server exit: the launcher writes the data to disk and exits. Note that if you make the mapping file-based (not anonymous), you can skip the write-to-disk step; the OS will do it for you (just hopefully, power doesn't fail half-way!).
The launcher/watchdog and the actual server can even be in the same executable. In that case you only need to fork and can skip the execve: just call server_main_loop() if fork returned zero (i.e. you're in the child process). The restart logic should be simple enough that you can guarantee with 100% confidence that no bugs or failures will happen in there.
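Here's a minimal sketch of that single-executable variant, assuming POSIX and a file-backed mapping; server_main_loop(), world.dat and DATA_SIZE are placeholders for whatever your server actually does:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define DATA_SIZE (64UL * 1024 * 1024)   /* placeholder size of the shared world state */

    /* Placeholder for the actual server; it works directly on the mapped region. */
    static void server_main_loop(void *data)
    {
        (void)data;
        /* ... mutate the world state that lives inside the mapping ... */
        exit(EXIT_SUCCESS);                  /* clean exit -> launcher stops restarting */
    }

    int main(void)
    {
        /* File-backed shared mapping: the OS writes dirty pages back to world.dat for us. */
        int fd = open("world.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, DATA_SIZE) < 0) { perror("open/ftruncate"); return 1; }

        void *data = mmap(NULL, DATA_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        for (;;) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); return 1; }
            if (pid == 0) {                  /* child: we are the server */
                server_main_loop(data);
                _exit(EXIT_SUCCESS);
            }

            int status;                      /* parent: the watchdog */
            if (waitpid(pid, &status, 0) < 0) { perror("waitpid"); return 1; }
            if (WIFEXITED(status))
                break;                       /* regular exit, we're done */

            fprintf(stderr, "server died, restarting\n");
            /* crashed/killed: loop and fork again; the mapping (and the data) is still there */
        }

        msync(data, DATA_SIZE, MS_SYNC);     /* optional for MAP_SHARED, but forces the write-back now */
        return 0;
    }

If you run the server as a separate executable instead, the child would execve it and the launcher would hand over the mapping some other way (e.g. an inherited file descriptor), but the restart loop stays the same.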
But remember: there are still power failures, and there are hardware failures. If "all data since yesterday's save point gone" is not acceptable in such a case, you should really, really, really consider using a database: with transactions where you need them, without where you don't, and with at least eventual consistency.
This depends a lot on the situation: some things absolutely must be transacted, but not everything needs to be, and usually the entire world doesn't have to be consistent at all times, only some parts of it. Some things change 10 times per second (think hit points), others once every few seconds (picking up gold/items, trading with another player), and others change maybe twice per day, once per week, or less often (think guild membership, or achievements). Not all of them are equally important, and not all of them need to be equally consistent within a transaction, or within the world state.
Exactly which tool you should use is therefore hard to say. Most people will use more than a single tool, because no one tool serves everything well enough.
Storing blobs or "documents" in some kind of key-value store is usually much faster and handles much higher transaction counts. On the other hand, stock SQL databases are fast enough for some operations, they let you run an analysis (if one day you're inclined to do that!) that is well-nigh impossible otherwise (well, not impossible, but you know...), and they are very well suited to some tasks, making your life a lot happier.
Decide to add a highscore board a year later? With a stock SQL database, you replicate to a read replica (which takes like three commands to set up), run a SELECT * FROM characters ORDER BY score DESC LIMIT 20 or something similar on the replicated data, and there you go. That's it. Want to do the same thing with data stored in binary blobs or JSON documents? Here's a rope, go hang yourself.
Need to do 50,000 transactions per second? Well, good luck trying that with a SQL database. For a key-value store, that's not much of a challenge.