
Computer Entropy

Started by May 30, 2010 10:19 PM
28 comments, last by phresnel 14 years, 5 months ago
Quote: Original post by way2lazy2care
Quote: Original post by AndyEsser
Running a Windows machine with no Pagefile is guaranteed to fail eventually. Once you start running more and more memory-intensive applications it'll start to cause problems; as soon as you hit the physical memory maximum your machine will either BSOD, become unresponsive, or just altogether fail.


His point was that with 64-bit Windows and enough RAM you should never reach the maximum unless you're doing something ridiculous.


It's easy to break through physical memory barriers with memory-heavy applications, like video editing or production 3D rendering (hobbyist or not), and the like. If I run out of swap, all that remains is a hard reboot, 32-bit or not. Personally, I always have a swap partition or two, each at least twice the size of main memory.

Maybe Antheus can clarify a bit :)
Quote: Original post by phresnel
Quote: Original post by way2lazy2care
Quote: Original post by AndyEsser
Running a Windows machine with no Pagefile is guaranteed to fail eventually. Once you start running more and more memory-intensive applications it'll start to cause problems; as soon as you hit the physical memory maximum your machine will either BSOD, become unresponsive, or just altogether fail.


His point was that with 64-bit Windows and enough RAM you should never reach the maximum unless you're doing something ridiculous.


It's easy to break through physical memory barriers with memory-heavy applications, like video editing or production 3D rendering (hobbyist or not), and the like. If I run out of swap, all that remains is a hard reboot, 32-bit or not. Personally, I always have a swap partition or two, each at least twice the size of main memory.

Maybe Antheus can clarify a bit :)
You've got to be kidding; Windows BSODs when it runs out of memory? That's ridiculous! Linux's OOM killer might not be the most pleasant experience, but it's worlds better than the OS killing itself. (And if you know you're only running well-behaved applications, you can turn off overcommit to get rid of it entirely.)
Quote: Original post by Valderman
Quote: Original post by phresnel
Quote: Original post by way2lazy2care
Quote: Original post by AndyEsser
Running a Windows machine with no Pagefile is guaranteed to fail eventually. Once you start running more and more memory-intensive applications it'll start to cause problems; as soon as you hit the physical memory maximum your machine will either BSOD, become unresponsive, or just altogether fail.


His point was that with 64-bit Windows and enough RAM you should never reach the maximum unless you're doing something ridiculous.


It's easy to break through physical memory barriers with memory-heavy applications, like video editing or production 3D rendering (hobbyist or not), and the like. If I run out of swap, all that remains is a hard reboot, 32-bit or not. Personally, I always have a swap partition or two, each at least twice the size of main memory.

Maybe Antheus can clarify a bit :)
You've got to be kidding; Windows BSODs when it runs out of memory? That's ridiculous! Linux's OOM killer might not be the most pleasant experience, but it's worlds better than the OS killing itself. (And if you know you're only running well-behaved applications, you can turn off overcommit to get rid of it entirely.)


Sure, it's far better; especially in a production environment it's a life-saver. But at home, where I don't need three nines, rebooting and restarting applications is often cheaper. (Not always! Sometimes I have important but unsaved data, and then knowing the system is slow but still alive is a good thing.)

But on the other hand, I have the slight feeling you quoted the wrong post, as mine was more about running out of memory in general, 32-bit or not (considering, e.g., how video resolution has grown and will keep growing, there seems to be enough room for OOM well into the future)...
Quote: Original post by AndyEsser
Running a Windows machine with no Pagefile is guaranteed to fail eventually.

The only thing guaranteed in this world is that we die.


Quote: Once you start running more and more memory-intensive applications it'll start to cause problems

There are two types of applications: those that fit fully into RAM, and those that run on clusters.

But I'm honestly curious - which applications are those?


Quote: as soon as you hit the physical memory maximum your machine will either BSOD
No.

It will BSOD if you use a pagefile. Bear with me:
- The OS sees a process using up memory and starts paging out first applications, then services, then everything pageable, including parts of the OS.
- The OS runs out of pagefile; the offending application triggers a heap allocation failure, which is either handled by the app or not.
- If the offending application crashes/faults, its window is closed or it loses focus - but disk is slow and isn't cleared instantly, so there is still no memory left.
- Suddenly, the OS moves focus to a crash dialog or another application.
- Since that application is paged out, it needs to be loaded into RAM. But there is no RAM, and there is no pagefile space left, since the offending process still hasn't released memory - that application/window manager/something fails.
- And voilà - a cascading failure, which eventually results in parts of the OS being unable to page themselves back in.

While Windows can often recover from the above cleanly, there is a chance of catastrophic failure.

Without a pagefile, this doesn't happen. OOM with a pagefile is the absolute worst combination of failures.

What can happen is that in the short period when the system runs out of memory, another process tries to make a heap allocation, and that one fails.

There are two failure modes:
- Fragmentation, where there isn't an adequately sized contiguous block (no problem; even when OOM, there is still plenty of memory left for the OS to have some breathing room)
- Memory leaks. If these grow in under-4 kB increments, they have the potential to exhaust the entire memory - but it takes time to gobble up 8 GB, and that isn't hard to notice.

And if running 32-bit apps on a 64-bit machine with 4 GB+ of RAM, it's a non-issue. The process will grow to 2 GB max.

Quote: become unresponsive
No. The system becomes unresponsive when working with the pagefile. It *never* becomes unresponsive due to OOM when no pagefile is in use.

Quote: or just altogether fail.
When a process makes a heap allocation and that fails, it's up to the original developers to handle it. If they don't, the application terminates. There is nothing inherent that should crash the system - at least I've never seen it.
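A minimal sketch of what "up to the original developers" looks like in C++ (the nothrow form of operator new returns a null pointer instead of throwing; the sizes used below are illustrative):

```cpp
#include <cstddef>
#include <new>

// Attempt an allocation and report failure to the caller instead of
// letting the process terminate. The OS is not involved beyond
// refusing the request - nothing here can bring the machine down.
bool try_allocate(std::size_t bytes) {
    char* p = new (std::nothrow) char[bytes];
    if (p == nullptr) {
        return false;  // allocation failed; the caller decides what to do
    }
    delete[] p;
    return true;
}
```

An application that skips this check (or lets std::bad_alloc escape main()) simply terminates - which is the point above: the process dies, not the OS.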



Seriously - get 8 GB or 16 GB of RAM, then turn off the pagefile. It'll be very hard to convince me that desktop work breaks that.

Disclaimer: this is solely about desktop usage - servers have different requirements, where pagefile may or may not be required, again depends on whether virtualization is used, long-running processes, etc.

----------
A different demonstration of pagefile catastrophic failure:
void foo() {
    // #1
    Data someLocalDataAbout16kb_maybepointer_maybeRAII;
    // #2
    consume16kb();
}


In #2, there is no more RAM, so someLocalData is paged to disk (just barely).
The application then tries to allocate 16 kB, but there is no RAM and no pagefile space left. The heap allocation fails, and whether the application chooses to handle it or not, someLocalData first needs to be paged back in. This takes a lot of time.

Meanwhile, some other part of OS, some other application runs and part of that is paged into RAM.

Back in foo(), the stack is unwound; the exception handler tries to print a helpful message, which causes hundreds of DCs to be created and WM_* messages to be sent - all into an OOM system. And all of these actions cause page swaps.

If everything is in RAM, then on OOM the stack is unwound, and that's it. Ironically, trying to gracefully handle OOM in an application can cause more problems than a plain exit() would. As a rule, OOM handling should not consume any memory at all, but that is wishful thinking, especially for C++ apps that make liberal use of rich exceptions, where implicit allocations occur all over the place, whether intended or not. I don't think any software is adequately tested for this type of edge case (not that there is any real need).
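One common mitigation for the "OOM handling must not allocate" problem - a sketch, not something from the thread - is to reserve a small emergency buffer at startup and release it from a new-handler, so the error path itself has memory to work with:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

// Reserved at startup so the OOM path has headroom; size is illustrative.
static char* emergency_reserve = nullptr;

void oom_handler() {
    if (emergency_reserve != nullptr) {
        // Free the reserve so error reporting (which may itself allocate)
        // has a chance; returning from a new-handler makes operator new retry.
        delete[] emergency_reserve;
        emergency_reserve = nullptr;
        return;
    }
    // Second failure: report without allocating, then give up.
    std::fputs("out of memory\n", stderr);
    std::abort();
}

void install_oom_reserve(std::size_t bytes) {
    emergency_reserve = new char[bytes];
    std::set_new_handler(oom_handler);
}
```

This only buys a bounded amount of slack, of course - it makes the "exception handler tries to print a message" path survivable, not unlimited.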

[Edited by - Antheus on June 1, 2010 6:35:02 AM]
Quote: Original post by phresnel

Sure it's far better, especially in a production environment it's a life-saver.

I'm honestly curious about names and numbers.

Care to provide them? Total RAM, application that OOMs. 32 or 64 bit OS?

Quote: considering how video resolution has grown and will grow

Again, which application? Even quad-HD is finite, and holding several seconds of raw video in RAM is trivial - and that is all that's needed.

I have only limited experience with video, but no tool I've used needed more than hundreds of megabytes - all processing is external.

Quote: Original post by Antheus
Quote: Original post by phresnel

Sure it's far better, especially in a production environment it's a life-saver.

I'm honestly curious about names and numbers.

An old legacy server, serving 50 users with some buggy application, of which one instance goes crazy.

with swap:
Quote: phone call
my app is slow
Quote: phone call
my app is slow
Quote: phone call
snail
Quote: phone call
my app is slow

$ ps -A [...] kill -9 XXX
Quote: phone call
my app is away, thanks moron


without swap:
Quote: phone call
my app is away, thanks moron
Quote: phone call
my app is away, thanks moron
Quote: phone call
idiot
Quote: phone call
my app is away, thanks moron

$ ps -A [...]
kill -9 XXX
fail: out-of-memory
Quote: phone call
my app is away, thanks, moron

$ ps -A
fail: out-of-memory
Quote: phone call
my app is away, thanks, moron


panic, destruction.

$ reboot
fail: out-of-memory

*does physical reboot*

Quote: mooooooroooon!!!




That is, not every freaking company owns a big array of distributed machinery. There's a lot of legacy around. I'd happily go with virtual machines where applicable - but it's not always applicable, pretty much like textbook coding is usually not applicable.


Quote: And if running 32-bit apps on 64-bit machine with 4GB+ of RAM, it's a non-issue. The process will grow to 2GB max.

But "the" process might not be alone in these modern multi-tasking times. And maybe it would run better on 64-bit - maybe because it's a virtualizer, or because it could make good use of the extra registers. Or see below.


Quote: I have only limited experience with video, but no tool I used needed more than hundreds of megabytes - all processing is external.

I am not into video editing either, but it's very convenient (though not mandatory) to be able to jump instantly to specific sections of the non-downscaled video. Consider e.g. a time-lapse HDR recording with 10 megapixels per shot. Each pixel is 48-bit RAW for that specific type of camera. This sums up to roughly 60 MB per image, or 3000 MB for just two seconds, just for the pure, unprocessed image data. And SSDs are still not so affordable for most hobbyists.
a) This is shitty with a 32bit application
b) This is shitty with a 64bit application as well if you just have 4GiB physical
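The arithmetic above checks out; as a quick sketch (the 25 fps frame rate is my assumption - it's what makes two seconds come out at 3000 MB - and decimal megabytes are used rather than MiB):

```cpp
#include <cstddef>

// 10 megapixels at 48 bits (6 bytes) per pixel, per the post.
constexpr std::size_t pixels        = 10'000'000;
constexpr std::size_t bytes_per_px  = 48 / 8;
constexpr std::size_t frame_bytes   = pixels * bytes_per_px;  // 60,000,000 B ~ 60 MB

// Two seconds of footage at an assumed 25 fps.
constexpr std::size_t fps           = 25;
constexpr std::size_t two_sec_bytes = frame_bytes * fps * 2;  // ~3,000,000,000 B
```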

Admittedly, solid state disks will do a lot. But until then, I'd pack my mobo full with what it can bear, and run in 64bit "mode".

Quote: Original post by phresnel
Old legacy server, serving 50 users with some buggy application of which 1 instance goes crazy.


The arguments I made were for the following:
- desktop use
- Windows machine, preferably 64-bit

There is a specific disclaimer regarding servers. For servers it makes a lot of sense to put alerts on pagefile use. Logging such data is also convenient to convince management to add a tiny bit of extra memory. And if management doesn't care - then screw them, let them wait till the disk grinds. It's their time.

Quote: I am not into video editing either, but it's very convenient (though not mandatory) to be able to jump instantly to specific sections of the non-downscaled video. Consider e.g. a time-lapse HDR recording with 10 megapixels per shot. Each pixel is 48-bit RAW for that specific type of camera. This sums up to roughly 60 MB per image, or 3000 MB for just two seconds, just for the pure, unprocessed image data. And SSDs are still not so affordable for most hobbyists.
a) This is shitty with a 32bit application
b) This is shitty with a 64bit application as well if you just have 4GiB physical


Again, nothing to do with pagefile.

The typical argument is this: when my application needs those 3000 MB, it will page everything else out to disk. Which gains absolutely nothing, except that the entire system gets bogged down by excessive paging.

The application above only needs to cache several frames, especially if it's for encoding. There will be a hard upper limit. For everything else, it will be designed for streaming - after all, raw footage will be in hundreds of gigabytes.

In this case, turning off paging is even beneficial: the video application will just buffer a couple of frames fewer (either disk or encoding will be the limit) and the rest of the system will remain usable during that time.
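The "hard upper limit" described above could be as simple as a fixed ring of reusable frame buffers - a sketch, not code from any actual video tool:

```cpp
#include <cstddef>
#include <vector>

// A fixed-capacity frame cache: memory use is bounded up front, so the
// application never grows toward the OOM cliff. Slot count and frame
// size are illustrative and set by the caller.
class FrameRing {
public:
    FrameRing(std::size_t slots, std::size_t frame_bytes)
        : buffers_(slots, std::vector<unsigned char>(frame_bytes)),
          next_(0) {}

    // Reuse the oldest slot instead of allocating; the caller decodes
    // the next frame straight into the returned buffer.
    std::vector<unsigned char>& next_slot() {
        std::vector<unsigned char>& slot = buffers_[next_];
        next_ = (next_ + 1) % buffers_.size();
        return slot;
    }

    std::size_t capacity_bytes() const {
        return buffers_.size() * buffers_[0].size();
    }

private:
    std::vector<std::vector<unsigned char>> buffers_;
    std::size_t next_;
};
```

With, say, 8 slots of 60 MB each, memory use tops out at 480 MB no matter how long the footage is - decoding always reuses the oldest slot instead of allocating.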

Increasing physical memory would improve performance; increasing the pagefile would have no positive effect whatsoever. Other applications are fixed-size and way too sluggish to use, and if the main application runs out of memory, it will grind to a halt anyway.


And besides - this is about the 80/20 majority case.
If, once in a blue moon, *you* (not some random remote Joe Sixpack) suddenly need to process this much video, and memory really is a problem, turn the pagefile on.
If this type of task is performed daily, then the time saved will cover the cost of a 1 GB stick (about the maximum that can be paged) within about a week.
Okay, you've dismantled my arguments, except this one: I tend to hibernate my machine at home, having uptimes of days and weeks, sometimes even months. The no. 1 motivation is not having to restart all the stuff, where "all the stuff" is very broad: coding app A, coding app B, a bit of photography and video, commodity surfing by my gf and me, listening to music, and more. I tend not to make a definite end of editing code or photographs, so I often switch from coding to surfing to video and back. Thanks to swap files, I never run out of memory and can keep applications open, even when one of my own graphics renderers is already running out of core.

That's not the John Doe use case, of course :)
Quote: Original post by phresnel
Okay, you've dismantled my arguments, except this one


I don't get it - there's advice that could potentially improve performance and reduce disk load, and you're fighting it tooth and nail.

I'm not forcing anyone; all my claims were made on the basis of experience with a 64-bit OS and 8 GB of memory (or 7, due to a RAMDrive).

If it doesn't work out, nothing is lost - just flip it back on.

The RAM vs. pagefile argument just annoys me because I've seen so many 64-bit machines running on a pittance of RAM, with people complaining about stuff being slow. The first PC I owned had a whopping 4 MB of RAM, which cost ~$1000. Today, 1000 times more costs one tenth of that - and OSes can even make full use of it.

Quote: Original post by Antheus
Quote: Original post by phresnel
Okay, you've dismantled my arguments, except this one


I don't get it - there's advice that could potentially improve performance and reduce disk load, and you're fighting it tooth and nail.

I'm not forcing anyone; all my claims were made on the basis of experience with a 64-bit OS and 8 GB of memory (or 7, due to a RAMDrive).

If it doesn't work out, nothing is lost - just flip it back on.

The RAM vs. pagefile argument just annoys me because I've seen so many 64-bit machines running on a pittance of RAM, with people complaining about stuff being slow. The first PC I owned had a whopping 4 MB of RAM, which cost ~$1000. Today, 1000 times more costs one tenth of that - and OSes can even make full use of it.


Nobody got harmed ;)

It's just that I, with my non-average usage, have had the exact opposite experience. I once forgot to swapon a swap partition after doing some maintenance. Some days later, with my usual style of using the box (as described above), I got a full system freeze due to OOM (btw, I forgot to mention that I have an array of virtual boxes in near-daily use, each of which I tend to give half of physical memory). Since then I haven't forgotten to swapon that foam rubber again.

Just my experience; if going without a pagefile/swap/foam rubber works out for you, then of course do so :)

This topic is closed to new replies.
