High physical memory usage?
I noticed today that my Linux/KDE box is using all of my RAM. That's right, all of it (give or take 5 MB). I have 512 MB of RAM and a gig of swap space on its own little partition.
Why is it doing this? Is it simply because it can? And if I start using a lot of RAM for something else, will Linux shift stuff into swap space neatly so I don't have to worry about it?
I don't really mind the high memory usage as long as it's not choking my system...
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Put simply, the kernel caches everything. Open a terminal and run "free". Look at the line marked "-/+ buffers/cache". The used column is how much memory is actually being "required" at the moment.
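The arithmetic behind that "-/+ buffers/cache" line can be sketched from /proc/meminfo: real usage is total minus free, minus whatever the kernel is merely using for buffers and page cache. A minimal sketch (the field names match /proc/meminfo on 2.4/2.6 kernels; the sample values are made up):

```python
def actual_used_kb(meminfo_text):
    """Return memory truly in use, in kB, excluding buffers and
    page cache -- the figure `free` reports on '-/+ buffers/cache'."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return (fields["MemTotal"] - fields["MemFree"]
            - fields["Buffers"] - fields["Cached"])

# Hypothetical snapshot of a 512 MB box that looks "full":
sample = """MemTotal:     515000 kB
MemFree:        5000 kB
Buffers:       60000 kB
Cached:       350000 kB"""

print(actual_used_kb(sample))  # 100000 -- only ~100 MB is really used
```

On a real system you would read the text from /proc/meminfo instead of a string; the point is that the 410 MB of buffers and cache can be dropped the moment an application asks for the memory.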
2.4 is pretty intelligent about when to swap things to and from the disk, but 2.6 is much better still.
Yah, Null_and_Void is correct. The Linux kernel soaks up resources until it actually needs to free some. That way you avoid the unnecessary swapping you get in things like Windows 95. You really don't need to free pages unless there is no more available memory, though I can see how one might want pages freed periodically to prevent fragmentation of RAM. Additionally, the likelihood that the memory an app is looking for is already resident is quite high. You don't want to free it too quickly and then take page faults swapping in something that should have been there in the first place, because page faults are expensive.
RandomTask
quote: Original post by Salsa
ok.
No, silly, it doesn't work for questions. You're supposed to say it to people who are ranting or posting generally pointless, not-a-question threads.
November 08, 2003 11:42 PM
This is also why unmounting your disks and shutting down properly is important in Linux: all that stuff sitting up in memory needs a chance to get written to disk when the time comes.
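You can force that write-back yourself. A minimal sketch, assuming a POSIX system where Python exposes the sync() call (it is the same thing the `sync` shell command does before an unmount):

```python
import os

# os.sync() wraps POSIX sync(): it asks the kernel to start writing
# all dirty buffers out to disk, rather than waiting for the
# background flusher -- what `sync` does before unmounting a disk.
os.sync()
print("dirty buffers scheduled for write-back")
```

Running it costs nothing if the caches are already clean; on a box with lots of dirty data it can take a noticeable moment.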
Linux has a daemon (really, a special kernel thread, IIRC) which flushes pending dirty blocks to disk. If it notices that it falls too far behind, it'll synchronously commit a large chunk of blocks in one swell foop.
Unfortunately, if you try to page or write anything else while it's doing that, you'll stall until it's done. Which may be > 10 seconds, if you have lots of memory and fast disks, and longer if you don't.
This is somewhat tuneable, and might have changed in 2.4, but unified buffering and VM isn't always a clear win; surprising things may happen if you don't watch out.
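If a program can't afford to gamble on when the flusher gets around to its data, it can commit a single file's dirty pages itself. A minimal sketch using fsync(), which blocks until the kernel has written that file's buffers to disk (the file path here is just a throwaway temp file for illustration):

```python
import os
import tempfile

# Write a block of data and force it to disk immediately, instead of
# leaving it as dirty pages for the background flusher thread.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"x" * 4096)
    f.flush()             # push Python's userspace buffer into the kernel
    os.fsync(f.fileno())  # block until the kernel commits it to disk

print(os.path.getsize(path))  # 4096
```

The trade-off is exactly the stall described above, but confined to one file and taken at a moment of your choosing.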
enum Bool { True, False, FileNotFound };
quote: Original post by hplus0603
Linux has a daemon (really, a special kernel thread, IIRC) which flushes pending dirty blocks to disk. If it notices that it falls too far behind, it'll synchronously commit a large chunk of blocks in one swell foop.
Unfortunately, if you try to page or write anything else while it's doing that, you'll stall until it's done. Which may be > 10 seconds, if you have lots of memory and fast disks, and longer if you don't.
I haven't experienced this at all recently, not even on my (rather low-spec and reasonably high bidirectional traffic) fileserver. I remember it happening a few years back (2.2.x), but not since.
- JQ
~phil