Java and memory size
Apparently, when running a Java application you need to set beforehand how much memory to allocate to it. And things like Eclipse appear to ship with a bad default: whenever you do something heavy it runs out of memory and you need to change settings in configuration files or launch scripts.
Why can't a JVM just use memory that is available in the OS, like normal programs do?
Does C# in .NET have this same problem or does that virtual machine work better in this aspect?
Personally I find this a serious shortcoming in Java's usefulness. It's almost 2010; people shouldn't have to set a fixed amount of memory before running an application. If .NET doesn't have it, is there any chance the makers of Java might change their mind and drop or fix this whole idea of setting memory beforehand?
Can't a JVM implementation for the PC just ask the OS how much RAM you have and give itself that as the max size parameter? I notice that setting it to 3GB for every Java process doesn't mean every Java process actually takes 3GB, so if I just set them all to 3GB it appears to work the way it's supposed to work.
Choosing how much memory an application may take is not a decision I as a user want to make. I just want that if I have 3GB of RAM, either one application can take the full 3GB, or N applications get it divided amongst them by the OS. But Java apparently thinks it's fun to make the user work with Java applications whose config files carry a default memory size that was entered by some old-fashioned guy with a Pentium 3 and 256MB of RAM or something?
Quote: Original post by Lode
Can't a JVM implementation for the PC just ask the OS how much RAM you have and give itself that as the max size parameter? I notice that setting it to 3GB for every Java process doesn't mean every Java process actually takes 3GB
I don't think that's entirely true - it'll appear that way in the task manager because that much memory has been "allocated", but only a tiny fraction will actually be written to, meaning that Windows should only have to allocate virtual pages. In theory these take no space at all (they're not even swapped out to disk because Windows knows there's nothing in them), so as far as I know there's nothing intrinsically wrong with just setting the max heap size to 3GB (or whatever). I believe the only reason this isn't done automatically is that Sun doesn't want people to look at the task manager and complain that Java is sucking up 3GB for every process (even though it isn't).
As to why, I believe it's due to the way the GC likes to work with big, contiguous chunks of memory, so it maps out its entire heap at startup. I agree, it does suck, but I have no idea exactly what impact making the entire heap relocatable/resizable at runtime would have.
If you're really worried about it you could write a little stub exe that starts up the JVM with a suitable memory argument based on the machine's physical/virtual memory sizes (maybe you could add it into JSmooth?).
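A rough sketch of that stub-launcher idea, written in Java for simplicity rather than as a native exe. It assumes a Sun/Oracle JVM (the com.sun.management.OperatingSystemMXBean cast won't work on every vendor's VM), and the jar name and heap heuristic are just placeholders:

    import java.lang.management.ManagementFactory;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical stub launcher: query physical RAM, then relaunch the real
    // application with an -Xmx derived from it.
    public class Launcher {
        public static void main(String[] args) throws Exception {
            // On Sun/Oracle JVMs the OperatingSystemMXBean implementation also
            // implements com.sun.management.OperatingSystemMXBean, which exposes
            // the physical memory size. Other vendors may not provide this.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            long physicalMb = os.getTotalPhysicalMemorySize() / (1024 * 1024);

            // Give the real app roughly half of physical RAM, capped so a
            // 32-bit JVM can still map the heap contiguously.
            long heapMb = Math.min(physicalMb / 2, 1536);

            List<String> cmd = new ArrayList<String>();
            cmd.add("java");
            cmd.add("-Xmx" + heapMb + "m");
            cmd.add("-jar");
            cmd.add("myapp.jar"); // placeholder for the real application jar

            Process p = new ProcessBuilder(cmd).start();
            p.waitFor();
        }
    }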
Edit: also note that there's a difference between the -Xms and -Xmx command-line options. -Xmx sets the maximum size, but the JVM will not actually allocate that amount until it needs it, IIRC.
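To see that difference from inside a program, the standard java.lang.Runtime calls distinguish the -Xmx ceiling from what the JVM has actually committed so far:

    public class HeapInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // maxMemory(): the ceiling the heap may grow to (roughly -Xmx).
            System.out.println("max   = " + rt.maxMemory() / mb + " MB");
            // totalMemory(): memory currently committed for the heap (starts near -Xms).
            System.out.println("total = " + rt.totalMemory() / mb + " MB");
            // freeMemory(): unused portion of the committed heap.
            System.out.println("free  = " + rt.freeMemory() / mb + " MB");
        }
    }

Run it with, say, java -Xms64m -Xmx1024m HeapInfo: max reports roughly the 1GB limit, while total stays small until the heap actually fills.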
[size="1"][[size="1"]TriangularPixels.com[size="1"]] [[size="1"]Rescue Squad[size="1"]] [[size="1"]Snowman Village[size="1"]] [[size="1"]Growth Spurt[size="1"]]
Quote: Original post by OrangyTang
Edit: also note that there's a difference between the -Xms and -Xmx command-line options. -Xmx sets the maximum size, but the JVM will not actually allocate that amount until it needs it, IIRC.
That's what I was wondering: why doesn't the JVM just automatically use a very large value, or your amount of RAM, as -Xmx, instead of requiring a fixed value and defaulting to 64MB? It only uses the memory if necessary anyway, and all this maximum does is create trouble and annoyance when it's reached, even if your PC has lots of free RAM.
I'm wondering whether .NET does this too, because I'd like to know if this is a general technical problem of virtual machines or a Java-only problem.
I haven't run into such a limitation using .NET/C# yet. As far as I know .NET allocates memory when it needs it. I must admit though I have never really tested .NET for an upper limit. The most I have ever seen one of my .NET apps use is about 300-400MB.
It seems weird to me that Java has such a limitation by default. There is no reason I can think of right now why you would want to set a memory allocation limit on a desktop system. On embedded systems, however, having such a setting would be a good thing. Are you sure you are not using an embedded JVM or used an installer for an embedded system?
Quote: Original post by Lode
Quote: Original post by OrangyTang
Edit: also note that there's a difference between the -Xms and -Xmx command-line options. -Xmx sets the maximum size, but the JVM will not actually allocate that amount until it needs it, IIRC.
That's what I was wondering: why doesn't the JVM just automatically use a very large value, or your amount of RAM, as -Xmx, instead of requiring a fixed value and defaulting to 64MB? It only uses the memory if necessary anyway, and all this maximum does is create trouble and annoyance when it's reached, even if your PC has lots of free RAM.
I'm guessing that's a legacy/backwards-compatibility thing - something .NET isn't going to have as much trouble with.
I can't quite see what compatibility issues raising it would cause though.
[size="1"][[size="1"]TriangularPixels.com[size="1"]] [[size="1"]Rescue Squad[size="1"]] [[size="1"]Snowman Village[size="1"]] [[size="1"]Growth Spurt[size="1"]]
From what I remember reading about the .NET 2.0 CLR, items on the heap are limited to a maximum of 2GB of contiguous memory; even in 64-bit, .NET enforces this limitation. But based on my understanding, the total memory consumed on the heap may exceed 2GB; the 2GB cap applies to any single item, and the GC will dynamically allocate and free the memory it requires up to the maximum available memory.
Short of needing to operate on 2 gigabytes of data as a single array, I doubt you would run into this particular limitation regularly. Even then, I remember reading about a few ways to get around it (namely with P/Invoke and/or unsafe code).
I can't really comment on Java's memory allocation, though.
I think it's weird to have as a default. Sure, a web browser launching a Java VM might want to control the allocations, but when you start a JVM from the command line, why not act like any other program?
The general solution to this problem: if your program uses more memory than the default allows, set the parameters either with a batch/shell script or by wrapping the program as a native executable (see Launch4J for Windows).
When distributing through the internet, memory parameters can be set inside the webstart JNLP file, which also works for webstart-based applets.
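For example (my recollection of the syntax; the version and sizes are just illustrative), the j2se element of the JNLP descriptor takes initial-heap-size and max-heap-size attributes, which correspond to -Xms and -Xmx:

    <!-- fragment of a hypothetical .jnlp file -->
    <resources>
        <!-- initial-heap-size ~ -Xms, max-heap-size ~ -Xmx -->
        <j2se version="1.6+" initial-heap-size="64m" max-heap-size="512m"/>
        <jar href="myapp.jar"/> <!-- placeholder jar name -->
    </resources>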
- Single linear address space. This is quite a big deal, since it simplifies many things. Even though the JVM uses generations, they are all in the same address space. The CLR's LOH approach complicates things a bit. The OS uses paging to solve this problem, but that is too expensive and too complicated for a VM.
- Resizing isn't possible as such. Once a block is allocated from the OS, it's rarely possible to just grow it. So growing the Java heap by 50% would mean allocating another heap that is 50% larger, so total memory use at that point would be 250%.
- OS memory fragmentation. While paging solves this problem for the most part, the JVM needs huge slabs of memory, and requesting a contiguous 1, 2, or 3GB block is unlikely to work reliably.
- Embedded devices. While partially an implementation detail, the JVM was designed to scale from 64kB embedded systems to hundred-gigabyte server big iron. Having a single consistent memory model simplifies VM development (or so I'd imagine).
- Simplified error handling. VM heap allocation can fail due to various internal details of OS memory allocation, but once the VM has claimed the entire heap, all allocations can be checked completely inside the VM (see the small example below). The problem with paged OS allocation is that even though memory is reserved, it might not actually be made available until it's committed. So if the VM were to allocate a separate chunk of memory, it would need to write it from start to finish, at which point the OS could run out of memory. The CLR needs to deal with this case for the LOH. For example, accessing memory allocated by malloc or new in C/C++ can fail for the above reason even though the call succeeds; the only reliable way to allocate would be to zero out each allocation.
- Paging. The JVM and GC frequently perform full sweeps over an entire generation. If the heap were allowed to grow without bounds, it would inevitably get paged out, resulting in huge stalls during GC when whole generations had to be paged back in. Unlike typical applications, which keep data pinned at the same location, all of the JVM's data moves from time to time.
It's not the only way, and the JVM has become famous for being a memory hog. Still, all of the above are engineering trade-offs. Different ones could be made, but considering Java is not particularly relevant on the desktop, there is little need for it. On servers, an upper limit might actually be desirable, and since such applications are administered anyway, ease of use is not a sufficient argument.
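To make the "checked entirely inside the VM" point concrete, here is a trivial illustration (hypothetical, behaviour approximate): no matter how much free RAM the OS still has, allocation fails with OutOfMemoryError once the configured -Xmx heap is exhausted.

    import java.util.ArrayList;
    import java.util.List;

    public class FillHeap {
        public static void main(String[] args) {
            List<byte[]> blocks = new ArrayList<byte[]>();
            try {
                while (true) {
                    // Keep references so the GC cannot reclaim anything.
                    blocks.add(new byte[1024 * 1024]); // 1 MB per block
                }
            } catch (OutOfMemoryError e) {
                // Thrown by the VM itself when the -Xmx limit is reached,
                // regardless of how much physical RAM is still free.
                System.out.println("Heap exhausted after ~" + blocks.size() + " MB");
            }
        }
    }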
-Xms and -Xmx typically control the size of the object heap and don't represent all of the memory a JVM uses. Most JVMs need their object heap to be a single contiguous space, so they reserve Xmx from the OS and grow/shrink the heap between Xms and Xmx. If a JVM used a very large Xmx as the default, it would risk running out of virtual address space for other things, like the JIT, native VM code (AWT, NIO), user native code (JNI), etc.
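As a small illustration of "the object heap isn't all of the JVM's memory": NIO direct buffers are allocated from native memory outside the -Xmx-controlled heap, so heap usage barely moves when one is created.

    import java.nio.ByteBuffer;

    public class DirectVsHeap {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long before = rt.totalMemory() - rt.freeMemory();

            // 64 MB allocated in native memory, outside the Java object heap.
            ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

            long after = rt.totalMemory() - rt.freeMemory();
            // Only the tiny ByteBuffer object itself lives on the heap.
            System.out.println("Heap usage change: " + (after - before) + " bytes");
            System.out.println("Direct capacity:   " + direct.capacity() + " bytes");
        }
    }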