What is the largest possible heap size with a 64-bit JVM?

The theoretical maximum heap value that can be set with -Xmx in a 32-bit system is of course 2^32 bytes, but typically (see: Understanding max JVM heap size - 32bit vs 64bit) one cannot use all 4 GB.

For a 64-bit JVM running in a 64-bit OS on a 64-bit machine, is there any limit besides the theoretical limit of 2^64 bytes or 16 exabytes?

I know that for various reasons (mostly garbage collection), excessively large heaps might not be wise, but in light of reading about servers with terabytes of RAM, I'm wondering what is possible.


If you want to use 32-bit references, your heap is limited to 32 GB.

However, if you are willing to use 64-bit references, the size is likely to be limited by your OS, just as it is with a 32-bit JVM; e.g., on 32-bit Windows that limit is 1.2 to 1.5 GB.

Note: you will want your JVM heap to fit into main memory, ideally inside one NUMA region, which is about 1 TB on the bigger machines. If your JVM spans NUMA regions, memory access, and GC in particular, will take much longer. If your JVM heap starts swapping, a GC might take hours, or the machine may even become unusable as it thrashes the swap drive.

Note: you can access large amounts of direct memory and memory-mapped files even if you use 32-bit references in your heap, i.e. well above 32 GB.
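For example, memory-mapped files let a process whose heap is capped at 32 GB address far more data off-heap. A minimal sketch using the standard NIO FileChannel.map API (the path /tmp/huge.dat is a placeholder, and because a single MappedByteBuffer is limited to Integer.MAX_VALUE bytes, a very large file is mapped as a series of regions):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedRegions {
        public static void main(String[] args) throws Exception {
            // "/tmp/huge.dat" is a placeholder path for illustration.
            try (RandomAccessFile file = new RandomAccessFile("/tmp/huge.dat", "r");
                 FileChannel channel = file.getChannel()) {
                long size = channel.size();
                // One MappedByteBuffer is limited to Integer.MAX_VALUE bytes,
                // so a huge file is mapped as a series of 1 GB regions, all
                // of which live outside the Java heap.
                long chunk = 1L << 30;
                for (long offset = 0; offset < size; offset += chunk) {
                    long length = Math.min(chunk, size - offset);
                    MappedByteBuffer region =
                            channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
                    System.out.printf("mapped %d bytes at offset %d%n",
                            region.capacity(), offset);
                }
            }
        }
    }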

Compressed oops in the Hotspot JVM

Compressed oops represent managed pointers (in many but not all places in the JVM) as 32-bit values which must be scaled by a factor of 8 and added to a 64-bit base address to find the object they refer to. This allows applications to address up to four billion objects (not bytes), or a heap size of up to about 32Gb. At the same time, data structure compactness is competitive with ILP32 mode.
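In Java terms, the decode described above amounts to the following sketch (the method name decodeOop is hypothetical; HotSpot does this in native code and can also skip the base or the shift for small, low-lying heaps):

    // Hypothetical illustration of the decode described above: a 32-bit
    // compressed oop is widened without sign extension, scaled by the
    // 8-byte object alignment, and added to the heap base.
    static long decodeOop(int compressedOop, long heapBase) {
        return heapBase + ((compressedOop & 0xFFFFFFFFL) << 3);
    }
    // 2^32 possible values x 8-byte granularity = 32 GB of addressable heap.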


The answer clearly depends on the JVM implementation. Azul claim that their JVM

can scale ... to more than a 1/2 Terabyte of memory

By "can scale" they appear to mean "runs wells", as opposed to "runs at all".


Windows imposes a memory limit per process; you can see what it is for each version here.

See:

User-mode virtual address space for each 64-bit process:
- x64: 8 TB (with IMAGE_FILE_LARGE_ADDRESS_AWARE set, the default)
- Intel IPF: 7 TB (with IMAGE_FILE_LARGE_ADDRESS_AWARE set, the default)
- 2 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE cleared


I tried it: -Xmx32255M is accepted as a VM argument while still keeping compressed oops.
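A quick way to verify what the JVM actually granted is Runtime.maxMemory(); a small sketch:

    public class MaxHeap {
        public static void main(String[] args) {
            // Run with e.g. `java -Xmx32255m MaxHeap`. The reported value is
            // often slightly below -Xmx because some collectors exclude one
            // survivor space from the reported maximum.
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("max heap: %.1f MB%n", maxBytes / (1024.0 * 1024.0));
        }
    }

Whether compressed oops are actually enabled can be confirmed on HotSpot with -XX:+PrintFlagsFinal, looking for the UseCompressedOops flag.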


For a 64-bit JVM running in a 64-bit OS on a 64-bit machine, is there any limit besides the theoretical limit of 2^64 bytes or 16 exabytes?

You also have to take hardware limits into account. While pointers may be 64 bits wide, current CPUs can address less than 2^64 bytes of virtual memory; most x86-64 processors, for example, implement 48-bit virtual addresses, i.e. 256 TB of virtual address space.

With uncompressed pointers, the HotSpot JVM needs a contiguous chunk of virtual address space for its heap. So the second hurdle, after the hardware, is the operating system providing such a large chunk; not all OSes do.

And the third one is practicality. Even if you can have that much virtual memory, it does not mean the CPU supports that much physical memory, and without enough physical memory you will end up swapping, which adversely affects JVM performance, because garbage collections generally have to touch a large fraction of the heap.

Regarding compressed oops, which other answers mention: by bumping the object alignment higher than 8 bytes, their 32 GB limit can be raised, as the sketch below illustrates.
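A back-of-the-envelope sketch of that trade-off (in HotSpot the alignment is controlled by -XX:ObjectAlignmentInBytes; larger alignment wastes more padding per object):

    public class CompressedOopCeiling {
        public static void main(String[] args) {
            // Compressed oops encode 2^32 distinct offsets; each offset unit
            // is one object-alignment granule, so the heap ceiling scales
            // linearly with the alignment.
            for (int alignment : new int[] {8, 16, 32}) {
                long maxHeapBytes = (1L << 32) * alignment;
                System.out.printf("alignment %2d bytes -> max heap %d GB%n",
                        alignment, maxHeapBytes >> 30);
            }
        }
    }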


In theory everything is possible, but in reality the numbers are much lower than you might expect. I have often tried to use huge address spaces on servers, and it surprised me that even though a server can have an enormous amount of memory, most software can never actually use it in real scenarios, simply because the CPUs are not fast enough to work through it. Why? Timing: that is the endless downfall of every enormous machine I have worked on.

So I would advise not to go overboard by configuring huge amounts just because you can, but to configure what you think will actually be used; actual values are often much lower than you expect. Of course, none of us really runs HP 9000 systems at home, and most of you will never come near the capacity of your home system: most users do not have more than 16 GB of memory, and the occasional gamer using a workstation is a very small percentage. Coming down to earth, on an 8 GB 64-bit system I would not configure much more than 512 MB of heap space, or 1 GB if you want to go overboard; I am fairly sure even those numbers are overkill.

I constantly monitored memory usage during gaming to see whether the heap setting made any difference, and noticed none at all between much lower and much larger values. Even on servers and workstations there was no visible change in performance, no matter how large I set the values. That is not to say some Java users might not be able to make use of more space, but so far I have not seen any application need that much. I assume there would be a small performance difference if a Java instance ran out of heap space to work with, but I have not seen that either. What I have seen is that a lack of actually installed memory causes an instant drop in performance if you set the heap too large: on a 4 GB system you quickly run into errors and slowdowns, because the configured space is not actually free, so the OS starts using drive space to make up the shortage and begins to swap.
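For what it's worth, the kind of monitoring described above can also be done from inside the JVM with the standard management API; a minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatch {
        public static void main(String[] args) {
            // Snapshot of heap usage; comparing "used" against "max" shows
            // whether a large -Xmx setting is actually being exercised.
            MemoryUsage heap =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("used=%d MB committed=%d MB max=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20,
                    heap.getMax() >> 20);
        }
    }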
