How can a moderately sized memory allocation fail in a 64-bit process on Mac OS X?

I'm building a photo book layout application. The application frequently decompresses JPEG images into in-memory bitmap buffers. The size of the images is constrained to 100 megapixels (while they usually do not exceed 15 megapixels).

Sometimes memory allocations for these buffers fail: [[NSMutableData alloc] initWithLength:] returns nil. This seems to happen in situations where the system's free physical memory approaches zero.
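
For reference, a minimal sketch of the failing call (the function name and the 4-bytes-per-pixel figure are illustrative, not from my actual code):

#import <Foundation/Foundation.h>

NSMutableData *AllocatePixelBuffer(NSUInteger width, NSUInteger height)
{
    // 4 bytes per pixel (RGBA); a 100-megapixel image needs roughly 400 MB.
    NSUInteger length = width * height * 4;
    NSMutableData *buffer = [[NSMutableData alloc] initWithLength:length];
    if (buffer == nil) {
        // This is the failure described above: nil, despite ample free
        // address space in a 64-bit process.
        NSLog(@"Failed to allocate %lu bytes", (unsigned long)length);
    }
    return buffer;
}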

My understanding of the virtual memory system in Mac OS X was that an allocation in a 64-bit process virtually (sic) can't fail. There are 16 exabytes of address space, of which I'm trying to allocate a maximum of 400 megabytes at a time. Theoretically I could allocate 40 billion of these buffers without hitting the hard limit of the available address space. Of course practical limits would prevent this scenario, as swap space is constrained by the boot volume's size. In reality I'm only making very few of these allocations (fewer than ten).

What I do not understand is why the allocation fails at all, no matter how low physical memory is at that point. I thought that, as long as there's swap space left, memory allocation would not fail (as the pages are not even mapped at this point).

The application is garbage collected.

Edit:

I had time to dig into this problem a little further and here are my findings:

  1. The problem only occurs in a garbage collected process.
  2. When the allocation from NSMutableData fails, a plain malloc still succeeds in allocating the same amount of memory (see the sketch after this list).
  3. The error always happens when overall physical memory approaches zero (swapping is about to take place).
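
The second observation can be reproduced with a small probe like this (a sketch; the function name is mine and the length would be the buffer size that just failed):

#import <Foundation/Foundation.h>
#include <stdlib.h>

static void ProbeAllocation(NSUInteger length)
{
    NSMutableData *data = [[NSMutableData alloc] initWithLength:length];
    NSLog(@"NSMutableData: %@", data ? @"succeeded" : @"failed");

    void *raw = malloc(length);
    NSLog(@"malloc:        %@", raw ? @"succeeded" : @"failed");
    if (raw) free(raw);
}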

I assume NSData uses NSAllocateCollectable to perform the allocation instead of malloc when running under garbage collection.
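
If that assumption holds, the allocation that fails would look roughly like this under garbage collection (a sketch; NSAllocateCollectable is only meaningful in a collected process):

#import <Foundation/Foundation.h>

void *AllocateBitmapStorage(NSUInteger length)
{
    // 0 = unscanned memory; NSScannedOption would ask the collector to
    // scan the block for pointers, which a pixel buffer does not need.
    void *storage = NSAllocateCollectable(length, 0);
    if (storage == NULL) {
        NSLog(@"Collector could not satisfy a %lu-byte allocation",
              (unsigned long)length);
    }
    return storage;
}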

My conclusion from all that is that the collector is unable to allocate big chunks of memory when physical memory is low. Which again, I don't understand.


The answer lies in the implementation of libauto.

As of OS X 10.6, an arena of 8 GB is allocated for garbage-collected memory on 64-bit platforms. This arena is split in half: one half for large allocations (≥ 128 KB), the other for small (< 2048 bytes) and medium (< 128 KB) allocations.

So in effect, on 10.6 you have 4 GB of memory available for large garbage-collected allocations. On 10.5 the arena had a size of 32 GB, but Apple lowered it to 8 GB in 10.6.
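
One way to see this ceiling in practice is a probe loop like the following (a sketch; it must run in a 64-bit, garbage-collected process, and the 400 MB block size is illustrative):

#import <Foundation/Foundation.h>

static void ProbeArenaLimit(void)
{
    // Keep the blocks alive so each new allocation adds to the arena.
    // With a ~4 GB large-allocation half, roughly ten 400 MB blocks
    // should succeed before dataWithLength: returns nil.
    NSMutableArray *keepAlive = [NSMutableArray array];
    NSUInteger blockSize = 400 * 1024 * 1024;

    for (NSUInteger i = 0; i < 64; i++) {
        NSMutableData *block = [NSMutableData dataWithLength:blockSize];
        if (block == nil) {
            NSLog(@"Allocation %lu failed", (unsigned long)i);
            break;
        }
        [keepAlive addObject:block];
    }
}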


Another guess, but it may be that your colleague's machine is configured with a stricter maximum memory per user process setting. To check, type

ulimit -a

into a console. For me, I get:

~ iainmcgin$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 266
virtual memory          (kbytes, -v) unlimited

From my settings above, it seems there is no per-process limit on memory usage. This may not be the case for your colleague, for some reason.
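
The same limits can also be queried programmatically (a sketch; RLIM_INFINITY corresponds to "unlimited" in the ulimit output above):

#include <sys/resource.h>
#include <stdio.h>

static void PrintMemoryLimits(void)
{
    struct rlimit data, vm;
    getrlimit(RLIMIT_DATA, &data);  // "data seg size"
    getrlimit(RLIMIT_AS, &vm);      // "virtual memory"

    printf("data seg size:  cur=%llu max=%llu\n",
           (unsigned long long)data.rlim_cur,
           (unsigned long long)data.rlim_max);
    printf("virtual memory: cur=%llu max=%llu\n",
           (unsigned long long)vm.rlim_cur,
           (unsigned long long)vm.rlim_max);
}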

I'm using Snow Leopard:

~ iainmcgin$ uname -rs
Darwin 10.6.0


Even though a 64-bit computer can theoretically address 18 EB, current processors are limited to 256 TB. Of course, you aren't reaching this limit either. But the amount of memory your process can use at one time is limited to the amount of RAM available. The OS may also limit the amount of RAM you can use. According to the link you posted, "Even for computers that have 4 or more gigabytes of RAM available, the system rarely dedicates this much RAM to a single process."


You may be running out of swap space. Even though you have a swap file and virtual memory, the amount of swap space available is still limited by the space on your hard disk for swap files.
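
To see how close the machine actually is to exhausting swap, the current usage can be read with sysctl (a sketch, equivalent to running "sysctl vm.swapusage" in a terminal):

#include <sys/sysctl.h>
#include <stdio.h>

static void PrintSwapUsage(void)
{
    struct xsw_usage swap;
    size_t size = sizeof(swap);
    if (sysctlbyname("vm.swapusage", &swap, &size, NULL, 0) == 0) {
        printf("swap: total=%llu MB used=%llu MB free=%llu MB\n",
               (unsigned long long)(swap.xsu_total / (1024 * 1024)),
               (unsigned long long)(swap.xsu_used  / (1024 * 1024)),
               (unsigned long long)(swap.xsu_avail / (1024 * 1024)));
    }
}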


It could be a memory fragmentation issue. Perhaps there is no single contiguous chunk of 400 MB available at the time of allocation?

You could try to allocate these large chunks at the very start of your application's life cycle, before the heap gets a chance to become fragmented by numerous smaller allocations.
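
A sketch of that approach: reserve a small pool of worst-case buffers at launch and reuse them for decompression (pool size, buffer size, and names are illustrative):

#import <Foundation/Foundation.h>

static NSMutableArray *gBufferPool;

static void PreallocateDecompressionBuffers(void)
{
    NSUInteger bufferSize = 400 * 1024 * 1024;  // worst case: 100 MP at 4 bytes/pixel

    gBufferPool = [[NSMutableArray alloc] init];
    for (NSUInteger i = 0; i < 4; i++) {
        NSMutableData *buffer = [NSMutableData dataWithLength:bufferSize];
        if (buffer != nil) {
            [gBufferPool addObject:buffer];
        }
    }
}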


initWithBytes:length: tries to allocate its entire length in active memory, essentially equivalent to malloc() of that size. If the length exceeds available memory, you will get nil. If you want to use large files with NSData, I'd recommend initWithContentsOfMappedFile: or similar initializers, as they use the VM system to pull parts of the file in and out of active memory when needed.
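
A minimal sketch of the mapped-file approach (the path handling is illustrative; on later SDKs, dataWithContentsOfURL:options:error: with NSDataReadingMappedIfSafe plays the same role):

#import <Foundation/Foundation.h>

NSData *LoadLargeFileMapped(NSString *path)
{
    // The file's pages are mapped into the address space and faulted in
    // on demand, instead of being read into a fully resident buffer.
    return [[NSData alloc] initWithContentsOfMappedFile:path];
}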
