
What's going on with the Vista heap?

Source: https://www.devze.com, 2023-01-31 06:02
I'm trying to better understand why the Windows Vista heap behaves the way it does. Consider the following very simple program:

#include <tchar.h>
#include <vector>

#define NUM_ALLOCS 10000000

int _tmain(int argc, _TCHAR* argv[])
{
    for (int iteration = 0; iteration < 10000; ++iteration) {
        std::vector<unsigned char *> buffer;
        buffer.reserve(NUM_ALLOCS);
        // Allocate a large number of 1-byte blocks...
        for (int i = 0; i < NUM_ALLOCS; ++i) {
            buffer.push_back(new unsigned char);
        }
        // ...then release them all.
        for (int i = 0; i < NUM_ALLOCS; ++i) {
            delete buffer[i];
        }
    }

    return 0;
}

Basically, each iteration of the loop allocates a large number of 1-byte blocks and then releases them. Naturally, the memory usage of this program goes up while allocating the buffers and then down when the buffers are released.

The behavior that I am seeing on Windows Vista 64-bit is that the peak memory usage (as reported by task manager or by vmmap) stays roughly constant over time, whereas the lowest memory usage reported grows until it is close to the peak memory usage.

On Windows 7 64-bit the lowest memory usage reported does not grow over time.

Edit: I've tested on two Windows Vista 64-bit machines with 8 GB / 4 GB RAM and one Windows 7 64-bit machine with 4 GB RAM. I've tested the 8 GB machine with both low and high memory usage scenarios.

Edit: I've built the above example with Visual Studio 2005 and 2010 with the same result.

This example isn't doing anything useful, but the memory usage scenario is similar (albeit heavily condensed) to a program of mine for which I've tried to figure out why it appears to use a lot more memory than it actually does. From what I can tell, the memory is being held by the heap manager.

Does anyone have any insights into the heap mechanisms? Do I need to do something extra to convince the heap manager to fully release the used heap memory? Are there alternative strategies I should use, such as creating a separate heap and then destroying it?

Any comments or insights are appreciated!


Could it be the Low-Fragmentation Heap?

It seems to me that I read somewhere that the LFH is enabled by default on Windows 7. However, a quick search didn't turn up confirmation, so I may be wrong here.

There is an easy way to check, though. Call HeapQueryInformation on a handle obtained from GetProcessHeap and compare the results on different systems.
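A minimal sketch of that check, assuming a Windows build environment (this is Windows-only code and won't compile elsewhere). HeapCompatibilityInformation reports 0 for a standard heap, 1 for look-aside lists, and 2 for the low-fragmentation heap:

```cpp
// Sketch: query the process heap's compatibility mode via HeapQueryInformation.
// Mode 0 = standard heap, 1 = look-aside lists, 2 = low-fragmentation heap (LFH).
#include <windows.h>
#include <cstdio>

int main()
{
    ULONG mode = 0;
    if (HeapQueryInformation(GetProcessHeap(),
                             HeapCompatibilityInformation,
                             &mode, sizeof(mode), nullptr)) {
        std::printf("Heap compatibility mode: %lu\n", mode);
    } else {
        std::printf("HeapQueryInformation failed: %lu\n", GetLastError());
    }
    return 0;
}
```

Running this on both the Vista and Windows 7 machines would show whether the two systems are using different heap front ends.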


Have you tried this under memory pressure? It doesn't make sense to release the memory unless something else needs it.


atzz is on the right track, but this behavior will happen with any heap: when you call that first "new" with a one-byte size, the heap allocates a "bucket" and preallocates a block of memory (probably some multiple of the 4 KB page size); that way, subsequent allocations of the same size can be satisfied very quickly.

Furthermore, when you call delete, it is just marking that region as unallocated, but keeping it around in case you want a new object of similar size later.
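The caching behavior described above can be sketched with a tiny fixed-size free list. This is an illustrative model, not the Windows heap's actual implementation: freed blocks are pushed onto a linked list and handed back on the next allocation instead of being returned to the operating system.

```cpp
// Sketch of a fixed-size allocation cache: "free" keeps the block on a
// free list (memory stays mapped in the process); the next allocation of
// the same size reuses it without touching the OS.
#include <cstddef>
#include <cstdlib>

class FixedSizeCache {
public:
    explicit FixedSizeCache(std::size_t block_size)
        : block_size_(block_size < sizeof(Node) ? sizeof(Node) : block_size) {}

    void* allocate() {
        if (free_list_) {                 // reuse a cached block: O(1), no syscall
            Node* n = free_list_;
            free_list_ = n->next;
            return n;
        }
        return std::malloc(block_size_);  // fall back to the underlying allocator
    }

    void deallocate(void* p) {            // "free" just pushes onto the list;
        Node* n = static_cast<Node*>(p);  // the memory is never unmapped here
        n->next = free_list_;
        free_list_ = n;
    }

private:
    struct Node { Node* next; };
    std::size_t block_size_;
    Node* free_list_ = nullptr;
};
```

This is why a process's working set can stay high after many small frees: the allocator still owns the pages, ready for the next burst of same-size allocations.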

If the heap manager operated as you describe, it would run extremely slowly, because it would have to constantly ask the kernel, "Can you give me another byte?" and "Demap this, please!" (In fact, this is impossible, since the smallest allocation the kernel will give you is page-sized, as I recall.)


Small memory allocations that are freed to the heap are usually placed into a list that is used for fast allocations.

Even without this optimization, the heap manager is free to hold onto the heap bucket from which the allocation was made. In order for memory to be returned to the system (via VirtualFree), all blocks within a 64 KB region must be freed and coalesced by the heap manager.
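Given that constraint, the "separate heap" strategy the question mentions is one way to guarantee the memory comes back: put the short-lived allocations in their own heap and destroy the whole heap afterwards. A minimal sketch, assuming a Windows build environment (Windows-only code, and the per-allocation pattern mirrors the question's example rather than any recommended production design):

```cpp
// Sketch: isolate short-lived small allocations in a private heap, then
// destroy the heap to release all of its pages back to the system at once.
#include <windows.h>
#include <vector>

int main()
{
    HANDLE heap = HeapCreate(0, 0, 0);   // growable private heap
    if (!heap) return 1;

    std::vector<void*> buffer;
    buffer.reserve(1000000);
    for (int i = 0; i < 1000000; ++i)
        buffer.push_back(HeapAlloc(heap, 0, 1));

    // No per-block HeapFree needed: destroying the heap frees everything
    // it owns and returns the pages to the system in one step.
    HeapDestroy(heap);
    return 0;
}
```

The trade-off is that nothing allocated from that heap may outlive it, and `new`/`delete` won't use it unless you route them there yourself.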

