
GlassFish V2.1.1 heap size never decreases after server batch job

开发者 https://www.devze.com 2023-04-11 03:31 Source: Web

I've set up a glassfish cluster with 1 DAS and 2 Node Agents.

The system has TimedObjects whose jobs are batched once a day. By GlassFish's architecture, only one cluster instance is allowed to trigger the timeout event of each Timer created by the TimerService.

My problem is the heap size of the cluster instance that triggers the batch job. VisualVM shows that one instance always has a scalable heap size (it grows when the server is loaded and shrinks afterwards), but the other instance's heap size always stays at the maximum and never decreases.

It would be acceptable to say that the heap size is at the maximum because the batch job is huge. But the one question I have is: why does it not decrease after the job is done?

VisualVM shows that the "Used Heap Memory" of the instance that triggers the timeout event decreases after the batch job. So why is its "Heap Size" not scaled down accordingly?
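For reference, the two numbers VisualVM plots can also be read from inside the JVM. A minimal sketch (the class name is mine, not from the question): `Runtime.totalMemory()` corresponds to the "Heap Size" (committed heap), and `totalMemory() - freeMemory()` to the "Used Heap".

```java
public class HeapFootprint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();              // VisualVM's "Heap Size"
        long used = committed - rt.freeMemory();        // VisualVM's "Used Heap"
        long max = rt.maxMemory();                      // the -Xmx ceiling
        System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                used >> 20, committed >> 20, max >> 20);

        // After a GC, 'used' drops, but 'committed' may stay where it peaked:
        System.gc();
        long usedAfterGc = rt.totalMemory() - rt.freeMemory();
        System.out.println("used <= committed: " + (usedAfterGc <= rt.totalMemory()));
    }
}
```

So the question is really: used heap shrinks, but the committed heap (what the JVM has claimed from the OS) does not.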


Thank you for your advice!!! ^^


Presumably you have something still referencing the memory. I suggest getting a copy of the Eclipse Memory Analyzer (MAT) and taking a heap dump. From there you can see what has been allocated and what is referencing it.
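If it helps, a heap dump can be captured programmatically as well as with `jmap -dump:file=heap.hprof <pid>`. A sketch using the HotSpot-specific `HotSpotDiagnosticMXBean` (the class and file names here are just examples); the resulting `.hprof` file opens directly in MAT:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpHeap {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        String file = "batchjob.hprof";   // example file name
        // live=true runs a GC first, so the dump contains only reachable objects
        hotspot.dumpHeap(file, true);
        System.out.println("dumped " + new File(file).length() + " bytes");
    }
}
```

Take one dump right after the batch job finishes; the dominator tree in MAT then shows what is keeping the memory alive.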


This is the final answer (thanks Preston ^^)

From the article:

http://www.ibm.com/developerworks/java/library/j-nativememory-linux/index.html

I picked out these statements that answer my question.

1:

"Runtime environments (JVM) provide capabilities that are driven by some unknown user code; that makes it impossible to predict which resources the runtime environment will require in every situation"

2: This is why the node that triggers the batch job holds on to the memory the whole time.

"Reserving native memory is not the same as allocating it. When native memory is reserved, it is not backed with physical memory or other storage. Although reserving chunks of the address space will not exhaust physical resources, it does prevent that memory from being used for other purposes"

3: And this is why the node that does not trigger the batch job shows scalable heap-size behavior.

"Some garbage collectors minimise the use of physical memory by decommitting (releasing the backing storage for) parts of the heap as the used area of heap shrinks."
