Problem with OutOfMemoryError

I am referring explicitly to that example, in which it works. If there are bytecodes that make the order of construction significant, then this optimisation cannot be performed.

Cas :slight_smile:

[quote]I am referring explicitly to that example, in which it works. If there are bytecodes that make the order of construction significant, then this optimisation cannot be performed.

Cas :slight_smile:
[/quote]
Yes, but knowing that means analyzing the object constructors and all methods referenced by them. Then with virtual functions in the picture it all goes hairy… This would only be useful for trivial objects… which may of course be what you are after.

Most of the benefits of escape analysis do come from small objects: things like iterators and classes implementing complex numbers. The iterator for an ArrayList is an obvious example, which with escape analysis would almost always be reduced down to
for (int i=0; i<list.size(); i++)
{
}
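
For reference, a minimal sketch of the iterator form that such an analysis could collapse (the class and method names are just for illustration): since the Iterator never leaves the method, its allocation could in principle be stack-allocated or scalar-replaced away, leaving the plain indexed loop above.

import java.util.Iterator;
import java.util.List;

class IteratorEscapeSketch {
    // 'it' is only ever used inside this method - it never escapes -
    // so an escape-analysis pass could avoid the heap allocation entirely.
    static int sum(List list) {
        int total = 0;
        for (Iterator it = list.iterator(); it.hasNext();) {
            total += ((Integer) it.next()).intValue();
        }
        return total;
    }
}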

There are a few cases involving very large allocations where a bit of smart destruction by the JRE would come in handy - in particular image loading, where we're potentially talking about megs and megs per image.
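
As a hedged sketch of the kind of manual workaround this implies (the class name and processing step are hypothetical): drop each image reference before loading the next, so the multi-megabyte pixel buffer becomes collectable as early as possible.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

class BigImageBatch {
    static void processAll(File[] files) throws IOException {
        for (int i = 0; i < files.length; i++) {
            BufferedImage img = ImageIO.read(files[i]); // potentially megs of pixel data
            // ... process img ...
            img.flush(); // release any cached resources promptly
            img = null;  // drop the reference before the next multi-megabyte read
        }
    }
}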

Ok, ok, so it can be worked around with ease, but it's a very easy bug to overlook and very difficult to track down when it happens. One more thing to make my days easier is always welcome!

Cas :slight_smile:

Hi,

I've had a similar problem today. I've found that one of our Web applications won't run unless the JVM is started with the tuned parameters -Xmx256M -Xms128M -XX:NewSize=60M -XX:MaxPermSize=128M -XX:MaxNewSize=120M.

If these parameters are not set, I get an OutOfMemoryError in some pages, even though I've got a lot of memory on that server. I've always thought that the JVM would make its best effort to manage memory, and that these errors would only happen on a badly tuned JVM, not on the default configuration…

This happens consistently with JDK 1.3.1_09 and 1.4.2_03!

[quote]Hi,

I've had a similar problem today. I've found that one of our Web applications won't run unless the JVM is started with the tuned parameters -Xmx256M -Xms128M -XX:NewSize=60M -XX:MaxPermSize=128M -XX:MaxNewSize=120M.
[/quote]
Can you try using IBM's latest JRE/JDK? Memory management was one of the things that was historically very different between Sun's and IBM's. (I remember a 1.1.x IBM JVM which automatically bloated to 120Mb+ within about 15 seconds of starting my app, which only needed 15Mb to run on Sun's JVM - but I was doing a lot of big image operations, and IBM's JVM was a heck of a lot faster, taking advantage of all the available memory.)

Hi,

I can't try another VM as this web application is running on a Sun server…

[quote]If these parameters are not set, I get an OutOfMemoryError in some pages, even though I've got a lot of memory on that server. I've always thought that the JVM would make its best effort to manage memory
[/quote]
You still have to specify the max size that the heap is allowed to grow to. The VM won't go past the default limit.

OK. Today I learnt that the default is 64MB for the Solaris JVM (http://java.sun.com/j2se/1.4.2/docs/tooldocs/solaris/java.html, -Xmx). Quite strange that such an important setting uses a non-standard option!
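
If it helps, a quick way to confirm what ceiling the VM actually got (Runtime.maxMemory() exists since 1.4; the class name here is made up):

class HeapCheck {
    public static void main(String[] args) {
        // Reports the -Xmx ceiling (or the 64MB default) the VM will grow to.
        long max = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (max >> 20) + " MB");
    }
}

Run it with and without the flag, e.g. "java -Xmx256M HeapCheck" versus plain "java HeapCheck", and compare the two numbers.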

It is a pity that this is necessary, but I presume that the heap implementation currently requires that the heap address range be contiguous.

I've never written code to perform escape analysis myself, but off the top of my head, I can think of methods of determining whether an object escapes a scope or not that run in less time than determining the exact last place an object was referenced. Considering that this optimization would be done at runtime, I'm sure you would want it to be as lean as possible.
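
To make the distinction concrete - a hedged sketch with made-up names: an object that never leaves its method is a candidate for stack allocation or scalar replacement, while one that is returned (or stored in a field) escapes and must stay on the heap.

class EscapeSketch {
    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int local() {
        Point p = new Point(1, 2); // never escapes: the allocation may be elided
        return p.x + p.y;
    }

    static Point leaked() {
        return new Point(3, 4);    // escapes via the return value: must live on the heap
    }
}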

God bless,
-Toby Reyelts

Sun's Windows (and perhaps others) JVM requires a contiguous address space, which is the reason for the pitiful 1.4/1.7G heap limit on Windows. I don't see what this has to do with presetting a max heap size, though. I believe, for example, the current 1.5 VM's parallel garbage collector performs automatic heap sizing.

God bless,
-Toby Reyelts

Yes, in general, escape analysis is going to help more with smaller objects - which is a good thing, because the garbage collector seems to perform worst when dealing with large numbers of small objects. Anything we can do to lessen its burden is good.

In your example, you seem to be implying that the Iterator object just totally disappears, but I seriously doubt that's going to happen. It's not going to be able to pull out concurrent modification checks, and it probably won't be able to pull out the redundant range checking either. That overhead is just noise in a normal application, but it turns into significant overhead in specialized cases.
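
Roughly what those checks look like - a simplified sketch of the general shape of an ArrayList iterator's next(), not the actual JDK source:

import java.util.ConcurrentModificationException;
import java.util.NoSuchElementException;

class IteratorChecksSketch {
    Object[] elementData;
    int size, cursor, modCount, expectedModCount;

    Object next() {
        if (modCount != expectedModCount)   // concurrent modification check
            throw new ConcurrentModificationException();
        if (cursor >= size)                 // explicit range check...
            throw new NoSuchElementException();
        return elementData[cursor++];       // ...on top of the VM's own array bounds check
    }
}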

To this day, I can't understand why the library code does silly things like perform redundant range checks.

God bless,
-Toby Reyelts