I am referring explicitly to that example, in which it works. If there are bytecodes that make the order of construction significant, then this optimisation cannot be performed.
Cas
[quote]I am referring explicitly to that example, in which it works. If there are bytecodes that make the order of construction significant, then this optimisation cannot be performed.
Cas
[/quote]
Yes, but knowing that means analyzing the object's constructors and every method they reference. Then with virtual functions in the picture it all gets hairy… This would only be useful for trivial objects… which may of course be what you are after.
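To make the virtual-function problem concrete, here is a hedged sketch (Sink and both implementations are made-up names for illustration): unless the JIT can prove which override is actually invoked, it has to assume the argument escapes.
interface Sink { void accept(Object o); }

class Discarding implements Sink {
    public void accept(Object o) { /* does not retain o */ }
}

class Retaining implements Sink {
    private Object held;
    public void accept(Object o) { held = o; } // o escapes here
}

class Caller {
    static void use(Sink s) {
        Object tmp = new Object();
        // Without knowing the concrete type of s, the analyser must
        // conservatively treat tmp as escaping through accept().
        s.accept(tmp);
    }
}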
Most of the benefits of escape analysis do come from small objects: things like iterators and classes implementing complex numbers. The iterator for an ArrayList is an obvious example; with escape analysis it would almost always be reduced down to
for (int i = 0; i < list.size(); i++)
{
}
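To spell that transformation out, a hedged before/after sketch (IterDemo, sum and sumReduced are made-up names; raw types and casts since this predates generics):
import java.util.ArrayList;
import java.util.Iterator;

class IterDemo {
    // What the programmer writes: iterator() allocates an Iterator object.
    static int sum(ArrayList list) {
        int total = 0;
        for (Iterator it = list.iterator(); it.hasNext();) {
            total += ((Integer) it.next()).intValue();
        }
        return total;
    }

    // Roughly what scalar replacement could reduce it to once the
    // iterator is proven not to escape: the cursor becomes a local int.
    static int sumReduced(ArrayList list) {
        int total = 0;
        for (int i = 0; i < list.size(); i++) {
            total += ((Integer) list.get(i)).intValue();
        }
        return total;
    }
}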
There are a few cases involving very large allocations where a bit of smart destruction by the JRE would come in handy - in particular image loading, where we're talking about potentially megs and megs per image.
Ok, ok, so it can be worked around with ease, but it's a very easy bug to overlook and very difficult to track down when it happens. One more thing to make my days easier is always welcome!
Cas
Hi,
I've had a similar problem today. I've found that one of our Web applications won't run if the JVM is not started with the tuned parameters -Xmx256M -Xms128M -XX:NewSize=60M -XX:MaxPermSize=128M -XX:MaxNewSize=120M.
If these parameters are not set, I get an OutOfMemoryError on some pages, even though I've got a lot of memory on that server. I've always thought that the JVM would make its best effort to manage memory, and that these errors would only happen on a badly tuned JVM, not on the default configuration…
This happens consistently with JDK 1.3.1_09 and 1.4.2_03!
[quote]Hi,
I've had a similar problem today. I've found that one of our Web applications won't run if the JVM is not started with the tuned parameters -Xmx256M -Xms128M -XX:NewSize=60M -XX:MaxPermSize=128M -XX:MaxNewSize=120M.
[/quote]
Can you try using IBM's latest JRE/JDK? Memory management was historically one of the big differences between Sun's and IBM's VMs. I remember a 1.1.x IBM JVM that automatically bloated to 120 MB+ within about 15 seconds of starting my app, which on Sun's JVM needed only 15 MB to run - but I was doing a lot of big image operations, and IBM's JVM was a heck of a lot faster because it took advantage of all available memory.
Hi,
I can't try another VM, as this web application is running on a Sun server…
[quote]If these parameters are not set, I get an OutOfMemoryError on some pages, even though I've got a lot of memory on that server. I've always thought that the JVM would make its best effort to manage memory
[/quote]
You still have to specify the max size that the heap is allowed to grow to. The VM won't go past the default limit.
OK. Today I learnt that the default maximum heap is 64MB for the Solaris JVM (http://java.sun.com/j2se/1.4.2/docs/tooldocs/solaris/java.html, -Xmx). Quite strange that such a fundamental setting is controlled by a non-standard option!
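If you want to check what ceiling a given set of flags actually gives you, something like this prints the current and maximum heap sizes (HeapProbe is a made-up class name; maxMemory() needs 1.4 or later):
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is the heap as currently sized; maxMemory()
        // is the ceiling it may grow to (the effective -Xmx).
        System.out.println("current heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("max heap:     " + rt.maxMemory() / (1024 * 1024) + " MB");
    }
}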
It is a pity that this is necessary, but I presume that the heap implementation currently requires that the heap address range be contiguous.
I've never written code to perform escape analysis myself, but off the top of my head, I can think of methods of determining whether an object escapes a scope or not that run in less time than determining the exact last place an object was referenced. Considering that this optimization would be done at runtime, I'm sure you would want it to be as lean as possible.
God bless,
-Toby Reyelts
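To illustrate the property being proven (Point, EscapeDemo and the method names are hypothetical): an object escapes if any reference to it outlives the method's frame, and that test is cheaper than pinpointing the last use of every reference.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

class EscapeDemo {
    static Point leaked; // a store to a field is an escape

    // Does NOT escape: p never leaves this frame, so a JIT could in
    // principle stack-allocate it or replace it with two local ints.
    static int lengthSquared(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    // DOES escape: the reference outlives the frame via a static field,
    // so the object must stay on the heap.
    static void publish(int x, int y) {
        leaked = new Point(x, y);
    }
}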
Sun's Windows JVM (and perhaps others) requires a contiguous address space, which is the reason for the pitiful 1.4/1.7G heap limit on Windows. I don't see what this has to do with presetting a max heap size, though. I believe, for example, that the current 1.5 VM's parallel garbage collector performs automatic heap sizing.
God bless,
-Toby Reyelts
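For anyone wanting to push toward that ceiling, a launch along these lines should work (MyApp is a placeholder; -XX:+UseParallelGC is the throughput collector flag available from 1.4.1 onwards):
java -Xmx1400M -XX:+UseParallelGC MyApp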
Yes, in general, escape analysis is going to help more with smaller objects - which is a good thing, because the garbage collector seems to perform worst when dealing with large numbers of small objects. Anything we can do to lessen its burden is good.
In your example, you seem to be implying that the Iterator object just totally disappears, but I seriously doubt that's going to happen. It's not going to be able to pull out concurrent modification checks, and it probably won't be able to pull out the redundant range checking either. That overhead is just noise in a normal application, but it turns into significant overhead in specialized cases.
To this day, I can't understand why the library code does silly things like perform redundant range checks.
God bless,
-Toby Reyelts
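To make that overhead concrete, here is a stripped-down sketch (not the real java.util source; SketchList is a made-up name) of what an ArrayList-style iterator does on every next(): a comodification check, its own cursor check, and then get() repeats the range check.
import java.util.ConcurrentModificationException;
import java.util.NoSuchElementException;

class SketchList {
    private Object[] data = new Object[10];
    private int size;
    private int modCount; // bumped on every structural change

    void add(Object o) {
        if (size == data.length) {
            Object[] bigger = new Object[size * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = o;
        modCount++;
    }

    Object get(int index) {
        if (index >= size) // redundant when called from the iterator below
            throw new IndexOutOfBoundsException("" + index);
        return data[index];
    }

    class Itr {
        private int cursor;
        private int expectedModCount = modCount;

        boolean hasNext() { return cursor < size; }

        Object next() {
            if (modCount != expectedModCount) // comod check on every call
                throw new ConcurrentModificationException();
            if (cursor >= size) // range check #1
                throw new NoSuchElementException();
            return get(cursor++); // get() performs range check #2
        }
    }
}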