Jeff is mistaken; you still need to copy the DLL. In beta1 at least.
The Apple server VM is not the same as Sun’s server VM. It merely changes memory tuning characteristics (at this point in time).
Cas
Exactly. A default install of Java 1.5 beta gives this in response to java -server -version:
Error: no `server' JVM at `C:\Program Files\Java\j2re1.5.0\bin\server\jvm.dll'.
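For reference, the workaround Cas mentions at the top is to copy the server VM from the JDK into the JRE by hand. A hedged sketch; the JDK path below is illustrative, since install locations vary:

mkdir "C:\Program Files\Java\j2re1.5.0\bin\server"
copy "C:\Program Files\Java\j2sdk1.5.0\jre\bin\server\jvm.dll" "C:\Program Files\Java\j2re1.5.0\bin\server\"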
Hmm. That’s the beta. I’m not sure what that means, if anything.
AFAIK there is no intent to stop distributing the server VM with the JRE, but who knows, maybe someone thought it was a brilliant way to reduce JRE size.
I’ll poke around and see what I can find out.
Um, are you guys telling me it’s been this way since 1.4?
That’s nutty. I need to look into that. Thanks.
[quote]Um, are you guys telling me it’s been this way since 1.4?
[/quote]
Yes.
You live in a nice world: no Windows PCs for years. Lucky you…
[quote]Um, are you guys telling me it’s been this way since 1.4?
[/quote]
Since the server VM was released (1.3?).
You could at least make the server VM an option in the JRE install so that you don’t have to download it unless you want to.
Cas
[quote]Um, are you guys telling me it’s been this way since 1.4?
That’s nutty. I need to look into that. Thanks.
[/quote]
Yep, there’s been no server VM in the Windows JRE for a long time now. However, the Linux 1.5.0 beta JRE has it. Sweet.
You mean it’s just the 1.5 JDK installation which doesn’t copy the server VM to the JRE location? That’s just nutty.
Yes, it has always been the case that the server VM is not included in the “JRE” but is included in the “JDK”, on Windows at least.
On Mac there is currently NO server VM at all, but the -server command-line arg does change some parameters of the client VM so it behaves a bit more server-like, though without any of the server VM optimizations.
Heh, what a mess! ;D
I’d always assumed that the reason why the server VM wasn’t in the JRE was because most people wouldn’t know it was there anyway - it’s an easy 2MB to cut. The minority who want the server VM are capable of getting the full JDK instead - classical client-side apps would prefer to avoid the startup cost so it’s application servers et al that really benefit, and they frequently come bundled with the preferred VM.
That was always my perception, anyway.
What annoys me is that all the performance optimism and clever compiler tricks quoted about Java are based on the server VM. The client VM just sucks, really.
Cas
Well, not ALL the clever tricks, but certainly the expensive optimization ones are server-only.
I still think it’s wacky that they took it out of the JRE. I never noticed because I ALWAYS install a JDK.
I’ll do some digging on it, but I suspect some brilliant person decided that “only GUI users download the JRE and they don’t need server”, which, as PC points out, is not a good assumption.
Perhaps related to this mess?
It certainly seems that someone in marketing is keen to fork <*> Java.
<*> (the similarity between this and another f word is rather appropriate here I think)
[quote]The client VM just sucks, really
[/quote]
That’s over-generalized and therefore simply not true. Eclipse, for example, runs much better (read: faster) on a client VM than on a server VM. For whatever reason…
It sucks for what we want, which is high-performance games that can compete with C++.
Eclipse runs faster with the server VM - it just needs a little tuning.
Cas
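For the curious, a hedged sketch of what that tuning might look like; the JDK path and heap sizes are illustrative, not Cas’s actual settings. Eclipse’s -vm and -vmargs switches let you point it at a JDK and hand arguments to the VM:

eclipse.exe -vm "C:\j2sdk1.5.0\bin\javaw.exe" -vmargs -server -Xms128m -Xmx256m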
Try to run this:
public class Bench
{
    public static void main(String[] args)
    {
        long time = System.currentTimeMillis();
        int x = 0;
        for (int i = 0; i < 1000000000; i++)
        {
            x += 5;
            x += 10;
        }
        System.out.println(x);
        System.out.println((System.currentTimeMillis() - time) / 1000f);
    }
}
with the client and server VM (1.5). It’s amazing.
Client VM: 8 seconds on a 1 GHz Pentium 3.
Server VM: 0.01 (!!!) seconds.
It is conceivable the server VM processes that loop completely and turns it into a single instruction…
Not particularly useful in the real world maybe, but you never know.
Cas
Maybe, simply, the server VM is smart enough to remove the loop completely, as the result can be calculated with a simple multiplication, that is 1000000000 * 15… It can even put the result directly in the print statement…
I bet that if you take the loop count from, mhhh, the command line, you will not get the same results on the server VM… Otherwise you are really just benchmarking the timer granularity…
Not amazing performance, but a nice optimization…
corrections… can’t type…
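If the server VM really does fold the loop away, the whole benchmark reduces to a precomputed constant. A minimal sketch of the equivalent program (not actual compiler output):

public class BenchFolded
{
    public static void main(String[] args)
    {
        // 1000000000 * 15 = 15000000000, which overflows int and wraps
        // around to 2115098112 - the same value the looped version prints.
        int x = (int) (1000000000L * 15L);
        System.out.println(x); // prints 2115098112
    }
}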
Probably has more to do with the JIT. AFAIK, the server VM is much more proactive at determining which methods it should JIT compile, whereas the client VM waits until it actually sees a hotspot before it compiles.
Try moving the code into a method, calling it a couple of times, and THEN performing the benchmark. This should give the client VM time to compile the method, if this is indeed what’s happening.
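Something like this, perhaps; a minimal sketch of that warmed-up benchmark (Bench2 and work() are illustrative names, not from the post above):

public class Bench2
{
    // The loop from above, hoisted into a method so the VM
    // has a unit it can JIT compile.
    static int work()
    {
        int x = 0;
        for (int i = 0; i < 1000000000; i++)
        {
            x += 5;
            x += 10;
        }
        return x;
    }

    public static void main(String[] args)
    {
        // Warm-up calls: give the VM a chance to compile work()
        // before the clock starts.
        work();
        work();

        long time = System.currentTimeMillis();
        int x = work();
        System.out.println(x);
        System.out.println((System.currentTimeMillis() - time) / 1000f);
    }
}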