Things you disagree with in the Java language

It may not work for you, but the GNU Java compiler (GCJ) had a fast C/C++ interface because it treated Java like any other compiled source. Trouble is that they didn’t implement much of the Java standard library.

The problem with JNI is not Java’s strict type definitions, but rather C’s very sloppy definition of everything. Marshalling types between languages is never as easy as it looks.

However, I thought that direct byte buffers fix a lot of this? I don’t see any performance issues with calls in my OpenGL code; I am always fill/vertex limited long before call overhead is an issue.
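
For reference, here’s the usual direct-buffer pattern (plain JDK; the class name is mine, just for illustration). The buffer’s storage lives outside the Java heap, so native code can read it in place and the JNI layer doesn’t have to make a defensive copy:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DirectBufferSketch {
	public static void main(String[] args) {
		// 3 floats * 4 bytes, allocated off-heap in native byte order.
		FloatBuffer verts = ByteBuffer.allocateDirect(3 * 4)
				.order(ByteOrder.nativeOrder())
				.asFloatBuffer();
		verts.put(new float[] { 0f, 0.5f, 1f }).flip();
		// Native code handed this buffer can read the data without a copy.
		System.out.println(verts.isDirect() + " " + verts.remaining()); // true 3
	}
}
```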

Going around in circles here - I think I mentioned JNA on page 2! :wink:

@princec - this kind of syntax is pretty much what JNA gives you, along with the ability not to have to cross-compile stubs for different platforms. It’s a better JNI on the user side by far, but it’s still a wrapper to JNI so doesn’t get over the issues of how heavyweight that may or may not be.

@Roquen - meant to ask this back on page 2 when you or someone else mentioned it - any links to discussions on JNI replacements in the Java core? Make JNI faster and everyone wins, including all those nice syntax wrappers like JNA.

@Oskuro - got an example of what you were trying to do? There shouldn’t be much if anything that JNI gives you that JNA can’t.

+1 I doubt that any Java -> native boundary cross is going to have zero overhead, so I would suggest that 99% of the time if JNI is the bottleneck you’re crossing the boundary too much. :persecutioncomplex:

No, I have no real information. It’s just been talked about in a hand-waving manner in some of the recent “future directions” presentations… you know, all-PR-no-content kind of thing.

I timed it once, years ago on a Celeron 300 I think - the overhead of a basic Java->JNI call jump is on the order of hundreds of nanoseconds. This means you have to do an awful lot of them before you actually notice the overhead.

Cas :slight_smile:

(To be honest I haven’t timed the boundary since probably JDK5…so I might be full of poop.)

JNA is not a wrapper around JNI. It’s a wrapper around libffi, or at least its moral equivalent. It might use JNI to interface to that, but after that, JNI is not involved. Since it’s always using dynamic libraries with dlopen/LoadLibrary, it’s always going to be a bit slower than JNI.

The overhead of JNI isn’t so much the call overhead as the defensive copying or locking it has to do.

I wrote a benchmark to measure JNI overhead last year, here are the results on jre7u7, Sandy Bridge 3.1GHz, Win7:

glGetInteger(GL_MAX_TEXTURE_IMAGE_UNITS, buffer); // 23ns server, 27ns client
glColor4f(1.0f, 1.0f, 1.0f, 0.0f); // 16ns server, 17ns client
glDepthMask(true); // 11ns server, 13ns client

These times include everything in the call (Java, JNI, driver). IIRC on Linux I had seen times below 10ns (on slower CPU, etc). I think JNI overhead is simply noise in a desktop environment. Things are much worse on ARM CPUs, though that depends on the JVM implementation (hint: you want to use Oracle’s).

Could you please explain how you wrote that benchmark? And also, is the OpenJDK slower? Did you test that? How slow is it :X (I hope it’s not slow)

Here’s the code:

import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;

import java.nio.IntBuffer;

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;

public class JNIPerfTest {

	private JNIPerfTest() {
	}

	public static void main(String[] args) {
		try {
			Display.create();
		} catch (LWJGLException e) {
			e.printStackTrace();
			return; // no GL context, nothing to benchmark
		}

		final int DURATION = 1024 * 1024;
		final int WARMUP = 5 * DURATION;
		final int BENCH = 10 * DURATION;

		benchmark(WARMUP, BENCH);

		Display.destroy();
	}

	private static void benchmark(final int WARMUP, final int BENCH) {
		final IntBuffer buffer = BufferUtils.createIntBuffer(16);

		innerLoop(buffer, WARMUP);

		long time = System.nanoTime();
		innerLoop(buffer, BENCH);
		time = (System.nanoTime() - time) / BENCH;

		System.out.println(time);
	}

	private static void innerLoop(final IntBuffer buffer, final int duration) {
		for ( int i = 0; i < duration; i++ ) {
			glGetInteger(GL_MAX_TEXTURE_IMAGE_UNITS, buffer);
			//glGetInteger(GL_MAX_TEXTURE_IMAGE_UNITS);
			//glDepthMask(true);
			//glColor4f(1.0f, 1.0f, 1.0f, 0.0f);
		}
	}

}

I forgot to mention that I tested with the server VM. I updated my previous post with times on the client one. I haven’t tested on OpenJDK, but it should have similar performance.

Pedant! :stuck_out_tongue: Yes, JNA is a JNI-based wrapper to libffi (a forked version, from memory). Not exactly sure what you mean by JNI not being involved after that - JNI is involved in every call.

If you’re sensible about not doing too much of the auto-magic stuff, speed of a call is 2x-3x that of direct JNI. That’s still noise!

On topic, I highly recommend keeping an eye on Kotlin. It fixes a LOT of issues I have with Java (many mentioned in this thread), without becoming a monstrosity of a language like Scala. It also has a strong focus on performance, which is ideal when you’re developing games and need a modern language to work with. Anyone who has decompiled Scala code will know what I mean.

I’ve also done several tests with JDK8 and lambdas and I can honestly say that, unlike generics in 5.0, they got everything right. It’s a huge improvement over how we write code now. But Java (the language) is moving too slowly, any way you look at it. Many will argue that that’s a good thing, but with invokedynamic I think it’s only a matter of time before another language becomes the first choice for JVM development.
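
For anyone who hasn’t played with the JDK8 builds yet, here’s a tiny sketch of the difference (class name mine): the same sort written as the anonymous-class boilerplate we use today, then as a lambda.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaSketch {
	public static void main(String[] args) {
		List<String> names = Arrays.asList("cas", "roquen", "oskuro");

		// Pre-JDK8 style: anonymous inner class.
		Collections.sort(names, new Comparator<String>() {
			public int compare(String a, String b) { return a.compareTo(b); }
		});

		// JDK8 lambda, same behavior, one line.
		names.sort((a, b) -> a.compareTo(b));

		System.out.println(names); // [cas, oskuro, roquen]
	}
}
```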

I’ll keep an eye on Kotlin, but I’ll come back when it actually implements pattern matching. Right now it’s nothing but instanceof checks with no actual patterns to be found.

The documentation also seems to imply Option is nothing but typesafe null, and that makes me wonder if they’ll recognize the utility of monads the way C# has. With a for-comprehension or a LINQ select, I can use the same code to deal with failure synchronously or asynchronously, with or without error messages, with or without alternatives. I just change the type that goes in, and my flow control logic itself is all polymorphic.
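
To make that concrete, here’s a minimal sketch of an Option type in Java with JDK8 lambdas - all names are mine, not any real library’s API. The point is that the failure-handling logic lives in flatMap, so the calling code never branches on null:

```java
import java.util.function.Function;

// Minimal Option type, sketched for illustration only.
abstract class Option<T> {
	static <T> Option<T> some(T value) { return new Some<T>(value); }
	static <T> Option<T> none() { return new None<T>(); }

	abstract <R> Option<R> flatMap(Function<T, Option<R>> f);
	abstract T orElse(T fallback);
}

class Some<T> extends Option<T> {
	private final T value;
	Some(T value) { this.value = value; }
	<R> Option<R> flatMap(Function<T, Option<R>> f) { return f.apply(value); }
	T orElse(T fallback) { return value; }
}

class None<T> extends Option<T> {
	// Failure short-circuits the whole chain: f is never called.
	<R> Option<R> flatMap(Function<T, Option<R>> f) { return new None<R>(); }
	T orElse(T fallback) { return fallback; }
}

public class OptionDemo {
	// A lookup that may fail, without nulls or exceptions leaking out.
	static Option<Integer> parse(String s) {
		try { return Option.some(Integer.parseInt(s)); }
		catch (NumberFormatException e) { return Option.none(); }
	}

	public static void main(String[] args) {
		// Same flow-control code for success and failure - only the values differ.
		int ok = parse("21").flatMap(a ->
				parse("2").flatMap(b -> Option.some(a * b))).orElse(-1);
		int bad = parse("oops").flatMap(a ->
				parse("2").flatMap(b -> Option.some(a * b))).orElse(-1);
		System.out.println(ok + " " + bad); // 42 -1
	}
}
```

The nested flatMaps are exactly what a for-comprehension or LINQ select desugars to; swap in a Result or Future type with the same flatMap shape and the calling code doesn’t change.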

I’m all for “let a thousand languages bloom”, but I have little use for ones that throw away perfectly good theory with a proven track record.

I’ve the impression that a fair amount of resources is being devoted to “other languages” on the JVM. This is a good call, as Java is moving slowly and this strategy should help draw more users. More users = more contributors = JVM moves faster = win.

BTW: I dug up one of the hand-waving presentations. “To Java SE 8, and Beyond!”

I like what I see in Kotlin - like Scala, it has nice features that Java doesn’t. Unlike Scala, it doesn’t add a whole new area of complexity (Scala’s type system).

I’m largely happy enough with Java. By and large I don’t find language features stopping me from producing code - it’s usually time and motivation that are the real issues :o)

By way of comparison, I dropped C++ when I realised I was spending too much time chasing memory leaks, circular dependencies, dangling pointers and other problems of the past which have been well and truly solved. This is my basic issue with Java++ languages - Kotlin has some nice ideas but I doubt it’s going to greatly increase my coding efficiency, whereas the C++ to Java switch really paid off in terms of time not wasted.

I never said that there is a problem with the idea, just the implementation. Using the efficiency of arrays for arguments on the order of 1-100 is just loony.
That an array introduces problems with generics, is not lazy (both of which stem from the erasure workaround and require copies), and can be altered inside the method (uselessly, by the way, since the array is constructed at runtime in both cases, and probably harmfully for code clarity if you use it as a temporary) is just the cherry of crazy on top.
Not to mention that collections are infinitely more flexible when ordering matters.
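
To make the generics problem concrete, here’s the classic heap-pollution trap that array-typed varargs allow (class and method names mine). Because List&lt;String&gt;... erases to List[], the array store check can’t catch the bad write, and the ClassCastException only surfaces later:

```java
import java.util.Arrays;
import java.util.List;

public class VarargsPitfall {
	static String firstOf(List<String>... lists) {
		Object[] arr = lists;            // legal: arrays are covariant
		arr[0] = Arrays.asList(42);      // no ArrayStoreException: runtime type is List[]
		return lists[0].get(0);          // ClassCastException: Integer is not a String
	}

	public static void main(String[] args) {
		try {
			firstOf(Arrays.asList("hello"));
		} catch (ClassCastException e) {
			System.out.println("heap pollution: " + e.getMessage());
		}
	}
}
```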

The only ‘advantage’ I can see is that an array is very amenable to getting into the CPU cache, but a varargs iterable built over an array probably would be too, not to mention that varargs arguments by necessity tend to be few in number (or an actual array).

Also, I wish Java had given the collections library fully immutable interfaces (not supertypes of the mutable ones), so that libraries don’t need no steenking copies.

Rust has some ideas here: the freeze concept applied to the type system is probably a better idea than a simple type, because you can thaw, add stuff, and freeze again as a ‘protocol type’ without any copies on either side of the client-API border. Not sure how it deals with concurrency.

See, I don’t understand someone programming in Java complaining about the lack of lazy sequences. If you want lazy sequences, use Haskell. Java programmers would expect a common Java idiom to be used for varargs, and an array is pretty common in Java. Don’t you think a lazy sequence would have been a surprising choice for most Java devs?

I’m saying that arrays being a poor choice for the type of varargs was utterly predictable in the face of generics and the well-known limitations that the type-erasure workaround brought to the table - especially if you need to transform the array. This is where laziness (not infinite sequences) is an advantage, by not requiring allocation of a new array to iterate over a ‘new’ sequence. By specifying the runtime type of … as an array, they mandated that any transformation used as input to … must return [].

That you’re telling me I should be ‘using Haskell’ doesn’t make it any less the wrong choice. Common idiom? Surely less common than collections, which are all iterable. Make arrays iterable too by VM wizardry, for all I care.

Meanwhile, every ‘framework’ API in Java 7 and 8 is deprecating all the array methods it can get away with in favor of generic collections (Swing, for instance). Terrible planning - the left hand doesn’t know what the right is doing.

I don’t think that using varargs for methods that perform a transformation is their intended usage, nor is it good programming practice.

Arrays should definitely implement Iterable though.
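
In the meantime, a tiny adapter gets you a for-each over an array without copying anything (class and method names mine; Arrays.asList is just a view over the backing array, not a copy):

```java
import java.util.Arrays;
import java.util.Iterator;

public class ArrayIterable {
	// Wrap an array so it can be used wherever an Iterable is expected.
	static <T> Iterable<T> iterable(final T[] array) {
		return new Iterable<T>() {
			public Iterator<T> iterator() {
				return Arrays.asList(array).iterator(); // view, no copy
			}
		};
	}

	public static void main(String[] args) {
		StringBuilder sb = new StringBuilder();
		for (String s : iterable(new String[] { "a", "b", "c" }))
			sb.append(s);
		System.out.println(sb); // abc
	}
}
```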