http://mail.openjdk.java.net/pipermail/discuss/2012-August/002717.html
I think this would be very interesting. Most likely it won't be coming for a long time, but it could be an alternative to LWJGL in the future.
You should also check out Aparapi (http://code.google.com/p/aparapi/) which has been mentioned here before. One of the project proposers is behind it.
It’s not really an alternative to LWJGL.
Aren't we already using the GPU natively, with JNI libraries that communicate with OpenGL directly and whatnot?
Not really. This is about general purpose computation on the GPU auto-magically generated by the back-end compiler.
My point is, if you write a C++ game you would likely only use OpenGL/DirectX and that's it.
Rootbeer and Aparapi could (and even should) use a backend based on JOCL (JogAmp) or the OpenCL binding of LWJGL.
It won’t be an alternative to existing Java bindings to OpenCL, but it greatly eases the use of the GPU for computing.
Never seen Aparapi before – it looks really interesting. Could it be practical to use in a game, e.g. for GPU-accelerated physics?
@cero - again, not really. OpenGL, DirectX, CUDA, OpenMP, etc. require explicit conversion or mark-up. This is auto-magic and will be much more limited, since it must detect a convertible pattern. The upside is that it requires no work on the programmer’s side.
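To make the “convertible pattern” point concrete, here is a hypothetical illustration in plain Java (class and method names are invented for the example, not from any real offloading API): the first loop is fully data-parallel and is the kind of shape an auto-offloading compiler could turn into a GPU kernel, while the second has a loop-carried dependency that rules that out.

```java
public class OffloadPatterns {

    // Data-parallel: every iteration is independent of the others,
    // so the whole loop could in principle become a GPU kernel.
    static float[] scale(float[] in, float factor) {
        float[] out = new float[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = in[i] * factor;      // no cross-iteration dependency
        }
        return out;
    }

    // Loop-carried dependency: iteration i reads the accumulator from
    // iteration i-1, so naive offloading is impossible. One small
    // refactor like this is all it takes to fall off the fast path.
    static float[] prefixSum(float[] in) {
        float[] out = new float[in.length];
        float acc = 0f;
        for (int i = 0; i < in.length; i++) {
            acc += in[i];                 // depends on the previous iteration
            out[i] = acc;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] data = {1f, 2f, 3f, 4f};
        // prints [2.0, 4.0, 6.0, 8.0]
        System.out.println(java.util.Arrays.toString(scale(data, 2f)));
        // prints [1.0, 3.0, 6.0, 10.0]
        System.out.println(java.util.Arrays.toString(prefixSum(data)));
    }
}
```

The two methods compute similar-looking things, which is exactly why a purely pattern-based detector can be fragile.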
Magically detect a pattern, as in “I refactored one little thing and now my performance has dropped by 90%! Why is Java so unreliable???”
Yeah, that’s a very real possibility.
Some people just want to watch the world burn
This would make much more sense if we were talking about using the GPU to speed up Java programs/applications on older computers that need a performance kick. And why would this apply to games? Why would you use something like this for a game when you are already rendering on the graphics card?
Java 8 (which would tout this feature) will not be available on pre-Windows XP machines, assuming these kinds of accelerations are included at all, right? I think this is a great idea, by the way, so long as the implementation is done in a certain way. But what would be the correct implementation? The best-case scenario is that all computers with dedicated 32MB-128MB video cards could potentially run modern Java apps at increased speeds. Most of the rigs that would benefit from this are old gaming desktops from a not-so-distant computing era which is now long (but not completely) forgotten. Maybe you could justify buying that new 2GB GigaGeForce by telling yourself that you can offload your Java applications to it? What about compiling, by the way? Let’s make the compiler offload to the GPU; that might be a way to convince a very small group of people to get excited about this sort of feature.
Maybe Aparapi will gain a strong enough following for this to go somewhere, in case the official implementation doesn’t come together. But probably not. 1GB graphics cards will be $1.49 by the time this maybe gets released, so why should we care about performance enhancements like these? Again, I very much like the idea of using available hardware that’s just sitting around doing nothing, but how or when does it become practical?
Sure, we render graphics directly on the card, but the game logic could use a boost too (e.g. AI, moving particles, pathfinding, etc.).
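A particle update is a good example of game logic with the right shape for this. Below is a sketch in plain JDK Java (names invented for the example): the per-particle body is independent per index, which is exactly the form an Aparapi kernel or an auto-offloading compiler would need. `parallel()` here only fans out across CPU cores; a GPU backend would execute the same per-index body as a kernel.

```java
import java.util.stream.IntStream;

public class ParticleStep {

    // One Euler integration step over flat position/velocity arrays.
    // Each index is updated independently: a data-parallel map.
    static void step(float[] pos, float[] vel, float dt) {
        IntStream.range(0, pos.length)
                 .parallel()
                 .forEach(i -> pos[i] += vel[i] * dt); // independent per index
    }

    public static void main(String[] args) {
        float[] pos = {0f, 0f, 0f};
        float[] vel = {1f, 2f, 3f};
        step(pos, vel, 0.5f);
        // prints [0.5, 1.0, 1.5]
        System.out.println(java.util.Arrays.toString(pos));
    }
}
```

Flat primitive arrays (rather than a `List<Particle>` of objects) matter here: GPU-style backends generally need contiguous primitive data, not object graphs.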