Hardware acceleration?

As mentioned in this thread: http://www.java-gaming.org/index.php/topic,20886.0.html

…there seems to be a problem with hardware support for 3D acceleration on Android. I couldn’t get it to work on a Samsung Galaxy; not with my own code, not with the examples from Google’s SDK…it’s all rendered in software. I’ve seen some demos/games (Omnigsoft) that use it even on my phone, but they seem to rely on their own native libraries (judging from the debug output), and I’ve found plenty of claims on the internet that it is possible…but none of those posters has actually tried it, or at least not with anything more complex than a spinning cube.
I’ve also found a post from Google saying that 1.0 couldn’t do it, and a statement in a German forum that the native SDK (NDK) can’t do it either, even with Android 1.5. So my guess is that it just isn’t possible right now.

Has anybody found a statement from Google on this? 3D on Android seems to be a low priority for them…


My Android game uses OpenGL with no problems on the G1 and Magic: “CraigsRace” http://www.cyrket.com/package/com.dlinkddns.craig

Would be interested to know if it works ok on the Galaxy. I put an “OpenGL” checkbox on the main screen of the game so you can run the game without OpenGL.

It works fine on the Galaxy. The point isn’t that OpenGL doesn’t work; the point is that I can’t get hardware acceleration for it to work. I can’t tell whether your game uses the GPU of the device or not. I’ll have to have a look in the debugger when I’m at home again to make a more educated guess. Performance is fine, though.
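By the way, one way to check this from code rather than from the ddms output is to log the GL strings once the surface is up. A rough sketch (just an illustration, not taken from any of the projects mentioned here; as far as I know the software renderer identifies itself as PixelFlinger in GL_RENDERER):

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;
import android.util.Log;

// Illustrative renderer that just logs which GL implementation is active.
public class GLInfoRenderer implements GLSurfaceView.Renderer {
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // The software renderer reports something like "Android PixelFlinger 1.x";
        // a hardware driver reports the GPU vendor/chip instead.
        Log.i("GLInfo", "GL_VENDOR:   " + gl.glGetString(GL10.GL_VENDOR));
        Log.i("GLInfo", "GL_RENDERER: " + gl.glGetString(GL10.GL_RENDERER));
        Log.i("GLInfo", "GL_VERSION:  " + gl.glGetString(GL10.GL_VERSION));
    }
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
    }
    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    }
}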

Cool, thanks EgonOlsen! My game scales and rotates around 2,000 vertices with textures every frame (about 30fps), so… err… actually, I don’t know if that is a lot or not.

The problem with the software renderer is fill rate, not vertex count. Judging from the look and feel of your game, it seems to use hardware IMHO, but I’ll check it out in the debugger later.

I expect vertex count might be a bit of a problem when you start getting fancy… we’ve got about 1,000-2,000 sprites per frame - between 4,000 and 8,000 vertices, none of which are shared - and we want 60fps :confused:

Cas :slight_smile:

I ran CraigsRace using ddms to see the console output…it looks like software rendering to me. My guess (again, I have no real proof for this…) is that lines like this one


07-29 16:19:12.637: INFO/ARMAssembler(1051): generated scanline__00000077:03010104_00000504_00000000 [ 18 ipp] (37 ins) at [0x38a518:0x38a5ac] in 1641668 ns

indicate that the software renderer has assembled a new, optimized scanline renderer for that particular texture/mode combination. The first Android version used to crash in this method when using multi-texturing. It seems that such a routine is generated for each texture/mode combination (and cached somehow…if you rerun the application without applying a change, it may not happen again).
However, the fill rate in this game is far better than in my example code. Is it using ortho mode? In that case, the renderer can skip the expensive perspective correction, which speeds things up.

Yes (I think), because I converted it from my 4K entry. I do this:


// 2D projection in screen pixels (origin at the top-left); no perspective correction needed
gl.glOrthof(0, widthScreen, heightScreen, 0, 0, 1);

I also do this:


gl.glDisable(GL10.GL_DEPTH_TEST);   // no depth tests needed for 2D drawing
gl.glDisable(GL10.GL_DITHER);       // dithering costs fill rate, especially in software
gl.glDisable(GL10.GL_LIGHTING);     // fixed-function lighting off

Not sure if this is important or not, but the game does not run properly using OpenGL in the Android emulator. When running in the emulator it only gets about 1 fps, it draws the vertices all over the place, and the screen goes completely white after a few seconds.

The emulator has other bugs too, like wrong lighting.

If the Android phones are comparable to the iPhone in terms of horsepower, you should be able to get that count easily if you are smart about when you’re binding textures (see the sketch below), and maybe use PVRTC. Our game can run at 30 fps with a lot of pathfinding and stuff going on, as well as 8,000 vertices for 3D models in the game. Each batch of roughly 400 vertices is a single draw call, and I’ve also been able to fit hundreds of particles (just TRIANGLE_STRIPs with a texture) on screen, though that was with a single shared texture. And I haven’t spent nearly as much time optimizing as I could have.

So, long story short, I think that’s totally doable.
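What “being smart about binding textures” usually means in practice is grouping everything that shares a texture into one buffer and issuing a single draw call per texture. A rough sketch of such a batch (class, sizes and layout are made up for illustration, not taken from our game or any other mentioned here):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

// Illustrative sprite batch: all quads that share one texture go into a single
// vertex/texcoord buffer and are drawn with one glDrawArrays call.
public class SpriteBatch {
    private static final int MAX_SPRITES = 500;
    private final FloatBuffer vertices;   // 2 triangles * 3 verts * 2 floats per sprite
    private final FloatBuffer texCoords;
    private int spriteCount = 0;

    public SpriteBatch() {
        vertices = newFloatBuffer(MAX_SPRITES * 6 * 2);
        texCoords = newFloatBuffer(MAX_SPRITES * 6 * 2);
    }

    // No overflow check here; a real batch would guard against MAX_SPRITES.
    public void add(float x, float y, float w, float h) {
        // Two triangles covering the quad, with the full texture mapped onto it.
        float[] v = { x, y,  x, y + h,  x + w, y,   x + w, y,  x, y + h,  x + w, y + h };
        float[] t = { 0, 0,  0, 1,      1, 0,       1, 0,      0, 1,      1, 1 };
        vertices.put(v);
        texCoords.put(t);
        spriteCount++;
    }

    public void draw(GL10 gl, int textureId) {
        vertices.position(0);
        texCoords.position(0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);       // one bind per texture...
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);
        gl.glDrawArrays(GL10.GL_TRIANGLES, 0, spriteCount * 6); // ...and one draw call
        spriteCount = 0;
        vertices.clear();
        texCoords.clear();
    }

    private static FloatBuffer newFloatBuffer(int floats) {
        ByteBuffer bb = ByteBuffer.allocateDirect(floats * 4);
        bb.order(ByteOrder.nativeOrder());
        return bb.asFloatBuffer();
    }
}

The point is simply that the texture bind and the draw happen once per texture per frame instead of once per sprite.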

OK, I’ve written a little test case based on the cube sources from the SDK. You can find the modified sources here: http://www.jpct.net/download/misc/android_test.zip. To run it, you’ll have to replace the texture with one of your own (line 155 in CubeRenderer). It renders two textured, statically lit cubes. Run it with the phone held upright, so that both cubes fill the whole screen. On my phone, it “runs” at 10 fps that way.
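For the curious: the texture loading I added is nothing special, just the usual Bitmap-plus-GLUtils approach. Roughly like this (a sketch only; the class name and resource id are placeholders, this is not literally the code from the zip):

import java.io.InputStream;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLUtils;

// Illustrative texture upload for Android 1.5; resource id and filtering are arbitrary.
public final class TextureLoader {
    public static int loadTexture(GL10 gl, Context context, int resourceId) {
        int[] ids = new int[1];
        gl.glGenTextures(1, ids, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, ids[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

        InputStream is = context.getResources().openRawResource(resourceId);
        Bitmap bitmap = BitmapFactory.decodeStream(is);
        // GLUtils picks a matching internal format and uploads the bitmap.
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        return ids[0];
    }
}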

60 fps in logcat here.

HTC Hero (TMobile G2 Touch)
Android 1.5

Kev

Should add that the textures look odd, though. The colours don’t look right and the texture mapping seems wrong?

Kev

For interest, it also crashed (and hung the phone) once I tried to rotate the screen back.

Kev

The colors are taken directly from the cube example, and the texture coordinates are more or less random. I had that crash too once, but I don’t really care; it’s simply a modified cube example…all I did was add the texture loading and rendering stuff.
Is the 60 fps with a texture loaded and the phone held upright?

Re: egl.eglTerminate(dpy);

My phone is still on Android 1.1, and I wanted to make my game 1.1-compatible, so I had to do all the eglCreateContext, eglCreateWindowSurface, eglMakeCurrent, … and at the end eglDestroySurface, eglDestroyContext, and finally eglTerminate. By the look of it, this is all handled for you now in Android 1.5.
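For reference, the whole dance looks roughly like this (an outline only; error checks are left out and the config attributes are just an example, not what my game actually requests):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.egl.EGLSurface;
import android.view.SurfaceHolder;

// Illustrative outline of the manual EGL lifecycle you needed before 1.5 did it for you.
public class EglHelper {
    private final EGL10 egl = (EGL10) EGLContext.getEGL();
    private EGLDisplay dpy;
    private EGLContext context;
    private EGLSurface surface;

    public void start(SurfaceHolder holder) {
        dpy = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        egl.eglInitialize(dpy, new int[2]);                  // out-param: major/minor version

        int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        egl.eglChooseConfig(dpy, configSpec, configs, 1, new int[1]);

        context = egl.eglCreateContext(dpy, configs[0], EGL10.EGL_NO_CONTEXT, null);
        surface = egl.eglCreateWindowSurface(dpy, configs[0], holder, null);
        egl.eglMakeCurrent(dpy, surface, surface, context);  // GL calls are legal from here on
    }

    public void finish() {
        egl.eglMakeCurrent(dpy, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);
        egl.eglDestroySurface(dpy, surface);
        egl.eglDestroyContext(dpy, context);
        egl.eglTerminate(dpy);
    }
}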

I see. I’ve removed all that 1.1 related stuff from my code. The way 1.5 does it is waaaay easier to use.

Very true. One advantage of the old way is that I can see what is taking all the time. And I can tell you, in the Android framework, when it wants to push the update to the OpenGL chip, it calls:

egl.eglSwapBuffers(dpy, surface);

And this, in my game anyway, takes a minimum of 6ms, but it is not uncommon for it to take 15ms!
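If you are doing the EGL calls yourself anyway, it is easy to measure this: just wrap the swap in a timer. A tiny sketch (assuming egl, dpy and surface are the objects from the manual setup above; the log tag is arbitrary):

long start = System.nanoTime();
egl.eglSwapBuffers(dpy, surface);                             // push the finished frame
long elapsedMs = (System.nanoTime() - start) / 1000000;
android.util.Log.d("SwapTimer", "eglSwapBuffers took " + elapsedMs + " ms");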

If you haven’t seen it already: a Google Android game developer gave a presentation about writing games for Android at the last Google I/O. I found his talk very helpful: http://www.youtube.com/watch?v=U4Bk5rmIpic

OK, but if, as in my example, rendering one frame alone takes 100ms, it doesn’t really matter. I still don’t get it…if the test case runs at 60fps on kev’s HTC Hero, it obviously uses the GPU on that device. But the Hero uses (according to HTC’s homepage) the exact same chipset as the Samsung Galaxy does (Qualcomm® MSM7200A™, 528 MHz). The Samsung also does hardware 3D when running Omnigsoft’s games or the Neocore demo…it just doesn’t seem to be able to use it in combination with the Dalvik VM. Why is that, and who has to do what to change it? Can I do something about it? Or Samsung? Or Google? I got this phone solely to do 3D on it, and now I’ve apparently got the only phone that can’t do it correctly…this is so annoying.