JPCT vs JME vs Xith3D vs ... benchmark

thanks I feel better now :slight_smile:

EDIT:
just in case someone cares to know, the hardware layer of 3DzzD uses JOGL

These tests make little sense. You are measuring the rendering speed of a single 1-million-triangle model, and when using lockMeshes() you are actually measuring the rendering speed of a single OpenGL Display List. Code written and optimized to render specifically that model would beat all the engines, though it would not make a big difference because of the Display List rendering. Choose a test with lots of small models and specific features (physics, particle systems, animations, shadows, etc.), a test where the Java code of the engine actually matters, not such a minimalistic test. You need a test where different subsystems of the engine are active; then you can see how well those subsystems work together, and whether they slow down when used together compared to other engines.

Set a minimal fps requirement (for example 100 fps), then start adding more objects and features, and see what you can add, so that you don’t go below the fps requirement. That’s a more realistic test.

You have a point about this particular test only comparing one part of all the engines (i.e. rendering a single model). However, I believe this test was done correctly - it compared performance of a particular test case for each engine, which was the point.

I do not think that throwing together a ton of various features of the engines to see how much you can do within some minimum fps would be a useful way to compare the engines at all. To be scientific and unbiased, you must compare apples to apples in whatever test you run. If you want to see how the engines compare in other situations besides rendering a single model, then you must set up individual minimalistic unbiased tests for each thing you want to compare. For example, lots of small models as you suggested might be one such test.

In each different type of test the ranking may very well be different (in fact, I would expect it to be), but that doesn’t mean the other tests or rankings are invalid. It would just mean that different engines are better at different things (which is probably the point you were trying to make in your post).

I am very interested in how many unneeded OpenGL calls are made, because when I fiddled with jME, there was a whole bunch of states being set on each render that weren’t needed.
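For illustration, here is a minimal, self-contained sketch of the kind of state caching an engine could use to drop redundant enable/disable calls before they reach the driver. The `GL` interface below is a stand-in for the real JOGL binding (so the example compiles on its own), and the capability constant is just the classic `GL_LIGHTING` value; none of this is code from any of the engines discussed here.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a state cache that filters out redundant enable/disable
 * calls. The GL interface is a stand-in for the real JOGL binding,
 * used only to keep the example self-contained.
 */
public class StateCache {
    /** Stand-in for the tiny subset of GL this demo needs. */
    public interface GL {
        void glEnable(int cap);
        void glDisable(int cap);
    }

    private final GL gl;
    private final Map<Integer, Boolean> known = new HashMap<>();

    public StateCache(GL gl) { this.gl = gl; }

    /** Forwards the call only if it would actually change driver state. */
    public void setEnabled(int cap, boolean enabled) {
        Boolean current = known.get(cap);
        if (current != null && current == enabled) {
            return; // redundant call, skip it
        }
        if (enabled) gl.glEnable(cap); else gl.glDisable(cap);
        known.put(cap, enabled);
    }

    // Tiny demo: count how many calls actually reach the "driver".
    public static void main(String[] args) {
        final int[] calls = {0};
        StateCache cache = new StateCache(new GL() {
            public void glEnable(int cap) { calls[0]++; }
            public void glDisable(int cap) { calls[0]++; }
        });
        final int GL_LIGHTING = 0x0B50;
        for (int frame = 0; frame < 1000; frame++) {
            cache.setEnabled(GL_LIGHTING, true); // naive engine: 1000 calls
        }
        System.out.println(calls[0]); // only the first call gets through
    }
}
```

The demo sets the same state 1000 times but only one call reaches the driver; without the cache, all 1000 would.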

Go ahead! Nobody prevents you from doing this. Personally, I have no time to learn how to use 4 additional engines properly and write the tests. I’ve read a lot of talk about why this test is wrong, why I’m biased, that it can’t possibly be, etc… and I tried my very best to make this test as unbiased as possible and to make each engine perform as well as it can given my limited knowledge of them… I’ve not read anything from anybody except one person from jME about doing his/her own tests.
And BTW: the model has 1.1 million vertices, not triangles (the triangle count is around half a million IIRC).

Yes, that is one point that makes things faster: unnecessary OpenGL calls. Also, and I think this is the main difference, there is using quads rather than triangles: 3DzzD uses triangles to be homogeneous with the software renderer; using quads it would probably be faster.

Regarding the quads comment: that’s probably true, but if the model is cached (VBO/Display List) with proper indices used and such, I assume it would be just the same as with quads, as the video card needs to break up each quad into triangles before rendering anyway (this used to be the case with old video cards, don’t know if it still is).
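As a side note, the quad-to-triangle split the card is assumed to do internally can be sketched in a few lines of index math; a loader could do the same conversion up front and submit indexed triangles only. This is purely an illustration of the usual (a, b, c) / (a, c, d) split, not code from any of the engines discussed here.

```java
import java.util.Arrays;

/**
 * Sketch: convert quad indices (4 per face) to triangle indices
 * (6 per face). Each quad (a, b, c, d) becomes the two triangles
 * (a, b, c) and (a, c, d), the split drivers are assumed to apply.
 */
public class QuadsToTris {
    public static int[] convert(int[] quadIndices) {
        int quads = quadIndices.length / 4;
        int[] tris = new int[quads * 6];
        for (int i = 0; i < quads; i++) {
            int a = quadIndices[i * 4];
            int b = quadIndices[i * 4 + 1];
            int c = quadIndices[i * 4 + 2];
            int d = quadIndices[i * 4 + 3];
            int o = i * 6;
            tris[o] = a; tris[o + 1] = b; tris[o + 2] = c; // first triangle
            tris[o + 3] = a; tris[o + 4] = c; tris[o + 5] = d; // second triangle
        }
        return tris;
    }

    public static void main(String[] args) {
        // One quad over vertices 0..3 becomes two triangles sharing edge 0-2.
        System.out.println(Arrays.toString(convert(new int[]{0, 1, 2, 3})));
        // -> [0, 1, 2, 0, 2, 3]
    }
}
```

Since the vertex data itself is untouched (only the index buffer grows), a cached VBO or Display List should end up with the same vertices either way, which supports the point above.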

don’t know, a proper test is necessary. OpenGL display list compilation is not that good when it comes to state changes => they are not collapsed even if they are not required or are meaningless

Egon, let’s just be fair. I don’t know what you expected besides some rivalry and suggestions on why the tests may not be a proper representation when you kicked the thing off with a comment like this (emphasis is mine):

[quote]To come back to the beginning of this thread, i think it’s time now for a little pissing contest. I haven’t much experience with Xith3D or jME, so i relied on the tests that come with both engines and modified them (to be exact i simply changed the loaded models) to do some simple benchmarking on two machines.
[/quote]
Still, I think most people were pretty civil and clear headed.

As an aside, I might add that it’s a little easier to deflect criticism when you provide the exact test code and such so other people can try… Trust me, I’ve been through this before. It’s also easier when you are comparing engines that are open source as it is simpler to know that the same features are being used on both sides, etc. shrug

Anyway, as was said here before, nice work to all the engines really. The 3d Java world is so far from where it was a few years ago isn’t it? :slight_smile:

That’s a quote from the jPCT forum. I wouldn’t have used the same wording if I had posted it here. I have no problem with people questioning my tests; after all, it led to better tests in the end. But it’s cheap to say that a test is wrong if you don’t do anything but talk to prove (or improve) it. In fact, we have NO comprehensive test for all engines, and we never will. The tests and the test procedure are the best that I could come up with given limited time and resources. It may be primitive and may not reflect most real world scenarios, but it’s still the only test that has been run on all 5 engines so far.

I see no one else testing but Egon… So I say, if you’re against the tests, then write your own. Can’t expect Egon to do everything…
For now, jPCT is the winner of this test.

Something every library writer should know is that example code will always be used as base code and/or copy and pasted into a larger app. Example code will be taken as the canonical method, and anything non-standard or non-optimal (like extra debugging output) needs to be clearly flagged as such.

It’s entirely reasonable to use the example code from each of the libraries to do a simple model rendering test. Yes, in a way it has elements of being a micro benchmark, since it only tests a small set of functionality, but rendering a model is such core functionality that it needs to be as optimal as possible, since everything else builds on top of it. In theory, all the libraries should be boiling it down to just a handful of identical calls, and so we should see near-identical results. But since this obviously isn’t happening, it highlights deficiencies in the libraries themselves.

An interesting variation would be to see how many instances of the same model each library could render whilst still maintaining a certain framerate (say, 60fps), which would give the state handling code a bit more of a workout.
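A back-of-the-envelope sketch of that metric, assuming a fixed per-frame overhead and a linear per-instance cost: both numbers below are made up for the demo, and a real harness would of course measure them (e.g. by doubling the instance count until the fps floor is crossed) rather than assume them.

```java
/**
 * Sketch of the proposed benchmark metric: given a per-frame overhead and
 * a per-instance rendering cost (hypothetical numbers here), compute the
 * largest instance count that still keeps the frame within the fps floor.
 */
public class InstanceBudget {
    public static int maxInstances(double overheadMs, double costPerInstanceMs, double minFps) {
        double frameBudgetMs = 1000.0 / minFps; // e.g. 60 fps -> ~16.67 ms
        if (overheadMs >= frameBudgetMs) return 0; // overhead alone blows the budget
        return (int) Math.floor((frameBudgetMs - overheadMs) / costPerInstanceMs);
    }

    public static void main(String[] args) {
        // 2 ms fixed overhead, 0.05 ms per instance, 60 fps floor.
        System.out.println(maxInstances(2.0, 0.05, 60.0));
    }
}
```

Because every instance re-submits transforms and state, this variation stresses exactly the state-handling code that a single cached model never touches.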

Hello. I generally keep to the jME forums, but I’ve been following this discussion since it was linked there.
I don’t have a problem with the author of an engine posting a test which shows that it’s well, really good.
Nor do I have any interest in pissing contests. As a user such things are unlikely to be very relevant.

However I do think that some things have been taken out of context, because everyone is thinking in performance terms.
I will talk only about jME because that is what I know.

jMETest code is there to demonstrate features of the engine.
The object loading tests demonstrate loading objects. They are not performance tests.
I would say that if you are trying to do something highly unusual (displaying 1-million-triangle objects), then you might have to look into possible optimizations. If the loading tests were optimized for this, they would contain code not relevant, or perhaps not appropriate, to their purpose. For example, they would not work with animated meshes.

[quote]Example code will be taken as the canonical method
[/quote]
Which it is, as far as loading .obj files goes. It is quite possible to write great-looking jME games without delving into any performance optimization at all. But again, if you are trying to do a meaningful performance test, you need to do a little more. Let’s be honest, there are not many users around who are pushing the limits of these engines like this (I know I’m not).

It is reasonable to point out the flaws in a test if it is flawed. It is preferable to create a better one. But that takes work, and people may not have the time (or inclination to piss).

In my case I thought the jME number was very low. So I tried it myself and got a similar result. I added a line, and the result was good enough for me. Since rendering ultra-high-poly models is not what I’m interested in, there was no reason for me to go any further than that.

…and that was very helpful; I really appreciated your post in the jME forum about this. It helped the test become much better and fairer than it was before.

Thanks for the effort. Could you please try again with the current xith cooker release? I fixed the broken link. http://xith.org/downloads/builds/0.9.7/xith3d-0.9.7-dev-B1618.zip Thanks for pointing that out.

Ok, I did that… system and all stays the same. I’ve run the car and the tank test.

Here’s the result of the tank test:

jPCT: 3500fps
3DzzD: 2100fps
JME: 2000fps
xith3D: 1950fps(*)

and here’s the result of the car:

jPCT: 580fps
JME: 525-530fps
xith3D: 510fps(*)
Java3D: 120fps

(*) updated