Performance of my own plain-SDK 3D engine

Has anybody else written a 3D engine using just the plain old Java SDK? I’m approaching completion on a simple but generally-designed one and am curious how its performance compares.

I haven’t done thorough benchmarks yet but currently estimate mine to render around 100,000 monochrome polygons per second.

100,000 fullscreen (1024x768) polygons, alpha-colored and Z-buffered? Cool… ;D

If by alpha-colored you mean blended, it doesn’t do that, or any other blending/shading/lighting; all polygons are simply rasterized with the color specified by the model. I may add lighting later on, but I’m not going to spend time on more advanced stuff at this stage.

It does do proper depth sorting though (instead of Z-buffering).
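
For the curious, the depth sort amounts to a painter’s-algorithm pass. A minimal sketch of that idea (Tri, averageDepth() and rasterize() here are placeholder names, not the engine’s actual API):

```java
import java.util.Arrays;

// Painter's-algorithm sketch: sort triangles back-to-front by average depth,
// then rasterize them in that order so nearer tris overwrite farther ones.
// Tri, averageDepth() and rasterize() are placeholder names, not the engine's API.
final class Tri {
    final float z0, z1, z2;              // camera-space depths of the three vertices
    Tri(float z0, float z1, float z2) { this.z0 = z0; this.z1 = z1; this.z2 = z2; }
    float averageDepth() { return (z0 + z1 + z2) / 3f; }
    void rasterize() { /* flat-colour fill of the projected triangle */ }
}

final class PainterSort {
    static void drawSorted(Tri[] tris) {
        // Descending depth: farthest triangles are drawn first.
        Arrays.sort(tris, (a, b) -> Float.compare(b.averageDepth(), a.averageDepth()));
        for (Tri t : tris) {
            t.rasterize();
        }
    }
}
```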

Forgot to mention - the hardware for this number is an 800MHz laptop with no 3D acceleration.

That benchmark number is meaningless and not comparable without at least knowing the framebuffer’s resolution and the polygons’ size.

I’ve found that the number of polygons drives this figure more than the number of pixels drawn. Anyway, the resolution is 800x600 with 32-bit pixels. The polygon edge lengths are mostly between 2 and 10 pixels.

The main bottleneck of software engines has always been fill rate.
You cannot benchmark fill rate with polys only 2-10 pixels across.

Draw polygons of size 800x600 and see what happens!
For a pixel-setting software engine, performance should normally drop by a factor of around 10,000.

Unless your triangle setup is extremely bad… :slight_smile:
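
To make the suggestion concrete, a crude fill-rate check can simply time raw pixel writes into an 800x600 int[] backbuffer, with no triangle setup at all - a minimal sketch of the idea, not anyone’s actual benchmark code:

```java
// Crude fill-rate check: time raw 32-bit pixel writes into an 800x600 int[]
// backbuffer, with no triangle setup at all, to isolate the per-pixel cost.
public class FillRateTest {
    public static void main(String[] args) {
        final int width = 800, height = 600, frames = 200;
        final int[] backbuffer = new int[width * height];
        final int colour = 0x00FF8040;

        long start = System.currentTimeMillis();
        for (int f = 0; f < frames; f++) {
            // One "fullscreen polygon": touch every pixel once.
            for (int i = 0; i < backbuffer.length; i++) {
                backbuffer[i] = colour;
            }
        }
        long elapsed = System.currentTimeMillis() - start;

        double pixelsPerSecond = (double) width * height * frames * 1000.0 / elapsed;
        // Read a pixel back so the JIT cannot discard the writes entirely.
        System.out.println("Filled " + frames + " frames in " + elapsed
                + " ms (~" + (long) pixelsPerSecond + " pixels/second), sample pixel: "
                + Integer.toHexString(backbuffer[0]));
    }
}
```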

I see what you’re getting at - but in that argument it’s the frame rate that matters: if a polygon fills the frame, you only need to draw 50-something of them per second…

In my case - modeling terrain contours from arbitrary view perspectives - the relevant views contain hundreds to tens of thousands of polygons. Rasterization is around 30%-50% of the processing per frame, and the number of polygons drives the rest (projection, culling and depth sorting). Hence my measuring in number of polys.
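
A rough way to get that kind of per-stage breakdown is just to wrap each phase in a timer. The sketch below uses empty stubs in place of whatever the engine actually does - only the measurement pattern is the point:

```java
// Crude per-frame stage timing, to get the kind of 30%-50% rasterization split
// described above. The stage methods are empty stubs standing in for whatever
// the real engine does.
public class FrameTimer {
    static void project()   { /* world -> screen-space transform */ }
    static void cull()      { /* drop back-facing / off-screen polys */ }
    static void depthSort() { /* painter's-algorithm sort */ }
    static void rasterize() { /* flat-colour triangle fill */ }

    public static void main(String[] args) {
        long t0 = System.currentTimeMillis(); project();
        long t1 = System.currentTimeMillis(); cull();
        long t2 = System.currentTimeMillis(); depthSort();
        long t3 = System.currentTimeMillis(); rasterize();
        long t4 = System.currentTimeMillis();

        System.out.println("project " + (t1 - t0) + "ms, cull " + (t2 - t1)
                + "ms, sort " + (t3 - t2) + "ms, rasterize " + (t4 - t3) + "ms");
    }
}
```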

Your numbers sound OK for performance with small tris.
I just did a test with my own Java renderer: using small tris (around 10 pixels edge length) on a 640x480 screen, I get 6,600 flat-coloured tris per frame at 18.5 fps, giving about 122,000 tris per second (on an Athlon 1.8GHz).

[quote]I see what you’re getting at - but in that argument it’s the frame rate that matters: if a polygon fills the frame, you only need to draw 50-something of them per second…
[/quote]
That depends on the scene overdraw. It’s usually quite rare to know beforehand that the closest poly covers the scene.

The point about fill rate is two-fold:

  1. An object has 100-1000 tris. A tri has 100-1000 pixels.
    So the pipeline scales by 2-3 orders of magnitude at each step from object->poly->pixel.
    The per-pixel code can therefore run 100-1,000 times more often than the poly setup, and up to 1,000,000 times more often than the per-object code, so it dominates the time taken. This is also important because:

  2. The slowest part of a software renderer is the system bus. For example, a 133MHz 32-bit bus can shift a peak of 133,000,000*4 ≈ 530MB per second - call it 500MB/s.
    An 800x600 32-bit frame is about 1.8MB, but it needs to be read by the CPU, written back, and then sent to the hardware - 3 trips over the bus, so 5.4MB per frame. That alone costs about 11ms on this theoretical bus. If you have to clear the screen each frame too, that’s another 7ms (2 more bus trips), and this is without any other CPU processes running!
    This bus limit is the one you will tend to hit first with a software renderer, and it will probably account for the bulk of the time spent while rendering.
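
For reference, the arithmetic behind those figures written out as a tiny throwaway program - the bus speed and trip counts are simply the ones assumed above, not measured values:

```java
// Back-of-the-envelope version of the bus-bandwidth argument above, using the
// bus speed and trip counts assumed in the post (not measured values).
public class BusBudget {
    public static void main(String[] args) {
        double busBytesPerSec = 133e6 * 4;      // 133MHz x 32-bit, ~530MB/s peak
        double frameBytes     = 800 * 600 * 4;  // one 800x600, 32-bit frame, ~1.8MB

        double renderTrips = 3;                 // CPU read + CPU write + blit to hardware
        double clearTrips  = 2;                 // two more passes for the per-frame clear

        double renderMs = frameBytes * renderTrips / busBytesPerSec * 1000.0;
        double clearMs  = frameBytes * clearTrips  / busBytesPerSec * 1000.0;

        // Comes out near the ~11ms and ~7ms quoted above.
        System.out.printf("frame traffic: %.1f ms, clear: %.1f ms, total: %.1f ms per frame%n",
                renderMs, clearMs, renderMs + clearMs);
    }
}
```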

  • Dom