[quote]A big problem is testing results.
You could capture and journal a set of inputs and play them back from a file easily enough, but the only way to test for correctness of output is to screen scrape.
[/quote]
Whilst I see what you're thinking, it's not really true. You're looking in the wrong direction, imagining that the user wants to unit-test the rendering library itself. Whilst that would be nice, it's a huge amount of effort that doesn't really do you any good - it's not sensible unit-testing, it's too brute-force.
With a scenegraph, a lot of what you want to check is actually "are the correct things in the correct places when I try to render?". Assuming there are no bugs in the renderer, and none in the SG (yeah, right, but ... ), then you can write unit tests that examine the current state, see what's actually being rendered, and compare that to what you know should be rendered. That's one of the main points of having a SG - abstraction ;D.
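E.g. something like this - just a minimal sketch (the class and method names are made up, and it assumes the standard J3D API with TransformGroup.ALLOW_TRANSFORM_READ set before the graph went live):

[code]
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Vector3d;

public class SceneGraphAsserts {
    // Reads back where a node actually is in the SG and compares it to
    // where the game logic says it should be.
    static boolean isAtExpectedPosition(TransformGroup tg,
                                        Vector3d expected, double eps) {
        Transform3D t3d = new Transform3D();
        tg.getTransform(t3d);      // current transform of the node
        Vector3d actual = new Vector3d();
        t3d.get(actual);           // extract the translation component
        return actual.epsilonEquals(expected, eps);
    }
}
[/code]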
So, AFAICS, it boils down to "how much runtime reflection-esque / query access do you have to J3D, to:
- the tree in memory
- the PVS
- the renderer FSM?"
With sufficient access to each, you can find out whether your SG is correct, find out whether it's rendering what it should (and check that your solid objects aren't transparent!), and finally check that all the things that should be optimized out or switched off, and the behaviours in certain states, are as you would expect.
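For the "tree in memory" part, a sketch of what I mean (again, hypothetical helper names; assumes ALLOW_CHILDREN_READ on the groups and the appearance/transparency read capabilities on the shapes):

[code]
import java.util.Enumeration;
import javax.media.j3d.Appearance;
import javax.media.j3d.Group;
import javax.media.j3d.Node;
import javax.media.j3d.Shape3D;
import javax.media.j3d.TransparencyAttributes;

public class RenderStateChecks {
    // Recursively walks the scene graph; returns false if any Shape3D
    // would render transparent when it shouldn't.
    static boolean allShapesOpaque(Node node) {
        if (node instanceof Shape3D) {
            return isOpaque((Shape3D) node);
        }
        if (node instanceof Group) {
            for (Enumeration e = ((Group) node).getAllChildren(); e.hasMoreElements();) {
                if (!allShapesOpaque((Node) e.nextElement())) return false;
            }
        }
        return true;
    }

    static boolean isOpaque(Shape3D shape) {
        Appearance app = shape.getAppearance();
        if (app == null) return true;      // default appearance is opaque
        TransparencyAttributes ta = app.getTransparencyAttributes();
        if (ta == null) return true;       // no transparency set
        return ta.getTransparencyMode() == TransparencyAttributes.NONE;
    }
}
[/code]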
All of which, IIRC, you can in fact do from within J3D?
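The one catch, if memory serves, is that J3D makes you declare up front what you want to query: set the capability bits before the branch graph is compiled or attached to a live Locale, or the queries above throw CapabilityNotSetException. Roughly:

[code]
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Group;
import javax.media.j3d.Shape3D;
import javax.media.j3d.TransformGroup;

public class TestableSceneSetup {
    // Open up the read capabilities a test harness needs, *before*
    // compile() / addBranchGraph().
    static void makeQueryable(BranchGroup root, TransformGroup tg, Shape3D shape) {
        root.setCapability(Group.ALLOW_CHILDREN_READ);          // tree walking
        tg.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);  // transform reads
        shape.setCapability(Shape3D.ALLOW_APPEARANCE_READ);     // appearance reads
    }
}
[/code]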