OpenGL Servlet?

I want to build a Java Servlet to render 3D still images using OpenGL and dish them up to web-browser clients.
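
Roughly the shape I have in mind, just as a sketch; render() below is a placeholder for whatever renderer (OpenGL or software) ends up behind it:

[code]
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RenderServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Placeholder: this is where the 3D renderer would produce the still image.
        BufferedImage frame = render(req.getParameter("scene"));

        resp.setContentType("image/png");
        ImageIO.write(frame, "png", resp.getOutputStream());
    }

    private BufferedImage render(String scene) {
        // Stub so the sketch compiles; the real version would drive OpenGL.
        return new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
    }
}
[/code]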

It seemed desirable to use hardware acceleration, but I have heard that rendering a single frame and transferring it from the card into main memory would likely be slower than software rendering. I was surprised by this and would have expected a high-end card (maybe with PCI-Express for faster I/O) to easily outperform software rendering.

A few Q’s:

  1. Any opinions on hardware vs. software rendering of single frames?

  2. Can LWJGL do off-screen accelerated rendering (with no on-screen windows at all)?

  3. Can LWJGL access a second adapter?

TIA :slight_smile:

David

The speed of the rendering is just noise next to the grand performance bottleneck you’ve got, which is the servlet. Given that most GL scenes render in under 20ms and most servlets take many seconds to stream an image back to a consumer, you’re really not going to have a problem with the 5ms transfer time from a pbuffer back to system RAM!

To answer your questions:

  1. Probably doesn’t matter.

  2. It can do pbuffers and render-to-texture. You don’t need a display (see the sketch below).

  3. Yes, it can, using the GLContext class, but we don’t provide a context for anything other than the default adapter, as we’re game-oriented rather than server-oriented. You’ll have to get your own context; a tiny bit of native code would do the trick for you.
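
For (2), the pbuffer route looks roughly like this. A sketch from memory, so check the Pbuffer constructor and PixelFormat arguments against the javadocs for whatever LWJGL version you’re on:

[code]
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.Pbuffer;
import org.lwjgl.opengl.PixelFormat;

public class OffscreenRender {
    public static void main(String[] args) throws LWJGLException {
        // Create an off-screen pbuffer; no Display/window is ever opened.
        Pbuffer pbuffer = new Pbuffer(512, 512, new PixelFormat(), null);
        pbuffer.makeCurrent();

        // Normal GL calls go here.
        GL11.glClearColor(0f, 0f, 0f, 1f);
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        // Read the finished frame back into main memory.
        ByteBuffer pixels = BufferUtils.createByteBuffer(512 * 512 * 3);
        GL11.glReadPixels(0, 0, 512, 512, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixels);

        pbuffer.destroy();
    }
}
[/code]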

Cas :slight_smile:

Thanks Cas. Most illuminating :slight_smile:

I agree web clients see the slow stream as a bottleneck, but servlets are multi-threaded and see it as an opportunity to serve hundreds of users simultaneously. So I reckon my concern about rendering performance is not entirely unjustified.

Cheers
David

Multiply by 1000 users in 1 second and you’ve got 20 seconds of rendering and 2.8 hours of streaming :slight_smile:

Cas :slight_smile:

On a current graphics adapter, rendering in hardware and reading back the results is still faster than doing software rendering… at least if your rendered scene isn’t very simple.
In a test application using jPCT, I got ~30fps in software, ~45fps in hardware with reading back into a BufferedImage (includes blitting the BI to screen) and ~300fps with hardware only, on a P4 HT @ 3.2GHz / Radeon 9700 Pro setup. On a GF2MX under Win NT, software rendering was much faster (8fps compared to 0.5fps).

[quote]Multiply by 1000 users in 1 second and you’ve got 20 seconds of rendering and 2.8 hours of streaming :slight_smile:
[/quote]
Hey Cas, someone’s stolen your server’s Fast Ethernet card and stuck in a dial-up modem! ;D

Assuming a decent server could max out a 100Mbit/s Fast Ethernet card, that’s about 10MBytes/sec of usable bandwidth (Ethernet saturates at about 80% utilisation). If each image is 50KBytes, then ignoring packet overhead and ACKs that’s 10MB/50KB = 205 images/sec (1 every 4.9ms). So bandwidth (streaming) isn’t the bottleneck.

Or using your analogy: 205 images * 20ms = 4.1 secs of rendering and 1 sec of streaming.

Cheers
David

[quote]On a current graphics adapter, rendering in hardware and reading back the results is still faster than doing software rendering… at least if your rendered scene isn’t very simple.
In a test application using jPCT, I got ~30fps in software, ~45fps in hardware with reading back into a BufferedImage (includes blitting the BI to screen) and ~300fps with hardware only, on a P4 HT @ 3.2GHz / Radeon 9700 Pro setup. On a GF2MX under Win NT, software rendering was much faster (8fps compared to 0.5fps).
[/quote]
Fascinating. Thanks Egon :slight_smile:

Just to confirm, on the GF2MX (NT) are you saying software rendering was 16x faster than hardware? Is this an NT “feature”?

Cheers
David

[quote]Just to confirm, on the GF2MX (NT) are you saying software rendering was 16x faster than hardware? Is this an NT “feature”?
[/quote]
Well, hardware rendering alone was faster than software (by a factor of ~5) even under NT4 with its limited AGP and chipset support, but grabbing the rendered output from the card was slooooow (the 0.5fps I mentioned).
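
For reference, the grabbing step boils down to something like this in raw GL terms (not jPCT’s actual API, just a sketch of the underlying readback):

[code]
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

public class GrabFrame {
    // Copies the current GL colour buffer into a BufferedImage.
    // The glReadPixels transfer from card to system RAM is the slow part
    // on old AGP setups like the GF2MX/NT4 machine above.
    public static BufferedImage grab(int width, int height) {
        ByteBuffer buf = BufferUtils.createByteBuffer(width * height * 3);
        GL11.glReadPixels(0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, buf);

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = (y * width + x) * 3;
                int r = buf.get(i) & 0xFF;
                int g = buf.get(i + 1) & 0xFF;
                int b = buf.get(i + 2) & 0xFF;
                // GL rows start at the bottom, BufferedImage rows at the top.
                img.setRGB(x, height - 1 - y, (r << 16) | (g << 8) | b);
            }
        }
        return img;
    }
}
[/code]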