Orthographic View Screenshot?

Hi, I have a curious problem that I’m not sure how to tackle…this may simply be an OpenGL question, but it may also be a JOGL question…not sure.

I have a 3D scene in which there is a vehicle…and this vehicle is moving forward with wind hitting it. Well, I need to figure out the cross-sectional area of the vehicle that is in the wind. Basically the main way I think of doing this is to take a screenshot from the wind’s point of view looking back at the car and then calculate the number of pixels of the car that are in the screenshot - basically subtract the number of background pixels to get the resulting cross-sectional area of the vehicle.

Currently my scene uses a perspective view, but I know that I would need an orthographic projection for this.

So, my first question is how do I convert gluPerspective() to glOrtho()…I tried putting the screen width and height into the first 4 parameters of glOrtho() and the same near and far clipping plane distances into the last 2, but I did not get the same view as I did with gluPerspective(). I figure this might be an easy fix and I need to do some research here, so don’t trouble yourself too much with this question.

The main question is how do I get the “screenshot” and do all of the switching from the perspective view to the orthographic view without changing the actual view for the user…basically I would need an instantaneous screen grab without making the view flicker or anything bad. Is there a way to get the screen data into some buffer so that I’m not actually having to store an image file or any such thing?

Geez, I hope these questions were reasonable…this is my first stab at solving the problem, so I apologize if I came in too early before having fleshed out all the issues.

To get a screenshot, you could use glReadPixels and evaluate each pixel yourself. This isn’t the best approach, though. Another, probably easier method (sort of) is to use OpenGL’s feedback system. If set up properly you could use it to count the number of pixels rendered (feedback freezes the draw buffer and doesn’t actually render anything).

This approach has been used to compute sky coverage for HDR exposure effects, but it could be adapted to your problem if you rendered the vehicle with back-facing polygons too, computed the % coverage of the vehicle in the screen area, and then converted that into a 2D area in world space based on the size of your ortho frustum. This should be pretty easy to do, since you can specify the frustum with camera-space frustum plane dimensions.

HTH

OpenGL feedback mode is VERY slow (at least, every time I tried).

Using the occlusion query extensions would do the trick, and they are fast (and non-blocking in the latest NV extensions).

Seems a bit long-winded! ;D
Can’t you take the surface normals of the polys that make up the vehicle and dot product them with the wind direction? If the resulting angle is greater than -90° and less than 90° then the poly is affected by the wind, and you can use the area of the poly and the normal angle to calculate the force (0° = full force, 89° = minimal force). Would this work? It must be quicker and easier than taking screenshots and counting pixels.
Mind you, this wouldn’t work if there were obstacles between the wind and the vehicle…
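For what it’s worth, the normal-based estimate suggested above could be sketched like this (all names here are hypothetical, and as noted it ignores occlusion between faces):

```java
// Each triangle facing the wind contributes area * cos(angle between
// normal and wind), which is half the dot product of the unnormalized
// face normal with the wind direction.
class WindArea {
    // a, b, c are triangle vertices {x, y, z}; wind is a unit vector
    // pointing from the vehicle toward the wind source, so faces hit by
    // the wind have a positive dot product.
    static double projectedTriangleArea(double[] a, double[] b, double[] c,
                                        double[] wind) {
        // Edge vectors of the triangle
        double ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
        double vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2];
        // Cross product: direction = face normal, length = 2 * triangle area
        double nx = uy * vz - uz * vy;
        double ny = uz * vx - ux * vz;
        double nz = ux * vy - uy * vx;
        double d = nx * wind[0] + ny * wind[1] + nz * wind[2];
        // Back-facing triangles (d <= 0) contribute nothing
        return d > 0.0 ? 0.5 * d : 0.0;
    }
}
```

Summing this over all triangles gives the (occlusion-free) projected area; a unit right triangle facing straight into the wind contributes 0.5.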

Haven’t ever used feedback, it’s unfortunate that it’s slow :frowning:

You could manually project the vertices onto the screen plane (or wind “plane”); from there you could, if an approximation is acceptable, get bounding boxes for the 2D regions, find their intersection, and easily compute that area. IMO this seems decent enough for a game.
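A minimal sketch of that manual projection, with hypothetical names: drop every vertex onto the plane perpendicular to the wind using two orthonormal basis vectors of that plane, then use the 2D bounding-box area as a cheap over-estimate of the cross section.

```java
class ProjectedBounds {
    // vertices is a flat array x0,y0,z0,x1,y1,z1,...; right and up are
    // orthonormal vectors spanning the plane perpendicular to the wind.
    static double boundingBoxArea(double[] vertices, double[] right, double[] up) {
        double minU = Double.POSITIVE_INFINITY, maxU = Double.NEGATIVE_INFINITY;
        double minV = Double.POSITIVE_INFINITY, maxV = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < vertices.length; i += 3) {
            // 2D coordinates of this vertex in the wind plane
            double u = vertices[i] * right[0] + vertices[i + 1] * right[1]
                     + vertices[i + 2] * right[2];
            double v = vertices[i] * up[0] + vertices[i + 1] * up[1]
                     + vertices[i + 2] * up[2];
            minU = Math.min(minU, u);
            maxU = Math.max(maxU, u);
            minV = Math.min(minV, v);
            maxV = Math.max(maxV, v);
        }
        return (maxU - minU) * (maxV - minV);
    }
}
```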

thanks so far for the replies…I have a lot of investigating to do.

One thing that I’m not worried about is the speed of the calculations, because I’m not doing this as part of the game and I don’t need the updates every second, per se. One thing that I do need is a way to get this information without affecting my display/view…basically a background process that is essentially separate from OpenGL, which is why I wasn’t sure whether this would be a JOGL issue or a Java issue.

Using the surface normals would be an easy calculation, but the problem is that it could be completely inaccurate unless I do some sort of depth testing and occlusion testing, because you could have multiple faces pointing into the wind with some of them behind others, depending on the geometry of the model.

lhkbob, basically yes, this is what I need…I need the screen area of the projected faces (using an orthographic projection), which is essentially the same idea as counting pixels in a screenshot of the view. Btw, what are “HDR exposure effects”?…Sounds like I have to look at this feedback topic primarily.

Part of the hard thing with this issue is describing it and knowing the proper terms to use when searching for the topic. thanks.

No need for a separate OpenGL process.

Just render it for your special purposes, use the occlusion query (just google it), then clear the depth buffer, change the modelview/projection matrices, and render your scene as you normally would. The user won’t see that you rendered something else in between, just as the user doesn’t see you building the scene triangle by triangle.

Also, if I’m not wrong, you should draw the object twice to prevent counting more pixels than actually end up in the result image. Draw it the first time to initialize the z-buffer, and then again for the occlusion query. Remember to use the right depth function, like GL_LEQUAL. It’s also a good idea to mask off color output with glColorMask, as you don’t need any color output.
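The two-pass scheme described above could look roughly like this in JOGL (a sketch against the GL 1.5 occlusion query entry points; `gl` and `drawVehicle` are assumed to come from your own display code):

```java
// Pass 1: depth-only pre-pass fills the z-buffer; no color writes needed.
int[] query = new int[1];
gl.glGenQueries(1, query, 0);
gl.glColorMask(false, false, false, false);
gl.glDepthFunc(GL.GL_LEQUAL);
drawVehicle(gl);                      // hypothetical draw call

// Pass 2: same geometry inside the query; GL counts the samples that
// pass the depth test, i.e. the visible vehicle pixels.
gl.glBeginQuery(GL.GL_SAMPLES_PASSED, query[0]);
drawVehicle(gl);
gl.glEndQuery(GL.GL_SAMPLES_PASSED);

int[] visiblePixels = new int[1];
// Note: this read blocks until the GPU has the result ready.
gl.glGetQueryObjectiv(query[0], GL.GL_QUERY_RESULT, visiblePixels, 0);
gl.glColorMask(true, true, true, true);
gl.glDeleteQueries(1, query, 0);
```

`visiblePixels[0]` is then the pixel count to convert into world-space area via the ortho frustum size.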

I don’t know where I read it, but it basically involved rendering the skybox and counting the number of visible sky pixels (after being occluded by buildings and such). This could then be used to figure out how much exposure the “eye” was getting to bright light and to temporarily brighten the entire screen to simulate overexposure. The bloom/HDR effects in the Source engine do something visually similar, but I don’t know their technical details.

Would I use the gluProject() function to project the 3D view to 2D coordinates and then get the buffer of that projection, which I could then search through in the background? Does that make sense? I’m currently searching for info on this topic, so bear with me if I’m not asking the proper questions.

One other thing…Occlusion Query seems like a hardware-based solution to me - more specifically a video card solution, and I would love to be video-card independent. I would love to be able to use the most general/basic OpenGL calls to get the buffer of screen pixel data…am I asking too much of OpenGL? Still searching, but that question just came to mind. Thanks.

I think I figured out something and I’d appreciate if you could poke at it to see if it will work right…

Basically, in my display() method, whenever I want to calculate the area of the vehicle in the view, I switch from a perspective projection to an orthographic projection, display the vehicle in the attitude I want, read the pixels and count the background and foreground ones, and then clear the display bits, reset the view back to perspective, and draw the normal scene.

So basically I have this:


display() {
    SetOrthoView()
    clearColorAndDepthBuffers()
    DISPLAYVEHICLE()
    glReadPixels() // Calculate the area from the pixel array

    SetPerspectiveView()
    clearColorAndDepthBuffers() // Resets the view to nothing
    glLoadIdentity()
    DisplayNormalScene()
}

Right now I do this only every few cycles because I save the view to a PNG file to see how it looks, but it seems fast enough to do every cycle if I take out the PNG file write.

Does all of that make sense? It seems to work right now, and it seems simple enough, which is why it surprises me that it works.

One followup question I have is how to get the most accurate background (or foreground) pixel count. With a small window (a small number of pixels), a shape is spread over fewer pixels; sometimes the math says part of the shape is only half a pixel wide, but it can’t be drawn as half a pixel, so it isn’t drawn at all, the count misses all of those half pixels, and the count is bad. So the best count (I guess) would come from the maximum resolution possible, so the shape is spread over the most pixels possible…but how do I keep a small-resolution window yet get the data from a large-resolution render? As I mentioned above, I draw the shape all by itself and read its pixels, then clear the display and draw the actual scene, and I don’t change the window shape during that time, so I never get to the maximum possible resolution, because I don’t want to be resizing the window unnecessarily…so is there some OpenGL way to get a maximum-resolution image?

OR, do I just live with this lower resolution as long as the percent error of the area coverage between the lower-res and higher-res cases is acceptable?
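As an aside, the conversion from a pixel count back to world units is just the covered fraction of the viewport times the world-space area the ortho frustum spans; a sketch with hypothetical names:

```java
class PixelArea {
    // foregroundPixels: vehicle pixels counted in a width x height render.
    // orthoWidth/orthoHeight: (right - left) and (top - bottom) passed to
    // glOrtho, in world units.
    static double crossSection(int foregroundPixels, int width, int height,
                               double orthoWidth, double orthoHeight) {
        double coverage = (double) foregroundPixels / (width * height);
        return coverage * orthoWidth * orthoHeight;
    }
}
```

Since each pixel stands for a fixed patch of world area, the half-pixel quantization error at the silhouette shrinks as the counting resolution grows, which is why rendering the counting pass at a higher resolution than the window tightens the estimate.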

If you use the Framebuffer Object (FBO) extension you can render directly to a texture or off-screen surface which is independent of your actual display resolution.

Thanks Orangy Tang. The OpenGL forums mentioned that using FBOs will also help with window occlusion problems (i.e. other windows on top of the OpenGL window causing an occlusion, which I thought glReadPixels() would be affected by)…and I think you are right, using the FBO will let me solve the resolution issue…once I learn to use it.

Thanks.

Correction: glReadPixels() is not the cause of the issue…the contents of hidden parts of a window are simply undefined.

One followup…would there be a case where FBOs are not available/supported? Are they hardware-dependent? I ask because I was looking at some JME code http://forum.jpatch.com/download/file.php?id=117&sid=723756e4b88b8219b6e8ec6cd7addf0b&mode=view and it checks whether the FBO extension is available before using FBOs.

Also, so far the FBO info I find deals with writing to a texture, but I just want to get the buffer of data from the scene…I don’t need to save to a texture. I’m trying to follow this post: http://www.java-gaming.org/index.php/topic,17655.0.html and it is informative (not done with it yet).

FBOs write to a texture (primarily) as the backing buffer. There are hardware limitations: some older cards do not support them, and even those that do may not support all combinations of formats and types, or may have different max sizes. That said, FBOs are much, much faster than rendering to the screen and then copying the screen into a texture (if you’re doing standard render-to-texture stuff).

You do not seem to need that speed boost, however being able to use a different size for your counter than for your display would be very useful. To get the pixels out, bind the FBO, render, and then call glGetTexImage on the attached texture, or you might be able to use glReadPixels() while the FBO is still bound. Also, there are things called render buffers, which store data for an FBO but aren’t usable as textures. glReadPixels() may work with these too (if render buffers can be used as a color attachment; sorry if I’m going overboard), just as it would with a texture color attachment.

I tried doing the glReadPixels() while the FBO is bound but it got me only a black image…as a test I forward the pixels to a PNG file so I can see if it is working. I set the clear color to green, so I should have at least seen a green background, so I’m not sure what is wrong…assuming that doing the glReadPixels() is valid for the bound FBO as you are assuming.

Here is the pertinent part of the code:

Display method


  if (isFboSupported) {
    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, fboID);
    gl.glPushAttrib(GL.GL_VIEWPORT_BIT);
  }
 
  //
  // Convert to ORTHOGRAPHIC Projection
  //
  gl.glViewport(0, 0, frameWidth, frameHeight);        
  gl.glClearColor(0.0f, 1.0f, 0.0f, 0.0f);  // GREEN background
  gl.glMatrixMode(GL.GL_PROJECTION);
  gl.glLoadIdentity();
  gl.glOrtho(-model.getRadius(), 
              model.getRadius(), 
             -model.getRadius(), 
              model.getRadius(), 
              model.getRadius(), 
             -model.getRadius());
  gl.glMatrixMode(GL.GL_MODELVIEW);
  gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
  gl.glLoadIdentity();  

  //
  // Draw the model by itself
  //
  model.drawModel(gl);

  if (isFboSupported) {
    saveFrameAsPNG(gl, "frame.png");  
    gl.glPopAttrib();
    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
  }

  //
  // Convert back to PERSPECTIVE Projection
  //
  gl.glViewport(0, 0, frameWidth, frameHeight);
  gl.glMatrixMode(GL.GL_PROJECTION);
  gl.glLoadIdentity();
  glu.gluPerspective(65.0, (double) frameWidth / frameHeight, 0.1, 20000.0); // cast avoids integer division truncating the aspect ratio
  gl.glMatrixMode(GL.GL_MODELVIEW);  

  // Draw the entire scene
  drawScene(gl);

The saveFrameAsPNG() method simply does glReadPixels() and processes the byte data to count the background pixels (they should be green).
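The counting step inside that method could be sketched like this, assuming a tightly packed RGB byte array (GL_PACK_ALIGNMENT of 1) and a pure-green clear color; with anti-aliasing or lighting producing green-ish vehicle pixels, an exact match like this would miscount, so this assumes neither:

```java
class GreenCount {
    // Returns the number of non-background (vehicle) pixels.
    static int countForeground(byte[] rgb, int width, int height) {
        int foreground = 0;
        for (int i = 0; i < width * height * 3; i += 3) {
            int r = rgb[i] & 0xFF;        // bytes are signed in Java,
            int g = rgb[i + 1] & 0xFF;    // so mask to get 0..255
            int b = rgb[i + 2] & 0xFF;
            // Anything that is not the exact clear color counts as vehicle
            if (!(r == 0 && g == 255 && b == 0)) {
                foreground++;
            }
        }
        return foreground;
    }
}
```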

I’m essentially doing the following:

  • bind the FBO,
  • convert to Orthographic projection,
  • draw the vehicle by itself,
  • read pixels of current buffer (or so I thought),
  • unbind the FBO,
  • convert back to Perspective projection,
  • draw the full scene

BTW, if I don’t execute any of the code in the ‘isFboSupported’ if statements then I am able to read the pixels as desired, but obviously this is the way I’m doing it without FBOs.

In my init() method I get a successful fbo when I create it using: glGenFramebuffersEXT()

But when I do the status check in my display() method I get the error GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT_EXT, which I need to track down.

Online I found that this error means the framebuffer violates the rule “There is at least one image attached to the framebuffer.”, i.e. nothing is attached to it yet.

Searching on the web it seems that status error may be ok…someone posted this: “When you first create an FBO, it is always incomplete. It’s only after you have attached all targets, including your depth cubemap face, that the FBO will be complete.” at: http://www.gamedev.net/community/forums/viewreply.asp?ID=3312895

So I did some checking and I get the following statuses:
init() method
- Gen FBO
- Status: Complete

display() method
- Bind buffer (before rendering)
- Status: Incomplete, Missing attachment

 - After rendering, before unbinding
 - Status: Incomplete, Missing attachment

 - After unbinding
 - Status: Complete

So that verifies the comments I found…so the FBO may be created ok, but I either can’t simply read the pixels directly or I first need to attach a texture or a color renderbuffer for it to work correctly.

I’m wondering if the way I read the pixels is the problem… I have the following code that works when NOT using FBOs, where I read from the BACK buffer (or so it seems…not my own code).


      gl.glReadBuffer(GL.GL_BACK);
      gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);
      gl.glReadPixels(0, 0, frameWidth, frameHeight,
                      GL.GL_RGB, GL.GL_UNSIGNED_BYTE,
                      pixelsRGB);

Maybe having the glReadBuffer() set to GL_BACK is wrong…so what should it be?

You haven’t set up the FBO correctly yet. An FBO (simplified view) is an object handle/id that keeps track of the different buffers used for storing OpenGL pixel output. To get a complete FBO, you need to have at least one buffer for color (I won’t get into how to do more, since it’s more complicated and you don’t need it) and a depth buffer. If you don’t need a color buffer, you have to call glDrawBuffer(GL.GL_NONE) (and the same call for glReadBuffer()).

There are a couple of commands for attaching buffers to an FBO. There are two types of buffers: ones that are textures and ones that are render buffers. Render buffers store the data but don’t have the extra weight of writing into texture memory (useful if you only want a color image but are still doing a depth test).

Here is a useful site that goes over a lot of the necessary steps: http://lwjgl.org/wiki/doku.php/lwjgl/tutorials/opengl/basicfbo
It’s for LWJGL (don’t kill me :)), but it still applies to JOGL.
The OpenGL 3.0 spec is quite useful too, since the FBO support in there is almost identical to the EXT_framebuffer_object documentation.
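Putting the advice above together, the missing setup could look roughly like this in JOGL (a sketch against the EXT_framebuffer_object entry points the thread already uses; `gl`, `fboWidth`, and `fboHeight` are assumed from your own code, and the FBO size can be larger than the window, which also addresses the earlier resolution question):

```java
int[] fbo = new int[1], colorRb = new int[1], depthRb = new int[1];
gl.glGenFramebuffersEXT(1, fbo, 0);
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, fbo[0]);

// Color renderbuffer: we only need to read pixels back, not texture from it.
gl.glGenRenderbuffersEXT(1, colorRb, 0);
gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, colorRb[0]);
gl.glRenderbufferStorageEXT(GL.GL_RENDERBUFFER_EXT, GL.GL_RGB8,
                            fboWidth, fboHeight);
gl.glFramebufferRenderbufferEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_COLOR_ATTACHMENT0_EXT,
                                GL.GL_RENDERBUFFER_EXT, colorRb[0]);

// Depth renderbuffer, so hidden faces are culled before counting.
gl.glGenRenderbuffersEXT(1, depthRb, 0);
gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, depthRb[0]);
gl.glRenderbufferStorageEXT(GL.GL_RENDERBUFFER_EXT, GL.GL_DEPTH_COMPONENT24,
                            fboWidth, fboHeight);
gl.glFramebufferRenderbufferEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_DEPTH_ATTACHMENT_EXT,
                                GL.GL_RENDERBUFFER_EXT, depthRb[0]);

// Now that attachments exist, the status check should pass while bound.
if (gl.glCheckFramebufferStatusEXT(GL.GL_FRAMEBUFFER_EXT)
        != GL.GL_FRAMEBUFFER_COMPLETE_EXT) {
    throw new RuntimeException("FBO incomplete");
}

// While the FBO is bound, read from its color attachment, not GL_BACK:
gl.glReadBuffer(GL.GL_COLOR_ATTACHMENT0_EXT);
```

This would also answer the GL_BACK question earlier in the thread: GL_BACK refers to the window’s back buffer, which an FBO does not have.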