Problem displaying a triangle array/mesh

First explain what you’re trying to do. >_<

:smiley:

So, I am rendering a simple triangle into a renderbuffer attached to a framebuffer object (FBO).

Then I want to check this render by drawing the FBO's contents to the screen.
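
For reference, this is roughly the setup I have (a sketch, assuming GL 3.0+ core; `width`, `height` and `drawTriangle()` are placeholders for my own code):

```cpp
// Sketch: a color renderbuffer attached to an FBO.
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);

glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    /* handle the error */;

// Render the triangle into the FBO.
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT);
drawTriangle(); // placeholder for my draw call
```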

Instead of rendering to a renderbuffer, render to a texture. Renderbuffers are usually a lot less flexible, since accessing them from shaders requires specific extensions (as far as I know, at least). I actually have no idea what plain renderbuffers are for in the first place. Multisampled renderbuffers made sense when there were no multisampled textures, which forced you to blit the multisampled renderbuffer to the screen or to another renderbuffer to downsample it. Even so, it's better to do that downsampling yourself in a shader, since you have much more control over how it's done (the most important case being tone mapping in HDR with MSAA).

When you have everything in a texture, you can use it any way you want, like drawing the texture to the screen. I still don't see how you ended up using FBOs when you were talking about normals a minute ago. Care to explain? =S
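
For comparison, a minimal render-to-texture setup looks like this (a sketch, assuming GL 3.0+; `width` and `height` are placeholders):

```cpp
// Sketch: a texture attached to an FBO instead of a renderbuffer.
GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tex);

glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// After rendering into the FBO, bind `tex` like any other texture
// and draw a fullscreen quad to put the result on the screen.
```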

Well, the normal vectors were for the 3D model; now I need to work on the shadows on the floor. With CUDA I calculate the projection of each triangle, which is itself a triangle. Once I have these projected triangles, I render them into the renderbuffer and read the result back pixel by pixel to know exactly where the shadow falls (I create the renderbuffer with a specific size).
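
This is roughly what I mean by reading the result back pixel by pixel (a sketch; the RGBA format is just what I happen to use):

```cpp
#include <vector>

// Sketch: read the FBO contents back to the CPU.
std::vector<unsigned char> pixels(width * height * 4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// pixels[] now holds one RGBA value per pixel; any non-zero value
// means a projected shadow triangle covered that pixel.
```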

However, I think I'll give textures a try, just to have something working…

I did it! With the RBO! :smiley:

However, since I am now going to have n shadow matrices (which I will merge into a final one), should I declare n RBOs or n FBOs?

I don't see why you need a renderbuffer or an FBO in the first place. Your shadows are supposed to end up on the screen, right? If you're not using shadow maps (which I believe you aren't, since you're projecting shadows onto the floor, right?), then why would you want to render shadows to a different buffer only to read it back and put it on the screen again? All of this can be accomplished with basic polygons, depth testing, and a small depth bias.
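
By "basic polygons, depth testing and a small depth bias" I mean roughly this (a sketch; the polygon offset values are guesses you would tune, and the two draw functions stand in for your own code):

```cpp
// Sketch: draw projected shadow polygons directly over the floor.
glEnable(GL_DEPTH_TEST);

drawFloor(); // placeholder for the floor geometry

// Pull the shadow polygons slightly towards the viewer so they
// don't z-fight with the floor they lie on.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);
drawProjectedShadowTriangles(); // placeholder for the CUDA-projected triangles
glDisable(GL_POLYGON_OFFSET_FILL);
```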

If I misunderstood you, I'd recommend that you keep only one renderbuffer and accumulate the shadows on the screen instead. Also note that if you want to build a real-time application (e.g. a game), you pretty much can't use glReadPixels or any kind of readback from the GPU, since it kills performance.
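
Accumulating on the screen could look like this (a sketch, assuming one additive pass per light; `numLights` and `drawShadowPass()` are placeholders):

```cpp
// Sketch: additively accumulate one pass per light on the screen.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive: dst += src

for (int i = 0; i < numLights; ++i)
    drawShadowPass(i); // placeholder: one pass per shadow matrix

glDisable(GL_BLEND);
```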

Basically I imagine my floor as a big matrix of tiles. I need to know which tiles are in shadow, and how much.
The FBO is mandatory: rendering straight to the default framebuffer on the screen, I would not be able to manage the results.

This shadow-merging operation is not supposed to happen frequently at all, but since the numbers in play are big, I need to use the GPU to avoid the hours a CPU would take (and also because rasterization is easy and fast on dedicated hardware).

Only one problem is arising: at the end of the shadow merge, I actually need to render this final result onto the floor… I really hope it isn't a problem to get the renderbuffer contents into a texture… ^^

I still don't understand it. Either you do shadow maps with an FBO and a depth texture, or you just render the shadow geometry directly to the screen. I still think you're doing this all wrong.

Anyway, you're supposed to use renderbuffers when you only want basic access to the pixels later through blitting; otherwise you use a texture, which you can feed to a shader and do whatever you want with. In your case, you could blit (= copy) the renderbuffer to the screen, but you'd need additive blending to be able to "stack" lights.
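
A blit to the default framebuffer looks like this (a sketch; note that glBlitFramebuffer itself is a raw copy and ignores blend state, so "stacking" would need textured-quad draws with blending instead):

```cpp
// Sketch: copy (blit) the FBO's color buffer to the screen.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default framebuffer
glBlitFramebuffer(0, 0, width, height,     // source rectangle
                  0, 0, screenW, screenH,  // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```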

Again, you’re doing this wrong. Read up more on shadows and lights before you continue!!! :-\

Fundamentally, I need to handle shadows in two ways:

  • the first, most important: given a square matrix of n×n elements, I need to know exactly, for each element, whether it has core shadow, soft shadow, or no shadow (I sketch what I mean below)

  • the second: just render them on the screen

How would you do that?
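
For concreteness, this is the kind of per-tile classification I have in mind for the first case (a sketch; the value encoding and thresholds are pure assumptions on my part):

```cpp
#include <vector>

// Hypothetical encoding: core shadow rendered as 255 and soft shadow
// (penumbra) as 128 into the red channel of an n x n buffer.
enum ShadowState { NO_SHADOW, SOFT_SHADOW, CORE_SHADOW };

std::vector<unsigned char> buf(n * n);
std::vector<ShadowState>   tiles(n * n, NO_SHADOW);

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadPixels(0, 0, n, n, GL_RED, GL_UNSIGNED_BYTE, buf.data());

for (int i = 0; i < n * n; ++i) {
    if      (buf[i] >= 200) tiles[i] = CORE_SHADOW; // umbra
    else if (buf[i] >= 60)  tiles[i] = SOFT_SHADOW; // penumbra
    else                    tiles[i] = NO_SHADOW;
}
```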

The standard approach to shadows in 3D is shadow mapping. You can get "soft shadows" by filtering the map when reprojecting, but it's impossible (?) to get a variable penumbra size based on the distance between the light and the shadow caster and between the caster and the shadowed geometry. Shadow mapping itself is pretty easy to implement with multiple passes, but it can be slow with many lights (say 50+, depending on scene complexity).
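
For reference, the usual depth-texture setup for shadow mapping looks like this (a sketch, assuming GL 3.0+ core; the 1024×1024 resolution is a placeholder, and you'd need one such map per light):

```cpp
// Sketch: depth-only FBO for shadow mapping.
GLuint shadowFBO, depthTex;
glGenFramebuffers(1, &shadowFBO);
glGenTextures(1, &depthTex);

glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Enable hardware depth comparison (PCF when sampled via sampler2DShadow).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE); // depth only, no color attachment
glReadBuffer(GL_NONE);

// Pass 1: render the scene from the light's point of view into shadowFBO.
// Pass 2: render from the camera, sampling depthTex to test each
//         fragment against the light's stored depth.
```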