Limitations on size of VBOs?

I’ve been playing around with my terrain rendering code recently and come across some odd behaviour.

Quick summary: application loads a grey-scale image, generates a mesh, uploads it to a VBO, and renders with multiple textures to blend grass, rock, snow, etc. Nothing amazingly complex, and it’s been working solidly for quite some time. Recently I’ve been extending the code to handle multiple terrain ‘chunks’ and dynamically loading/trashing them as the camera moves. Again it’s all working nicely (or at least seems to be).

Now I’ve downloaded some massive grey-scale images (of parts of Italy, cos I went there recently) which will be cut up into smaller chunks, but I thought I’d test the software with a much larger image just to see how well it coped (or not). Normally the chunks are 256x256; the image I tried was 2100x2100 pixels. The software took some time to load and process this, as you can imagine, and at the end I got - nothing.

Debugged the code but it seemed to be working correctly. So I bodged the terrain generator so that it did 500x500 (i.e. the top-left part) and it worked fine, ditto 1000x1000, ditto 1500x1500, but the full image still fails! No errors, just nothing being rendered. The frame-rate indicates that it’s basically rendering nothing. Very odd.

Is there some limit on the size of NIO buffers and/or VBOs? Or has anyone come across similar behaviour? By my calculations the 2100x2100 image should be about 70MB worth of vertex data.
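(For reference, that works out roughly as: 2100 x 2100 = 4.41 million vertices, and at a guess of around 16 bytes per vertex, e.g. three position floats plus one extra float, that’s 4.41M x 16 bytes ≈ 70.6MB.)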

Note that the same terrain image broken down into the normal 256x256 chunks with multiple VBOs also works fine. Obviously this isn’t a big deal as the ‘proper’ chunked solution works, but I’m just surprised that the big image failed but without any errors.

Any suggestions?

Cheers

  • stride

Just try it :smiley:

When you call glDrawArrays or whatever, make sure the vertex count isn’t negative.
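Something along these lines (just a sketch, LWJGL-style; vertexBuffer and floatsPerVertex are placeholders for whatever your own code uses):

// Sanity-check the count before drawing: 0 draws nothing, and a negative
// count can fail silently on some drivers rather than raising a GL error.
int vertexCount = vertexBuffer.limit() / floatsPerVertex;
if (vertexCount <= 0) {
    throw new IllegalStateException("Bad vertex count: " + vertexCount);
}
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);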

I usually just find the number of vertices that are about to be drawn, and if it goes over, say, 1000, then I render all of the points, clear the vertices, and add the extra vertices that caused the restart. That way you do not go too high on the buffers.
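Roughly this idea (a rough sketch only; Vertex and drawVertices are made-up names standing in for whatever you actually use):

static final int MAX_BATCH = 1000;
List<Vertex> pending = new ArrayList<Vertex>();

void addVertex(Vertex v) {
    if (pending.size() >= MAX_BATCH) {
        flush();            // draw everything collected so far and start over
    }
    pending.add(v);         // the vertex that would have overflowed begins the next batch
}

void flush() {
    drawVertices(pending);  // your actual upload/draw call goes here
    pending.clear();
}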

When my buffers get too big, I get some kind of core dump from Java, meaning you did not just get an error, you broke the JVM (it still works, but the buffers need to be smaller). So do not screw around with buffers…
I did not realise it was just my code, panicked (reinstalled Java, then my Linux login, then the OS), and ended up screwing up my OS and wiping my drive, losing all of my work. (See MERCury thread.) Major failure, all because I overloaded my buffers.

I feel like something is wrong with your drivers. You should be able to render millions of vertices without Java running out of memory. That’s odd!

Yes, but you can overflow buffers, no?

Well, of course you can. But I think the limit would be higher than 1000 vertices. That’s really odd.

When I set it higher it gets laggy.
Probably just my CPU though (on a notebook here)

GL12.GL_MAX_ELEMENTS_VERTICES

Even on my integrated chip this gives over 60k.
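You can query it like this (LWJGL 2 style, needs a current GL context):

int maxVerts = GL11.glGetInteger(GL12.GL_MAX_ELEMENTS_VERTICES);
System.out.println("GL_MAX_ELEMENTS_VERTICES = " + maxVerts);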

Well, vertex arrays are CPU-side. If you need to process large amounts of static data, use a VBO, as the data is stored on the GPU.
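i.e. something like this (rough sketch with LWJGL; vertexCount and floatsPerVertex depend on your own layout):

// Build the data once in a direct NIO buffer, then hand it off to the GPU.
FloatBuffer data = BufferUtils.createFloatBuffer(vertexCount * floatsPerVertex);
// ... fill 'data' with the vertex data ...
data.flip();

int vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_STATIC_DRAW); // static data, kept GPU-side
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);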

Yeah, I was using Vertex Array Objects. A little slower, but if you can get 60K, then I must have done something wrong.

So I did a little stress test, and made mine 50K. It did well, so I guess I made a problem, fixed it, then fixed the core solution, and went back to testing it now? Whatever. I remember testing VAOs, and over 1000 lagged it up a bit. But now it works. No idea how, but it worked :).

Hmmm, done some more tests after the various feedback, but still no clearer.

No, it definitely has the correct (and positive) number of vertices in the index buffer.

I thought HeroesGraveDev might have been on to something, maybe I was exceeding some maximum value or other (the size of the index buffer is an int), but no, it’s not even close.

Will do some more tests tomorrow to see if I can work out what the ‘threshold’ is where it stops working.

Very odd.

Hi

That value (GL_MAX_ELEMENTS_VERTICES) is sometimes completely wrong and is no longer supported in OpenGL ES. Use it with care.

If you still use CPU-sourced data, you have to create a direct NIO buffer, and you can run out of memory on the native heap. Either way, you have to pass the data through to the GPU’s data store even when you use a VBO, for example when calling glBufferData.
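For example (a rough sketch using the numbers from this thread; on HotSpot the relevant limit for direct buffers is -XX:MaxDirectMemorySize rather than the normal heap filling up):

int floatsPerVertex = 4; // just an assumed layout
try {
    // ~70MB of vertex data in a single direct buffer
    FloatBuffer vertices = BufferUtils.createFloatBuffer(2100 * 2100 * floatsPerVertex);
} catch (OutOfMemoryError e) {
    // native/direct memory exhausted, even though the Java heap may be fine
    System.err.println("Ran out of native memory: " + e.getMessage());
}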

Did a bit more research on this but didn’t come up with anything definitive. There are limitations on the size of a VBO, but they look to be vendor-specific; anecdotally the limit appears to be around 32MB on most cards. Presumably I went over that limit when I tried to render the large terrain.

In any case the ‘correct’ way to do this is to split the mesh into multiple VBOs, which is what I was doing anyway :). A chunk size of around 4MB seems to be the general recommendation.
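Purely for illustration, checking a chunk against that rule of thumb (16 bytes per vertex is just a guess at the layout):

// ~4MB per VBO as the budget; a 256x256 chunk at 16 bytes per vertex is ~1MB,
// while the whole 2100x2100 mesh in one VBO would be ~70MB.
static final int TARGET_VBO_BYTES = 4 * 1024 * 1024;

static boolean fitsInOneVbo(int widthInVertices, int heightInVertices, int bytesPerVertex) {
    long bytes = (long) widthInVertices * heightInVertices * bytesPerVertex;
    return bytes <= TARGET_VBO_BYTES;
}

// fitsInOneVbo(256, 256, 16)   -> true  (~1MB)
// fitsInOneVbo(2100, 2100, 16) -> false (~70MB)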

Cheers for all the responses.

  • stride

That seems plausible. If the constant has an absurd value, I only put a few thousand elements into a VBO.