Getting out of the stone age - baseline OpenGL functionality

I’m getting back into some graphics programming, so I’d like to get a quick survey/discussion on what people think is good baseline OpenGL functionality these days. I’ve been stuck programming for 1.1 for aaaaages, with fancier things like FBOs as optional extras. It’d be nice to raise the base level to something more recent, say GL2.0, because things can be so much faster and nicer to work with.

So, what do you consider a good base line? Still in fixed-function land? Completely shader based? GL1.1 + VBOs + FBOs? GL1.5 + shader model 2? Or DSA all the way?

For reference, here’s the latest Valve hardware survey (with the usual caveat that it’s mostly gamer hardware): http://store.steampowered.com/hwsurvey

The Valve hardware survey is massively biased toward hardcore gamers though.

My own stats show a slightly different - though encouraging - picture. If you’re mostly interested in Windows and newer Macs, you can safely go for OpenGL 2.1+; if you want to get as many Macs as possible, you’ll still need to target fixed function for another year or so (at a guess). Linux is still largely irrelevant and most of the drivers leave something to be desired anyway.

VBOs are universally supported, and if you use them exactly right they work well and bug-free as far as I can see.
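
For reference, “exactly right” amounts to something like this minimal LWJGL 2 sketch (assuming a current GL 1.5+ context; the class and method names are made up for illustration):

```java
import java.nio.FloatBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;

// Illustrative helper, not from any real codebase: one static triangle in a VBO.
public final class VboSketch {

    /** Uploads one triangle into a new VBO; assumes a current GL 1.5+ context. */
    public static int createTriangleVbo() {
        FloatBuffer data = BufferUtils.createFloatBuffer(6);
        data.put(new float[] { -1f, -1f,  1f, -1f,  0f, 1f });
        data.flip(); // a missed flip() is the classic "not quite right" VBO bug

        int vbo = GL15.glGenBuffers();
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_STATIC_DRAW);
        return vbo;
    }

    /** Draws the triangle through the fixed-function vertex array path. */
    public static void draw(int vbo) {
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        GL11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0L); // 0L is a byte offset into the VBO
        GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);
        GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
    }
}
```

The subtle part is that binding a VBO changes the last argument of glVertexPointer() from a client-memory pointer to a byte offset, which is where a lot of “VBOs don’t work” confusion comes from.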

Cas :slight_smile:

Yeah, the Valve survey is pretty slanted, but it’s useful nonetheless. And everyone will have their own idea of how ‘casual’ or hardcore an audience they’re aiming at.

I’d also be interested in what kind of reliability people have found with the more exotic features. If I remember right, FBOs were annoyingly picky at first, but they seem to have settled down nicely now from what I’ve seen, and you mention something similar for VBOs.
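
For context, the pickiness was mostly about framebuffer completeness: drivers disagreed on which attachment combinations were legal, so you always had to ask. A rough LWJGL 2 sketch of the check (the helper class is made up, and colorTexture is assumed to be an existing texture id):

```java
import static org.lwjgl.opengl.EXTFramebufferObject.*;

import org.lwjgl.opengl.GL11;

// Illustrative only: attach a colour texture, then verify completeness before use.
public final class FboSketch {

    public static int createFbo(int colorTexture) {
        int fbo = glGenFramebuffersEXT();
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL11.GL_TEXTURE_2D, colorTexture, 0);

        int status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
        if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
            // e.g. GL_FRAMEBUFFER_UNSUPPORTED_EXT on picky drivers: fall back to the backbuffer
            throw new IllegalStateException("FBO incomplete: 0x" + Integer.toHexString(status));
        }
        return fbo;
    }
}
```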

If Linux is irrelevant, then so is Mac, by your own numbers, princec, as well as the Humble Bundle numbers. As for drivers, meh… I recall having the same pain on Windows. Drivers are buggy.

I have found VBOs to work fine. I had some ATi problems at first, but that was because I had things not quite right and Nvidia is more forgiving. I still don’t use FBOs, but they are pretty old now as well. With so many games using shadow buffers and deferred rendering, I guess it’s something that is well supported…

But my bone to pick with all this is performance. I have a pretty new NVS 300, which is a “work”-level card. It’s new and it’s pretty slow compared to an old 8800 GT or even a GeForce 7600; it has quite low fill rates compared to other cards. Laptops are even worse. I have stuff that runs at hundreds of fps on a desktop but won’t get 5fps on a laptop, yet the laptop’s card should only be about 2-4 times slower, not 20x!

I hate to think what Android would be like with such variable performance.

100% OpenGL 3.3+, because that’s how you’re meant to use OpenGL these days. All fixed functionality is deprecated. Do you use JFrame.show()? No? Then don’t use glBegin()/glEnd() either!

Quite a lot of cards out there don’t support OpenGL 3.x but still support 2.1+: for example, one of my desktops and my laptop.

Sure, if you’re doing Rage or some other bleeding-edge thing. But nothing is more irritating than a sprite game that demands some super-new card feature when it simply didn’t need to.

No offense, princec, but we are not going to upgrade our cards for RoTT or some such game. Getting us to part with our money is about as much as you can expect.

Macs make up about a quarter of my income but account for only 0.1% of my support time. (So if I had to make my life easier, I’d ditch Linux.)

A 20x slowdown probably means you’ve hit an unaccelerated software path, not that the card itself is necessarily 20x slower, surely? Anyway, for anything anyone in this forum is ever likely to do (in the context of game development as a hobby or indie, using Java), even a work card is going to exceed your requirements these days. Revenge was tested on a hilariously poor integrated Nvidia chip in a laptop that was three years old at the time, and it still cranked out 60fps.

Cas :slight_smile:

No, no, no. That is not how this works, at all.

As developers we are advised which functions will either be removed in forthcoming versions or discouraged because there are better alternatives. However, these hints are entirely irrelevant when faced with the hardware and drivers actually deployed out there. A fairly huge proportion of systems are running OpenGL 2.1 or less, end of. You are not meant to use the 3.3+ programmable pipeline if you want your software to actually run.

Using glBegin()/glEnd() has been effectively redundant since OpenGL 1.1, when a faster alternative (vertex arrays) arrived. As all machines support GL 1.1 or better, you’d be daft to keep using immediate mode now that it’s been marked as deprecated.
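
To spell that out with LWJGL 2 (a made-up triangle, purely illustrative): the first form costs a native call per vertex, the second submits the whole batch in one call.

```java
import java.nio.FloatBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

public final class ImmediateVsArrays {

    /** Deprecated glBegin()/glEnd() style: one call per vertex. */
    public static void drawImmediate() {
        GL11.glBegin(GL11.GL_TRIANGLES);
        GL11.glVertex2f(-1f, -1f);
        GL11.glVertex2f( 1f, -1f);
        GL11.glVertex2f( 0f,  1f);
        GL11.glEnd();
    }

    /** GL 1.1 client-side vertex array: one buffer, one draw call. */
    public static void drawWithVertexArray() {
        FloatBuffer verts = BufferUtils.createFloatBuffer(6);
        verts.put(new float[] { -1f, -1f,  1f, -1f,  0f, 1f });
        verts.flip();

        GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        GL11.glVertexPointer(2, 0, verts); // 2 floats per vertex, tightly packed
        GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);
        GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
    }
}
```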

Cas :slight_smile:

  • The drivers for the new API are much easier to create and maintain for the graphics card companies, so it will have better support for a longer time in the future.
  • The new API has better performance, both CPU- and GPU-wise, especially on low-end computers.
  • Some functionality (like FBOs) is pretty much required for anything I’d make. Guess when it was promoted to core? OpenGL 3.0.
  • It’s the future. People are abandoning DirectX 9 now, and DirectX 9 is roughly the OpenGL 2.x feature level.
  • Fewer Intel graphics cards to support / hack around.
  • Sampler objects. Those alone should be enough to make you never go back to OpenGL 2.x after using them once (see the sketch below).

At least use as much OpenGL 3.0 compliant code as possible. I do agree that using glBegin-glEnd is okay if you just want to draw a fullscreen quad… >_>
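
To illustrate the sampler objects point from the list above (LWJGL 2’s GL33 bindings; the helper class and the parameter choices are mine): filtering and wrap state live in their own object, bound per texture unit, instead of being baked into every texture.

```java
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL12;
import org.lwjgl.opengl.GL33;

// Illustrative sketch: a single shared "linear, clamped" sampler (GL 3.3+ context assumed).
public final class SamplerSketch {

    public static int createLinearClampSampler() {
        int sampler = GL33.glGenSamplers();
        GL33.glSamplerParameteri(sampler, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
        GL33.glSamplerParameteri(sampler, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
        GL33.glSamplerParameteri(sampler, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        GL33.glSamplerParameteri(sampler, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
        return sampler;
    }

    /** Binding to unit 0 overrides whatever per-texture state the bound texture carries. */
    public static void bind(int sampler) {
        GL33.glBindSampler(0, sampler);
    }
}
```

One sampler can be shared across every texture in the game, which is exactly what makes them hard to give up.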

[image of a graphics card]
I’ll leave it as a reader’s exercise to find out when this card was released, and also to buy it if you have a weaker card than that in your desktop and hang out on Java-Gaming.org.

Wait, people on Macs play games? I didn’t know! I thought they were busy with physics simulations (AKA that bird game).

If Khronos want to come and give everyone a proper 3.3 compliant card and drivers, then they get to decide which version of their API I use. But until that happens I’ll use whatever version and extensions I damn well like.

I find your post unhelpful and unproductive. :stuck_out_tongue:

You can find it unhelpful and unproductive all you like, but you don’t write games for a living, so your advice and arguments aren’t worth shit, I’m afraid.

Cas :slight_smile:

Reading that back I realise that sounds a little harsh, but the argument stands if not the mean sentiment.

Cas :slight_smile:

Harsh is an understatement… I would have used the words flamebait or troll :persecutioncomplex:

What about writing code that works with 1.1, but when it detects a higher version, uses the nicer features?
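
With LWJGL 2 that detection is a one-off probe after context creation, roughly like this (the Features class is made up; the capability fields are, as far as I recall, LWJGL’s own):

```java
import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.GLContext;

// Illustrative: probe once after Display.create(), then branch per feature, not per version.
public final class Features {

    public final boolean vbos, fbos, shaders;

    public Features() {
        ContextCapabilities caps = GLContext.getCapabilities();
        vbos    = caps.OpenGL15 || caps.GL_ARB_vertex_buffer_object;
        fbos    = caps.OpenGL30 || caps.GL_EXT_framebuffer_object;
        shaders = caps.OpenGL20; // otherwise stay on the fixed-function path
    }
}
```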

It’s more work if you really want more than 1.1. Seriously, 2.1 is old now and gives you everything you really need for sprites and nice 3D with standard shaders; GL 3.x, not so much.

Let’s not forget that Quake 3 ran on TNT cards. princec is right: anything matched to the kind of art assets we can realistically produce should run well on pretty much anything.

Till you go mobile of course.

Sorry, been in “not mincing my words” mode since yesterday about our twatbag civil service having a massive tantrum and walk-out because there’s not enough money in the country to keep them in their Cyprus holiday homes into their retirements as they had been led to believe 20 years ago.

Cas :slight_smile:

Thread title is “Getting out of the stone age”. OpenGL 2.1 is the stone age. OpenGL 1.1 is more about dinosaurs than computers.

Factually inaccurate too, but that’s never worth getting in the way of a good rant.

I’ve been targeting 1.5 and implementing 2.0+ features using extensions or the 3.x APIs when they are detected, or disabling the feature otherwise.
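
Concretely, that tends to end up as a little dispatch wrapper per feature; for example, for FBOs with LWJGL 2 (the wrapper class is hypothetical):

```java
import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.EXTFramebufferObject;
import org.lwjgl.opengl.GL30;
import org.lwjgl.opengl.GLContext;

// Illustrative: core entry point where available, EXT fallback otherwise, feature off if neither.
public final class FboDispatch {

    /** Returns a framebuffer name, or -1 if render-to-texture must be disabled. */
    public static int genFramebuffer() {
        ContextCapabilities caps = GLContext.getCapabilities();
        if (caps.OpenGL30) {
            return GL30.glGenFramebuffers();
        }
        if (caps.GL_EXT_framebuffer_object) {
            return EXTFramebufferObject.glGenFramebuffersEXT();
        }
        return -1; // caller disables the feature
    }
}
```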

Orangy - I think I just made a very odd brain mistake back there and do apologise - I could have absolutely sworn that the post I was replying to was written by theagentd.

Cas :slight_smile: