Getting out of the stone age - baseline OpenGL functionality

[quote]You can find it unhelpful and unproductive all you like, but you don’t write games for a living, so your advice and arguments aren’t worth shit, I’m afraid.
[/quote]
So that was directed at me, huh? :frowning:

(I realize people here are flaming each other by “mistake”, so I’m not trying to douse you all in gasoline by answering.)

Keep in mind that we have different interests in programming. I’d love to see you implement shadow-map ray-traced volumetric lighting using OpenGL 1.1. Or deferred shading. My interest in game making is graphics, and I also think people should use the most powerful tools they have available. I’ve stated my arguments for OpenGL 3.3. I’m only targeting hardware that can actually run my stuff at 10+ FPS, and I’m ignoring Intel.

So I’m sorry, Intel card owners and Mac owners. No ray-traced volumetric lighting effects for you. Or GPU-accelerated particle effects. Or deferred shading with MSAA. I’m getting sick of being told “YOU CAN’T MAKE GOOD GAMES SO USE OPENGL 1.1 AND JAVA 1.0”. I want to get into the games industry after university (5.5 years from now), but most of all I want to do game graphics. I want to show an OpenGL 3.3-compliant demo with tessellation and deferred shading when I apply for a job.

I now realize that I actually AM dousing you all in gasoline, but whatever.

Um, I’m pretty sure StarCraft 2 uses deferred shading (and obviously MRTs) on anything above the absolute lowest setting, and have you heard of any problems related to that? Driver bugs in up-to-date OpenGL 3.3 drivers are pretty much a myth, IMHO.

First of all, MRTs cost no fill rate at all. It’s the same pixel, so no additional coverage checks (= filling pixels) are done, which is the whole point of MRTs. Bandwidth, however, does increase linearly with the number of render targets. HOWEVER, this doesn’t matter, because…

To spew some technical reasoning about fill rate: how many new commercial games do not use deferred shading nowadays? Don’t you think the graphics card makers have adapted? You can demonstrate this yourself by enabling 8x MSAA in a forward-rendered game. Your FPS will most likely not even drop to three quarters of what you get with no MSAA (assuming a realistic test). Why? Because your graphics card has far more bandwidth and fill rate than basic forward rendering needs. HDR rendering plus deferred shading means three or four 16-bit floating-point RGBA render targets. Add antialiasing and you multiply both the fill rate (subsamples) and the bandwidth needed, and you STILL don’t get a linear drop in FPS. I’ll even dare say that no graphics card can use its full hardware potential without antialiasing and/or deferred shading.

Anyway, if you need help with setting up MRTs, I’m your man. xD
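Roughly, the setup looks something like this - a minimal sketch of my own, assuming LWJGL 2 and a GL 3.0+ context, with two RGBA16F attachments as placeholders (a real G-buffer would also want a depth attachment):

[code]
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;
import org.lwjgl.opengl.GL30;

public class GBuffer {
    // Creates an FBO with two RGBA16F colour attachments and returns its handle.
    public static int create(int width, int height) {
        int fbo = GL30.glGenFramebuffers();
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);

        for (int i = 0; i < 2; i++) {
            int tex = GL11.glGenTextures();
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, tex);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL30.GL_RGBA16F, width, height, 0,
                              GL11.GL_RGBA, GL11.GL_FLOAT, (ByteBuffer) null);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0 + i,
                                        GL11.GL_TEXTURE_2D, tex, 0);
        }

        // Map fragment shader outputs 0 and 1 to the two attachments.
        IntBuffer drawBuffers = BufferUtils.createIntBuffer(2);
        drawBuffers.put(GL30.GL_COLOR_ATTACHMENT0).put(GL30.GL_COLOR_ATTACHMENT1).flip();
        GL20.glDrawBuffers(drawBuffers);

        if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE)
            throw new IllegalStateException("G-buffer FBO is incomplete");

        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
        return fbo;
    }
}
[/code]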

Yes, it was, but if you start swapping names around in the thread you can see the whole thing is a conflation of errors (my humblest apologies). As you never said what I thought you said, and Orangy never said what I thought he said, my response was completely random…

I do realise you’ve got a completely different objective, and a worthy one at that: it is a very good idea to put together a portfolio of awesome graphics programming if you really want a job in the industry doing that. Orangy has a slightly different objective here though (not least because he’s already in the industry ;)). Orangy’s targets are generally about 2-3 years behind the Steam demographic target (and you should be aware that the Steam demographic is heavily skewed, and, not only that, had a sharp divide along the Mac OS line last time I looked). Just because an API is new and an old version is deprecated does not mean the old one is no longer in use; in fact it’s probably the older API you’d want to target if you want to actually release something. There is a reason why World of Warcraft remains at #1, and it’s that it runs on just about anything because its hardware requirements are so low. So that is something to bear in mind - it’s probably useful for you to know that fancy effects are only half of getting a job in the industry: at some point you’ll need to know how to fall back gracefully without them.

Cas :slight_smile:

Which is where we differ, I’m afraid. You seem to be aiming only at the high-end hardcore gamer spec, whereas Cas and I both value being able to run on Macs and on Intel chips. Across the entire forum, you and Cas are probably at the extreme ends of the spectrum. :slight_smile: The whole point of this thread is that I want to move to something more in between, and find the sweet spot between functionality and compatibility. Lurching to either extreme isn’t particularly practical, I feel.

Edit: Wot Cas said.

Dammit. Please stop calling anything at or above the level of a GeForce 8000-series card high-end already. My friend’s $600 laptop has DX11 support, for god’s sake. You’re targeting the lowest of the low-end, but I am NOT targeting high-end only. There’s a middle ground… -_-’

That is the problem - you are not a game developer.
Tech demos are naturally different and use more stuff.
If you want to create a game experience, you want to reach as many people as you can, and give up on graphical effects if necessary.

Also, please stop talking crap about OpenGL 2 being stone age.
I bought a laptop four years ago that has 2.1. I work with it every day, I have money, I am an IT person - so it’s safe to say many, many more people have this kind of equipment.
It runs games up to NFS: Carbon,
and if those graphics aren’t pretty enough for you, then there’s obviously something wrong with you.

Just a random idea: how about targeting OpenGL ES 2.0? Just a few days ago the news was posted that the ANGLE library is now fully OpenGL ES 2.0 certified. LWJGL also now supports OpenGL ES, so in theory it should be a pretty reliable target on Windows (probably even more so than using native OpenGL drivers).

As a side benefit it’ll make porting to mobile platforms and WebGL (maybe using GWT) easier.

Price isn’t the only factor though - it’s also about how often people upgrade, or what’s bundled in off-the-shelf PCs. My desktop, for example, only has a 6800. Not because I couldn’t afford to put a better card in there, but because it’s perfectly adequate right now and I haven’t been bothered to upgrade yet. Similarly, people choose laptops that get better battery life, which often runs counter to having the latest shiny thing Nvidia has put out. There are lots of reasons why people lag several years behind what’s current.

The Steam survey even shows that only 56% of people are DX10-capable. And those are pretty hardcore gamers. Your friend with a DX11 card is in the top 5% of even those hardcore gamers. You might not consider it to be, but that’s very high-end.

Yup - it’s all about what is actually out there in the wild, not what you yourself consider to be high or low. I think for Orangy Tang’s purposes, targeting OpenGL 2.1 will let him use most of the fancy new functionality he wants to try out, like shaders, and only lose, what, maybe 15% of the potential audience - which seems a good compromise.

Cas :slight_smile:

Observations about what’s cheap and what’s obsolete, judgments about how people should just get a clue, upgrade, and stop seeing GL 3.x as “super fancy”, and so on… it’s all well and good, but Cas runs a business. I’m sure he’d love to ditch the legacy cruft, but the hard fact is that casual games that support older OpenGL versions sell more units than ones that don’t.

Me, my target is “whatever runs on the random hardware I have on hand”, and that’s all OpenGL 3.3 and higher. By the time I ever get my current project shipped, OpenGL 6.0 will be out. :slight_smile:

I’d love to ditch the old fixed-function pipeline and finally start using shaders in earnest. Unfortunately I just have to support some older hardware for another couple of years yet.

Also… there is a school of thought that says I seem to be achieving what I want to achieve without actually bothering :slight_smile: Retro graphics FTW!

Cas :slight_smile:

Are there any major technical hurdles to supporting multiple versions in one game? I envision different levels of graphical settings corresponding to different render pass implementations.

That’s a very difficult question to answer, I think. In my experience it’s very dependent on what you’re trying to do graphically, and how much you can afford to scale things back without breaking your gameplay.

(All this is my own experience; if anyone wants to chip in with their own angle or methods, that’d be helpful.)

FBOs, for example - if you restrict yourself to 24-bit RGB then you can pretty easily emulate them by drawing to the backbuffer, and it’s fairly easy to do it so that only a tiny amount of code needs to change. But as soon as you do something more complicated (like wanting RGBA, or using FBOs with shared depth buffers), the non-FBO path has to do much more elaborate things, like drawing twice with different blend modes or using the stencil buffer to emulate the shared depth buffer, and it ripples through to the higher-level code quite badly.
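To illustrate just the simple case, here’s a rough sketch of my own (LWJGL 2 style; useFBO, fbo, texture, width, height and renderScene() are all placeholders, and the texture is assumed to already exist at the right size):

[code]
// Render-to-texture with an FBO when available, otherwise fall back to
// drawing into the backbuffer and copying the result into the texture.
if (useFBO) {
    EXTFramebufferObject.glBindFramebufferEXT(EXTFramebufferObject.GL_FRAMEBUFFER_EXT, fbo);
    GL11.glViewport(0, 0, width, height);
    renderScene();
    EXTFramebufferObject.glBindFramebufferEXT(EXTFramebufferObject.GL_FRAMEBUFFER_EXT, 0);
} else {
    // Fallback path: draw into the backbuffer, then grab the pixels.
    GL11.glViewport(0, 0, width, height);
    renderScene();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
    GL11.glCopyTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    // The backbuffer now holds scratch data, so clear it before the real frame.
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
}
[/code]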

Things get worse when you have a combinatorial explosion of enabled effects, hardware capabilities and fallback paths. Choosing a good baseline goes a long way towards making this all much more manageable. And that’s before we’ve touched on keeping all these different options tested and working with only one pair of hands.
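And on the capability side, I’d pick the path once at startup rather than scattering checks everywhere. A minimal sketch (LWJGL 2; the RenderPath names and cut-offs are just an example, not anyone’s actual engine):

[code]
import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.GLContext;

public class PathChooser {
    public enum RenderPath { FIXED_FUNCTION, FIXED_FUNCTION_VBO, SHADERS_FBO }

    // Call once after the Display/context has been created.
    public static RenderPath pick() {
        ContextCapabilities caps = GLContext.getCapabilities();
        boolean fbo = caps.OpenGL30 || caps.GL_EXT_framebuffer_object;
        if (caps.OpenGL20 && fbo) return RenderPath.SHADERS_FBO;        // GLSL + render-to-texture
        if (caps.OpenGL15)        return RenderPath.FIXED_FUNCTION_VBO; // VBOs, no shaders
        return RenderPath.FIXED_FUNCTION;                               // bare fixed-function fallback
    }
}
[/code]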

Just to prove my point - a friend of mine told me he is buying an IBM T42 laptop, because he doesn’t need anything powerful:

  • Intel Pentium M (Centrino), 1.7 GHz, 400 MHz FSB
  • 1 GB 400 MHz DDR1
  • ATI Radeon 7500 with 32 MB (which supports OpenGL 1.4)

And when doing 2D games I think of awesome games like Diablo 2 and their requirements - a game that would run on that laptop with no problem.

It’s weaker than my PSP. I’m actually laughing my ass off at him if he’s paying for that shit. Is he, and how much?

I use OpenGL ES 1.0, and even that is still beyond 50% of phone hardware. So some people are stuck in the stone age for two more years.

I’ve written an engine that had rendering paths for the following:

  • Fixed-function lighting + multi-texturing.
  • Two paths for semi-programmable hardware, utilizing NV_register_combiners and ATI_fragment_shader.
  • Full shader-based (GLSL). I’d used the low-level shader extensions as well, but ended up throwing those out when GLSL drivers stabilized.

I would never, ever do it again. Which means I’m closer to what theagentd is saying. I’m not comfortable with his strong opinion either, but in his defense, Orangy Tang didn’t really specify in the OP what kind of engine he wants to build. Anyway, for Puppygames-style games I’d go with a GL 1.5 baseline and then build on that; decent VBO support is the minimum requirement. One fixed-function path and one (basic) shader-based one is not that hard to build and support.
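To be concrete about what that baseline buys you, here’s a throwaway sketch (LWJGL 2, placeholder triangle data of my own) of the fixed-function-plus-VBO path - no shaders needed:

[code]
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;

public class Gl15Baseline {
    // Uploads one placeholder triangle into a VBO and draws it with the
    // fixed-function pipeline - the whole "GL 1.5 baseline" in miniature.
    public static void drawTriangle() {
        FloatBuffer data = BufferUtils.createFloatBuffer(6);
        data.put(new float[] { 0f, 0f,   1f, 0f,   0f, 1f }).flip();

        int vbo = GL15.glGenBuffers();
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_STATIC_DRAW);

        GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        GL11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0L); // sources from the bound VBO
        GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);
        GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);

        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
        GL15.glDeleteBuffers(vbo);
    }
}
[/code]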

@theagentd: I’m wondering why you think GL 3.3 should be the baseline. Take one step back and you can run on the newer Macs (3.2). Take one more and you can run on Ivy Bridge (3.1), which IMHO is going to be really popular. Do you really require geometry shaders to build a modern engine?

Stumbled across this which has a nice list of what functionality made it into different versions: http://www.opengl.org/wiki/History_of_OpenGL

Of course it’s not entirely helpful because it doesn’t say when extensions were first introduced.

Yeah, that was a mistake I only just noticed. For the sake of argument, let’s say “modern 2D”, although I’m sure people will have different ideas as to what that means.

I’d completely forgotten about pre-GLSL shaders. Would you consider those obsolete now? I’d be inclined to ignore them just because of how much of a pain they are to work with.

I’d use non-GLSL shaders only as a workaround for GLSL bugs or to speed up start-up times. For “modern 2D” games you should never need to do that.
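For anyone weighing this up, a plain GLSL setup is only a handful of calls anyway. A minimal sketch (LWJGL 2; the helper class name and error handling are just my own example):

[code]
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;

public class ShaderUtil {
    // Compiles a vertex and fragment shader and links them into a program.
    public static int buildProgram(String vertSrc, String fragSrc) {
        int vs = compile(GL20.GL_VERTEX_SHADER, vertSrc);
        int fs = compile(GL20.GL_FRAGMENT_SHADER, fragSrc);
        int program = GL20.glCreateProgram();
        GL20.glAttachShader(program, vs);
        GL20.glAttachShader(program, fs);
        GL20.glLinkProgram(program);
        if (GL20.glGetProgrami(program, GL20.GL_LINK_STATUS) == GL11.GL_FALSE)
            throw new RuntimeException(GL20.glGetProgramInfoLog(program, 1024));
        return program;
    }

    private static int compile(int type, String src) {
        int shader = GL20.glCreateShader(type);
        GL20.glShaderSource(shader, src);
        GL20.glCompileShader(shader);
        if (GL20.glGetShaderi(shader, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE)
            throw new RuntimeException(GL20.glGetShaderInfoLog(shader, 1024));
        return shader;
    }
}
[/code]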

Like a hundred bucks - and why is it shit?
I would still buy a C64 =D

Not everyone is happy to ride the evil-capitalist-conspiracy planned-obsolescence train. Also, IBM ThinkPads are well regarded as pretty reliable machines.