Jolly good. All useless if I can’t ship an ICD with the game. I’m not writing two separate rendering paths. Just not.
Cas
Basically a useless driver update though. The whole OpenGL model is fucked. Until I can distribute the latest drivers with my game, I'm stuck using the lowest common denominator I'm willing to support.
Cas
Just wondering… what is the lowest common denominator for OpenGL right now?
Whatever you want. For me, I'm targeting 3.2 for Battledroid. Our older games work on 2.0, I think. Or is it 2.1? I forget.
Cas
Yeah, as Cas said, there is no such thing. I usually make my games using OpenGL 3.3 functionality, because I've found that to be the best trade-off between functionality and availability.
Usually Intel is the bottleneck, as they only support OpenGL 3.1 with their HD 3000 and HD 2000 cards, and OpenGL 2.1 with their low-end integrated GPUs. You can find a table of that here.
Nvidia is generally considered the best when it comes to OpenGL (their driver is the most stable of all the vendors, and they have pretty good Linux drivers). They support up to OpenGL 4.5 on the GeForce 400 series and up, and OpenGL 3.x on GeForce 8 series cards and up. You can find more details about that here.
AMD is also pretty good, they support up to OpenGL 4.4 on their HD 5400 series cards and up. More info here (scroll down).
When it comes to Mac, you’re safe (now) to use OpenGL 3.3 (and even OpenGL 4.1 on the 2010 and newer machines) although for a very long time they only had OpenGL 3.2 support. Tables can be found here.
As I said, I usually go with OpenGL 3.3, but if I were making a game that got greenlit on Steam, I would definitely consider adding an OpenGL 2.1 fallback path to my engine.
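Picking between those two paths only needs to happen once at startup. A minimal sketch of what that check could look like, assuming LWJGL 2-style bindings (GLContext.getCapabilities() and its per-version flags); the enum and method names are just placeholders:

```java
import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.GLContext;

public class RenderPathChooser {

    enum RenderPath { GL33, GL21_FALLBACK }

    // Call once after the GL context exists (e.g. right after Display.create()).
    static RenderPath pickRenderPath() {
        ContextCapabilities caps = GLContext.getCapabilities();
        if (caps.OpenGL33) {
            return RenderPath.GL33;           // modern path: VAOs, GLSL 330
        }
        if (caps.OpenGL21) {
            return RenderPath.GL21_FALLBACK;  // old path: GLSL 120, no VAOs
        }
        throw new IllegalStateException("Card/driver doesn't even do OpenGL 2.1");
    }
}
```

Everything downstream (shader sources, buffer setup) then branches on that one enum instead of sprinkling version checks through the renderer.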
I think > 98% of Steam users have cards that at least have available drivers giving 3.2+
Cas
I have a 2009 computer with an integrated Intel card that may or may not even support fragment shaders. (It’s whatever version supports it using an extension. 1.3? 1.4?) It’s depressing, but still makes me counterproductively determined to make the game run on it anyway.
One of the things I look forward to the most is explicit multi-GPU programming. Being able to control each GPU in any way you want has some extremely nice use cases, instead of relying on drivers and vendor-specific interfaces to perhaps kind of maybe possibly get it to work with only some flickering artifacts.
That’s a basically useless feature for almost the entire world. It’s like looking forward to the sprinkles more than the cake! Give us client-installable drivers!
Cas
Multiple GPUs are probably pretty common among people who play games, and will likely become more common.
I dunno about that… maybe I'm nuts, but it seems like things have been shifting away from custom rigs. Or maybe that's always been a niche thing, and the niche is swelling along with the overall number of computer users? But if people start moving away from desktops and laptops… hm.
The trend among game players has been to spend less on everything else and more on GPUs. Desktops, of course. More bang for your buck.
The trend in software development though is that catering for lowest common denominators is where all the money is. There is no point in taking advantage of multiple GPUs unless the software cost of it is free, that is, it’s all done magically under the covers and no-one needs to work too hard for it. Consider multithreaded code… very rarely used in C++ land as it’s basically just too bloody difficult to do pervasively; give us the appropriate tools and suddenly everything can be multithreaded with hardly any effort (eg Java 8).
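To put a concrete face on "hardly any effort": in Java 8 the difference between a serial pipeline and a multithreaded one can be a single .parallel() call. A rough sketch with made-up data, not anything from a real game:

```java
import java.util.Arrays;

public class ParallelSketch {
    public static void main(String[] args) {
        double[] samples = new double[4_000_000];
        Arrays.setAll(samples, i -> Math.sin(i * 0.001)); // dummy data

        // Single-threaded pipeline.
        double serial = Arrays.stream(samples).map(x -> x * x).sum();

        // Same pipeline, spread across all cores by the common fork/join pool.
        double parallel = Arrays.stream(samples).parallel().map(x -> x * x).sum();

        System.out.println(serial + " vs " + parallel);
    }
}
```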
Cas
On multithreading, C++ and Java are pretty much the same. Either you do a simple design and it's easier than single-threaded, OR you don't and your life sucks. On multi-GPUs… that's sort of like the Apple dev choice of Metal or OpenGL.
Whaa!? No, Java < 8 really does make it an order of magnitude easier, and Java 8 improves on it further with that lambda stuff. It’s no panacea but it’s far, far easier than attempting to do it in a language with no built-in support for it at all. It could of course get even better (some of the directions Rust is going in are very interesting) but until we start seeing 8- and 16+ core desktops as the “standard” no-one’s going to really bother.
Cas
To be fair, C++11 and C11 have added standard support for threads and atomics – they even managed to do it reasonably well. Not perfect, but certainly not as painful as it was.
Prior to the 2011 standards, you needed to use a platform-specific threading library, which in the real world meant pthreads and/or the Windows thread APIs. If you don't need to support both POSIX and Windows, I personally don't find it that painful. Cross-platform, though, is a PITA. Simpler stuff works fine with one of the Windows ports of pthreads, but anything that gets clever means writing an abstraction layer.
You could use one of the cross-platform threading libraries, of course (e.g., SDL), but all that I’ve used have been less than ideal.
All that said, Java’s concurrency support in 1.7 is ahead of even the current C++11 stuff, and, with Java 8, the gap has grown.
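For instance, 1.7 shipped the fork/join framework. A rough sketch of the shape of it (the array-summing task is purely hypothetical, just to show the API):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Splits a sum over an array into halves until chunks are small enough to do directly.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {        // small enough: just loop
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                          // run the left half asynchronously
        long right = new SumTask(data, mid, to).compute();
        return right + left.join();           // wait for the left half and combine
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(new ForkJoinPool().invoke(new SumTask(data, 0, data.length)));
    }
}
```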
Y'all are missing my point. All that fancy stuff means you're doing it the hard way.
You have to admit that, beyond moving to an entirely different paradigm like functional programming, it's a step in the right direction.
Cas