I have honed a special skill I call “not giving a shit”. I leave it engaged nearly all the time. I walk around not giving a shit left and right. The important thing for every shit not given is to know why that is so. Seems pretty obvious in this situation.
So a dude just came up to you and started spouting ignorance about OpenGL, C, Java, Adobe Flash and whatever? Weird, most people I know have no idea any of them exist.
Counter with “How many particles can you animate at 60 FPS, then?”
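If you want numbers behind that comeback, here’s a minimal sketch of the measurement in plain Java (the class name, particle count and the 16.67 ms budget are all my own assumptions):

```java
// Hypothetical micro-benchmark: how many particles fit in a 60 FPS frame budget?
public class ParticleBudget {
    public static void main(String[] args) {
        int n = 1_000_000;
        float[] x = new float[n], y = new float[n];
        float[] vx = new float[n], vy = new float[n];
        java.util.Random r = new java.util.Random(42);
        for (int i = 0; i < n; i++) { vx[i] = r.nextFloat(); vy[i] = r.nextFloat(); }

        for (int w = 0; w < 100; w++) step(x, y, vx, vy, 1f / 60f); // JIT warm-up

        long t0 = System.nanoTime();
        step(x, y, vx, vy, 1f / 60f);
        double ms = (System.nanoTime() - t0) / 1e6;
        System.out.printf("%d particles updated in %.2f ms (60 FPS budget: 16.67 ms)%n", n, ms);
    }

    static void step(float[] x, float[] y, float[] vx, float[] vy, float dt) {
        for (int i = 0; i < x.length; i++) { x[i] += vx[i] * dt; y[i] += vy[i] * dt; }
    }
}
```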
Another good one is that Flash is GPU accelerated nowadays, most likely via OpenGL on Macs and Linux, assuming Flash’s GPU acceleration even works on those platforms.
OpenGL is more up to date with hardware at the moment thanks to extensions.
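In practice, checking for an extension at runtime is a one-liner; a sketch assuming LWJGL bindings and an already-created context (the extension name in the usage comment is just an example):

```java
import org.lwjgl.opengl.GL11;

// Sketch only: assumes a GL context already exists on this thread (LWJGL).
// Querying GL_EXTENSIONS as one big string is the old-school way, but it
// keeps the example short and works in compatibility contexts.
public final class GLExtensions {
    public static boolean has(String name) {
        String exts = GL11.glGetString(GL11.GL_EXTENSIONS);
        return exts != null && exts.contains(name);
    }
    // Usage: gate a fast path on what the driver actually exposes, e.g.
    //   if (GLExtensions.has("GL_ARB_framebuffer_object")) { ... }
}
```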
If you need the performance of C, you’re doing something wrong. I doubt most people have even touched 3D, so if their 2D game is too slow for Java it’s their own fault. I took a university-level programming course once. My teacher said that since C is so fast, I didn’t have to bother with fast algorithms. He was an idiot, but he accidentally taught me the opposite lesson: having less raw performance forces you to code better.
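To make the algorithm point concrete, here’s a toy Java comparison (all names and sizes mine): no language is fast enough to save the quadratic version once n grows.

```java
import java.util.HashSet;
import java.util.Set;

// Toy illustration that algorithm choice dwarfs language speed.
// Both methods answer: "does the array contain a duplicate?"
public class AlgoVsLanguage {
    static boolean hasDuplicateSlow(int[] a) {   // O(n^2): "C is fast, who cares"
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] == a[j]) return true;
        return false;
    }

    static boolean hasDuplicateFast(int[] a) {   // O(n): scales in any language
        Set<Integer> seen = new HashSet<>();
        for (int v : a)
            if (!seen.add(v)) return true;       // add() returns false on a repeat
        return false;
    }

    public static void main(String[] args) {
        int[] data = new int[50_000];
        for (int i = 0; i < data.length; i++) data[i] = i; // no duplicates: worst case

        long t = System.nanoTime();
        hasDuplicateSlow(data);
        System.out.printf("O(n^2): %d ms%n", (System.nanoTime() - t) / 1_000_000);

        t = System.nanoTime();
        hasDuplicateFast(data);
        System.out.printf("O(n):   %d ms%n", (System.nanoTime() - t) / 1_000_000);
    }
}
```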
Actually, here he happened to be correct, but only by accident as far as I can tell. Adobe Flash IS a program, one which produces Flash movies. It uses a scripting language to make things more dynamic; it shouldn’t be compared to any programming language.
It’s always nice to prove these people wrong by writing some perfectly valid C code (so nothing ugly) that performs like a mule. The argument can then only be “Yeah, you wrote the code to be slow”. And the only answer to that is: “EXACTLY!”
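The post says C, but the same stunt works in any language. A Java sketch of perfectly valid, nothing-ugly code that still crawls, just by walking memory in the wrong order (N and all names are mine):

```java
// Perfectly valid, nothing "ugly", and still slow as a mule: the exact same
// sum, walked through memory in cache-hostile order.
public class SlowOnPurpose {
    static final int N = 4096;

    static long sumRowMajor(int[][] m) {      // friendly: sequential memory access
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) s += m[i][j];
        return s;
    }

    static long sumColumnMajor(int[][] m) {   // equally valid, strides across rows
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++) s += m[i][j];
        return s;
    }

    public static void main(String[] args) {
        int[][] m = new int[N][N];
        long t = System.nanoTime();
        sumRowMajor(m);
        System.out.printf("row-major:    %d ms%n", (System.nanoTime() - t) / 1_000_000);
        t = System.nanoTime();
        sumColumnMajor(m);
        System.out.printf("column-major: %d ms%n", (System.nanoTime() - t) / 1_000_000);
    }
}
```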
OpenGL is a horribly outdated API (not to be confused with being outdated WRT access to hardware features). Flash is just a program with an embedded DSL. HTML5 is just a program with an embedded DSL. Java isn’t a choice made by pros. So what, to all of these.
Since I’m currently writing a Bachelor’s thesis about this topic (kinda), I can tell you that OpenGL isn’t outdated on the feature side. It’s more how the API works that many people don’t like. In the end you have a state machine, and a lot of people really don’t like that. I can’t understand this, since it’s what I love about OpenGL.
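For anyone who hasn’t touched it, the state machine style looks roughly like this (a sketch using legacy GL via LWJGL-style static imports; the texture id is assumed to come from glGenTextures() elsewhere):

```java
import static org.lwjgl.opengl.GL11.*;

// Sketch of OpenGL's state machine style (legacy immediate mode).
public class StateMachineDemo {
    static void drawTextured(int textureId) {
        glEnable(GL_TEXTURE_2D);                  // flip a global switch...
        glBindTexture(GL_TEXTURE_2D, textureId);  // ...and repoint the "current texture"
        glBegin(GL_QUADS);                        // nothing below names the texture;
        glTexCoord2f(0, 0); glVertex2f(-1, -1);   // it's implied by the state above
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        glDisable(GL_TEXTURE_2D);                 // forget this and later draws misbehave
    }
}
```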
But you also have to look at future technology, imho. I’m currently learning JavaScript (something I hated for sooooo long) to make games with HTML5, and hell… it’s just awesome to work with. I can see a future where this horribly performing Flash stuff gets replaced by JS + HTML5. Oh, and don’t forget WebGL and OpenGL ES.
As I mentioned in another thread: say I provided a big (but very useful) Java library which was composed only of static methods that operate logically on diverse sets of things; where the vast majority of API calls are out of date and should never be called by users targeting today’s feature set; and which requires a fair amount of state knowledge and manipulation whose only purpose is to work around the fact that all methods are static and that the diverse set of things are referred to by integer values. It also has a bigillion magic constants in a single namespace. Oh, and it has a wide array of nearly, but not quite, identical functionality which all has to be maintained ’cause users can mix and match features present since the dawn of history.
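Spelled out as code, the analogy looks something like this; every name below is made up, it’s just the shape of the thing:

```java
// A caricature of the GL-style API the analogy describes: everything static,
// everything an int, and hidden global state. All names here are invented.
public final class MegaLib {
    // a "bigillion" magic constants, all in one flat namespace
    public static final int THING_TEXTURE = 0x0DE1;
    public static final int THING_BUFFER  = 0x8892;
    public static final int MODE_LEGACY   = 0x1701; // deprecated forever, still shipped

    private static int currentThing;                // the global state you must juggle

    public static int  genThing()                    { return 42; /* opaque int handle */ }
    public static void bindThing(int type, int id)   { currentThing = id; }
    public static void thingData(byte[] data)        { /* silently targets currentThing */ }

    public static void beginLegacy(int mode)         { /* kept alive for old users */ }
}
```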
You’d tell me I’m insane and to clean it up. BTW: where did I say anything sucked?
(EDIT: And yes… I’m aware that deprecation started in 3.1, BTW, which was a long overdue start… but why drag out the pain for driver writers?)
OpenGL is so old that it carries all the design habits that used to be considered the way to go but are condemned nowadays.
The only way out is to take the plunge and do what OpenGL 3 should have been: a total redesign that breaks backwards compatibility on purpose. But yeah, that’s the equivalent of introducing a whole new API, which is not as easy as it sounds, since it also means hardware manufacturers will have to provide totally new drivers.
The future of OpenGL is probably OpenGL ES, which is relatively straightforward and has all the baggage chucked out.
As it is, though, the OpenGL API is exactly how I’d design an API that has to interface directly with a piece of hardware over a client/server architecture, such that it runs as fast as possible and can be made to work with any language under the sun.
OpenGL 3.2 without the compatibility profile is really lean, with everything that isn’t necessary removed and mostly replaced by shaders, so I wouldn’t say it has a bigillion magic constants anymore. OpenGL 3.2+ has a lot less state you can forget to reset, and everything is just a lot easier to get right once you know the shader basics. I DO agree that the amount of state is still too much, which is why a lot of stuff is moving over to bindless state, at least. As GPU flexibility increases with each generation, we’ll probably see OpenGL get simpler and simpler. There’s already bindless textures from NVidia, which removes texture units and texture binding altogether, allowing shaders to access as many textures as you can store in VRAM and improving CPU performance, since we don’t have to bind anything anymore.

OpenGL may have a hard time deprecating old content, and an even harder time getting driver developers to give up backwards compatibility when big customers don’t want to rewrite their software, but I see it as moving in the right direction, albeit slowly.
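For the curious, here’s bindless sketched with LWJGL-style bindings for NV_bindless_texture (the wrapper method and parameter names are my own; it assumes a live context, an existing texture object and a uniform location):

```java
import static org.lwjgl.opengl.NVBindlessTexture.*;

// Minimal sketch of NV_bindless_texture: no texture units involved at all.
public class BindlessDemo {
    static void useBindless(int textureId, int uniformLocation) {
        long handle = glGetTextureHandleNV(textureId);   // one immutable 64-bit handle
        glMakeTextureHandleResidentNV(handle);           // make it legal to sample from
        glUniformHandleui64NV(uniformLocation, handle);  // shader just sees 'uniform sampler2D'
    }
}
```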
OpenGL ES isn’t much different from OpenGL 3 without backwards compatibility. It’ll have to go through the same hardware evolution to enable the next generation of GPU flexibility (even faster, considering the pace of the mobile market), so while it might be perfect now, it’s probably going to feel just as outdated pretty soon as OpenGL 3.2+ feels to us now.
I’d like to see support for multiple threads “rendering” commands to display lists which can then be executed with little overhead on the main thread, similar to what DX11 promised but failed to deliver (performance-wise). Having as little persistent state as possible clearly helps here. Games need to start using more CPU cores, or we’re going to end up in deep shit once computers get 8+ cores. I mean, there’s a reason why computers can have 6 hyper-threaded cores overclocked at 4GHz+ but not a single core at 24GHz…
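The pattern I mean, sketched in plain Java (everything here is hypothetical; the point is that workers only record, and the context-owning thread is the only one that would ever touch GL):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical sketch of "record on workers, submit on the context thread".
// A GLCommand would wrap real glBind*/glDraw* calls; here it just prints.
public class DeferredRendering {
    interface GLCommand { void run(); }

    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Each worker records its pass into a private list: no shared GL
        // state while recording, so no locking either.
        List<Callable<List<GLCommand>>> tasks = new ArrayList<>();
        for (String pass : new String[] {"terrain", "entities", "particles", "ui"})
            tasks.add(() -> record(pass));
        List<Future<List<GLCommand>>> recorded = workers.invokeAll(tasks);

        // Only the main (context-owning) thread replays the commands.
        for (Future<List<GLCommand>> f : recorded)
            for (GLCommand c : f.get()) c.run();

        workers.shutdown();
    }

    static List<GLCommand> record(String pass) {
        List<GLCommand> list = new ArrayList<>();
        list.add(() -> System.out.println("draw " + pass)); // stand-in for real draws
        return list;
    }
}
```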
The bottleneck is the bus and memory bandwidth, and current consumer multiprocessor architectures are probably never going to be optimal for this. Multithreaded rendering always sounds like a cool idea until you remember that there’s only one bus to the GPU.
@theagentd: It’s still a flat C API with no exposed notion of ownership. Did supporting multiple GPUs (for instance, which should be trivial) suddenly get simpler when I wasn’t looking?