HDR Fireworks + Particle Engine benchmark

I’d love to :slight_smile:
I’m running an i7 2600k overclocked to 4GHz. That’s 8 threads running at 4GHz…the power is too immense to grasp…

@Princec
Well, it’s on a computer, not a phone. The particles are also just colored points, so it’s lightning fast. The bloom filter is terribly slow even though it looks awesome. It takes over 10 ms of GPU time, but at least it’s done in parallel with the CPU calculations… It obviously wouldn’t work out in a real game though. You can’t spend 13 ms blurring the screen. xD

@EgonOlsen
I think I’ve actually seen this problem before! A loooong time ago, when I was doing 2D lighting for an RPG Maker-style game, I had an FBO which I drew the lighting and shadows to. On the really old Radeon card in my family’s computer, this turned out black and white. I never really looked into it (it was a really old computer), but it could very well have been the same problem.
I have no idea what’s causing it though. The only thing I do for mouse clicks is change a boolean which controls if the bloom is rendered and added to the screen after the particle rendering or not. I’m not even unloading and reloading anything. I’m sadly gonna have to blame it on the Radeon drivers… T___T Sorry… Thanks for the screenshot!

@Rebirth
Why do you have to fire off 1000 fireworks per update?! xD That’s 60 000 fireworks per second (at 60 FPS), each producing a trail of 4 particles per update (240 trail particles per second per firework). You’ll reach 10 000 000 particles from the trails alone long before the fireworks even explode. And it doesn’t “crash”; it shuts itself down to prevent using excessive amounts of RAM. The engine is “stable” - as in running, but very slow - up to about 16 000 000 particles. Sadly, I only have 4GB of RAM so that’s pretty much the limit for me.
The FPS can be seen in the console window if you start it with the bat-file. If you don’t have the server VM (which is faster) you can edit the bat-file with Notepad and remove the -server parameter. It also shows the number of particles.

@Cero
Thanks, good to know.

@Ra4king
Check your PMs. =)

Exception in thread "main" java.lang.IllegalStateException: Function is not supported
        at org.lwjgl.BufferChecks.checkFunctionAddress(BufferChecks.java:58)
        at org.lwjgl.opengl.GL33.glGenSamplers(GL33.java:141)
        at fireworks.gfx.obj.Sampler.<init>(Sampler.java:11)
        at fireworks.HDRCore.init(HDRCore.java:32)
        at fireworks.tests.FireworkTest.init(FireworkTest.java:57)
        at fireworks.Core.<init>(Core.java:28)
        at fireworks.HDRCore.<init>(HDRCore.java:20)
        at fireworks.tests.FireworkTest.<init>(FireworkTest.java:53)
        at fireworks.tests.FireworkTest.main(FireworkTest.java:160)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.lwjgl.util.mapped.MappedObjectClassLoader.fork(MappedObjectClassLoader.java:76)
        at fireworks.tests.FireworkTest.main(FireworkTest.java:155)

Didn’t run.

@Mads, you’re obviously running an Intel integrated gpu, amirite? :point:

I’m not. I’m running AMD Radeon HD 8)

Wow…crappy drivers then? Try to update your drivers since the drivers you currently have don’t even support OpenGL 3.3 >.>

But it’s using OpenGL 3.3 and its Sampler objects! Maybe an extension saved the day… If you can get some newer drivers, it would be nice to know if it fixed the problem.
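If I wanted the demo to handle it gracefully instead of dying, a quick capability check along these lines would do it (LWJGL 2; I’m assuming the usual ContextCapabilities field names for the 3.3 core flag and the ARB_sampler_objects extension):

[quote]import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.GLContext;

// Check (after the Display/context is created) whether sampler objects are
// available at all before ever calling glGenSamplers().
ContextCapabilities caps = GLContext.getCapabilities();
boolean samplersSupported = caps.OpenGL33 || caps.GL_ARB_sampler_objects;
if (!samplersSupported) {
    // fall back to plain glTexParameteri state per texture, or disable the effect
}
[/quote]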

Concerning the multithreaded particle test that Ra4king tested: it ran at almost 60 FPS with 3 million particles!!! Mind = blown. The next thing I’ll try is GPU particles, so I can avoid the single-threaded copying of the data to OpenGL. Eliminating my slow little CPU will be awesome!

Blurring the whole screen shouldn’t take 13ms! Render to a texture, then render that whole texture in one go to the screen with a bloom shader - should give similar results but be 10x faster. I think.

And the sprite engine I was referring to is the one I use on the desktop! (Well, it’s the same as the Android one)

Cas :slight_smile:

I am doing something like that, but I have a separable Gaussian blur, so it’s two passes to blur it. However, I want the bloom to be able to make the whole screen white from a single extremely bright pixel (like if you’re looking into the sun), so I need a blur kernel that is as big as the screen. To achieve this I basically have 8 levels of bloom, with the first level at screen resolution, the second level half as big, etc. The problem is that I’m simply doing too many fullscreen passes with lots of texture lookups. Obviously I can afford this high-quality bloom in this particle example, but it is possible to scale it back to under 3 ms by reducing the number of levels and skipping the first full-sized level. It looks a lot worse for particles, though.
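In rough pseudo-Java it looks something like the sketch below; Fbo, fullscreenPass() and the shader names are just stand-ins for my actual wrapper code, not real classes from the demo:

[quote]// Rough sketch of the multi-level bloom described above. Fbo, fullscreenPass()
// and the shader handles are hypothetical stand-ins for the real wrapper code.
Fbo[] bloomLevels = new Fbo[8];
Fbo[] tempLevels = new Fbo[8];

void initBloom(int screenWidth, int screenHeight) {
    for (int i = 0; i < 8; i++) {
        // level 0 = full resolution, each following level is half the size
        bloomLevels[i] = new Fbo(screenWidth >> i, screenHeight >> i);
        tempLevels[i] = new Fbo(screenWidth >> i, screenHeight >> i);
    }
}

void renderBloom(int sceneTexture) {
    // 1. Downsample the (thresholded) scene into each level.
    int source = sceneTexture;
    for (int i = 0; i < 8; i++) {
        fullscreenPass(downsampleShader, source, bloomLevels[i]);
        source = bloomLevels[i].getTexture();
    }
    // 2. Separable Gaussian blur: one horizontal and one vertical pass per level.
    for (int i = 0; i < 8; i++) {
        fullscreenPass(blurHorizontalShader, bloomLevels[i].getTexture(), tempLevels[i]);
        fullscreenPass(blurVerticalShader, tempLevels[i].getTexture(), bloomLevels[i]);
    }
    // 3. Add all levels on top of the scene with additive blending.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    for (int i = 0; i < 8; i++) {
        fullscreenPass(addShader, bloomLevels[i].getTexture(), null); // null = default framebuffer
    }
    glDisable(GL_BLEND);
}
[/quote]

Skipping level 0 cuts the passes that run at full screen resolution, which is where most of the saving down to ~3 ms comes from.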

I would never program something that requires OpenGL 3, unless it’s really just experimental, just saying - but we talked about this before.

If I wanted to use fancy new OpenGL, I would at least check what version is available and write code that works on machines all the way down to GL 1.1.

And I know what theagentd would say, but the bottom line is: if you don’t already have an audience / major publisher, then filtering your audience like this will not help.

Doesn’t work here on Linux. It seems like you’re setting the path to the natives folder inside your code using a backslash "\"; you should always try to use one of the following:

  • use a forward slash "/"
  • or System.getProperty("file.separator")
  • or File.separator

This will allow it to work on Windows and Linux (and Mac too) - see the sketch below.
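For example, something like this (the lib/natives folder name is just a guess at your layout; the org.lwjgl.librarypath property is how LWJGL locates its natives):

[quote]import java.io.File;

public class NativePathExample {
    public static void main(String[] args) {
        // "lib/natives" is a hypothetical folder name; a forward slash works on
        // Windows, Linux and Mac, unlike a hard-coded backslash.
        File natives = new File("lib/natives");
        System.setProperty("org.lwjgl.librarypath", natives.getAbsolutePath());

        // Equivalent, using the platform separator explicitly:
        String alt = "lib" + File.separator + "natives";
        System.out.println(alt);
    }
}
[/quote]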

Cool that you have a Windows batch file to launch the program. If you want, you can add a Linux/Mac shell script for the other two platforms; just put the following in a text file with the extension .sh:

[quote]#!/bin/sh
java -server -Xmx1200m -XX:+UseParNewGC -jar HDRFireworks.jar
[/quote]
Alternatively, I noticed that you created a fat jar (all your jars merged into one), so why not put the natives in the jar file too, so you have a single clickable/launchable jar? You could use a tool like JarSplice (which also allows you to put VM parameters into the jar).

I would never use OpenGL 1.1 and its built-in crap unless I had to. There are so many glEnable switches that you can forget to enable or disable. Just forgetting to enable GL_TEXTURE_2D has made me search for bugs for hours until I found the problem. Everything is so convoluted that it’s goddamn hard to know the state of the client at any given point in your code. That alone should be enough to convince you to only ever use OpenGL 3+, as it has a lot fewer things that can go wrong.
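The classic trap looks something like this (plain GL 1.1 immediate mode with LWJGL 2 static imports):

[quote]import static org.lwjgl.opengl.GL11.*;

// If glEnable(GL_TEXTURE_2D) is forgotten, the quad silently renders with the
// current color instead of the bound texture - no error, nothing to debug from.
void drawTexturedQuad(int textureId) {
    glEnable(GL_TEXTURE_2D);           // easy to forget...
    glBindTexture(GL_TEXTURE_2D, textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);          // ...and easy to leave enabled for the next draw call
}
[/quote]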
Why would I prioritize OpenGL 1.1? Do you even know how old it is? Do you even know how old OpenGL 3 cards are? We’re talking about the Nvidia 8000 series. Even the GeForce 7000 series had support for the functions I use. According to the Steam Hardware Survey, over 90% of the cards in use support DirectX 9, and over 60% support DirectX 10 / OpenGL 3. With the functions I use right now, I cover 90% of the computers used by people on Steam. I could easily limit myself to OpenGL 2 + extensions, or even OpenGL 3.0, and not lose too many potential players. Besides, I like graphics programming, so using newer features is interesting. It’s not like I released a full game; it’s a demo showing fireworks, for god’s sake. You can stick with your OpenGL 1.1 support if you want, but I will have a hard time not calling you grandpa. xD

[quote]Alternatively, I noticed that you created a fat jar (all your jars merged into one), so why not put the natives in the jar file too, so you have a single clickable/launchable jar? You could use a tool like JarSplice (which also allows you to put VM parameters into the jar).
[/quote]
Ah, you’re right. I’ve been using forward and backslashes interchangeably until now because I didn’t think it mattered. Forward slash it is from now on, then! And you’re right again: my releases are basically manual merges of all the jars. JarSplice looks like a very convenient tool; I will definitely have a look at it. From looking at the front page, it seems to support native files like you said, but how do I get it working with LWJGL? Does it automatically extract them somewhere? What do I have to do to get it working?

You might want to rethink your position though when I give you the following statistics.

Since January 1st 2011 I’ve had 27,396 users install my games. My users are a very good statistical spread of casual gamers and hardcore gamers from all over the web, though chiefly the US and UK, with France and Germany making up much of the remainder.

Of those, 17% had only OpenGL 1.5 or below, and 54% had OpenGL 2.1 or below. These numbers are so significant that you can’t really afford to ignore them if you want people to see your stuff.

By and large I stop supporting something when its penetration level is under 10%. So still quite a long time to go before I can even ditch OpenGL 1.4.

One last note about Steam: unless you’re actually making something specifically for Steam, I wouldn’t go thinking they are representative of your target audience. They are a pretty eclectic bunch of hardcore PC enthusiasts who spend way above average on their gaming systems and games in general. They are generally about 3-4 years ahead of the great unwashed which make up the rest of the internet. Not only that but Steam is unfortunately quite an exclusive club, so if you’re trying to make something that’s for Steam but fail to get in - as 99% of submissions do - you’ll be left targeting a bunch of people who probably can’t run your stuff. This is of course generic advice rather than specifically aimed at anyone here.

Hence: all my stuff is still OpenGL 1.3 compatible. Might even be 1.2. Actually it definitely is, 'coz I’ve got 1.1 logs in my database.

Cas :slight_smile:

Yup, basically it will extract your natives to a temp folder, then start your app; after your app closes it will clean up (delete) the temp folder and the extracted natives.

Pretty easy to use with LWJGL, just do the following:

  1. Add all your project jars to the jars tab - this includes a jar containing your project (including any resources like images, sounds, etc.), lwjgl.jar, asm.jar, lwjgl_util.jar and any other external jar you may be using.

  2. Add all the natives to the natives tab - that’s all the LWJGL *.dll (Windows), *.so (Linux) and *.dylib/*.jnilib (Mac) native files.

  3. On the main class tab, enter your main class: fireworks.tests.FireworkTest (you can also add any VM arguments on this tab).

That’s it - click Create Jar.

Oh, 17% still below OpenGL 2.0? I was hoping that OpenGL 2.0 would be a good target to aim for these days, since with 2.0 you can pretty much go full shaders and VBOs and avoid almost all of the functionality deprecated in OpenGL 3.2+. Also, OpenGL ES 2.0 is similar and pretty much the standard now on mobile devices, so you can write code that ports much more easily to ES.
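For reference, the GL 2.0 style I mean is roughly the sketch below (LWJGL 2; the shader program and its "position" attribute are assumed to come from your own loading code), and it maps almost one-to-one onto ES 2.0:

[quote]import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

// A VBO plus a generic vertex attribute: no immediate mode, no fixed-function
// matrix/texture state. "program" is assumed to be a compiled+linked shader.
int vbo = glGenBuffers();
FloatBuffer data = BufferUtils.createFloatBuffer(6);
data.put(new float[] { -1, -1,   1, -1,   0, 1 }).flip();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);

glUseProgram(program);
int positionAttrib = glGetAttribLocation(program, "position");
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, false, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
[/quote]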

Do you reckon that in about a year, computers with an OpenGL version lower than 2.0 will be below 10%?

Well, eventually. But it seems to be taking a long time.

Cas :slight_smile:

But my system is a current system with a modern OS, the graphics card isn’t that old either, and it has current drivers. It might be a driver problem, but I wouldn’t be so sure about it. I can try it on another Radeon HD in another machine and see if it works there… I’ll keep you posted.

Yeah, I said only if it’s really just experimental…

It’s just the mentality or ideology that “every game has to work on every machine”.
Since this is not console programming, “every” is a stretch, but consciously leaving machines out - even 71% of them, according to Cas here - is something that would be… unethical! =D

And again, the bottom line: you can write code that works on 1.1 and, when 2.0+ or even 3.0+ is available, utilizes those features.
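Something along these lines (LWJGL 2; the renderer classes are made-up names, just to show the idea):

[quote]import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.GLContext;

// Pick a render path at startup based on what the context reports.
// ModernRenderer / ShaderRenderer / FixedFunctionRenderer are illustrative names only.
ContextCapabilities caps = GLContext.getCapabilities();
Renderer renderer;
if (caps.OpenGL30) {
    renderer = new ModernRenderer();        // FBOs, GLSL 1.30+, instancing, ...
} else if (caps.OpenGL20) {
    renderer = new ShaderRenderer();        // GLSL shaders + VBOs
} else {
    renderer = new FixedFunctionRenderer(); // plain GL 1.1 immediate mode / display lists
}
[/quote]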

Actually, some developers think their programs work on OpenGL 1.1, whereas I succeeded in crashing some of them with my previous graphics card, which supported OpenGL 1.3; that was the case with several major 3D engines. There is sometimes a noticeable difference between words and facts.

Okay, okay, I get it. But I still think supporting OpenGL 1.1 is overkill. I’m planning on making an awesome 3D demo with the latest graphical effects as a test of my skills, so why would I support OpenGL 1.1? I wouldn’t be able to do any fun shader effects there, and in the end the demo would be so slow that it wouldn’t run very well anyway. I can understand why you would want to support OpenGL 2, but 1.1? You guys are forgetting that the performance of those cards doesn’t really make them viable for anything more demanding than casual 2D games (no insult to anyone’s games!!!). I’ve seen Intel cards in action with CS 1.6, and the game lagged hard with 2-3x screen overdraw, while my laptop can handle 10 times that amount. I mean, I wouldn’t be able to do anything at all as a graphics programmer for such a card.