What I did today

I just made a small ‘library’ (~3 hours of work) that helps with laying out objects (rectangles) on screen in a hierarchy of objects.
Have a screenshot:

I made a small typo there with “written Java”: It should be “written IN Java”.

Here is how to create a bunch of boxes using the library:


		void makeBoxes(ContainerNode root)
		{
			ContainerNode cnode = new ContainerNode(root);
			cnode.setPreferredBounds(new RectangleF(64, 96));
			cnode.addAttribute(LayoutAttribute.compassLayout());
			cnode.addAttribute(CompassAligmentAttribute.south("vertical-align"));
			cnode.addAttribute(CompassAligmentAttribute.west("horizontal-align"));
			
			{
				RectangleNode node = new RectangleNode(cnode);
				node.setPreferredBounds(new RectangleF(16, 64));
				node.addAttribute(CompassAligmentAttribute.north("vertical-align"));
				node.addAttribute(CompassAligmentAttribute.west("horizontal-align"));
			}
			
			{
				RectangleNode node = new RectangleNode(cnode);
				node.setPreferredBounds(new RectangleF(24, 24));
				node.addAttribute(CompassAligmentAttribute.center("vertical-align"));
				node.addAttribute(CompassAligmentAttribute.east("horizontal-align"));
				node.addAttribute(SizeAttribute.createPercentage("height", 50));
			}
		}

The entire thing doesn’t care about pixels/inches/centimeters or anything. It just says ‘units’, and that’s it.
It’s up to you how to use the values/rectangles the layout library generates.
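
If it helps, here is a tiny example of what I mean (a hypothetical helper, not part of the library): you decide what a unit is worth and scale the generated rectangles yourself.

		// Hypothetical, not part of the library: the layout only knows abstract units,
		// so turning them into pixels (or anything else) is entirely up to the caller.
		static int unitsToPixels(float units, float pixelsPerUnit)
		{
			return Math.round(units * pixelsPerUnit);
		}
		
		// e.g. at 8 pixels per unit, the 64x96-unit container above becomes 512x768 pixels.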

Have a nice day!

  • Longor1996

Wow, that’s really cool, I may just use this in the future! Could you post a download?

Well, as for what I did today: I’m starting to learn more advanced concepts in C#. Since it’s so similar to Java, it’s nice and easy to learn. It also supports a bunch of awesome things that Java doesn’t (optional parameters, extension methods, and operator overloading are my favorites). I’m using it to write something with the Selenium library to test the website of the company I work for. Loving it!!
The only downside is the OS limitation.

I decided to have fun with shadows today using shadow mapping.

The code: https://www.github.com/ra4king/ShadowTest

In addition to that, I have finally checked all 65+ pages of unread JGO threads I had:

Such a relief!

I’m thinking of learning PHP and jQuery UI. Just installed XAMPP on my Windows 7 machine.

Thanks! Finally a usable example :slight_smile:

If I were you I would forget about XAMPP. Instead I would recommend installing an Ubuntu Server in VirtualBox, with Apache (or nginx) + MySQL and Samba file sharing or an FTP server. You would learn more about the server itself and get a taste of commercial environments. I know it might sound cumbersome at first, but it will be worth the time investment imo, and it’s a pretty straightforward process. :slight_smile:

Finished implementing the first iteration of shadow maps. New simple dialog UI skin.

I finally read this: http://casual-effects.blogspot.fr/2014/03/weighted-blended-order-independent.html and the real paper.

(One of the vids directly from blog post…just to catch your eye)

https://www.youtube.com/watch?v=41dD2OsUagI

Yes, I can provide a download for it.
Or even better: How about a github repository?

The code is still a bit messy and not completely commented, since I wrote the entire thing in about 5 hours, but it works and is usable. I included a basic usage example in the README, even though I tried to make the usage as easy as possible. You can also just go and look at the test code.

Here is the link:
DatPaperLayout @ Longor1996 @ Github

And yes, the library is called ‘DatPaperLayout’, because I can’t think of any better name.

This is also the first time I am using unit tests instead of ‘main(String[] args)’ to test the library functions. Mind that I don’t really understand how they work (yet), so you may have to rip your hair out over the way I use them. (On that note: can anybody recommend a tutorial for JUnit 4?)
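
For reference, the basic shape I am going by looks roughly like this (class and method names are just placeholders, nothing from the actual test code):

		import static org.junit.Assert.assertEquals;
		
		import org.junit.Test;
		
		public class ExampleTest
		{
			// Any public method annotated with @Test is picked up by the JUnit-4 runner.
			// A test passes unless an assertion fails or an unexpected exception is thrown.
			@Test
			public void additionWorks()
			{
				assertEquals(4, 2 + 2);
			}
			
			// 'expected' makes the test pass only if that exception is actually thrown.
			@Test(expected = IllegalArgumentException.class)
			public void rejectsBadInput()
			{
				throw new IllegalArgumentException("example");
			}
		}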

Anyway, have a nice day!

  • Longor1996

I was going to use that for rendering transparent effects in my game. The technique described there works well theoretically, but in practice it’s horrible. For example, if you have a barely visible red fog particle in the foreground and a huge gray cloud far behind it, the red fog will (due to its high weight) make the particles behind it look red as well, effectively bleeding its color onto objects behind it. The weighting is also very prone to aliasing and banding due to the limited precision of a 16-bit float render target. It also does not support particles with 0 alpha (= additive blending), as alpha is used as a weight, effectively causing a divide by 0 during the resolve. This can be solved by using a value slightly above 0, but if it’s too small then we run into float precision issues again. Some of these problems can be alleviated or completely fixed by modifying the depth weight function, but this requires a tailor-made function for each scene and does not work with arbitrary geometry.
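
To make that divide-by-zero concrete, the resolve from the paper boils down to roughly this (plain Java pseudo-code for one pixel, not my actual shader):

		// accumRgb  = sum over fragments of (w_i * a_i * c_i)
		// accumA    = sum over fragments of (w_i * a_i)
		// revealage = product over fragments of (1 - a_i)
		static float[] resolve(float[] accumRgb, float accumA, float revealage, float[] background)
		{
			float[] out = new float[3];
			for (int c = 0; c < 3; c++)
			{
				// If every fragment had a_i == 0 (pure additive), accumA is 0 and this
				// blows up; clamping it to a tiny epsilon trades that for precision issues.
				float averaged = accumRgb[c] / accumA;
				out[c] = averaged * (1.0f - revealage) + background[c] * revealage;
			}
			return out;
		}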

I ended up using this technique in combination with my own extension: layers. I split up the view volume into 10 layers. When rendering my transparent geometry I use a geometry shader to output each triangle/particle to the two layers that are closest to the depth of the geometry being rendered, with weights depending on how close it is to the depth of each layer. That way I can use a simple exponential depth weight function, since the depth range of each layer is so limited. The main problem of this approach is obviously memory usage, as 10 layers of textures using 10 bytes per pixel (8 + 2) use almost 200 MB of memory for a 1920x1080 window. In addition, each particle needs to be rasterized twice, to two layers. To alleviate both these problems, I render all transparent geometry at half resolution, so both fill rate and memory usage are cut to 1/4th. I then use bilateral upsampling to eliminate aliasing when scaling back up to full resolution after the resolve.
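
The layer split itself is just this kind of weighting (a sketch in plain Java, not the actual geometry shader):

		static final int NUM_LAYERS = 10;
		
		// A fragment at normalized depth d (0 = near end of the transparent range,
		// 1 = far end) goes to the two nearest layers, weighted by proximity.
		static void splitIntoLayers(float d, int[] layersOut, float[] weightsOut)
		{
			float position = d * (NUM_LAYERS - 1);
			int lower = (int) Math.floor(position);
			int upper = Math.min(lower + 1, NUM_LAYERS - 1);
			float upperWeight = position - lower;
			
			layersOut[0] = lower;
			weightsOut[0] = 1.0f - upperWeight;
			layersOut[1] = upper;
			weightsOut[1] = upperWeight;
		}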

Awesome, thanks so much. Funny enough, I’m actually just starting to learn JUnit as well, for my aforementioned project. Fun stuff.

Yeah, but all the simple solutions are horrible in different ways. Let’s note that this is intended to be cheap enough to work on the lower end of current hardware, including some mobile. It’s good to hear about problems, but I wouldn’t blow it off.

So I have to read the Intel “Multi-Layer” paper?

Some related stuff: http://on-demand.gputechconf.com/gtc/2014/presentations/S4379-opengl-44-scene-rendering-techniques.pdf

Hmm I wonder if it could be optimised to handle 2D sprites. In an ideal world I’d only sort by GL rendering state and get nice phat VBOs to render, but because of the pesky painter’s algorithm I have to sort everything by Y. Fine you say, use the depth buffer, problem solved… except of course that breaks with 2D sprites because of the transparent parts, and alpha testing creates horrible artifacts at the edges.

Cas :slight_smile:

The biggest cause of glDraw-call-count in your sprite engine seemed to be changing textures, due to the inability to create 1 large sprite-atlas on Intel hardware, leading to visually ‘interleaved’ sprites potentially ending up in different atlas textures. IIRC: we had 300K sprites at 60fps with sprites in 1 atlas, and 3fps with said sprites in 2 atlases. (Ignore the rest of the post if this dated observation is… out-dated)

Still, all (smallish) sprite atlas textures are resident, so you can bind them all to different texture units - AFAIK OpenGL 3 spec states at least 16 texture units will be available. Now add a new vertex attribute (an unsigned byte), which tells the shader which texture to sample from. This has quite some overhead (dependent texture fetches), but it’s a superb trade-off, due to the larger batch-size you get in return, as you were clearly not fillrate limited.
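
In LWJGL terms the setup is roughly this (names are mine, nothing from your actual engine; the vertex shader reads the attribute as an int and passes it on, flat, to the fragment shader to pick a sampler):

		import static org.lwjgl.opengl.GL11.*;
		import static org.lwjgl.opengl.GL13.*;
		import static org.lwjgl.opengl.GL20.*;
		import static org.lwjgl.opengl.GL30.*;
		
		class AtlasBatching
		{
			// Bind each (smallish) atlas to its own texture unit; GL3 guarantees at least 16.
			static void bindAtlases(int[] atlasTextureIds)
			{
				for (int unit = 0; unit < atlasTextureIds.length; unit++)
				{
					glActiveTexture(GL_TEXTURE0 + unit);
					glBindTexture(GL_TEXTURE_2D, atlasTextureIds[unit]);
				}
			}
			
			// One extra unsigned byte per vertex; with glVertexAttribIPointer the shader
			// sees it as an integer ("in int texIndex;") and uses it to choose which of
			// the bound samplers to fetch from (a small if/switch chain in the shader).
			static void setupTexIndexAttribute(int attribLocation, int strideBytes, long offsetBytes)
			{
				glEnableVertexAttribArray(attribLocation);
				glVertexAttribIPointer(attribLocation, 1, GL_UNSIGNED_BYTE, strideBytes, offsetBytes);
			}
		}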

Of course, you should switch to GL_TEXTURE_2D_ARRAY and/or sparse texture arrays combined with bindless textures, but (old?) Intel drivers are probably struggling with that too, if there is support at all.

As it happens, I’ve switched to GL3+ now and am using GL_TEXTURE_2D_ARRAY and (currently) about 8 texture units. I’ve managed to get most of the entire scene drawn in a single draw call using a “mode switch” in a shader, but I still have to sort all the pesky dynamic sprites by Y every frame so I know which order to draw them in. Intel chipsets aren’t actually fast enough to render everything at 60FPS anyway - and what’s more, with the latest drivers Battledroid crashes on startup every other launch, so I’m officially giving them a big two-fingered salute. The end result is tossing away about 15% of potential revenue for a considerably easier life developing and supporting the game.

Cas :slight_smile:

Presumably you reverted from the multi-threaded grid-based sprite-sorter, potentially due to not being usable in specific (new) use cases. As it was capable of sorting 300K sprites in about 2-3ms I was under the impression that that was a solved problem in the sprite engine. What is the current approach?

I backed out of the grid-based sprite sorter in the end, partly to keep my brain from imploding trying to understand it and partly to avoid maintaining two separate sprite engines. Instead I created a single multi-threaded sprite engine with freezable sprites and freezable layers.

The sprites are ticked and updated and transformed using all cores, before being gathered at the end, and then again using all cores to write the sprite data out to VBOs. Frozen sprites don’t need transforming - I cache their VBO data and simply copy it verbatim. Frozen layers imply all sprites are frozen and that enables me to also avoid having to sort those layers unless sprites are added or removed. So a bunch of little optimisations on that front to save some CPU time, and a whole load of multithreading, but it all works in almost the same way as the original sprite engine did.

The sorting is still pretty fast - I had a multithreaded mergesort at one point, but the overhead of the merging meant it was rarely useful. Gnomesort tends to crap out a bit too often as well - at least in the battle scenes I was rendering - I think the speed at which sprites were whizzing around was causing too many swaps all the time. In the end the JDK sort function seems to be doing absolutely fine.
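
(The boring version that won, more or less - assuming a Sprite with a public y field, which is just a placeholder name:)

		import java.util.Arrays;
		import java.util.Comparator;
		
		// Sort the dynamic sprites by Y with the plain JDK sort before writing out VBO data.
		static void sortByY(Sprite[] sprites)
		{
			Arrays.sort(sprites, new Comparator<Sprite>()
			{
				@Override
				public int compare(Sprite a, Sprite b)
				{
					return Float.compare(a.y, b.y);
				}
			});
		}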

Right now it’s rendering ermm… can’t remember - maybe about 100k sprites per frame, but of course 3/4 of those are actually frozen and very cheap to process. I ended up with bottlenecks in the particle system in the end (which itself is now multithreaded too). The end result is I can watch battles with about 6,000 robots, zoomed right out to see approx 100x100 of the map area, at 60fps on the 5-year-old i7/GTX 280 rig, which is good enough for me.

Cas :slight_smile:

I had a neat trick with the particle system btw… I’ve rendered some emitters out into sprites - that is, complete particle effects - and created single sprite animations for them. Emitters now have a Z-range at which they operate; so when zoomed out, it switches to using these “canned” emitters for explosions, smoke and ricochets, and when zoomed in, it uses normal emitters. That way I can render what looks like about 200,000 particles when zoomed out to look at the whole battle (you can’t really tell they’re “canned” because I’ve got about 3-4 sequences of each one rendered and it chooses them randomly).

Cas :slight_smile: