What I did today

yesterday i had a fight with the overpowered teenage skinhead mf
i got more physical damage (around 8 bruises)
he got more lyrical damage
he retreated

This pixelart ;D lol
Tons of work D:

Congratulations!

Added AABB calculation for tiles.

Before:

After:

Performance dropped due to the overhead of the per-tile computations, which I still need to optimize. For simplicity I currently recalculate all the per-tile values for every pixel in the tile, so the constant overhead per tile is 256x as high as it should be (a 16x16 tile has 256 pixels).
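A sketch of what the fix might look like, in Java-style pseudocode (the names here are hypothetical): hoist the per-tile work out of the per-pixel loop so it runs once per tile instead of once per pixel.

for(Tile tile : tiles){
    TileData tileData = computePerTileData(tile); // AABB, frustum etc., computed once per tile
    for(Pixel pixel : tile.getPixels()){          // 16x16 = 256 pixels per tile
        shade(pixel, tileData);                   // reuses the per-tile result
    }
}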

Thanks! I'm so excited I'm losing sleep over it.

@theagentd That looks really good, what sort of game is it?

Can you explain what you're doing here? I'm trying to get more into advanced OpenGL, and that looks flippen' amazing.

Noticed that you can approximate high-frequency AO by using the normalized normal map's z length. Basically this is free. Combined with baked vertex AO and temporally smoothed SSAO, I have three different frequencies of AO all playing nicely together.

This is just a test program for my engine, but it's being used by WeShallWake.

My engine is using deferred shading. That means that lighting is split into two passes. The first pass renders all the models and writes the diffuse color, normal, specular intensity, roughness and glow color into a massive G-buffer (4 render targets + depth buffer). In the second pass, I try to figure out which pixels are affected by which lights, read the data stored in the first pass for those pixels, and accumulate lighting for each pixel.

Traditionally the second pass has been done by rendering light geometry and using the depth bounds test to cull pixels that are too far away from the light to be affected by it. For a point light, I render a sphere. This is what the light geometry looks like. You can clearly see where the light sphere is intersecting the world.

This has two main problems. The first has to do with the depth test. Consider the worst-case scenario where you're looking straight up into the sky while standing inside the light volume. The volume covers the whole screen, but not a single pixel will pass the depth test. That's 2 million wasted depth tests for a 1920x1080 screen. Basically, it doesn't matter whether any geometry actually intersects the light volume; you still have to run the depth test to figure that out, which, although fast, isn't free. Note that the screenshot above does not show the pixels that failed the depth test, only the pixels that actually ran light computations. Around 2/3rds of the pixels were wastefully filled and failed the depth test. This can lead to extremely bad performance, especially when you're standing inside multiple lights, regardless of how small they are.

The second problem has to do with overdraw. The lighting data needs to be read from the G-buffer and unpacked every time a light covers a pixel and passes the depth bounds test. I also need to reconstruct the eye-space position for each pixel. If 10 lights affect the same pixel, this data is read and unpacked 10 times, again leading to potentially bad performance. It'd be better if we only had to unpack this data once for all lights. What we're essentially doing is this:


for(PointLight light : lights){
    for(Pixel pixel : getPixelsThatLightOverlaps(light)){
        if(checkDepthBounds(pixel, light)){
            LightData lightData = pixel.getAndUnpackLightData();
            pixel.color += computeLighting(lightData, light);
        }
    }
}

The solution to these two problems is tile-based deferred shading. Instead of rendering light volumes, we split the screen into 16x16 tiles. We upload a list of lights to the GPU and have the GPU compute a frustum for each tile based on the tile's minimum and maximum depth. We can then check which lights affect a given tile and only compute lighting for those lights. The resulting light overdraw looks like this:

You can see that it detects the same overall light shape as the light volume technique, so why is this more efficient? First of all, we test visibility per tile, not per pixel. Before, we ran a depth test per covered pixel, which could be anywhere from 0 to 2 000 000 pixels, with the worst case being when you stand inside the light volume (a very common case). With tiles, we do a constant number of tests, 1920x1080/(16x16) = 8100 tests per light, regardless of how much of the screen the light covers. It's clear that the worst case is much better here. Secondly, we only need to unpack the pixel data once when processing the tile. The pseudocode now looks like this:


for(Tile tile : tiles){
    for(PointLight light : lights){
        if(light.intersects(tile.frustum)){
            tile.addToLightList(light);
        }
    }
}
for(Pixel pixel : getAllPixelsOnScreen()){
    LightData lightData = pixel.getAndUnpackLightData();
    List<PointLight> lightList = getTileForPixel(pixel).getLightList();
    for(PointLight light : lightList){
        pixel.color += computeLighting(lightData, light);
    }
}

It's clear that this is much more efficient. Fine-grained per-pixel culling of lights isn't necessary, so culling them per tile gives much better performance. For the actual lighting, the inner loop is now over lights, allowing us to read and unpack each pixel's G-buffer data once and reuse it for all lights. The 16x16 tiles are also a very good fit for GPU hardware, as GPUs process pixels in groups. This technique was first used for Battlefield 3, and requires OGL4 compute shaders for an efficient GPU implementation.
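As a rough idea of how this maps onto a compute shader (a generic sketch of the usual structure, not necessarily exactly how my implementation looks): one 16x16 thread group is launched per tile, the threads cooperatively cull the lights into a shared list, and then each thread shades its own pixel.

// Java-style pseudocode for one 16x16 thread group (one tile). "shared" values live in
// the group's shared memory; barrier() waits for all 256 threads in the group.
sharedDepthRange = reduceMinMaxDepthOverTile();            // all 256 threads participate
sharedFrustum    = buildTileFrustum(tileCoord, sharedDepthRange);
for(int i = threadIndex; i < numLights; i += 256){         // lights are split across the threads
    if(lights[i].intersects(sharedFrustum)){
        sharedLightList.add(lights[i]);                     // appended via an atomic counter
    }
}
barrier();                                                  // the tile's light list is now complete
LightData lightData = myPixel.getAndUnpackLightData();      // each thread unpacks its own pixel once
for(PointLight light : sharedLightList){
    myPixel.color += computeLighting(lightData, light);
}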

EDIT: The overhead of computing the tile frustums is currently around 3 ms for a 1920x1080 screen, but it is currently doing 256x as much work as it has to. Hopefully I can get it down to around 0.5 ms. Despite the 3 ms overhead, I can create scenes with lots of lights where tiled deferred shading is much faster than light volumes, 12.5 ms vs. 17.0 ms.

Alright, I'm starting to understand… But what I don't really get is this saving data onto the graphics card stuff. I know you save the depth buffer, normals, specular intensity, and diffuse color into a G-buffer. I just don't understand things like what data is actually going into that buffer. (Like images? Is it pixel data? Just bits?)

Also, when the GPU tests whether the light's pixels are in the tile's frustum, is that done through a custom shader? How do you tell a GPU to compute that? If there's a tutorial (preferably a book or lecture), please let me know.

The data is stored in textures. I simply set up MRT rendering (multiple render targets) using a framebuffer object to render to 4 color textures plus a depth texture as depth buffer. The texture layout looks like this:


Texture 0 (GL_RGBA16F):   DiffuseR, DiffuseG, DiffuseB, <unused>
Texture 1 (GL_RGBA16F):   PackedNormal0, PackedNormal1, SpecIntensity, Roughness
Texture 2 (GL_RG16F):     MotionVectorX, MotionVectorY
Texture 3 (GL_RGBA16F):   GlowX, GlowY, GlowZ, <unused, used during postprocessing>
Depth buffer (GL_DEPTH_COMPONENT24): Depth
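Setting that up is just standard FBO + MRT code; roughly something like this with LWJGL (a sketch only, assuming the color and depth textures have already been created with the formats above):

// Uses org.lwjgl.opengl.GL11/GL20/GL30, org.lwjgl.BufferUtils, java.nio.IntBuffer.
// Attach 4 color textures + a depth texture to one FBO so a single geometry pass fills the G-buffer.
int fbo = GL30.glGenFramebuffers();
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
for(int i = 0; i < colorTextures.length; i++){
    GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0 + i,
            GL11.GL_TEXTURE_2D, colorTextures[i], 0);
}
GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT,
        GL11.GL_TEXTURE_2D, depthTexture, 0);

// Tell GL that the fragment shader writes to all four color attachments.
IntBuffer drawBuffers = BufferUtils.createIntBuffer(colorTextures.length);
for(int i = 0; i < colorTextures.length; i++){
    drawBuffers.put(GL30.GL_COLOR_ATTACHMENT0 + i);
}
drawBuffers.flip();
GL20.glDrawBuffers(drawBuffers);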

I then just read this data using texture fetches in the lighting shader. Note that the normal is packed using sphere mapping so that it only takes two values instead of three.
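One common two-component encoding of this kind is the spheremap (Lambert azimuthal) transform; a rough Java sketch of the math (not necessarily the exact variant used here):

static float[] encodeNormal(float nx, float ny, float nz){
    // assumes a unit-length normal with nz > -1 (fine for view-space normals)
    float p = (float) Math.sqrt(nz * 8.0f + 8.0f);
    return new float[]{ nx / p + 0.5f, ny / p + 0.5f };
}

static float[] decodeNormal(float ex, float ey){
    float fx = ex * 4.0f - 2.0f;
    float fy = ey * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = (float) Math.sqrt(1.0f - f / 4.0f);
    return new float[]{ fx * g, fy * g, 1.0f - f / 2.0f };
}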

Constructing a tile frustum is pretty similar to doing frustum culling on the CPU. You extract the 6 planes and check the signed distance of the light to each of these planes. This should be implemented in a compute shader to be efficient.
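The per-plane test itself is simple; a rough Java-style sketch (plane normals assumed to be unit length and pointing into the frustum):

// Reject a point light (sphere) if it lies entirely behind any of the 6 tile-frustum planes.
// Each plane is {a, b, c, d} with ax + by + cz + d = 0.
static boolean sphereIntersectsFrustum(float[][] planes,
                                       float cx, float cy, float cz, float radius){
    for(float[] p : planes){
        float signedDistance = p[0] * cx + p[1] * cy + p[2] * cz + p[3];
        if(signedDistance < -radius){
            return false; // completely outside this plane -> light can't touch the tile
        }
    }
    return true; // conservatively treated as affecting the tile
}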

I think I'm addicted… I feel a strange attraction…

Rendered in real-time, of course. :point:
Source (it's still nasty): https://gist.github.com/BurntPizza/34844127864d36e79619

Now to find a heuristic for creating the circular orbs of energy types…

I was working on the 5th iteration (or rather: the 5th "great code rewrite/refactor") of my Voxel-Engine today.
This is the one-month birthday of this iteration, and I have been working on it almost every second or third day, which is a new record for 'not giving up on it after a month'!

The current result:

  • 14,426 lines of code.
  • Uses Gson, LWJGL, EventBUS, and KryoNET.
  • Has a basic GUI-System.
  • Automated Asset Manager.
  • 'Infinite' Bitmap FontRenderer.
  • Both Fixed-Function and Programmable-Function 'pipelines' for OpenGL.
  • FINALLY got the hang of all the right patterns.
  • And some more stuff ā€¦

Screenshot of the bare-bones (yet) non-functional Main-Menu:

There isn't anything besides the Main-Menu yet, sadly.
I am going to waste my entire weekend on this…

(Have a nice day!)

  • Longor1996

I was messing around with box2d and scene2d using libgdx. Currently, all you can do is jump around and spawn giant tennis balls everywhere :slight_smile:

Please ignore the bad art; it was done in less than 10 minutes. :smiley:
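For anyone curious, spawning one of those balls with libgdx's Box2D wrapper only takes a few lines; a rough sketch (the world, spawn position, radius and restitution here are made-up values, not my actual code):

// Spawn a bouncy circular body ("giant tennis ball") in an existing Box2D World.
BodyDef def = new BodyDef();
def.type = BodyDef.BodyType.DynamicBody;
def.position.set(spawnX, spawnY);          // where the ball appears, in world units
Body ball = world.createBody(def);

CircleShape shape = new CircleShape();
shape.setRadius(1.5f);                     // "giant" is relative to the player
FixtureDef fixture = new FixtureDef();
fixture.shape = shape;
fixture.density = 1f;
fixture.restitution = 0.7f;                // makes it bounce like a tennis ball
ball.createFixture(fixture);
shape.dispose();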

Let's hope that this is the one then!

Thanks for the medal! ^^

Wow, that looks amazing!

You sort of get pulled into a trance staring at them; the only thing that snaps you out of it is the transition between the end and the beginning :stuck_out_tongue:
Thought about making a live wallpaper with them?

Thanks! :smiley:

I would if it weren't so computationally expensive; it takes almost all of a pretty strong CPU to render them at a decent frame rate, so mobile is out of the question.

Each frame you see there is 600K plotted points of the attractor.
The source is updated though, much cleaner (and faster); I'm actually sort of proud of how it runs.
https://gist.github.com/BurntPizza/c6f4c7f18daa9950692c
(Adjust the sleep in RenderFrame.run according to your computer's power; I haven't got automatic throttling working yet.)

i did absolutely nothing
i feel so empty
i can't say it's bad cause i know how it feels to feel very bad
new life experience
but i can't say cool cause emptiness is not cool :slight_smile:

Some developments in this area… It turns out Battledroid needs OpenGL 3-level hardware to get the performance we need without having to write some annoying extra code paths, and I think we've covered >90% of the customer base, so 3.0 or better it is for now. The benefit of glacial development speed is that old hardware gradually disappears :slight_smile: So I'm still slowly diddling away on Battledroid, and Chaz and Alli are working on a secret project… in Unity. (I was also working on Skies of Titan, another Java arcade game, but Chaz is never going to have the time to do both, so we decided on the Unity one as Battledroid needs progress.)

Cas :slight_smile:

Been messing around with some software that makes MIDI files sound like old 8-bit music.

I messed around with this old Beatles song, check it out.

Reading The Art of Game Design by Jesse Schell

Good stuff

I added resource loading to my game engine.

level-1.xml - resources


<project>
    
    <!-- Configuration for game -->
    <configuration>
        <initialize title="Some game" width="800" height="600"></initialize>
    </configuration>
    <!-- End configuration -->
	
    <resources>
        <!-- Textures for level 1 -->
        <resource name="test-texture1" type="texture" path="./resources/texture1.png"></resource>
        <resource name="test-texture2" type="texture" path="./resources/texture2.png"></resource>
        <!-- End textures for level 1 -->
    </resources>

</project>

level-1.log - auto generated


[INFO 2014.06.30 14:01:12] Resource loader initialized successfully
[INFO 2014.06.30 14:01:12] Registered logging platform at '.\resources\level-1.log'
[---- 2014.06.30 14:01:12]  ... 
[INFO 2014.06.30 14:01:12] Registered texture resource: test-texture1 - '[Attribute: path="./resources/texture1.png"]'
[INFO 2014.06.30 14:01:12] Registered texture resource: test-texture2 - '[Attribute: path="./resources/texture2.png"]'
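For anyone wanting to do something similar, here is a minimal sketch of reading those <resource> entries with the JDK's built-in DOM parser (the class name is made up, and a real loader would obviously dispatch on the type instead of just printing):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ResourceListReader {
    public static void main(String[] args) throws Exception {
        // Parse the level file and walk every <resource> element.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("level-1.xml"));

        NodeList resources = doc.getElementsByTagName("resource");
        for (int i = 0; i < resources.getLength(); i++) {
            Element r = (Element) resources.item(i);
            String name = r.getAttribute("name");
            String type = r.getAttribute("type");
            String path = r.getAttribute("path");
            // A real loader would hand 'path' to e.g. a texture loader based on 'type'.
            System.out.println("Registered " + type + " resource: " + name + " - '" + path + "'");
        }
    }
}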