What I did today

Yeah, beautifully detailed and expressive. I love it.

That typography hurts though.

  • Jev

Began working on my first Game Engine

Meh… All day today I was getting Amazon Payments set up (still only partially) for an imminent Kickstarter launch of the TyphonRT video suite of apps for Android. Got to say that I've really enjoyed working with Square 1 Bank though, as they have been really helpful. Got the studio room completely cleaned up last night just in time for a video crew to pop by, though there's more filming this weekend and more paperwork tomorrow (t-minus 7 days to launch; we'll see). Hah, got a UPS mailbox (street address) today, so we'll see if that satisfies the Google Play change requiring all developer addresses to be shown. Anyway… here is a pic of the humble beginnings: what was originally a wagon repair shop built in the 1920s, shown as the studio looks now with some extra lighting from the film crew, and my take on its future almost 100 years from when it was built.

Added support for noise-based moss decals on the terrain. It's based on a precomputed 128x128x128 3D noise texture with a (precomputed) 5x5 gaussian blur applied to it. I opted for a 3D texture since I could take advantage of hardware texture filtering. Shader-based noise had really bad performance and/or quality, and also suffered from heavy aliasing when viewed from far away or at an oblique angle. A 3D texture with mipmaps and anisotropic filtering produces perfect quality.
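A rough sketch of how such a volume might be precomputed offline (my reading of the post, not the author's code; the size, seed, and kernel weights are illustrative, and a real pipeline would use numpy and upload the result as a GL_TEXTURE_3D with a full mip chain):

```python
import random

N = 16  # the post uses 128; smaller here to keep the sketch fast
random.seed(42)

# 1. Fill the volume with white noise in [0, 1).
volume = [[[random.random() for _ in range(N)]
           for _ in range(N)] for _ in range(N)]

# 2. 5-tap binomial approximation of a gaussian kernel, applied
#    separably along each axis, with wrap-around so the texture tiles.
kernel = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]

def blur_axis(vol, axis):
    out = [[[0.0] * N for _ in range(N)] for _ in range(N)]
    for z in range(N):
        for y in range(N):
            for x in range(N):
                acc = 0.0
                for k, w in zip(range(-2, 3), kernel):
                    zz, yy, xx = z, y, x
                    if axis == 0:
                        zz = (z + k) % N
                    elif axis == 1:
                        yy = (y + k) % N
                    else:
                        xx = (x + k) % N
                    acc += w * vol[zz][yy][xx]
                out[z][y][x] = acc
    return out

for axis in range(3):
    volume = blur_axis(volume, axis)

# Blurring is a convex combination, so values stay inside [0, 1];
# the result would then be quantized to one byte per texel for upload.
flat = [v for plane in volume for row in plane for v in row]
print(min(flat) >= 0.0 and max(flat) <= 1.0)  # True
```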

holy hell, that looks really good!

I've learned that systems with massive stars are more stable.

@theagentd: assuming that a regular texture look-up is happening close to the noise look-up, you might want to try a non-lookup method.

I'm not sure what you mean. I tried calculating the noise value in the shader, but this tripled the instruction count and had extremely bad aliasing when downsampled.

I mean that if a dependent texture read follows the previous texture read too closely, it can be significantly slower. Weird about the aliasing… I'd expect that to be more likely with the LUT.

Dependent texture reads are hardly that costly, and that's not what's going on here. I basically do this:


float noise = ...; // sampled from the 3D noise texture

// do some processing on the noise

vec3 diffuseColor = mix(originalTexture, mossTexture, noise);
// same for 3 more textures

The noise texture is only used to determine if the surface should have moss mixed in. The noise value is not used as texture coordinates for the moss texture, so there's no dependency there. Actually, I did add a dependency, in a way, to increase performance: I only sample the moss textures if noise > 0, which significantly increases performance when large continuous areas of the screen have no moss.
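A minimal GLSL sketch of that early-out; `noiseVolume`, `mossTex`, and the threshold names are my placeholders, not the author's actual code:

```glsl
// Fetch the precomputed 3D noise (mipmapped, anisotropically filtered).
float noise = texture(noiseVolume, worldPos * noiseScale).r;
noise = smoothstep(mossLow, mossHigh, noise); // "some processing"

vec3 diffuseColor = texture(diffuseTex, uv).rgb;
if (noise > 0.0) {
    // Moss textures are only fetched where moss can actually appear,
    // so large moss-free regions skip these reads entirely.
    vec3 moss = texture(mossTex, uv * mossTiling).rgb;
    diffuseColor = mix(diffuseColor, moss, noise);
    // ...same pattern for the remaining three textures
}
```

The branch is coherent across large screen regions, which is why it pays off on real GPUs.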

Now my COLLADA loader supports multiple materials in the same mesh ;D (Phong so far). My engine can render just the diffuse textures. Still ironing out the bugs :emo: with the skinning; some models animate fine while others are a bit odd :cranky:.

I've found that if I get stuck on one part, switching to another bit of the engine, working on that, and then going back to the first part helps me make more progress.

Making the loader for myself is a lot easier than trying to design a standalone library that could be used in anyone's project, so I might need some help with that if other people wanted to use it. I'm not planning on supporting every part of the specification, since I don't need it all, but maybe others could complete it, or just use it to make their own.

Modern GPUs can hide that latency quite well if you don't have too much register pressure.
Good presentation (64 pages): http://bartwronski.files.wordpress.com/2014/03/ac4_gdc_notes.pdf

I finally commented all the code I wrote for a programming competition I participated in last weekend.

Yes, but there needs to be other work available in order to hide the latency.

But that work does not have to happen in the current pixel. The GPU just swaps to the next wavefront.

Extremely good reading, even though it's a couple of years old.

But in every case, 128³ texels × 8/7 (the full mip chain of a 3D texture) × 1-4 bytes is only about 2.3-9.1 MiB of data. This is a small texture, and one lookup per pixel from it is extremely cheap. An added bonus is that a volume texture can be prefiltered to avoid aliasing. It gets expensive and hard pretty quickly to antialias a general procedural signal.
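A quick check of that size estimate. Note that a 3D texture's mip levels halve in all three dimensions, so the full chain costs about 8/7 of the base level, not the 4/3 familiar from 2D textures:

```python
# Sum the texel counts of every mip level of a 128^3 volume: 128^3 .. 1^3.
base_texels = 128 ** 3
total_texels = sum((128 >> level) ** 3 for level in range(8))

for bytes_per_texel in (1, 4):
    mib = total_texels * bytes_per_texel / 1024 ** 2
    print(f"{bytes_per_texel} byte(s)/texel: {mib:.2f} MiB")
# -> 1 byte(s)/texel: 2.29 MiB
# -> 4 byte(s)/texel: 9.14 MiB
```

Either way the conclusion stands: the whole volume plus mips fits comfortably in a few megabytes.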

I'd like to think I posted a link to that series. That's a reasonable blog to subscribe to, BTW. Note that my original post said try. It all depends on dependency chains.

This. The texture is both small (1 byte per texel, roughly 2.3 MiB with the full mip chain) and has extremely good cache locality thanks to mipmaps. I intended to use compression as well to halve the memory usage, but it turns out it's not supported for 3D textures.

This isn't true either. Texture sample dependencies are hardly a bottleneck at all on modern GPUs. They're almost never costly enough to be worth working around, and you won't ever get very deep texture dependency chains in the first place. Latency hiding works really well.

Let's assume that texture sample dependencies were slow. Usually shaders get compiled to something like this:

  1. sample all textures
  2. wait for all texture samples to arrive
  3. execute shader

The point of latency hiding is (as you know) to let the GPU queue up multiple texture sample requests and execute other invocations of the shader during step 2. A shader with a texture dependency simply introduces another sample-wait-execute combo:

  1. sample all independent textures
  2. wait for texture samples to arrive
  3. execute shader as far as we can
  4. sample all dependent textures
  5. wait for texture samples to arrive
  6. execute the rest of the shader

It's easy to see that if latency hiding weren't there, the shader would become half as fast, since the two waits would completely bottleneck it; but this doesn't happen at all. For example, I used dependent texture reads in my single-quad tile renderer, which had a tile-ID texture that was used to sample the tilemap texture. Performance wasn't noticeably lower than simply rendering a plain textured fullscreen quad.
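The sample-wait-execute argument can be put into a crude back-of-envelope model. These numbers are my own illustrative picks, not measurements from any real GPU:

```python
MEM_LATENCY = 400  # cycles per texture fetch round (illustrative)
ALU_CYCLES = 100   # ALU work per wavefront between fetches
FETCHES = 2        # one independent fetch + one dependent fetch
WAVEFRONTS = 16    # wavefronts resident on the compute unit

# Serial execution (no latency hiding): every wait stalls the unit.
serial = WAVEFRONTS * FETCHES * (MEM_LATENCY + ALU_CYCLES)

# With latency hiding, the unit stays busy whenever any resident
# wavefront has ALU work ready; a fetch round only truly stalls it if
# the total ALU work available is less than one memory latency.
alu_per_round = WAVEFRONTS * ALU_CYCLES
hidden = FETCHES * max(alu_per_round, MEM_LATENCY) + MEM_LATENCY

print(serial, hidden)  # -> 16000 3600
```

With enough wavefronts in flight, the second wait introduced by the dependent read adds almost nothing, which matches the tile-renderer observation above.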

Did some reworking on my blog -> http://astrofirm.tumblr.com/

So I recorded this teaser video a couple of weeks ago with my video engine / capture app, the one I'm launching the Kickstarter for next week… It's a sneak peek, I suppose, of what it can do, and I'd be glad to hear some input from folks here. The video capture / effects composition app is running on the Nexus 5, and all effects are currently burned in while recording (the next version will allow effects on playback). The Nexus 5 / Adreno 330 was the first mobile GPU that allowed a really deep post-processing pipeline (previous FBO restrictions were reduced significantly). What you don't see in the video is the GUI for controlling the whole effects pipeline in real time while recording, i.e. the color changes and other tweaks in the video.

One of the fun things about San Francisco is the street fairs, and once a year when I open my door there are lots of people outside. The Folsom Street Fair is… I'll go with interesting, and I suppose quirky. I made my way to the main stage and caught Monarchy's set, which was pretty good. Here it is…

[embedded YouTube video: 23U6NYgQniM]