What I did today

I spent the last week working on the actual networking part of my engine. I decided to go with a “server controls all” approach. Any instance-able class type in my engine will be automatically replicated from server to client when created, updated, or destroyed. So if I create a folder on the server and name it “Jeff”, a folder with that same name will appear in the same place on all clients. This level of replication made it convenient to have the server control all main game features.

Then I extended it to physics. When a physics object exists within a game object and it is currently NOT sleeping, it will send updates to all of the clients, expressing where it is and where it’s going. This keeps all of the clients’ physics simulations synchronized with the server.

I then took it a step further regarding the physics. If a physics object exists as a descendant of a player’s character, the server will not tell THAT PLAYER that their physics is updating. Additionally, any update that THAT PLAYER sends to the server regarding that physics object will be accepted by the server. In other words, your own client has control of your own player’s physics, and the server will accept it. This allows me to have players locally controlled, still update the server, and have OTHER clients see those changes. You can’t, however, update some other player’s physics from your own client, as it will get rejected by the server. Finally, if you manually set the position of a client-controlled physics object on the server, that will still force-replicate to the client. This allows me to still build some sort of “anticheat” on the server, where I can prevent players from walking through walls locally (using server-side raycasting & teleporting).
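In rough Java, the server-side rule boils down to something like this (hypothetical names, just to illustrate the idea; not my engine’s actual API):

import java.util.List;

// Illustration only - hypothetical types, not the engine's real classes.
interface GameObject { boolean isDescendantOf(GameObject other); }
interface PhysicsObject extends GameObject { float[] getTransform(); void setTransform(float[] t); }
interface Client { GameObject character(); void send(float[] transform); }

class PhysicsReplication {

    // Client -> server: accept an update only for physics owned by the sender's character.
    boolean acceptClientUpdate(Client sender, PhysicsObject obj, float[] newTransform) {
        if (obj.isDescendantOf(sender.character())) {
            obj.setTransform(newTransform);   // the owning client is authoritative here
            return true;
        }
        return false;                         // updates to other players' physics get rejected
    }

    // Server -> clients: skip the owning client so its local simulation stays in control,
    // unless the server explicitly overrode the transform (e.g. the anticheat teleport).
    void broadcast(PhysicsObject obj, List<Client> clients, boolean serverOverride) {
        for (Client client : clients) {
            if (!serverOverride && obj.isDescendantOf(client.character())) continue;
            client.send(obj.getTransform());
        }
    }
}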

Then I decided to make a ping-pong game to show it:

Your paddle (red) is simulated on your client. Your position is sent to the server and the server accepts it. The ball is completely server-sided and will only collide with objects on the server; same with the server’s paddle (green).

After over two weeks of having to do other stuff, I finally got back to working on that voxel ray tracing again. I changed the k-d tree memory layout and tree depth, which improved total frametime by ~60% (now giving over 1,000 FPS for that scene - yeah, that comma is the decimal separator :)). I cannot stress enough how important it is in GLSL to do “wide” vector loads as much as possible. So, whenever we have an SSBO with a struct containing 4 ints, then by all means combine those into a single ivec4! It can even be beneficial to increase the memory footprint of the struct if that means fewer load transactions, e.g. by combining 1 int, 2 shorts and 3 bytes into a single ivec3 (effectively an ivec4 after alignment padding).
‘Before’ and ‘after’ images (of the same scene):

As before, the red color channel represents the nodes the ray traverses along its path via direct neighbor pointers, and the green channel corresponds to the number of times the ray has to descend from a parent to a child node.

Reading the Symmetry-aware Sparse Voxel DAGs (SSVDAGs) paper gave me ideas to cut down storage for the voxel list by storing the configuration of N^3 voxels inside a k-d tree node as a single bitfield and tracing the voxels that bitfield represents without actually loading the voxel positions from the SSBO. Let’s just hope that memory bandwidth is still my bottleneck.
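For N = 4 the whole configuration fits into a single 64-bit word. A tiny Java illustration of the indexing (hypothetical helper; the actual traversal of course happens in the shader):

// One 4x4x4 block of voxels -> one 64-bit occupancy mask per k-d tree node.
final class VoxelBitfield {
    static final int N = 4;

    static int bitIndex(int x, int y, int z) {            // x, y, z in [0, N)
        return x + N * (y + N * z);
    }

    static long set(long bits, int x, int y, int z) {     // mark voxel (x, y, z) as present
        return bits | (1L << bitIndex(x, y, z));
    }

    static boolean occupied(long bits, int x, int y, int z) {
        return (bits & (1L << bitIndex(x, y, z))) != 0L;
    }
}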

Been finishing the assets for the first level; now that they are all done, the rendering part is incoming.

Since I am a lazy person, I have created an application which will do this for me. Blender supports command-line execution in the background, and the good part is that this allows rendering from the command line. From the beginning I made the right choice of sticking to a one-model-one-animation file organization. I did not have any specific reason for that, I just decided to do it like that, but now it has played a crucial role, since the only thing I need to do to make it work with Blender’s command-line execution is to add a camera to each file and adjust the view through the node system (the isometric cabinet view which I adopted). Beware: if you are going to integrate Blender command-line execution into your pipeline, you need to use the following approach to execute the process:

// BufferedReader / InputStreamReader are from java.io
Process process = Runtime.getRuntime().exec(stringJoiner.toString());
// Keep reading Blender's output; an unread output stream can make the child process stall.
BufferedReader processOutput = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line;
while ((line = processOutput.readLine()) != null) {
    LOG.trace("BLENDER OUTPUT: {}", line);
}
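For reference, the stringJoiner above is just the Blender command assembled piece by piece; something along these lines (the paths and file names here are placeholders, adjust them to your own setup):

// StringJoiner is from java.util.
// -b = run Blender in the background, -o = output pattern, -F = image format,
// -x 1 = append the file extension, -a = render the whole animation.
StringJoiner stringJoiner = new StringJoiner(" ");
stringJoiner.add("blender")
            .add("-b").add("models/human_walk.blend")
            .add("-o").add("renders/human_walk_####")
            .add("-F").add("PNG")
            .add("-x").add("1")
            .add("-a");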

The approach with

process.waitFor()

will not work. I am not sure why, but in my case the execution always only started when I stopped the Java application…

Since I am a lazy person^2, I have created another application. Based on a YAML configuration, it reads the rendered images and crops them, additionally checking whether the image goes out of bounds of the cropped area and whether the render is too close to the bounds (in the game I use a shader to calculate the outline of the object, so if it is too close to the image bounds, the outline will look cut off). Furthermore, when in debug mode, besides the bounds warnings it produces additional images which display where the bounds were violated (yellow - non-violated inner bounds, red - non-violated outer bounds, cyan - violated bounds) and the center of the image, so I could get an idea of how much I should adjust the image in the YAML configuration for the next cropping iteration.
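Stripped down, the bounds check is essentially this (hypothetical names; in the real tool the crop rectangle and the margin come from the YAML configuration):

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

class CropCheck {

    // Crops the render to (x, y, w, h) and warns if any non-transparent pixel lies outside
    // the crop rectangle or within 'margin' pixels of its edge (where the outline would get cut).
    static BufferedImage cropAndCheck(File renderFile, int x, int y, int w, int h, int margin) throws IOException {
        BufferedImage render = ImageIO.read(renderFile);
        for (int py = 0; py < render.getHeight(); py++) {
            for (int px = 0; px < render.getWidth(); px++) {
                boolean opaque = (render.getRGB(px, py) >>> 24) != 0;
                if (!opaque) continue;
                boolean outside  = px < x || px >= x + w || py < y || py >= y + h;
                boolean nearEdge = px < x + margin || px >= x + w - margin
                                || py < y + margin || py >= y + h - margin;
                if (outside || nearEdge) {
                    System.out.printf("bounds warning in %s at (%d, %d)%n", renderFile.getName(), px, py);
                }
            }
        }
        return render.getSubimage(x, y, w, h);
    }
}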

Took me a couple of hours, not a big deal, but I think it can save a lot of time…

Had some turkey. Happy Thanksgiving, to all y’all Americans!

Finally got around to rewriting my PBR so it’s no longer hacked in. Before, I abused the emissive buffer to write my PBR data. Now I’m doing it in post:

Started integrating my Blender models into the game. It still looks empty, but I have not yet gotten to the secondary/decorative objects (I also still have not re-calculated the offsets, so some things look out of place).

Good thing I invested time into scripting my rendering process: I had to redo the renders of the humans 3 times, and I’m going to have to redo them a 4th time… I made the mistake of rendering at a low resolution, which does not work as well as rendering at a higher resolution and resizing.

Going to rework the UI and controls (6th time now) after I am done with the graphics integration. I have an idea for how to make it one button only (for interaction with game objects), but I need to test it.

Implemented another optimization in the kd-tree-based voxel ray tracing: merging adjacent voxels. Up till now, all visible individual 1x1x1 voxels were retained and the kd-tree was built from those. That wasn’t bad and had many advantages. However, it meant that lots of voxels had to be processed, even those that could have been merged with nearby voxels into larger cuboids.
The idea came while reading about Minecraft’s “Greedy Meshing” approach.

The color metrics are still the same, but one can see that the right image has much darker colors, indicating fewer ray descents and neighbor-pointer follows. The left scene has 191.141 voxels; the right scene has 31.291 “cuboids” (merged voxels).
This optimization was particularly helpful for the ‘ground plane’, which shrank from 256*256 voxels to 1 cuboid.
Though, during actual kd-tree building, when splitting the voxels/cuboids by the split plane determined via the Surface Area Heuristic, big merged voxels will get split in two again when the split plane cuts through them.
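Very much simplified, the merging step just collapses runs of solid voxels into one cuboid; something like this along a single axis (illustrative Java only, the real code also grows the cuboids in the other directions):

import java.util.ArrayList;
import java.util.List;

// A merged cuboid: origin plus its extent along X (1x1 in Y and Z for this reduced example).
record Cuboid(int x, int y, int z, int sizeX) {}

class VoxelMerger {

    // solid[x] == true means a 1x1x1 voxel is present at (x, y, z) in this row.
    static List<Cuboid> mergeRow(boolean[] solid, int y, int z) {
        List<Cuboid> cuboids = new ArrayList<>();
        int x = 0;
        while (x < solid.length) {
            if (!solid[x]) { x++; continue; }
            int start = x;
            while (x < solid.length && solid[x]) x++;   // extend the run as far as possible
            cuboids.add(new Cuboid(start, y, z, x - start));
        }
        return cuboids;
    }
}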

Really impressive work. Interesting to read and watch the images and videos.
Does this optimisation to merge the voxels into bigger cubes make the terrain non-destructible?

Would be interesting to see how small you can make the voxels and still render them with high speed. I wonder how tiny the voxels have to be to make almost smooth-edged curved faces?

Thanks!

That is definitely the big advantage of having regular 1³ voxels. With those I could at least “mark” them as ‘dead’ when the player destroys them, and later, during idle time, do a rebuild of the kd-tree to incorporate the changes made so far by the player, like destroying voxels and also adding new ones, which up to that point had been built into a separate “additions” tree.

That’s not as easy anymore with those merged voxels, but it’s still doable: when a 1³ voxel that was part of a bigger merged voxel gets destroyed, I need to break up the merged voxel and re-optimize the remaining 1³ voxels without the destroyed one. That can still be done efficiently, since all of it only has to happen inside the same kd-tree leaf node, not for the whole scene. Additionally, I only need to break up the voxel on the same Y layer.

Identifying which of the represented 1³ voxels got hit by a ray when testing against a big merged voxel will also be a bit harder now for the irradiance lookup, since irradiance still has to be stored for all individual 1³ voxels. But that is also solvable with some modulo arithmetic.
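Essentially it is just snapping the hit point back to the 1³ grid; a sketch of what that lookup could look like (hypothetical names and storage layout):

final class IrradianceLookup {

    // Maps a hit position on a merged cuboid to the flat index of the 1^3 voxel it belongs to.
    // (cubMin*, cubSize* describe the cuboid; clamping of hits exactly on the max faces is omitted.)
    static int voxelIndex(double hitX, double hitY, double hitZ,
                          int cubMinX, int cubMinY, int cubMinZ,
                          int cubSizeX, int cubSizeY) {
        int lx = (int) Math.floor(hitX) - cubMinX;   // local 1^3 voxel coordinates inside the cuboid
        int ly = (int) Math.floor(hitY) - cubMinY;
        int lz = (int) Math.floor(hitZ) - cubMinZ;
        return lx + cubSizeX * (ly + cubSizeY * lz); // index into the cuboid's irradiance array
    }
}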

Thanks, that’s great. I didn’t realise it was easy to use two separate trees concurrently. Nice one!

We published the free demo of Lethal Running on Steam: https://store.steampowered.com/app/951170/Lethal_Running_Prologue/
Made with libGDX.

https://lethalrunning.com/images/clown.png

Today I wanted to look into rendering grass blades. Minecraft simply uses two crossed, textured quads/rectangles on top of another voxel. So that’s what I did as well, except that I do not use geometry to represent the quads but a custom voxel/ray intersection function which just checks for two plane/ray intersections bounded by the extent of the voxel (or any smaller portion of it, to have smaller grass). The quads then just need to be textured, and obtaining the texture coordinates from the point of intersection is easy (see the sketch right after the video). Here’s a video of an island/mountain scene where each voxel only consists of the two crossed quads:

https://www.youtube.com/watch?v=mDXcdv1pK_g
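The test boils down to two diagonal planes inside the voxel; a simplified Java sketch for a unit voxel (the real version is a GLSL intersection function, and the names here are just for illustration):

// The two "quads" are the diagonal planes x = z and x + z = 1 inside the voxel [0,1]^3,
// limited to the grass height h <= 1. Texture coordinates fall out of the hit point: u = x, v = y / h.
final class GrassIntersection {

    // Returns t of the nearest valid hit, or Double.POSITIVE_INFINITY if the ray misses both quads.
    static double intersect(double ox, double oy, double oz,
                            double dx, double dy, double dz, double h) {
        double best = Double.POSITIVE_INFINITY;
        double denom1 = dx - dz;                   // plane 1: x - z = 0
        if (denom1 != 0.0) {
            double t = (oz - ox) / denom1;
            best = keepIfInside(t, ox + t * dx, oy + t * dy, h, best);
        }
        double denom2 = dx + dz;                   // plane 2: x + z = 1
        if (denom2 != 0.0) {
            double t = (1.0 - ox - oz) / denom2;
            best = keepIfInside(t, ox + t * dx, oy + t * dy, h, best);
        }
        return best;
    }

    // Keep the hit only if it lies in front of the ray and inside the voxel/grass bounds.
    private static double keepIfInside(double t, double x, double y, double h, double best) {
        if (t > 0.0 && t < best && x >= 0.0 && x <= 1.0 && y >= 0.0 && y <= h) return t;
        return best;
    }
}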

EDIT: Image with texture coordinates visualization and with actual grass texture:

And a video:

https://www.youtube.com/watch?v=XYZ0GOuqf6I

Copying all of our game’s UI text to an external spreadsheet so the game can be localized is one of the most painful things I’ve ever had to do. Lol

I would have created a csv-export tool for that… which probably would have taken 5 times longer 8)

I’m using a spreadsheet and exporting it as TSV (I like the tab spacing more, for OCD reasons). The problem here is that we had a lot of GUI text built into the code itself, because I never expected to actually have to localize Robot Farm. So I’m having to tediously go through every class and link the strings to keys in the spreadsheet. Really messy business.
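The runtime side of it is at least simple; roughly something like this (hypothetical class, not Robot Farm’s actual code): every hard-coded string becomes a key lookup, and the TSV export is loaded once at startup.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

final class Localization {
    private static final Map<String, String> TEXT = new HashMap<>();

    // Loads one language column from the exported TSV; column 0 holds the key.
    static void load(Path tsvFile, int languageColumn) throws IOException {
        for (String line : Files.readAllLines(tsvFile, StandardCharsets.UTF_8)) {
            String[] cells = line.split("\t");
            if (cells.length > languageColumn) {
                TEXT.put(cells[0], cells[languageColumn]);
            }
        }
    }

    static String get(String key) {
        return TEXT.getOrDefault(key, key);   // fall back to the key so missing entries stay visible
    }
}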

Still a bit more on the Blender side (I am sorry for posting so much Blender stuff, but I believe it might be useful for people who intend to learn it, and it is part of the game I am developing in Java :P).

After integrating (some of) the models I have made into the game, I was quite unhappy with the result; something was missing. After looking into games which have graphics similar to mine, I noticed that there is one important component missing - outlines. Take a look at the following image. Number one is the image which does not have outlines and makes me extremely unhappy :D. Since I have no expertise in the field, I decided to experiment with different options.

Image number two uses something called Blender Freestyle, and in my opinion it looks much better in comparison to the first image (however, I could not make it work with the Hair particle system which I was using for modelling the roof, so I had to replace it with a wooden one). Number three uses a shader (so the effect is generated dynamically by the application), which in my opinion also improves the quality of the assets; it gives a notion of depth. I still need to profile it, though, especially on mobile devices, as I am not sure about its performance (an alternative could be applying this effect to the image itself, for example using edge detection algorithms with some blurring of the edge line). Finally, in the fourth picture I combined both, which in my opinion resulted in the most interesting effect.
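For the image-side alternative, the simplest form I have in mind is just painting the transparent pixels that touch the sprite’s silhouette; a rough sketch (not what the game currently does, just an illustration):

import java.awt.image.BufferedImage;

final class OutlinePass {

    // Every transparent pixel that touches an opaque one gets painted with the outline color.
    static BufferedImage addOutline(BufferedImage src, int outlineArgb) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int argb = src.getRGB(x, y);
                if ((argb >>> 24) != 0) {
                    out.setRGB(x, y, argb);                   // opaque pixel: copy as-is
                } else if (hasOpaqueNeighbour(src, x, y)) {
                    out.setRGB(x, y, outlineArgb);            // transparent pixel on the silhouette
                }
            }
        }
        return out;
    }

    private static boolean hasOpaqueNeighbour(BufferedImage img, int x, int y) {
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= img.getWidth() || ny >= img.getHeight()) continue;
                if ((img.getRGB(nx, ny) >>> 24) != 0) return true;
            }
        }
        return false;
    }
}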

P.S. (I know the sword is screwed up; I need to re-render the human models, I was using wrong distortion configurations :S)

#4 actually looks pretty cool. Maybe a picture with a bunch of assets on the screen at once?

Thanks for the feedback, need to do the renders first :)

It’s looking really nice, great idea making 3D assets for a 2D side-view game.
I also like the one with outlines. But I wonder if shadows might make it look better? I’m a terrible artist, but I’ve heard it said that shadows add perspective and ‘ground’ the characters and items in the landscape.

Heh, I am not an artist either, at all; I actually hate drawing, but desperate times require desperate measures :D. I believe shadows would be the best solution. I see two possibilities there. The first is simplistic shadows (a circle under units, trees and objects, “something” under buildings): pros - easy to implement; cons - might look weird with some game entities. The second is more sophisticated shadows, but I see a huge problem there right away. Generally, projecting a shadow onto the floor should not be difficult, but what if the unit is walking right next to a building? How can I project part of the shadow onto the floor and part onto the building (I operate in a pure 2D world)? I believe it is solvable, but I still need to investigate it better (maybe after releasing a small demo with a couple of levels I will focus on this one). BUT, if you have any tips for shadows I will really appreciate them, since it will definitely cut the research time :P, allowing me to focus on the actual solution!