What I did today

Been doing a lot of studying. I crossed the one-third point of a Udemy course on using Hibernate to build an eCommerce site with Java. The course has 253 videos; I just got through number 85. (Maybe I can complete the course by the end of October?)

The Android tutorial course I want to finish has been languishing. In my late teens and early college years, studying multiple subjects at the same time was par for the course. Now, in my dotage, it’s not quite so easy to keep multiple balls in the air.

I’m also learning about this forum admin stuff on the fly, e.g., I figured out how to move topics a couple of days ago, putting some “newbie resources” questions onto the newbie thread.

And I’m making progress on a JavaFX project. I just got through a tutorial and chapter on FXML, and I feel like I have a pretty solid grip on it now, conceptually. I need to do a bit more hands-on work. This morning I got through a chapter discussing JavaFX properties. Part of my goal here is to make some custom controls for an application. I want this application to have something of a steampunk look and feel, designed down to its widgets. Maybe not as elaborate as this, but out in that general direction.

This fellow, Yereverluvinunclebert, is some kind of mad genius of design yes?

Added a small variation of the voxel lightmapping demo, which also uses the greedy-meshed faces/rectangles for tracing in the kd-tree instead of separately built axis-aligned boxes.

Previously, the scene representations for rasterization (greedy-meshed faces) and for tracing ambient occlusion (axis-aligned boxes) were different, in order to increase performance.

However, this did not allow tracing the scene for other view-dependent effects, such as reflection rays for the windows during normal rendering, because surface properties like the texture coordinates needed to look up the ambient occlusion term stored in the lightmap could not be obtained. Now, texture coordinates are preserved in both representations (for rasterization and for tracing).
This allows for ray-traced reflections in the windows of the house:

Like the first demo, this is still OpenGL 3.3 core with scene representations stored in buffer textures.

Java: https://github.com/LWJGL/lwjgl3-demos/blob/main/src/org/lwjgl/demo/opengl/raytracing/VoxelLightmapping2.java
GLSL: https://github.com/LWJGL/lwjgl3-demos/tree/main/res/org/lwjgl/demo/opengl/raytracing/voxellightmapping2

3 Likes

Made a free Java VRML viewer:

https://github.com/YvesBoyadjian/Koin3D/releases

2 Likes

Have extended ray-traced lightmapped ambient occlusion to chunked voxel rendering. The only tricky parts here were “stitching” the individual kd-trees of the chunks together after they have been built incrementally, and doing memory management in a single buffer texture.
But everything is running nicely now.
Here is a scene of 5x5 chunks each 64x256x64 voxels in size:

and here with 15x15 chunks (and different color grading):

Video:


Here is a cool kd-tree “trace depth” debug render.
  • the more “reddish” the color, the more often the ray had to descend into the tree
  • the more “greenish” (red + green giving yellow) the color, the further the ray had to jump between adjacent nodes in the tree
  • rays all travel coherently from top right to bottom left
  • individual chunks are also visible

8 Likes

Okay, glMultiDrawElementsBaseVertex is probably the coolest GL 3.3 function when it comes to rendering many, many chunks as individual index ranges from a single buffer object. Only multi-draw-indirect tops this, which sadly is only core in 4.3 and also only supported on GL4-class hardware.
I am currently optimizing rendering performance as much as possible by using a single buffer object per vertex attribute for all vertices, and I had been using glMultiDrawElements so far. The problem: I wanted to cut down on index buffer size by using GL_UNSIGNED_SHORT indices, but that only works within a single chunk, not when rendering multiple chunks from a single index buffer. glMultiDrawElementsBaseVertex to the rescue! With it, we can use short indices across all chunks in a single vertex buffer and supply an individual 32-bit basevertex offset per chunk in the draw call. Luckily, since I am also using primitive restart (to render a quad/two triangles with 5 indices instead of 6) with a special 0xFFFF index, the primitive restart test is performed before the basevertex offset is added to the element indices in the index buffer. So we can combine multi-draw + primitive restart + short indices.
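A minimal CPU-side sketch of that index layout (the class/method names and the 4-vertex-strip quad encoding are my assumptions for illustration, not taken from the demo sources): each chunk’s short indices are chunk-local, and a per-chunk 32-bit basevertex maps them into the shared vertex buffer.

```java
import java.util.ArrayList;
import java.util.List;

public class BaseVertexSketch {
    static final int PRIMITIVE_RESTART = 0xFFFF; // restart test runs BEFORE basevertex is added

    /** One quad as a triangle strip: 4 chunk-local vertex indices + restart = 5 indices. */
    static void appendQuad(List<Integer> indices, int firstVertexInChunk) {
        indices.add(firstVertexInChunk);
        indices.add(firstVertexInChunk + 1);
        indices.add(firstVertexInChunk + 2);
        indices.add(firstVertexInChunk + 3);
        indices.add(PRIMITIVE_RESTART);
    }

    /**
     * Basevertex for chunk i = total vertex count of all previous chunks.
     * Chunk-local indices stay below 0xFFFF, so GL_UNSIGNED_SHORT suffices
     * even though the combined vertex buffer holds far more vertices.
     */
    static int[] baseVertices(int[] quadsPerChunk) {
        int[] base = new int[quadsPerChunk.length];
        int sum = 0;
        for (int i = 0; i < quadsPerChunk.length; i++) {
            base[i] = sum;
            sum += quadsPerChunk[i] * 4; // 4 vertices per quad
        }
        return base;
    }
}
```

The `base` array is what would be passed as the `basevertex` argument array of glMultiDrawElementsBaseVertex, one entry per chunk.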
Then, when ARB_multi_draw_indirect is available, I’ll be using that to fill a buffer object with draw calls.
This setup becomes only slightly more complicated: because I use array textures for the lightmaps, and because the chunk faces can UV-pack to various lightmap sizes, I use buckets of various power-of-two-sized array textures to which the chunks’ lightmaps are assigned.
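A tiny sketch of how such bucket selection could work (this is my assumption of the rounding scheme, not taken from the demo): round each lightmap dimension up to the next power of two and use that as the bucket key.

```java
public class LightmapBuckets {
    /** Round a lightmap dimension up to the next power of two to pick its array-texture bucket. */
    static int bucketSize(int n) {
        return n <= 1 ? 1 : Integer.highestOneBit(n - 1) << 1;
    }
}
```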

2 Likes

Installed the Budgie desktop environment on Pop!_OS and configured it to work like macOS. Even gestures are supported. The only thing missing now is a proper macOS-style workspace switcher.

I probably need to try getting this installed from elementary OS, but I’m not sure whether the Gala window manager can co-exist with GNOME’s window manager. Any ideas or recommendations?

1 Like

Fixed AO sample location offsets. Before, sample locations were computed by simply offsetting the sample location from the vertex position towards the center of the face, to avoid starting a ray directly on the edge of a face. However, this results in different occlusion values for two adjacent faces when they have different “neighbor configurations”. Ideally, any two adjacent faces should share the two AO sample locations on the edge between them. For a single greedy-meshed face covering multiple adjacent voxels this is already the case, but not for voxels that were not merged (either because they have different materials or simply because they belong to another merged face). That results in slightly different sample locations, even though both faces share an edge. To fix this, we must compute each voxel vertex’s neighbor “configuration” (a bitmask of its three possible neighbors in a single plane) and offset the vertex sample location accordingly.
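A hedged sketch of the idea (the actual demo logic surely differs in detail; the bit encoding here is my assumption): only nudge the sample point inward along an edge that is not shared with a neighboring face, so that two faces meeting at a shared edge compute identical sample locations. The diagonal-corner bit of the 3-bit mask is ignored in this simplified version.

```java
public class AoSampleOffset {
    static final float EPS = 1e-3f; // assumed inward nudge, in face-local UV units

    /**
     * Assumed neighborMask encoding for a face vertex (3 neighbors in the face plane):
     *   bit 0: a face exists across the edge in the u direction
     *   bit 1: a face exists across the edge in the v direction
     *   bit 2: the diagonal corner voxel (ignored in this sketch)
     * Offset away from an edge only when no neighbor shares it, so shared
     * edges keep shared AO sample locations on both faces.
     */
    static float[] sampleOffset(int neighborMask) {
        float du = (neighborMask & 1) != 0 ? 0f : EPS;
        float dv = (neighborMask & 2) != 0 ? 0f : EPS;
        return new float[] { du, dv };
    }
}
```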

Before (with highlighting the actual problem):

After:

Commit: https://github.com/LWJGL/lwjgl3-demos/commit/e951d580c8ed9895ee21ffd50a252f32bc48d8f1

4 Likes

Created pathfinding for my kitchen game. It was quite hard (I tried different approaches) but I am very happy with the solution now.
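The post doesn’t say which approach won out, but for a tile-based kitchen layout a common choice would be A* on a grid. A minimal, self-contained sketch (all names here are mine, purely for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

public class GridAStar {
    /**
     * 4-connected A* on a boolean grid (true = walkable) with a Manhattan
     * heuristic. Returns the path length in steps, or -1 if unreachable.
     */
    static int pathLength(boolean[][] grid, int sx, int sy, int tx, int ty) {
        int h = grid.length, w = grid[0].length;
        int[][] g = new int[h][w]; // best known cost-from-start per cell
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        // open set ordered by f = g + heuristic (stored in slot 2)
        PriorityQueue<int[]> open = new PriorityQueue<>(Comparator.comparingInt(n -> n[2]));
        g[sy][sx] = 0;
        open.add(new int[]{sx, sy, Math.abs(tx - sx) + Math.abs(ty - sy)});
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!open.isEmpty()) {
            int[] n = open.poll();
            int x = n[0], y = n[1];
            if (x == tx && y == ty)
                return g[y][x]; // consistent heuristic: first pop of target is optimal
            for (int d = 0; d < 4; d++) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || !grid[ny][nx]) continue;
                int ng = g[y][x] + 1;
                if (ng < g[ny][nx]) { // found a cheaper route to the neighbor
                    g[ny][nx] = ng;
                    open.add(new int[]{nx, ny, ng + Math.abs(tx - nx) + Math.abs(ty - ny)});
                }
            }
        }
        return -1;
    }
}
```

Stale queue entries (re-added with a better cost) are handled lazily: the `g` lookup at expansion time always reflects the current best cost.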

5 Likes

Implemented “naive” (as in “not ray traced”) ambient occlusion (Minecraft-style), which is also described in https://0fps.net/2013/07/03/ambient-occlusion-for-minecraft-like-worlds/ (though that post lacks a few details). I’m going to use it as an initial approximation until the ray-traced AO has gathered sufficient information.
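The core of that 0fps scheme is a tiny per-vertex function: each face vertex looks at the two solid voxels sharing an edge with its corner plus the diagonal voxel, and maps them to one of four occlusion levels. A sketch of that function (the method name is mine):

```java
public class VoxelAo {
    /**
     * Per-vertex AO level as in the 0fps article: side1/side2 are the two
     * voxels sharing an edge with this corner, corner is the diagonal voxel
     * (true = solid). Returns 0 (darkest) .. 3 (fully open).
     */
    static int vertexAo(boolean side1, boolean side2, boolean corner) {
        if (side1 && side2)
            return 0; // both edge neighbors solid: diagonal can't add any light
        return 3 - ((side1 ? 1 : 0) + (side2 ? 1 : 0) + (corner ? 1 : 0));
    }
}
```

The same values are also used to decide the quad’s triangulation direction, so the interpolated darkening doesn’t flip across the diagonal.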


The image shows effectively 155,293,807 voxels.

2 Likes

Finished the first iteration of the next single-file LWJGL 3 demo: a simple voxel game.


Sources coming soon to https://github.com/LWJGL/lwjgl3-demos

EDIT 07.11.: Today I got around to implementing chunk loading/unloading within a radius around the player:

EDIT 08.11.: Next: offloading chunk building to background threads, giving butter-smooth rendering/flyovers:

3 Likes

Between doomsurfing about the election and tutorials, tutorials, tutorials, my brain is pretty fried. Just had a nice break: checking out @princec’s Basingstoke and Faerie Solitaire, available on itch.io! I’m running them on my Ubuntu system. I had to figure out that my system lacked OpenAL, which Faerie Solitaire needs, but once that was in, it worked like a charm. Basingstoke looks like it’s a Unity game (not Java?!) and ran seamlessly from the start. Very fun, many brilliant touches so far! Scary noises probably mean I should run away, yes?

Basingstoke is indeed Unity (you can just tell somehow eh?) Probs not going to touch Unity again though. Basically, don’t be seen. Or heard. If you are seen or heard, distract the enemies and run away. If you can’t run away, try and kill them. But preferably run away.

Cas :slight_smile:

@princec It was the presence of UnityEngine.dll in the Managed folder that clued me in.
OK, so now I know what to do with the sandwiches and sausages I come across!


I created an account with AWS today, free tier. A lot of job listings require AWS experience, and I’ve only got a year of using Linode under my belt, plus a couple of months with the Google equivalent (for a cybersecurity course).

My plan is to try to set up a Hibernate-based back end and see if I can persuade a friend (a comic book artist) to be the recipient of my willingness to “intern” and write something to display his works. I’ve only got another hour left of the 253-video Hibernate Udemy course I’ve been taking, and I want to put some of what I learned to the test.

One idea I will pitch to him is to display the comic book episodes in increments (e.g., a series of blog posts) and monetize by one of three methods: the ability to purchase hard copies, PDFs, or some other document form of the complete books; advertising; or early-access privileges. I can see using filters to vary privileged access to content.

Hopefully this could also be a stepping stone to setting up a web-based “game” with branching-narrative elements combined with simple animations and sounds. I’ve been envisioning such a thing for a long while, but have lacked the technical expertise to make it work.

@Apo! The art in your games! Inspiring!

1 Like

https://github.com/nvpro-samples is really an awesome repository of cool (and actually CMake-compilable) examples.
Especially https://github.com/nvpro-samples/gl_occlusion_culling with its fragment-shader-based occlusion culling is worth looking at. It rasterizes objects’ bounding boxes against the depth buffer with early fragment testing enabled; if a fragment makes it all the way to the fragment shader, the shader writes a visibility flag for that object into an SSBO at the object’s index. Later, a draw call with a vertex shader appends multi-draw-indirect commands for the visible objects into a final shader storage buffer object (using an atomic counter), and that buffer sources a single MDI call.
Took a stab at it for the chunks demo. Here is a debug screenshot with NV_representative_fragment_test enabled, rendering only a handful of fragments for visible chunks’ bounding boxes to reduce the memory bandwidth of the SSBO visibility-buffer writes. Enabling this nice representative fragment test ups the framerate from 2200 FPS to 3680 FPS compared to rasterizing the whole bounding boxes:

EDIT 20.11.:

Added “temporal coherence” fragment-shader-based occlusion culling to the demo.
This is also what’s implemented by Nvidia’s “Temporal Current Frame” variant: https://github.com/nvpro-samples/gl_occlusion_culling#result-processing
It works by first rendering the chunks that were designated visible in the last frame, which primes the depth buffer. Then the bounding boxes of all in-frustum chunks are rendered with early fragment testing and depth testing enabled, and color and depth writes disabled, using a fragment shader that sets a “visibility” flag in a shader storage buffer at that chunk’s index for each generated fragment.
Next, a “compute-like” vertex shader (no vertex buffer source and no fragment stage) is run to simply “collect” MDI draw-call structs from the list of in-frustum chunks, but only for those whose “visibility” flag was set by the previous pass, appending each MDI struct to an output shader storage buffer via an atomic counter backed by a buffer object.
That atomic counter’s buffer object is then used as the parameter buffer for a glMultiDrawElementsIndirectCountARB (core in OpenGL 4.6) draw call to finally draw all currently visible, non-occluded chunks.
This enables very efficient and quick occlusion culling with no flicker of disoccluded objects, since those are found by rendering the in-frustum bounding boxes every frame anyway.
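The collection pass above can be simulated on the CPU to see the data flow (this is only a stand-in for the GPU-side shader; the names, and using a plain chunk id in place of a full MDI command struct, are my simplifications):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class CullingCompaction {
    /**
     * CPU-side stand-in for the "compute-like" vertex shader: for each
     * in-frustum chunk whose visibility flag was set by the bounding-box
     * pass, append its chunk id (standing in for an MDI command struct)
     * to a compact output array via an atomic counter. Returns the count,
     * which on the GPU would feed the indirect-count parameter buffer.
     */
    static int compact(int[] inFrustumChunks, boolean[] visible, int[] outCommands) {
        AtomicInteger counter = new AtomicInteger(); // models the GL atomic counter buffer
        // parallel() mimics the unordered, concurrent shader invocations
        IntStream.range(0, inFrustumChunks.length).parallel().forEach(i -> {
            int chunk = inFrustumChunks[i];
            if (visible[chunk])
                outCommands[counter.getAndIncrement()] = chunk;
        });
        return counter.get();
    }
}
```

As on the GPU, the output order is unspecified; only the count and the set of appended commands matter.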

Video of that occlusion culling (note the FPS counter in the window’s title bar):

EDIT: 24.11.: Smooth fly over terrain (1440p):

EDIT: 25.11.: Collision detection and response with bunny hopping :slight_smile:

6 Likes

Any future plans for this project, or just messing around? Looks really cool :slight_smile:

Thanks! Right now I’m just polishing it and adding comments to the code, and will then have it be another demo in the LWJGL/lwjgl3-demos repo, showcasing a few extensions and techniques.
Maybe that codebase gives other people ideas about implementing an actual game with it. I’m not gonna do that. :slight_smile:

EDIT:
Sources for that voxel demo are up:
Java: https://github.com/LWJGL/lwjgl3-demos/blob/main/src/org/lwjgl/demo/game/VoxelGameGL.java (single-file, only dependencies are JOML and LWJGL 3)
GLSL: https://github.com/LWJGL/lwjgl3-demos/tree/main/res/org/lwjgl/demo/game/voxelgame

2 Likes

I worked on my kitchen game again. It’s making some good progress. =)

2 Likes

I became the second member to visit the forum for 100 consecutive days.

1 Like