What I did today

I finally recorded a video of my instancing-compatible GPU skinning system :slight_smile: It's capable of blending up to 4 animations per entity, but I don't have example data, so a single animation per object has to suffice for demo purposes. Each instance has separate materials, animation controllers (different playback speeds, for example) and so on.

AgxddJtSVx0

My occlusion and frustum culling mechanisms are currently deactivated. The animation state (i.e. the current animation frame) is calculated on the CPU for up to 4 weighted animations per entity. On update (when the animation frame changes), the data is (multi-)buffered to the GPU. The GPU then traverses the bones from the buffers, interpolates, blends the animations and so on, so it stays quite cheap.
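Conceptually, the per-instance data that gets buffered on each update looks something like this (a simplified sketch only; the actual field names, layout and buffering code differ):

// Simplified sketch of the per-instance animation state that is
// (multi-)buffered to the GPU whenever an animation frame changes.
import java.nio.ByteBuffer;

class InstanceAnimationState {
    static final int MAX_BLENDED_ANIMATIONS = 4;

    final int[]   animationIndex = new int[MAX_BLENDED_ANIMATIONS];   // which clip
    final float[] frame          = new float[MAX_BLENDED_ANIMATIONS]; // CPU-side playback position
    final float[] weight         = new float[MAX_BLENDED_ANIMATIONS]; // blend weight, summing to 1

    // Pack one instance into the shared instance buffer. The GPU reads this,
    // fetches the bone transforms for each clip/frame, interpolates between
    // frames and blends the clips by weight.
    void writeTo(ByteBuffer target) {
        for (int i = 0; i < MAX_BLENDED_ANIMATIONS; i++) {
            target.putInt(animationIndex[i]);
            target.putFloat(frame[i]);
            target.putFloat(weight[i]);
        }
    }
}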

For each instance, an AABB is calculated, so in theory every instance can be culled with an instance-aware culling mechanism. I'm currently implementing this, but it's not an easy task :slight_smile:
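The per-instance test itself is the standard AABB-vs-frustum-planes check; the hard part is feeding the results back into the instanced draw calls. A minimal sketch of the check (plane layout and class names are just for illustration):

// Minimal AABB-vs-frustum test: the AABB is culled if it lies entirely
// on the negative side of any of the six frustum planes.
// Planes are (a, b, c, d) with the normal pointing into the frustum.
class Aabb {
    float minX, minY, minZ, maxX, maxY, maxZ;
}

class FrustumCuller {
    /** planes: 6 rows of {a, b, c, d}, extracted from the view-projection matrix. */
    static boolean isVisible(float[][] planes, Aabb box) {
        for (float[] p : planes) {
            // Pick the AABB corner farthest along the plane normal (the "positive vertex").
            float px = p[0] >= 0 ? box.maxX : box.minX;
            float py = p[1] >= 0 ? box.maxY : box.minY;
            float pz = p[2] >= 0 ? box.maxZ : box.minZ;
            if (p[0] * px + p[1] * py + p[2] * pz + p[3] < 0) {
                return false; // completely outside this plane
            }
        }
        return true; // intersects or is inside the frustum
    }
}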

Implemented isometric projection in my application:

This took MUCH more time than it should have, because the codebase is pretty massive and many years old, with the assumption that only two kinds of camera exist (top-down and FPP) baked in everywhere.

Did some more work on my game.

oxdzuHwnB9s

Lookin good man!

Came up with this fun one liner today while working on some stuff:


n = sin(asin((n / max + (2 * PI / (frameRate * 100))) % 1)) * max;

which kinda generalizes to


n = sin(asin((n / max + t) % 1)) * max

It's a one-line stepper function that interpolates linearly from 0 to max, moving in steps of t * max every frame, where 0 < t < 1, and looping back to 0 when n reaches max. It seemed useful. Initialize n, put the line in your draw method and watch it go.
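For example, in a Processing-style sketch it could be used like this (illustrative only; note that sin(asin(x)) == x for x in [0, 1], so the line reduces to a plain wrapping lerp, n = ((n / max + t) % 1) * max):

// n sweeps linearly from 0 to max and wraps back to 0 every frame.
float n = 0;
float max;

void setup() {
  size(400, 100);
  max = width;
}

void draw() {
  background(0);
  n = sin(asin((n / max + (2 * PI / (frameRate * 100))) % 1)) * max;
  ellipse(n, height / 2, 10, 10);
}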

I've been rewriting bits of Robot Farm's framework to allow for faster and more robust testing. One of the benefits so far is that I've been able to virtually turn the game into a visual novel, which makes programming the dialogue way faster. I can test all of the dialogue in the game with a 100-line test program.

Gentle smiling man politely suggests you make haste, or we're all gonna die (soonish, like in his dream).

SdKfz.101b and white mannequins in the pacific, rando-island, 1944, colorized

I just continued working on this project again. The terrain rendering is now in the main game engine, but there's still so much left to plan out. I'm focusing on building a graphics engine right now, since I still don't have a crystal-clear goal for this project beyond how it should look and feel.

I always wanted to have a large-scale online WWII RTS sandbox where battles take hours or days of preparation: actually getting the tanks and infantry to the battlefield with trains and boats, and doing air raids, scouting and artillery strikes beforehand. And all of that with multiple players (generals? commanders?) versus an AI enemy in real time.

Fights between individual units will be determined by dice rolls, with nice effects to give the illusion of firing projectiles; there will be no real-time physics simulation at all, though.
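Conceptually something like this (a toy sketch; the numbers and the resolution rule are placeholders, not an actual design):

import java.util.Random;

// Toy sketch of dice-based combat resolution: roll against the attacker's
// strength vs. the defender's armor, no physics involved. The projectile
// and tracer effects would be purely cosmetic.
class CombatResolver {
    private final Random dice = new Random();

    /** Returns the damage dealt; 0 means the shot missed or bounced. */
    int resolveShot(int attackStrength, int targetArmor) {
        int roll = dice.nextInt(6) + 1; // 1d6
        return Math.max(0, attackStrength + roll - targetArmor);
    }
}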

I'm thinking of something in between 4X and RTS, but with slowly evolving battles where the AI does a lot of the small-scale fighting; micromanagement wouldn't be necessary, but would be possible to a degree.

This thing is going to be finished juuust after HL3 gets released.

Been looking at JavaScript game libraries. Every one I've encountered thus far is the same: register the assets you want to use. Register the sprites you want to render. Everything is held together with string identifiers. :persecutioncomplex:

I just want to load and display images without having to wrestle with a scene graph. Is that too much to ask?

Why not just use the Canvas API directly?

Nothing is stopping you. Use the DOM as is, or use the Canvas API. Or use a rendering library like pixi.js. Or use any of a myriad of other libraries. Or use a game framework library.

http://jin.fi/projects/Misc/puzzle/

http://tetris.jin.fi/
( live coding of it: https://www.youtube.com/watch?v=8agxceEtRRU )

http://ray.jin.fi/
( live coding of it: https://www.youtube.com/watch?v=beWpbN9AZ_M )

http://flapmmo.jin.fi/

e.g. this was made without any external frameworks or libraries: http://www.littlewargame.com/

The browser is just a weird menagerie of drawing/painting tools. Mainly/loosely SVG, CSS and Canvas, bound together by jesus tape (the DOM).

Thanks guys, honestly using the Canvas API never entered my mind. That's what you get for researching tools at 12 AM :slight_smile:

I'm learning Scene2d within LibGDX. I've been a network engineer for ~15 years, and I'm finally taking a hobby I've tinkered with over the years and developing it into a workable skill. Three months into serious practice, my 3rd run at a game seems to be coming together. I'm building UI stuff for context menus and making skins in Skin Composer to style the UI.

Just swapped in OpenSimplexNoise for Gustafson's SimplexNoise implementation, for the few spots where I am using 3D. Thought it might be a good idea to test/do this before making the program commercially available.

It didn't noticeably impact processing speed.

The animated clouds that resulted were a bit more soft-textured than before, but the softness is actually kind of nice. I made a slight adjustment, mixing the higher octaves at greater relative amplitudes, and increased the increment I am using to traverse the noise space. It looks good.
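The mixing is just the usual fractal octave sum; roughly like this, assuming KdotJPG's OpenSimplexNoise with its eval(x, y, z) method (the gain and octave values here are illustrative, not the ones I actually use):

// Assumes KdotJPG's OpenSimplexNoise class is on the classpath.
// Advance z by a small increment each frame to animate the clouds.
class CloudNoise {
    /** Fractal (octave) sum of 3D OpenSimplex noise, normalized to roughly [-1, 1]. */
    static double sample(OpenSimplexNoise noise, double x, double y, double z, int octaves) {
        double sum = 0, amplitude = 1, frequency = 1, norm = 0;
        for (int i = 0; i < octaves; i++) {
            sum  += amplitude * noise.eval(x * frequency, y * frequency, z * frequency);
            norm += amplitude;
            amplitude *= 0.6; // gain above the usual 0.5 gives the higher octaves more relative weight
            frequency *= 2.0;
        }
        return sum / norm;
    }
}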

Iā€™m happy.

[EDIT: made a little video demo of the effect, but the result is kind of blurry. Didn't realize we couldn't go full screen; I should have made it larger. The video starts with a slow drone using a common tanpura pattern and a night sky background. Then I switch over to the day background and load a more complex loop. I'm not attempting to show off any of the intonation-modification aspects in this vid. Just some twinkling stars and floating clouds.]

KDDi2D3yeYU

Implemented reversed depth and infinite far plane.

CPSJlK_OTwg

Details:

  • 5 square kilometers.
  • 1M triangles, 4 * 1024 shadow maps, Volumetric light and clouds, PBR, FXAA, SSAO, Dynamic skydome.
  • No frustum/occlusion culling.
  • On a GTX 750Ti.
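For anyone curious, the setup boils down to an infinite, reversed-Z projection matrix combined with a GL_GREATER depth test and a depth clear value of 0. A minimal sketch, assuming [0, 1] clip-space depth (e.g. via glClipControl) and a symmetric FOV; not the exact code from the engine:

class ReversedZProjection {
    /**
     * Infinite far plane + reversed depth: the near plane maps to depth 1,
     * z -> -infinity maps to depth 0. Pair with a [0, 1] clip-space depth
     * (e.g. glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE)), glDepthFunc(GL_GREATER)
     * and clearing the depth buffer to 0 instead of 1.
     */
    static float[] infiniteReversedZ(float fovYRadians, float aspect, float near) {
        float f = (float) (1.0 / Math.tan(fovYRadians * 0.5));
        return new float[] {      // column-major, right-handed view space
            f / aspect, 0, 0,  0,
            0,          f, 0,  0,
            0,          0, 0, -1,
            0,          0, near, 0
        };
    }
}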

Skimmed this post: https://astojanov.github.io/blog/2017/12/20/scala-simd.html

[quote="Roquen,post:5776,topic:49634"]
What a waste of time reinventing an inferior OpenCL running on the CPU. You could do this in LWJGL since 2010.

Had way too many of these on Christmas :emo:

a1eYpt1g88E

I made this in libGDX. It gets big frame drops when you go crazy with the line, but it looks decent.

Maybe I'll turn it into a puzzle game of sorts, or maybe a 2D sandbox where you can spawn stuff.

@Spasi: Programs that use the GPU tend to be GPU bound.

[quote="Roquen,post:5779,topic:49634"]
Not sure what you mean. I'm talking about running OpenCL on the CPU, doing CPU work, either because it's not GPU-friendly or because the payload is too small to be worth the CPU/GPU transfer overhead. Both Intel's and AMD's runtimes are mature and do fantastic vectorization.

This has actually been my motivation for maintaining the OpenCL bindings: enabling Java developers to easily write cross-platform SIMD programs. GPU applications are usually best served by GL/Vulkan compute or CUDA.