What I did today

FIRST SUCCESSFUL BLINDFOLD RUBIK'S CUBE SOLVE.

The satisfaction of opening your eyes and seeing it fully solved is incredible!
I’m also working on finalizing my FAT32, FAT16, SFS, and EXT3-4 drivers for my OS.

Today I continued my efforts to create a fully accurate resource workflow for model creation. With my Blender plugin mostly feature complete (at least for models, no animations yet) and normal maps now being interpreted exactly as bakers produce them, I’ve switched my focus back onto PBR and lighting. My goal is to take the textures produced by Substance Designer, import them into my engine and get results as close as possible to Substance’s own preview. At first it looked really wrong, but after figuring out how Substance uses its textures and a whole lot of Googling, I managed to get something very similar to the preview in Substance.

In the process, I ended up switching from Cook-Torrance to GGX for specular lighting, which seems to be by far the most dominant way of handling specular in physically based rendering. It’s a fair bit lighter on math after some optimizations, but that probably won’t matter once shadows are in use, as they’re by far the heaviest part of lighting. I can’t really show any screenshots until a certain press release though. :stuck_out_tongue:
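For reference, the core of GGX is its normal distribution term, which is compact enough to show here. This is a minimal Java sketch of the standard Trowbridge-Reitz formula, not the engine’s actual shader code, and the alpha = roughness² remapping is just one common convention:

[code]
// Minimal sketch of the GGX (Trowbridge-Reitz) normal distribution function.
// Assumes nDotH = dot(N, H) clamped to [0, 1] and the common alpha = roughness^2 remap.
static float ggxDistribution(float nDotH, float roughness) {
    float alpha  = roughness * roughness;
    float alpha2 = alpha * alpha;
    float d = nDotH * nDotH * (alpha2 - 1f) + 1f;
    return alpha2 / ((float) Math.PI * d * d);
}
[/code]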

Previously we’ve tried to create our own model converters and material editors, but in the end we simply don’t have the time to code, maintain and expand that kind of tooling with mostly just me working on it. Therefore, we’ve been working on compatibility with at least some mainstream tools, and on adhering to standards regarding PBR, normal mapping, etc. It’s all starting to take shape, which is really exciting! It’s so f**king hard to research though, because all the technical details are drowned out by artist buzzwords and layman explanations of how to use a certain checkbox in a certain program. Example: Substance was exporting “Metallic” maps, and I had no idea what they were for. I googled and just found ten variations of “metallic+roughness is a different workflow from specular+gloss” and then another ten variations of “X uses Y, but you can pretty much use whichever you prefer”. It took me a long time before I found the following on GameDev.net:

[quote]Metalness is a replacement for the “specular map”, or the “F0” value – it’s an easier way of authoring specular maps. It also ties in with the main colour texture though.
Metalness of 0/black means that you should use an F0 value of 0.03 (8/255 in a specular map).
Metalness of 1/white means that you should read the main “color” texture and use that as F0 (and also replace the “diffuse color” with black).
Metalness of grey means you do something in-between those two extremes (diffuse color becomes partly black, specular color is a lerp of 0.03 and the color map).
[/quote]
So frigging simple, yet completely buried, just like the technical details of tangent space normal mapping. Anyway, I’m thinking of switching to storing metalness in my G-buffer instead, since it allows colored specular reflections. Before, I stored a diffuse color and just a single specular intensity, which means the specular reflection is always white. Some metallic objects have colored specular reflections though (gold and copper, for example), which won’t look good with that scheme. “Metalness” essentially switches between two “modes”, one for metals and one for non-metals. Non-metals get a hardcoded specular intensity of around 0.03, and the diffuse texture is used for diffuse lighting. Metals instead interpret the diffuse texture as the specular intensity and have zero diffuse lighting. TL;DR: metalness decides whether the diffuse texture is used as the diffuse or the specular color.
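To make that rule concrete, here’s a hedged Java sketch of the conversion from base color + metalness to diffuse color + F0, following the 0.03 dielectric value from the quote. The names are mine, not from any particular engine:

[code]
// Metalness rule from the quote: dielectrics get F0 = 0.03 (~8/255 in a specular map)
// and keep the base color as diffuse; metals use the base color as F0 and get black diffuse.
// Grey metalness lerps between the two extremes. Illustrative names, not engine code.
static void metalnessToDiffuseF0(float[] baseColor, float metalness,
                                 float[] outDiffuse, float[] outF0) {
    final float DIELECTRIC_F0 = 0.03f;
    for (int i = 0; i < 3; i++) {
        outDiffuse[i] = baseColor[i] * (1f - metalness);                       // metal -> black diffuse
        outF0[i] = DIELECTRIC_F0 + (baseColor[i] - DIELECTRIC_F0) * metalness; // lerp(0.03, baseColor)
    }
}
[/code]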

In addition, I have an idea for blending decal normals and lighting parameters, so that decals can modify the lighting of a surface and not just its color. The problem is that I use a tight packing in the G-buffer:
Texture1: Diffuse RGB, alpha unused.
Texture2: Packed normal xy, roughness, specular intensity.
Texture3: Emissive RGB, primitive ID (for SRAA).

The problem here is that the normal isn’t blendable, and the specular isn’t easily modifiable either, since it’s stored in the alpha channel: if I write an alpha value from the shader, it’ll also get written as the specular intensity! We also don’t want to modify the primitive ID stored for SRAA, as it’s compared bitwise with the ID from a second pass, so any modification here will break SRAA. However, I think there’s a solution to this problem, convoluted as it is:

  • Per-render-target blending settings aren’t supported on OGL3 hardware, but per-target color write masks are! We can use those to prevent the primitive ID from being overwritten by the alpha value written to that target.
  • Instead of packing the normal with spheremap projection, I can encode the full XYZ normal and store roughness as its length (see the sketch after this list). This doesn’t use any extra memory, but has the benefit of being somewhat linearly blendable! There will be some normal distortion because the normals have different lengths, and the blended vector may come out shorter, which affects roughness a bit, but it’s all 100x better than the pure garbage you get from blending spheremap-encoded values.
  • Lastly, we need to fix the specular blending. Since I can just mask out the alpha channels of Texture1 and Texture3, I will try to use blend constants to at least allow a single specular value for the entire decal, which is better than nothing.
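The per-target write masks in the first bullet correspond to glColorMaski, which is core in OpenGL 3.0. As for the second bullet, here’s a minimal Java sketch of the length-as-roughness encoding, assuming a unit normal, roughness in (0, 1], and a render target storing values remapped from [-1, 1] to [0, 1] (that storage remap is my assumption):

[code]
// Sketch: store roughness as the LENGTH of the normal, so that hardware blending
// of decals produces an approximately meaningful direction and roughness.
static float[] encodeNormalRoughness(float nx, float ny, float nz, float roughness) {
    float len = Math.max(roughness, 1e-3f); // avoid a degenerate zero-length normal
    return new float[] { nx * len * 0.5f + 0.5f,
                         ny * len * 0.5f + 0.5f,
                         nz * len * 0.5f + 0.5f };
}

static float[] decodeNormalRoughness(float r, float g, float b) {
    float x = r * 2f - 1f, y = g * 2f - 1f, z = b * 2f - 1f;
    float len = (float) Math.sqrt(x * x + y * y + z * z);
    // The direction is the normalized vector; the roughness is its length.
    return new float[] { x / len, y / len, z / len, len };
}
[/code]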

EDIT: When I move over to storing a metalness value instead, it will simply be a drop-in replacement for the specular value. It should work very well, since most decals will add either a metallic OR a non-metallic detail, so the single-value-per-decal limitation won’t be a big deal at all in that case.

Errr, this turned into a bit of a rant, but ehhh…

AO is actually the opposite of global illumination, although it is often grouped together with GI. With GI, you start with zero ambient lighting and add up sources of ambient light. With AO, you assume a certain amount of ambient light and check whether something nearby would be blocking it.
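A toy Java sketch of the contrast, with all names illustrative:

[code]
// GI starts at zero and accumulates incoming indirect light.
static float giAmbient(float[] sampledIndirectLight) {
    float sum = 0f;
    for (float indirect : sampledIndirectLight) sum += indirect;
    return sum;
}

// AO starts from an assumed ambient level and attenuates it by occlusion in [0, 1].
static float aoAmbient(float assumedAmbient, float occlusion) {
    return assumedAmbient * (1f - occlusion);
}
[/code]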

Nathan Reed did a little mile-high overview of lighting: http://computergraphics.stackexchange.com/questions/3955/physically-based-shading-ambient-indirect-lighting/3959#3959

Do you do image-based lighting along with PBR?

Can’t say I’m not jealous :stuck_out_tongue:

Congratulations, that’s a huge accomplishment!

We got Orsus rendering in NNGINE at 100%. I made us a cool Substance Painter export preset that exports maps specially for NNGINE, so it’s pretty much plug-and-play at this point, especially with his nifty Blender plugins on top of all this.

We also got this model viewer working, so you can preview how your model will look in NNGINE. Thanks to Painter’s naming scheme system, we made the viewer capable of reading the names that the NNGINE preset creates and loading the maps automatically.
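As a sketch of the kind of name matching such a viewer can do, here’s what it might look like in Java. The suffixes below are placeholders, not the actual names the NNGINE preset emits:

[code]
// Hypothetical name-based slot matching for textures exported by a Painter preset.
// The suffixes are illustrative; the real NNGINE preset's names aren't shown here.
static String textureSlot(String fileName) {
    if (fileName.endsWith("_BaseColor.png")) return "diffuse";
    if (fileName.endsWith("_Normal.png"))    return "normal";
    if (fileName.endsWith("_Roughness.png")) return "roughness";
    if (fileName.endsWith("_Metallic.png"))  return "metallic";
    return null; // unrecognized map
}
[/code]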

Really cool stuff! It feels like we have a “real” modeling pipeline for the first time ever.

Not yet, since we don’t have a real map editor. I was thinking of doing some simple prebaked stuff once we have that up and running.

Got an endless, non-repeating campfire sfx to play using LWJGL/OpenAL streaming.

The mechanics of loading direct ByteBuffers are a bit alien to what I’ve been doing in my Java audio, so getting this working has been a bit of a conceptual leap.

I wish the underlying code or natives provided some sort of “notify” when a ByteBuffer is consumed. I’ve read that for good throttling (audio data can be generated much faster than it will be consumed), a notify-based approach should be a more efficient way to go than continually repolling for an open slot. If that were implemented, OpenAL’s latencies could perhaps be improved overall. But I’m still too new to this to know if my thoughts are on base or not.
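For context, the repolling approach looks roughly like this with LWJGL 3-style OpenAL bindings; fillNextChunk() is a hypothetical placeholder for whatever generates or decodes the audio:

[code]
import java.nio.ByteBuffer;
import org.lwjgl.openal.AL10;

final class StreamPump {
    // Hypothetical helper: write the next chunk of 16-bit PCM into the scratch buffer.
    static void fillNextChunk(ByteBuffer scratch) { /* generator/decoder goes here */ }

    // OpenAL offers no completion callback, so each tick we poll how many queued
    // buffers the source has consumed, then refill and requeue them.
    static void pump(int source, ByteBuffer scratch, int sampleRate) {
        int processed = AL10.alGetSourcei(source, AL10.AL_BUFFERS_PROCESSED);
        while (processed-- > 0) {
            int buffer = AL10.alSourceUnqueueBuffers(source); // reclaim a drained buffer
            fillNextChunk(scratch);
            AL10.alBufferData(buffer, AL10.AL_FORMAT_MONO16, scratch, sampleRate);
            AL10.alSourceQueueBuffers(source, buffer);        // hand it back to OpenAL
        }
    }
}
[/code]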

Maybe this isn’t provided because the underlying native is written in C, not Java, and there is no intervening Java layer for this function.

More SSAO love:

https://anuj-rao.tinytake.com/media/3d294c?filename=1473222239919_07-09-2016-12-21-20.png&sub_type=thumbnail_preview&type=attachment&width=700&height=423&_felix_session_id=e1342aafa898339600b7fc4058be7be2&salt=OTU2MDk0XzQwMDgyNjg

It looks like you’re using JavaFX. How are you going to integrate JavaFX with OpenGL? There’s no OpenGLPanel or anything…

I’m not going to include an OpenGL display in the editor; the editor just contains the list of actions, the entities and the resources. Scenes are created either with the Tiled map editor or, in the case of 3D, by launching the game itself, which lets the user place objects with the mouse and export to a file that is loaded later.

The game also runs in a separate window and as a separate process. Basically, the editor exports to JSON files, and the player (the game player app, or the runner) loads the JSON file and starts the game.
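A minimal sketch of that handoff on the runner side, assuming Gson for deserialization; the Scene fields and file layout are placeholders, not the actual format:

[code]
import com.google.gson.Gson;
import java.io.FileReader;
import java.io.Reader;

public final class SceneLoader {
    // Placeholder shape of the exported data; the real format isn't shown here.
    static class Scene { String name; /* entities, actions, resources... */ }

    // The runner just deserializes the editor's exported JSON and starts from it.
    public static Scene load(String path) throws Exception {
        try (Reader reader = new FileReader(path)) {
            return new Gson().fromJson(reader, Scene.class);
        }
    }
}
[/code]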

https://www.youtube.com/watch?v=IZoV8-vgEIo

(Ok so it was yesterday. Bite me)

More actual in-game footage. Lord help us.

Cas :slight_smile:

Day 1 of 2 done for the New Zealand Game Developers Conference.
Random tidbits:

  • Met up again with delt0r
  • Went to an amazing UX/UI talk
  • Went to a talk by the Cuphead lead designer (Roquen has posted the trailer of this a few times)
  • Met one of the Path of Exile designers
  • Generally met lots of cool devs working on awesome things
  • Lots of free(ish) food and drink. Probably had too much coffee…

Tomorrow should be good, looks like there is an emphasis on VR, which is right up my alley…

I came up with a sweet function:


setUpAndRun("Game", new Tx(iconBufferedImage), 500, 500, Utils.createPath("4gotten/Tests"), new Tasker(){
    @Override public void load(){ /* load resources */ }
    @Override public void logic(){ /* game logic */ }
    @Override public void render(){ /* rendering */ }
    @Override public void end(){ /* close streams etc */ }
});

This basically extracts the LWJGL natives into a per-user directory (%APPDATA% on Windows), opens up a frame with the name “Game”, width 500, height 500 and icon iconBufferedImage, calls load() once, then calls logic() and render() repeatedly until the frame is closed (or closing is otherwise requested), and finally calls end() and terminates the application.

It’s a bit superfluous but I thought it was cool 8)

J0 :slight_smile:

I drew a gamepad in Inkscape which I’d like to use in SilenceEngine. Now I’m going to map the different types of physical gamepads onto it, starting with a cheap gamepad that I own.

I’d also like to ask you guys whether you’d be willing to help me map your kind of gamepad too. Please PM me if you’re interested.

Day 2 of 2 done for the New Zealand Game Developers Conference.
Random tidbits:

  • Met HeroesGraveDev for the first time
  • Saw Dean Hall (made DayZ, founded RocketWerkz, made Out of Ammo, employed kaffiene)
  • Went to a panel with an accountant (specializing in tax) and two lawyers (an IP/software one and a patent one). Very interesting, but very scary.
  • Saw the developer of Monument Valley
  • Keynote by Noah Falstein (Chief Game Designer at Google) about VR
  • Saw the developer of Mini Metro
  • It seems like half the “experienced” people there worked for Ubisoft at one point or another
  • Pretty sure I had enough of the free coffee just to cover the ticket price…

If there are any NZ devs around here who elected not to go this year, go next year.
It’s worth the ticket price just to talk to all these very interesting people.

Damn, I wish I could go to NZ and attend NZGDC. Unfortunately, that’s only possible in my dreams… for now.

Don’t worry Anuj Rao, let’s meet there at some point in the future. :point:

I’m about 18,000 km from NZ… pretty far away, sadly. :slight_smile:

Found this graphics breakdown of DOOM (2016). The graphics pipeline is crazy; anyone with an interest in OpenGL/Vulkan should check it out.