What I did today

If you mean z-buffer techniques like shadow mapping, then yes. Just pick your preferred method of estimating/approximating the occluders (percentage-closer filtering, variance shadow mapping, shadow mapping by backprojection, …). All methods for area light soft shadows apply here.
Btw.: Here are some pics with multiple lights:

EDIT: Added a quick GGX raytraced image to validate that the analytic result is actually (very close to) correct. This is not yet the solution proposed in the “Combining Analytic Direct Illumination and Stochastic Shadows” paper, but simply fully raytraced (left is the LTC analytic solution, right is completely raytraced):

EDIT2: Here is a video:

6vH-SlKeYV8

Over the last 2-3 nights I implemented AABBs in my framework. They will be used for raycasting and object picking later, if I ever finish (start?!) my game :smiley:

f8EjNzs95q0
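Not the final framework code, but the slab-test ray/AABB intersection I have in mind for the picking looks roughly like this (class and method names are just for illustration):

```java
// Minimal ray vs. AABB slab test (illustrative sketch, not the framework code).
public final class Aabb {
    public float minX, minY, minZ, maxX, maxY, maxZ;

    // Returns true if origin + t*dir hits the box for some t in [0, maxT].
    // (Degenerate directions with a zero component are glossed over here.)
    public boolean intersectsRay(float ox, float oy, float oz,
                                 float dx, float dy, float dz, float maxT) {
        float tMin = 0.0f, tMax = maxT;
        float[] min = {minX, minY, minZ};
        float[] max = {maxX, maxY, maxZ};
        float[] o   = {ox, oy, oz};
        float[] d   = {dx, dy, dz};
        for (int i = 0; i < 3; i++) {
            float invD = 1.0f / d[i];
            float t0 = (min[i] - o[i]) * invD;
            float t1 = (max[i] - o[i]) * invD;
            if (invD < 0.0f) { float tmp = t0; t0 = t1; t1 = tmp; }
            tMin = Math.max(tMin, t0);
            tMax = Math.min(tMax, t1);
            if (tMax < tMin) return false; // slab intervals no longer overlap -> miss
        }
        return true;
    }
}
```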

Creating the game world. Painting some assets myself and using other assets from all over the internet.

Finally got ding dang Jetty https working.
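For anyone fighting the same thing, the embedded-Jetty variant boils down to roughly this. A minimal sketch assuming Jetty 9.4's SslContextFactory.Server API; keystore path and password are placeholders, and your setup may use XML config instead:

```java
import org.eclipse.jetty.server.*;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class HttpsJetty {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // Keystore holding the certificate; path and password are placeholders.
        SslContextFactory.Server ssl = new SslContextFactory.Server();
        ssl.setKeyStorePath("/path/to/keystore.jks");
        ssl.setKeyStorePassword("changeit");

        // SecureRequestCustomizer makes request.isSecure(), scheme, etc. correct over TLS.
        HttpConfiguration httpsConfig = new HttpConfiguration();
        httpsConfig.addCustomizer(new SecureRequestCustomizer());

        ServerConnector httpsConnector = new ServerConnector(server,
                new SslConnectionFactory(ssl, "http/1.1"),
                new HttpConnectionFactory(httpsConfig));
        httpsConnector.setPort(8443);
        server.addConnector(httpsConnector);

        server.setHandler(new org.eclipse.jetty.server.handler.DefaultHandler());
        server.start();
        server.join();
    }
}
```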

Learning curve FU.

There is an initial period where new terminology doesn’t quickly connect with the actual concepts it invokes. It’s easy to overlook things and make mistakes in these early stages. As one works with new words and concepts, the ability to hold them in one’s head and actually “think them”, not just mouth the syllables, improves. But that speed-up takes time. I’m still counting on fingers and toes with this stuff.

Next up: setting up an email server.

I made a page for NOKORIWARE on Facebook and we’re going to try and start posting weekly devblogs to it for our various projects:

Facebook is pretty accessible and easy to use, which is why I chose it. Also, it’s configured to automatically post links to each devblog on our Twitter, so if you prefer that platform, just follow our Twitter account:

https://twitter.com/RobotFarmGame

Likes and follows are greatly appreciated!

EDIT: Also, I’m looking for some coding help on a small side-project I work on during the weekends. We’re going to monetize it eventually but I need someone who can maintain it. The project is very simple so coders of all experience levels are welcome. If you’re interested, just message me on here and we’ll talk about it to see if you want to jump on board.

Today I implemented AABB recalculation when an entity is transformed (scaled, rotated). See more here.

Video (watch it in 1080!) (AABB recalculation):

qs3gCL92aBc

Video (watch it in 1080!) (AABB intersection point):

5U_lC9hgMUo
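For the curious, the recalculation itself is basically: transform all eight corners of the local-space box by the entity's model matrix and take the component-wise min/max. A rough sketch using JOML (not the exact engine code; names are illustrative):

```java
import org.joml.Matrix4f;
import org.joml.Vector3f;

public final class AabbRecalc {
    // Recompute a world-space AABB from a local-space AABB and the entity's model matrix.
    public static void recalculate(Vector3f localMin, Vector3f localMax,
                                   Matrix4f modelMatrix,
                                   Vector3f outMin, Vector3f outMax) {
        outMin.set(Float.POSITIVE_INFINITY);
        outMax.set(Float.NEGATIVE_INFINITY);
        Vector3f corner = new Vector3f();
        for (int i = 0; i < 8; i++) {
            // Pick one of the 8 corners of the local box via the bits of i.
            corner.set((i & 1) == 0 ? localMin.x : localMax.x,
                       (i & 2) == 0 ? localMin.y : localMax.y,
                       (i & 4) == 0 ? localMin.z : localMax.z);
            modelMatrix.transformPosition(corner); // local -> world space
            outMin.min(corner);
            outMax.max(corner);
        }
    }
}
```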

Created a little 32x32 torch icon :slight_smile:

Edit:

also a sword icon:

Off-topic, but why Facebook and not Instagram? Just a personal feeling, but I think Instagram + Twitter might work better depending on your target audience and the exact use case.

NOKORIWARE has an Instagram, but you can’t really post screenshots on there because they limit the image size and you can’t zoom in on the pictures. Plus the blog posts aren’t as readable either, and I wouldn’t be able to really dive into details. Plus there may be cases where I write a post and don’t want to include a picture.

I’ve basically tried every social media platform at this point, but Facebook is probably the best fit for me personally in terms of convenience + functionality. On top of that, I mostly started it anyway because my IRL friends wanted to keep up with it, and all of them have Facebook and read it even if they say they’re too cool for it. Lol

Created a small game because I wanted to.
Link: ChristmasPath

Screenshots:

Have been working on the script and implementation for a YouTube tutorial about raytracing with OpenGL, and I finally got a nice OBS + Visual Studio Code + browser window (with WebGL 2 rendering) setup. I chose JavaScript/TypeScript + WebGL + Webpack + livereload because it is a sooo much smoother experience compared to Java and… well… OpenGL is just OpenGL regardless of the language.
Video where I try the OBS setup:

mvEwlAwapvM

Have to say, I was (am?) a JavaScript hater, but in the last few years, whenever I needed a quick web server or literally anything running quickly, I always chose JavaScript. And if possible, I always choose TypeScript.

What kind of tutorials are you making?

Uploaded my Implementation of the LMAX SPSC RingBuffer on GitHub: https://github.com/VaTTeRGeR/JAtomicRingBuffer

It’s faster than ArrayBlockingQueue/LinkedBlockingQueue; I use it for passing tasks to worker threads.
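The repo has the real thing; here is just a toy sketch of the underlying SPSC idea (not the actual JAtomicRingBuffer API): a power-of-two array plus separate head/tail counters, where each counter is only ever written by one thread.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy single-producer/single-consumer ring buffer (capacity must be a power of two).
// Illustrative only; not the JAtomicRingBuffer API.
public final class SpscRingBuffer<T> {
    private final Object[] buffer;
    private final int mask;
    private final AtomicLong head = new AtomicLong(); // next slot to read (consumer only)
    private final AtomicLong tail = new AtomicLong(); // next slot to write (producer only)

    public SpscRingBuffer(int capacityPowerOfTwo) {
        buffer = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    // Called only by the single producer thread.
    public boolean offer(T value) {
        long t = tail.get();
        if (t - head.get() == buffer.length) return false; // full
        buffer[(int) (t & mask)] = value;
        tail.lazySet(t + 1); // ordered write publishes the element
        return true;
    }

    // Called only by the single consumer thread.
    @SuppressWarnings("unchecked")
    public T poll() {
        long h = head.get();
        if (h == tail.get()) return null; // empty
        int index = (int) (h & mask);
        T value = (T) buffer[index];
        buffer[index] = null;
        head.lazySet(h + 1); // ordered write frees the slot for the producer
        return value;
    }
}
```

The lazySet/ordered-store trick is the same one the Disruptor-style queues use: it avoids a full volatile write on the hot path while still publishing the slot in order.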

Like I wrote above: Raytracing with OpenGL.
I’ve searched YouTube a bit and actually did not find any good tutorial that has all of the following:

  • development of executable code in a language other than C/C++
  • explanation of the stuff
  • having raytracing as the topic, and not your standard good old rasterization

So, today was more illustration drawing, like this one:

Not in video-form, but there’s this cool tutorial by some guy named Kai :slight_smile:
Seems like it sadly never got a part 2

:slight_smile: yes, there are plenty of written tutorials, with and without code. If I were to recommend one, then I’d definitely say: the now-free PBR book and also Scratchapixel.
EDIT: Oh and I just found this while researching the advantages/disadvantages of Riemann sums compared to Monte Carlo integration: http://www.cs.uu.nl/docs/vakken/magr/2017-2018/files/PROB%20tutorial.pdf (really awesome article about the fundamentals of path tracing)
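To get a feel for what that article compares: a Riemann sum and a Monte Carlo estimator approximate the same integral, they just place their sample points differently (regular grid vs. random). A tiny made-up example for the integral of x² over [0,1] (exact value 1/3):

```java
import java.util.Random;

// Compare a midpoint Riemann sum and a Monte Carlo estimate of ∫ x^2 dx over [0,1].
public class IntegrationDemo {
    public static void main(String[] args) {
        int n = 1000;

        // Riemann sum: evaluate at the midpoint of n equally sized intervals.
        double riemann = 0.0;
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) / n;
            riemann += x * x * (1.0 / n);
        }

        // Monte Carlo: average f(x)/pdf(x) over n uniform samples (pdf = 1 on [0,1]).
        Random rng = new Random(42);
        double monteCarlo = 0.0;
        for (int i = 0; i < n; i++) {
            double x = rng.nextDouble();
            monteCarlo += x * x;
        }
        monteCarlo /= n;

        System.out.printf("Riemann: %.6f  Monte Carlo: %.6f  exact: %.6f%n",
                riemann, monteCarlo, 1.0 / 3.0);
    }
}
```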

This evening is more preparing of tutorial materials. In particular, for understanding the rendering equation and radiometry:

  • the three most frequently used terms (radiant flux, radiance and irradiance)
  • understanding what an integral is, and why we need a surface integral over the hemisphere
  • why the integral over the unit hemisphere is 2π, and why that is also the solid angle subtended by that hemisphere
  • why the irradiance with constant radiance L is π·L and not 2π·L
  • why a Lambertian BRDF is defined as c/π
  • why a BRDF can evaluate to infinity
  • what Monte Carlo integration is, and what a probability density function (pdf) is
  • why the pdf of a uniformly random vector over the hemisphere is 1/(2π)
  • and why the hell we need all of this when building a physically correct path tracer.
And I constantly lose grasp of certain things and have to re-read them.
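For reference, the small derivations I keep having to redo (assuming the usual spherical parameterization with θ measured from the surface normal):

```latex
% Solid angle of the unit hemisphere:
\int_{\Omega} \mathrm{d}\omega
  = \int_{0}^{2\pi} \int_{0}^{\pi/2} \sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi
  = 2\pi

% Irradiance from constant radiance L (the cosine factor is what turns 2*pi into pi):
E = \int_{\Omega} L \cos\theta \,\mathrm{d}\omega
  = L \int_{0}^{2\pi} \int_{0}^{\pi/2} \cos\theta \sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi
  = \pi L

% And why a Lambertian BRDF is c/pi: with f_r = c/pi, the total reflected fraction is
\int_{\Omega} \frac{c}{\pi} \cos\theta \,\mathrm{d}\omega = c \le 1
```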
Also, there is a nice explanation video about the rendering equation https://www.youtube.com/watch?v=eo_MTI-d28s , though I pm’ed Eric Arnebäck on Twitter about a slight error: a BRDF evaluating to 1 does not mean that all light from omega_i is reflected towards omega_o. If that were the case, the BRDF would essentially represent a perfect mirror and would need to be a Dirac delta function evaluating to infinity for that pair of directions, since the cosine-weighted BRDF still needs to integrate to <= 1 over the hemisphere.

EDIT: Just saw that the original James T. Kajiya paper “The Rendering Equation” from 1986 answers why basically all path tracers today can approximate the recursively defined rendering equation with an iterative process, following a light path over multiple bounces and accumulating the received light. It has to do with reformulating the so-called “Fredholm integral equation of the second kind” (of which the rendering equation is an example) into an infinite series (the Liouville–Neumann series), of which one can simply evaluate only the first n summands: the first summand is just the direct light from a light source to the eye, the second summand is light from a source to a point on a surface and then to the eye, the third summand is light hitting two surfaces before reaching the eye, and so on.
Until now, I had never understood why it works this way.
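In operator form (my paraphrase, not a quote from the paper): write the rendering equation as L = E + T·L, where E is the emitted radiance and T is the transport operator that reflects incoming radiance onward; then

```latex
% Rendering equation in operator form, and its Neumann series expansion:
L = E + T L
  = E + T (E + T L)
  = E + T E + T^{2} E + T^{3} E + \dots
  = \sum_{n=0}^{\infty} T^{n} E
```

so truncating the series after n terms gives exactly the “direct light + one bounce + two bounces + …” accumulation that an iterative path tracer performs.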

Dirty Dithering

some artifacts and wotnot, but otherwise this looks nice and retro

Made a little JavaFX serial console to test the jSerialComm library; I can attest that it is really nice to work with.

I want to use serial to talk to a DIY LoRa modem (STM32 + 2x LoRa) while developing and testing its firmware. The GUI that I’m going to build now will allow me to configure the modules automatically and run load/reliability tests.
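In case anyone wants to try it, opening a port and dumping whatever comes in is only a few lines. A minimal sketch assuming jSerialComm's SerialPort API; the port name and baud rate are placeholders:

```java
import com.fazecast.jSerialComm.SerialPort;

public class SerialEcho {
    public static void main(String[] args) {
        // Port name and baud rate are placeholders; adjust for your own modem.
        SerialPort port = SerialPort.getCommPort("/dev/ttyUSB0");
        port.setBaudRate(115200);
        port.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, 100, 0);

        if (!port.openPort()) {
            System.err.println("Could not open port");
            return;
        }

        // Read whatever arrives and print it to the console.
        byte[] buffer = new byte[256];
        while (true) {
            int read = port.readBytes(buffer, buffer.length);
            if (read > 0) {
                System.out.print(new String(buffer, 0, read));
            }
        }
    }
}
```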

Beginning crunch for two weeks at Volition starting today.