Tried out basic AI
MP4: http://i.gyazo.com/3925cc7c0f4c65663c2dafd7f2f9587e.mp4
Not today, but in the last 24 hours: scaled up my maps in Fog: Survive by a factor of 16 and added gravity. As a result, I discovered my ground is very bumpy…
(Probably because I am using a 4096x4096 heightmap, while the jME docs say the maximum is 1024… What do they know anyway?)
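(Not from the original post, just a thought: if the bumpiness really is the oversized heightmap, one cheap thing to try is a quick box-filter smoothing pass over the height data before building the terrain. A minimal plain-Java sketch, assuming the heights live in a size*size float array; the jME terrain setup itself is omitted.)
[code]
// Hypothetical helper: smooth a size*size heightmap with a 3x3 box filter
// to take the worst of the bumpiness out before building the terrain.
public static float[] smoothHeightMap(float[] heights, int size) {
    float[] out = new float[heights.length];
    for (int y = 0; y < size; y++) {
        for (int x = 0; x < size; x++) {
            float sum = 0f;
            int count = 0;
            // Average the 3x3 neighbourhood, clamped at the edges.
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < size && ny >= 0 && ny < size) {
                        sum += heights[ny * size + nx];
                        count++;
                    }
                }
            }
            out[y * size + x] = sum / count;
        }
    }
    return out;
}
[/code]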
Update on my order-independent transparency algorithm.
I've been working on improving the quality and optimizing the algorithm. Today I found an optimization that improves performance by almost 300%, making the algorithm an extremely good alternative to Weighted Blended Order-Independent Transparency. This optimization can also be applied dynamically, reducing quality only when there is a massive amount of overdraw due to particles covering the whole screen, in which case the quality drop isn't noticeable.
Here are some numbers for full-resolution 1080p rendering with 500 000 static particles. All these algorithms are GPU limited unless otherwise noted.
[tr][td]Algorithm[/td][td]Memory usage[/td][td]Performance[/td][td]Notes[/td][/tr]
[tr][td]CPU sorted[/td][td]15.8 MB[/td][td]6 - 56 FPS (heavily CPU limited)[/td][td]Good quality, but too sharp depth ordering, causing popping when the order of two intersecting particles suddenly changes[/td][/tr]
[tr][td]Weighted Blended Order-Independent Transparency[/td][td]35.6 MB[/td][td]155 FPS[/td][td]Very bad. Either weak depth ordering (looks like an even blend regardless of depth), or heavy color bleeding when overlapping particles have a high depth difference. No perfect depth weight function.[/td][/tr]
[tr][td]Weighted Blended Order-Independent Transparency with 8 layers[/td][td]174.0 MB[/td][td]~69 FPS (estimate)[/td][td]Bad. Better depth ordering, but complex to implement and artifacts are only suppressed slightly. Not stable under translation. Used in Insomnia at the moment.[/td][/tr]
[tr][td]My OIT algorithm at low quality (2x)[/td][td]39.1 MB[/td][td]110 FPS[/td][td]Better depth ordering compared to WBOIT, noticeably less color bleeding on overlapping particles. No popping when depth order changes.[/td][/tr]
[tr][td]My OIT algorithm at medium quality (4x)[/td][td]42.6 MB[/td][td]75 FPS[/td][td]Significantly better depth ordering, significantly reduced color bleeding. No popping when depth order changes. Superior to CPU sorted particles.[/td][/tr]
[tr][td]My OIT algorithm at high quality (8x)[/td][td]49.7 MB[/td][td]44 FPS[/td][td]Extremely high quality depth ordering. No noticeable color bleeding. No popping when depth order changes. Superior to CPU sorted particles.[/td][/tr]
In Insomnia, I render all transparent geometry at half resolution (= 1/4th the pixel area), so expect performance of around 3-4x these values in a real game that renders at half resolution. This test was also extremely stressful, with an average pixel overdraw of around 15-16; in other words, around 33 million pixels were filled (≈2.07 million screen pixels × 16). A large number of pixels were also overdrawn over 200 times, meaning that per-pixel linked lists and depth peeling would not be able to handle this scene correctly or with good performance.
I did notice that the technique does require more memory/performance to handle certain situations (hence the low-medium-high quality listing above), so that earlier claim turned out to be false. An advantage of this, though, is that I can make a performance/quality trade-off if I want to, making the algorithm more adaptable. All my other claims still stand. Additive particles require a different shader to be rendered, but they are also noticeably cheaper to draw when it comes to fill rate. I have yet to implement them, but the shader is essentially already written in my head.
EDIT: Oh, another limitation I hadn't mentioned: it cannot handle 100% opaque geometry. I currently have it capped at 99% opaque (= alpha 0.99).
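(Side note, not from the post: in practice that cap can just be a clamp applied before anything is handed to the transparent pass. A trivial sketch, with the constant and method names made up:)
[code]
// Hypothetical sketch: the OIT pass cannot handle fully opaque geometry,
// so alpha is capped just below 1.0 before anything is submitted to it.
static final float MAX_OIT_ALPHA = 0.99f;

static float clampForOIT(float alpha) {
    // alpha == 1.0 (100% opaque) is not supported by the transparent pass.
    return Math.min(alpha, MAX_OIT_ALPHA);
}
[/code]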
EDIT2: It turns out that the optimization I wrote about not only quadruples my FPS, it also eliminates all remaining color bleeding, which I did not expect at all…
I'm not really a user of such tech, but I always enjoy hearing about such algorithms. Staying tuned!
What do you plan to do with the algorithm? Write a paper on it? Keep it secret? Share it? It sounds very interesting and I can't wait to see how well it works in different use scenarios.
Since the algorithm has matured a lot and is now faster, higher quality, less memory-hungry and easier to implement than what I currently use in Insomnia (a layered version of WBOIT), I intend to implement it in our engine, both to make sure it looks as good as it does in my test program and to provide a more real-world example. Then I intend to write a proper paper on it, possibly with the help of someone from my university.
Made a wallpaper to get myself hyped for Ludum Dare 31.
http://ludumdare.com/compo/wp-content/uploads/2014/11/LudumDare16x10.png
It starts only 24 days from now.
16:10 wallpaper (1920x1200)
16:9 wallpaper (1920x1080)
16:10 alternative (1920x1200)
16:9 alternative (1920x1080)
16:10 PSD (1920x1200)
16:9 PSD (1920x1080)
Discovered and got interested in Seam Carving, going to try and make a fast implementation.
Got Sobel as the energy function so far (sitting in the bottom 8 bits of an INT_BGR image):
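(Not the code from that post; just a minimal sketch of what such a Sobel energy pass might look like in plain Java, with the kernel constants and helper names being my own. It reads luminance from the source image and writes the clamped gradient magnitude into the low byte of a TYPE_INT_BGR image.)
[code]
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// Sobel energy pass: per-pixel gradient magnitude, clamped to 8 bits and
// packed into the bottom byte of a TYPE_INT_BGR image.
static BufferedImage sobelEnergy(BufferedImage src) {
    int w = src.getWidth(), h = src.getHeight();
    BufferedImage energy = new BufferedImage(w, h, BufferedImage.TYPE_INT_BGR);
    int[] out = ((DataBufferInt) energy.getRaster().getDataBuffer()).getData();

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int gx = 0, gy = 0;
            // 3x3 neighbourhood luminances, clamped at the borders.
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int lum = luminance(src, clamp(x + dx, w), clamp(y + dy, h));
                    gx += lum * SOBEL_X[dy + 1][dx + 1];
                    gy += lum * SOBEL_Y[dy + 1][dx + 1];
                }
            }
            int mag = Math.min(255, (int) Math.sqrt(gx * gx + gy * gy));
            out[y * w + x] = mag; // energy lives in the bottom 8 bits
        }
    }
    return energy;
}

static final int[][] SOBEL_X = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
static final int[][] SOBEL_Y = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

static int clamp(int v, int max) { return Math.max(0, Math.min(max - 1, v)); }

static int luminance(BufferedImage img, int x, int y) {
    int rgb = img.getRGB(x, y); // always returns a packed ARGB value
    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
    return (r * 299 + g * 587 + b * 114) / 1000; // integer luma approximation
}
[/code]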
Crude, horizontal-only single-threaded version is working!
Version showing the pixels being removed: https://gfycat.com/GrouchySpryIriomotecat
I literally spent about 20-30 minutes making this to procrastinate homework.
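(For anyone who hasn't seen the algorithm before, here is a rough sketch of the dynamic-programming core, not the author's code. It assumes the energy map is an int[h][w] array and removes a single 8-connected vertical seam, i.e. it shrinks the image horizontally by one column; whether that matches the "horizontal-only" direction above is an assumption.)
[code]
// Find the minimum-energy 8-connected vertical seam via dynamic programming.
static int[] findVerticalSeam(int[][] energy, int w, int h) {
    int[][] cost = new int[h][];
    cost[0] = energy[0].clone();
    for (int y = 1; y < h; y++) {
        cost[y] = new int[w];
        for (int x = 0; x < w; x++) {
            // Cheapest of the three pixels above (up-left, up, up-right).
            int best = cost[y - 1][x];
            if (x > 0) best = Math.min(best, cost[y - 1][x - 1]);
            if (x < w - 1) best = Math.min(best, cost[y - 1][x + 1]);
            cost[y][x] = energy[y][x] + best;
        }
    }
    // Backtrack from the cheapest bottom-row pixel.
    int[] seam = new int[h];
    int x = 0;
    for (int i = 1; i < w; i++) if (cost[h - 1][i] < cost[h - 1][x]) x = i;
    seam[h - 1] = x;
    for (int y = h - 2; y >= 0; y--) {
        int bestX = x;
        if (x > 0 && cost[y][x - 1] < cost[y][bestX]) bestX = x - 1;
        if (x < w - 1 && cost[y][x + 1] < cost[y][bestX]) bestX = x + 1;
        seam[y] = x = bestX;
    }
    return seam; // seam[y] = column to delete in row y
}

// Remove the seam: copy each row, skipping the chosen column.
static int[][] removeSeam(int[][] pixels, int[] seam, int w, int h) {
    int[][] out = new int[h][w - 1];
    for (int y = 0; y < h; y++) {
        int s = seam[y];
        System.arraycopy(pixels[y], 0, out[y], 0, s);
        System.arraycopy(pixels[y], s + 1, out[y], s, w - 1 - s);
    }
    return out;
}
[/code]
Repeating energy -> find seam -> remove seam until the target width is reached gives the basic carve; the fancier heuristics and video stuff build on top of that.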
Edit: I just wrote a short script to do this for me. It's actually kind of neat: it automatically provides the input to puush, opens a new tab in Chrome through a little plugin I spent about 2 seconds on, and opens the puush link from the clipboard.
I also started planning out my next game. I haven't worked on a game in over 6 months!
+1 for using puush!
In The Fog: Survive I connected up my terrain generation code to my fog, sky, lighting and related scenery code. It's a good feeling standing on my own hills and looking down on the rolling, mysterious fog lapping the foothills.
Next step is to start picking up cuts of the latest entity state and drawing them into the world. I'll soon be able to see what my little guys are doing (which is probably just goofing off under apple trees).
That is a super fun topic imo. I did that project for a course last year.
That is amazing! I hadn't heard about this trick until now :o
Yeah, apparently Photoshop, GIMP and ImageMagick (probably others too) have it.
Also attempts have been made to apply it to video: (fixed link, thx kingroka)
AJtE8afwJEg
I've completed the first version of the dialogs, NPCs and quests for the first chapter of my RPG's main quest. Because that involved quite a lot of writing, I had to do some visual stuff as well to relax, so I added the option to define birch tree sections in the woods. I might replace the tree model later, but anyway…
Correct YouTube link is: AJtE8afwJEg
Ah, whoops. Thanks.
For some reason this is really disturbing for me. I don't know, it's like it's magically doing stuff to the picture and just corrupting images and I feel like my eyes should be able to catch it but they can't… ;_;
After working with it for a bit, the artifacts are actually pretty noticeable to me. It will look even more seamless later, because I'm not yet performing the extra "quality-assurance" heuristics!
EDIT: spooky object removal via seam carving (which I also plan to implement): http://youtu.be/6NcIJXTlugc?t=3m43s