…unless I enslave you.
Cas
[video: 755-fFD6hb0]
I’m officially a high school senior now. Can’t wait to get out of here!
Enjoy that while you can. It only gets harder lol.
Stay in school, for as long as you can.
Started making movement trays for my rat army. Looking forward to my first game of Warhammer in 30 years.
Hello JGO,
This is not really a ‘What I did today’, but more a ‘What I did last week’. Still, I think it’s worth posting here.
I’ve been working on ambient light for my laptop. The LED strip is currently just dropped behind the screen for testing purposes, but it looks quite okay if you ask me.
[video: YrDubXQDrTE]
A Java application sends color data to an Arduino Uno R3 board to control a digital LED strip to simulate the ambient light. For the communication I’m using the RXTX library.
(sorry for the poor video + audio)
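For what it’s worth, here’s a minimal sketch of how the Java side might frame the color data before sending it over serial. The header bytes and frame layout here are my own assumptions, not the protocol the post actually uses; the RXTX part would simply write the resulting bytes to the serial port’s OutputStream.

```java
// Hypothetical sketch: pack LED colors into a byte frame for the Arduino.
// The 'A','L' magic bytes and the layout are assumptions for illustration.
public class LedFrame {
    // Pack RGB colors (one int 0xRRGGBB per LED) into a frame:
    // 2 magic bytes, 1 LED-count byte, then 3 bytes (R, G, B) per LED.
    public static byte[] pack(int[] colors) {
        byte[] frame = new byte[3 + colors.length * 3];
        frame[0] = 'A';                  // magic byte 1 (assumed)
        frame[1] = 'L';                  // magic byte 2 (assumed)
        frame[2] = (byte) colors.length; // number of LEDs in this frame
        for (int i = 0; i < colors.length; i++) {
            frame[3 + i * 3]     = (byte) ((colors[i] >> 16) & 0xFF); // R
            frame[3 + i * 3 + 1] = (byte) ((colors[i] >> 8)  & 0xFF); // G
            frame[3 + i * 3 + 2] = (byte) ( colors[i]        & 0xFF); // B
        }
        return frame;
    }
}
```

On the RXTX side, the frame would then be written to the OutputStream obtained from the opened gnu.io.SerialPort, with the Arduino sketch reading the same layout back.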
Hi lads!
I finally finished (or rather, nearly finished) my decent bone animation tool! And I’m super happy with it, too. Hopefully I’ll be able to, at last, make not-too-bad-looking running animations :point: And a hell of a lot of other things, too!
Here’s how I initially tried to make animations :persecutioncomplex:
Then I moved on to:
(even Imgur thinks it’s stupid, the pic link says “LOwlL”)
And now I finally have:
J0
I graduated high school yesterday
I made a small test for testing how the Nvidia driver performs waits. It turns out it does spin-loops with yields.
I wrote a program that cleared the screen, then called glDrawArrays(GL_POINTS, 0, 10_000_000) to draw 10 million points (to give the GPU some work the CPU can wait on), then finally called glFenceSync() to create a sync object. I then fired up 6 threads, each with their own shared OpenGL context, and had them wait on the sync objects the main thread produced. The result: 100% CPU load on my quad core with Hyperthreading (8 threads). Why? We have one main thread stuck in glfwSwapBuffers() waiting for the GPU to finish drawing, one driver server thread waiting for the main thread to produce more work for it, and finally the 6 wait threads calling glClientWaitSync() to wait for the GPU to finish as well. All 8 of these waits seem to do spin loops, hence taking up 100% of all 8 cores together.
However, this CPU load barely interferes with other work. I ran a multithreaded sorting test which uses 100% of all CPU cores to sort lists in parallel, and the performance of that program was not significantly affected by running the sync test at the same time (just some stuttering every now and then). This leads me to conclude that these spin loops also do yields, allowing other threads to get time to work. This is also supported by the fact that the CPU doesn’t get particularly warm while running the sync test. Basically, Nvidia’s driver’s waiting operation looks like this:
while (notYetReady) {
    yield();
}
I wonder if Vulkan also does spin loops like this on fences… I’d guess so.
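The behavior described above can be mimicked in plain Java: a spin loop that yields on every iteration keeps one hardware thread nominally at 100% but barely slows other runnable threads down. A minimal sketch (my own illustration, not the driver’s actual code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class YieldSpinDemo {
    // Spin-wait that yields on every iteration, like the driver's wait appears to.
    static void spinWait(AtomicBoolean ready) {
        while (!ready.get()) {
            Thread.yield(); // hand the core to any other runnable thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean done = new AtomicBoolean(false);
        Thread spinner = new Thread(() -> spinWait(done));
        spinner.start();

        // Do some real work while the spinner is burning (and yielding) a core.
        long sum = 0;
        for (long i = 0; i < 10_000_000L; i++) sum += i;

        done.set(true);
        spinner.join();
        System.out.println("work finished, sum = " + sum);
    }
}
```

Running something like this next to a CPU-heavy workload shows the same pattern as the sync test: high reported CPU load, but little actual interference with the useful work.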
For the first time today I wrote code to generate loading/reading/rendering specific code as String.
It turned out easier than I thought, although you do get a lot of [icode]\t[/icode]'s everywhere.
snippet += n + "\t\trT += speed;"+n+
"\t\ttime = (int)rT;"+n+
n+
"\t\tif(speed == 0){//Back to time = 0."+n+//Generated code is even commented! :p
"\t\t\tif(rT < " + (Timing.maxT/2) + "){"+n+
"\t\t\t\trT /= 4;"+n+
"\t\t\t\tif(rT < 1)"+n+
"\t\t\t\t\trT = 0;"+n+
"\t\t\t}else{"+n+
"\t\t\t\trT += (" + Timing.maxT + "-rT) / 4;"+n+
"\t\t\t\tif(rT > " + (Timing.maxT-1) + ")"+n+
"\t\t\t\t\trT = 0;"+n+
"\t\t\t}"+n+
"\t\t\ttime = (int)rT;"+n+
"\t\t}"+n;//etc..
[sup](Note: these [icode]+n+[/icode]'s everywhere are treated almost as [icode]\n[/icode]'s, although it’s actually the platform-specific line separator obtained from [icode]System.getProperty("line.separator")[/icode], because of how I’m using the generated code)[/sup]
J0
I recommend a template engine
-ClaasJG
I mean yeah, could’ve used that ;D
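For anyone curious, even without a full template engine, a tiny placeholder-replacement helper already gets rid of most of the hand-concatenated [icode]+n+[/icode]'s. A rough sketch (the name and the ${...} syntax are mine, not from any particular engine):

```java
import java.util.Map;

public class TinyTemplate {
    // Replace ${name} placeholders in a template with values from a map.
    public static String render(String template, Map<String, String> vars) {
        String out = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        String n = System.lineSeparator();
        // The generated-code fragment stays readable; values are filled in later.
        String template =
            "\t\tif(speed == 0){" + n +
            "\t\t\tif(rT < ${halfMax}){" + n +
            "\t\t\t\trT /= 4;" + n +
            "\t\t\t}" + n +
            "\t\t}";
        System.out.println(render(template, Map.of("halfMax", "500")));
    }
}
```

A real engine (Velocity, FreeMarker, etc.) would add loops and conditionals on top, but for straight-line code generation this kind of substitution is often enough.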
I just finished the Android port of SilenceEngine, and also wrote JNI bindings to OpenAL Soft and stb_vorbis using SWIG.
http://i.imgur.com/mxRcEA2.png
https://github.com/sriharshachilakapati/SilenceEngine/tree/silenceengine-1.0.1
That first day of “freedom” after high school was amazing. Enjoy your summer! And do well in college if you’re going; study hard and all that.
Did another test with multithreaded OpenGL, this time with texture streaming. I decided to quickly investigate a problem where the texture streaming was seemingly being limited to loading 1 texture per rendered frame, even though there shouldn’t be any synchronization with the rendering thread.
I made a small program that continuously allocates a texture, maps a PBO and fills it with zeros, calls glTexImage2D() to copy data from the PBO to the texture object, then finally waits for completion using 3 different methods (glFinish() (on the streaming thread), glClientWaitSync(), and glGetSynciv()+sleep(1)). All 3 suffered from the same problem: glFinish() would block, glClientWaitSync() would block, and the sync object queried with glGetSynciv() would not be signaled until rendering had completed. Despite there being two separate contexts, one dedicated to texture streaming, all the commands seem to go into the same command queue, forcing the texture streaming to wait for the rendering as well. The texture streaming is clearly done in parallel (there’s no FPS impact), so the scheduling/the way the driver handles the two contexts seems to be the problem: completion isn’t signalled until the render context is done drawing. By uploading 128x128 textures repeatedly while drawing 20 000 000 points per frame at around 5 FPS, I got this horrible output:
while the same thing when drawing 20 000 points @ 2700 FPS gave me:
Hence, it’s clear that it’s impossible to get completely separate rendering and texture streaming. I will try to rework our texture streaming a bit to lessen the impact of this limitation. If I already have a big PBO allocated, it’s a waste to upload a 128x128 mipmap and then sync with the rendering. Instead of waiting for each job to finish, I will simply continue uploading textures to the PBO’s free sections, only stopping if I run out of PBO memory. Once an upload completes, the memory it used in the PBO can be reused. If I have enough space in my PBO for a 4096x4096 RGBA texture (64MB), I can also upload a whopping 1024 128x128 textures in one frame, and I’m fairly sure the actual streaming from the hard drive will be the bottleneck at that point. =P
It’ll be interesting to see how this performs compared to Vulkan’s dedicated transfer queues. Presumably those will be completely independent of the rendering queue(s) and won’t have this problem.
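The “keep uploading into free PBO sections” idea can be sketched as a simple ring allocator over one large buffer. This is my own sketch of the bookkeeping only (offsets in bytes; retireOldest() stands in for “the oldest upload’s fence has signaled”), not the engine’s actual code:

```java
import java.util.ArrayDeque;

public class PboRing {
    // Hypothetical ring allocator for sections of one large PBO.
    private final long capacity;
    private long head = 0, tail = 0;               // head - tail = bytes in flight
    private final ArrayDeque<Long> inFlight = new ArrayDeque<>();

    public PboRing(long capacityBytes) { this.capacity = capacityBytes; }

    // Returns the byte offset for a new upload, or -1 if the PBO is full
    // (at which point the caller would wait on the oldest fence and retry).
    public long alloc(long size) {
        long offset = head % capacity;
        // Don't split an upload across the wrap point; pad to the end instead.
        long pad = (offset + size > capacity) ? capacity - offset : 0;
        if (head - tail + pad + size > capacity) return -1;
        head += pad;
        inFlight.addLast(pad + size); // padding is reclaimed with this upload
        long result = head % capacity;
        head += size;
        return result;
    }

    // Called when the oldest upload's fence signals: its section is free again.
    public void retireOldest() {
        Long size = inFlight.pollFirst();
        if (size != null) tail += size;
    }
}
```

With a 64MB PBO (the size of one 4096x4096 RGBA texture) and 64KB per 128x128 RGBA texture, this bookkeeping gives exactly the 1024-uploads-per-frame headroom mentioned above before the allocator reports the buffer full.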
You may want to try immutable storage and allocating multiple (3x) textures (a kind of ring buffer).
I got a good pong clone working on android, so now I’m going to start working on porting my game Arc (in the showcase) over to it. Hopefully I’ll knock a lot of it out tomorrow and put it up on Google Play soon.
Well, long time no post. Basically got caught with my pants down on when things needed to be submitted for PAX down under. As a result I’ve had a big change in direction, and I’ve been coding like a crazy man for the last week or so. Quite happy with the progress; we may just pull it off. It is yet another dungeon runner, for what it is worth. Yes, it’s a tough crowd and a crowded genre, but in terms of the technical aspects it’s the easiest for me to do while still permitting the unique things I feel I can bring to the table.
All placeholder art is the Blender default cube.
I put up a new demo under my Dodger Dog WIP thread, it’s on the last post at the bottom. I would appreciate if anyone tried to play it and gave me some feedback! Have a good weekend everyone.