Successfully implemented a save/load game feature. Then on to the fun stuff ;D
Edit: XOR random generator improved generation speed by about 30%.
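For reference, a minimal xorshift-style generator looks something like this (simplified sketch, not the project's exact code):

// xorshift64: three shift/xor steps per call, with none of the CAS-guarded
// seed update that java.util.Random does for its linear congruential step.
public final class XorShiftRandom {
    private long state;

    public XorShiftRandom(long seed) {
        // the state must never be zero for xorshift
        this.state = (seed != 0) ? seed : 0x9E3779B97F4A7C15L;
    }

    public long nextLong() {
        long x = state;
        x ^= x << 13;
        x ^= x >>> 7;
        x ^= x << 17;
        state = x;
        return x;
    }

    public double nextDouble() {
        // top 53 bits give a uniform double in [0, 1)
        return (nextLong() >>> 11) * 0x1.0p-53;
    }
}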
Edit2: Added multithreading; an 8192x8192 fully detailed map is now generated in 14 seconds instead of 33.
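The threading doesn't have to be fancy; one straightforward way is to hand each row to the common fork/join pool (sketch of an assumed approach, with generatePixel as a hypothetical stand-in for the per-pixel noise code):

import java.util.stream.IntStream;

static void generateParallel(double[][] result, int width, int height) {
    // each row writes only its own slice of 'result', so no synchronization is needed
    IntStream.range(0, height).parallel().forEach(y -> {
        for (int x = 0; x < width; x++) {
            result[y][x] = generatePixel(x, y);  // hypothetical per-pixel generator
        }
    });
}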
I halved the run time without either of those things, so update your gist and we could be getting somewhere.
https://gist.github.com/BurntPizza/c6327045f6ad3d7c56c3 (Indentation seems half-broken in places, darn GitHub editor)
Does 2048x2048 in ~430 ms (down from 880).
Don't time the write to file; that's way more expensive than actually generating an image, and not indicative of normal operation anyway.
I, after 2 hours, finally got a stream working with my mic and background music. Also works with Skype calls. Woot woot. Now broadcasting development of my next project. Pretty excited to see how this streaming thing works out.
never use java.util.Random. for anything. ever. haven't I said that before?
For anything at all… ever? Why not?
The design has no use case. It's bad at everything.
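If you need a quick drop-in, the JDK itself already has better options (minimal sketch, assuming a Java 7+/8+ runtime):

import java.util.SplittableRandom;
import java.util.concurrent.ThreadLocalRandom;

class BetterRandoms {
    static void demo() {
        // per-thread generator: no shared state, no CAS contention (Java 7+)
        long a = ThreadLocalRandom.current().nextLong();

        // fast, splittable, and statistically stronger (Java 8+)
        SplittableRandom rng = new SplittableRandom(42);
        long b = rng.nextLong();

        System.out.println(a + " " + b);
    }
}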
I'm participating in LD. My game's title is "MousyDare"; here are a few screenshots:
Enough for today, will improve them tomorrow!
His main problem was cache thrashing, so I took care of that first, but yeah, I agree: don't use Random for anything that needs high quality or speed.
Can you elaborate?
He had a nested loop for summing and averaging the different noise layers together into the final image:
(paraphrased code)
// remember that width and height are large, at least several thousand each
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        for (Layer l : layers) {
            result[y][x] += l.get(x, y);
        }
        result[y][x] /= numLayers;
    }
}
(One side note: the inner loop, being a for-each, creates a new iterator on every one of its ~4 million executions, but that's another matter.)
The problem here is that each layer has its own noise array, which is read from in Layer::get. This means you grab one pixel's worth of information from each layer and then immediately evict it from cache to make room for the next layer's data, so almost every access is a cache miss.
Fixed it using loop interchange and fission:
for (Layer l : layers) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            result[y][x] += l.get(x, y);
        }
    }
}

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        result[y][x] /= numLayers;
    }
}
This yields a massive performance gain, even though it now makes (N + 1) passes over the main array instead of one: each layer's array is read straight through before moving on to the next, instead of bouncing between all of them and grabbing a little bit at each stop, and the main result array stays hot in the cache the whole time anyway.
(It also alleviates subtler concerns, such as the potentially immense register pressure, and to a lesser extent GC pressure, caused by creating the N iterators inside the tight loop.)
There are other little things, but this was the largest single gain.
private double getValueOrDefault(float[][] array, int x, int y, double def) {
    try {
        return array[x][y];
    } catch (IndexOutOfBoundsException ex) {
        return def;
    }
}
This seems like an extremely odd way to clamp to the edge. You are always calculating that default value, and it's only used at the edges (0.0488221645% of cases with an 8k grid). Instead of this I would iterate the edges with a special-case loop and then do the inner part without any extra fuss, roughly as in the sketch below. I would also unroll the smoothValue method.
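A rough sketch of what I mean (generate and generateClamped are hypothetical stand-ins for the smoothing code, not anything from the gist):

static void fill(double[][] result, int width, int height) {
    // Interior: indices can never go out of range, so no clamping or
    // exception handling is needed at all.
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            result[y][x] = generate(x, y);
        }
    }
    // Border columns and rows: the only places where clamping matters.
    for (int y = 0; y < height; y++) {
        result[y][0] = generateClamped(0, y);
        result[y][width - 1] = generateClamped(width - 1, y);
    }
    for (int x = 0; x < width; x++) {
        result[0][x] = generateClamped(x, 0);
        result[height - 1][x] = generateClamped(x, height - 1);
    }
}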
I agree, although those methods are never called, so I didn't bother.
Exceptions-as-control-flow are never a good idea.
Gist updated. I will probably refactor project and make WIP thread soon.
[quote="BurntPizza,post:1073,topic:49634"]
have to disagree. … in a "gaming" context it's totally fine if you do not get too broad:
try { return array[x]; } catch ( IndexOutOfBoundsException ex ) { return 0; } // fine
try { return array[x]; } catch ( Throwable ex ) { return 0; } // NOT fine
something similar applied to [icode]equals[/icode] :
public class vec3
{
    [...]

    @Override public boolean equals(Object v)
    {
        try
        {
            return equals((vec3) v);
        }
        catch (ClassCastException ignored)
        {
            return false;
        }
    }

    boolean equals(vec3 v)
    {
        try
        {
            return [insert compare test here];
        }
        catch (NullPointerException ignored)
        {
            return false;
        }
    }

    [...]
}
Exceptions are extremely heavy things - how often do you want Java calling up a stack trace?
It's super hacky, and while optimizing for performance I'm going to eliminate as many exception creations as possible.
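For what it's worth, the same equals can be written with no exceptions at all, something like this (sketch; x/y/z are hypothetical field names standing in for the elided compare test):

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof vec3)) return false;  // instanceof is false for null, so no NPE path either
    vec3 v = (vec3) o;
    return x == v.x && y == v.y && z == v.z;
}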
http://java-performance.info/throwing-an-exception-in-java-is-very-slow/
ofc it is about not having the exceptions coming up at all.
as with the array test: if 99.99% of the time it does not happen, the extra fuss of doing "proper" tests is a waste of CPU time.
On the contrary, not having the "extra fuss" can easily thwart bounds-check elimination, resulting in slower code.
The 99.99% figure also works to the benefit of the branch predictor, so that "fuss" will be cheaper than you expect.
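A toy illustration of the bounds-check point (not from the project): with an explicit loop bound the JIT can usually prove the accesses are in range and drop the per-element check, whereas the exception-terminated version depends on that check to exit and pays for it on every iteration.

// explicit bound: HotSpot can typically prove 0 <= i < values.length
// and eliminate the bounds check inside the loop
static double sum(double[] values) {
    double s = 0;
    for (int i = 0; i < values.length; i++) {
        s += values[i];
    }
    return s;
}

// exception-as-exit version: the bounds check must stay (it is the loop exit),
// and the try region limits what the optimizer may do with the loop
static double sumCatching(double[] values) {
    double s = 0;
    int i = 0;
    try {
        while (true) {
            s += values[i++];
        }
    } catch (ArrayIndexOutOfBoundsException end) {
        return s;
    }
}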
Also some optimizations are disabled inside of try {} blocks, although I don't remember exactly which.
I never got anything but a performance gain from that.
Try debugging a totally ignored exception. At least with -server it's optimized to no stack trace; see the OmitStackTraceInFastThrow VM arg (turn it off with -XX:-OmitStackTraceInFastThrow if you want the traces back).