What I did today

What does everyone love about mechanical keyboards? I’ve used a few, even two from Razer, but they’re so uncomfortable… I guess I’m just a membrane keyboard type of guy.

Also, it totally doesn’t show you like Portal :stuck_out_tongue:

I’ve found the blue switches to be very… clicky. Too clicky, in fact. I’m used to bottoming out keys like I would on membrane boards, but I’m slowly switching back to Cherry MX Browns, which are a lot easier to press and make me type a lot faster. But really, it won’t make you immediately better at coding; it’s just a personal preference.

Also, congratz on being the 1500th JGO post!

Kinda over-did it really. :persecutioncomplex:

Yeah, blues are extremely loud for me. Some people like the clickity sound, but I find it a little annoying. Browns are better, but there’s still some uncomfortable feeling when I type with them. Reds are probably the best I’ve tried, but they still didn’t beat good ol’ membrane. And I won’t even mention how uncomfortable blacks were.
I didn’t try other switches though, so there may be a mecha-keyboard with the right color switch that might fit me.

Right, but it may make you a little faster when the keyboard is comfortable.

Oh, I didn’t notice that! Thanks! :slight_smile:

Did absolutely nothing interesting today…

1500th reply I think.

Cheers,

Kev

Well there’s your problem.


Anyway, there are a ton of reasons why mechboards are ‘better’ than membrane keyboards, I wrote a long post about them here. But in the end it’s really just personal preference.

You obviously haven’t used buckling springs before, they’re much clickier. :stuck_out_tongue:
You might enjoy Topre switches, but they’re a bit expensive.

You can never have too much Portal.

  • Jev

Today I livestreamed and made this track: https://soundcloud.com/literature-corner/dungeon-days

It certainly isn’t mastered yet so some of the levels are a bit off, but after two hours I couldn’t do any more, haha.

Today I woke up feeling alive and full of energy! I honestly don’t recall that having ever happened before :smiley:

got inspired by you and continued coding on my interval-tree too.

figured, tile-recursive sort-loading is possible only after all data is known (correct?). i’m following the implementation described here: http://www-db.deis.unibo.it/courses/SI-LS/papers/Gut84.pdf which is a very dynamic r-tree version. adding and deleting on the fly is fast, but it generates lower-quality trees. figured, the tile-sorting algorithm can be applied here too (partially, not recursively, but sorting dimensions by perimeter), which gets close to quadratic quality in linear time :open_mouth:

nice! … i like the way you go after the break at 2:08, guess the slower pace is coming from the missing guitar. thanks for sharing. :slight_smile:

You are correct. Sounds interesting, could you elaborate slightly? Is it competitive with R+/R* trees? Afaik the quadratic split isn’t actually very good, although it is better than the linear split(s). The first picture looks quite performant though.
R*’s topological split looks about as good as STR while remaining dynamic (if more expensive than simpler splits):

(image from Wikipedia)

Also nice pictures as usual :point:

basil - I would print out your graphs and put them on my walls as artwork if they were higher resolution, they’re beautiful!

cheers! :slight_smile:

true, the quadratic split isn’t too good, but good enough for raytracing and collision tests if modified a bit. since it’s prone to insert order, i presort objects along a hilbert curve before inserting, which means all data is known beforehand. yet, initially building the tree is one thing; changing it on the fly still requires a fast algorithm there.
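(i won’t paste the hilbert mapping here; as a stand-in, the simpler morton / z-order interleave shows the presort idea. just a sketch, made-up names, 2D with 16-bit cell coordinates:)

  // map a 2D cell coordinate onto a space-filling-curve index, then sort objects by that
  // key before bulk-inserting. (morton code shown; hilbert keeps neighbours closer together.)
  public static long morton2(int x, int y) // x, y in [0, 65535]
  {
    long key = 0L;
    for(int i = 0; i < 16; i++)
    {
      key |= (long)((x >> i) & 1) << (2 * i);
      key |= (long)((y >> i) & 1) << (2 * i + 1);
    }
    return key;
  }

objects get sorted by that key before being inserted one by one.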

the “on the fly” idea is basically: if a node gets too big, split it in two and distribute the affected child nodes between them, starting by picking the two children with the “worst” distribution, the seeds.

the change i did here is, instead of simply comparing the union-area created by a pair of children and picking the largest, i additionally subtract the child areas from that value (unionarea - node1area - node2area). a tiny change, it simply prefers smaller child nodes.
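as a sketch (names made up; mins/maxs are the candidate children’s boxes, area() is the padded helper i paste further down):

  // pick the pair of children whose union leaves the most "dead space":
  // score = area(union) - area(child i) - area(child j), take the largest score.
  public static int[] pickSeeds(float[][] mins, float[][] maxs)
  {
    int bestA = 0, bestB = 1;
    float bestScore = Float.NEGATIVE_INFINITY;
    for(int i = 0; i < mins.length; i++)
    {
      for(int j = i + 1; j < mins.length; j++)
      {
        float[] uMin = new float[mins[i].length];
        float[] uMax = new float[mins[i].length];
        for(int d = 0; d < uMin.length; d++)
        {
          uMin[d] = Math.min(mins[i][d], mins[j][d]);
          uMax[d] = Math.max(maxs[i][d], maxs[j][d]);
        }
        float score = area(uMin, uMax) - area(mins[i], maxs[i]) - area(mins[j], maxs[j]);
        if(score > bestScore)
        {
          bestScore = score;
          bestA = i;
          bestB = j;
        }
      }
    }
    return new int[] { bestA, bestB };
  }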

i changed the quadraticNext() method to use the extension-delta and its abs-value instead of just the extensions to pick the smallest. delta = ext1 - ext2. the sign points into the “better” parent node, while selecting the largest abs-value yields the best child node. sounds counterintuitive, but it creates a “better packed” tree. it’s still quadratic tho’, testing all pairs.
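roughly like this (again just a sketch, made-up names; ext1[i]/ext2[i] = how much parent 1 or 2 would have to grow to take child i):

  // pick the unassigned child with the largest |ext1 - ext2|; the sign of the delta
  // says which of the two new parent nodes it should go into.
  public static int pickNext(float[] ext1, float[] ext2, boolean[] assigned)
  {
    int best = -1;
    float bestAbs = -1.0f;
    for(int i = 0; i < ext1.length; i++)
    {
      if(assigned[i]) continue;
      float delta = ext1[i] - ext2[i];
      if(Math.abs(delta) > bestAbs)
      {
        bestAbs = Math.abs(delta);
        best = i;
      }
    }
    return best; // negative delta -> parent 1 grows less, positive -> parent 2
  }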

anyway, what was giving me a headache was the area function itself. since it’s width*height in 2D, or *depth in 3D, once one dimension is zero the area is zero too, right? that’s not too uncommon in 3D; once you have a triangle perfectly aligned to an axis … it’s gone. so i compute the area like …

  public static float area(float[] min,float[] max)
  {
    float area = 1.0f;
    for(int i = min.length; i-- != 0; )
    {
      area *= max[i] - min[i] + 1.0f;
    }
    return area;
  }

which is not correct, i know. but when it’s only used to compute extensions and unions that are there just to be compared, it doesn’t matter. it turns out, adding a fixed +1 also made the comparators lean towards “rectangular” shapes instead of “wide-spread” ones, since for instance … [icode]10*10 = 100 and 2*50 = 100, but 11*11 = 121 and 3*51 = 153[/icode] … in which case the more rectangular shape is distinguishable.

thinking about the tile-sort idea, instead of picking seeds and iterating over children i simply

  • grab all children, which are not many since it’s just at the time of splitting a “too big” node into two smaller ones.
  • compute median ( list.length/2 ).
  • sort them, two times each dimension, min/max-comparator. store the lists, makes 4 in 2D and 6 in 3D etc.
  • loop over all lists and compute each one’s perimeter ( sum of (max-min)*2 in each dimension ) …
    – two times: once for the first half of the nodes (zero to median) and once for the rest (median to list.length). add these together …
  • use that value to find the “smallest” list.
  • split list by its median (list.length/2)
  • insert one half into one of the new parent-nodes, the other half into the other one.

basically: sort children, split in two, attach (similar to balancing a kd-tree). not using lots of lists to create more nodes in recursion, eventually building the whole tree bottom-up or top-down as you do with R*, right?
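as a sketch (all names made up; Box is just a min/max pair per child):

  import java.util.*;

  // Box = one child's bounding box
  class Box
  {
    float[] min, max;
  }

  // "perimeter" of the bounds of a group of children: sum of (max - min) * 2 per dimension
  public static float perimeter(List<Box> group)
  {
    float p = 0.0f;
    for(int d = 0; d < group.get(0).min.length; d++)
    {
      float lo = Float.POSITIVE_INFINITY, hi = Float.NEGATIVE_INFINITY;
      for(Box b : group)
      {
        lo = Math.min(lo, b.min[d]);
        hi = Math.max(hi, b.max[d]);
      }
      p += (hi - lo) * 2.0f;
    }
    return p;
  }

  // sort the children along each dimension (once by min, once by max), score every
  // ordering by the summed perimeter of its two halves, keep the best one and split
  // it at the median; the two halves go into the two new parent nodes.
  public static List<List<Box>> sortSplit(List<Box> children)
  {
    int median = children.size() / 2;
    List<Box> best = null;
    float bestScore = Float.POSITIVE_INFINITY;
    for(int d = 0; d < children.get(0).min.length; d++)
    {
      for(boolean byMax : new boolean[] { false, true })
      {
        final int dim = d;
        final boolean useMax = byMax;
        List<Box> sorted = new ArrayList<>(children);
        sorted.sort((a, b) -> Float.compare(useMax ? a.max[dim] : a.min[dim],
                                            useMax ? b.max[dim] : b.min[dim]));
        float score = perimeter(sorted.subList(0, median))
                    + perimeter(sorted.subList(median, sorted.size()));
        if(score < bestScore)
        {
          bestScore = score;
          best = sorted;
        }
      }
    }
    return Arrays.asList(new ArrayList<>(best.subList(0, median)),
                         new ArrayList<>(best.subList(median, best.size())));
  }

and it’s only applied at the moment a node overflows, not recursively over the whole data set.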

trees made like that are still “overlapping”, more so than with the quadratic split, but they have fewer nodes and build much faster. in the end it depends on what the tree is used for and how.

sorry for the wall of text. o/

Converted all the SPGL2 code to run entirely on modern OpenGL 3.3+ with no backward compatibility. Seems we now have to basically reimplement half the OpenGL APIs in our own code, bah.

Cas :slight_smile:

Installed PuTTY and Xming, and now I’m trying to dig into Mathematica while hearing the silence from a small Pi cluster sitting in the corner of my room.

Updated my Moto G 1st Generation to Android Lollipop 5.0.2. This is not a custom ROM update; it was pushed out by Motorola.

So far enjoying the new Material UI everywhere.

Interesting topic. Thanks for the link about bins; yes, I made a crude version of a bin. It surprises me to see how far academics have studied geometric algorithms like this.
I notice on the wiki page that bins have similar characteristics to hashmaps. This must be an advantage of the bin over the tree, since it makes queries O(1), assuming no collisions.
Trees, on the other hand, cannot go straight to the correct leaf; they have to climb through the branches from the trunk, which is O(log n).
But the disadvantage of the bin is that data is often clustered, and with each bin being the same size there will be some full bins and lots of empty bins wasting memory.
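To make the comparison concrete, here is a rough sketch of what I mean by a bin in 2D (all names invented, just an illustration):

        import java.util.*;

        // a fixed grid of equally sized cells over the data bounds: finding a point's cell is
        // O(1), but memory for every cell is allocated up front even if it stays empty.
        public class PointBin {
            private final float minX, minY, cellSize;
            private final int cols;
            private final List<float[]>[] cells;

            @SuppressWarnings("unchecked")
            public PointBin(float minX, float minY, float width, float height, float cellSize) {
                this.minX = minX;
                this.minY = minY;
                this.cellSize = cellSize;
                this.cols = (int) Math.ceil(width / cellSize);
                int rows = (int) Math.ceil(height / cellSize);
                this.cells = new List[cols * rows];
                for (int i = 0; i < cells.length; i++) {
                    cells[i] = new ArrayList<>();
                }
            }

            // straight to the cell, no tree traversal
            private int cellIndex(float x, float y) {
                int cx = (int) ((x - minX) / cellSize);
                int cy = (int) ((y - minY) / cellSize);
                return cy * cols + cx;
            }

            public void insert(float x, float y) {
                cells[cellIndex(x, y)].add(new float[] { x, y });
            }

            public List<float[]> pointsInSameCell(float x, float y) {
                return cells[cellIndex(x, y)];
            }
        }

With clustered data, most of those cells would stay empty while a few hold nearly everything, which is the memory/imbalance problem I mean.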
Is this a correct assessment?

Yes.
I analyzed some of the contest data and it is very clustered, with some very outlying clusters far away from the main groups as well, so a bin would have been (ha) terrible.
Meanwhile the tree completes [non-exhaustive] containment queries on 10M points in ~0.015 ms, and I haven’t spent any time optimizing node sizes, locality, etc.

I spent weeks avoiding creating normal arrays for my model loader and got started tonight. Can it be this simple?


        Vector3f[] normalsArray = new Vector3f[coordsArray.length];
        OneToManyMap<Integer, Vector3f> map = new OneToManyMap<Integer, Vector3f>();
        for (int ii = 0; ii < indexesArray.length; ii += 3) {
            //triangle vectors a and b
            int idx1 = indexesArray[ii], idx2 = indexesArray[ii + 1], idx3 = indexesArray[ii + 2];
            Vector3f a = coordsArray[idx2].subtract(coordsArray[idx1]).normalizeLocal();
            Vector3f b = coordsArray[idx3].subtract(coordsArray[idx1]).normalizeLocal();
            Vector3f normal = a.cross(b).normalizeLocal();
            map.put(idx1, normal);
            map.put(idx2, normal);
            map.put(idx3, normal);
        }
        for (Integer idx : map.keySet()) {
            List<Vector3f> allNormals = map.get(idx);
            Vector3f sum = new Vector3f();
            for (Vector3f normal : allNormals) {
                sum.addLocal(normal); // assuming a jME-style Vector3f: add() returns a new vector, so it wouldn't accumulate
            }
            normalsArray[idx] = sum.divide(allNormals.size()).normalizeLocal();
        }

EDIT: changed OneToManyMap.get() to return a List and not a Set, as otherwise the average might go wrong.

Got into the Windows Insider program and a free copy of Windows 10, and installed it on my Surface Pro 3 after having to hack an Intel driver installer that specifically says not to install it on Windows 10 in the ‘.inf’ file. Works fine though… for now…