Has anyone tried creating an optimized HashMap implementation?

Hey, everyone.

As the title says, has anyone tried writing a HashMap implementation that is faster than the one included in Java? How much faster was it? I am mainly interested in getting elements out and don’t modify the hashmap much.

I tried, with moderate success. Even if you make the logic 10x as fast as HashMap, the L1 cache misses are going to ruin the party. If you can make any assumptions about how your lookups are grouped, try to make those keys fit in common pages. If you want performance, your data-structures will have to be highly specialized - I wouldn’t expect a drop-in replacement of HashMap that was more than 10% faster in real-world cases.

I wrote a binary-search algorithm some time ago that had hardly any cache misses (which is the last thing you’d expect from binary search on an array), as it used a layered binary search. It should be here on JGO - I’ll look it up… here it is.
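The linked code isn’t shown here, but the layered, cache-friendly idea can be sketched as an Eytzinger (BFS-order) layout: store the sorted array in breadth-first tree order, so the first few levels of the search all sit in a handful of hot cache lines. The class and method names below are my own illustration, not the original post’s code:

```java
final class CacheFriendlySearch {
    // Rearrange a sorted array into BFS (Eytzinger) order, 1-based.
    static int[] eytzinger(int[] sorted) {
        int[] tree = new int[sorted.length + 1];
        build(sorted, tree, 0, 1);
        return tree;
    }

    private static int build(int[] sorted, int[] tree, int i, int k) {
        if (k < tree.length) {
            i = build(sorted, tree, i, 2 * k);     // fill left subtree
            tree[k] = sorted[i++];                 // then this node
            i = build(sorted, tree, i, 2 * k + 1); // then right subtree
        }
        return i;
    }

    // Search walks the implicit tree: index k -> children 2k and 2k+1.
    static boolean contains(int[] tree, int x) {
        int k = 1;
        while (k < tree.length) {
            if (tree[k] == x) return true;
            k = 2 * k + (x > tree[k] ? 1 : 0);
        }
        return false;
    }
}
```

The payoff is that indices 1, 2, 3, … (the top levels, visited on every lookup) are packed together in memory, so repeated searches rarely miss until near the bottom of the tree.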

Hmm. I feel like the horrible linked list implementation that HashMap uses should be even worse for cache hits though. Is there any way of improving that part?

Sure, give each slot (derived from the hashcode) N keys and values.

Once the slot overflows, it’s likely you have to grow your backing table, or you fall back again to (linked) lists for keys beyond N.


private int calcSlot(int hashcode) {
    // precompute, and keep both keys.length and N POT,
    // to allow for some bit-magic: you'd just mask with (numSlots - 1).
    // mask off the sign bit, as Java's % yields a negative slot
    // for a negative hashcode otherwise
    return (hashcode & 0x7fffffff) % (keys.length / N);
}

public V get(K key) {
    int slot = calcSlot(key.hashCode()) * N;
    for(int i=0; i<N; i++) {
      K got = keys[slot + i]; // could be null, who cares
      if(key.equals(got)) {
         // would be tempting to swap keys[slot+0] and keys[slot+i] here, for future lookups
         return vals[slot + i];
      }
    }
    return null;
}

public void put(K key, V val) {
    int slot = calcSlot(key.hashCode()) * N;
    for(int i=0; i<N; i++) {
      K got = keys[slot + i]; // could be null, treat as free
      if(got == null || key.equals(got)) {
         keys[slot+i] = key;
         vals[slot+i] = val;
         return;
      }
    }

    // dreadful logic here
}


A similar strategy is to have N hash-functions, to calculate N possible key indexes. If the first is full, try the second, until you arrive at N, and are forced to expand your backing table. This is less wasteful, but causes lots of cache misses (but not more than a linked list).
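A minimal sketch of that N-hash-functions scheme, using a fixed-size int-to-int table for brevity (the sentinel value, table size and hash mixes are my own illustrative choices, and there is no remove, which is what makes the early exit in get safe):

```java
final class MultiHashTable {
    static final int N = 4;                      // number of hash functions to try
    static final int SLOTS = 1 << 10;            // power of two, so we can mask
    static final int EMPTY = Integer.MIN_VALUE;  // sentinel: MIN_VALUE keys unsupported
    final int[] keys = new int[SLOTS];
    final int[] vals = new int[SLOTS];

    MultiHashTable() { java.util.Arrays.fill(keys, EMPTY); }

    // i-th hash function: remix the key with a different odd constant per attempt
    static int hash(int key, int i) {
        int h = key * (0x9E3779B9 + 2 * i);
        h ^= h >>> 16;
        return h & (SLOTS - 1);
    }

    boolean put(int key, int val) {
        for (int i = 0; i < N; i++) {
            int s = hash(key, i);
            if (keys[s] == EMPTY || keys[s] == key) {
                keys[s] = key;
                vals[s] = val;
                return true;
            }
        }
        return false; // all N candidate slots taken: caller must grow the table
    }

    Integer get(int key) {
        for (int i = 0; i < N; i++) {
            int s = hash(key, i);
            if (keys[s] == key) return vals[s];
            if (keys[s] == EMPTY) return null; // put never skips an empty slot
        }
        return null;
    }
}
```

As the post says, each attempt is an independent index, so a miss can touch up to N scattered cache lines - in contrast to the N-keys-per-slot layout above, where all N candidates share one contiguous run.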

Are you even sure you want to use a HashMap? If you’re basically only ever getting data out of it, it may be that you want something with sorted keys or somesuch.

Cas :slight_smile:

Yeeeaaah, I just realized that in the use case I was thinking about a simple array would be good enough and solve all my problems. Embarrassing… :emo: Still, it’s an interesting problem. I may take a stab at optimizing it at some point.

In C I like using bucketed cuckoo hashing.
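For reference, a rough sketch of the bucketed-cuckoo idea (in Java rather than C, to match the rest of the thread): two hash functions each name a small bucket, and a full bucket evicts one resident into its alternate bucket. Bucket width, table size, hash mixes and the eviction bound are all illustrative assumptions:

```java
final class BucketedCuckoo {
    static final int B = 4;                      // entries per bucket
    static final int BUCKETS = 1 << 8;           // power of two
    static final int EMPTY = Integer.MIN_VALUE;  // sentinel: MIN_VALUE keys unsupported
    static final int MAX_KICKS = 32;             // bound on the eviction chain
    final int[] keys = new int[BUCKETS * B];
    final int[] vals = new int[BUCKETS * B];

    BucketedCuckoo() { java.util.Arrays.fill(keys, EMPTY); }

    static int h1(int k) { int h = k * 0x9E3779B9; return (h >>> 16) & (BUCKETS - 1); }
    static int h2(int k) { int h = k * 0x85EBCA6B; return (h >>> 8) & (BUCKETS - 1); }

    // A key lives in at most one of its two candidate buckets,
    // so a lookup costs at most 2 * B contiguous key comparisons.
    Integer get(int key) {
        for (int b : new int[] { h1(key), h2(key) })
            for (int i = 0; i < B; i++)
                if (keys[b * B + i] == key) return vals[b * B + i];
        return null;
    }

    boolean put(int key, int val) {
        for (int b : new int[] { h1(key), h2(key) })   // update in place if present
            for (int i = 0; i < B; i++)
                if (keys[b * B + i] == key) { vals[b * B + i] = val; return true; }
        int k = key, v = val, b = h1(k);
        for (int kick = 0; kick < MAX_KICKS; kick++) {
            for (int i = 0; i < B; i++) {              // any free slot in this bucket?
                int s = b * B + i;
                if (keys[s] == EMPTY) { keys[s] = k; vals[s] = v; return true; }
            }
            int s = b * B + (kick % B);                // bucket full: evict a resident
            int ek = keys[s], ev = vals[s];
            keys[s] = k; vals[s] = v;
            k = ek; v = ev;
            b = (h1(k) == b) ? h2(k) : h1(k);          // evictee goes to its other bucket
        }
        return false; // chain too long: caller should grow and rehash
                      // (this sketch drops the in-flight entry on failure)
    }
}
```

The bucketing is what makes it cache-friendly: each of the two probes scans one contiguous B-wide bucket, so a lookup touches at most two cache lines regardless of load.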

As always: did you try the fastutil implementations yet?