I finally recorded a video of my instancing-compatible GPU skinning system. It's capable of blending up to 4 animations per entity, but I don't have example data, so a single animation per object has to suffice for demo purposes. Each instance has separate materials, animation controllers (different playback speeds, for example) and so on.
AgxddJtSVx0
My occlusion and frustum culling mechanisms are currently deactivated. The animation state (i.e. which animation frame is current) is calculated on the CPU for up to 4 weighted animations per entity. On update (when the animation frame changes), the data is (multi-)buffered to the GPU. The GPU then traverses the bones from the buffers, interpolates, blends the animations and so on, so it stays quite cheap.
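To make that concrete, here's a rough sketch of the CPU side under my own assumptions (all names and the data layout are illustrative, not the actual engine code): each instance advances its own controllers at their own playback speed, resolves the surrounding keyframe pair plus an interpolation factor for each of the up to 4 animations, and only triggers a buffer upload when a frame index actually changed.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Per-instance animation controller; names are illustrative, not the engine's.
struct AnimationController {
    uint32_t clipIndex     = 0;     // which animation clip this instance plays
    float    playbackSpeed = 1.0f;  // per-instance speed multiplier
    float    localTime     = 0.0f;  // current position inside the clip, in seconds
    float    blendWeight   = 0.0f;  // contribution to the final blend (weights sum to 1)
};

struct ClipInfo {
    float    framesPerSecond;
    uint32_t frameCount;
};

// The compact record that gets (multi-)buffered to the GPU per instance and
// per blended animation: two keyframes, a lerp factor and a blend weight.
struct GpuAnimSlot {
    uint32_t frameA, frameB;
    float    lerp, weight;
};

// CPU-side update for one instance: advance the clocks, resolve frame indices.
// Returns true if any frame index changed, i.e. the instance buffer needs an upload.
bool updateInstance(AnimationController (&controllers)[4],
                    const std::vector<ClipInfo>& clips,
                    float dt,
                    GpuAnimSlot (&out)[4])
{
    bool frameChanged = false;
    for (int i = 0; i < 4; ++i) {
        AnimationController& c = controllers[i];
        const ClipInfo& clip = clips[c.clipIndex];

        // Each instance advances at its own playback speed (looping clip assumed).
        float duration = clip.frameCount / clip.framesPerSecond;
        c.localTime = std::fmod(c.localTime + dt * c.playbackSpeed, duration);

        float framePos = c.localTime * clip.framesPerSecond;
        uint32_t frameA = static_cast<uint32_t>(framePos) % clip.frameCount;
        uint32_t frameB = (frameA + 1) % clip.frameCount;

        frameChanged |= (out[i].frameA != frameA);
        out[i] = { frameA, frameB, framePos - std::floor(framePos), c.blendWeight };
    }
    return frameChanged;
}
```

The shader side would then fetch the two bone sets per slot from the buffers, interpolate them with `lerp` and accumulate the up to 4 results weighted by `weight`, which is why the per-frame CPU cost stays so low.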
For each instance, an AABB is calculated, so in theory every instance can be culled with an instance-aware culling mechanism. I'm currently implementing this, but it's not an easy task.
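For the instance-aware part, the per-instance test could look roughly like this (again only a sketch with made-up names, assuming column-major 4x4 instance transforms): bring the local AABB into world space and test the resulting box against the six frustum planes.

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Plane in the form dot(normal, p) + d >= 0 for points on the inside.
struct Plane { Vec3 normal; float d; };

// Conservative world-space AABB of a locally bounded, transformed instance
// (standard min/max accumulation per axis; column-major matrix assumed).
AABB transformAABB(const AABB& local, const float m[16])
{
    const float lo[3] = { local.min.x, local.min.y, local.min.z };
    const float hi[3] = { local.max.x, local.max.y, local.max.z };
    float outLo[3] = { m[12], m[13], m[14] };   // start from the translation column
    float outHi[3] = { m[12], m[13], m[14] };
    for (int i = 0; i < 3; ++i)                 // output axis
        for (int j = 0; j < 3; ++j) {           // input axis
            float a = m[j * 4 + i] * lo[j];
            float b = m[j * 4 + i] * hi[j];
            outLo[i] += std::fmin(a, b);
            outHi[i] += std::fmax(a, b);
        }
    return { { outLo[0], outLo[1], outLo[2] }, { outHi[0], outHi[1], outHi[2] } };
}

// True if the box lies completely outside at least one frustum plane.
bool isCulled(const AABB& box, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum) {
        // Pick the box corner furthest along the plane normal ("positive vertex").
        Vec3 v = { p.normal.x >= 0 ? box.max.x : box.min.x,
                   p.normal.y >= 0 ? box.max.y : box.min.y,
                   p.normal.z >= 0 ? box.max.z : box.min.z };
        if (p.normal.x * v.x + p.normal.y * v.y + p.normal.z * v.z + p.d < 0)
            return true;  // even the most favourable corner is outside this plane
    }
    return false;  // intersects or is inside the frustum
}
```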