GLSL speed vs Java speed

Hello,

While working on some shader transformations I keep wondering what the fastest way to implement something would be, or whether there is no difference at all.
I would have benchmarked it if it weren't practically impossible to test, so do any of the GLSL experts here have an answer?

Case:
I have the following calculation in the vertex shader; it simply converts the model matrix into a normal matrix:

n = normalize(mat3(inverse(transpose(modelmatrix))) * normal);

If I precalculated this matrix in Java (every frame, for every object, in the worst case) and sent it to the shader per object, would that be faster, slower, or equal to calculating it for every vertex?
I would guess precalculating it per object, but that means no parallel calculation, plus an extra uniform upload to the shader for every object, every frame.

Yes, it would be faster. Just calculate it once per object on the CPU instead of once per vertex.
Also, you probably don't need the inverse transpose matrix at all. You only need it if you use non-uniform scaling (scaling the axes by different factors).
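Moving the calculation to the CPU might look something like the sketch below. It assumes a column-major `float[16]` model matrix (the usual OpenGL layout); the class and method names are made up for illustration. It uses the identity `transpose(inverse(M)) == cofactor(M) / det(M)`, so no explicit inverse or transpose is needed:

```java
public class NormalMatrix {

    // Compute the 3x3 normal matrix from a column-major 4x4 model matrix.
    // Since transpose(inverse(M)) = cofactor(M) / det(M), the two transposes
    // cancel and we can emit the cofactors directly.
    public static float[] fromModelMatrix(float[] m) {
        // upper-left 3x3 of the column-major input:
        // row 0 = a b c, row 1 = d e f, row 2 = g h i
        float a = m[0], b = m[4], c = m[8];
        float d = m[1], e = m[5], f = m[9];
        float g = m[2], h = m[6], i = m[10];

        float det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
        float invDet = 1.0f / det;

        // result is column-major, ready for glUniformMatrix3fv
        return new float[] {
            (e * i - f * h) * invDet, (c * h - b * i) * invDet, (b * f - c * e) * invDet,
            (f * g - d * i) * invDet, (a * i - c * g) * invDet, (c * d - a * f) * invDet,
            (d * h - e * g) * invDet, (b * g - a * h) * invDet, (a * e - b * d) * invDet
        };
    }

    public static void main(String[] args) {
        // non-uniform scale diag(2,3,4): normal matrix should be diag(1/2, 1/3, 1/4)
        float[] model = { 2, 0, 0, 0,  0, 3, 0, 0,  0, 0, 4, 0,  0, 0, 0, 1 };
        float[] n = fromModelMatrix(model);
        System.out.printf("diagonal: %.3f %.3f %.3f%n", n[0], n[4], n[8]);
    }
}
```

This runs once per object per frame instead of once per vertex, and the shader then just does `n = normalize(normalMatrix * normal);` with the uploaded `mat3` uniform.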

Thank you for the info; good to know about the inverse/transpose, that's a lot easier than I thought :).
So filling a float buffer and sending it to GLSL is still faster than a simple cast (for, say, 500-1000 triangles; without indexing that's 1500-3000 vertices)?

mat3(modelmatrix);

Doing this “cast” (it's actually a GLSL constructor) is probably a no-op. You probably wouldn't notice a difference between the GPU and CPU versions with big enough batches (i.e. more than a handful of triangles). I'm doing the CPU version in my “engine” because it is more generic (it works in every case).
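To make the “only matters for non-uniform scaling” point above concrete, here is a small numerical check you can run. It uses diagonal matrices to keep the arithmetic obvious (a diagonal matrix's inverse transpose is just the reciprocals on the diagonal); the class and method names are made up:

```java
public class NormalCastDemo {

    // transform a normal by a diagonal matrix diag(sx, sy, sz), then normalize
    static float[] transformAndNormalize(float sx, float sy, float sz, float[] n) {
        float x = sx * n[0], y = sy * n[1], z = sz * n[2];
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return new float[] { x / len, y / len, z / len };
    }

    public static void main(String[] args) {
        float[] n = { 1f, 1f, 0f }; // normal of a 45-degree slope

        // uniform scale diag(2,2,2): the plain model matrix and its inverse
        // transpose diag(0.5,0.5,0.5) give the same direction after normalizing
        float[] naiveUniform = transformAndNormalize(2f, 2f, 2f, n);
        float[] trueUniform  = transformAndNormalize(0.5f, 0.5f, 0.5f, n);

        // non-uniform scale diag(1,4,1): the inverse transpose is diag(1,0.25,1),
        // and the two results point in clearly different directions
        float[] naiveStretch = transformAndNormalize(1f, 4f, 1f, n);
        float[] trueStretch  = transformAndNormalize(1f, 0.25f, 1f, n);

        System.out.printf("uniform:     naive=(%.3f, %.3f)  correct=(%.3f, %.3f)%n",
                naiveUniform[0], naiveUniform[1], trueUniform[0], trueUniform[1]);
        System.out.printf("non-uniform: naive=(%.3f, %.3f)  correct=(%.3f, %.3f)%n",
                naiveStretch[0], naiveStretch[1], trueStretch[0], trueStretch[1]);
    }
}
```

So with rotations, translations, and uniform scaling only, `mat3(modelmatrix)` is all you need; the inverse transpose only earns its keep once the axes are scaled differently.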

Thank you, I'll forget about it for now then :slight_smile: