2D normal maps

I would have necro’ed davedes’ topic here:


Too bad I can’t post to it, at least to link to this topic since it is related.

Anyway, neat stuff in that thread! I played around with his shader and a rough normal map to apply lighting to a Spine skeleton:

Code (with davedes’ permission) is here:
https://github.com/EsotericSoftware/spine-runtimes/blob/master/spine-libgdx/test/com/esotericsoftware/spine/NormalMapTest.java

Runnable JAR is here:
http://esotericsoftware.com/spine/files/demos/spine-libgdx-normals.jar

Sort of related, this is neat:
http://www.spritelamp.com/

It looks cool but somewhat strange around the ears and the chin.

You said this is a rough normal map, how rough is it? And how would it look with a better normal map? :slight_smile:

The normal map is generated directly from the diffuse texture. Since the diffuse texture already has lighting painted onto it, it doesn’t work very well. If the colors were flat the lighting would look far better and the “strangeness” would disappear around areas like the ears and chin.

Actually this particular normal map wasn’t generated AFAIK, it was drawn by the girlfriend of the guy doing the SpriteLamp stuff, and presumably processed with SpriteLamp. She did it really quick and rough and insisted that I mention she makes no guarantees about the quality. :slight_smile: Maybe she’ll do a better one if she finds out I’m showing this around! :wink:

It should look really good with a better one. See the SpriteLamp link above for some nice examples. Here it is, packed:

Hey all, Sprite Lamp guy here. Damn cool seeing some normal maps from my software in conjunction with skelmesh animation!

Nate: You are correct, this was processed by Sprite Lamp. I actually think the normals mostly came out okay, except for the head (as matheus23 mentioned), because it has a big old hard edge running right down the side. That should be a pretty easy fix, except that it’s the middle of the night here (east coast of Australia) and the artist in question is asleep so the fix will have to wait. :slight_smile:

Also - and I’m coming from a position of some ignorance here, and squinting through tired eyes at foreign code, so I could be wrong - but I suspect there might be a bug in that normal map demo. You can see there’s a discontinuity in the lighting at some of the joints, particularly the character’s right (camera-facing) elbow. I suspect what’s going on is that the normal value is being read directly from the map, but not correctly rotated in the pixel shader. This will result in roughly correct lighting on body parts that are rotated roughly the same way as they appear in the texture atlas (like the head), but the further they deviate from that orientation, the worse it will get. Normally this is fixed by adding full normal/tangent information to the vertices of a mesh and multiplying by the TBN matrix. Because in this case we’re working in 2D, there might be some clever way to avoid that. Again, this is just my suspicion having looked at the code and the screenshot, but I could be way off base - apologies to the person who wrote the code if I’m full of crap!
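The suspected bug is easy to demonstrate with a little vector math. This is a hypothetical Python sketch (not the actual demo code): if a bone is rotated at runtime but its sampled normal isn’t, the diffuse term is computed against a stale normal and disagrees with the correct result.

```python
import math

def rotate2(v, angle):
    """Rotate a 2D vector by `angle` radians (counter-clockwise)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

# The xy part of a normal painted in the atlas, pointing straight up,
# and a light shining down from directly above.
n_atlas = (0.0, 1.0)
light = (0.0, 1.0)

# Suppose the bone is rotated 180 degrees at runtime.
theta = math.pi

# Wrong: use the atlas normal as-is -> surface appears fully lit.
wrong = n_atlas[0] * light[0] + n_atlas[1] * light[1]

# Right: rotate the sampled normal with the bone -> surface now faces
# away from the light and should be dark.
n_world = rotate2(n_atlas, theta)
right = n_world[0] * light[0] + n_world[1] * light[1]
```

The mismatch between `wrong` and `right` is exactly the kind of discontinuity that shows up at rotated joints like the elbow.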

Okay, I’m new here so I don’t know how fast this forum moves, but in case anyone responds to me - I apologise for not getting back to you in a timely fashion, because for the moment, I have to go and sleep. :slight_smile:

~ Finn

Ah, right. I disabled rotation when packing the images and it is looking better. Runnable JAR updated!

Sweet.

I haven’t looked at SpriteLamp. But I think a tool that allows artists to paint normal maps for 2D sprites (kind of like ZBrush painting) would be awesome.

Or how about this: a tool/plugin for 3D software that will export your untextured 3D character from a certain camera angle. The plugin exports two sprite sheets: your character’s normals (no need to paint them – and they are accurate and smooth) and your character’s wire/clay blueprint that you can then texture (for diffuse/spec/etc). It exports each 3D bone as a 2D sprite, suitable for software like Spine to make use of.

I imagine it wouldn’t be too hard to use a 3D tool like that.

New diffuse and normal maps from Finn:

Runnable JAR is updated too.

So, turns out it’s difficult to do artwork for a tool without being familiar with the tool. :slight_smile: We’re gradually improving the normal maps at our end and sending them through to Nate, but there are still some issues - for instance, we didn’t realise that the ‘eyes open’ and ‘eyes closed’ frames needed to be done separately, so in the runnable JAR that’s up there now, the area around Spineboy’s eyes is weird and flat.

matheus23: Since you asked what it would look like with a less rough normal map, I’ve taken the liberty of assembling the normal/diffuse maps we do have into a neutral pose and exporting a lighting preview via Sprite Lamp. This includes a few fancy shader effects, but nothing that would make it incompatible with Spine animation. :slight_smile: Unfortunately, right at the moment I don’t have the time to fully implement these effects with Spine’s runtimes, but I’m starting to think it would make a cool stretch goal for my kickstarter.

davedes: Sprite Lamp is definitely more like the first thing you mentioned - the whole reason for its existence is to make dynamic lighting possible while preserving styles unique to 2D art, such as visible brushstrokes or pixel art. In other words, it’s a way of creating normal maps in specific styles that are hard to do with 3D modelling. Of course, a rendering implementation involving normal maps doesn’t care about such things - it doesn’t matter whether the normal maps come from Sprite Lamp, a depth map, a mesh, or anything else you can think of. One of the key design decisions with Sprite Lamp was that it was for processing images, and how you create those images is up to you - I figured artists are so used to using Photoshop or ArtRage or whatever that trying to make an interface for them to paint in that won’t be frustratingly limited would be out of my scope. :slight_smile:

Also, am I right in thinking you’re the one I should thank for implementing normal mapping in the Spine runtime? If so, thanks! I have a feeling at some point I’ll be working with that code. I’m wondering, do you have any thoughts on what I mentioned up thread, regarding correct rotation of normal maps? Having looked at the JAR posted here, I strongly suspect that that problem is present (that is, the normals aren’t being correctly rotated with the underlying geometry). The best way to see the issue is to position the light about here:

See how given the position of the light source (that white dot I drew in), Spineboy’s right forearm ought to be underlit, but instead the top is lit?

Anyway, I’m just wondering if you think I might be right about that - not asking you to rework the code or anything! If the problem is what I think it is, the solution is likely to be nontrivial anyway - especially once Spine gets soft skinning and freeform deformation added, which will serve to further complicate things. I’m just trying to get a feel for what I’m up against if the time comes. :slight_smile:

~ Finn

You’re right, the normals aren’t rotated as the skeleton animates. That would be quite cool!

The new diffuse and normal images can be seen here:

https://raw.github.com/EsotericSoftware/spine-runtimes/master/spine-libgdx/test/spineboy-diffuse.png

https://raw.github.com/EsotericSoftware/spine-runtimes/master/spine-libgdx/test/spineboy-normal.png

The parts that don’t have proper normals aren’t very important for the skeleton, so probably not worth doing for this example.

Hmm… Rotating the normals. You could rotate the texture coords before sampling from the normal map, but it’s best to stay away from dependent texture reads and it might lead to some weird stuff with wrapping/filtering. It should be better to rotate the vec3 normal after sampling. Either way you would need to send a rotation for each bone – as a uniform or vertex attribute. And presumably an offset, for the rotation anchor point.
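The rotate-after-sampling idea boils down to unpacking the texel from its [0,1] storage range and rotating only the in-plane (xy) part of the normal by the bone’s angle. A hypothetical Python sketch of that math (not shader or runtime code):

```python
import math

def unpack_normal(rgb):
    """Map a normal-map texel from its [0,1] storage range back to [-1,1]."""
    return tuple(2.0 * c - 1.0 for c in rgb)

def rotate_normal(n, bone_angle):
    """Rotate only the in-plane (xy) part of the normal with the bone;
    z is unaffected by a rotation in the image plane."""
    c, s = math.cos(bone_angle), math.sin(bone_angle)
    return (c * n[0] - s * n[1], s * n[0] + c * n[1], n[2])

# A texel encoding a normal that points along +x in the atlas.
texel = (1.0, 0.5, 0.5)
n = unpack_normal(texel)  # (1.0, 0.0, 0.0)

# The bone carrying this sprite is rotated 90 degrees counter-clockwise,
# so the lit-surface direction should now point along +y.
n_world = rotate_normal(n, math.pi / 2)
```

In a shader the bone angle (or its precomputed sin/cos) would arrive as the per-bone uniform or vertex attribute mentioned above.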

I wonder whether the effect would be noticeably improved, and worth the performance cost.

That Sprite Lamp preview looks pretty darn amazing. And you can really easily colorize the diffuse map since it’s so flat.

@davedes you have to do the rotating or the normals are completely wrong. You might not notice it in this example, but just flip the character on the y-axis and the light comes from the wrong direction. It is quite common for an in-game character to be able to move from left to right and back again.

Danny02 speaks the truth, unfortunately. If Spineboy was rotated 180 degrees, the unrotated normals would make his head look concave, for instance. It’d get weird.

Fortunately, the maths for this is pretty well-trodden ground, because it’s a requirement of doing normal mapping in 3D applications (or even just vertex lighting, from back in the day). The long and the short of it is, you multiply the normals by the inverse of the transpose of the model view matrix, and then everything works out okay. :slight_smile: You do this after extracting the normal value from the texture, so you can avoid any dependent-texture-lookup shenanigans - the only price to pay is a matrix-vector multiplication in the pixel shader. I think this is feasible even on phone GPUs these days, although perhaps just barely, especially with pixel densities being what they are.
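The inverse-transpose rule can be checked numerically. A hypothetical Python sketch in 2D (the 3D case is identical in structure): for a pure rotation the inverse transpose equals the matrix itself, so the bug can hide on barely-rotated bones, but under non-uniform scale the naively transformed normal stops being perpendicular to the surface while the inverse-transpose one stays correct.

```python
def mul(m, v):
    """Multiply a 2x2 matrix (row tuples) by a 2D vector."""
    return (m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1])

def inv_transpose(m):
    """Inverse transpose of a 2x2 matrix ((a, b), (c, d))."""
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    # inverse is ((d, -b), (-c, a)) / det; transposing that gives:
    return ((m[1][1]/det, -m[1][0]/det), (-m[0][1]/det, m[0][0]/det))

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

# A 45-degree surface: tangent and normal are perpendicular.
tangent = (1.0, -1.0)
normal = (1.0, 1.0)

# Non-uniform scale: stretch x by 2.
S = ((2.0, 0.0), (0.0, 1.0))

t_world = mul(S, tangent)                # transformed surface direction
n_naive = mul(S, normal)                 # wrong: no longer perpendicular
n_right = mul(inv_transpose(S), normal)  # right: still perpendicular
```

`dot(n_naive, t_world)` comes out nonzero while `dot(n_right, t_world)` is zero, which is the whole point of using the inverse transpose.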

Things do get a bit hairier if you want to deal with stuff like soft skinning or other per-vertex deformation - basically, all the stuff they’re going to add to Spine after this next Kickstarter. :slight_smile:

Also, thanks! I’m glad you like the preview. I totally agree about the diffuse map - having it be separate from the normal map allows really easy recolouring of characters and the like, like a super powerful palette swap.

~ Finn

You can get around the need for a matrix multiplication in the fragment shader if you define a tangent and bitangent vector in the vertex shader.

You have to choose these two vectors so that they point along the UV directions (tangent along u, bitangent along v). In my 3D application I provided only the tangent as a vertex attribute and calculated the bitangent from a cross product.

The normal matrix Dinaroozie describes has to be applied to the normal, tangent, and bitangent vectors before they are handed over to the fragment shader. If your base normal is (0,0,1) you can skip the calculation for the normal (as in my code). Also, the inverse transpose is only needed if the model matrix contains shearing or non-uniform scaling; if it doesn’t, you can just take the rotation part (the top-left 3x3).


// fragment shader
vec3 T = normalize(tangent);
vec3 B = normalize(bitangent);
// Unpack the texel from its [0,1] storage range to a [-1,1] normal.
vec3 tn = normalize(texture(normal_sampler, texcoord).rgb * 2.0 - 1.0);
vec3 N = T * tn.x + B * tn.y; // normally also "+ normal * tn.z", but I guess the normal is (0,0,1) in your case
N.z += tn.z;
N = normalize(N);

Can’t you just rotate the light source per bone instead, to “simulate” the effect you’d get from rotating the normals?

That would break batching the bone images.

Hmm, would it? I thought of passing the rotated light source as a vertex attribute: compute it per bone and pass the same value for every attached vertex. You could also rotate it in the vertex shader when you pass the bone matrix. Of course this requires that you can pass additional vertex attributes per batched image…

Danny02: Wouldn’t four normalize calls do more performance damage than a matrix multiply though? Admittedly, the solution I proposed isn’t as versatile when it comes to deformation and the like. Also admittedly, I don’t really know what I’m talking about when it comes to shader optimisation so maybe I should just be quiet. :slight_smile:

Nate: I think cylab is describing taking the light’s position and getting it into tangent space, rather than taking the normals and putting them in world space. I think that solution should work, and it might even play nice with soft skinning…? But yeah, I don’t think the images would need to be changed for that, so it shouldn’t mess up batching.
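The two approaches really are interchangeable: rotating the normal into world space and rotating the light the opposite way into the bone’s atlas space give the same diffuse term, because dot(R·n, L) = dot(n, R⁻¹·L). A hypothetical Python check of that identity:

```python
import math

def rotate2(v, angle):
    """Rotate a 2D vector by `angle` radians (counter-clockwise)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

n = (0.6, 0.8)      # xy part of a sampled (atlas-space) normal
light = (0.0, 1.0)  # light direction in world space
theta = 1.1         # bone rotation in radians

# Option A: rotate the normal into world space, then light it.
a = dot(rotate2(n, theta), light)

# Option B: rotate the light into the bone's atlas space instead.
b = dot(n, rotate2(light, -theta))
```

Option B leaves the images and the sampled normals untouched; only one rotated light value per bone is needed, which is why it doesn’t break batching.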

This conversation has also made me realise that an atlas tool that handles normal maps without knowing that that’s what it’s doing might screw things up (by rotating them). I have to give this some thought, because it sounds like something Sprite Lamp ought to deal with intelligently.

@Dinaroozie, yes that is right. I compared the two shaders and the one with normalization takes 2 more cycles on AMD cards than the matrix multiplication. On the other hand, I took that code directly from my 3D stuff. With some thinking I came up with the following code, which doesn’t need any normalization and is faster than the matrix calculation according to AMD’s ShaderAnalyzer.


vec2 tangent = ...; // normalized vertex input
vec3 normal = ...;  // normalized texture read

vec2 bi_tangent = vec2(-tangent.y, tangent.x); // have to check that the texture orientation is correct
vec3 N = vec3(tangent * normal.x + bi_tangent * normal.y, normal.z); // no normalization needed if both normal and tangent are normalized

The tangent should be provided as a normalized vector in the vertex data; with skinning it might need to be renormalized in the fragment shader. Skinning would also cause even more problems for the matrix approach, because you would need to blend two matrices instead of two Vector2s.
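The reason no normalize() is needed: the 2D bitangent vec2(-t.y, t.x) is just the tangent rotated 90 degrees, so the reconstruction is a pure rotation of the normal’s xy part and preserves its length. A hypothetical Python check of that claim (not shader code):

```python
import math

def reconstruct(tangent, n):
    """Rebuild the world-space normal from a unit 2D tangent and a unit
    tangent-space normal; the bitangent is the tangent rotated 90 degrees,
    so no cross product is needed."""
    tx, ty = tangent
    bx, by = -ty, tx  # perpendicular to the tangent, same length
    return (tx * n[0] + bx * n[1],
            ty * n[0] + by * n[1],
            n[2])

t = (math.cos(0.7), math.sin(0.7))  # unit tangent from the vertex data
n = (0.36, 0.48, 0.8)               # unit normal sampled from the texture

N = reconstruct(t, n)
length = math.sqrt(N[0]**2 + N[1]**2 + N[2]**2)
# `length` stays 1.0: the xy part is only rotated, never scaled, so the
# fragment shader can skip the final normalize().
```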

D’oh… of course.

Though you could just use a uniform for that, so that you are still batching your bones together and without any extra vertex data.

This is of course a single case where the sprite generally points in one direction or the other and doesn’t rotate much on the whole. For other sprites I can see this being a problem, in which case I guess passing more vertex attributes is the best way to deal with it.