Libgdx overcoming color limitations?

Soooo here I’m yet again asking questions about Libgdx…

When I was using LWJGL, you could pass any floats as color parameters (e.g. r = 10f, g = 1f, b = 1f) and it would work: the image would come out a really bright red.
But now that I’m using Libgdx, I’m setting color with the SpriteBatch.setColor(r,g,b,a) method, and Libgdx only supports floats in the 0-1 range. Is there a way to overcome this? I would like to make certain sprites 2x brighter than they are. I hope there is a way around this limitation…

What’s wrong with

color.set(1f, .1f, .1f, 1f)

or, if you just want to make certain colors brighter:

color.mul(2f, 1f, 1f, 1f)

This doesn’t work, because color itself only accepts values from 0 to 1. I can’t make stuff brighter than the original texture. With LWJGL you could even push things all the way toward white or yellow.

The color You add to SpriteBatch is a “tint”, not an “absolute” color. This means that the sprite and color are blended together.
LibGDX uses shaders, and looking at the SpriteBatch shader, the colors are multiplied: “gl_FragColor = v_color * texture2D(u_texture, v_texCoords);”

So here is what You can do: You can modify Your sprite (externally or at runtime) to make it brighter (add more white), or You could write Your own shader, which gives You more control over the colors.
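
For the runtime option, a rough sketch with Pixmap (untested; "sprite.png" and the 2x factor are just placeholders, and the blending call has moved between libgdx versions):

// Sketch: pre-brighten the image data before uploading it as a Texture.
Pixmap pixmap = new Pixmap(Gdx.files.internal("sprite.png"));
pixmap.setBlending(Pixmap.Blending.None); // overwrite pixels instead of blending (static in older versions)
for (int y = 0; y < pixmap.getHeight(); y++) {
    for (int x = 0; x < pixmap.getWidth(); x++) {
        int rgba = pixmap.getPixel(x, y); // RGBA8888
        int r = Math.min(255, ((rgba >>> 24) & 0xff) * 2);
        int g = Math.min(255, ((rgba >>> 16) & 0xff) * 2);
        int b = Math.min(255, ((rgba >>> 8) & 0xff) * 2);
        int a = rgba & 0xff;
        pixmap.drawPixel(x, y, (r << 24) | (g << 16) | (b << 8) | a);
    }
}
Texture brightTexture = new Texture(pixmap);
pixmap.dispose();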

Edit: Colors have 4 channels, 1 byte per channel. As a float, the channel values range from 0f (nothing) to 1f (everything). As such, 10f does not make any sense.

I just don’t get it.
As the Libgdx developers, why would you go through the pain of limiting colors to just 0-1f?
Is there any benefit in doing so?

Sorry, I edited my reply (touching the subject of colors) while You asked the last question.
10f does not have a meaning in color channel, since 1f is the max.

Floating-point values of OpenGL colors are clamped to the range [0, 1] and LibGDX has several backends based on OpenGL through LWJGL, Android GL and JogAmp (JOGL). The range of the “tint” color is larger. If you don’t understand xsvenson’s explanation and mine, maybe you should learn the basics before using LibGDX.

LibGDX packs the RGBA components into a single float, and sends it along the pipeline. This is much faster than sending four floats (R, G, B, A) to the vertex shader. Unfortunately, this means that OpenGL will “normalize” the value in the range 0.0 to 1.0.
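
Roughly, that packing boils down to this (a sketch of what libgdx’s Color.toFloatBits does; the ABGR byte order and the 0xfeffffff mask are described further down the thread):

// Pack four 0-255 channels into one int (ABGR, alpha in the top byte),
// then reinterpret those bits as a float without any numeric conversion.
int r = 255, g = 64, b = 64, a = 255;
int abgr = (a << 24) | (b << 16) | (g << 8) | r;
float packed = Float.intBitsToFloat(abgr & 0xfeffffff); // mask avoids NaN bit patterns
// SpriteBatch writes 'packed' as one vertex component; OpenGL reads the same four
// bytes back as a normalized GL_UNSIGNED_BYTE attribute, so each channel lands in [0, 1].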

One option is to add a uniform value for “brightness.” See here. This is useful if you have a whole bunch of sprites that need the same brightness value. This should probably be fast enough for most use cases.
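
Something along these lines, for example (an untested sketch; u_brightness is a uniform name I made up, and the shader source just mirrors the default SpriteBatch attribute and uniform names):

String vert =
      "attribute vec4 a_position;\n"
    + "attribute vec4 a_color;\n"
    + "attribute vec2 a_texCoord0;\n"
    + "uniform mat4 u_projTrans;\n"
    + "varying vec4 v_color;\n"
    + "varying vec2 v_texCoords;\n"
    + "void main() {\n"
    + "    v_color = a_color;\n"
    + "    v_texCoords = a_texCoord0;\n"
    + "    gl_Position = u_projTrans * a_position;\n"
    + "}";
String frag =
      "#ifdef GL_ES\nprecision mediump float;\n#endif\n"
    + "varying vec4 v_color;\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture;\n"
    + "uniform float u_brightness;\n" // the extra knob
    + "void main() {\n"
    + "    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);\n"
    + "    gl_FragColor.rgb *= u_brightness;\n"
    + "}";
ShaderProgram shader = new ShaderProgram(vert, frag);

batch.setShader(shader);
batch.begin();
shader.setUniformf("u_brightness", 2f); // set after begin() so the program is bound
// ... draw the sprites that should be 2x brighter ...
batch.end();
batch.setShader(null); // back to the default shader for everything else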

If you need better performance (e.g. per-sprite brightness), you should use a vertex attribute for brightness, instead of a uniform (so it can be batched). You basically have two options. Either you hack it, or you re-implement SpriteBatcher to have a proper brightness attribute.

To hack it, you could pass in a U texture coordinate larger than 1.0; in the vertex shader, the brightness is taken from the integral part of the coordinate, and the actual U coordinate is the fractional part ([icode]fract(…)[/icode]). Then you send both of those along to your fragment shader. This means no “texture repeat” wrapping (which is pretty rare in 2D games anyway).
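
A rough sketch of the shader side of that hack (untested; the declarations are the same as in the uniform example above, and the CPU side would upload u plus a whole-number brightness instead of plain u):

// Vertex shader main() for the "brightness in the U coordinate" hack.
String vertMain =
      "void main() {\n"
    + "    float brightness = floor(a_texCoord0.x);                  // integral part\n"
    + "    v_texCoords = vec2(fract(a_texCoord0.x), a_texCoord0.y);  // fractional part is the real U\n"
    + "    v_color = a_color * vec4(brightness, brightness, brightness, 1.0);\n"
    + "    gl_Position = u_projTrans * a_position;\n"
    + "}";
// Caveat: a U of exactly 1.0 (the right edge of the texture) would get swallowed by floor(),
// so right-edge coordinates have to be nudged to something like 0.9999.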

The “proper” solution is to have a brightness attribute in your sprite batcher; but this is more work. You’d have to implement the Batch interface, copy over most of SpriteBatch’s code, and then add in the extra vertex attributes. Read more about working with mesh here.

And if you are just looking to change the brightness of the whole screen (rather than individual sprites), you should just use shaders and a post-processing step.

Cheers.

I don’t know much about LibGDX, but this is completely unrelated to OpenGL limitations.

When using fixed functionality, you can disable the clamping by calling:


glClampColor(GL_CLAMP_VERTEX_COLOR, GL_FALSE);

When using shaders, there’s no such limitation. The normalized attribute of glVertexAttribPointer() has no effect on data already in floating-point format.

[quote]For glVertexAttribPointer, if normalized is set to GL_TRUE, it indicates that values stored in an integer format are to be mapped to the range [-1,1] (for signed values) or [0,1] (for unsigned values) when they are accessed and converted to floating point.[/quote]
Therefore, the only reason for values to be clamped to [0.0,1.0] would be if LibGDX either implements the clamping itself or if LibGDX converts the floats supplied to unsigned bytes (the last one is true according to Davedes).

It’s obvious why TrollWarrior1 would call this out, since OpenGL doesn’t have this problem with either fixed functionality or shaders when using floats. I wouldn’t call an implementation detail “the basics”.

EDIT:

I would not recommend that. Even “modest” brightness increases will reduce the effective bit depth a lot. To show this, I drew a gradient in Paint.NET, multiplied it by 0.25 and then by 4.0 to bring it back to its original form. That’s the same as rendering a 0.25 sprite and then postprocessing it up to 1.0 by multiplying by 4.0.

We effectively killed 2 bits of information, so we get a 6-bit gradient with only 64 possible values instead of 256.
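
The same loss is easy to reproduce with a couple of lines of plain Java (a toy check, nothing LibGDX-specific):

// Count how many distinct 8-bit values survive a x0.25 store followed by a x4.0 readback.
java.util.Set<Integer> survivors = new java.util.TreeSet<Integer>();
for (int value = 0; value < 256; value++) {
    int stored = Math.round(value * 0.25f);    // quantized to 8 bits at quarter brightness
    int restored = Math.min(255, stored * 4);  // brightened back up in the post-process
    survivors.add(restored);
}
System.out.println(survivors.size()); // roughly 64 of the original 256 values remain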

Soo I came up with this method. You lose a little bit of color precision, but the effect is the same as LWJGL with color components above 1f.

In your vertex shader, multiply the color you pass on to the fragment shader by a constant vec4:

fragColor = a_color * vec4(2.0, 2.0, 2.0, 2.0);

This will make everything very bright.
Now, when you send color components to Libgdx SpriteBatch, divide them by the floats you put in the vertex shader. So in this case divide all components you send to SpriteBatch by 2.

spriteBatch.setColor(myColor.r / 2, myColor.g / 2, myColor.b / 2, myColor.a / 2);

You’re still losing a bit of precision, since your 8-bit color now has to cover the 0 - 2 range (512 values’ worth) with the same 256 steps it used for 0 - 1, but in contrast to the fullscreen-pass version you only lose the precision of the actual a_color variable, so I seriously doubt it’ll have a visible impact.

Fun fact: libgdx encodes an ABGR int color as a float, but the high bits are masked with 0xfeffffff to avoid producing floats in the NaN range (see Float#intBitsToFloat). This means packed color floats aren’t completely opaque! The reason is that floats in the NaN range can’t be converted back to the original int. If you know you won’t convert a color float back to an int, you can disable the mask using NumberUtils.intToFloatColorMask = false. E.g., Spine uses an FBO for image export, so I do this to get proper colors.
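
To make the “not completely opaque” part concrete (plain Java, just illustrating the arithmetic described above):

// 0xfeffffff clears bit 24: the lowest bit of the alpha byte in the ABGR int, which is also
// part of the float's exponent field. With that bit forced to 0 the exponent can never be
// all ones, so the packed value can never be a NaN, at the cost of alpha 255 becoming 254.
int opaqueWhite = 0xffffffff;                              // a=255, b=255, g=255, r=255
float packed = Float.intBitsToFloat(opaqueWhite & 0xfeffffff);
System.out.println(Integer.toHexString(Float.floatToRawIntBits(packed))); // feffffff -> alpha 254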

Ummm, you lose color precision on paper. In reality, all I do with colors is something like 0.5f or 0.55f. There is like no visible difference :stuck_out_tongue:

Yeah, that was my point. ^^ Just wanted to point out that there still was a loss in precision, but that it’s much better than a fullscreen pass.

I don’t quite get it… Yes, there is a NaN problem, but this int (or 4-byte) -> float ‘packing’ is just another interpretation of bits (it would be a union in C). This means there is no conversion whatsoever, even when going from floats to ints, when using Float.floatToRawIntBits(float).

So… there is no need for this masking… right? As you might expect I didn’t even test it because I’m rather… confident.

Ignore the Java source code of these methods (if any); the JVM completely ignores it and uses a union, which means it’s basically a no-op. It tells the JIT that it can, from then on, interpret that register as another data type.

We pack the 32-bit int into a float, not the other way around. Reason: vertices are stored in a float array, then copied to a direct buffer. Why not go with the direct buffer only? Cause Android is a turd: the direct buffer implementation is broken, and crossing the native/VM bridge for every int/float/byte costs too much compared to composing the data in primitive arrays, copying it to a direct buffer, and then uploading it to the GPU. It’s sad really.
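
In plain NIO terms the pattern looks roughly like this (a simplified illustration; libgdx itself does the array-to-buffer copy through its BufferUtils helper rather than plain NIO):

// Compose vertex data in an ordinary float[] (cheap on Android), then move it to a
// direct buffer in one bulk copy before handing it to OpenGL.
float[] vertices = new float[1000 * 5]; // x, y, packed color, u, v per vertex
java.nio.FloatBuffer directBuffer = java.nio.ByteBuffer
        .allocateDirect(vertices.length * 4)
        .order(java.nio.ByteOrder.nativeOrder())
        .asFloatBuffer();
// ... fill vertices[] sprite by sprite ...
directBuffer.clear();
directBuffer.put(vertices, 0, vertices.length); // one bulk copy instead of many tiny puts
directBuffer.flip();
// directBuffer is what ultimately goes to glBufferData / glVertexAttribPointer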

edit: actually, those methods going from int to float do modify the value, they aren’t NOPs.
edit2: what you said makes total sense, so I wrote a tiny little test: http://ideone.com/UEy4vA It’s been a very long time since we worked on this, and I can’t remember why we had to mask things.


int intColor = 0xff800001;
float floatColor = Float.intBitsToFloat(intColor);
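// 0xff800001 is a signaling-NaN bit pattern; intBitsToFloat is allowed to quieten it by
// setting the top mantissa bit, which is how intColor2 can come back as 0xffc00001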
int intColor2 = Float.floatToRawIntBits(floatColor);
System.out.println(Integer.toHexString(intColor)); // ff800001
System.out.println(Integer.toHexString(intColor2)); // ffc00001

Good, I thought I needed my crazy pills again… http://ideone.com/2mP7Tb

We stumbled on a serious JVM bug, then :slight_smile:

		System.out.println("Java version: "+System. getProperty("java.version"));
		System.out.println("JVM name: "+System.getProperty("java.vm.name"));
		System.out.println("JVM version: "+System.getProperty("java.vm.version"));
		System.out.println();
		
		int i1 = 0xff800001;
		float f = Float.intBitsToFloat(i1);
		int i2 = Float.floatToRawIntBits(f);
		System.out.println(Integer.toHexString(i1));
		System.out.println(Integer.toHexString(i2));

(Eclipse)

Java version: 1.7.0_25
JVM name: Java HotSpot(TM) 64-Bit Server VM
JVM version: 23.25-b01

ff800001
ff800001


Java version: 1.7.0_25
JVM name: Java HotSpot(TM) Client VM
JVM version: 23.25-b01

ff800001
ffc00001

Oh dear. Still broken in client _51 just released the other day, too.

Cas :slight_smile: