opinions needed: approaches to 2D gradients in OpenGL

Hello guys. I’m working on a real-time animation that requires a lot of gradient compositing. Just to give you a rough idea:

http://www.redpicture.com/pics/gradient.jpg

The image above is a prototype and shows various overlapping triangles filled with color gradients in Adobe Illustrator. Illustrator lets the user define an arbitrary start point and end point for a gradient. This generates a fill pattern, which is then masked by the path of the graphic object receiving the fill. In this case, it’s a triangle.

http://www.redpicture.com/pics/gradient-2.jpg

In order for my project to cope with rapid changes from the client, I have figured out how to extract the relevant data from a PDF file: I can pull out the geometry, the colors in the gradient, and the gradient’s start, end, and rotation.

The basic way (the only way I know, lol) is to read the data, generate pixel data, generate triangle vertices, bind the pixel data as a texture, and achieve the rotation by transforming the texture in the model’s (triangle) space. I think I can achieve this. However, before I do all that, it occurred to me that I might also approach this problem with a GLSL shader instead.
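Roughly what I’m picturing for the texture route is something like this (just a sketch of my own, not code I’ve run - the sizes and colors are made up, and gradientTex would come from an earlier glGenTextures() call):

/* build a 256x1 RGB ramp between the two endpoint colors pulled from
   the PDF, and upload it as a texture; the gradient's rotation would
   then come from rotating the texture matrix or the texture coords */
unsigned char ramp[256 * 3];
float start[3] = { 1.0f, 0.5f, 0.0f };   /* e.g. orange  */
float end[3]   = { 1.0f, 0.0f, 1.0f };   /* e.g. magenta */

for (int i = 0; i < 256; ++i) {
    float t = i / 255.0f;
    for (int c = 0; c < 3; ++c)
        ramp[i * 3 + c] = (unsigned char)((start[c] + (end[c] - start[c]) * t) * 255.0f + 0.5f);
}

glBindTexture(GL_TEXTURE_2D, gradientTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, ramp);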

  1. Which approach would you go for?

  2. If I want to animate or update the gradients on the fly, does that preclude the GLSL approach?

  3. Is the basic way I describe above flawed, or are there gotchas to consider here?

Thanks in advance for any help, oh gurus.

What exactly do you mean by “generate pixel data”? I would think you’d be able to do all of those gradients with just a single greyscale gradient texture and some manipulation of texture coords and vertex colours. Of course, if you want different gradient styles (like non-linear fades or radial gradients), the texture approach starts getting more and more complicated and the GLSL approach gets easier.

GLSL would be fine for animating things on the fly; the only real question is whether you’re willing to limit your app to people who have a GLSL-capable card. If that’s a tradeoff you’re willing to take, then I’d go down the shader route - it really does make the code more logical and flexible, instead of trying to bend a rigid API to your needs.
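To make that concrete, here’s the sort of thing I mean (very rough sketch - the vertex positions are placeholders, and greyRampTex would be a 256x1 white-to-black ramp you’ve uploaded yourself):

/* greyscale ramp modulated by the current color (the default GL_MODULATE
   texture environment); the u texture coordinate of each vertex decides
   where along the gradient that vertex sits, so sliding or rotating the
   texcoords moves and rotates the gradient */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, greyRampTex);
glColor3f(1.0f, 0.5f, 0.0f);                    /* tint, e.g. orange */

glBegin(GL_TRIANGLES);
glTexCoord2f(0.0f, 0.0f); glVertex2f(x0, y0);   /* gradient start    */
glTexCoord2f(1.0f, 0.0f); glVertex2f(x1, y1);   /* gradient end      */
glTexCoord2f(0.5f, 0.0f); glVertex2f(x2, y2);   /* halfway between   */
glEnd();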

The first thing you should be aware of is that OpenGL will actually do a linear gradient for you automatically if you have glShadeModel set to GL_SMOOTH. At that point you just have to extrapolate the gradient’s begin and end points out to the nearest vertices and set the glColor for each vertex accordingly.

Then, as has been pointed out, shaders will let you add more complex gradients, and they actually wouldn’t complicate the simple linear case either, since it would be just a single line setting the gl_FragColor variable to the interpolated value.
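To illustrate (only a sketch, and the uniform/varying names here are my own invention): the whole fragment shader for a two-color linear gradient boils down to

/* hypothetical GLSL fragment shader, stored as a C string for loading
   with glShaderSource(); "t" would be computed by the vertex shader from
   the gradient's start/end points and passed down as a varying */
const char *gradientFrag =
    "uniform vec4 startColor;\n"
    "uniform vec4 endColor;\n"
    "varying float t;   /* 0.0 at the gradient start, 1.0 at the end */\n"
    "void main() {\n"
    "    gl_FragColor = mix(startColor, endColor, t);\n"
    "}\n";

and animating the colors is then just a matter of feeding new values to the two uniforms (e.g. with glUniform4f) each frame.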

we also use gradients in our game a lot.
they are saved as SVG files (export script from Illustrator) and rasterized to textures on the fly using Batik.

it is not quite real-time unless you have a really fast CPU with multiple cores.

hey guys. thanks so much for the replies. and, sorry for the overly ambiguous question. i’m from an artist background, so often i know what i want to know, but don’t know how to ask…lol.

anyway, i think i can elucidate the problem better now:

  1. the animation is a one-off application for use at an event. i can specify the platform, and it will run on something with a bunch of texture memory.

  2. i have to span large distances at high resolution (1920 x 1080), so dithering will become necessary. i’ve already done tests using the vertex color blending built into opengl; the banding is too much for the client’s taste.

  3. i need to animate the colors in the gradients such as going from orange and magenta to blue and green. in addition, we may animate some of the triangles’ vertices.

hope this helps and is more clear. thanks for your valuable input.

So when you get banding you’re doing something like…

// Rough illustrative code: one smooth-shaded triangle fading from white to black
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 1.0f, 1.0f);   // white at the first vertex
glVertex2f(-1.0f, -1.0f);
glColor3f(0.0f, 0.0f, 0.0f);   // black at the other two vertices
glVertex2f( 1.0f, -1.0f);
glVertex2f( 0.0f,  1.0f);
glEnd();

and you see banding in the shades of gray between the white and black? That really shouldn’t be happening, and unfortunately that mode is pretty much the best you’ll be able to do: compared to textures it has substantially higher interpolation precision (typically 32-bit floating point per component versus 8-bit integer per component) and higher resolution as well.

[quote]3. i need to animate the colors in the gradients such as going from orange and magenta to blue and green. in addition, we may animate some of the triangles’ vertices.
[/quote]
That’s pretty easy to do with the method I’m describing above: just change the values passed to the glColor() calls each frame.
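Something like this, for example (untested sketch - the timing variables and vertex positions are placeholders):

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

/* each frame: blend the endpoint colors toward their targets, then
   draw exactly as before using the animated colors */
float t  = elapsed / duration;   /* 0 -> 1 over the animation, however you track time */
float r0 = lerp(1.0f, 0.0f, t), g0 = lerp(0.5f, 0.0f, t), b0 = lerp(0.0f, 1.0f, t); /* orange  -> blue  */
float r1 = lerp(1.0f, 0.0f, t), g1 = lerp(0.0f, 1.0f, t), b1 = lerp(1.0f, 0.0f, t); /* magenta -> green */

glBegin(GL_TRIANGLES);
glColor3f(r0, g0, b0);
glVertex2f(x0, y0);
glColor3f(r1, g1, b1);
glVertex2f(x1, y1);
glVertex2f(x2, y2);
glEnd();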

yeah, i can simulate gradients using per-vertex glColor3f() calls. the issue is the banding though. even with glEnable(GL.GL_DITHER), the results are bad.

how can i work in higher internal precision, then perhaps dither everything?

I’ve never really had to do anything special, and honestly I’ve never even tried enabling dithering as I’ve always assumed it would harm the image quality.

If that doesn’t sort your problem out, try taking a look at NeHe Lesson 3:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=03

If that produces noticeable banding, something is wrong with your system. In that case you might want to check that you have…

1.) A decent GPU, i.e. one from ATI or Nvidia
2.) The most recent drivers installed and working
3.) The desktop color depth set to 32-bit; that seems to be what things like GLUT and JOGL use to detect and set the framebuffer capabilities
4.) A properly adjusted monitor; I’ve been told that having the gamma set incorrectly can cause that sort of artifact, but that’s getting well beyond my area of expertise

thanks chris. i think the issue is that i’m working at high resolution over a limited color range. for example, doing a gradient @ 1920 x 1080 where i’m interpolating from say (.5, .5, .5) to (.75, .75, .75): that range only spans about 64 of the 256 levels an 8-bit channel can represent, so across 1920 pixels each level becomes a band roughly 30 pixels wide. it seems mathematically impossible (even with dithering) to hide that kind of banding.

i received some advice that if i work internally with a 16bpp texture, then dither, i might get what i’m looking for.
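the rough shape of what i’m going to try looks like this (my own sketch of ordered dithering on the cpu, so take it with a grain of salt): compute each pixel of the gradient at 16-bit precision, then knock it down to 8 bits through a bayer matrix before uploading the texture.

/* standard 4x4 bayer matrix, values 0..15 */
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* quantize a 16-bit value (0..65535) to 8 bits, using a per-pixel
   threshold offset so the rounding error is spread out spatially
   instead of piling up into visible bands */
unsigned char ditherTo8(unsigned int v16, int x, int y)
{
    unsigned int offset = (bayer4[y & 3][x & 3] * 257u) / 16u;   /* 257 == 65535 / 255 */
    unsigned int v = (v16 + offset) / 257u;
    return (unsigned char)(v > 255u ? 255u : v);
}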

i’m writing some tests right now, and will post results when complete.

You’d have no problems with that interpolation range if everything were floating point, or probably even 16-bit integer. But I think the problem is a little more deeply rooted than that: I believe most CRTs are 8 bits per component, and many LCD panels are even lower.

Though I’d happily hear about any solution to the problem you discover!