How to do anaglyph 3D using LWJGL

Here’s a little tutorial about how to create the anaglyph stereoscopic 3D effect as I did it in Hyper Blazer.

Although it might need some tinkering to make it look good, and even though anaglyph 3D is not the most effective stereoscopic technique, it’s a trick that’s so easy to add to an existing OpenGL game that I hope to see more games implementing it; it can really add something to the experience.
You do look like a total dork when you wear those silly red-cyan 3D glasses though ;D

I’m assuming you know what anaglyph 3D is, but if you don’t, here’s a link:

So the idea is to render the image twice, once for your left eye and once for the right eye. Both images are filtered in such a way that each eye only sees one of them, and then they are blended together.

Both the position and direction of the left and right eyes’ lines of sight need to be changed a bit so that:

  1. They are about 10 cm or so apart
  2. They look slightly ‘cross eyed’

Then the images should be filtered so that if you wear the red-cyan glasses, your left eye will only see the image meant for the left eye, and the right eye only sees the image meant for the right eye.
This means that since the left glass of red-cyan glasses is red, only the red colour component should be rendered for the left eye, and only green and blue should be rendered for the right eye. Then the images should be merged together.

Fortunately, in OpenGL this can be done with a super easy and neat trick: Using the glColorMask function:
http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/gl/colormask.html

So what you do each frame is this:

  1. clear colour and depth buffers
  2. set glColorMask to only render red
  3. render the frame translated and rotated for the left eye
  4. clear only the depth buffer (not the colour buffer because otherwise the previously rendered red image would get erased)
  5. set glColorMask to only render green and blue
  6. render the frame translated and rotated for the right eye

In Hyper Blazer, it looks like this:


        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        if (anaglyphMode) {
            GL11.glColorMask(true, false, false, true);
            renderEye(-eyeDistance, -crossEyedness);

            GL11.glClear(GL11.GL_DEPTH_BUFFER_BIT);

            GL11.glColorMask(false, true, true, true);
            renderEye(+eyeDistance, +crossEyedness);

            // reset the colormask again for things like HUD rendering or other things that do not have depth
            GL11.glColorMask(true, true, true, true);
        } else {
            // no anaglyph mode means rendering for just one 'eye'
            renderEye(0, 0);
        }

        renderHUD();

renderEye() renders everything in the game with 3D depth
renderHUD() renders things with no 3D depth (like the HUD)

The ‘eyeDistance’ will be used in the renderEye() method to translate the camera position a bit along the x-axis (so that the two virtual ‘eyes’ are some distance apart).

The ‘crossEyedness’ will be used in the renderEye() method to slightly rotate each camera inwards around the vertical axis (‘toe-in’).
This is VERY important because it determines how far away the player is looking and how the depth will appear: wherever the lines of sight of both eyes meet will be rendered with no depth offset.
With no crossEyedness, the player would be looking infinitely far away, which means the game is rendered such that everything pops out of the TV screen; even the farthest object would appear at the depth of the screen itself, which doesn’t look natural.
What you want is for most of the image to appear ‘behind’ the screen, with only some really close objects popping out of it, so you’ll have to tune the crossEyedness value for that.
Without crossEyedness, the effect mostly doesn’t work at all.
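To get a feel for sensible values, you can derive the toe-in angle from the eye offset and the distance at which you want the two lines of sight to converge (i.e. where things should appear at screen depth). A minimal sketch — the method name and the example numbers are my own, not from Hyper Blazer:

```java
public class ToeIn {
    /**
     * Toe-in angle in degrees for one eye, given its sideways offset from
     * the centre (half the eye separation) and the convergence distance,
     * i.e. the distance at which objects should appear at screen depth.
     */
    static double crossEyednessDegrees(double eyeOffset, double convergenceDistance) {
        return Math.toDegrees(Math.atan(eyeOffset / convergenceDistance));
    }

    public static void main(String[] args) {
        // eyes 10 cm apart (0.05 m offset per eye), converging 2 m away
        double angle = crossEyednessDegrees(0.05, 2.0);
        System.out.println(angle); // roughly 1.43 degrees
    }
}
```

Each eye is then rotated inwards by that angle before rendering, so objects at the convergence distance end up with no horizontal offset between the two images.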

Some notes:

  • You will have to avoid colours like full red, green or blue. They will appear in only one eye, which will give you a big headache and no depth. In Hyper Blazer, all colours are somewhat greyed out in anaglyph mode to avoid this.
  • In my experience, the effect usually doesn’t work very well on laptop screens, but it works quite well on my flatscreen TV and my CRT monitor
  • Adjust the eyeDistance and crossEyedness in such a way that you don’t overdo the effect. Overdo it, and you’ll get a headache
  • Obviously using anaglyphic rendering will cost performance as the whole scene has to be rendered twice per frame
  • I used LWJGL in the example code, but of course any OpenGL binding will do
  • You can easily order red-cyan 3D glasses on the internet. They’re dirt cheap

(I recently implemented anaglyph in minecraft)

You’ve made a mistake. If you bring the eyes apart and rotate the direction they look, objects infinitely far away will be rendered infinitely far apart on the screen. In reality, things infinitely far apart is straight ahead on both eyes. What you really want to do is SKEW the projection matrices.

Here’s what I did (Eye is either -1 or 1)


glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// shift the projected image sideways; this is what skews the frustum
glTranslatef(-eye*stereoScale, 0, 0);
gluPerspective(70, aspect, 0.05f, renderDistance);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// move the camera itself sideways to the eye position
glTranslatef(eye*0.10f, 0, 0);
moveCameraToPlayer(a);

To make it perfectly correct, you need to know the eye separation of the player, the distance from the player to the monitor and the size of the monitor.
Then you need to set the fov to match what the player would actually be able to see through the screen, and calculate the skew so objects at screen distance would be rendered with an offset of 0, or so that objects at infinity would be rendered exactly eye distance apart (these two will match up if you know the exact head->screen distance and do the fov calculation right).
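For reference, the same skewed (‘off-axis’) projection can be built directly with glFrustum instead of translating a gluPerspective matrix. Here’s a sketch of the bounds calculation — the names and example numbers are my own assumptions, not Minecraft’s code:

```java
public class OffAxis {
    /**
     * Returns {left, right, bottom, top} for glFrustum, skewed so that
     * objects at screenDistance line up in both eyes. eyeOffset is the
     * signed sideways offset of this eye from the centre.
     */
    static double[] frustumBounds(double fovYDegrees, double aspect,
                                  double near, double screenDistance,
                                  double eyeOffset) {
        double top = near * Math.tan(Math.toRadians(fovYDegrees) / 2.0);
        double halfWidth = top * aspect;
        // how far the frustum is shifted sideways, scaled to the near plane
        double shift = eyeOffset * near / screenDistance;
        return new double[] { -halfWidth - shift, halfWidth - shift, -top, top };
    }

    public static void main(String[] args) {
        double[] left  = frustumBounds(70, 16.0 / 9.0, 0.05, 2.0, -0.0325);
        double[] right = frustumBounds(70, 16.0 / 9.0, 0.05, 2.0, +0.0325);
        System.out.println(left[0] + " " + right[1]); // the two frusta mirror each other
    }
}
```

Each eye’s modelview is then translated by its eyeOffset as usual; the two frusta are mirror images of each other and their views line up exactly at the screen plane.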

[quote]If you bring the eyes apart and rotate the direction they look, objects infinitely far away will be rendered infinitely far apart on the screen.
[/quote]
If you consider that you look slightly cross eyed if you look at an object near to you, then why shouldn’t it be rendered that way?
That’s exactly what your eyes do, isn’t it?

[quote]In reality, things infinitely far apart is straight ahead on both eyes.
[/quote]
Yes, but only if you’re looking at things infinitely far.
Not if you look at something close to you because your eyes are then rotated inwards to the object you’re looking at.

Now I’m not saying that it’s impossible that I made a mistake, but I’m not sure I follow your argument about what I did wrong, or why things should be skewed instead of rotated.
In any case, I’m going to try what happens if I use your way of doing it.

Anyway thanks for the feedback

The point is to force the viewer to cross his eyes to see the object, not to render the object from a crossed perspective.

Awesome graph:
http://www.mojang.com/notch/misc/anaglyph.jpg

The black boxes are the boxes we wish to render. The black circles are the eyes. The black lines are the screens.
The colored lines show the projections.
The bottom area shows how it shows up on the screen.

In front of screen, at screen, behind screen, and at infinity.

That is indeed an awesome graph :slight_smile:

I see what you mean.

Still I can’t really wrap my head around why my method is wrong.
And thinking about it, this statement is actually wrong:

[quote]If you bring the eyes apart and rotate the direction they look, objects infinitely far away will be rendered infinitely far apart on the screen.
[/quote]
This implies that an object infinitely far away is not rendered because it would be outside of the screen in both view ports, but I think you don’t take the FOV into account.
If you look at my attached awesome graph, you’ll see what I mean.
(The black box is what the 2 eyes are looking at. Even at that view, objects infinitely far away will not be rendered infinitely far apart on screen)

Could it be that my method is just different from yours with perhaps a slightly different result and not actually plain wrong?

I have always done stereoscopic images with only a translation. Even stereoscopic cameras don’t do rotation.

I think the trick is that cameras are not eyes. The cameras provide an image for each eye, and then your eyes do the focusing.
You can’t predict where the user will focus (the background? a far away object? a near object?), so you can’t define a rotation.

Oh, yes, you’re right! I totally didn’t think that one through. Yes, you can rotate and have things at infinity still show up on the screen.
Thanks for graphing it so I understand. :smiley:

My gut feeling is still that a skew is right, but I have no proof now.

There’s another improvement to be done. If you just do the glColorMask thing, a pure red object will be black in one eye and white in the other, and a pure cyan object will be the reverse.
To solve this, you need to make the red eye see a monochrome version, and mix some of the red into the cyan eye. I added this to my texture loader and anywhere I manually set a color:

(The average intensity of a color is r*0.3 + g*0.59 + b*0.11)

            if (Minecraft.STEREO_3D_MODE)
            {
                // red channel: plain luma, so the red eye sees a monochrome image
                int rr = (r * 30 + g * 59 + b * 11) / 100;
                // green/blue: fold 30% of the red channel into the cyan eye
                int gg = (r * 30 + g * 70) / 100;
                int bb = (r * 30 + b * 70) / 100;

                r = rr;
                g = gg;
                b = bb;
            }
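Plugging a pure red into that mix shows why it stops hurting: both eyes end up seeing the same intensity. A quick standalone check (my own rewrite of the snippet, not the actual Minecraft code):

```java
public class AnaglyphMix {
    /** Applies the channel mix above; channels are in 0..255. */
    static int[] mix(int r, int g, int b) {
        int rr = (r * 30 + g * 59 + b * 11) / 100; // luma for the red eye
        int gg = (r * 30 + g * 70) / 100;          // 30% red folded into green
        int bb = (r * 30 + b * 70) / 100;          // 30% red folded into blue
        return new int[] { rr, gg, bb };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(mix(255, 0, 0)));
        // pure red comes out as the gray (76, 76, 76) -- hence the gray hearts
    }
}
```

White stays white (255, 255, 255), since the per-channel weights each sum to 100.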

Just plain glColorMask:

http://www.mojang.com/notch/misc/anaglyph_1.png

With the color mixing:

http://www.mojang.com/notch/misc/anaglyph_2.png

(Look through the glasses, compare the left eye image to the right eye image)

Except that the hearts are grey now :persecutioncomplex:

That’s because they’re red. With anaglyph, you lose one color channel.

You can’t show a red color that has the same color intensity on both eyes, because to the left eye EVERYTHING is red, and to the right eye NOTHING is.

In fact cameras are very much like eyes.

“In fact slight rotation inwards (also called ‘toe in’) can be beneficial. Bear in mind that both images should show the same objects in the scene (just from different angles) - if a tree is on the edge of one image but out of view in the other image, then it will appear in a ghostly, semi-transparent way to the viewer, which is distracting and uncomfortable. Therefore, you can either crop the images so they completely overlap, or you can ‘toe-in’ the cameras so that the images completely overlap without having to discard any of the images.”

Your stereo camera setup defines what you look at, exactly like a single camera. You as a user can still look somewhere else, but it won’t be where the camera intended it. Again, exactly like a single camera.
In an ideal world, you’d wear contact lenses with little monitors in them that would track the rotation of both eyes and adjust the view ports accordingly. But that’s not really viable yet, so we’re stuck with one 2D screen with a fixed view that we somehow want to translate into a stereoscopic image.
Well, that’s my theory anyway :slight_smile:

@Markus_Persson:
Cool, I’ll check that out when I get home! :smiley:
Now I just prevent it by greying everything out a bit, but maybe your method is better.

The 2nd picture does indeed look a lot easier on the eyes, but now you seem to lose the colour red completely.
You’re right that you can’t display 100% red without wanting to claw your eyes out (or even being able to see depth with those colours), but there must be a way to shift those problematic colours a bit more towards grey without losing them altogether. Hmmm…

Well, you’re losing an entire color channel, leaving you with just two. It might be possible to hue-shift red and compress the entire color spectrum. Since you’re seeing things through a horrible color filter anyway, the brain might adapt and interpret the color as red. =D

I’m not clever enough at the moment to implement that, though.

Here’s a video explaining James Cameron’s new 3D camera:
http://video.google.com/videoplay?docid=-241532803911842846
It details how the lenses are rotated inwards, exactly how I described. So unless James Cameron is wrong too, I suppose I didn’t make a mistake with my way of rendering after all.

[quote]Well, you’re losing an entire color channel, leaving you with just two. It might be possible to hue-shift red and compress the entire color spectrum. Since you’re seeing things through a horrible color filter anyway, the brain might adapt and interpret the color as red.
[/quote]
Compressing the entire color spectrum is exactly what I do, and it works fairly well.
But then again, anaglyph is the worst way of projecting stereoscopic images because of the way it abuses colour filtering, so it’s all a big trade-off anyway.

I’d claim he’s wrong.

And from what you explained (graying out), you’re not compressing the color spectrum at all. Could you describe your algorithm?

Wow, that would be an epic mistake as that camera is filming the most expensive movie ever :smiley:
I hope you’re wrong because I’m kinda looking forward to that movie ;D

Pardon the sucky code:


    public TileNormal(float r, float g, float b) {
        this.r = r;
        this.g = g;
        this.b = b;
        if (Game.ANAGLYPHIC) {
            float im = (r + g + b) / 3f;
            
            // damp color
            this.r = im - (im - r) * Game.ANA_COLOR_DAMP;
            this.g = im - (im - g) * Game.ANA_COLOR_DAMP;
            this.b = im - (im - b) * Game.ANA_COLOR_DAMP;
            
            // adjust brightness
            this.r = this.r + (1f-this.r) * Game.ANA_BRIGHTNESS;
            this.g = this.g + (1f-this.g) * Game.ANA_BRIGHTNESS;
            this.b = this.b + (1f-this.b) * Game.ANA_BRIGHTNESS;
        }
        this.texId = Game.TEXTURE_POOL.getTextureID("/tile.jpg", Texture.TYPE_MIPMAPPED, Texture.ENV_NONE);
    }

Reading it again, I’m not exactly sure what I was thinking, but the idea is to bring all colors closer to gray :persecutioncomplex:
Perhaps strictly speaking the word compression is not the right word, although that’s probably a matter of interpretation
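For what it’s worth, the math in that constructor does pull colours towards grey: with ANA_COLOR_DAMP at, say, 0.5 and the brightness term left at 0, a pure red (1, 0, 0) has its channel spread halved. A standalone version of the two steps (the constant values here are made-up examples, not Hyper Blazer’s actual settings):

```java
public class AnaDamp {
    /** Pulls a colour towards its average intensity, then brightens it. */
    static float[] damp(float r, float g, float b, float colorDamp, float brightness) {
        float im = (r + g + b) / 3f;
        // damp: colorDamp = 1 keeps the colour, 0 collapses it to gray
        r = im - (im - r) * colorDamp;
        g = im - (im - g) * colorDamp;
        b = im - (im - b) * colorDamp;
        // brighten: move every channel a fraction of the way towards 1
        r += (1f - r) * brightness;
        g += (1f - g) * brightness;
        b += (1f - b) * brightness;
        return new float[] { r, g, b };
    }

    public static void main(String[] args) {
        float[] c = damp(1f, 0f, 0f, 0.5f, 0f);
        System.out.println(c[0] + " " + c[1] + " " + c[2]);
        // red stays the largest channel, but the spread shrinks from 1.0 to 0.5
    }
}
```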

Like he said, our brain corrects for focus. What is good for a movie is not good for a game:

  • In a movie, viewers are forced to focus on a particular subject. In that case the view is wrong, but your brain compensates; your eyes don’t have to do the focusing, so it is less stressful.
  • In a game, you just can’t force the player to focus on something.

[quote]Reading it again, I’m not exactly sure what I was thinking, but the idea is to bring all colors closer to gray persecutioncomplex
[/quote]
I don’t know the details, but the anaglyph driver from nVidia does a colour filter too. To me it seems to be mostly more red.

In Ice Age 3d, they played around a lot with the focus depth and stereo effect strength across shots. The end result was that, sure, stuff looked 3d both in closeups and wide shots, but the scale of everything varied widely. In wide shots, the huge lumbering mammoths looked like tiny toy figures. It was horrible.

It appears from that video that Cameron is going to do something similar.

That is indeed exactly the problem of using stereoscopic 3D in an interactive game: The rendered view ports are not corrected according to what the user is looking at.
This is not a problem in 2D (although it becomes one as soon as you implement depth-of-field effects).

However this doesn’t make rotating the view ports inwards wrong. If you don’t do it, the camera is focusing at infinity instead of closer by, and you still have exactly the same problem.

No, I disagree: to me it sounds more like they made a few mistakes in Ice Age 3D that have nothing to do with rotating the viewports.
More specifically, it sounds exactly like they separated the viewports too far apart in the wide shots (probably in an attempt to enhance the 3D effect), which makes everything look too small.

As explained in the video, they avoided that problem in the new camera by placing the lenses the same distance apart as human eyes (old film cameras were too big to do that).

I tried to make a drawing to see how stereoscopic rendering with rotation differs from stereoscopic rendering without, but I didn’t manage it… when using rotation, the projection planes are not parallel to the screen!