LibGDX - Combine textures?

Hi all.

I’m following a tutorial on creating lasers in LibGDX and I’m having to use multiple images to create the effect (a total of six, as I’m not overlaying the animation texture because I don’t like it). I was wondering if it’s possible to combine these six images at runtime? I was personally thinking of creating a class that stores an ArrayList of all six sprites and then draws all of them at the relevant locations. This would require rotating all the sprites relative to their parents, positioning them, etc.

This seems like a lot of work to get up and running, so I was wondering if LibGDX had some way of taking all six textures and combining them into one. It would make things much simpler, as I could simply rotate/move one texture :point:

Look into framebuffers. They have a backing texture that may be what you’re looking for.

Try looking into TextureAtlas. Maybe the texture packer could work.

You don’t move or rotate textures, but rather the meshes that are texture-mapped. Putting all those textures into a single atlas does not simplify this at all. It may improve performance, but it does not make it simpler.
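
(A small illustration of that point, with placeholder names: what you transform is the Sprite, i.e. the textured quad, while the Texture itself never changes.)

Sprite laser = new Sprite(laserTexture);  // the texture is only sampled, never transformed
laser.setPosition(100, 200);              // these calls move/rotate the quad the texture is mapped onto
laser.setOriginCenter();
laser.setRotation(45f);
laser.draw(batch);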

Thanks guys, I’ll look into those.

@Pitbuller So if I was to pass those textures into a new sprite, would that be better?

With a framebuffer, you just draw all the textures with a batch to the FrameBuffer and then use getColorBufferTexture() to get a texture object that you can draw to the batch or apply to a sprite.

Perfect :slight_smile: that seems to be exactly what I’m looking for. So say I had 30 lasers, each with 6 different textures, I would simply create a loop inside the FBO pass that handles all the placements of the different textures (starting point, mid point and end cap of the laser) and sends this information over to my SpriteBatch?

Just thinking though… 3 of the textures have to be the same colour, as they give the outline of the laser. The other 3 are the white part in the middle of the laser. At the moment I’m using a batch that simply sets the colour before each laser part, and this works very nicely. Would I still be able to do this with a framebuffer?

Example: I have a red laser, so 3 images need their colours set to red, and the other 3 textures are already white, so we are good there. Would I be able to draw two FBOs and then simply lay the middle of the laser over the top of the outline of the laser? :point:

If I understand you correctly, this is the effect you want?


FrameBuffer thisBeam = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, false);
SpriteBatch batch = new SpriteBatch();

// do this part every time you need to update this beam
thisBeam.begin();
Gdx.gl20.glClearColor(0, 0, 0, 0);          // clear to transparent so only the beam is visible
Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
for (BeamPart part : beamParts) {           // BeamPart is a stand-in for whatever holds each texture, tint and offset
  batch.setColor(part.color);               // tint this part of the beam
  batch.draw(part.texture, part.x, part.y);
}
batch.end();
thisBeam.end();

// draw the laser beam
yourMainBatch.draw(thisBeam.getColorBufferTexture(), x, y);
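
One gotcha worth flagging: the FrameBuffer’s colour texture comes out flipped vertically relative to a normal texture, so it usually needs flipping when drawn. A minimal sketch:

TextureRegion beamRegion = new TextureRegion(thisBeam.getColorBufferTexture());
beamRegion.flip(false, true);   // undo the vertical flip of FBO textures
yourMainBatch.draw(beamRegion, x, y);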

Thank you very much Cougar ;D I tested the effect with 1 laser and it works great, so I’m going to clean the code up tomorrow and attempt to write something that will allow me to manage all the lasers at once. I don’t really understand framebuffers, so I’m going to attempt to read up on them, and I also want to read up on OpenGL blending as it’s come in handy twice now.

Here’s a bonus picture of the effect :slight_smile: credit to CodePoke.net for the effect and thank you again!

EDIT: I do seem to be having some issues setting the origin and position of the sprite O.o The coords seem to be drawn from the top left; getting a little confused :stuck_out_tongue:

http://s28.postimg.org/ghft9i0nx/javaw_2015_06_16_00_10_40_65.png

So I figured out my rotation issue. When I create the framebuffer, I create it with the width and height of my game, which means that when I create the sprite, it also has the width/height of the screen. I tried setting the framebuffer to the size I want the sprite to be, but I got some weird results. Any idea what I could do? :slight_smile:
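
(For reference, a likely cause of the weird results is that the SpriteBatch projection still matches the screen rather than the smaller framebuffer. A minimal sketch, assuming the laser’s pixel size is known up front; the sizes here are placeholders:)

int fboWidth = 256, fboHeight = 64;   // hypothetical laser size
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, fboWidth, fboHeight, false);
SpriteBatch fboBatch = new SpriteBatch();
// the key point: the batch's projection must match the FBO size, not the screen
fboBatch.setProjectionMatrix(new Matrix4().setToOrtho2D(0, 0, fboWidth, fboHeight));

fbo.begin();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
fboBatch.begin();
// ... draw the laser parts in FBO-local coordinates (0..fboWidth, 0..fboHeight)
fboBatch.end();
fbo.end();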

I’ve still not been able to solve the issue with the framebuffer.

I’m thinking about just creating a class that holds info about the laser, such as the start/middle/end textures as well as the positions of each, and then simply storing them in an ArrayList and updating their positions each update. I’m going to have issues rotating the lasers with the turrets, as I’m not sure what trig function should be used here, or how to position the laser relative to the turret.
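
(In case it helps, a small sketch of the trig involved, with placeholder variable names: the angle from the turret to its target comes from atan2, and the laser sprite can then share that rotation, with its origin at the beam’s start.)

// angle in degrees from the turret to its target
float angleDeg = MathUtils.atan2(targetY - turretY, targetX - turretX) * MathUtils.radiansToDegrees;

// position the laser sprite at the turret and rotate it around the beam's start point
laserSprite.setPosition(turretX, turretY);
laserSprite.setOrigin(0, laserSprite.getHeight() / 2f);
laserSprite.setRotation(angleDeg);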

I imagine you have a good reason and I just didn’t grasp it in my read-through of the thread, but can you clarify why you need to combine textures in a framebuffer? Maybe you could link to the tutorial you mentioned in your first post, and/or post the individual textures you’re using so that we can see what you’re trying to do more clearly. I have a feeling there’s an easier solution somewhere here than what you’re trying to do currently (although admittedly, since I don’t fully understand the context, I could be completely wrong about that).

Sure :slight_smile: here’s the video to show what I’m trying to do, as well as the link to the article:
http://codepoke.net/2011/12/27/opengl-libgdx-laser-fx/

https://www.youtube.com/watch?v=qzmW9TuLRRU

I may be totally wrong here, but at the moment I’m not seeing why combining textures manually or using a framebuffer is necessary or even desirable for this. It seems all that’s needed is a custom mesh, created on demand for each laser, that would look something like this:

---------------------------------------------------
|           |                         |           |
|           |                         |           |
---------------------------------------------------
  start cap      stretched section       end cap

The three graphics (start cap, interior, and end cap) would then be part of the same texture. You’d still need the separate base and overlay, but they can be rendered using the same mesh. If you’re doing the dynamic animated effect, that might need its own mesh (or at least its own set of texture coordinates). You can just render the three layers in three separate passes, or combine (at least) the first two layers into a single pass using a shader.
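
(To make the layout concrete, here’s a rough sketch of the same idea drawn with a plain SpriteBatch rather than a custom mesh, assuming the three regions are TextureRegions cut from one texture; all names here are placeholders.)

float capW = startCap.getRegionWidth();
float beamH = interior.getRegionHeight();
float innerLength = beamLength - 2 * capW;                       // the middle section stretches to fill

batch.draw(startCap, beamX, beamY, capW, beamH);
batch.draw(interior, beamX + capW, beamY, innerLength, beamH);   // stretched interior
batch.draw(endCap, beamX + capW + innerLength, beamY, capW, beamH);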

Again, I could be missing something, so if this isn’t relevant for some reason, my apologies. But at the moment at least I’m not seeing anything that would require the kind of image manipulation you seem to have in mind.

I was trying to avoid meshes as I have no understanding of them. I was hoping the framebuffer would allow me to avoid them :frowning: seems maybe not.

Sure, I understand. Working with meshes directly provides a lot of flexibility, but you may not even need it for what you’re doing.

I did a quick search, and found mention of the LibGDX ‘NinePatch’ class, which may allow you to achieve the same thing without working with meshes directly. In case you’re not familiar with it, a ‘9-patch’ (as it’s often called) is a common mesh structure where a quad is split (with two horizontal lines and two vertical lines) to yield 9 sub-regions, kind of like a tic-tac-toe board (although the regions don’t have to be square). You can then adjust the parameters of the 9-patch so that the interior region stretches arbitrarily while the border regions maintain a fixed dimension.

This is often used for graphical elements, such as UI buttons and windows, that need to be sized arbitrarily while having consistent visual behavior around the border. With the parameters set up properly, I think this would work for your laser effects as well (excepting the animated dynamic effect, which is a little trickier).

So, I’d look into ‘NinePatch’ and see if you can get anywhere with that. Based on what you’ve posted so far at least, I think it’s likely to be a more appropriate solution than messing with framebuffers or anything like that.
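
(For what it’s worth, a minimal NinePatch sketch; the border sizes and names here are placeholders and depend on the actual laser graphic.)

// left/right/top/bottom are the pixel sizes of the fixed border regions
NinePatch laserPatch = new NinePatch(laserTexture, 8, 8, 0, 0);
laserPatch.setColor(Color.RED);                        // tint the outline layer
laserPatch.draw(batch, x, y, beamLength, beamHeight);  // the interior stretches to beamLength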