Hi all,
I’m creating scalable GUI components programmatically by drawing layers on top of each other. For example, I use Sprites for the gradients and then draw 9-patch Images on top of those for the borders and bevel effects (I might be able to use Pixmaps to draw these instead of 9-patch Images, but I haven’t had a chance to try that yet).
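To make that concrete, here is roughly what each component does now, every frame. This is a simplified sketch against the current scene2d Actor API; LayeredButton, gradientSprite, and borderPatch are just names I made up for the layers described above:

```java
import com.badlogic.gdx.graphics.g2d.Batch;
import com.badlogic.gdx.graphics.g2d.NinePatch;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.scenes.scene2d.Actor;

public class LayeredButton extends Actor {
    private final Sprite gradientSprite;  // gradient fill layer
    private final NinePatch borderPatch;  // border/bevel layer

    public LayeredButton(Sprite gradientSprite, NinePatch borderPatch) {
        this.gradientSprite = gradientSprite;
        this.borderPatch = borderPatch;
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        gradientSprite.setBounds(getX(), getY(), getWidth(), getHeight());
        gradientSprite.draw(batch);  // layer 1: gradient fill
        borderPatch.draw(batch, getX(), getY(), getWidth(), getHeight());  // layer 2: 9-patch border/bevel
    }
}
```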
So each individual component, like a button, ends up being several layers of drawing. I was wondering if there is a way to do all of this creation at startup, merge the layers together, and end up with a single image that I can then load into every button in my program. It seems like it would be far more efficient for each button to draw a single image instead of several.
I think FrameBuffer might do what I need; from what I understand, it captures the output of a SpriteBatch into a Texture, but I’m not totally clear on how it works or how to use it. The few examples I’ve found use the FrameBuffer inside the render method together with the SpriteBatch, but I don’t want to run it every frame. I’d like to combine all of the image layers once, in the constructor of my GUI class, after they have been created for the device’s resolution, and then use that single image for the rest of the program.
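Something like the following is what I’m picturing; it’s just a sketch pieced together from the docs and those examples (bakeButton and the parameter names are mine, and I’m assuming this runs after create() so a GL context already exists):

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.g2d.NinePatch;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.math.Matrix4;

// Bake the layered button art into one texture, once, at construction time.
public static TextureRegion bakeButton(Sprite gradientSprite, NinePatch borderPatch,
                                       int width, int height) {
    FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, false);
    SpriteBatch batch = new SpriteBatch();
    // Project in pixel coordinates of the off-screen target, not the screen.
    batch.setProjectionMatrix(new Matrix4().setToOrtho2D(0, 0, width, height));

    fbo.begin();
    Gdx.gl.glClearColor(0f, 0f, 0f, 0f);  // transparent background
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.begin();
    gradientSprite.setBounds(0, 0, width, height);
    gradientSprite.draw(batch);                    // bottom layer: gradient
    borderPatch.draw(batch, 0, 0, width, height);  // top layer: border/bevel
    batch.end();
    fbo.end();
    batch.dispose();  // only needed for this one-time bake

    // The FBO's color attachment is an ordinary Texture, but it comes out
    // upside-down, so wrap it in a flipped TextureRegion. Note that disposing
    // the FrameBuffer also disposes this texture, so the fbo has to be kept
    // alive for as long as the baked texture is in use.
    TextureRegion baked = new TextureRegion(fbo.getColorBufferTexture());
    baked.flip(false, true);
    return baked;
}
```

Is that roughly the right way to use it outside of render?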
The difficult part seems to be getting Sprites, Images, and Pixmaps to combine at all, or drawing them into some common format I can use to create a Texture, an Image, etc.
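If the all-Pixmap route works out, I imagine the flattening step would look something like this (again just a sketch; gradientPixmap and borderPixmap are placeholders for layers I’d have drawn myself):

```java
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;

// Flatten pre-drawn Pixmap layers into a single Texture on the CPU side.
public static Texture bakeFromPixmaps(Pixmap gradientPixmap, Pixmap borderPixmap,
                                      int width, int height) {
    Pixmap combined = new Pixmap(width, height, Pixmap.Format.RGBA8888);
    // Pixmap blending defaults to SourceOver, so each draw blends on top
    // of what is already there instead of overwriting it.
    combined.drawPixmap(gradientPixmap, 0, 0);  // bottom layer: gradient
    combined.drawPixmap(borderPixmap, 0, 0);    // top layer: border/bevel
    Texture baked = new Texture(combined);      // upload to the GPU once
    combined.dispose();                         // CPU copy no longer needed
    return baked;
}
```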
Is what I want to do possible? Thanks!