You may have seen the many questions people ask about the actual Swing GUI (which isn’t working very well) being actively ignored (!)
I think it’s because we don’t use it and don’t have time to study the code for an answer.
We’ve already had a discussion on this, and I’m bumping it again, hoping it will produce some programming effort.
So I think it would be good to build a true 3D GUI, with the following ideas:
Emulate the Swing API (not the look) with the following components:
XComponent
XContainer
XFrame
XPanel
XLabel
XTextField
XButton
XProgressBar
XImage
Use Java2D image generation from text for the XLabel, XTextField and XButton components
Build a beautiful (skinnable?) and customizable (colors) GUI, with special FX such as color blending, transparency, and so on…
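To make the proposal concrete, here is a rough sketch of what the base of such a hierarchy could look like. All names and fields are hypothetical — just one possible shape for the API, using fractional sizes relative to the parent:

```java
// Hypothetical sketch of the proposed XComponent hierarchy.
// Positions and sizes are fractions of the parent (0.0 .. 1.0).
import java.util.ArrayList;
import java.util.List;

abstract class XComponent {
    float x, y;                     // position, relative to parent (0..1)
    float width = 1f, height = 1f;  // size, relative to parent (0..1)
}

class XContainer extends XComponent {
    private final List<XComponent> children = new ArrayList<>();
    public void add(XComponent c) { children.add(c); }
    public int getChildCount() { return children.size(); }
}

class XLabel extends XComponent {
    String text;
    XLabel(String text) { this.text = text; }
}

class XPanel extends XContainer {}
```

XFrame, XButton, XTextField, etc. would slot into the same tree; the point is only that containers hold children and everything shares the relative-size fields.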
Do you really need all the container classes though?
We do already have an image class - org.xith3d.geometry.Quad, which can be textured nicely (fair enough if you want to add a redundant version for completeness though).
When you say “true 3D”, are you talking about 3D ordering and effects (on 2D quads)?
The way I see it: The UI elements are all implemented as textured Quads. The Container is really just an extension of BranchGroup.
Mouse events are tricky – if you want to be able to embed a text box in the scene then picking is your only choice. If you make a constraint that input only works when the BG is attached to a Foreground node, then you can just map the XY screen coordinates (a far easier solution I think – text output though could be plonked anywhere). Of course, you can get your jazzy translucent effects, Z ordering etc.
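To illustrate the easier of the two approaches: if input only works when the BG is attached to a Foreground node covering the canvas, mapping the mouse to the UI is just a normalization. A minimal sketch (the class and method names are made up):

```java
// Hypothetical mapping of absolute screen coordinates to relative
// foreground coordinates, assuming the UI quad fills the whole canvas.
final class ForegroundMapper {
    /** Returns {u, v} in 0..1, with v = 0 at the top of the canvas. */
    static float[] toForeground(int mouseX, int mouseY,
                                int canvasWidth, int canvasHeight) {
        return new float[] {
            mouseX / (float) canvasWidth,
            mouseY / (float) canvasHeight
        };
    }
}
```

Hit-testing a component is then just a rectangle check against its relative bounds.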
I would like to help define the API.
How dedicated to completing this are you? Unlike most people who have attempted the same, you seem to be a forum regular with some projects on the go. Will you stay the distance?
Er… maybe not; we should just decide whether we want a “layout-manager-type GUI system” or a “pixel-position-based GUI system”.
The container stuff would be useful in the first case.
We can use it. What image format can we use for transparency support? PNG, JPEG?
Hum… I was wondering about making some models (rounded boxes) to make the UI look good, but it isn’t necessary after all.
Yes, that’s probably the most “reasonable” way to implement it.
Or could we attach the BG directly to a Foreground node in the XComponent constructor?
In any case, I think the Foreground node as you describe it is the best solution.
However, I’ll have to study the source code to understand exactly how it works.
I’ve seen a Xith test (I can’t remember which) where some text is written directly onto a texture. Is it using Java2D?
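For reference, rendering text to an image with Java2D is straightforward and works headless; something along these lines (a sketch, not the actual code from that test):

```java
// Sketch: render a string into a BufferedImage with Java2D; the image
// can then be uploaded as a texture on a quad.
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

final class TextToImage {
    static BufferedImage render(String text, Font font, Color color) {
        // First pass: measure the text.
        BufferedImage probe = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = probe.createGraphics();
        g.setFont(font);
        int w = Math.max(1, g.getFontMetrics().stringWidth(text));
        int h = Math.max(1, g.getFontMetrics().getHeight());
        int ascent = g.getFontMetrics().getAscent();
        g.dispose();

        // Second pass: draw onto a transparent ARGB image
        // (PNG-style alpha, so the quad can blend over the scene).
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        g = img.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                           RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
        g.setFont(font);
        g.setColor(color);
        g.drawString(text, 0, ascent);
        g.dispose();
        return img;
    }
}
```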
Well, all contributions are accepted.
My only problem is that I don’t have CVS access for Xith, so I’d be forced to work locally, and others couldn’t contribute directly…
I think I’ll finish this, as long as I don’t stop programming (very improbable…)
I’ve got good reasons for that:
It would be a valuable addition to Xith, and would help all its users, who have given me advice over the past year or two
It’s really needed for the Gamma game engine
It’s a cool Java experience, and will improve my (our?) knowledge of Java
I think it’s very interesting, and generally I do what I find interesting…
BUT I can’t do this alone, and I’m waiting for other contributors (arne?)
Okay, some API designing now…
Basically, to create a GUI, you need :
A Canvas3D
The main BranchGroup
The position and size of the components are defined in %, relative to their parents. So the GUI can be any size.
We need to ensure that the textured quad used to display the GUI is always entirely visible (some parts may be hidden when the window is resized to a non-4:3 ratio).
We also need a way to map coordinates on the textured quad to the mouse coordinates on screen. Does anyone have an idea how to do that? I think it’s the only thing I don’t know how to do.
For the rendering, we could think of themes as “Displaying Engines”.
There would be an interface with drawComponent(Point2f position, Point2f size, Component comp); the function could test which type of component it is and call the corresponding graphics procedure (including simplified image-based procedures). If the component type is unknown (home-made), the drawing is delegated to the component itself, using the default look & feel.
What do you think of that?
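As a sketch of that idea (using plain floats instead of Point2f, and a String as a stand-in for a real XLabel; everything here is hypothetical):

```java
// Hypothetical "Displaying Engine": themes implement it; unknown
// (home-made) components fall back to drawing themselves.
import java.awt.Graphics2D;

interface XDrawable {
    void drawSelf(Graphics2D g);  // default look & feel fallback
}

interface DisplayEngine {
    void drawComponent(Graphics2D g, float x, float y,
                       float width, float height, Object comp);
}

class DefaultEngine implements DisplayEngine {
    String lastDrawn = "none";  // recorded for illustration only

    @Override
    public void drawComponent(Graphics2D g, float x, float y,
                              float w, float h, Object comp) {
        if (comp instanceof String) {
            // stand-in for a known type such as XLabel:
            // call the theme's image-based label procedure here
            lastDrawn = "label";
        } else if (comp instanceof XDrawable) {
            // unknown component: delegate drawing to the component itself
            ((XDrawable) comp).drawSelf(g);
            lastDrawn = "custom";
        }
    }
}
```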
You could probably get your graphics very simply by using Component.paint(Graphics). This way we would also be able to add fancy Swing stuff like JTree to Xith3D.
To save processing time, a pick should also only be made when there’s actually a click or a drag.
Or you could also try to project the quad to the screen and make a simple projection formula. This would only be good I think, when the quad has a static position relative to the view.
We chose to draw the UI on a textured quad with the Foreground node, and to build a new UI system, without Swing.
Because Swing isn’t suited for complex, beautiful game UIs, and is slow.
This is an assumption, but I agree it’s probably true.
Regarding CVS and project coordination, this is exactly what the Xith Toolkit project was set up for, so my vote is that we use that. This way anyone can contribute, and it can be included in the official distribution. If you haven’t already got Dev access, please apply for it.
The root node of a UI instance should definitely be a BranchGroup. This way one can easily add it to the scene, either embedded in the scene somewhere (import support), or encapsulated in a Foreground node (input support).
Regarding input support, the way I see it we have two options - picking, and resolving the absolute screen coordinates to the relative foreground coordinates. I am not sure the former would work. The latter I am sure is possible, but I’m not totally sure of the implementation. I worked on a project recently where we managed to map the XY screen coordinates precisely to a textured quad in the scene. I did this by setting the FOV to 90 degrees and moving the camera to half the quad’s height (IIRC). There would surely be a mathematical formula, based on the FOV, that one could use to calculate this. To be honest, I do not know it (I experimented with trial and error to work it out, but failed), but I did get it to work for a 90 degree FOV. Maybe if I read some more OpenGL books I could work this relationship out.
Does everyone here use the HIAL input abstraction layer? I do, and it allows me to seamlessly switch between the JOGL and LWJGL frameworks. My vote would be to use this library as the input layer for the system (one really big advantage is that our UI would be totally abstracted from the actual input, allowing future input libraries to be used with no changes to our code).
A low level design question: What does our pipeline look like?
Components drawn to a Graphics2D -> Graphics2D converted to Texture -> Quad texture updated?
Obviously for a snappy, and resource friendly UI we are going to want minimal overhead.
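One cheap way to keep that overhead minimal is a dirty flag per component, so the expensive Graphics2D → texture upload only happens when something actually changed. A minimal sketch (names made up):

```java
// Sketch of the repaint pipeline with a dirty flag: the
// Graphics2D -> texture upload runs only when a component changed.
class DirtyQuad {
    private boolean dirty = true;
    int textureUploads = 0;  // counts simulated texture updates

    void markDirty() { dirty = true; }

    /** Called once per rendered frame. */
    void renderFrame() {
        if (dirty) {
            // ...redraw the component to a Graphics2D, convert to a
            // Texture, and update the quad's texture here...
            textureUploads++;
            dirty = false;
        }
        // otherwise: reuse the existing texture, near-zero overhead
    }
}
```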
It’s cool to see I’m not the only one who doesn’t know the formula to get foreground coordinates from absolute screen coordinates.
I think having a FOV of 90° isn’t a problem. Can you post some example code of what you’ve done, so I don’t have to recode it?
I don’t currently use HIAL, but I’ll switch to it, for Gamma and my game.
For the pipeline, if we can have a function that is called each rendering in each component, we could do that : Graphics2D -> Graphics2D converted to Texture -> Quad texture updated.
I really do recommend HIAL; it adds absolutely minimal overhead, and the benefits are many. Glad to see you are looking at using it
Unfortunately we probably can’t restrict the user’s choice of FOV, so that’s not a go. In fact, I’m not sure my example will be of any use to a UI, except to prove that such a correlation of screen to 3D coordinates is possible. I’ll dig up the code anyway.
You need more than that, only a little, but it makes a huge difference. For proper, real, automatically resizing layout you need to copy HTML ;D
Each component has a size that is either explicit (width=50px) or implicit (width=75%; defaults to 100% if not specified).
Implicit sizes are calculated relative to the parent’s “available area”.
The parent’s available area defaults to 100% of its size, but can be set to any percentage of it (available=90% would mean the available width is 90% of the parent container’s width), or to a fixed offset from its size (available=-10px would mean “my area minus 10 pixels on top, bottom, left and right”).
This simple scheme is enough to do almost any conceivable layout in a very easy to maintain and non-verbose manner. W3C got it very right about layout; Sun got it very wrong.
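A sketch of how those sizing rules could be resolved in code (hypothetical names; the available-area computation is left to the parent container for brevity):

```java
// Sketch of the HTML-like sizing rules: a size is either explicit
// (pixels) or implicit (a percentage of the parent's available area).
final class SizeSpec {
    private final boolean explicitPx;
    private final float value;  // pixels if explicit, fraction 0..1 if implicit

    private SizeSpec(boolean explicitPx, float value) {
        this.explicitPx = explicitPx;
        this.value = value;
    }

    static SizeSpec px(float pixels)   { return new SizeSpec(true, pixels); }
    static SizeSpec percent(float pct) { return new SizeSpec(false, pct / 100f); }

    /** Resolve to pixels, given the parent's available area in pixels. */
    float resolve(float parentAvailablePx) {
        return explicitPx ? value : value * parentAvailablePx;
    }
}
```

A component with no spec at all would just default to `percent(100)`.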
That’s what I wanted to say, except I don’t know how to make explicit sizes work with the following base principle:
The GUI is by default made at a fixed resolution, e.g. 1024x768, and for lower resolutions (canvas size changed to 800x600 or 640x480) the textured quad is scaled down: so the GUI looks exactly the same at any resolution with a 4:3 ratio.
Do you see what I mean ?
There are bigger problems than that: The GUI can be displayed at different aspect ratios. For example, on a widescreen computer the aspect ratio is different to that of a 4:3 monitor. While you can just add black bars, or stretch the widescreen one, it’s not the best solution to do this.
Take a game like Warcraft III - that will happily run on a widescreen computer, the GUI just magically “works” (maybe there’s some clever padding in there, I shall investigate). Interestingly (my brother and I did a test of this), when on a widescreen computer, you actually do see more of the playing field than when not, which surprised us both, since it gives the widescreen player an advantage (however small).
In my game, where the “UI components” are manually placed, on the widescreen they stay there but you can see more of the scene on either side. Obviously I need to make them relative to the left/right.
As far as changing the aspect ratios goes, the UI can either be scaled to fit (like Warcraft), or kept at its “native” resolution (like Red Alert 2). Ideally, the scalable components (fonts, etc.) should scale smoothly.
Good idea with the HTML-esque type layout, blar - makes sense to me.
Wouldn’t it be possible to get the positions on the plate by making picks at the corners of the screen and then interpolating between the intersection points? You’d only have to do the picks once, so it wouldn’t be a performance problem.
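That corner-pick idea could look roughly like this: pick once at the four screen corners to get their (u, v) hits on the quad, then bilinearly interpolate for any mouse position. A sketch with made-up names; it assumes the quad’s position relative to the view doesn’t change between picks:

```java
// Sketch: bilinear interpolation between four corner intersection
// points found by picking once at the screen corners.
final class CornerInterpolator {
    // Quad-space {u, v} of the four screen corners, found by picking:
    private final float[] tl, tr, bl, br;

    CornerInterpolator(float[] topLeft, float[] topRight,
                       float[] bottomLeft, float[] bottomRight) {
        this.tl = topLeft; this.tr = topRight;
        this.bl = bottomLeft; this.br = bottomRight;
    }

    /** sx, sy in 0..1 across the screen; returns interpolated {u, v}. */
    float[] map(float sx, float sy) {
        float[] out = new float[2];
        for (int i = 0; i < 2; i++) {
            float top    = tl[i] + (tr[i] - tl[i]) * sx;  // along top edge
            float bottom = bl[i] + (br[i] - bl[i]) * sx;  // along bottom edge
            out[i] = top + (bottom - top) * sy;           // between the edges
        }
        return out;
    }
}
```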
The HUD system I created for Java3D had 3 types of positioning: fixed, relative, and mixed. Fixed is obviously ‘place it 15 pixels across, 10 up’ type things; relative is ‘place it 10% of the way up the screen, 90% of the way across’; and mixed was ‘place it 15 pixels up from 50% of the way up the screen’ type things. This worked great for placing the components, even on my widescreen laptop :).
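Those three modes boil down to very little code. A hypothetical sketch (one axis only; names made up, not the actual Java3D HUD code):

```java
// Sketch of the three positioning modes: FIXED (pixels),
// RELATIVE (fraction of screen), MIXED (pixel offset from a fraction).
final class HudPosition {
    enum Mode { FIXED, RELATIVE, MIXED }

    static float resolveX(Mode mode, float pixels, float fraction,
                          int screenWidth) {
        switch (mode) {
            case FIXED:    return pixels;                        // "15px across"
            case RELATIVE: return fraction * screenWidth;        // "90% across"
            case MIXED:    return fraction * screenWidth + pixels; // "50% + 15px"
            default:       throw new AssertionError();
        }
    }
}
```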
However, I tried porting it to Xith, but my HUD relied on foreground plates in the 3D world, and the texture coords were set by picking in each corner of the component. Under Xith, you can only get the object picked, not the coords, so for now at least it’s Java3D only. Unless anyone fancies implementing picking that returns a Point3f
Not really, because it sounds like you are defining precisely the situation where you would want to use no explicit sizes at all. This is not the best terminology, I appreciate, but it made sense at the time - recall that “implicit” just means “defined by a percentage, not a pixel size”, i.e. the “size in pixels is implicit”, even though “the size is explicit, in the meaning of the English word explicit”.
If you define all sizes implicitly then the GUI automatically scales up and down independent of resolution.
Normally, you have some things you want to have actual size.
Most obvious example: the shortcut bar for things like WoW, or the inventory window in an RPG. You have lots of icons for these bars/windows that are fixed-size (e.g. 64x64). At higher resolutions, you want the whole GUI to scale up, EXCEPT FOR the inventory/shortcut bar, whose size you want to scale up (like everything else), but whose contents you don’t want to scale: you just want to draw more cells (i.e. you get a shortcut bar with more options, or you can see more of your inventory without scrolling). This is a perfect case for defining the cells in the bar/window using explicit pixel sizes. Then everything “just works”: you get a GUI that resizes to be resolution independent, apart from the inventory etc. that just draws more or less of its contents :).
This works best with some utility layout classes, which are almost trivial to implement. E.g. I have several times (for different employers ;)) written a “gridWithFixedSizeCells” class that acts just like you’d expect an RPG inventory window to work - you add things to it, they all have the same fixed size, and it displays as many of them as it can within its size (whatever size its parent has given it).
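A sketch of the core math of such a class (hypothetical names; the rendering and the add() bookkeeping are omitted):

```java
// Sketch of a "grid with fixed-size cells": given the area the parent
// allocated, it shows as many fixed-size cells as fit (RPG-inventory
// style). More area -> more visible cells, never scaled cells.
final class FixedCellGrid {
    private final int cellSize;  // e.g. 64 for 64x64 icons

    FixedCellGrid(int cellSize) { this.cellSize = cellSize; }

    int columns(int availableWidthPx) { return availableWidthPx / cellSize; }
    int rows(int availableHeightPx)   { return availableHeightPx / cellSize; }

    /** How many items are visible without scrolling at this size. */
    int visibleCells(int widthPx, int heightPx) {
        return columns(widthPx) * rows(heightPx);
    }
}
```

For example, a bar of 64px cells given a 640x128 area shows 10x2 = 20 cells; give it more pixels at a higher resolution and it simply shows more cells.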
I’ve done picking that returns a Point3f (there must be a thread about that around here - do search). There is also some code in the xith-tk that does this.
Okay, so I think there are two ways to do it.
Take a button, for example. In a “Displaying Engine” using the pre-defined image-based functions, you’d have 6 little images composing the button, plus a hard-coded grey fill. (See attachment)
If we do it my way (resizing the entire GUI), everything looks good, and the buttons’ borders will grow along with their text.
If we do it more the way blahblahblahh proposes, the text will be redrawn larger, and the border will stay the same size.
Hmmm… the best way is probably the second…
The problem is that if we want a game to look exactly the same at 640x480, 800x600, 1024x768 or 1280x960, we can’t define component sizes explicitly. Maybe we could add an option for whether components are resized when the screen (canvas) size changes?
( Note: this is true not only of button borders: an image will look bigger at 640x480 than at 800x600… maybe that’s not a problem… )
For that, I had either two types of button, or more normally a button with two constructors and a “mode” int:
MODE_NORMAL: the borders are fixed-size (explicit, pixels), and whatever is added as the image/text/content of the button (unlike Swing, I like to add( component ) to a button to make its content! Much, much cleaner and more powerful) is given 100% of what remains.
If you think about it, this is all much simpler than Swing: my buttons are just containers that take only one add()'able element; their “available area” is defined as “whatever is left over after the button has allocated space for its borders”. A default component (defaults to 100%) added to a default button will automatically render in the full space of the button’s center, and not overlap the borders.
EDIT: so, not only can you add anything to become the rendered portion of a button, but you don’t need any special code (you don’t need all this crappy ImageIcon stuff from Swing just to put an Image object on a JButton). KISS…
EDIT: this also, obviously, makes animated buttons extremely easy: no special code!
MODE_FIXED_ASPECT: the borders are given implicit sizes (percentages), and all the other code remains identical.
The constructor would be e.g.: button( int mode, int borderDepth ), where depending upon mode, borderDepth is either a percentage or a pixel size.
There are, of course, cleaner OOP ways to do that method but this is just the quick-n-simple method I’ve used before.
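For illustration, the core of that mode-dependent border math might look like this (hypothetical names, one axis only; the real class would also lay out its single child):

```java
// Sketch of the two border modes: the button allocates its borders
// first, and the single add()'ed child gets 100% of what remains.
final class XButtonLayout {
    static final int MODE_NORMAL = 0;        // borderDepth in pixels
    static final int MODE_FIXED_ASPECT = 1;  // borderDepth as a percentage

    /** Returns the content width left over after the two side borders. */
    static float contentWidth(int mode, float borderDepth,
                              float buttonWidthPx) {
        float borderPx = (mode == MODE_NORMAL)
                ? borderDepth                           // explicit pixels
                : buttonWidthPx * borderDepth / 100f;   // percent of width
        return buttonWidthPx - 2 * borderPx;
    }
}
```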