StraightEdge - 2D path finding and lighting

Well, we already have implementations of path finding for gridless (as yours would be), square, and hex (both orientations, vertical/horizontal), but we would have to make some changes to the rendering and path finding code to accommodate it. It does not help that we allow variable unit sizes, e.g. one person may use 50px squares, another may use 100px squares, and another may use 25px hexes. We also have to keep track of “units” so that each cell has a specific unit value. That way, if one game system counts things in squares, the unit value would be one, so Range: 5 means 5 squares. Likewise, in a game system where each cell is 5 feet, the same 5 squares turns into 25 feet for measurements.
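
Just to illustrate the units-per-cell conversion, a minimal sketch (the class and method names here are made up, not taken from the actual code):

```java
// Hypothetical illustration of the "units per cell" idea: the same distance
// measured in cells is reported differently depending on the game system.
public class CellUnits {

    public static double toGameUnits(int distanceInCells, double unitsPerCell) {
        return distanceInCells * unitsPerCell;
    }

    public static void main(String[] args) {
        // A system that counts in squares: 1 unit per cell, so Range: 5 means 5 squares.
        System.out.println(toGameUnits(5, 1.0)); // 5.0
        // A system where each cell is 5 feet: the same 5 squares is 25 feet.
        System.out.println(toGameUnits(5, 5.0)); // 25.0
    }
}
```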

Not 100% sure I understand the full meaning, but it would likely be used for both AI and UI. One of the things we currently do is have a Fog of War effect and “remember” where the “character” (i.e. the source of the line of sight) has previously been. This means that if you put up walls, the “character” cannot see the entire board until his vision/light sources (depending on day/night mode) have revealed an area, similar to most war games such as C&C, Warcraft, Starcraft, etc. Also, we currently have a feature which prevents movement through (well, actually “into” would be more precise terminology) hard Fog of War, which would of course require an AI component in addition to the UI. Our hope for our next version is to add a movement-blocking feature (which could be independent of the LOS blocking in some cases), so we will likely duplicate the lines created for sight blocking and then perhaps add additional lines for movement blocking. Think of things such as small windows, or elevation (standing at the top of a cliff) that might not block line of sight, but will block movement.
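
A rough sketch of what “duplicate the sight-blocking lines and add extra movement-blocking lines” might look like (all names here are invented, not from our code):

```java
import java.awt.geom.Line2D;

// Hypothetical wall segment with independent sight-blocking and
// movement-blocking flags, so LOS and movement can be configured separately.
public class BlockingLine {

    private final Line2D.Double line;
    private final boolean blocksSight;    // solid walls, closed doors, ...
    private final boolean blocksMovement; // also cliff edges, small windows, ...

    public BlockingLine(Line2D.Double line, boolean blocksSight, boolean blocksMovement) {
        this.line = line;
        this.blocksSight = blocksSight;
        this.blocksMovement = blocksMovement;
    }

    public Line2D.Double getLine() { return line; }
    public boolean blocksSight() { return blocksSight; }
    public boolean blocksMovement() { return blocksMovement; }
}
```

A small window at the top of a cliff would then be something like new BlockingLine(line, false, true): see-through, but impassable.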

It’s a standardized way to organize code into functional units to create a “plug-in” style architecture. Your lib would most likely be compatible as a whole, as long as it returns some interface-defined objects where the implementation could be swapped out. The Eclipse platform runs on top of OSGi, which is why plug-ins are possible.

Gee, sounds like a lot of messing around for you trying to support all of those grids…

Cool, yeah, fog of war is something that I messed around with a bit, but I couldn’t find an elegant polygon-only solution. The problem is the explosion in the number of points when I tried to break off little pieces from the fog polygon. I think tile-based fog of war is the way to go. You can see my different (failed) attempts in the packages straightedge.test.experimental and straightedge.test.experimental.map :-\ I’ll eventually make a tile-based solution that fits with the rest of the API, but it’s not a big concern for me now, and you probably have your own implementation of fog in your existing code, I guess?
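
For what it’s worth, a quick sketch of the tile-based idea, assuming the current vision area is available as a java.awt.Shape (the class and field names are made up for the example):

```java
import java.awt.Shape;

// Hypothetical tile-based fog of war: the number of fog "points" is bounded by
// the tile count, no matter how complicated the vision polygon gets.
public class TileFog {

    private final boolean[][] revealed;
    private final double tileSize;

    public TileFog(int tilesWide, int tilesHigh, double tileSize) {
        this.revealed = new boolean[tilesWide][tilesHigh];
        this.tileSize = tileSize;
    }

    // Mark every tile whose centre falls inside the current vision shape as revealed.
    public void reveal(Shape visionShape) {
        for (int x = 0; x < revealed.length; x++) {
            for (int y = 0; y < revealed[x].length; y++) {
                double centreX = (x + 0.5) * tileSize;
                double centreY = (y + 0.5) * tileSize;
                if (visionShape.contains(centreX, centreY)) {
                    revealed[x][y] = true;
                }
            }
        }
    }

    public boolean isRevealed(int tileX, int tileY) {
        return revealed[tileX][tileY];
    }
}
```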

Hmm, well, there are some interfaces, but for the low-level stuff (which is most of the API) I deliberately tried to avoid interfaces since they make the code harder to follow and look bloated due to the larger number of files. I mean, what’s the point of having a Point interface and then a sub-class PointImpl as well? It’s easier just to have a Point class. And they slow performance (last time I checked, but according to [http://stackoverflow.com/questions/973504/does-java-optimize-method-calls-via-an-interface-which-has-a-single-implementor-m this link] maybe that’s not true anymore). Would you prefer more interfaces so that you can swap bits of my code for your existing code? I would just replace the code manually to keep it simple rather than have loads of interfaces flying around. But that’s just my personal preference and I’m open to being convinced otherwise! :slight_smile:
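
To make the comparison concrete, the two designs side by side (illustrative names only, not actual SE classes):

```java
// Interface-based design: call sites depend on the interface, and a separate
// implementation class is needed to hold the actual data.
interface Point {
    double getX();
    double getY();
}

class PointImpl implements Point {
    private final double x, y;
    PointImpl(double x, double y) { this.x = x; this.y = y; }
    public double getX() { return x; }
    public double getY() { return y; }
}

// Concrete-class design: one file, plain field access, nothing to swap out.
class SimplePoint {
    double x, y;
    SimplePoint(double x, double y) { this.x = x; this.y = y; }
}
```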

Yes, it is. But it’s the price we pay to support a free program for a hobby that has many different styles of Role Playing Games in a “generic” manner. The rendering code is fairly convoluted (lots of if(){} statements to accommodate the various play styles we currently support).

Yes. The process goes something like this in the rendering pipeline:

1. Put down the base layer.
2. Put down an “object” layer (things such as tables, chairs, trees, etc. that the player may not interact with directly).
3. Put down drawings (simple shapes using graphics.drawX).
4. Put down labels (simple text using graphics.drawString).
5. Put down player character (and NPC) tokens.
6. Get the character tokens’ vision limit.
7. Get the light sources on the map and intersect them with the vision area.
8. Determine the vision-blocking shape (an Area object).
9. Draw FOW (i.e. cover with black).
10. Cut out the cached previously revealed area (an Area object) and cover it with partial transparency to simulate “soft FOW”.
11. Cut out the current visible area (i.e. line of sight, an Area object).

Also of note, many of the previous layers also do some clipping so that we don’t try to draw strings, objects, etc. in places covered by hard FOW where they won’t be seen. For example, depending on the server startup options, we optionally render the NPCs’ tokens in soft FOW; it just depends on the host’s startup preferences. Yeah, it’s a ton of work and lots of stuff to keep up with. I “think” I got that ordering right above, as I am doing it off the top of my head.
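
To make the last three steps (draw FOW, cut out the previously revealed area, cut out the current visible area) concrete, here is one way they could be composed with java.awt.geom.Area. This is a minimal sketch, not the actual rendering code; the names and the 0.5f soft-FOW alpha are assumptions:

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.geom.Area;

// Hypothetical sketch of the FOW passes: hard FOW everywhere that has never
// been seen, soft (semi-transparent) FOW where the character has been before
// but cannot currently see.
public class FowPainter {

    public void paintFogOfWar(Graphics2D g, int mapWidth, int mapHeight,
                              Area previouslyRevealed, Area currentVisible) {
        // Start with the whole map covered, then cut out everything ever seen
        // and everything visible right now.
        Area hardFow = new Area(new Rectangle(0, 0, mapWidth, mapHeight));
        hardFow.subtract(previouslyRevealed);
        hardFow.subtract(currentVisible);
        g.setColor(Color.BLACK);
        g.fill(hardFow);

        // Soft FOW: previously revealed, but not in the current line of sight.
        Area softFow = new Area(previouslyRevealed);
        softFow.subtract(currentVisible);
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g.fill(softFow);
        g.setComposite(AlphaComposite.SrcOver); // restore full opacity
    }
}
```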

Well, we may (or may not) end up having to make changes before we implement the SE code. The goal of using OSGi is to allow end users (who know Java) to follow the APIs we create and write their own additions (similar to an Eclipse plugin). For example, there is a concept currently in our software of macros and macro groups, which individual users can edit to hold their own custom “script” code. These are used to do such things as “click this button to roll 2 ten-sided dice and print the results to the chat window”. However, currently the macro and macro group code directly generates buttons and option groups, which ties the implementation directly to Swing JButton and JOptionGroup objects (i.e. this is what the current macro-building code does). But what if a user wished to implement this in the UI as a JTree instead? Currently, they would have to change the macro and macro group classes. Using an OSGi model, we uncouple the “model” (macro and macro group) from the “view” (some type of UI), the UI implementation can be much more easily switched out, and the UI coder does not have to know anything about how the macros are built or work.
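
A tiny sketch of the model/view split being described (none of these class names come from the actual macro code):

```java
import javax.swing.JButton;
import javax.swing.JPanel;

// Hypothetical macro "model": knows nothing about Swing.
class Macro {
    private final String label;
    private final String script;
    Macro(String label, String script) { this.label = label; this.script = script; }
    String getLabel() { return label; }
    String getScript() { return script; }
}

// Hypothetical "view" contract that a plug-in would implement.
interface MacroView {
    void display(Macro macro);
}

// One possible view; a JTree-based view could be registered instead without
// touching the Macro class at all.
class ButtonMacroView implements MacroView {
    private final JPanel panel = new JPanel();

    public void display(Macro macro) {
        JButton button = new JButton(macro.getLabel());
        button.addActionListener(e -> runScript(macro.getScript()));
        panel.add(button);
    }

    private void runScript(String script) {
        // hand the script off to whatever executes macros
    }
}
```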

I seem to recall that your top-level classes implement Shape? If that’s the case, that may very well be just fine for what we need it for without any changes. Again, I have no idea since I have not really looked into it (someone else is on that part of the code).

Very nice way of doing it, thanks for the explanation.

I will look into OSGi some more; it sounds useful. I guess that the straightedge code will be the model only, so there shouldn’t be any issues.

Cool, yeah KPolygon implements Shape and that’s the output of the vision code. Phew!

Btw I haven’t used java.awt.geom.Area much at all, but you may find that [http://tsusiatsoftware.net/jts/main.html JTS] does a better job of calculating intersections, unions, etc. I’ve found it to be very fast and reliable, with great support.
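
For reference, the JTS calls are about as simple as this (a minimal, self-contained example using the older com.vividsolutions package names from the build linked above; newer releases live under org.locationtech.jts):

```java
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.io.ParseException;
import com.vividsolutions.jts.io.WKTReader;

// Two overlapping squares, intersected and unioned with JTS.
public class JtsExample {

    public static void main(String[] args) throws ParseException {
        WKTReader reader = new WKTReader();
        Geometry a = reader.read("POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))");
        Geometry b = reader.read("POLYGON ((5 5, 15 5, 15 15, 5 15, 5 5))");

        Geometry intersection = a.intersection(b); // the overlapping 5x5 square
        Geometry union = a.union(b);               // both squares merged into one polygon

        System.out.println(intersection);
        System.out.println(union);
    }
}
```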

Let me know if there are any edits you or the other devs would like me to make to the SE code.

Can I use this with LibGDX?

Really nice! Just started to learn how to use this and it seems simple enough to learn. I bet it will be worth it; the first test runs perfectly.

Is there a tutorial anywhere on how to use the lighting? It would be extremely useful.

Hi Icass,
The demo code included in the zipped source file shows some different methods of drawing the lights. My preferred method is to make an image with a nice alpha gradient that is fully opaque at the light source and fully transparent at the border of an ellipse which is encompassed by the image. You can see the code for that in straightedge.test.demo.View and Player.
Cheers,
Keith
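
For anyone reading along, the gist of that approach looks something like this (a rough sketch only; the real reference is the demo code in straightedge.test.demo.View and Player mentioned above):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RadialGradientPaint;
import java.awt.image.BufferedImage;

// Hypothetical helper that builds a light image: fully opaque at the centre,
// fully transparent at the edge of the enclosing ellipse.
public class LightImage {

    public static BufferedImage create(int diameter, Color lightColor) {
        BufferedImage image = new BufferedImage(diameter, diameter, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = image.createGraphics();

        float radius = diameter / 2f;
        float[] fractions = {0f, 1f};
        Color[] colors = {
            new Color(lightColor.getRed(), lightColor.getGreen(), lightColor.getBlue(), 255), // opaque centre
            new Color(lightColor.getRed(), lightColor.getGreen(), lightColor.getBlue(), 0)    // transparent edge
        };
        g.setPaint(new RadialGradientPaint(radius, radius, radius, fractions, colors));
        g.fillOval(0, 0, diameter, diameter);
        g.dispose();
        return image;
    }
}
```

One way to use the image is to draw it with the current vision polygon set as the Graphics2D clip, so the light only shows where there is line of sight.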

So how does the light work? Is it raycasting, or do you overlay things and then figure out what the shape will be for each object?
