[LibGDX] Pass OpenGL context to another thread

Hello,

I am designing an abstract clock and event system for people to implement in their games. The clock runs on a separate thread, much like the gdx Timer class; this is due to timing issues with the libGDX game loop, which only updates 60 times per second, meaning the time is inaccurate.

The way I am doing this is basically to let the user define how often the clock updates: if you want a millisecond-accurate clock you can set it to do so, and if you only need it to iterate once per second to update the clock/check for events, you can do that too.

So if the user does this, the events are triggered depending on how the user sets it up.

Say the user wants it to start raining in the game world; they could easily set it up so that rain begins at a random hour, minute and second, and the event would be triggered.
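
Roughly the shape I have in mind, as a sketch only, with placeholder names rather than my actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Rough sketch only -- placeholder names, not the real implementation.
// The clock runs on its own thread; the user picks how often it updates.
public class GameClock implements Runnable {

    public interface ClockEvent {
        void trigger();
    }

    private static class ScheduledEvent {
        final int hour, minute, second;
        final ClockEvent event;
        ScheduledEvent(int h, int m, int s, ClockEvent e) {
            hour = h; minute = m; second = s; event = e;
        }
    }

    private final long tickIntervalMillis;      // 1 for millisecond ticks, 1000 for once a second
    private final List<ScheduledEvent> events = new ArrayList<>();
    private volatile int hour, minute, second;  // current in-game time of day
    private volatile boolean running = true;
    private long accumulatedMillis;

    public GameClock(long tickIntervalMillis) {
        this.tickIntervalMillis = tickIntervalMillis;
    }

    public synchronized void addEvent(int h, int m, int s, ClockEvent e) {
        events.add(new ScheduledEvent(h, m, s, e));
    }

    public int getHour()   { return hour; }
    public int getMinute() { return minute; }
    public int getSecond() { return second; }

    public void stop() { running = false; }

    @Override
    public void run() {
        long last = System.currentTimeMillis();
        while (running) {
            long now = System.currentTimeMillis();
            accumulatedMillis += now - last;
            last = now;
            while (accumulatedMillis >= 1000) {  // advance one in-game second per real second
                accumulatedMillis -= 1000;
                advanceOneSecond();
                checkEvents();
            }
            try {
                Thread.sleep(tickIntervalMillis);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void advanceOneSecond() {
        second++;
        if (second == 60) { second = 0; minute++; }
        if (minute == 60) { minute = 0; hour++; }
        if (hour   == 24) { hour   = 0; }
    }

    private synchronized void checkEvents() {
        for (ScheduledEvent e : events) {
            if (e.hour == hour && e.minute == minute && e.second == second) {
                e.event.trigger();
            }
        }
    }

    public static void main(String[] args) {
        GameClock clock = new GameClock(1);      // millisecond-accurate ticks
        Random random = new Random();
        // start raining at a random hour, minute and second
        clock.addEvent(random.nextInt(24), random.nextInt(60), random.nextInt(60),
                () -> System.out.println("It starts raining"));
        new Thread(clock, "game-clock").start();
    }
}
```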

I think I need to pass the OpenGL context to this thread if any sort of event involves graphics. Am I right here?

I am currently not sure, as I have a test setup that draws the time using a BitmapFont… yet no error is thrown. Does this mean I am OK? I am not quite used to using threads atm; I need practice!

EDIT: Did a quick sprite test, and yeah, I need to pass the OpenGL context. I would like to avoid having this error thrown, for the simple fact that the user implementing it should not be trying to do anything graphics-related outside the libGDX render thread, but if they choose to, I would like it to be possible.

Of course, if there is a sufficient way of doing this on the same thread, please enlighten me; it is probably something simple as hell and I just can't put my finger on it. I tried dividing 1000 by the fps to accumulate milliseconds, but then I am losing out on accuracy, and the whole point of this clock is to allow simple things like stopwatches to be created with basically one line of code, then execute your code when it reaches zero with a simple implementation.

Priority queue.

Google awaits me!

I'm really saying: keep it simple. Why isn't 1/60 of a second good enough resolution for triggering things? Do multiple threads really matter?

It is more than good enough for triggering things; however, running a clock with an accuracy of 1 millisecond is not possible using the main libGDX thread.

It's not just a clock: for, say, a timer or a stopwatch, you would want an accurate readout of the time, for things such as high-score comparison.

Running a real clock is fine, since when we look at the time we do not care much about the milliseconds, mainly the hours, minutes and seconds.

Overall I want to keep this simple but flexible. To be fair, it is pretty terrible design to actually trigger an event that renders something; really, it should create something to be rendered, and that thing is handled in the main thread, if that makes sense.
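
In libGDX terms I suppose the triggered event could just hand the actual work back to the render thread with Gdx.app.postRunnable() instead of touching the GL context itself. A rough sketch of what I mean (the event class is made up):

```java
import com.badlogic.gdx.Gdx;

// Rough sketch (made-up event class): the clock thread never touches OpenGL;
// it only posts a Runnable that the render thread executes before the next frame.
public class RainEvent {

    // called from the clock thread when the event fires
    public void trigger() {
        Gdx.app.postRunnable(() -> {
            // this runs on the main render thread, where the GL context lives,
            // so it is safe to swap sprites, start particle effects, etc.
            Gdx.app.log("RainEvent", "rain started");
        });
    }
}
```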

You're not mentioning anything that needs greater than 1/60 resolution. Have one data structure per thread. It stores (see the sketch below):

  1. trigger time
  2. code that needs to be run
  3. data that (2) needs to do its thing.

And yes, the trigger code should just trigger things. If thread A needs to signal something to be done in thread B… use a different communication channel.

For PQs… a binary heap (potentially modified) is a choice. Other things come to mind, like a skip list.
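
In rough code (names purely illustrative), the stored item and the queue could be as small as this, with java.util.PriorityQueue, which is a binary heap underneath, keeping things ordered by trigger time:

```java
import java.util.PriorityQueue;

// Illustrative sketch of the per-thread structure: (1) trigger time,
// (2) code to run, (3) the data it needs, captured inside the Runnable.
class Trigger implements Comparable<Trigger> {
    final long triggerTime;     // (1) when to fire, in whatever units the thread uses
    final Runnable action;      // (2) + (3) the code plus the data it closes over

    Trigger(long triggerTime, Runnable action) {
        this.triggerTime = triggerTime;
        this.action = action;
    }

    @Override
    public int compareTo(Trigger other) {
        return Long.compare(triggerTime, other.triggerTime);
    }
}

class TriggerQueue {
    // one of these per thread; java.util.PriorityQueue is a binary heap underneath
    private final PriorityQueue<Trigger> queue = new PriorityQueue<>();

    void schedule(long triggerTime, Runnable action) {
        queue.add(new Trigger(triggerTime, action));
    }

    // fire everything whose time has come
    void runDue(long now) {
        while (!queue.isEmpty() && queue.peek().triggerTime <= now) {
            queue.poll().action.run();
        }
    }
}
```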

I think I sort of have a representation of that.

create a clock
libGDX game loop >
    add an event to happen at xx hour, which gets stored in an array inside the clock
    check if event time matches clock time
    either trigger the event or continue to the next iteration

This is a little silly at the moment; I will probably (most likely) add a proper event manager that simply works in conjunction with a given clock.

That means the event manager will be updating on the main thread, and the other thread is purely for timekeeping.
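
Roughly what I mean by that split, as a sketch; placeholder names again, and it assumes the clock from my earlier sketch with its getHour()/getMinute()/getSecond() getters:

```java
import com.badlogic.gdx.ApplicationAdapter;

import java.util.ArrayList;
import java.util.List;

// Sketch of the split (placeholder names): the clock thread only keeps time,
// while events are checked and triggered from the normal libGDX render thread.
public class ClockedGame extends ApplicationAdapter {

    private static class TimedEvent {
        final int hour, minute, second;
        final Runnable action;
        boolean fired;
        TimedEvent(int h, int m, int s, Runnable action) {
            hour = h; minute = m; second = s; this.action = action;
        }
    }

    private final List<TimedEvent> events = new ArrayList<>();
    private GameClock clock;   // the separate-thread clock sketched earlier; purely timekeeping

    @Override
    public void create() {
        clock = new GameClock(1);                   // millisecond ticks
        new Thread(clock, "game-clock").start();
        events.add(new TimedEvent(12, 30, 0, this::startRain));
    }

    @Override
    public void render() {
        // runs on the render thread, so a triggered event may safely touch graphics
        for (TimedEvent e : events) {
            if (!e.fired && e.hour == clock.getHour()
                    && e.minute == clock.getMinute() && e.second == clock.getSecond()) {
                e.fired = true;                     // fire once, not on every matching frame
                e.action.run();
            }
        }
        // ...normal drawing...
    }

    private void startRain() { /* spawn rain particles, swap weather sprites, etc. */ }
}
```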

I have never made something in proper OOP, like an API, so structuring it is a little new to me.

Forget about clocks, libGDX and multiple threads for a moment. You have a simple game and it tracks the current frame count. Pretend it's currently 1000. And we'll pretend we're using a priority queue.

It needs to check whether it needs to run any triggers this frame: peek at the PQ… is its key 1000 (or less)? Grab it and run it (repeat until none are left).

Some code runs this frame and wants to trigger something in 1 sec: add it with key 1000+60 (assuming the 1/60 timer), something else in 1 min with key 1000+60*60, etc.
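
As a quick usage sketch (reusing the TriggerQueue sketched a couple of posts up; names are still just illustrative):

```java
// Usage sketch, reusing the TriggerQueue class sketched earlier in the thread.
public class FrameLoopDemo {
    public static void main(String[] args) {
        TriggerQueue triggers = new TriggerQueue();
        long currentFrame = 1000;                  // pretend we're currently at frame 1000

        // code this frame wants things to happen later; keys are frame numbers
        triggers.schedule(currentFrame + 60,      () -> System.out.println("one second later"));  // 1 s at 60 fps
        triggers.schedule(currentFrame + 60 * 60, () -> System.out.println("one minute later"));

        // then, once per frame in the game loop:
        for (int i = 0; i < 60 * 60; i++) {
            currentFrame++;
            triggers.runDue(currentFrame);         // peek: key <= current frame? run it, repeat
        }
    }
}
```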

Let's say that you have a drag racing game. Timer accuracy needs to be one millisecond for high scores. The naive brute-force way would run the physics simulation 1000 times per second, but you really don't want to pay that performance price. So what do you do? You pick the simplest thing on earth and you cheat a bit. When the car is over the finish line, you calculate what the sub-frame time would have been when the car actually crossed it: you interpolate between the previous position and the current position, and you have a solution that works for almost every problem. Keep it simple.
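
A rough sketch of that cheat in code (made-up names and numbers, just to show the interpolation):

```java
// Rough sketch of the sub-frame interpolation idea (made-up names and numbers).
public final class FinishTime {

    private FinishTime() {}

    /**
     * prevX       car position on the previous physics step (before the line)
     * currX       car position on this physics step (past the line)
     * lineX       position of the finish line
     * prevTimeMs  race time at the previous step, in milliseconds
     * stepMs      length of one physics step, e.g. 1000f / 60f
     */
    public static float estimate(float prevX, float currX, float lineX,
                                 float prevTimeMs, float stepMs) {
        float t = (lineX - prevX) / (currX - prevX);   // how far through the step the crossing was, 0..1
        return prevTimeMs + t * stepMs;
    }

    public static void main(String[] args) {
        // at 60 fps: car at 99.2 m, then 100.9 m; line at 100 m; race time was 8316.7 ms
        float ms = estimate(99.2f, 100.9f, 100f, 8316.7f, 1000f / 60f);
        System.out.println("crossed the line at roughly " + ms + " ms");
    }
}
```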

Thanks for the help guys; I dropped the threaded approach and just integrated it with the game loop already given to me.

Keeping it simple, I have gone for a slightly less accurate approach. I need to do some more reading on priority queues and on calculating the time using interpolation, as said above.