Hello,
I am designing an abstract clock and event system for people to implement in their games. The clock runs on a separate thread, much like the gdx Timer class, because of timing issues with the libGDX game loop: it only updates 60 times per second, so the time ends up inaccurate.
The way I am doing this is basically to let the user define how often the clock updates: if you want a millisecond-accurate clock you can set it to tick every millisecond, and if you only need it to iterate once per second to update the clock and check for events, you can do that instead.
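Roughly what I have in mind for the clock thread (just a sketch, not the final code; GameClock and its fields are placeholder names of my own):

```java
public class GameClock implements Runnable {
    private final long tickMillis;        // e.g. 1 for ms accuracy, 1000 for once per second
    private volatile boolean running = true;
    private long elapsedMillis;

    public GameClock(long tickMillis) {
        this.tickMillis = tickMillis;
    }

    @Override
    public void run() {
        while (running) {
            try {
                Thread.sleep(tickMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            // note: sleep() drifts a little, so a real version would
            // measure System.nanoTime() rather than assume exact ticks
            elapsedMillis += tickMillis;
            // update the clock digits and check scheduled events here
        }
    }

    public void stop() {
        running = false;
    }
}
```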
The events are then triggered according to however the user sets them up. Say the user wants it to start raining in the game world: they could easily schedule the rain to start at a random hour, minute and second, and the event would be triggered when the clock reaches that time.
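For instance, something like this (hypothetical usage; schedule(), ClockEvent and GameWorld are placeholder names I made up, not an existing API):

```java
import java.util.Random;

public void scheduleRandomRain(final GameClock clock, final GameWorld world) {
    Random random = new Random();
    int hour   = random.nextInt(24); // 0-23
    int minute = random.nextInt(60); // 0-59
    int second = random.nextInt(60); // 0-59

    // fires once the clock reaches the chosen in-game time
    clock.schedule(hour, minute, second, new ClockEvent() {
        @Override
        public void trigger() {
            world.startRain();
        }
    });
}
```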
I think I need to pass the OpenGL context to this thread if any sort of event involves graphics. Am I right here?
I am currently not sure, as I have a test setup that draws the time using a BitmapFont… yet no error is thrown. Does this mean I am OK? I am not quite used to using threads atm, need practice!
EDIT: After a quick sprite test: yeah, I need to pass the OpenGL context. I would like to avoid having this error thrown, for the simple fact that the user implementing this should not be trying to do anything graphics-related outside the libGDX render thread, but if they choose to, I would like it to be possible.
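One way I could imagine making that possible, as a sketch (the ClockEvent wiring is my placeholder API, but Gdx.app.postRunnable is the real libGDX call): have the event hand its graphics work back to the render thread instead of touching GL from the clock thread.

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;

clock.schedule(hour, minute, second, new ClockEvent() {
    @Override
    public void trigger() {
        // we are on the clock thread here, so hand graphics work
        // back to the render thread rather than touching GL directly
        Gdx.app.postRunnable(new Runnable() {
            @Override
            public void run() {
                // safe: this runs on the thread that owns the GL context
                Texture rain = new Texture("rain.png");
            }
        });
    }
});
```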
Of course, if there is a decent way of doing this on the same thread, please enlighten me; it is probably something simple as hell and I just can't put my finger on it. I tried dividing 1000 by the fps and accumulating that many milliseconds per frame, but then I am losing out on accuracy, and the whole point of this clock is to allow simple things like stopwatches to be created with basically one line of code, then execute your code when it reaches zero with a simple implementation.
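This is the kind of usage I am aiming for (stopwatch() is a placeholder name for the eventual API):

```java
// counts down 30 seconds on the clock thread, then runs the callback
clock.stopwatch(30, new ClockEvent() {
    @Override
    public void trigger() {
        Gdx.app.log("Stopwatch", "Time's up!");
    }
});
```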