I’m considering getting rid of the GameCoreRenderer interface, which the rendering loop currently calls to put the sprites on the screen.
The replacement would be the ability to assign a rendering class directly to each sprite, and this would get called instead. There would be a default set of rendering classes that don’t do anything special; they just get the sprites on the screen. If you had special rendering needs for a particular class (not the keyword) of sprites, you could write your own and plug it in.
RenderOpaque, RenderAlphaPercent, RenderRedChannel, etc…
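To make the idea concrete, here’s a minimal sketch of what a per-sprite renderer might look like. Everything here (the `SpriteRenderer` interface, the method signatures, the `Sprite` class shape) is my own invention for illustration, not the framework’s actual API; a real renderer would issue draw calls rather than return strings.

```java
// Hedged sketch of the per-sprite renderer idea; all names are assumptions.
interface SpriteRenderer {
    // In a real framework this would issue draw calls; here it returns
    // a description so the sketch is easy to exercise.
    String render(Sprite sprite);
}

// Default renderer: nothing special, just gets the sprite on screen.
class RenderOpaque implements SpriteRenderer {
    public String render(Sprite sprite) {
        return "drawing " + sprite.name + " opaque";
    }
}

// A custom renderer the programmer could write and plug in.
class RenderAlphaPercent implements SpriteRenderer {
    private final int alpha;
    RenderAlphaPercent(int alpha) { this.alpha = alpha; }
    public String render(Sprite sprite) {
        return "drawing " + sprite.name + " at " + alpha + "% alpha";
    }
}

class Sprite {
    final String name;
    private SpriteRenderer renderer = new RenderOpaque(); // sensible default
    Sprite(String name) { this.name = name; }
    void setRenderer(SpriteRenderer r) { renderer = r; }
    // The render loop would call this per sprite instead of GameCoreRenderer.
    String draw() { return renderer.render(this); }
}
```

The render loop just iterates the sprites and calls `draw()` on each; a sprite that never had a renderer assigned falls back to the opaque default.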
Basically, I have achieved my original goal of understanding how to abstract out the rendering layer. This has shown me that I will lose a lot of functionality from OGL, because the code has to pick a single way in which to render something. I still don’t want to be tied to a specific implementation like JOGL, J2D, or LWJGL.
So my modified goal is to have a framework that allows the programmer to choose whether to be tied to a rendering implementation. We would end up with a “generic” framework for the programmer to start with, which they can then extend in order to take advantage of a specific rendering library.
I’m also considering removing the ability to change the coordinate system; the programmer would just have to know what coordinate system their chosen renderer uses.
What do you all think of this new approach?