GameCore: An exercise in design

I’m feeling a bit out-of-sorts so take this post with a truck load of salt…

So what is it exactly about someone wanting to improve his brain that some of you find so confusing? ???

I mean, this isn’t hurting you in any way, or am I about to end the world by trying to think through something? At the worst my head might explode and leave a stain on the wall. 8) At best I end up with something that I like to use and, God forbid, someone else might find it useful. :stuck_out_tongue:

Not a bad idea, thanks for the suggestion.

Here’s another 2p! Now you have 4p:

I suggest you divorce the sprites from the entities. An entity can be represented by multiple sprites or even be a compound entity made up of smaller entities (boss gidrah). A particle is not an entity as it cannot collide with other particles or entities, and it can be represented by a single sprite.

An entity can have a simple fixed radius or it can have several radii which it can determine from the animation frames of the sprites it is composed from.
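A minimal sketch of what that split might look like; the class and method names here are my own assumptions, not anything from GameCore:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a sprite is just a visual part, positioned
// relative to the entity that owns it.
class Sprite {
    float offsetX, offsetY; // position relative to the owning entity
    Sprite(float ox, float oy) { offsetX = ox; offsetY = oy; }
}

// The entity carries position and collision data; it may own several
// sprites, and a compound entity (a boss gidrah) may own child entities.
class Entity {
    float x, y;      // world position
    float radius;    // simple fixed collision radius
    final List<Sprite> sprites = new ArrayList<>();
    final List<Entity> children = new ArrayList<>();

    void addSprite(Sprite s) { sprites.add(s); }

    // Circle-vs-circle collision using each entity's radius.
    boolean collides(Entity other) {
        float dx = x - other.x, dy = y - other.y;
        float r = radius + other.radius;
        return dx * dx + dy * dy <= r * r;
    }
}
```

The point being that collision and game logic live on `Entity`, while `Sprite` is purely visual, so one entity can swap between one sprite, many sprites, or per-frame radii without the rest of the code caring.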

Cas :slight_smile:

Could you elaborate a bit more on this idea? I see the advantage of being able to tie several objects into a compound object so you could do things like “move this whole group of sprites together”. Like a boss that has ten sprites in his tail moving back and forth across the screen. What I’m having difficulty understanding is: what is an entity? Since it has to be represented on screen via a sprite, why and how would separating them make a difference? I’m interested in the idea, I just don’t understand it.

So this would be for collision detection of some sort I assume, or did you have some other idea in mind?

[quote]What I’m having difficulty understanding is: what is an entity? Since it has to be represented on screen via a sprite, why and how would separating them make a difference?
[/quote]
Typically an entity is a single ‘thing’ which you interact with in a game. So for something like space invaders you’d have a different entity type for each alien, probably one for the player and maybe even for the projectiles as well. Something like a boss monster is only a single entity, but its visual display is composed out of multiple sprites.

Side note: everything in S-Type is an entity. That includes the player, enemies, projectiles, lights, level geometry, switches and triggers. They all get treated in the same generic way. Having tried this, I can say it does complicate things somewhat, but it also means you can write lots of general-purpose code and use it all over the place.
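The “everything is an entity” approach might be sketched like this; the interface and class names are illustrative assumptions on my part, not actual S-Type code:

```java
import java.util.ArrayList;
import java.util.List;

// One shared interface means the main loop never needs to know
// whether it is ticking a projectile, a trigger, or a light.
interface GameObject {
    void update(float dt);
}

class Projectile implements GameObject {
    float x;
    float speed = 200f;
    public void update(float dt) { x += speed * dt; }
}

class Trigger implements GameObject {
    boolean fired;
    public void update(float dt) { fired = true; }
}

class World {
    final List<GameObject> objects = new ArrayList<>();

    // One generic pass over every kind of object in the game.
    void update(float dt) {
        for (GameObject o : objects) o.update(dt);
    }
}
```

The complication Cas mentions shows up when objects need type-specific interaction, but the generic loop itself stays trivially reusable.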

Hmm, maybe I’m starting to get it. So really an entity is a totally custom thing per program. There really wouldn’t be any good way for me to create a default set of entity classes. You might, for instance, use them to implement automatic behavior like bouncing back and forth across the screen or something.

So what I have is fine and if a person needs these entities they can simply write them and assign my sprite types to them internal to the entity classes.

I think I will add a “super-sprite” to the diagram that is a collection of GameCoreSprite objects that all function relative to the super-sprite’s location.
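A rough sketch of how a super-sprite could work, assuming children store positions relative to the group (the names beyond GameCoreSprite are my own placeholders):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the real GameCoreSprite: each part keeps a position
// relative to the super-sprite that owns it.
class GameCoreSprite {
    float relX, relY;
    GameCoreSprite(float rx, float ry) { relX = rx; relY = ry; }
}

// Moving the super-sprite moves every child with it, since a child's
// screen position is always group position + relative offset.
class SuperSprite {
    float x, y; // group position on screen
    final List<GameCoreSprite> parts = new ArrayList<>();

    float partScreenX(GameCoreSprite s) { return x + s.relX; }
    float partScreenY(GameCoreSprite s) { return y + s.relY; }
}
```

That covers the “move this whole group of sprites together” case: the boss’s ten tail sprites only store offsets, and sweeping the boss across the screen is a single position change.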

This is what I’d been assuming would happen all along ;D.

I’m considering getting rid of the GameCoreRenderer interface which currently would be called from the rendering loop to put the sprites on the screen.

The replacement for it would be the ability to assign a rendering class directly to each sprite, and this would get called instead. So there would be a default set of rendering classes that don’t do anything special; they just get the sprites on the screen. If you had special rendering needs for a particular class (not the keyword) of sprites, you could write it yourself and plug it in.

RenderOpaque, RenderAlphaPercent, RenderRedChannel, etc…
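Here’s one way the pluggable per-sprite renderer could be shaped; everything here is a guess at the design, not existing GameCore code (render returns a string only so the sketch is checkable without a real graphics context):

```java
// Each sprite carries its own renderer instead of going through a
// single central GameCoreRenderer.
interface SpriteRenderer {
    String render(RenderableSprite s);
}

class RenderableSprite {
    float alpha = 1f;
    SpriteRenderer renderer; // plugged in per sprite

    String draw() { return renderer.render(this); }
}

// Default renderers that "don't do anything special":
class RenderOpaque implements SpriteRenderer {
    public String render(RenderableSprite s) { return "opaque"; }
}

class RenderAlphaPercent implements SpriteRenderer {
    public String render(RenderableSprite s) {
        return "alpha " + (int) (s.alpha * 100) + "%";
    }
}
```

Swapping a sprite’s behaviour is then just assigning a different renderer object, which is the plug-in point the post describes.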

Basically I have achieved my original goal of understanding how to abstract out the rendering layer. This has shown me that I will lose a lot of functionality from OGL because the code has to pick a single way in which to render something. I still don’t want to be tied to a specific implementation like JOGL, J2D or LWJGL.

So my modified goal is to have a framework that allows the programmer to choose to be tied to a rendering implementation. We would end up with a “generic” framework for the programmer to start with, and then they can add to it in order to take advantage of a specific rendering library.

I’m also considering removing the ability to change the coordinate system and the programmer just has to know what system the renderer they choose uses.

What do you all think of this new approach?

Cas,

did you use quads to render the sprites in AF or just the GL drawPixels method? I’m wondering what would be a better approach. It seems simple to do the drawPixels thing, but then you can’t rotate or do other things.

Quads. Much quicker and more flexible.

Cas :slight_smile: