I’m coding up the map part of my game engine ATM, which brings me to this question.
Is this a good way to handle collision detection with the environment (like walls, floors, etc.) in a platform scrolling game:
I create an image for my level in the game, and I split it up into tiles. The game loads the tiles and then renders the ones that are on the screen. This is fine and dandy.
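To make the rendering part concrete, something like this is what I have in mind; it's only a sketch, and names like `TILE_SIZE`, `tiles`, and the camera offsets are placeholders I'm assuming for my engine, not finished code:

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;

class TileRenderer {
    static final int TILE_SIZE = 32; // assumed tile size

    // Draw only the tiles that intersect the camera's view of the level.
    void renderVisibleTiles(Graphics g, BufferedImage[][] tiles,
                            int cameraX, int cameraY, int screenW, int screenH) {
        int firstCol = Math.max(0, cameraX / TILE_SIZE);
        int firstRow = Math.max(0, cameraY / TILE_SIZE);
        int lastCol  = Math.min(tiles[0].length - 1, (cameraX + screenW) / TILE_SIZE);
        int lastRow  = Math.min(tiles.length - 1,    (cameraY + screenH) / TILE_SIZE);

        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                // Offset each tile by the camera so only the visible window scrolls.
                g.drawImage(tiles[row][col],
                            col * TILE_SIZE - cameraX,
                            row * TILE_SIZE - cameraY,
                            null);
            }
        }
    }
}
```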
For collision, I’d make a special “collision channel” (like an alpha channel) based on black and white values that corresponds to the background image, and this is also split into tiles. For collision detection with the character(s) and sprites, I get the tile they are on and then test to see if they are touching any of the non-black pixels of the “collision channel.”
The advantage of using this system is that I could define volumes that slow motion (like water, for example). Any value less than white would be a “slowing down” (friction) area.
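Roughly, the per-pixel test I’m imagining would look something like this (assuming the collision channel is loaded as grayscale BufferedImages per tile, world coordinates map straight onto pixels, and all the names are placeholders):

```java
import java.awt.image.BufferedImage;

class CollisionChannel {
    static final int TILE_SIZE = 32; // assumed tile size

    BufferedImage[][] collisionTiles; // one grayscale image per background tile

    CollisionChannel(BufferedImage[][] collisionTiles) {
        this.collisionTiles = collisionTiles;
    }

    /** 0.0 for black (free space), 1.0 for white (solid), in between for slowing areas. */
    double sample(int worldX, int worldY) {
        int col = worldX / TILE_SIZE;
        int row = worldY / TILE_SIZE;
        BufferedImage tile = collisionTiles[row][col];

        int rgb  = tile.getRGB(worldX % TILE_SIZE, worldY % TILE_SIZE);
        int gray = rgb & 0xFF; // channel is grayscale, so any color component works
        return gray / 255.0;
    }

    boolean isSolid(int worldX, int worldY) {
        return sample(worldX, worldY) >= 1.0;  // pure white = wall/floor
    }

    boolean isSlowing(int worldX, int worldY) {
        double v = sample(worldX, worldY);
        return v > 0.0 && v < 1.0;             // gray = friction volume (e.g. water)
    }
}
```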
Does this make sense? Will it be too processor intensive? Is there a better way?
Edit: I’ve been giving this more thought, and I realized that it would be inefficient to scan the “collision channel” image for the gray->white pixels every game loop.
Any way to cache the data? Would storing the colors in custom objects that take a 2D array of ints (x, y coords) and a Color object, and cycling through the array, be good enough, do you think?
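Something along these lines is what I mean by caching: flatten the collision channel into a plain 2D array of values once at load time, so the per-frame check is just an array lookup instead of an image scan or a pile of Color objects. Again, this is only a sketch and the names are placeholders:

```java
import java.awt.image.BufferedImage;

class CollisionCache {
    final byte[][] values; // 0 = black (free), 255 = white (solid), in between = friction

    // Read the grayscale collision channel once, up front.
    CollisionCache(BufferedImage collisionChannel) {
        int w = collisionChannel.getWidth();
        int h = collisionChannel.getHeight();
        values = new byte[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                values[y][x] = (byte) (collisionChannel.getRGB(x, y) & 0xFF);
            }
        }
    }

    // Constant-time lookup used by the game loop.
    int valueAt(int x, int y) {
        return values[y][x] & 0xFF; // mask back to 0..255
    }
}
```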