Hey guys. I’m having a hell of a time computing the slope of an edge in a BufferedImage. It’s a tough problem, and I don’t have the math skills to get it completely right without a lot of effort.
So I’m asking for some insight.
Basically, I have one BufferedImage that collides with another at a certain velocity, and I want it to reflect off the second BufferedImage at the correct angle when it hits.
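For what it’s worth, the bounce itself isn’t the part I’m stuck on: once I have a slope I can turn it into a unit normal and reflect the velocity with the standard formula v' = v - 2(v·n)n. Something like this rough sketch (plain doubles here, not my actual classes):

```java
// Reflect a velocity (vx, vy) about a unit surface normal (nx, ny): v' = v - 2(v.n)n
static double[] reflect(double vx, double vy, double nx, double ny) {
    double dot = vx * nx + vy * ny;
    return new double[] { vx - 2 * dot * nx, vy - 2 * dot * ny };
}
```

(For a slope m the edge direction is (1, m), so a normal is (-m, 1) normalized, flipped so it points out of the material.) The hard part is getting that slope in the first place.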
The collision detection gives me a single pixel of contact (the first place the two images touch), so I need to scan the pixels surrounding that point and work out the slope of the edge there. In concept this doesn’t seem too bad, but there are so many fringe cases that it’s getting pretty difficult.
And out of a grid of pixels, how am I supposed to find the slope, even ignoring the fringe cases? Each pixel is the center of a 3x3 grid, completely surrounded by eight other pixels. Looking at just those 9 pixels gives only 4 real possibilities: up/down, left/right, and the two diagonals. Naturally that isn’t precise enough for a real slope, so I have to expand the check to the surrounding 5x5 grid, 25 pixels in total, and from there I’m not quite sure where to go. Unless the edge has exactly one of those four slopes, the pixels along it won’t line up perfectly. I can obviously compute slope as rise over run, but that still runs into problems when the edge curves.
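To make that concrete, here’s roughly the direction I’ve been leaning: collect every edge pixel in the window around the collision point (a solid pixel with at least one empty neighbour) and run a least-squares line fit through them, which at least averages out the stair-stepping. The isSolid() alpha test below is just a stand-in for however solidity actually gets decided:

```java
import java.awt.image.BufferedImage;

public class EdgeSlope {

    // Stand-in solidity test: any pixel with non-zero alpha counts as part of
    // the image; anything out of bounds counts as empty.
    static boolean isSolid(BufferedImage img, int x, int y) {
        if (x < 0 || y < 0 || x >= img.getWidth() || y >= img.getHeight()) return false;
        return ((img.getRGB(x, y) >>> 24) & 0xFF) != 0;
    }

    // A solid pixel with at least one empty 4-neighbour is an edge pixel.
    static boolean isEdge(BufferedImage img, int x, int y) {
        return isSolid(img, x, y)
                && (!isSolid(img, x + 1, y) || !isSolid(img, x - 1, y)
                 || !isSolid(img, x, y + 1) || !isSolid(img, x, y - 1));
    }

    // Least-squares slope (dy/dx) of the edge pixels in a (2r+1)x(2r+1) window
    // centered on the collision pixel (cx, cy). Returns POSITIVE_INFINITY for a
    // vertical edge and NaN when there are too few edge pixels to fit a line
    // (a lone floating pixel, or a window buried completely in solid material).
    static double slopeAt(BufferedImage img, int cx, int cy, int r) {
        double sumX = 0, sumY = 0, sumXX = 0, sumXY = 0;
        int n = 0;
        for (int dy = -r; dy <= r; dy++) {
            for (int dx = -r; dx <= r; dx++) {
                if (isEdge(img, cx + dx, cy + dy)) {
                    sumX += dx;  sumY += dy;
                    sumXX += dx * dx;  sumXY += dx * dy;
                    n++;
                }
            }
        }
        if (n < 2) return Double.NaN;                   // not enough edge to work with
        double denom = n * sumXX - sumX * sumX;
        if (Math.abs(denom) < 1e-9) return Double.POSITIVE_INFINITY;  // vertical edge
        return (n * sumXY - sumX * sumY) / denom;       // rise over run, averaged
    }
}
```

Usage would just be something like double m = slopeAt(terrain, hitX, hitY, 2); for the 5x5 case, with hitX/hitY being the collision pixel, but I’m not convinced this holds up.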
So I guess my real question is: how many pixels out should I scan, and how do I know when I’ve found the real edge? And what do I do in the fringe cases, like a lone floating pixel, or a pixel that’s completely surrounded by dirt?
Thanks.