Could you go into more detail about how you go from input to output? It doesn’t make sense to me that you’d be transforming color data by a perspective transformation. Also, what do you mean by “relevant parts of the output buffers”? Are these parts determined by the perspective transform, or is it almost a 1-1 mapping from input changes to output changes?
If all you need to do is multiply input pixels by a matrix to get the output, here is one possible way of doing it:
- Create a framebuffer object (FBO) that renders directly into your output texture (rendering into an FBO is very fast).
- Write a GLSL fragment shader that looks up the color from the input texture and multiplies it by the desired matrix to get the output color. That color is then stored in the FBO’s output texture.
- To actually render an update, bind your input texture and the FBO, bind the special GLSL program, and render a single quad that covers the whole output texture.
- To save the result or access the data in memory, you have to copy it back from the graphics card (e.g. with glReadPixels). There’s a sketch of the whole setup right after this list.
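Here is a rough Java/JOGL sketch of those steps. All the class, field, and method names are mine, and it assumes a current compatibility-profile GL2 context with identity modelview/projection matrices (the defaults); treat it as an outline, not a drop-in implementation (real code should also check compile/link logs and FBO completeness):

```java
import java.nio.ByteBuffer;

import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;

public class ColorMatrixPass {

    // Trivial vertex shader: pass the texture coordinate through.
    private static final String VERT =
        "void main() {\n" +
        "    gl_TexCoord[0] = gl_MultiTexCoord0;\n" +
        "    gl_Position = ftransform();\n" +
        "}\n";

    // Fragment shader: fetch the input texel and multiply by the matrix.
    private static final String FRAG =
        "uniform sampler2D inputTex;\n" +
        "uniform mat4 colorMatrix;\n" +
        "void main() {\n" +
        "    gl_FragColor = colorMatrix * texture2D(inputTex, gl_TexCoord[0].st);\n" +
        "}\n";

    private int fbo, outputTex, program;

    public void init(GL2 gl, int width, int height) {
        // Output texture that the FBO will render into.
        int[] ids = new int[1];
        gl.glGenTextures(1, ids, 0);
        outputTex = ids[0];
        gl.glBindTexture(GL.GL_TEXTURE_2D, outputTex);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST);
        gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0,
                        GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, null);

        // FBO with the output texture as its color attachment.
        gl.glGenFramebuffers(1, ids, 0);
        fbo = ids[0];
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo);
        gl.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0,
                                  GL.GL_TEXTURE_2D, outputTex, 0);
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0);

        // Compile and link the two shaders into a program.
        program = gl.glCreateProgram();
        gl.glAttachShader(program, compile(gl, GL2.GL_VERTEX_SHADER, VERT));
        gl.glAttachShader(program, compile(gl, GL2.GL_FRAGMENT_SHADER, FRAG));
        gl.glLinkProgram(program);
    }

    private static int compile(GL2 gl, int type, String src) {
        int shader = gl.glCreateShader(type);
        gl.glShaderSource(shader, 1, new String[] { src }, null, 0);
        gl.glCompileShader(shader);
        return shader;
    }

    /** One update: draw a quad covering the whole output texture. */
    public void render(GL2 gl, int inputTex, float[] matrix, int width, int height) {
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo);
        gl.glViewport(0, 0, width, height);
        gl.glUseProgram(program);
        // matrix is a 4x4 in column-major (standard OpenGL) order.
        gl.glUniformMatrix4fv(gl.glGetUniformLocation(program, "colorMatrix"),
                              1, false, matrix, 0);
        gl.glUniform1i(gl.glGetUniformLocation(program, "inputTex"), 0);
        gl.glActiveTexture(GL.GL_TEXTURE0);
        gl.glBindTexture(GL.GL_TEXTURE_2D, inputTex);

        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0, 0); gl.glVertex2f(-1, -1);
        gl.glTexCoord2f(1, 0); gl.glVertex2f( 1, -1);
        gl.glTexCoord2f(1, 1); gl.glVertex2f( 1,  1);
        gl.glTexCoord2f(0, 1); gl.glVertex2f(-1,  1);
        gl.glEnd();

        gl.glUseProgram(0);
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0);
    }

    /** The copy-back step: read the result into main memory. */
    public ByteBuffer readBack(GL2 gl, int width, int height) {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo);
        gl.glReadPixels(0, 0, width, height, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, pixels);
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0);
        return pixels;
    }
}
```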
These topics are more complicated than I’ve described here, but this should hopefully help you in your search. If you’re worried about copying performance, you may want to consider storing the image data in OpenGL and doing the visualization from there. JOGL lets you display Swing components around the OpenGL area easily (see the sketch below).
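For example, with JOGL 2 package names (which I’m assuming here), GLJPanel is a lightweight Swing component, so ordinary widgets can sit around the GL area in a normal layout:

```java
import java.awt.BorderLayout;

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JSlider;

import com.jogamp.opengl.GLCapabilities;
import com.jogamp.opengl.GLProfile;
import com.jogamp.opengl.awt.GLJPanel;

public class SwingGlDemo {
    public static void main(String[] args) {
        GLJPanel glPanel = new GLJPanel(new GLCapabilities(GLProfile.getDefault()));
        // Add your GLEventListener to glPanel to do the actual rendering.

        JFrame frame = new JFrame("GL + Swing");
        frame.setLayout(new BorderLayout());
        frame.add(glPanel, BorderLayout.CENTER);          // OpenGL in the middle
        frame.add(new JLabel("controls"), BorderLayout.NORTH);
        frame.add(new JSlider(), BorderLayout.SOUTH);     // e.g. a parameter control
        frame.setSize(640, 480);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```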
Also, whether using OpenGL this way pays off depends greatly on how you need to use the data. If all you need is to update the visuals frequently and only save or copy the result out a few times during the program’s run, the above strategy will work well. If you need in-memory access to the output data for other operations, consider keeping everything in main memory instead. In my experience, copying out a texture takes a few to 10 milliseconds (but it’s been a while); I don’t know how long the transforms would take on the CPU, so it’s worth benchmarking both.
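If you want to measure the CPU side, the equivalent per-pixel work is just a 4x4 matrix multiply over an RGBA buffer; something like the sketch below (row-major matrix, names mine) is what you’d time against the GPU pass plus the glReadPixels copy. The numbers are very machine-dependent, so measure on your own hardware.

```java
public final class CpuColorMatrix {
    /** Multiply every RGBA pixel in-place by a row-major 4x4 matrix m. */
    public static void apply(byte[] rgba, float[] m) {
        for (int i = 0; i < rgba.length; i += 4) {
            // Unpack to normalized floats (bytes are signed in Java).
            float r = (rgba[i]     & 0xFF) / 255f;
            float g = (rgba[i + 1] & 0xFF) / 255f;
            float b = (rgba[i + 2] & 0xFF) / 255f;
            float a = (rgba[i + 3] & 0xFF) / 255f;
            rgba[i]     = clamp(m[0]  * r + m[1]  * g + m[2]  * b + m[3]  * a);
            rgba[i + 1] = clamp(m[4]  * r + m[5]  * g + m[6]  * b + m[7]  * a);
            rgba[i + 2] = clamp(m[8]  * r + m[9]  * g + m[10] * b + m[11] * a);
            rgba[i + 3] = clamp(m[12] * r + m[13] * g + m[14] * b + m[15] * a);
        }
    }

    /** Clamp a normalized float back to an unsigned byte. */
    private static byte clamp(float v) {
        int x = Math.round(v * 255f);
        return (byte) Math.max(0, Math.min(255, x));
    }
}
```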
HTH