Context pooling feasibility

Hi, first post here. I'm interested in GLSL because I need large performance gains for a goal seeker solving for 20 unknowns. Part of the process really needs to be brute-forced (the first ten unknowns), and as you might imagine this is a large bottleneck. java.util.concurrent thread pools are nice for multi-core CPUs, and I have currently implemented the solver with them, but this is not cost-effective: T2 systems start at 15K. CUDA seems a little too new, NVIDIA-specific, and not integrated with Java.
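For reference, here is a minimal sketch of the kind of CPU-side brute force I mean. The objective function and the single-unknown search range are toy placeholders (the real problem has ten brute-forced unknowns); only the java.util.concurrent usage is the point.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BruteForceSearch {
    // Toy objective with a known minimum at x = 3.7; stand-in for the real solver.
    static double objective(double x) {
        return (x - 3.7) * (x - 3.7);
    }

    // Split the candidate range [0, 10) into one chunk per worker thread,
    // brute-force each chunk in parallel, then reduce to the overall best.
    public static double[] search() throws Exception {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<double[]>> results = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            final double lo = 10.0 * t / threads;
            final double hi = 10.0 * (t + 1) / threads;
            results.add(pool.submit(() -> {
                double bestX = lo, bestF = Double.MAX_VALUE;
                for (double x = lo; x < hi; x += 1e-4) {
                    double f = objective(x);
                    if (f < bestF) { bestF = f; bestX = x; }
                }
                return new double[] { bestX, bestF };
            }));
        }
        double bestX = 0, bestF = Double.MAX_VALUE;
        for (Future<double[]> r : results) {
            double[] v = r.get();
            if (v[1] < bestF) { bestF = v[1]; bestX = v[0]; }
        }
        pool.shutdown();
        return new double[] { bestX, bestF };
    }

    public static void main(String[] args) throws Exception {
        System.out.println("best x = " + search()[0]);
    }
}
```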

I have spent about 2 weeks reading, re-coding a couple of classes in GLSL, and trying to plan out how much could be moved off. Is it going to be possible to put multiple graphics cards in a computer and build a context pool that the thread pools can use? I know you can buy Power Macs with multiple graphics cards, but am I being just way too greedy? BTW, insane is already a given.

While just pulling anything off is no minor feat, I was just wondering what my options might be.

Well, it looks like I was just seduced by seeing the AWTGraphicsDevice class in the Java docs, and references to devices in the Javadoc of GLDrawableFactory. FYI, what I was looking for was something like:

public class GLContextManager {
    javax.media.opengl.GLContext[] contexts; // where each context is a different graphics card

    public GLContextManager() {
        java.awt.GraphicsDevice[] devices = java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();

        contexts = new javax.media.opengl.GLContext[devices.length]; // allocate before filling
        javax.media.opengl.GLDrawableFactory factory = javax.media.opengl.GLDrawableFactory.getFactory();
        for (int i = 0; i < devices.length; i++) {
            contexts[i] = factory.getGLDrawable(devices[i], null, null).createContext(null);
        }
    }
}

This would be followed by a method that lets a member of a thread pool request a context. That thread would have to makeCurrent() (and possibly wait), and then put release() in a finally block. No return to the pool would actually be required; it would just hand out contexts in round-robin fashion.
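To make the round-robin hand-out concrete, here is a minimal sketch. RoundRobinPool is a made-up name, not JOGL API; makeCurrent() and release() in the usage comment are the real JOGL GLContext methods. The pool is generic so it compiles standalone.

```java
// Hands out shared items round-robin; no "return to pool" step is needed,
// since items are rotated through rather than checked out.
public class RoundRobinPool<T> {
    private final T[] items;
    private int next = 0;

    public RoundRobinPool(T[] items) {
        this.items = items;
    }

    // synchronized so concurrent worker threads each get the next item in turn
    public synchronized T acquire() {
        T item = items[next];
        next = (next + 1) % items.length;
        return item;
    }

    // Intended worker-thread usage with JOGL (sketch only):
    //
    //   GLContext ctx = pool.acquire();
    //   if (ctx.makeCurrent() != GLContext.CONTEXT_NOT_CURRENT) { // may block or fail
    //       try {
    //           // ... run the GLSL pass on this context ...
    //       } finally {
    //           ctx.release();
    //       }
    //   }
}
```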

Another problem with this is that multiple screen devices could be connected to the same graphics card. I guess I will just have to get a single card with the highest number of fragment processors possible.

I’m a bit of an OpenGL noob; I know what GLSL is, but I’ve never used it. What I don’t get is what you’re trying to do… multipass rendering in real time?

Simon,
Not to crowd gamers / renderers, but the price/performance ratio of fragment processors is grabbing the attention of some who have no interest in graphics. There is a site for General-Purpose Computation Using Graphics Hardware, http://www.gpgpu.org/ . There you can find people, mostly researchers, building many different kinds of systems, and universities building supercomputers out of video cards. NVIDIA has come up with a new language, CUDA, http://developer.nvidia.com/object/cuda.html , which eliminates the need to fake out the cards by passing large amounts of data disguised as textures and building an image to return that is not really an image.

I didn’t really answer your question, but the domain of my application is finance / investing.

Cool… Good luck!