Hiring for a small job.

Looking for someone who is good with OpenGL/LWJGL. Java, of course :slight_smile:
If you know CUDA (jcuda), so much the better, but it's not required.

It should hopefully be pretty straightforward for someone experienced.
It's all 2D and will probably involve Frame Buffer Objects and textures, but I'll defer to your expertise on the best way to implement it.

I can pay via paypal or check in USD.

Email if interested: dimecoin@gmail.com

Can you give any hints about what the job entails, or what your budget is?

Sure. In terms of budget, I don't know. I was looking for an estimate from whoever was doing it.

In terms of what needs to be done, I need to update the screen per pixel.
See this: http://www.powerengine2a.com/images/screenshot_290.png
What I'm doing now is pretty slow.

From what Iā€™ve read:
A pretty standard way to do this is to create a Frame Buffer Object, render that to a texture "offline", then display the texture.
Another, much faster, approach is to use a pixel shader or CUDA to do all the manipulation on the GPU.
Or some third option I don't know about; like I said in my first post, I'm willing to defer to the expert on whatever he thinks would work best.

In terms of performance, I'm looking for at least 60 fps at 1024x768 on moderate hardware.

In terms of deliverables, I'm looking for a "screen" class that has an easy way to set pixels, like setPixel(x, y, color), and can render the texture at a resolution independent of the display (i.e. internally use a 1024x768 texture for speed, but display it at 1280x800 [or whatever the set resolution is], either scaled or centered). It should also have some sort of update/render methods where it does its work, so that I can control how many updates it gets. If it starts lagging, I can start skipping updates and still render the last good texture generated, then resume updating once it catches back up. Or manually call an update after a setPixel.
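For illustration, one possible shape for that class. The names Screen, setPixel, update, and render come from the description above; the int-ARGB backing array, the getPixel accessor, and the dirty flag are my assumptions, and the actual texture upload and drawing are left as stubs:

```java
// A minimal sketch of the requested "screen" class, assuming a plain
// int-per-pixel backing array on the CPU. update() only regenerates the
// texture when something changed, so the caller can skip updates under
// load and keep rendering the last good texture.
public class Screen {
    private final int width;
    private final int height;
    private final int[] pixels;   // row-major, one int per pixel
    private boolean dirty;        // set by setPixel, cleared by update()

    public Screen(int width, int height) {
        this.width = width;
        this.height = height;
        this.pixels = new int[width * height];
    }

    public void setPixel(int x, int y, int color) {
        pixels[y * width + x] = color;
        dirty = true;
    }

    public int getPixel(int x, int y) {
        return pixels[y * width + x];
    }

    public void update() {
        if (!dirty) return;
        // upload 'pixels' to the texture here (GL, Java2D, etc.)
        dirty = false;
    }

    public void render() {
        // draw the last-uploaded texture, scaled or centered
        // to the current display resolution
    }
}
```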

What kind of algorithm is used in your calculations? You can't just convert any algorithm to GPGPU/CUDA/OpenCL, so we need to know a bit more about your project.

The more I think about it, CUDA/OpenCL is probably more than I need. Standard OpenGL will hopefully be fast enough.

Same goes for OpenGL.

So, again, what's the algorithm like? What are you calculating?

I smell confusion.

Cas :slight_smile:

Have an object that stores/renders a texture; store it and update/render it any way you want, as long as it's fast. Have an external method that allows modifying/getting that data. How the values passed into the modifying function are generated shouldn't matter. Just assume it's random data.

Yeah, assume that it'll call the modify function on one or more random x/y pairs to set them to a random color. The update and render functions will be called at least 60 times per second. From your object's point of view, there is no way it'll be able to assume any patterns in the data coming in. The only thing it could reliably assume is that x/y will be in a valid range and the color will be a valid color value (in whatever way you wish to store that data).

An example:

Create an 800x600 texture, set all pixels to black.

setPixel(100, 200, red) is called.
The texture is updated(). It's all black, except pixel x=100/y=200, which is red.
The texture is rendered() and displayed to the user.

setPixel(200, 100, blue) is called.
The texture is updated(). It's all black, except pixel x=100/y=200 is red and pixel x=200/y=100 is blue.
The texture is rendered() and displayed to the user.

setPixel(100, 200, black) is called.
setPixel(200, 100, black) is called.
The texture is updated(). It's all black again.
The texture is rendered() and displayed to the user.

If you really want 'random' pixel access, nothing beats CPU performance:


// 'rgb' is an int[w*h] backing array; 'w' is the image width in pixels
public void setPixel(int x, int y, int color)
{
    this.rgb[y * w + x] = color;
}

It's all this talk of attempting to set individual pixel data with individual method calls that seems a bit suspicious to me…

Cas :slight_smile:

Here is what I've got:


GL11.glBegin(GL11.GL_QUADS);

for (int x = 0; x < width / pixelSize; x++) {
    for (int y = 0; y < height / pixelSize; y++) {
        int x1 = x * pixelSize;
        int y1 = y * pixelSize;
        GL11.glVertex2f(x1, y1);
        GL11.glVertex2f(x1 + pixelSize, y1);
        GL11.glVertex2f(x1 + pixelSize, y1 + pixelSize);
        GL11.glVertex2f(x1, y1 + pixelSize);
    }
}

GL11.glEnd();

Roughly 97% of the time is spent in the GL11.glVertex2f calls.
I'm drawing directly to the screen.

I'm hoping it would be faster to generate a texture from that data and then just draw the entire texture.
Rendering one texture to the screen should be far faster than all those calls.

The bottleneck will then move to the generation of the texture, which is fine. That is only updating it, not rendering it, and it can always be throttled.

Sorry, but I don't understand how this helps. My bottleneck is in rendering, not in accessing or manipulating the data.
The vast majority of time is spent in GL11.glVertex2f.

That is possibly the most pathologically worst way of achieving this result without actually writing it in Ruby! See Riven's answer. Basically: allocate a direct ByteBuffer to hold the raw pixel data. Poke directly into that. Every frame, call glTexImage2D() to upload your data to OpenGL. Draw a single quad. Rinse, repeat.
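A rough sketch of the buffer half of that recipe. Everything here except the glTexImage2D call itself is my assumption (class and method names, RGBA byte layout), and the GL calls are left as comments because they only work with a live LWJGL context:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of the direct-ByteBuffer approach: poke raw RGBA bytes into a
// direct buffer on the CPU, then (with an active GL context) upload the
// whole thing each frame and draw one textured quad.
public class PixelBuffer {
    private final int width;
    private final int height;
    private final ByteBuffer data; // 4 bytes per pixel: R, G, B, A

    public PixelBuffer(int width, int height) {
        this.width = width;
        this.height = height;
        this.data = ByteBuffer.allocateDirect(width * height * 4)
                              .order(ByteOrder.nativeOrder());
    }

    public void setPixel(int x, int y, byte r, byte g, byte b, byte a) {
        int i = (y * width + x) * 4;
        data.put(i, r);
        data.put(i + 1, g);
        data.put(i + 2, b);
        data.put(i + 3, a);
    }

    public byte red(int x, int y) {
        return data.get((y * width + x) * 4);
    }

    public void upload() {
        data.rewind();
        // With an active LWJGL context, this is where you would call:
        // GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA,
        //         width, height, 0, GL11.GL_RGBA,
        //         GL11.GL_UNSIGNED_BYTE, data);
        // ...then render a single quad with that texture over the display.
    }
}
```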

Cas :slight_smile:

Any examples/tutorials of that? Like I said, I suck at OpenGL.

I don't get it.

You know how you push data into your image.

You can render your image using plain old Java2D: g.drawImage(img, 0, 0, null);


   public static int[] accessRasterIntArray(BufferedImage src)
   {
      return ((DataBufferInt) src.getRaster().getDataBuffer()).getData();
   }

   public static byte[] accessRasterByteArray(BufferedImage src)
   {
      return ((DataBufferByte) src.getRaster().getDataBuffer()).getData();
   }


BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
int[] rgb = accessRasterIntArray(img);

public void setPixel(int x, int y, int color)
{
    this.rgb[y*w+x] = color;
}



public void paint(Graphics g)
{
    g.drawImage(img, 0, 0, null);
}

I doubt anything will be (much) faster than that.
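Assembling those snippets into one self-contained class might look like this (the class name PixelScreen and the getPixel accessor are mine; the direct-raster trick is exactly the accessRasterIntArray approach above):

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// A BufferedImage whose backing int[] is written directly, then drawn
// with plain Java2D. Writes to 'rgb' show up in the image immediately,
// with no per-pixel method-call or upload overhead.
public class PixelScreen {
    private final BufferedImage img;
    private final int[] rgb;
    private final int w;

    public PixelScreen(int w, int h) {
        this.w = w;
        this.img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        this.rgb = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
    }

    public void setPixel(int x, int y, int color) {
        rgb[y * w + x] = color;
    }

    public int getPixel(int x, int y) {
        return rgb[y * w + x];
    }

    public void paint(Graphics g) {
        g.drawImage(img, 0, 0, null); // blits the freshly-poked pixels
    }
}
```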

Why do you want OpenGL? It will be slower.

* Riven thinks he deserves a few bucks :slight_smile:

Actually GL will be a bit faster as it's slightly more direct. And if he wants to scale the image to whatever actual size - maybe with mag/minification - very probably quicker, or at least reliably quicker in more places more of the time.

dime - this is possibly the simplest thing you can do with OpenGL. I've not got the time to google it for you though. You basically want to: create a texture of appropriate size. Then in your loop, calculate your pixels and directly twiddle the data in the byte buffer. Upload the data to the texture each frame using your direct byte buffer and glTexImage2D. Render a quad using that texture over your entire display.

Cas :slight_smile:

Sure, if you scale/clip/do-anything OpenGL will be faster, but a pure blit (memcpy) is hard to beat. The upload to the GPU will be much slower.

Oh well, sounds to me like itā€™s best for him to keep things simple :slight_smile: Once it works on the CPU, he might want to try to support scaling and whatnot by pushing some work to the GPU.

OK, thanks guys. I guess this thread can be closed. I'll go back to reading up on OpenGL.