Using software rendering instead of GPU rendering?

I’m currently writing a Doom-esque 2.5D game, not using any libraries such as LWJGL or JOGL. Would it still be okay to keep using the game’s software rendering (drawing everything into a pixel buffer myself, roughly as sketched below) instead of moving on to GPU rendering with LWJGL? Any help is appreciated ;D.

Also to note: I’m still a bit new to this, so if I get a few things wrong, let me know.
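(For context, this is the kind of pure-Java software rendering the question is about: you write ARGB values into a pixel array yourself and blit the finished image once per frame. A minimal sketch, assuming a `BufferedImage` backed by an `int[]`; the class name and the gradient “scene” are purely illustrative.)

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import javax.swing.JPanel;

// Minimal software framebuffer: draw by writing ARGB ints, then blit once per frame.
public class SoftwareCanvas extends JPanel {
    private final BufferedImage frame =
            new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
    // Direct access to the image's backing pixel array.
    private final int[] pixels =
            ((DataBufferInt) frame.getRaster().getDataBuffer()).getData();

    public void render() {
        // Example "rendering": fill with a vertical gradient, one pixel at a time.
        for (int y = 0; y < frame.getHeight(); y++) {
            int shade = (y * 255) / frame.getHeight();
            for (int x = 0; x < frame.getWidth(); x++) {
                pixels[y * frame.getWidth() + x] = (shade << 16) | (shade << 8) | shade;
            }
        }
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // One blit of the finished frame; Java2D may hardware-accelerate this copy.
        g.drawImage(frame, 0, 0, getWidth(), getHeight(), null);
    }
}
```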

Shouldn’t be a problem on today’s CPUs; however, it seems a bit of a waste for the GPU to just sit there when you could offload work to it and free up the CPU for other tasks.

Hi

I used software rendering at the very beginning of my project in 2006. The engine I used was called d3caster; it uses a very basic raycasting algorithm. There are more efficient solutions even in this field: 3DzzD is very fast and supports both hardware and software rendering. However, I agree with kappa. You can get a decent frame rate even on low-end machines, but only if you use very simple meshes. Today’s CPUs push the limits further, but even though some operations in Java are already hardware accelerated “under the hood”, your rendering will always be noticeably slower on the CPU.
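(I can’t vouch for how d3caster does it internally, but the “very basic raycasting algorithm” mentioned here is usually a Wolfenstein-style grid walk: for each screen column, step a ray cell by cell through a tile map until it hits a wall, then the wall slice’s on-screen height is just screenHeight / distance. A minimal DDA sketch, with an illustrative hard-coded map:)

```java
// Minimal Wolfenstein-style raycaster core: step a ray through a tile grid (DDA)
// until it hits a wall, then derive the wall-column height from the distance.
public final class Raycaster {
    // 1 = wall, 0 = empty; the border is solid so every ray terminates.
    static final int[][] MAP = {
        {1, 1, 1, 1, 1},
        {1, 0, 0, 0, 1},
        {1, 0, 0, 0, 1},
        {1, 1, 1, 1, 1},
    };

    /** Returns the perpendicular distance to the first wall hit by the ray. */
    static double castRay(double posX, double posY, double dirX, double dirY) {
        int mapX = (int) posX, mapY = (int) posY;
        double deltaX = Math.abs(1.0 / dirX), deltaY = Math.abs(1.0 / dirY);
        int stepX = dirX < 0 ? -1 : 1, stepY = dirY < 0 ? -1 : 1;
        double sideX = dirX < 0 ? (posX - mapX) * deltaX : (mapX + 1.0 - posX) * deltaX;
        double sideY = dirY < 0 ? (posY - mapY) * deltaY : (mapY + 1.0 - posY) * deltaY;
        boolean hitVertical;
        do { // DDA: advance exactly one grid cell per iteration
            if (sideX < sideY) { sideX += deltaX; mapX += stepX; hitVertical = true; }
            else               { sideY += deltaY; mapY += stepY; hitVertical = false; }
        } while (MAP[mapY][mapX] == 0);
        return hitVertical ? sideX - deltaX : sideY - deltaY;
    }
}
```

The renderer then loops over every screen column x, builds a ray direction from the camera angle, and draws a vertical strip of height screenHeight / castRay(...) — one texture lookup per pixel, all on the CPU.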

I have used JOGL for several years; it contains some nice renderer quirks to work around some very famous driver bugs, which makes it safer than home-made plain C/C++ OpenGL code unless you are a driver specialist :s The very first version of my game only had 2D enemies in a flat 3D level, a bit like Wolfenstein, and hardware acceleration gave me a huge boost. The game was initially unplayable in full-screen mode with software rendering; it was so slow that I got only one frame every 2 seconds :frowning:
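(To make that “huge boost” concrete: in JOGL, each of those flat 2D enemies becomes a textured quad that the GPU rasterizes for free. A minimal fixed-function sketch, assuming a JOGL 2 `GL2` context and an already-created, already-bound sprite texture; the class and parameters are illustrative:)

```java
import javax.media.opengl.GL2;

// Draws one flat enemy sprite as a textured quad. Assumes the sprite texture
// is already bound and the modelview matrix is set up; purely illustrative.
public final class SpriteRenderer {
    static void drawSprite(GL2 gl, float x, float y, float z, float size) {
        float h = size / 2f;
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 1f); gl.glVertex3f(x - h, y - h, z); // bottom-left
        gl.glTexCoord2f(1f, 1f); gl.glVertex3f(x + h, y - h, z); // bottom-right
        gl.glTexCoord2f(1f, 0f); gl.glVertex3f(x + h, y + h, z); // top-right
        gl.glTexCoord2f(0f, 0f); gl.glVertex3f(x - h, y + h, z); // top-left
        gl.glEnd();
    }
}
```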

Use software rendering only if you don’t target low-end CPUs or if your graphics are extremely rudimentary, but I don’t see the point, except that you don’t need to sign your application.

You could write a software renderer using OpenCL… >_>

@theagentd - That sounds very cool, but it might actually be slower than plain CPU rendering. You would have to send the info to OpenCL on the GPU, let it work things out, then send the result back to the CPU so you can send it back to the GPU again to be rendered. Or maybe I’m missing something.

Looking at OpenGL, it’s likely not slower :slight_smile: I know OpenCL and OpenGL are not the same thing, but the difference shouldn’t be too big.

I was obviously kidding. That’s exactly what OpenGL is for.

That most likely wouldn’t be the case. There is a lot of fixed-function hardware between the shader stages in OpenGL: hardware rasterizers that determine which pixels lie inside a triangle using multiple sample points, hardware interpolators that interpolate vertex attributes across the triangle for each pixel, and most importantly ROP (raster output) units that keep performance high while synchronizing the order of pixel writes. I bet there’s even more hardware under the hood that OpenGL uses and that isn’t exposed in OpenCL.

Doing a software renderer on the GPU is not a dumb idea. NVIDIA did some research in this direction: first they created a software rasterizer with CUDA, which reached about 1/10th the speed of normal hardware rendering. Then they did a lot of research on raytracing on the GPU, which is a lot faster than doing it on the CPU.

http://de.slideshare.net/NVIDIA/alternative-rendering-pipelines-on-nvidia-cuda

It is not something you can use for games at the moment, because it only works for fairly small scenes and your customers would need a high-end graphics card.

They are using it in the movie industry for pre-vis, for example.

On the other hand, one can always use some sort of raycasting in a shader for special rendering techniques, which is a kind of software rendering.
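(A concrete example of what “raycasting in a shader” means: instead of rasterizing geometry, the fragment shader computes a ray per pixel and intersects it with the scene analytically. A minimal sketch that ray-traces one sphere, with the GLSL embedded as a Java string and compiled through LWJGL’s `GL20` bindings; the scene, the `resolution` uniform and the class are illustrative, and you would still attach the shader to a program, set the uniform, and draw a fullscreen quad:)

```java
import static org.lwjgl.opengl.GL20.*;

// "Software rendering" inside a fragment shader: every pixel casts its own ray.
// Illustrative GLSL that ray-traces a single sphere analytically.
public final class ShaderRaycast {
    static final String FRAGMENT_SRC =
        "uniform vec2 resolution;\n" +
        "void main() {\n" +
        "    // Build a view ray for this pixel from a camera at the origin.\n" +
        "    vec2 uv = (gl_FragCoord.xy / resolution) * 2.0 - 1.0;\n" +
        "    vec3 dir = normalize(vec3(uv, -1.0));\n" +
        "    vec3 center = vec3(0.0, 0.0, -3.0);   // sphere center\n" +
        "    float r = 1.0;                        // sphere radius\n" +
        "    // Ray/sphere intersection: solve |t*dir - center|^2 = r^2.\n" +
        "    float b = dot(dir, center);\n" +
        "    float disc = b * b - dot(center, center) + r * r;\n" +
        "    if (disc < 0.0) { gl_FragColor = vec4(0.0); return; }\n" +
        "    float t = b - sqrt(disc);             // nearest hit\n" +
        "    vec3 n = normalize(t * dir - center); // surface normal for shading\n" +
        "    gl_FragColor = vec4(vec3(max(n.z, 0.0)), 1.0);\n" +
        "}\n";

    static int compile() {
        int shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, FRAGMENT_SRC);
        glCompileShader(shader);
        return shader; // attach to a program and draw a fullscreen quad to use it
    }
}
```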

There are some things that are easier to do in software (and probably more efficient too), like the swirling background I did in this gfx effect:

rel.phatcode.net/junk.php?id=142
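(I don’t know how that particular effect is implemented, but a swirl like that is a classic per-pixel software trick: every destination pixel samples the source image at a position rotated around the screen center, with the rotation angle falling off with radius. A minimal sketch, assuming `src` and `dst` are same-sized ARGB `int[]` buffers; all names are illustrative:)

```java
// Per-pixel swirl: each destination pixel samples the source image at a
// position rotated around the center, with the angle shrinking with radius.
static void swirl(int[] src, int[] dst, int w, int h, double strength) {
    double cx = w / 2.0, cy = h / 2.0;
    double maxR = Math.sqrt(cx * cx + cy * cy);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double dx = x - cx, dy = y - cy;
            double r = Math.sqrt(dx * dx + dy * dy);
            double angle = strength * (1.0 - r / maxR); // strongest at the center
            double cos = Math.cos(angle), sin = Math.sin(angle);
            int sx = (int) (cx + dx * cos - dy * sin);  // rotated sample position
            int sy = (int) (cy + dx * sin + dy * cos);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                dst[y * w + x] = src[sy * w + sx];
            } else {
                dst[y * w + x] = 0xFF000000;            // outside the image: black
            }
        }
    }
}
```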