Hi,
I’ve used OpenGL before, but I was wondering if someone can tell me exactly where OpenGL actually “lives”.
Does it reside in software (like in some Vista code or driver code) or in hardware (on the graphics card itself)? I’ve just bought a brand new PC with an ATI Radeon X1300 graphics card. The card comes with drivers from ATI that presumably let Windows Vista communicate with it.
When I issue an OpenGL command (say, to draw a square), I have trouble working out exactly how the computer ends up drawing it on the screen.
I assume the command travels from my code to JOGL, and then to the ATI graphics driver, which breaks the OpenGL “command” down into pieces the graphics card can understand. Is this correct? For concreteness, the sketch below shows the kind of call I mean.
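Here is a minimal square-drawing example of the sort I’m asking about, assuming the JOGL 1.x javax.media.opengl API (the class name SquareDemo is just mine for illustration):

```java
import java.awt.Frame;
import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;
import javax.media.opengl.GLCanvas;
import javax.media.opengl.GLEventListener;

public class SquareDemo implements GLEventListener {
    public static void main(String[] args) {
        Frame frame = new Frame("OpenGL square");
        GLCanvas canvas = new GLCanvas();      // requests a GL context from the driver
        canvas.addGLEventListener(new SquareDemo());
        frame.add(canvas);
        frame.setSize(400, 400);
        frame.setVisible(true);
    }

    public void display(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glBegin(GL.GL_QUADS);               // each call here is an OpenGL "command"
        gl.glVertex2f(-0.5f, -0.5f);
        gl.glVertex2f( 0.5f, -0.5f);
        gl.glVertex2f( 0.5f,  0.5f);
        gl.glVertex2f(-0.5f,  0.5f);
        gl.glEnd();
    }

    public void init(GLAutoDrawable drawable) {}
    public void reshape(GLAutoDrawable d, int x, int y, int w, int h) {}
    public void displayChanged(GLAutoDrawable d, boolean mode, boolean device) {}
}
```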
Does “hardware acceleration” for OpenGL happen automatically (by feeding the calls straight to the graphics card), or do I have to do something special to “tell” the card to accelerate the command? And how can I tell that my CPU isn’t doing the rendering, i.e. that I’m actually using the power of the graphics card? Is querying the driver strings, as in the sketch below, the right way to check?
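From what I’ve read, the usual check is to print the glGetString values from a callback that has a current GL context, e.g. by dropping this into the init() method of the sketch above (again JOGL 1.x, and this is just my understanding, so please correct me):

```java
public void init(GLAutoDrawable drawable) {
    GL gl = drawable.getGL();
    System.out.println("GL_VENDOR:   " + gl.glGetString(GL.GL_VENDOR));
    System.out.println("GL_RENDERER: " + gl.glGetString(GL.GL_RENDERER));
    System.out.println("GL_VERSION:  " + gl.glGetString(GL.GL_VERSION));
    // If GL_RENDERER reports "GDI Generic" (vendor "Microsoft Corporation"),
    // the calls are going to Windows' software OpenGL 1.1 fallback.
    // An ATI string here would mean the hardware driver is handling them.
}
```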
On a related note, I noticed that I can run Google Earth in OpenGL mode if I want. When I do, the rendering of the earth slows down incredibly (remember, this is a new machine with a 1.8 GHz Core 2 Duo). Why is the rendering so awful? Is it some Microsoft thing? It looks to me like the rendering is being done either very inefficiently or in software rather than being hardware accelerated.
Thanks in advance.
Mark