Only situation I can think of that could cause that (excluding driver peculiarities) would be :-
If you were using Java 1.5
and you didn’t have the OGL pipeline enabled
and you were using createCompatibleVolatileImage(w,h,transparency)
and you were drawing the image onto an unaccelerated surface.
In that situation, the volatile image with translucency would be created in main memory (because, by my understanding, translucent volatile images aren’t accelerated by default in 1.5),
whereas the bitmask volatile image would be created in vram.
Obviously, with the target image in main memory rather than vram,
a main -> main blit would be quicker than a vram -> main blit (due to vram readback being painfully slow).
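A rough way to check where such an image has ended up is to ask its capabilities - a minimal sketch (assuming the usual java.awt imports; the size is arbitrary):

GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();

// a translucent volatile image - in 1.5 this may well end up in system memory
VolatileImage vi = gc.createCompatibleVolatileImage(512, 512, Transparency.TRANSLUCENT);

// isAccelerated() reports whether the surface currently lives in vram
System.out.println("accelerated: " + vi.getCapabilities().isAccelerated());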
I think you’re wrong… createCompatible… means it creates the image most suited for the environment. If the environment uses an accelerated buffer (in vram), the image would (try to) be in vram too. If the back buffer is not in vram, I think the image wouldn’t be in vram either.
I think the images can’t be cached, as they are bigger than 256x256…
Anyway, I’m getting really frustrated with all this unexpected behaviour…
createCompatibleImage(w,h,transparency) attempts to create an image most suitable
for rendering to the device. But if you’re trying to create a translucent compatible image,
there may be no way to have anything more suitable than a plain IntArgb BufferedImage.
As translucent images weren’t accelerated in 5.0, rendering them to a VolatileImage would
cause the latter to be punted to system memory to avoid doing readbacks from vram.
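You can check that easily enough - something like this sketch will typically show the translucent compatible image coming back as an ordinary TYPE_INT_ARGB image (the size is arbitrary):

GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
BufferedImage img = gc.createCompatibleImage(512, 512, Transparency.TRANSLUCENT);
System.out.println("type:        " + img.getType());                        // usually TYPE_INT_ARGB
System.out.println("accelerated: " + img.getCapabilities(gc).isAccelerated());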
Some data on your environment (video board, os) wouldn’t hurt, btw.
Also, -Dsun.java2d.trace=count is your friend. Run with this option with your
images being loaded as translucent and as bitmask images. From that it’d be easier
to see what loops are being used.
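For example (the main class name here is just a placeholder):

java -Dsun.java2d.trace=count YourGame

The per-loop counts are dumped when the VM exits.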
here’s some code:
class Mountain
{
    // fields such as SetImage, Width, Height, texture, XPoints, YPoints, NumOfPoints and DrawX
    // are declared elsewhere in the class
    public Mountain()
    {
        // some other code here…
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice gs = ge.getDefaultScreenDevice();
        GraphicsConfiguration gc = gs.getDefaultConfiguration();

        // this runs faster with Transparency.TRANSLUCENT than with Transparency.BITMASK
        SetImage = gc.createCompatibleImage(Width, Height, Transparency.TRANSLUCENT);
        Graphics g = SetImage.getGraphics();
        Graphics2D G2 = (Graphics2D) g;
        G2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        G2.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
        G2.setPaint(texture);
        g.fillPolygon(XPoints, YPoints, NumOfPoints);   // XPoints is an array of ints
        g.setColor(new Color(0, 0, 0));
        g.drawPolyline(XPoints, YPoints, NumOfPoints);
    }

    // and this is the rendering function:
    public void draw(Graphics g)
    {
        g.drawImage(SetImage, DrawX, 0, null);
    }
}
// clear the screen
g.setColor(new Color(100, 200, 255));
g.fillRect(0, 0, theScreen.GetWidth(), theScreen.GetHeight());

// this transforms the coordinate system from (X left to right, Y top to bottom) to (X left to right, Y bottom to top)
G2.setTransform(StartAT);

// theMountains is an array of Mountain objects
for (int i = 0; i < theMountains.length; i++)
{
    theMountains[i].draw(g);
}

g.dispose();
strategy.show();
What does the creation of your BufferStrategy look like?
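For comparison, the usual pattern looks roughly like this (a sketch with placeholder names, assuming the usual java.awt imports and rendering to a Canvas):

Frame frame = new Frame("game");
Canvas canvas = new Canvas(gc);
canvas.setSize(800, 600);
frame.add(canvas);
frame.pack();
frame.setVisible(true);

canvas.createBufferStrategy(2);                      // 2 buffers = double buffering
BufferStrategy strategy = canvas.getBufferStrategy();

// then, once per frame:
Graphics2D G2 = (Graphics2D) strategy.getDrawGraphics();
// ... draw everything ...
G2.dispose();
strategy.show();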
My guess is that this line :-
[quote]G2.setTransform(StartAT);[/quote]
is preventing the accelerated pipeline from being used.
Therefore, both BITMASK and TRANSLUCENT images are being drawn through the software blit loops.
Now, because TRANSLUCENT blits require readback from the BufferStrategy’s VolatileImage back buffer,
the rendering pipeline is being sensible and shunting the back buffer out of vram permanently,
giving you lots of main memory -> main memory blits, followed by just a single main memory -> vram blit at the end.
This would be OK if you have fast system memory.
Whereas blitting a BITMASK image onto the volatile back buffer doesn’t require pixel readback,
so the back buffer is left in vram - resulting in a lot of main memory -> vram blits.
If your graphics card isn’t too speedy, or the AGP bus isn’t fast, this will cause a bottleneck.
How much of a speed difference between BITMASK and TRANSLUCENT are we talking about?
Disclaimer
All that is a guess from the limited info available =D
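If you want to put a rough number on it, a quick micro-benchmark sketch (assuming gc is your GraphicsConfiguration and strategy your BufferStrategy; no VolatileImage contents-loss handling, sizes arbitrary):

BufferedImage bitmask     = gc.createCompatibleImage(512, 512, Transparency.BITMASK);
BufferedImage translucent = gc.createCompatibleImage(512, 512, Transparency.TRANSLUCENT);

Graphics2D g2 = (Graphics2D) strategy.getDrawGraphics();
long start = System.nanoTime();
for (int i = 0; i < 1000; i++)
{
    g2.drawImage(bitmask, 0, 0, null);   // swap in 'translucent' for the second run
}
Toolkit.getDefaultToolkit().sync();      // flush any queued accelerated operations
long elapsedMs = (System.nanoTime() - start) / 1000000L;
g2.dispose();
strategy.show();
System.out.println("1000 blits took " + elapsedMs + " ms");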
OK, so how can I overcome this problem (besides switching to Mustang)?
And what’s the use of using the OpenGL pipeline if it prevents me from applying transforms?
If the OpenGL pipeline is functioning correctly, transforms will be accelerated.
(Without the OpenGL pipeline, transforms will never be accelerated in 1.5.)
My understanding is that 1.6’s D3D pipeline offers equivalent accelerated functionality without relying on OpenGL (though I have yet to download and try 1.6).
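For reference, the OpenGL pipeline is enabled with a system property on the command line (the main class name is a placeholder):

java -Dsun.java2d.opengl=true YourGame

Passing =True with a capital ‘T’ should also print a confirmation line at startup telling you whether the pipeline was actually enabled.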
So this applies only to D3D, and OpenGL is accelerated?
Then why can’t I use OpenGL? I’m not getting accelerated results when enabling the OpenGL pipeline either…
Yeah, yeah, okay, mea culpa. I can’t believe how much flak we’ve caught for this one. I remember reading a blog where some developer was fuming over the whole ‘t’/‘T’ thing. I guess he really must’ve run out of things to complain about if something like this could keep him up at night.