Hi!
When I use OpenJDK, each sound is played correctly only the first time. Does it use PulseAudio? Do you think using JOAL would allow me to work around this limitation? Best regards.
LibraryJOAL, Version update
- Fixed JOAL package name from the old net.java.games.joal to the new com.jogamp.openal.
In case anyone else has the same problem, this is a problem with the available hardware mixers - basically they only let you create a certain number of lines/clips, after which they stop playing, no matter what you do to delete or shut down the earlier lines - and with no error messages or anything. I've not gotten my own software mixer to work yet - pure Java just seems to be too slow for this type of operation.
The reason I have to be able to create new lines rather than reusing the old ones is that there doesn't seem to be a way in the API to change the audio format of an existing line. This isn't a problem on the Java Sound Audio Engine mixer, because I can kill the previous line and create a new one. Unfortunately, the Java Sound Audio Engine is only available on normal Sun Java, not on alternate versions such as OpenJDK. I'm creating a new Library plug-in for non-Sun versions of Java which requires the user to have all their audio files in the same format (so I can reuse the lines). I might eventually work this into the existing LibraryJavaSound plug-in with a configuration option, depending on how extensive the changes turn out to be.
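For anyone curious, the kind of reuse I have in mind looks roughly like this - just a sketch, assuming every clip has already been converted to one shared AudioFormat ahead of time (the format values and class name are placeholders, not what the plug-in will actually use):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
import java.io.IOException;
import java.io.InputStream;

public class SharedFormatPlayer {
    // One shared format for every clip, so the same line can be reused
    // (44.1 kHz, 16-bit, stereo, signed, little-endian -- placeholder values).
    private static final AudioFormat FORMAT = new AudioFormat(44100f, 16, 2, true, false);

    private final SourceDataLine line;

    public SharedFormatPlayer() throws LineUnavailableException {
        line = AudioSystem.getSourceDataLine(FORMAT);
        line.open(FORMAT);
        line.start();
    }

    // Streams one pre-converted clip through the already-open line,
    // instead of creating (and eventually running out of) new lines.
    public void play(InputStream rawPcm) throws IOException {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = rawPcm.read(buffer)) != -1) {
            line.write(buffer, 0, read);
        }
        line.drain();
    }
}
```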
Thank you very much.
Talking about 3D sound, have you ever thought of trying to implement the Cetera algorithm in your 3D sound system? That would dramatically increase its awesomeness.
LMFAO, awesome. I don’t see the algorithm itself published anywhere though (proprietary technology, maybe?)
I guess with the explanation for how it supposedly works (brain picking up on differences in amplitude and phase offset), I could try to “reinvent” the algorithm from scratch (knowing the direction vector a source is coming from and assuming the average speed of sound, attenuation at sea-level, and distance between the ears, it should be a matter of applying some trigonometry and a logarithmic function).
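To make that concrete, here is a rough sketch of the math I'm picturing - the ear distance, speed of sound, and the logarithmic roll-off figure are all my own assumptions, not anything taken from the actual (unpublished) Cetera algorithm:

```java
public final class BinauralCue {
    private static final double SPEED_OF_SOUND = 343.0; // m/s at sea level (assumed)
    private static final double EAR_DISTANCE   = 0.18;  // meters between the ears (assumed)

    // Interaural time difference in seconds for a normalized direction vector
    // (x to the listener's right, z straight ahead). Positive means the sound
    // reaches the right ear first.
    public static double timeDifference(double x, double z) {
        double azimuth = Math.atan2(x, z); // angle off the straight-ahead axis
        return (EAR_DISTANCE / SPEED_OF_SOUND) * Math.sin(azimuth);
    }

    // Rough gain factor for the far ear: a simple logarithmic fall-off standing
    // in for real head shadowing, up to about 6 dB quieter (made-up figure).
    public static double farEarGain(double x, double z) {
        double shadow = Math.abs(Math.atan2(x, z)) / (Math.PI / 2.0); // 0 = in front, 1 = to the side
        return Math.pow(10.0, -6.0 * Math.min(shadow, 1.0) / 20.0);
    }
}
```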
I did some tests with it, and the difference in the time the sound waves reach your ears is about 0.3ms (max!).
It's amazing your brain can pick up these tiny differences.
The result is not really convincing though. You have to simulate sound waves inside your skull for it to be truly believable, as in not being able to distinguish what you hear through the headphones from some real-life positional sound source. The illusion immediately breaks when you turn your head, as all the offsets are off.
To be honest, though, I think for many applications the need to keep your head in a constant orientation wouldn’t be a huge problem. For example, when you are intently focused on an action game, you aren’t really looking around the room all that often (and when you do, you are probably paying attention to something besides the game anyway, so it wouldn’t matter much if the illusion is broken at that point).
Were you working with the actual Cetera algorithm, Riven, or "reinventing" it, so to speak?
You're missing the realism you can achieve. When you hear a very realistic sound and can't distinguish it from reality, you're actually likely to look at the 'source' of the audio. You just can't help yourself.
I just did the math, implemented it and put on my headphones. I wasn’t aware of any existing algorithms.
Well, I think that's the point of doing it: making it realistic enough. And I think yes, you can help yourself, after getting used to the realism.
Just an example: have you ever tried to get your grandparents to play a video game? In the beginning they'll find themselves "dodging" the bullets/enemies with their bodies while holding the joystick. After a while they get used to it and concentrate enough that they stop dancing with the joystick in their hands.
It should be a fun math exercise, anyway. I'll see what I can come up with. Besides, who knows, at some point a type of head-orientation device might become widely used - then you could simply feed the listener orientation to the SoundSystem and the problem is solved.
This conversation just totally gave me a great idea for my next project. A pattern-recognition algorithm that can recognize the eyes and nose from webcam input. Knowing their position on a 2D cross section of a player’s head could be used to calculate a fairly accurate orientation for the player’s head. That could control audio positional data for a scene. I am totally excited about this!
You’d also have to know the position and orientation of the webcam.
Yep. I’m thinking a simple “please look directly at the dot on the screen” configuration step to generate a matrix to apply to the later calculated orientations.
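Something along these lines is what I'm picturing for that calibration matrix - purely hypothetical names, skipping the computer-vision part entirely, and relying on the fact that for a pure rotation matrix the inverse is just the transpose:

```java
public final class HeadCalibration {
    private double[][] inverseReference; // transpose of the pose captured at calibration

    // Call once while the player is looking directly at the dot on the screen.
    public void calibrate(double[][] rawOrientation) {
        // For a pure rotation matrix, the inverse is simply the transpose.
        inverseReference = transpose(rawOrientation);
    }

    // Converts a later raw orientation from the webcam into one relative to the screen.
    public double[][] toScreenSpace(double[][] rawOrientation) {
        return multiply(inverseReference, rawOrientation);
    }

    private static double[][] transpose(double[][] m) {
        double[][] t = new double[3][3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                t[c][r] = m[r][c];
        return t;
    }

    private static double[][] multiply(double[][] a, double[][] b) {
        double[][] out = new double[3][3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                for (int k = 0; k < 3; k++)
                    out[r][c] += a[r][k] * b[k][c];
        return out;
    }
}
```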
Talking about ideas, if you implement the "Cetera" algorithm in your engine, it could also open new possibilities for producing more complex games for visually impaired people. Or even a game to be played with your eyes closed (and no more excuses for not finishing a game because of the art).
I've started coding this. The math for calculating the timing and gain differences is actually surprisingly simple (that kind of worries me for some reason). What I have to figure out now is how to take those calculated values and apply them. I'm thinking a basic two-mono-input to one-stereo-output mixer. Gain differences are easy. Phase differences are going to be more tricky (specifically, changing the phase difference dynamically as the sound is playing). I'm not sure if this should be done with slight sample-rate changes or by "throwing out" data from whichever side needs to be ahead of the other. I suppose I'll just play around to see what sounds best.
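As a first pass, the "throw out / pad data" approach could look something like this for one mono source mixed into interleaved stereo - just a sketch assuming 16-bit mono PCM, none of this is the actual plug-in code:

```java
public final class SimplePanner {
    // Mixes one mono 16-bit PCM buffer into interleaved stereo,
    // delaying and attenuating the far ear relative to the near ear.
    public static short[] toStereo(short[] mono, int sampleRate,
                                   double timeDiffSeconds, // positive = right ear leads
                                   double farEarGain) {
        int delaySamples = (int) Math.round(Math.abs(timeDiffSeconds) * sampleRate);
        boolean rightLeads = timeDiffSeconds >= 0;
        short[] stereo = new short[mono.length * 2];
        for (int i = 0; i < mono.length; i++) {
            int lagged = i - delaySamples;              // the far ear hears an "older" sample
            short near = mono[i];
            short far  = lagged >= 0 ? (short) (mono[lagged] * farEarGain) : 0; // pad start with silence
            stereo[2 * i]     = rightLeads ? far  : near; // left channel
            stereo[2 * i + 1] = rightLeads ? near : far;  // right channel
        }
        return stereo;
    }
}
```

Changing the delay dynamically while the sound plays would still need some smoothing (crossfading or slight resampling) to avoid clicks, which is exactly the part I'll have to play around with.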
Thinking about these formulas, there isn't any difference between the values you get if the sound is playing in front of you or if it is behind you, so I think there needs to be a bit more to it. The phase difference shouldn't change, so I think the problem will be in the gain difference calculation. My initial thought is that sounds from the front should have a greater gain difference (due to the shape of the ear, which amplifies incoming sound waves originating within a more-or-less cone shape extending out from the ear). I'll have to think about this some more…
As I said, you have to simulate the sound waves bouncing off the skull, or it will never be realistic. Besides, even in real life it's actually very hard to distinguish a sound directly in front of you from one directly behind you, if there aren't any obstacles that reflect the sound and give the brain more clues.
Ah, of course. I was focusing so much on the differences between the ears that I left out the echo-back-to-the-ear part of the equation. The phase difference per side is easy enough to calculate (it adds one more line to mix on each side, so 4 total now). The gain difference for the echo is a bit more complicated, though. I wonder what an acceptable attenuation would be for inside the skull. Different than through the air, obviously, but I have no idea where I might find that. Maybe I could use an attenuation measurement made through water, since the head is mostly full of fluids after all.
Hmm… Even with an echo off the inside of the skull (which is more or less spherical), the values are still all the same whether the sound is in front or behind (i.e. symmetric across the x/y plane, so front-right would still sound exactly like back-right, etc.). I could add another echo per side off some imaginary sphere or cube that the listener is inside, to give the brain more information to process, but that would again be symmetric across the x/y plane, so mathematically no difference. Even echoing off some completely randomly positioned object would still be indistinguishable without some visual or other sensory cue to define whether that object was in front of or behind the listener. I'm really back to thinking this must have something to do with the extra gain added by the ear as a sound approaches a direction directly in front of it. Am I understanding the concept incorrectly?
The pinna, the outer part of the ear, serves to “catch” the sound waves. Your outer ear is pointed forward and it has a number of curves. This structure helps you determine the direction of a sound. If a sound is coming from behind you or above you, it will bounce off the pinna in a different way than if it is coming from in front of you or below you. This sound reflection alters the pattern of the sound wave. Your brain recognizes distinctive patterns and determines whether the sound is in front of you, behind you, above you or below you.
Sound, which travels through the air as vibrations, is captured by the pinna, or outer ear.
I think that will be a tough one to simulate.