Localisation
One last thing we must examine about how we hear is the way we understand where a sound comes from. Our brain is capable of a very detailed analysis of what our ears transmit to it, especially when it comes to sound localisation.

Take the picture below as an example:

[Figure: a listener with a sound source to their right; the sound travels a distance d1 to the right ear and a distance d2 to the left ear]


As we can see, the sound comes from the right of our subject. The sound has to cover a distance d1 to reach his right eardrum, and a distance d2 to reach his left one. Now, d1 is slightly shorter than d2: the difference is roughly 20 cm, the average width of a human head. It follows that the sound will take a little longer to reach the left ear.

This small difference changes several things about how the sound arrives at the two ears; in particular there will be:

- A difference in sound pressure
- A difference in time, which will cause
- A difference in phase
- A difference in the frequency content of the two sounds
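The time and phase differences in the list above follow directly from the extra distance the sound travels. As a rough sketch, assuming an illustrative path difference of 20 cm and a speed of sound of 343 m/s (both round figures, not exact anatomical values):

```python
# Sketch of the interaural time and phase differences described above.
# The 0.20 m path difference and 343 m/s speed of sound are assumed
# illustrative values, not precise anatomical figures.
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at about 20 degrees C
path_difference = 0.20   # m, roughly the extra distance to the far ear

# Interaural Time Difference: extra travel time to the far ear
itd = path_difference / SPEED_OF_SOUND  # seconds, about 0.58 ms

def phase_difference(frequency_hz, itd_s):
    """Phase offset (radians) between the two ears at a given frequency.

    The same fixed time delay corresponds to a different phase shift
    at each frequency, which is why a time difference also produces
    a phase difference.
    """
    return 2 * math.pi * frequency_hz * itd_s

print(f"ITD: {itd * 1000:.2f} ms")
print(f"Phase shift at 500 Hz: {phase_difference(500, itd):.2f} rad")
```

Note how the phase shift grows with frequency even though the time delay stays fixed: that is why the time difference and the phase difference are listed as related but separate cues.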

It is clear how the first three points relate to the increased distance, but the last point needs some explanation. When a sound wave comes across an 'obstacle', two things can happen: either the obstacle is not big enough to reflect the sound wave, in which case the sound passes around the object, or the obstacle reflects the wave back. What makes the difference? First of all, the wavelength: low frequencies have very long sound waves, which can bend around fairly large objects, while high frequencies are much easier to stop.

This effect is easy to experience when listening to music playing in another room: it has that typical muffled quality, which is due to the loss of high frequencies. The amplitude of the sound wave also plays a role, because a strong enough wave can make the obstacle itself vibrate and transmit the sound anyway: so if our neighbour lowers the volume, we won't hear the music any more. There is a mathematical relation between the wavelength of a sound and the size of the obstacles that can stop it, but we will not go into it in this course.
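The wavelength comparison above can be sketched with the familiar relation wavelength = speed of sound / frequency. The 0.20 m 'head width' here is an illustrative assumption, and the simple longer-than/shorter-than test is only a rough stand-in for the real diffraction behaviour:

```python
# Wavelength sketch: lambda = c / f decides (very roughly) whether an
# obstacle blocks a sound. The 0.20 m "head width" threshold is an
# illustrative assumption, not a precise acoustic model.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C
head_width = 0.20       # m, rough size of the obstacle

results = {}
for freq in (100, 1000, 10000):  # Hz
    wavelength = SPEED_OF_SOUND / freq
    # Waves much longer than the obstacle diffract around it;
    # shorter waves are reflected or absorbed (an acoustic shadow).
    if wavelength > head_width:
        results[freq] = "passes around the head"
    else:
        results[freq] = "is shadowed by the head"
    print(f"{freq:>5} Hz -> wavelength {wavelength:.3f} m, {results[freq]}")
```

A 100 Hz tone has a wavelength of over 3 m, far larger than a head, while at 10 kHz the wavelength is only a few centimetres, which is why the far ear loses mostly high frequencies.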

Back to our case: our head is big enough to stop the highest frequencies of the audible range, so what arrives at the right ear contains frequencies that our head blocks from reaching the left ear. This frequency 'filtering' is also performed by our shoulders and torso and by the shape of the pinna. Of course we are not able to consciously distinguish such small differences, but our brain is, and it interprets the different sound qualities and arrival times as belonging to a single sound coming from our right-hand side.

The difference in amplitude is also called the IAD (Interaural Amplitude Difference).
The difference in time is also called the ITD (Interaural Time Difference).
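These two cues are exactly what audio software exploits to place a sound in the stereo field. A minimal sketch, assuming illustrative values for the sample rate, delay, and attenuation, is to take a mono signal and give the 'far' ear a short delay (the ITD) and a slightly lower level (the IAD):

```python
# Minimal sketch of placing a mono sound to the listener's right by
# applying an ITD (delay) and an IAD (attenuation) to the left channel.
# The sample rate, delay, and gain values are illustrative assumptions.
import math

SAMPLE_RATE = 44100          # samples per second
itd_seconds = 0.00058        # about 0.58 ms, the delay to the far (left) ear
delay_samples = round(itd_seconds * SAMPLE_RATE)
iad_gain = 0.7               # the far ear also receives less energy

# A short 500 Hz mono test tone (0.1 s)
mono = [math.sin(2 * math.pi * 500 * n / SAMPLE_RATE)
        for n in range(SAMPLE_RATE // 10)]

right = mono                                                  # near ear: unchanged
left = [0.0] * delay_samples + [s * iad_gain for s in mono]   # far ear: delayed and quieter

print(f"Interaural delay: {delay_samples} samples")
```

Played back over headphones, the brain combines the two channels into a single sound heard from the right, just as it does with the real interaural differences described above.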

As you can imagine, sound localisation is possible thanks to the fact that we have two ears, and that our brain compares the 'data' that come from both of them.

Now we know enough about the marvellous capabilities of human hearing to move on and look at some of its most peculiar behaviours: sometimes, what we think we hear is not exactly what is being produced by the sound source. In other words, due to their physical limitations, our ears sort of 'fool' us, making us hear things differently from how they really are. The situations we are going to examine in the next chapters are absolutely normal: they are not related to any medical condition and affect even the healthiest hearing. Because they are so tied to our subjective perception, some people refer to them as 'psychoacoustic' phenomena.