Hack 45. Detect Sound Direction

Our ears let us know approximately which direction sounds are coming from. Some sounds, like echoes, are not always informative, and there is a mechanism for filtering them out.

A major purpose of audition is telling where things are. There's an analogy used by auditory neuroscientists that gives a good impression of just how hard a job this is. The information bottleneck for the visual system is the ganglion cells that connect the eyes to the brain [Hack #13] . There are about a million in each eye, so, in your vision, there are about two million channels of information available to determine where something is. In contrast, the bottleneck in hearing involves just two channels: one eardrum in each ear. Trying to locate sounds using the vibrations reaching the ears is like trying to say how many boats are out on a lake and where they are, just by looking at the ripples in two channels cut out from the edge of the lake. It's pretty difficult stuff.

Your brain uses a number of cues to solve this problem. A sound will reach the near ear before the far ear, the time difference depending on the position of the sound's source. This cue is known as the interaural (between the ears) time difference. A sound will also be more intense at the near ear than at the far ear. This cue is known as the interaural level difference. Both these cues are used to locate sounds on the horizontal plane: the time difference (delay) for low-frequency sounds and the level difference (intensity) for high-frequency sounds (this is known as the Duplex Theory of sound localization). To locate sounds on the vertical plane, other cues in the spectrum of the sound (spectral cues) are used. The direction a sound comes from affects the way it is reflected by the outer ear (the ears we all see and think of as ears, but which auditory neuroscientists call pinnae). Depending on the sound's direction, different frequencies in the sound are amplified or attenuated. Spectral cues are further enhanced by the fact that our two ears are slightly different shapes and thus distort the sound vibrations differently.
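
To get a feel for the sizes involved, here is a minimal sketch of the interaural time difference under the common spherical-head (Woodworth) approximation. The head radius and speed of sound are assumed round figures, not values from the text.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C, assumed
HEAD_RADIUS = 0.0875    # m; an assumed average head radius

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta)),
    with theta the azimuth from straight ahead, in radians."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for azimuth in (0, 15, 45, 90):
    print(f"azimuth {azimuth:2d} deg -> ITD {itd_seconds(azimuth) * 1e6:5.0f} us")
```

Even for a sound directly to one side, the difference is only about 650 microseconds, which gives a sense of the timing precision the auditory system has to work with.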

The main cue is the interaural time difference. This cue dominates the others if they conflict. The spectral cues, providing elevation (up-down) information, aren't as accurate and are often misleading.

In Action

Echoes are a further misleading factor, and seeing how we cope with them is a good way to really feel the complexity of the job of sound localization. Most environments (not just cavernous halls but the rooms in your house too) produce echoes. It's hard enough to work out where a single sound is coming from, let alone having to distinguish between original sounds and their reverberations, all of which come at you from different directions. The distraction of these anomalous locations is mitigated by a special mechanism in the auditory system.

Those echoes that arrive at your ears within a very short interval are grouped together with the original sound, which arrives earliest. The brain takes only the first part of the sound to place the whole group. This is noticeable in a phenomenon known as the Haas Effect, also called the principle of first arrival or precedence effect.



The Haas Effect operates below a threshold of about 30-50 milliseconds between one sound and the next. Now, if the sounds are far enough apart, above the threshold, then you'll hear them as two sounds from two locations, just as you should. That's what we traditionally call echoes. By making echoes yourself and moving from an above-threshold delay to beneath it, you can hear the mechanism that deals with echoes come into play.

You can demonstrate the Haas Effect by clapping at a large wall.1 Stand about 10 meters from the wall and clap your hands. At this distance, the echo of your hand clap will reach your ears more than 50 milliseconds after the original sound of your clap. You hear two sounds.

Now try walking toward the wall, clapping with every pace. At about 5 meters (where the echo reaches your ears less than 50 ms after the original sound of the clap), you stop hearing sound coming from two locations. The location of the echo has merged with that of the original sound; both now appear to come as one sound from the direction of your original clap. This is the precedence effect in action, just one of many mechanisms that exist to help you make sense of the location of sounds.
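
The arithmetic behind this demonstration is just the extra round trip the echo makes to the wall and back. A minimal sketch, using the distances and the roughly 50 ms fusion threshold from the text and an assumed speed of sound of 343 m/s:

```python
SPEED_OF_SOUND = 343.0       # m/s, assumed
FUSION_THRESHOLD_MS = 50.0   # upper end of the Haas window given above

def echo_delay_ms(distance_to_wall_m):
    """The echo travels an extra 2 * distance relative to the direct clap."""
    return 2.0 * distance_to_wall_m / SPEED_OF_SOUND * 1000.0

for distance in (10.0, 7.5, 5.0, 2.0):
    delay = echo_delay_ms(distance)
    verdict = "fuses with the clap" if delay < FUSION_THRESHOLD_MS else "heard as a separate echo"
    print(f"{distance:4.1f} m -> {delay:4.1f} ms: {verdict}")
```

At 10 meters the delay is about 58 ms, comfortably above the threshold; by 5 meters it has dropped to about 29 ms, and the echo snaps into the original clap.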

How It Works

The initial computations used in locating sounds occur in the brainstem, in a peculiarly named region called the superior olive. Because the business of localization begins in the brainstem, surprising sounds are able to quickly produce a turn of the head or body to allow us to bring our highest resolution sense, vision, to bear in figuring out what is going on. The rapidity of this response wouldn't be possible if the information from the two ears were integrated only later in processing.

The classic model for how interaural time differences are processed is called the Jeffress Model, and it works as shown in Figure 4-1. Cells in the midbrain indicate a sound's position by increasing their firing in response to sound, and each cell takes sound input from both ears. The cell that fires most is the one that receives a signal from both ears simultaneously. Because these cells are most active when the inputs from both sides are synchronized, they're known as coincidence-detector neurons.

Figure 4-1. Neurons used in computing sound position fire when inputs from the left and right ear arrive simultaneously. Differences in time delays along the connecting lines mean that different arrival times between signals at the left and right ear trigger different neurons.


Now imagine if a sound comes from your left, reaching your right ear only after a tiny delay. If a cell is going to receive both signals simultaneously, it must be because the left-ear signal has been slowed down, somewhere in the brain, to exactly compensate for the delay. The Jeffress Model posits that the brain contains an array of coincidence-detector cells, each with particular delays for the signals from either side. By this means, each possible location can be represented with the activity of neurons with the appropriate delays built in.
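
Computationally, an array of coincidence detectors with graded internal delays behaves like a cross-correlator: the detector whose built-in delay exactly cancels the interaural delay receives both signals at once and fires hardest. Here is a toy sketch of that idea; the signal, sample rate, and delays are invented for illustration.

```python
import numpy as np

FS = 48_000  # sample rate in Hz, chosen arbitrarily
rng = np.random.default_rng(0)

# A noise burst at the left ear, and a copy at the right ear delayed by
# 12 samples (250 us), as if the source were off to the left.
true_delay = 12
left = rng.standard_normal(2048)
right = np.roll(left, true_delay)

# Each "coincidence detector" applies its own internal delay to the
# left-ear signal and responds according to how well the result lines
# up with the right-ear signal.
candidate_delays = np.arange(-40, 41)
responses = [float(np.dot(np.roll(left, d), right)) for d in candidate_delays]

best = candidate_delays[int(np.argmax(responses))]
print(f"most active detector: {best} samples = {best / FS * 1e6:.0f} us")
```

Which detector wins is itself the answer: the pattern of activity across the array is a place code for the sound's horizontal angle.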

The Jeffress Model may not be entirely correct. Most of the neurobiological evidence for it comes from work with barn owls, which can strike prey in complete darkness. Evidence from small mammals suggests other mechanisms also operate.2


Using the interaural time difference introduces an ambiguity of localization, because sounds need not be on the horizontal plane: they could be in front, behind, above, or below. A sound that comes in from your front right, on the same level as you, at an angle of 33° will sound, in terms of the interaural differences in timing and intensity, just like the same sound coming from your back right at an angle of 33° or from above right at an angle of 33°. There is thus a "cone of confusion" as to where you place a sound, which is what is shown in Figure 4-2. Normally you can use the other cues, such as the distortion introduced by your outer ears (spectral cues), to reduce the ambiguity.
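
The ambiguity shows up even in the simplest path-difference model of the ITD, ITD = (d / c) * sin(azimuth), since mirrored front and back positions have the same sine. The ear separation here is an assumed figure.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed
EAR_SEPARATION = 0.175  # m, assumed distance between the ears

def simple_itd_us(azimuth_deg):
    """Simplest path-difference model of the interaural time difference."""
    return EAR_SEPARATION / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg)) * 1e6

front = simple_itd_us(33)        # front right at 33 degrees
back = simple_itd_us(180 - 33)   # the mirrored back-right position
print(f"front: {front:.0f} us, back: {back:.0f} us")  # identical values
```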

Figure 4-2. Sounds from the surface of the "cone of confusion" produce the same interaural time difference and are therefore ambiguously localized


In Real Life

The more information in a sound, the easier it is to localize: noise containing many different frequencies is easier to pin down than a pure tone. This is why white noise, which contains all frequencies in equal proportions, is now added to the sirens of emergency vehicles,3 in place of the pure tones that have historically been used.

If you are wearing headphones, you don't get spectral cues from the pinnae, so you can't localize sounds on the up-down dimension. You also don't have the information to decide if a sound is coming from in front or from behind.

Even without headphones, our localization in the up-down dimension is pretty poor. This is why a sound from down and behind you (if you are on a balcony, for example) can sound right behind you. By default, we localize sounds to either the center left or center right of our ears; this broad conclusion is enough to tell us which way to turn our heads, despite the ambiguity that prevents more precise localization.

This ambiguity in hearing is the reason we cock our heads when listening. By taking multiple readings of a sound source, we overlap the ambiguities and build up a composite, interpolated set of evidence on where a sound might be coming from. (And if you watch a bird cocking its head, looking at the ground, it's listening to a worm and doing the same.4)
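
A turn of the head breaks the tie, because the two candidate positions swing in opposite directions relative to the ears. Continuing the simple path-difference sketch from above (repeated here so it runs on its own; the 10-degree turn is an invented example):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed
EAR_SEPARATION = 0.175  # m, assumed

def simple_itd_us(azimuth_deg):
    """Simplest path-difference model of the interaural time difference."""
    return EAR_SEPARATION / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg)) * 1e6

# Before the turn, a front source at 33 degrees and a back source at
# 147 degrees produce identical ITDs. Turning the head 10 degrees toward
# the source shifts both azimuths by -10 degrees, and the ITDs diverge.
for label, azimuth in [("front source", 33 - 10), ("back source", 147 - 10)]:
    print(f"after a 10-degree turn, {label}: {simple_itd_us(azimuth):.0f} us")
```

Two readings, one before and one after the turn, are enough to say whether the sound is in front or behind.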

Still, hearing provides a rough, quick idea of where a sound is coming from. It's enough to allow us to turn toward the sound or to process sounds differently depending on where they come from [Hack #54] .

End Notes

1. Wall and clapping idea thanks to Geoff Martin, from his web site at http://www.tonmeister.ca/main/textbook/psychoacoustics/06.html.

2. McAlpine, D., & Grothe, B. (2003). Sound localization and delay lines: Do mammals fit the model? Trends in Neurosciences, 26(7), 347-350.

3. "Siren Sounds: Do They Actually Contribute to Traffic Accidents?" (http://www.soundalert.com/pdfs/impact.pdf).

4. Montgomerie, R., & Weatherhead, P. J. (1997). How robins find worms. Animal Behaviour, 54, 137-143.

See Also

· Moore, B. C. J. (1997). An Introduction to the Psychology of Hearing. New York: Academic Press.


Hack 46. Discover Pitch

Why we perceive pitch at all is a story in itself. Pitch exists for sounds because our brains calculate it, and to do that, they must have a reason.

All sounds are vibrations in air. Different amplitudes create different sound intensities; different frequencies of vibration create different pitches. Natural sounds are usually made up of overlaid vibrations occurring at a number of different frequencies, and our experience of pitch is based on the overall pattern of those vibrations. The pitch isn't, however, always a quality that is directly available in the sound information. It has to be calculated. Our brains have to go to some effort to let us perceive pitch, and it isn't entirely obvious why we do this at all. One theory is that we hear pitch because it relates to object size: big things generally have a lower basic frequency than small things.

The pitch we perceive a sound as having is based on what is called the fundamental of the sound wave. This is the basic rate at which the vibration repeats. Normally you make a sound by making something vibrate (say, by hitting it). Depending on how and what you hit (this includes hitting your vocal cords with air), you will establish a main vibration (this is the fundamental), which will be accompanied by secondary vibrations at higher frequencies, called harmonics. These harmonics vibrate at frequencies that are integer multiples of the fundamental frequency (so for a fundamental at 4 Hz, a harmonic might be at 8 Hz or 12 Hz, but not 10 Hz). The pitch of the sound we hear is based on the frequency of the fundamental alone; it doesn't matter how many harmonics there are, the pitch stays the same. Amazingly, even if the fundamental frequency isn't actually part of the sound we hear, we still hear pitch based on what it should be. So for a sound that repeats four times a second but that is made up of component frequencies at 8 Hz, 12 Hz, and 16 Hz, the fundamental is 4 Hz, and it is upon this that we base our experience of pitch.

It's not certain how we do this, but one theory runs like this1: the physical construction of the basilar membrane in the inner ear means that it vibrates at the frequency of the fundamental as it responds to the higher component frequencies. The physical design of the cochlea as an object means that it can be used by the brain to reproduce, physically, the calculation needed to figure out the fundamental of a sound wave. That discovered fundamental is then available to be fed into the auditory processing system as information of equal status to any other sound wave.2 So it looks as if a little bit of neural processing has leaked out into the physical design of the ear, a great example of what some people have called extelligence: using the world outside the brain itself to do cognitive work.

In Action

An illusion called the missing fundamental demonstrates the construction of sounds in the ear. The fundamental and then the harmonics of a tone are successively removed, but the pitch of the tone sounds the same. Play the sound file at http://physics.mtsu.edu/~wmr/julianna.html, and you'll hear a series of bleeps. Even though the lower harmonics are vanishing, you don't hear the sound get higher. It remains at the same pitch.3
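
One way to see how a pitch can be implied by harmonics that don't include it: the combined waveform repeats at the greatest common divisor of the component frequencies. A minimal sketch using the 4 Hz example from above (real demonstrations use audible frequencies; the 400/600/800 Hz case is an invented illustration):

```python
from functools import reduce
from math import gcd

def implied_fundamental_hz(harmonics_hz):
    """The summed waveform repeats at the greatest common divisor of
    the component frequencies, even when that frequency is absent."""
    return reduce(gcd, harmonics_hz)

print(implied_fundamental_hz([8, 12, 16]))      # -> 4, the missing fundamental
print(implied_fundamental_hz([400, 600, 800]))  # -> 200, an audible-range example
```
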
How It Works

The way pitch is computed from tones with multiple harmonics can be used to construct an illusion in which the pitch of a tone appears to rise continuously, getting higher and higher without ever dropping.

You can listen to the continuously rising tone illusion and see a graphical illustration of how the sound is constructed at http://www.kyushu-id.ac.jp/~ynhome/ENG/Demo/2nd/05.html#20.

Each tone is made up of multiple tones at different harmonics, and the harmonics shift up in frequency with each successive tone. Because there are multiple harmonics, evenly spaced, they can keep shifting up, with the very highest disappearing as they reach the top of the frequency range covered by the tones and with new harmonics appearing at the lowest frequencies. Because each shift seems like a step up on a normal scale, your brain gives you the experience of a continuously rising tone. This is reinforced because the highest and lowest components of each tone are quieter, blurring the exact frequency boundaries of the whole sound.4

End Notes

1. There are other mechanisms, using neural processing, involved in reconstructing the fundamental from a harmonic. There are two main theories of what these are: one involves recognizing patterns in the activity level of the receptor cells over the length of the cochlea, and the other involves using the timing of the cells' responses.

2. McAlpine, D. (2004). Neural sensitivity in the inferior colliculus: Evidence for the role of cochlear distortions. Journal of Neurophysiology, 92(3), 1295-1311.

3. The missing fundamental illusion is also found in motion perception; see http://www.umaine.edu/visualperception/summer.

4. Yoshitaka Nakajima's "Demonstrations of Auditory Illusions and Tricks" (http://www.kyushu-id.ac.jp/~ynhome/ENG/Demo/illusions2nd.html) is a fantastic collection of auditory illusions, including examples and graphical explanations. A particular favorite is the "Melody of Silences" (http://133.5.113.80/~ynhome/ENG/Demo/2nd/03.html).


Hack 47. Keep Your Balance

The ear isn't just for hearing; it helps you keep your balance.

Audition isn't the only function of the inner ear. Each inner ear also contains three fluid-filled semicircular canals, one roughly horizontal and two vertical, that measure acceleration of the head. This, our vestibular system, is used to maintain our balance. Note that this system can detect only acceleration and deceleration, not steady motion. This explains why we can be fooled into thinking we're moving if a large part of our visual field moves in the same direction; for example, when we're sitting on a train and the train next to ours moves off, we get the impression that we've started moving. For slow-starting movement, the acceleration information is too weak to convince us we've moved.

It's a good thing the system detects only acceleration, not absolute motion; otherwise, we might be able to tell that we are moving at 70,000 mph through space around the sun. Or, worse, have direct experience of relativity, and then things would get really confusing. T.S.

In Action

You can try to use this blind spot for motion next time you're on a train. Close your eyes and focus on the side-to-side rocking of the train. Although you can feel the change in motion from side to side, without visual information (and if your train isn't slowing down or speeding up) you don't have any information except memory to tell you in which direction you are traveling. Imagine the world outside moving in a different way. See if you can hallucinate for a second that you are traveling very rapidly in the opposite direction. Obviously this works best with a smooth train, so readers in Japan will have more luck.

How It Works

Any change in our velocity causes the fluid in the canals of the vestibular system to move, bending the hair cells that line the surface of the canals (these hair cells work much the same as the hair cells that detect sound waves in the cochlea). Signals are then sent along the vestibular nerve into the brain, where they are used to adjust our balance and warn of changes in motion.

Dizziness can result from dysfunction of the vestibular system or from a disparity between visual information and the information from the vestibular system. So in motion sickness, you feel motion but see a constant visual world (the inside of the car or of the ship). In vertigo, you don't feel motion, but you see the visual world move a lot more than it should: because of parallax, a small movement of your head creates a large shift in the difference between your feet and what you see next to them. (Vertigo is more complex than just a mismatch between vestibular and visual detection of motion, but this is part of the story.)

This is why, if you think you might get dizzy, it helps to fix on a moving point if you are moving but your visual world is not (such as the horizon if you are on a ship). But if you are staying still and your visual world is moving, the best thing to do is not to look (such as during vertigo or during a motion sickness-inducing film).

I guess this means I'd have felt less nauseated after seeing The Blair Witch Project if I'd watched it from a vibrating seat. T.S.
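
Because the signal follows acceleration rather than velocity, steady motion is invisible to the system. A toy sketch of a train journey makes the point; the velocity profile is invented, and "vestibular drive" here just means the magnitude of acceleration.

```python
import numpy as np

DT = 0.1  # seconds per sample, chosen arbitrarily

# Invented velocity profile: accelerate to 30 m/s, cruise, then brake.
velocity = np.concatenate([
    np.linspace(0.0, 30.0, 50),   # speeding up
    np.full(100, 30.0),           # steady cruising
    np.linspace(30.0, 0.0, 50),   # braking
])

# The vestibular system responds (roughly) to acceleration, the rate of
# change of velocity, so steady cruising produces almost no signal.
acceleration = np.gradient(velocity, DT)

for label, segment in [("speed-up", acceleration[5:45]),
                       ("cruise", acceleration[55:145]),
                       ("braking", acceleration[155:195])]:
    print(f"{label:9s} mean |drive| = {np.abs(segment).mean():.2f} m/s^2")
```

The cruise segment reads near zero: with your eyes closed on a smooth train, this is all the motion information your inner ear has to offer.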