Audition dominates for timing

Vision doesn't always dominate. Watch Ladan Shams's "Sound-induced Illusory Flashing" movies at Caltech (http://neuro.caltech.edu/~lshams/demo.html; QuickTime).1 They show a black dot flashing very briefly on a white background. The only difference between the movie on the left and the movie on the right is the sound played along with the flash of the dot: with one you hear a single beep as the dot appears; with the other you hear two beeps.

On Ladan Shams's page, you can choose from a number of different pairs of movies, tuned for different computer speeds. Start with the pair at the top and try them all until you find the one with the strongest effect.

Notice how the sound affects what you see. Two beeps cause the dot not to flash but to appear to flicker. Our visual system isn't so sure it is seeing just one event, and the evidence from hearing is allowed to distort the visual impression that our brain delivers for conscious experience.

When the experiment was originally run, people were played up to four beeps with a single flash. For anything more than one beep, people consistently experienced more than one flash.

Aschersleben and Bertelson2 demonstrated that the same principle applied when people produced timed movements by tapping. People tapping in time with visual signals were distracted by mistimed sound signals, whereas people tapping in time with sound signals weren't as distracted by mistimed visual signals.

How It Works

This kind of dominance is really a bias. When the visual information about timing is ambiguous enough, it can be distorted in our experience by the auditory information. And vice versa: when auditory information about location is ambiguous enough, it is biased in the direction of the information provided by vision. Sometimes that distortion is enough to make it seem as if one sense completely dominates the other.

Information from the nondominant sense (vision for timing, audition for location) does influence what the other sense delivers up to consciousness, but not nearly as much. The exact circumstances of the audio-visual event affect the size of the bias too. For example, when judging location, the weighting you give to visual information is proportional to the brightness of the light and inversely proportional to the loudness of the sound.3 Nevertheless, the bias is always weighted toward using vision for location and audition for timing.
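The brightness/loudness trade-off can be made concrete with a toy cue-combination model, sketched below in Python. The visual weight rising with brightness and falling with loudness follows Radeau's result cited above; the linear form, the clamping, and the constant k are assumptions made for illustration, not values from the paper.

# Toy linear cue-combination model for audio-visual location.
# Assumption: vision's weight is proportional to brightness and
# inversely proportional to loudness (after Radeau, 1985); the exact
# functional form and k are illustrative, not fitted to data.

def perceived_location(visual_pos, auditory_pos, brightness, loudness, k=1.0):
    """Weighted average of the two single-sense location estimates.

    visual_pos, auditory_pos: locations reported by each sense (degrees).
    brightness, loudness: stimulus intensities in arbitrary units.
    k: scaling constant (assumed; it would be fitted to data in practice).
    """
    w_visual = k * brightness / loudness        # vision's share of the weight
    w_visual = min(max(w_visual, 0.0), 1.0)     # clamp to a valid proportion
    return w_visual * visual_pos + (1.0 - w_visual) * auditory_pos

# Bright flash, quiet beep: vision dominates, as in ventriloquism.
print(perceived_location(10.0, 0.0, brightness=0.9, loudness=1.0))  # 9.0
# Dim flash, loud beep: the sound pulls the perceived location back.
print(perceived_location(10.0, 0.0, brightness=0.3, loudness=1.5))  # 2.0

With a strong visual weight, the event is experienced almost exactly where the flash was; dim the flash and boost the beep, and the same flash seems to come from nearer the sound source.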

The weighting our brain gives to information from these two senses is built into their design, so you can't reverse the order of dominance by making sounds easier to localize or lights harder to locate. Even if you make the sound's location unambiguous, people watching are still going to prefer to experience what they see as where they see it, and they'll disregard your carefully localized sounds.

End Notes

1. Shams, L., Kamitani, Y., & Shimojo, S. (2000). What you see is what you hear. Nature, 408, 788.

2. Aschersleben, G., & Bertelson, P. (2003). Temporal ventriloquism: Crossmodal interaction on the time dimension: 2. Evidence from synchronization. International Journal of Psychophysiology, 50(1-2), 157-163.

3. Radeau, M. (1985). Signal intensity, task context, and auditory-visual interactions. Perception, 14, 571-577.

See Also

· Recanzone, G. H. (2003). Auditory influences on visual temporal rate perception. Journal of Neurophysiology, 89, 1078-1093.

· The advice in this hack, and other good tips for design, can be found in Reeves et al. (2004). Guidelines for multimodal user interface design. Communications of the ACM, Special Issue on Multimodal Interfaces, 47(1), 57-59. It is online at http://www.niceproject.com/publications/CACM04.pdf.

Hack 54. Don't Divide Attention Across Locations

Attention isn't separate for different senses. Where you place your attention in visual space affects what you hear in auditory space.

Attention exists as a central, spatially allocated resource. Where you direct attention is not independent across the senses: where you pay attention in space with one sense affects the other senses.1 If you want people to pay attention to information across two modalities (a modality is a sense mode, like vision or audition), they will find this easiest if the information comes from the same place in space. Alternatively, if you want people to ignore something, don't make it come from the same place as something they are attending to.

These are lessons drawn from work by Dr. Charles Spence of the Oxford University crossmodal research group (http://www.psych.ox.ac.uk/xmodal/default.htm). One experiment that everyone will be able to empathize with involves listening to speech while driving a car.2

In Action

Listening to a radio or mobile phone on a speaker from the back of a car makes it harder to spot things happening in front of you. Obviously, showing this in real life is difficult: it's a complex situation with lots of variables, and one of them is whether you crash your car, not the sort of data psychologists want to be responsible for creating. So Dr. Spence created the next best thing in his lab: an advanced driving simulator, in which he sat people and gave them the job of simultaneously negotiating the traffic and bends while repeating sets of words played over a speaker (a task called shadowing). The speakers were placed either on the dashboard in front or to the side. Drivers who listened to sounds coming from the sides made more errors in the shadowing task, drove slower, and took longer to decide what to do at junctions.

You can see a coping strategy in action if you sit with a driver. Notice how he's happy to talk while driving on easy and known roads, but falls quiet and pops the radio off when having to make difficult navigation decisions.

How It Works

This experiment, and any experience you may have had trying to drive with screaming kids in the backseat of a car, shows that attention is allocated in physical space, not just to particular things arbitrarily, and not independently across modalities. This is unsurprising, given that we know how interconnected cortical processing is [Hack #81] and that it is often organized in maps that use spatial coordinate frames [Hack #12]. The spatial constraints on attention may reflect the physical limits of modulating the activity in cortical processing structures that are themselves organized to mirror physical space.

In Real Life

Other studies of this kind, involving complex real-world tasks, have shown that people are actually very good at coordinating their mental resources. The experiments that motivated this one showed that attention is allocated in space and that dividing it in space, even across modalities, causes difficulties. But those experiments tested subjects who weren't given any choice about what they did; the setup took them to the limits of their attentional capacities to find where they broke down. Whether the same factors had an effect in a real-world task like driving was another question.
When people aren't at the limit of their abilities, they can switch between tasks rather than doing both at once, allocating attention dynamically and shifting it between tasks as the demands change. People momentarily stop talking when driving through nonroutine sections of road, like junctions, in order to free up attention, avoiding getting trapped in the equivalent of Spence's shadowing task. The driving experiment shows that, despite our multitasking abilities, the spatial demands of crossmodal attention do influence driving ability. The effect might be small, but when you're travelling at 80 mph toward something else that is travelling at 80 mph toward you, a small effect could mean a big difference.

Hacking the Hack

One of the early conclusions drawn from research into crossmodal attention3 was that it is possible to divide attention between modalities without clashing, so if you want users to simultaneously pay attention to two different streams of information, the streams should appear in two different modalities. That is, if a person needs to keep on top of two rapidly updating streams, you're better off making one operate in vision and one in sound rather than having both appear on a screen, for example. The results discussed in this hack suggest two important amendments to this rule of thumb (a sketch after the end notes illustrates the first):4

· Dividing attention across modalities (attending to both vision and sound, for example) is more efficient if the streams have a common spatial location.

· If one stream needs to be ignored, don't let it share a spatial location with an attended stream.

End Notes

1. Spence, C., & Driver, J. (eds.). (2004). Crossmodal Space and Crossmodal Attention. Oxford: Oxford University Press.

2. Spence, C., & Read, L. (2003). Speech shadowing while driving: On the difficulty of splitting attention between eye and ear. Psychological Science, 14(3), 251-256.

3. Wickens, C. D. (1980). The structure of attentional resources. In R. S. Nickerson (ed.), Attention and Performance, 8, 239-257. Hillsdale, NJ: Erlbaum.

4. More details about the best way to present two streams of important information to a person, one in vision and one in sound, are discussed in Spence & Driver, Crossmodal Space and Crossmodal Attention, 187-188.
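Here is a minimal sketch of the first amendment in practice: deriving the stereo pan of an auditory alert from the on-screen position of the visual element it accompanies, so that the two streams share a spatial location. The function names and the equal-power panning scheme are illustrative assumptions; a real UI toolkit or audio engine would supply its own API for playback.

import math

def pan_for_screen_x(x, screen_width):
    """Map a horizontal screen coordinate to a stereo pan in [-1, 1]:
    -1 is hard left, 0 is center, +1 is hard right."""
    return 2.0 * (x / screen_width) - 1.0

def stereo_gains(pan):
    """Equal-power panning: convert a pan value to (left, right) gains
    so perceived loudness stays constant as the sound moves."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# An alert tied to a widget at x=1600 on a 1920-pixel-wide display
# should sound from the right, where the user is already looking.
pan = pan_for_screen_x(1600, 1920)
left, right = stereo_gains(pan)
print(f"pan={pan:+.2f}, left gain={left:.2f}, right gain={right:.2f}")

The design choice follows directly from the two bullets above: co-locate streams you want attended together, and keep a stream that should be ignored panned well away from an attended one.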
