Hack 55. Confuse Color Identification with Mixed Signals

When you're speaking, written words can distract you. If you're thinking nonlinguistically, they can't.

The Stroop Effect is a classic of experimental psychology. In fact, it's more than a classic, it's an industry. J. Ridley Stroop first did his famous experiment in 1935, and it's been replicated thousands of times since then. The task is this: you are shown some words and asked to name the ink color the words appear in. Unfortunately, the words themselves can be the names of colors. You are slower, and make more errors, when trying to name the ink color of a word that spells the name of a different color. This, in a nutshell, is the Stroop Effect. You can read the original paper online at http://psychclassics.yorku.ca/Stroop.

In Action

To try out the Stroop Effect yourself, use the interactive experiment available at http://faculty.washington.edu/chudler/java/ready.html (you don't need Java in your web browser to give this a go).1

Start the experiment by clicking the "Go to the first test" link; the first page will look like Figure 5-1, only (obviously) in color.

Figure 5-1. In the Stroop experiment, the color of the ink isn't necessarily the same as the color the word declares

 

As fast as you're able, read out loud the color of each word: not what it spells, but the actual color in which it appears. Then click the Finish button and note the time it tells you. Continue the experiment and do the same on the next screen. Compare the times.

The difference between the two tests is that whereas the ink colors and the words correspond on the first screen, on the second they conflict for each word. It takes you longer to name the colors on the second screen.
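If you'd rather time yourself offline, here's a rough terminal version of the same comparison. It's a minimal sketch, not the original experiment: it assumes a terminal that understands ANSI color codes, and you time yourself by naming the ink colors aloud and pressing Enter when you've finished each block.

import random
import time

# ANSI escape codes for a few ink colors (assumes an ANSI-capable terminal).
COLORS = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m", "yellow": "\033[33m"}
RESET = "\033[0m"

def run_block(congruent, n_words=12):
    """Show n_words color words; return how long it takes to name all the ink colors aloud."""
    words = []
    for _ in range(n_words):
        ink = random.choice(list(COLORS))
        if congruent:
            word = ink  # the word spells the ink color it's printed in
        else:
            word = random.choice([c for c in COLORS if c != ink])  # word and ink conflict
        words.append(COLORS[ink] + word.upper() + RESET)
    input("Press Enter to show the words; name each INK color aloud, "
          "then press Enter again when you've finished.")
    start = time.time()
    print("  ".join(words))
    input()
    return time.time() - start

if __name__ == "__main__":
    congruent_time = run_block(congruent=True)
    conflicting_time = run_block(congruent=False)
    print(f"Matching words: {congruent_time:.1f}s   Conflicting words: {conflicting_time:.1f}s")

The timing is coarse (it includes your reach for the Enter key), but over a dozen words the conflicting block should still come out noticeably slower.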

How It Works

Although you attempt to ignore the word itself, you are unable to do so and it still breaks through, affecting your performance. It slows your response to the actual ink color and can even make you give an incorrect answer. You can get this effect with most people nearly all of the time, which is one reason why psychologists love it.

The other reason it's a psychologist's favorite is that, although the task is simple, it involves many aspects of how we think, and the experiment has variations to explore these. At first glance, the explanation of the task seems simple: we process words automatically, and this process overrides the processing of color information. But this isn't entirely true, although that's the reason still taught in many classes.

Reading the word interferes only if two conditions are fulfilled. First, the level and focus of your attention have to be broad enough that the word can be unintentionally read. Second, the response you are trying to give must be a linguistic one. In this case, the required response is spoken, so it is indeed linguistic.

Avoiding reading is easier when the color to report is disentangled from the word. If you have to respond to only the color of the first letter of each word and the rest are black, the confusion is reduced. Ditto if the word and block of color are printed separately. In these cases, we're able to configure ourselves to respond to certain stimuli (the color of the ink) and ignore certain others (the word). It's only when we're not able to divide the two types of information that the Stroop Effect emerges.



It's probably this kind of selective concentration that renders otherwise bizarre events invisible, as with inattentional blindness [Hack #41], when attention on a basketball game results in a gorilla walking unseen across the court.

 

The second condition, that the response is linguistic, is really a statement about the compatibility between the stimulus and the response required to it. Converting a written word into its spoken form is easier than converting a visual color into its spoken form. Because of immense practice, word shapes are already linguistic items, whereas color has to be translated from the purely visual into a linguistic symbol (from the sensation of red on the eye to the word "red").

So the kind of response normally required in the Stroop Effect uses the same code (language) as the word part of the stimulus, not the color part. When we're asked to give a linguistic label to the color information, it's not too surprising that the response-compatible information from the word part of the stimulus distracts us.

But by changing the kind of response required, you can remove the distracting effect. You can demonstrate this by doing the Stroop Effect task from earlier, but instead of saying the color out loud, respond by pointing to a square of matching color on a printout. The interference effect disappears: you've stopped using a linguistic response code, and reading the words no longer acts as a disruption.

Taking this one step further, you can reintroduce the effect by changing the task to its opposite: try responding to what the written word says and attempting to ignore the ink color (still pointing to colors on the chart rather than reading out loud). Suddenly pointing is hard again when the written word and ink color don't match.2

You're now getting the reverse effect because your response is in a code that is different from the stimulus information you're trying to use (the word) and the same as the stimulus information you're trying to ignore (the color).

Take-home message: more or less mental effort can be required to respond to the same information, depending on how compatible the response is with the stimulus. If you don't want people to be distracted, don't make them translate from visual and spatial information into auditory and verbal information (or vice versa).

End Notes

1. This experiment is part of the much larger Neuroscience for Kids web site: http://faculty.washington.edu/chudler/neurok.html.

2. Durgin, F. H. (2000). The reverse Stroop Effect. Psychonomic Bulletin & Review, 7(1), 121-125.

See Also

· Two further papers may be of interest if you'd like to explore the Stroop Effect and the underlying brain regions responsible: Besner, D. (2001). The myth of ballistic processing: Evidence from Stroop's paradigm. Psychonomic Bulletin & Review, 8(2), 324-330. And: MacLeod, C. M., & MacDonald, P. A. (2000). Interdimensional interference in the Stroop Effect: Uncovering the cognitive and neural anatomy of attention. Trends in Cognitive Sciences, 4(10), 383-391.


Hack 56. Don't Go There

You're drawn to reach in the same direction as something you're reacting to, even if the direction is completely unimportant.

So much of what we do in everyday life is responding to something that we've seen or heard: choosing and clicking a button on a dialog box on a computer, or leaping to turn the heat off when a pan boils over. Unfortunately, we're not very good at reacting only to the relevant information. The form in which we receive it leaks over into our response. For instance, if you're reacting to something that appears on your left, it's faster to respond with your left hand, and it takes a little longer to respond with your right. And this is true even when location isn't important at all.

In general, the distracting effect of location on responses is called the Simon Effect,1 named after J. Richard Simon, who first published on it in 1968 and is now Professor Emeritus at the University of Iowa.2

The Simon Effect isn't the only example of the notionally irrelevant elements of a stimulus leaking into our response. Similar is the Stroop Effect [Hack #55], in which naming an ink color nets a slower response if the word spells out the name of a different color. And, although it's brought about by a different mechanism, brighter lights triggering better reaction times [Hack #11] is similar in that irrelevant stimulus information modifies your response (this one is because a stronger signal evokes a faster neural response).

In Action

A typical Simon task goes something like this: you fix your gaze at the center of a computer screen and at intervals a light flashes up, randomly on the left or the right; which side is unimportant. If it is a red light, your task is to hit a button on your left. If it is a green light, you are to hit a button on your right. How long it takes you is affected by which side the light appears on, even though you are supposed to be basing which button you press entirely on the color of the light. The light on the left causes quicker reactions to the red button and slower reactions to the green button (good if the light is red, bad if the light is green). Lights appearing on the right naturally have the opposite effect. Even though you're supposed to disregard the location entirely, it still interferes with your response. The reaction times being measured are usually a half-second or less for this sort of experiment, and the location confusion results in an extension of roughly 5%. (A rough keyboard version of this task is sketched at the end of this hack.)

It's difficult to tell what these reaction times mean without trying the experiment, but it is possible to feel, subjectively, the Simon Effect without equipment to measure reaction time. You need stimuli that can appear on the left or the right in equal measure. I popped outside for 10 minutes and sat at the edge of the road, looking across it, so traffic could come from either my left or my right. (Figure 5-2 shows the view from where I was sitting.) My task was to identify red and blue cars, attempting to ignore their direction of approach.

Figure 5-2. The view from where I sat watching for red and blue cars

In choosing this task, I made use of the fact that color discrimination is poor in peripheral vision [Hack #14]. By fixing my gaze at a position directly opposite me, over the road, and refusing to move my eyes or my head, I would be able to tell the color of each car only as it passed directly in front of me. (If I had chosen to discriminate black cars from white cars, no color information would have been required, so I would have been able to tell using my peripheral vision.) I wanted to do this so I wouldn't have much time to do my color task, but would be able to filter out moving objects that weren't cars (like people with strollers).

As a response, I tapped my right knee every time a red car passed and my left for blue ones, trying to respond as quickly as possible. After 10 minutes of slow but steady traffic, I could discern a slight bias in my responses. My right hand would sometimes twitch a little if cars approached from that direction, and vice versa.

Now, I wouldn't be happy claiming a feeling of a twitchy hand as any kind of confirmation of the Simon Effect. The concept of location in my experiment is a little blurred: cars that appear from the right are also in motion to the left, so which stimulus location should be interfering with my knee-tapping response? But even though I can't fully claim the entire effect, that a car on the right causes a twitching right hand, I can still claim the basic interference effect: although I'd been doing the experiment for 10 minutes, my responses were still getting mucked up somehow.

To test whether my lack of agility at responding was caused by the location of the cars conflicting with the location of my knees, I changed my output, speaking "red" or "blue" as a response instead. In theory, this should remove the impact of the Simon Effect (because I was taking away the left-or-right location component of my response), and I might feel a difference. If I felt a difference, that would be the Simon Effect, and then its lack, in action. And indeed, I did feel a difference. Using a spoken output, responding to the color of the cars was absolutely trivial, a very different experience from the knee tapping and instantly more fluid.

How It Works

For my traffic-watching exercise, the unimportant factor of the location of the colored cars was interfering with my tapping my left or right knee. Factoring out the location variable by speaking instead of knee tapping, effectively routing around the Simon Effect, made the whole task much easier.

Much like the Stroop Effect [Hack #55] (in which you involuntarily read a word rather than sticking to the task of identifying the color of the ink in which it is printed), the Simon Effect is a collision of different pieces of information. The difference between the two is that the conflict in the Stroop Effect is between two component parts of a stimulus (the color of the word and the word itself), while, in the Simon Effect, the conflict lies in the compatibility between stimulus and response. You're told to ignore the location of the stimulus, but just can't help knowing location is important because you're using it in making your response.

The key point here is that location information is almost always important, and so we're hardwired to use it when available. In real life, and especially before the advent of automation, you generally reach to the location of something you perceive in order to interact with it. If you perceive a light switch on your left, you reach to the left to switch off the lights, not to the right; that's the way the world works. Think of the Simon Effect not as location information leaking into our responses, but as the lack of a mechanism to specifically ignore location information. Such a mechanism has never really been needed.

In Real Life

Knowing that location information is carried along between stimulus and response is handy for any kind of interface design. My range has four burners, arranged in a square. But the controls for those burners are in a line. It's because of the Simon Effect that I have to consult the diagram next to the controls each and every time I use them, not yet having managed to memorize the pattern (which, after all, never changes, so it should be easy). When I have to respond to the pot boiling over at the top right, I have the top-right location coded in my brain. If the controls took advantage of that instead of conflicting, they'd be easier to use.

Dialog boxes on my computer (I run Mac OS X) are better aligned with keyboard shortcuts than my stove's controls with the burners. There are usually two controls: OK and Cancel. I can press Return as a shortcut for OK and Escape as a shortcut for Cancel. Fortunately, the right-left arrangement of the keys on my keyboard matches the right-left arrangement of the buttons in the dialog (Escape and Cancel on the left, Return and OK on the right). If they didn't match, there would be a small time cost every time I attempted to use the keyboard, and it'd be no quicker at all.

And a corollary: my response to the color of the cars in the traffic experiment was considerably easier when it was verbal rather than directional (tapping left or right). To make an interface more fluid, avoid situations in which the directions of stimulus and response clash. For technologies that are supposed to be transparent and intuitive, like my Mac (and my stove, come to that), small touches like this make all the difference.

End Notes

1. Simon, J. R. (1969). Reactions toward the source of stimulation. Journal of Experimental Psychology, 81, 174-176.

2. In fairness, we should mention that the Simon Effect was actually noted a hundred years before Simon published, by the pioneering Dutch experimental psychologist Franciscus Donders. Donders, Franciscus C. (1868). Over de snelheid van psychische processen. Onderzoekingen gedaan in het Physiologisch Laboratorium der Utrechtsche Hoogeschool, 1868-1869, Tweede reeks II: 92-120. Reprinted as Donders, Franciscus C. (1969). On the speed of mental processes. Acta Psychologica, 30, 412-431.

See Also

· You can read about the early history of measuring reaction times at "Mental Chronometry and Verbal Action: Some Historical Threads" (http://www.mpi.nl/world/persons/private/ardi/Rts.htm).
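Here's the rough keyboard version of the basic Simon task promised under In Action. It's a sketch under assumptions rather than a lab procedure: responses are typed (a letter followed by Enter) instead of made on dedicated buttons, so absolute reaction times will be inflated, but the congruent/incongruent comparison should still show through. The key mapping (z for red/left hand, m for green/right hand) is just an illustrative choice.

import random
import time

TRIALS = 20  # more trials give a steadier average

def run():
    results = {"congruent": [], "incongruent": []}
    input("RED = type 'z' (left hand) and press Enter. GREEN = type 'm' (right hand) "
          "and press Enter. Respond to the COLOR only. Press Enter to start.")
    for _ in range(TRIALS):
        color = random.choice(["red", "green"])
        side = random.choice(["left", "right"])
        correct_key = "z" if color == "red" else "m"
        response_side = "left" if correct_key == "z" else "right"
        # Put the color name at the left or right edge of the line.
        stimulus = color.upper() if side == "left" else " " * 40 + color.upper()
        time.sleep(random.uniform(0.5, 1.5))  # unpredictable onset, as in the real task
        start = time.time()
        key = input(stimulus + "\n> ").strip().lower()
        reaction_time = time.time() - start
        if key == correct_key:
            # Congruent trials: the stimulus appeared on the same side as the responding hand.
            kind = "congruent" if response_side == side else "incongruent"
            results[kind].append(reaction_time)
    for kind, times in results.items():
        if times:
            print(f"{kind}: mean {sum(times) / len(times):.3f}s over {len(times)} correct trials")

if __name__ == "__main__":
    run()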


Hack 57. Combine Modalities to Increase Intensity

Events that affect more than one sense feel more intense in both of them.

The vision and audition chapters (Chapter 2 and Chapter 4, respectively) of this book look at the senses individually, just as a lot of psychologists have over the years. But interesting things begin to happen when you look at the senses as they interact with one another.1 Multisensory information is the norm in the real world, after all. Tigers smell strong and rustle as they creep through the undergrowth toward you. Fire shines and crackles as it burns. Your child says your name as she shakes your shoulder to wake you up.

These examples all suggest that the most basic kind of interaction between two senses should be the enhanced response to an event that generates two kinds of stimulation rather than just one. Information from one sense alone is more likely to be coincidence; simultaneous information on two senses is a good clue that you have detected a real event.

In Action

We can see the interaction of information hitting two senses at once in all sorts of situations. People sound clearer when we can see their lips [Hack #59]. Movies feel more impressive when they have a sound track. If someone gets a tap on one hand as they simultaneously see two flashes of light, one on each side, the light on the same side as the hand tap will appear brighter.

Helge Gillmeister and Martin Eimer of Birkbeck College, University of London, have found that people experience sounds as louder if a small vibration is applied to their index finger at the same time.2 Although the vibration didn't convey any extra information, subjects rated sounds as up to twice as loud when they occurred at the same time as a finger vibration. The effect was biggest for quieter sounds.

How It Works

Recent research on such situations shows that the combination of information is wired into the early stages of sensory processing in the cortex. Areas of the cortex traditionally thought to respond to only a single sense (e.g., parts of the visual cortex) do actually respond to stimulation of the other senses too. This makes sense of the fact that many of these effects occur preconsciously, without any sense of effort or decision-making. They are preconscious because they are occurring in the parts of the brain responsible for initial representation and processing of sensation, another example (as in [Hack #15]) of our perception not being passive but being actively constructed by our brains in ways we aren't always aware of.

Macaluso et al.3 showed that the effect can work the other way round from the one discussed here: touch can enhance visual discrimination. They don't suggest that integration is happening in the visual cortex initially, but instead that parietal cortex areas responsible for multisensory integration send feedback signals down to visual areas, and it is this that allows enhanced visual sensitivity.

For enhancement to happen, the information has to be labeled as belonging to the same event, and this labeling is done primarily by the information arriving simultaneously. Individual neurons [Hack #9] are already set up to respond to timing information and frequently respond strongest to inputs from different sources arriving simultaneously. If information arrives at different times, it can suppress the activity of cells responsible for responding to inputs across senses (senses are called modalities, in the jargon).

So, what makes information from two modalities appear simultaneous? Obviously arriving at the exact same time is not possible; there must be a resolution of the senses in time below which two events appear to be simultaneous. Although light moves a million times faster than sound, sound is processed faster once it gets to the ear [Hack #44] than light is processed once it gets to the eye. The relative speed of processing of each sense, coupled with the speed at which light and sound travel, leads to a "horizon of simultaneity"4 at about 10 meters, where visual and auditory signals from the same source reach the cortex at the same time. (A rough version of this calculation is sketched at the end of this hack.)

Most events don't occur just on this 10-meter line, of course, so there must be some extra mechanisms at work in the brain to allow sound and light events to appear simultaneous. Previously, researchers had assumed that the calculation of simultaneity was approximate enough that time difference due to arrival time could be ignored (until you get to events very far away, like lightning that arrives before its thunder). But now it appears that our brains make a preconscious adjustment for how far away something is when calculating whether the sound and the light are arriving at the same time.5 Another mechanism that operates is simply to override the timing information that comes from vision with the timing information from audition [Hack #53].

End Notes

1. To start following up the research on crossmodal interactions, you could read Crossmodal Space and Crossmodal Attention by Charles Spence and Jon Driver. This is an edited book with contributions from many of the people at the forefront of the field. You can read more about the Oxford University crossmodal research group on its home page: http://www.psych.ox.ac.uk/xmodal/default.htm.

2. Gillmeister, H., & Eimer, M. (submitted). Multisensory integration in perception: Tactile enhancement of perceived loudness.

3. Macaluso, E., Frith, C. D., & Driver, J. (2000). Modulation of human visual cortex by crossmodal spatial attention. Science, 289, 1206-1208.

4. Pöppel, E. (1988). Mindworks: Time and Conscious Experience. New York: Harcourt Brace Jovanovich.

5. Sugita, Y., & Suzuki, Y. (2003). Audiovisual perception: Implicit estimation of sound-arrival time. Nature, 421, 911.
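The "horizon of simultaneity" mentioned under How It Works is just arithmetic once you pick a figure for how much faster audition is processed than vision. Here's a back-of-the-envelope sketch; the 30 ms processing advantage is an assumed round number used for illustration, not a value taken from the text.

# Rough arithmetic behind the "horizon of simultaneity".
SPEED_OF_SOUND = 343.0        # meters per second in air; light's travel time is negligible here
PROCESSING_ADVANTAGE = 0.030  # assumed: audition processed ~30 ms faster than vision

# Distance at which sound's extra travel time exactly cancels its processing head start,
# so sight and sound from the same event reach the cortex together.
horizon = SPEED_OF_SOUND * PROCESSING_ADVANTAGE
print(f"Horizon of simultaneity: about {horizon:.1f} meters")  # roughly 10 m

Closer than this, the sound "wins"; further away, the light does, which is why the brain needs the extra distance-compensation mechanisms described above.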


Hack 58. Watch Yourself to Feel More

Looking at your skin makes it more sensitive, even if you can't see what it is you're feeling. Look through a magnifying glass and it becomes even more sensitive.

The skin is the shortest-range interface we have with the world. It is the only sense that doesn't provide any information about distant objects. If you can feel something on your skin, it is next to you right now. Body parts exist as inward-facing objects (they provide touch information), but they also exist as external objects: we can feel them with other body parts, see them, and (if you're lucky) feel and see those of other people.

[Hack #64] and [Hack #93] explore how we use vision to update our internal model of our body parts. But the integration of the two senses goes deeper, so much so that looking at a body part enhances the sensitivity of that body part, even if you aren't getting any useful visual information to illuminate what's happening on your skin.

In Action

Kennett et al.1 tested how sensitive people were to touch on their forearms. In controlled conditions, people were asked to judge if they were feeling two tiny rods pressed against their skin or just one. The subjects made these judgments in three conditions. The first two are the most important, providing the basic comparison. Subjects were either in the dark or in the light and looking at their arm, but with a brief moment of darkness so they couldn't actually see their arm as the pins touched it. Subjects allowed to look at their arms were significantly more accurate, indicating that looking at the arm, even though it didn't provide any useful information, improved tactile sensitivity.

The third condition is the most interesting and shows exactly how pervasive the effect can be. Subjects were shown their forearm through a magnifying glass (still with darkness at the actual instant of the pinprick). In this condition, their sensitivity was nearly twice as precise as their sensitivity in the dark!

This is astounding for at least two reasons. First, it shows that visual attention can improve our sensitivity in another domain, in this case touch. There is no necessity for touch to interact like this with vision. The senses could be independent until far later in processing. Imagine if the double-click rate setting on your mouse changed depending on what was coming down your Internet connection; you'd think it was pretty odd. But for the brain this kind of interaction makes sense, because we control where we look and events often spark input to more than one of our senses at a time. The second reason this is astounding is that it shows how a piece of technology (the magnifying glass) can be used to adjust our neural processing at a very fundamental level.

How It Works

Touch information is gathered together in the parietal cortex (consult the crib notes in [Hack #7] if you want to know where that is), in an area called the primary somatosensory cortex. You'll find neurons here arranged into a map representing the surface of your body [Hack #12], and you'll find polysensory neurons. These respond in particular when visual and tactile input synchronize and suppress when the two inputs are discordant; it seems there's a network here that integrates information from both senses, either within the somatosensory map of the body or in a similar map nearby. This theory explains why brain damage to the parietal cortex can result in distortions of body image. Some patients with damaged parietal lobes will point to the doctor's elbow when asked to point to their own elbow, for example.

This hack and [Hack #64] show that short-term changes in our representation of our body are possible. Individual neurons in the cortex that respond to stimulation of the skin can be shown to change what area of skin they are responsible for very rapidly. If, for example, you anesthetize one finger so that it is no longer providing touch sensation to the cortical cells previously responsible for responding to sensation there, these cells will begin to respond to sensations on the other fingers.2 In the magnifying glass condition, the expanded resolution of vision appears to cause the resources devoted to tactile sensitivity of the skin to adjust, adding resolution to match the expanded resolution the magnifying glass has artificially given vision.

In Real Life

This experiment explains why in general we like to look at things as we do them with our hands or listen to them with our ears, like watching the band at a gig. We don't just want to see what's going on; looking actually enhances the other senses as well.

Perhaps this is also why first-person shooter games have hit upon showing an image of the player's hands on the display. Having hands where you can see them may actually remap your bodily representation to make the screen part of your personal, or near-personal, space, and hence give all the benefits of attention [Hack #54] and multimodal integration (such as the better sense discrimination shown in this hack) that you get there.

End Notes

1. Kennett, S., Taylor-Clarke, M., & Haggard, P. (2001). Noninformative vision improves the spatial resolution of touch in humans. Current Biology, 11, 1188-1191.

2. Calford, M. B., & Tweedale, R. (1991). Acute changes in cutaneous receptive fields in primary somatosensory cortex after digit denervation in adult flying fox. Journal of Neurophysiology, 65, 178-187.


Hack 59. Hear with Your Eyes: The McGurk Effect

Listen with your eyes closed and you'll hear one sound; listen and watch the speaker at the same time and you'll hear another.

If there were ever a way of showing that your senses combine to completely change your ultimate experience, it's the McGurk Effect. This classic illusion, invented by Harry McGurk (and originally published in 1976),1 makes you hear different sounds being spoken depending on whether or not you can see the speaker's lips. Knowing what's going to happen doesn't help; the effect is just as strong.

In Action

Watch Arnt Maas's McGurk Effect video (http://www.media.uio.no/personer/arntm/McGurk_english.html; QuickTime with sound). You can see a freeze frame of the video in Figure 5-3.

Figure 5-3. Arnt Maas's McGurk Effect video

When you play it with your eyes closed, the voice says "ba ba." Play the video again, and watch the mouth: the voice says "da da." Try to hear "ba ba" while you watch the lips move. It can't be done.

How It Works

The illusion itself can't happen in real life. McGurk made it by splicing the sound of someone saying "ba ba" over a video of him making a different sound, "ga ga." When you're not watching the video, you hear what's actually being spoken. But when you see the speaker too, the two bits of information clash. The position of the lips is key in telling what sound someone's making, especially for distinguishing between speech sounds (called phonemes) like "ba," "ga," "pa," and "da" (those which you make by popping air out).

Visual information is really important for listening to people speak. It's a cliché, but I know I can't understand people as well when I don't have my glasses on. M.W.

We use both visual and auditory information when figuring out what sound a person is making, and they usually reinforce each other, but when the two conflict, the brain has to find a resolution. In the world the brain's used to, objects don't usually look as if they're doing one thing but sound as if they're doing another. Since visually you're seeing "ga ga" and audition is hearing "ba ba," these are averaged out and you perceive "da da" instead, a sound that sits equally well with both information cues. In other situations, visual information will dominate completely and change a heard syllable to the one seen in the lip movements.2

Remarkably, you don't notice the confusion. Sensory information is combined before language processing is reached, and language processing tunes into only certain phonemes [Hack #49]. The decision as to what you hear is outside your voluntary control. The McGurk Effect shows integration of information across the senses at a completely preconscious level. You don't get to make any decisions about this; what you hear is affected by what goes in through your eyes. It's a good thing that in most circumstances the visual information you get matches what you need to hear.

End Notes

1. McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-747.

2. Fusion of the sound and sight information is possible only when you have experience with a suitable compromise phoneme. One of the interesting things about phonemes is that they are perceived as either one thing or the other, but not as in-between values. So although there exists a continuum of physical sounds in between "ba" and "da," all positions along this spectrum will be perceived as either "ba" or "da," not as in-between sounds (unlike, say, colors, which have continuous physical values that you can also perceive). This is called categorical perception.

See Also

· Hearing with Your Eyes (http://ccms.ntu.edu.tw/~karchung/Phonetics%20II%20page%20seventeen.htm; QuickTime) has a collection of McGurk Effect movies.


Hack 60. Pay Attention to Thrown Voices

Sounds from the same spatial location are harder to separate, but not if you use vision to fool your brain into "placing" one of the sounds somewhere else.

Sense information is mixed together in the brain and sorted by location [Hack #54], and we use this organization in choosing what to pay attention to (and therefore tune into). If you're listening to two different conversations simultaneously, it's pretty easy if they're taking place on either side of your head; you can voluntarily tune in to whichever one you want. But let's say those conversations were occurring in the same place, on the radio: it's suddenly much harder to make out just one.

Which is why we can talk over each other in a bar and still understand what's being said, but not on the radio. On the radio, we don't have any other information to disambiguate who says what, and the sounds get confused with each other. T.S.

Hang on...how do we decide on the spatial location of a sense like hearing? For sound alone, we use clues implicit in what we hear, but if we can see where the sound originates, this visual information dominates [Hack #53]. Even if it's incorrect.

In Action

Jon Driver from University College London1 took advantage of our experience with syncing language sounds with lip movements to do a little hacking. He showed people a television screen showing a person talking, but instead of the speech coming from the television, it was played through a separate amplifier and combined with a distracting, and completely separate, voice speaking. The television screen was alternately right next to the amplifier or some distance away. The subject was asked to repeat the words corresponding to the talking head on the television. If they watched the talking head on the screen near the amplifier, they made more errors than if they watched the talking head on the screen kept distant from the sound. Even though both audio streams were heard from the single amplifier in the two cases, moving the video image considerably changed the listener's ability to tune into one voice.

This experiment is a prime candidate for trying at home. An easy way would be with a laptop hooked up to portable speakers and a radio. Have the laptop playing a video with lots of speech where you can see lip movements. A news broadcast, full of talking heads, is ideal. Now put the radio, tuned into a talk station, and the laptop's speakers in the same location. That's the single amplifier in Driver's experiment. The two different cases in the experiment correspond to your laptop being right next to the speakers or some feet away. You should find that you understand what the talking heads on the video are saying more easily when the laptop is further away. Give it a go.

How It Works

It's easier to understand what's going on here if we think about it as two separate setups. Let's call them "hard," for the case in which you're looking at the television right by the amplifier, and "easy," for when you're looking at the screen put a little further away.

In the hard case, there's a video of a talking head on the television screen and two different voices, all coming from the same location. The reason it's hard is because it's easier to tune out of one information stream and into another if they're in different locations (which is what [Hack #54] is all about). The fact there's a video of a talking head showing in this case isn't really important.

The easy setup has one audio stream tucked off to the side somewhere, while a talking head and its corresponding audio play on the television. It's plain to see that tuning into the audio on the television is a fairly simple task; I do it whenever I watch TV while ignoring the noise of people talking in the other room.

But hang on, you say. In Driver's experiment, the easy condition didn't correspond to having one audio stream neatly out of the way and the one you're listening to aligned with the television screen. Both audio streams were coming from the same place, from the amplifier, right? Yes, right, but also no. Strictly speaking, both audio streams do still come from the same place, but remember that we're not very good at telling where sounds come from. We're so poor at it, we prefer to use what we see to figure out the origin of sounds instead [Hack #53]. When you look at the screen, the lip movements of the talking head are so synchronized with one of the audio streams that your brain convinces itself that the audio stream must be coming from the position of the screen too.

It's whether the video is in the same place as the amplifier that counts in this experiment. When the screen is in a different place from the amplifier, your brain makes a mistake and mislocates one of the audio streams, so the audio streams are divided and you can tune in one and out the other. Never mind that the reason the conversations can be tuned into separately is because of a localization mistake; it still works. It doesn't matter that this localization was an illusion; the illusion could still be used by the brain to separate the information before processing it. All our impressions are a construction, so an objectively wrong construction can have as much validity in the brain as an objectively correct construction.

End Note

1. Driver, J. (1996). Enhancement of selective listening by illusory mislocation of speech sounds due to lip-reading. Nature, 381, 66-68.


Hack 61. Talk to Yourself

Language isn't just for talking to other people; it may play a vital role in helping your brain combine information from different modules.

Language might be an astoundingly efficient way of getting information into your head from the outside [Hack #49], but that's not its only job. It also helps you think. Far from being a sign of madness, talking to yourself is something at the essence of being human.

Rather than dwell on the evolution of language and its role in rewiring the brain into its modern form,1 let's look at one way language may be used by our brains to do cognitive work. Specifically we're talking about the ability of language to combine information in ordered structures; in a word, syntax. Peter Carruthers, at the University of Maryland,2 has proposed that language syntax is used to combine, simultaneously, information from different cognitive modules. By "modules," he means specialized processes into which we have no insight,3 such as color perception or instant number judgments [Hack #35]. You don't know how you know that something is red or that there are two coffee cups; you just know. Without language syntax, the claim is, we can't combine this information.

The theory seems pretty bold, or maybe even wrong, but we'll go through the evidence Carruthers uses and the details of what exactly he means, and you can make up your own mind. If he's right, the implications are profound, and it clarifies exactly how deeply language is entwined with thought. At the very least, we hope to convince you that something interesting is going on in these experiments.

In Action

The experiment described here was done in the lab of Elizabeth Spelke.4 You could potentially do it in your own home, but be prepared to build some large props and to get dizzy.

Imagine a room like the one in Figure 5-4. The room is made up of four curtains, used to create four walls in a rectangle, defined by two types of information: geometric (two short walls and two long walls) and color information (one red wall).

Figure 5-4. Setup for Spelke's experiments: a rectangular room with one colored wall

Now, think about the corners. If you are using only geometric information, pairs of corners are identical. There are two corners with a short wall on the left and a long wall on the right and two corners the other way around. If you are using only color information, there are also two pairs of identical corners: corners next to a red wall and corners not next to a red wall. Using just one kind of information, geometry or color, lets you identify corners with only 50% accuracy. But using both kinds of information in combination lets you identify any of the four corners with 100% accuracy, because although both kinds of information are ambiguous, they are not ambiguous in the same way. (A tiny enumeration of the corners at the end of this hack makes this concrete.)

So, here's a test to see if people can use both kinds of information in combination.5 Show a person something he'd like, like some food, and let him see you hide it behind the curtains in one corner of the room. Now disorient him by spinning him around and ask him to find the food. If he can combine the geometric and the color information, he'll have no problem finding the food; he'll be able to tell unambiguously which corner it was hidden in. If he doesn't combine information across modules, he will get it right on his first guess only 50% of the time, and the other 50% of the time he'll need a second guess to find the food.

Where does language come into it? Well, language seems to define the kinds of subjects who can do this task at better than 50% accuracy. Rats can't do it. Children who don't have language yet can't do it. Postlinguistic children and adults can do it. Convinced? Here's the rub: if you tie up an adult's language ability, her performance drops to close to 50%.

This is what Linda Hermer-Vazquez, Elizabeth Spelke, and Alla Katsnelson did.6 They got subjects to do the experiment, but all the time they were doing it, they were asked to repeat the text of newspaper articles that were played to them over loudspeakers. This "verbal shadowing task" completely engaged their language ability, removing their inner monologue. The same subjects could orient themselves and find the correct corner fine when they weren't doing the task. They could do it when they were doing an equivalently difficult task that didn't tie up their language ability (copying a sequence of rhythms by clapping). But they couldn't do it with their language resources engaged in something else. There's something special about language that is essential for reorienting yourself using both kinds of information available in the room.

How It Works

Peter Carruthers thinks that you get this effect because language is essential for conjoining information from different modules. Specifically he thinks that it is needed at the interface between beliefs, desires, and planning. Combining across modalities is possible without language for simple actions (see the other crossmodal hacks [Hack #57] through [Hack #59] in this book for examples), but there's something about planning, and that includes reorientation, that requires language.

This would explain why people sometimes begin to talk to themselves (to instruct themselves out loud) during especially difficult tasks. Children use self-instruction as a normal part of their development to help them carry out things they find difficult.7 Telling them to keep quiet is unfair and probably makes it harder for them to finish what they are doing.

If Carruthers is right, it means two things. First, if you are asking people to engage in goal-oriented reasoning, particularly if it uses information of different sorts, you shouldn't ask them to do something else that is verbal, either listening or speaking.

I've just realized that this could be another part of the reason [Hack #54] people can drive with the radio on but need to turn it off as soon as they don't know where they are going and need to think about which direction to take. It also explains why you should keep quiet when the driver is trying to figure out where to go next. T.S.

Second, if you do want to get people to do complex multisequence tasks, they might find it easier if the tasks can be done using only one kind of information, so that language isn't required to combine across modules.

End Notes

1. Although if you do want to dwell on the role of language in brain evolution (and vice versa), you should start by reading Terrence Deacon's fantastic The Symbolic Species: The Co-Evolution of Language and the Brain. New York: W. W. Norton & Company (1998).

2. The article that contains this theory was published by Peter Carruthers in Behavioral and Brain Sciences. It, and the response to comments on it, are at http://www.philosophy.umd.edu/people/faculty/pcarruthers/Cognitive-language.htm and http://www.philosophy.umd.edu/people/faculty/pcarruthers/BBS-reply.htm.

3. OK, by "modules," he means a lot more than that, but that's the basic idea. Read Jerry Fodor's Modularity of Mind (Cambridge, MA: MIT Press, 1983) for the original articulation of this concept. The importance of modularity is also emphasized by evolutionary psychologists, such as Steven Pinker.

4. Much of the work Peter Carruthers bases his theory on was done at the lab of Elizabeth Spelke (http://www.wjh.harvard.edu/~lds).

5. Strictly, you don't have to use both kinds of information in combination at the same time to pass this test; you could use the geometric information and then use the color information, but there is other good evidence that the subjects of the experiments described here (rats, children, and adults) don't do this.

6. Hermer-Vazquez, L., Spelke, E. S., & Katsnelson, A. S. (1999). Sources of flexibility in human cognition: Dual-task studies of space and language. Cognitive Psychology, 39(1), 3-36.

7. Berk, L. E. (November 1994). Why children talk to themselves. Scientific American, 78-83 (http://www.abacon.com/berk/ica/research.html).
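To make the 50%-versus-100% claim from the In Action section concrete, here's the tiny enumeration of the four corners referred to above. The corner labels and cue descriptions are illustrative only; the point is that each cue alone leaves two candidate corners, while the two cues together pick out exactly one.

# Each corner is described by two cues: geometry (which side the short wall is on,
# as you face the corner) and color (whether the red wall is adjacent).
corners = {
    "A": {"geometry": "short wall on left", "color": "red wall adjacent"},
    "B": {"geometry": "short wall on right", "color": "red wall adjacent"},
    "C": {"geometry": "short wall on left", "color": "no red wall"},
    "D": {"geometry": "short wall on right", "color": "no red wall"},
}

def candidates(target, cues):
    """Return the corners indistinguishable from the target using only the given cues."""
    return [name for name, features in corners.items()
            if all(features[cue] == corners[target][cue] for cue in cues)]

hidden_in = "A"  # suppose the food is hidden in corner A
print(candidates(hidden_in, ["geometry"]))           # ['A', 'C']: geometry alone is ambiguous
print(candidates(hidden_in, ["color"]))              # ['A', 'B']: color alone is ambiguous
print(candidates(hidden_in, ["geometry", "color"]))  # ['A']: both cues together disambiguate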


Chapter 6. Moving

Section 6.1. Hacks 62-69

Hack 62. The Broken Escalator Phenomenon: When Autopilot Takes Over
Hack 63. Keep Hold of Yourself
Hack 64. Mold Your Body Schema
Hack 65. Why Can't You Tickle Yourself?
Hack 66. Trick Half Your Mind
Hack 67. Objects Ask to Be Used
Hack 68. Test Your Handedness
Hack 69. Use Your Right Brain, and Your Left, Too


6.1. Hacks 62-69

The story of the brain is a story of embodiment, of how much the brain takes for granted the world we're in and the body that carries it about. For instance, we assume a certain level of stability in the world. We make assumptions about how our body is able to move within the environment, and if the environment has changed [Hack #62], we get confused.

As we assume stability in the world, so too do we assume stability from our body. Why should the brain bother remembering the shape of our own body when it's simply there to consult? But when our body's shape doesn't remain stable, the brain can get confused. You start by getting your fingers mixed up when you cross your hands [Hack #63]; you end up convincing your brain that you're receiving touch sensations from the nearby table [Hack #64].

This is also a story of how we interact with the world. Our brains continually assess and anticipate the movements we need to grasp objects, judging correctly even when our eyes are fooled [Hack #66]. We're built for activity, our brains perceiving the uses of an object, its affordances [Hack #67], as soon as we look at it; as soon as we see something, we ready ourselves to use it.

We'll finish on what we use for manipulation: our hands. What makes us right- or left-handed [Hack #68]? And, while we're on the topic, what does all that left-brain, right-brain stuff really mean [Hack #69]?


Hack 62. The Broken Escalator Phenomenon: When Autopilot Takes Over

Your conscious experience of the world and control over your body both feel instantaneous, but they're not. Lengthy delays in sensory feedback and in the commands that are sent to your muscles mean that what you see now happened a few moments ago and what you're doing now you planned back then. To get around the problem caused by these delays in neural transmission, your brain is active and constructive in its interactions with the outside world, endlessly anticipating what's going to happen next and planning movements to respond appropriately. Most of the time this works well, but sometimes your brain can anticipate inappropriately, and the mismatch between what your brain thought was going to happen and what it actually encounters can lead to some strange sensations.

In Action

One such sensation can be felt when you walk onto a broken escalator. You know it's broken, but your brain's autopilot takes over regardless, inappropriately adjusting your posture and gait as if the escalator were moving. This has been dubbed the broken escalator phenomenon.1 Normally, the sensory consequences of these postural adjustments are canceled out by the escalator's motion, but when it's broken, they lead to some self-induced sensations that your brain simply wasn't expecting. Your brain normally cancels out the sensory consequences of its own actions [Hack #65], so it feels really weird when that doesn't happen.

To try it out yourself, the best place to look is somewhere like the London Underground (where you're sure to find plenty of broken escalators) or your favorite run-down mall. You need an escalator that is broken and not moving but that you're still allowed to walk up. You could also use the moving walkways they have at airports; again, you need one that's stationary but that you're still permitted to walk onto. Now, try not to think about it too much and just go ahead and walk on up the escalator. You should find that you experience an odd sensation as you take your first step or two onto the escalator. People often report feeling as though they've been "sucked" onto the escalator. You might even lose your balance for a moment. If you keep trying it, the effect usually diminishes quite quickly.

How It Works

Unless we've lived our lives out in the wilderness, most of us will have encountered moving escalators or walkways at least a few times. And when we've done so, our brain has learned to adapt to the loss of balance caused by the escalator's motion. It's done this with little conscious effort on our part, automatically saving us from falling over. So when we step onto an escalator or moving walkway now, we barely notice the transition, and continue fluidly on our way. The thing is, when the escalator is broken, our brain adjusts our balance and posture anyway, and it seems we can't stop it from doing so.

Until recently, evidence for this phenomenon was based only on urban anecdotes. But now the phenomenon has actually been investigated in the laboratory using a computer-controlled moving walkway.1,2 Special devices attached to the bodies and legs of 14 volunteers recorded their posture and muscle activity. Each volunteer then walked 20 times from a fixed platform onto the moving walkway. After that, the walkway was switched off, the volunteers were told it would no longer move, and they then walked from the platform onto the stationary walkway 10 times.

The first time the subjects stepped onto the moving walkway, they lost their balance and grasped the handrail. But over the next few attempts, they learned to anticipate the unbalancing effect of the walkway by speeding up their stride and leaning their body forward. Then, crucially, when the volunteers first walked onto the walkway after it was switched off, they continued to walk at the increased speed and also continued to sway the trunk of their body forward. They performed these inappropriate adjustments even though they could see the walkway was no longer moving and even though they had been told it would no longer move. However, this happened only once. Their brain had apparently realized the mistake, and the next time they walked onto the stationary walkway they didn't perform these inappropriate adjustments. Consistent with anecdotal evidence for the broken escalator phenomenon, most of the volunteers expressed spontaneous surprise at the sensations they experienced when they first stepped onto the stationary walkway.

In Real Life

There are obviously differences between the lab experiment and the real-life phenomenon. Our brains have learned to cope with escalators over years of experience, whereas the experimental volunteers adapted to the lab walkway in just a few minutes. But what the real-life phenomenon and lab experiment both represent is an example of dissociation between our conscious knowledge and our brain's control of our actions. The volunteers knew the walkway was motionless, but because it had been moving previously, the brain put anticipatory adjustments in place anyway to prevent loss of balance.

Usually these kinds of dissociations work the other way around. Often our conscious perception can be tricked by sensory illusions, but the action systems of our brain are not fooled and act appropriately. For example, visual illusions of size can lead us to perceptually misjudge the size of an object, yet our fingertip grasp will be appropriate to the object's true size. The motor system gets it right when our conscious perception is fooled by the illusory size (see [Hack #66] to see this in action).

These observations undermine our sense of a unified self: it seems our consciousness and the movement control parts of our brain can have two different takes on the world at the same time. This happens because, in our fast-paced world of infinite information and possibility, our brain must prioritize both what sensory information reaches consciousness and what aspects of movement our consciousness controls. Imagine how sluggish you would be if you had to think in detail about every movement you made. Indeed, most of the time autopilot improves performance; think of how fluent you've become at the boring drive home from work, or the benefits of touch-typing. It's just that, in the case of the broken escalator, your brain should really have handed the reins back to "you."

End Notes

1. Reynolds, R. F., & Bronstein, A. M. (2003). The broken escalator phenomenon: Aftereffect of walking onto a moving platform. Experimental Brain Research, 151, 301-308.

2. Reynolds, R. F., & Bronstein, A. M. (2004). The moving platform aftereffect: Limited generalization of a locomotor adaptation. Journal of Neurophysiology, 91, 92-100.

Christian Jarrett
