Figure 2-26. The Ouchi illusion: the central circle appears to float above the rest of the design

 

Here the central disk of vertical bars appears to move separately from the rest of the pattern, floating above the background of horizontal bars. You can increase the effect by jiggling the book.

Your fixational eye movements affect the two parts of the pattern in different ways. The dominant direction of the bars, either horizontal or vertical, means that only one component of the random movements stands out. For the "background" of horizontal bars, this means that the horizontal component of the movements is eliminated, while for the "foreground" disk the vertical component of the movements is eliminated. Because the fixational movements are random, the horizontal and vertical movements are independent. This means that the two parts of the pattern appear to move independently, and your visual system interprets this as meaning that there are two different objects, one in front of the other.
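This decorrelation argument can be made concrete with a small simulation. The sketch below is not from the book; the jitter model and its numbers are illustrative assumptions. It draws random fixational jitter and keeps only the component that is visible against each bar orientation: the vertical component for the horizontal-bar background, the horizontal component for the vertical-bar disk. The two visible motion signals come out essentially uncorrelated, which is the cue the visual system reads as two separately moving objects.

```python
import numpy as np

# Illustrative sketch: random fixational jitter, sampled as small 2D steps.
rng = np.random.default_rng(0)
n_steps = 10_000
jitter = rng.normal(0.0, 1.0, size=(n_steps, 2))  # columns: (horizontal, vertical)

# Against horizontal bars, horizontal motion is invisible: only the
# vertical component of the jitter produces a visible signal.
background_signal = jitter[:, 1]

# Against the vertical-bar disk, vertical motion is invisible: only the
# horizontal component is visible.
disk_signal = jitter[:, 0]

# The two visible signals are statistically independent, so their
# correlation is near zero -- they look like two separately moving objects.
r = np.corrcoef(background_signal, disk_signal)[0, 1]
print(f"correlation between visible motions: {r:+.3f}")  # ~0
```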

Peripheral drift

The rotating snakes illusion (Figure 2-25) uses a different kind of structure to co-opt these small random eye movements, one that relies on differential brightness in parts of the pattern (color isn't essential to the effect3). To understand how changes in the brightness of the pattern create an illusion of motion in the periphery, see Figure 2-27.

Figure 2-27. The peripheral drift illusion, in which the spokes appear to rotate in the corner of your eye4

 

In this simple pattern, the difference in the shading of the figure creates the impression of illusory movement. It makes use of the same principles as the rotating snakes, but it's easier to work out what's happening. Brighter things are processed faster in the visual system (due to the stronger response they provoke in neurons [Hack #11]), so where the spokes meet, as one fades out into white and meets the black edge of another, the white side of the edge is processed faster than the black edge. The difference in arrival times is interpreted as movement, but only in your peripheral vision, where your resolution is low enough to be fooled. The illusion of motion occurs only when the information first hits the eye, so you need to "reset" by blinking or quickly shifting your eyes. It works really well with two patterns next to each other, because your eye flicks between the two as the illusory motion in the periphery grabs your attention. Try viewing two copies of this illusion at the same time; open http://viperlib.york.ac.uk/Pimages/Lightness-Brightness/Shading/8cycles.DtoL.CW.jpg in two browser windows on opposite sides of your desktop.
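The claim that brighter things are processed faster can be given rough numbers using Piéron's law, the latency-intensity relationship behind [Hack #11]. The sketch below is only an illustration: the constants are invented, not fitted values from the book or the cited papers. The point is simply that the bright and dark sides of the same edge yield different latencies, and that onset asynchrony is the raw material for the illusory motion signal.

```python
# Illustrative sketch of Pieron's law: latency = t0 + k * intensity**(-beta).
# The constants here are made up for demonstration, not measured values.
def latency_ms(intensity, t0=180.0, k=60.0, beta=0.33):
    """Response latency (ms) falls as stimulus intensity rises."""
    return t0 + k * intensity ** (-beta)

bright_side = latency_ms(intensity=0.9)   # near-white side of the edge
dark_side = latency_ms(intensity=0.1)     # near-black side of the edge

print(f"bright side: {bright_side:.0f} ms, dark side: {dark_side:.0f} ms")
print(f"onset asynchrony: {dark_side - bright_side:.0f} ms")
```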

How It Works

You are now equipped to understand why Professor Kitaoka's rotating snakes illusion (Figure 2-25) works. Because the shape has lots of repeating parts, it is hard for your visual system to lock on to any part of the pattern to get a frame of reference. The shading of the different parts of the squares creates illusory motion that combines with motion from the small eye movements that are happening constantly. The effect is greatest in your peripheral vision, where your visual resolution is low enough to be susceptible to the illusory motion cue in the shading of the patterns. Your eyes are attracted by the illusory motion, so they flit around the picture, and the movement appears everywhere apart from where you are directly looking. The constant moving of your eyes results in a kind of reset, which triggers a new interpretation of the pattern and new illusory motions and prevents you from using consistency of position across time to figure out that the motion is illusory.



In Real Life

Professor Kitaoka's web page (http://www.ritsumei.ac.jp/~akitaoka/index-e.html) contains many more examples of this kind of anomalous motion, as well as his scientific papers exploring the mechanisms behind them.

We are constantly using the complex structure of the world to work out what is really moving and to discount movements of our eyes, heads, and bodies. These effects show just how artificial patterns have to be to fool our visual system. Patterns like this are extremely unlikely without human intervention.

Professor Kitaoka has spotted one example of anomalous motion similar to his rotating snakes illusion that may not have been intentional. The logo of the Society for Neuroscience, used online (http://web.sfn.org), appears to drift left and right in the corner of their web site! Now that you know what to look for, maybe you will see others yourself.

End Notes

1. Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2004). The role of fixational eye movements in visual perception. Nature Reviews Neuroscience, 5, 229-240.

2. Figure reprinted from: Ouchi, H. (1977). Japanese Optical and Geometrical Art: 746 Copyright-Free Designs. New York: Dover. See also http://mathworld.wolfram.com/OuchiIllusion.html.

3. Olveczky, B., Baccus, S., & Meister, M. (2003). Segregation of object and background motion in the retina. Nature, 423, 401-408.

4. Faubert, J., & Herbert, A. (1999). The peripheral drift illusion: A motion illusion in the visual periphery. Perception, 28, 617-622. Figure reprinted with permission from Pion Limited, London.


Hack 31. Minimize Imaginary Distances

If you imagine an inner space, the movements you make in it take up time according to how large they are. Reducing the imaginary distances involved makes manipulating mental objects easier and quicker.

Mental imagery requires the same brain regions that are used to represent real sensations. If you ask someone to imagine hearing the first lines of the song "Purple Haze" by Jimi Hendrix, the activity in her auditory cortex increases. If you ask someone to imagine what the inside of a teapot looks like, his visual cortex works harder. If you put a schizophrenic who is hearing voices into a brain scanner, then when she hears voices, the parts of the brain that represent language sounds really are active: she's not lying; she really is hearing voices. Any of us can hear voices or see imaginary objects at will; it's only when we lose the ability to suppress the imaginings that we think of it as a problem.

When we imagine objects and places, this imagining creates mental space that is constrained in many of the ways real space is constrained. Although you can imagine impossible movements, like your feet lifting up and your body rotating until your head floats inches above the floor, these movements take time to imagine, and the amount of time is affected by how large they are.

In Action

Is the left shape in Figure 2-28 the same as the right shape?

Figure 2-28. Is the left shape the same as the right shape?

How about the left shape in Figure 2-29: is it the same as the right shape?

Figure 2-29. Is the left shape the same as the right shape?

And is the left shape in Figure 2-30 the same as the one on the right?

Figure 2-30. Is the left shape the same as the right shape?

To answer these questions, you've had to mentally rotate one of each pair of shapes. The first one isn't too hard: the right shape is the same as the left but rotated 50°. The second pair is not the same; the right shape is the mirror inverse of the left, again rotated by 50°. The third pair is identical, but this time the right shape has been rotated by 150°. To match the right shape in the third example to the left shape, you have to mentally rotate 100° further than in the first two examples. It should have taken you extra seconds to do this.

If you'd like to try an online version, see the demonstration at http://www.uwm.edu/People/johnchay/mrp.htm (requires Shockwave). When we tried it, the long version didn't save our data (although it claimed it did), so don't get excited about being able to analyze your results; at the moment, you can use it only to get a feel for how the experiment works.

How It Works

These shapes are similar to the ones used by Roger Shepard and Jacqueline Metzler1 in their seminal experiments on mental rotation. They found that the time to make a decision about the shapes was linearly related to the angle of rotation. Other studies have shown that mental actions almost always take an amount of time that is linearly related to the amount of imaginary movement required. This shows that mental images are analog representations of the real thing; we don't just store them in our heads in some kind of abstract code. Also interesting is the fact that mental motions take a linearly increasing amount of time as mental distance increases; in the original experiments by Shepard and Metzler, it was one extra second for every extra 50°.
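That result is easy to restate as a formula: decision time grows linearly with the rotation angle. A tiny sketch follows, using the figure of roughly one extra second per 50° quoted above; the baseline value is an invented placeholder, not a measured number.

```python
# Linear mental-rotation model: RT = baseline + angle * seconds_per_degree.
# The ~1 s per 50 degrees slope comes from the text; the 1.0 s baseline
# is an assumed placeholder for the non-rotation parts of the decision.
def decision_time_s(angle_deg, baseline_s=1.0, s_per_deg=1.0 / 50.0):
    return baseline_s + angle_deg * s_per_deg

for angle in (50, 50, 150):  # the three example pairs above
    print(f"{angle:3d} degrees -> about {decision_time_s(angle):.1f} s")
# The 150-degree pair takes roughly 2 s longer than the 50-degree pairs.
```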
This relationship implies that the mental velocity of our movements is constant (unlike our actual movements, which tend to accelerate sharply at the beginning and decelerate sharply at the end, meaning that longer movements have higher peak velocities). Further studies of mental rotation2 showed that the mental image does indeed move through all the transitional points as it is rotated and that, at least in some experiments, rotating complex shapes didn't take any longer than rotating simple shapes.

Other experiments3 have also shown that moving your mind's eye over a mental space (such as an imagined map) takes time that is linearly related to the imagined distance. If you "zoom in" on a mental image, that takes time as well. So if you ask people to imagine an elephant next to a rabbit, they will take longer to answer a question about the color of the rabbit's eyes than about the color of the elephant's eyes. You can partially avoid this zooming-in time by getting people to imagine the thing really large to start with, asking them to start, say, by imagining a fly and then the rabbit next to it.

Recent neuroimaging research4 has shown that mentally rotating objects may involve different brain regions from mentally rotating your own body through space. Studies that compare the difficulty of the two have found that it is easier and faster to imagine yourself rotating around a display of objects than it is to imagine the objects rotating around their own centers.5 So if you are looking at a pair of scissors with the handle pointing away from you, it will be easier to imagine yourself rotating around the scissors in order to figure out whether they are lefthanded or righthanded scissors than to imagine the scissors rotating around so that the handle faces you. And easiest of all is probably to imagine just your own hand rotating to match the way the handle is facing.

All this evidence suggests that mental space exists in analog form in our minds. It's not just statements about the thing, but a map of the thing in your mind's eye. There is some evidence, however, that the copy in your mind's eye isn't an exact copy of the visual input, or at least that it can't be used in exactly the same way as visual input can be. Look at Figure 2-31, which shows an ambiguous figure that could be a duck or could be a rabbit. You'll see one of them immediately, and if you wait a few seconds, you'll spot the other one as well. You can't see both at once; you have to flip between them, and there will always be one you saw first (and which one you see first is the sort of thing you can affect by priming [Hack #81], exposing people to concepts that influence their later behavior).

Figure 2-31. You can see this picture as a duck or a rabbit, but if you'd seen only one interpretation at the time, could you see the other interpretation in your mind's eye?7

If you flash a figure up to people for just long enough for them to see it and make one interpretation (to see a duck or a rabbit, but not both), then they can't flip their mental image in their mind's eye to see the other interpretation. If they say they saw a duck and you ask them whether the duck could be a rabbit, they just think you're mad.6 Perceiving the ambiguity seems to require real visual input to operate on. Although you have the details of the image in your mind's eye, it seems you need to experience them anew, to refresh the visual information, to be able to make a reinterpretation of the ambiguous figure.
In Real Life

We use mental imagery to reason about objects before we move them or before we move around them. Map reading involves a whole load of mental rotation, as does fitting together things like models or flat-pack furniture. Assembly instructions that involve rotating the object will be harder to compute, all other things being equal. But if you can imagine the object staying in the same place with you rotating around it, you can partially compensate for this. The easier it is to use mental rotation, the less physical work we actually have to do and the more likely we are to get things right the first time.

End Notes

1. Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701-703.

2. Cooper, L. A., & Shepard, R. N. (1973). Chronometric studies of the rotation of mental images. In W. G. Chase (ed.), Visual Information Processing, 75-176. New York: Academic Press.

3. Kosslyn, S., Ball, T., & Reiser, B. (1978). Visual images preserve metric spatial information: Evidence from studies of image scanning. Journal of Experimental Psychology: Human Perception and Performance, 4, 47-60.

4. Parsons, L. M. (2003). Superior parietal cortices and varieties of mental rotation. Trends in Cognitive Sciences, 7(12), 515-517.

5. Wraga, M., Creem, S. H., & Proffitt, D. R. (2000). Updating displays after imagined object and viewer rotations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 151-168.

6. Chambers, D., & Reisberg, D. (1985). Can mental images be ambiguous? Journal of Experimental Psychology: Human Perception and Performance, 11(3), 317-328.

7. Fliegende Blätter (1892, No. 2465, p. 17). Munich: Braun & Schneider. Reprinted in: Jastrow, J. (1901). Fact & Fable in Psychology. London: Macmillan.

See Also

· Great short notes on mental imagery from Barnes & Noble (http://www.sparknotes.com/psychology/cognitive/perception/section1.html).

Hack 32. Explore Your Defense Hardware

We have special routines that detect things that loom and make us flinch in response.

Typically, the more important something is, the deeper in the brain you find it, the earlier in evolution it arose, and the quicker it can happen. Avoiding collisions is pretty important, as is closing your eyes or tensing if you can't avoid the collision. What's more, you need to do these things to a deadline. It's no use dodging after you've been hit. Given this, it's not surprising that we have some specialized neural mechanisms for detecting collisions and that they are plugged directly into motor systems for dodging and defensive behavior.

In Action

The startle reaction is pretty familiar to all of us: you blink, you flinch, maybe your arms or legs twitch as if beginning a motion to protect your vulnerable areas. We've all jumped at a loud noise or thrown up our arms as something expands toward us. It's automatic. I'm not going to suggest any try-it-at-home demonstrations for this hack. Everyone knows the effect, and I don't want y'all firing things at each other to see whether your defense reactions work.

How It Works

Humans can show a response to a collision-course stimulus within 80 ms.1 This is far too quick for any sophisticated processing. In fact, it's even too quick for any processing that combines information across both eyes. It's done, instead, using a classic hack: a way of getting good-enough 3D direction and speed information from crude 2D input.

It works like this: symmetrical expansion of darker-than-background areas triggers the startle response. "Darker-than-background" because this is a rough-and-ready way of deciding what to count as an object rather than just part of the background. "Symmetrical expansion" because this kind of change in visual input is characteristic of objects that are coming right at you. If it's not expanding, it's probably just moving, and if it's not expanding symmetrically, it's either changing shape or not moving on a collision course. These kinds of stimuli capture attention [Hack #37] and cause a startle response. Everything from reptiles to pigeons to human infants will blink and/or flinch their heads when they see this kind of input. You don't get the same effects with contracting patches, rather than expanding patches, or with light patches, rather than dark patches.2

Looming objects always provoke a reaction, even if they are predictable; we don't learn to ignore them as we learn to ignore other kinds of event.3 This is another sign that they fall in a class for which there is dedicated neural machinery, and the reason why is pretty obvious as well. A looming object is always potentially dangerous. Some things you just shouldn't get used to.

In pigeons, the cells that detect looming exist in the midbrain. They are very tightly tuned so that they respond only to objects that look as if they are going to collide; they don't respond to objects that are heading for a near miss, even if they are still within 5° of collision.4 These neurons fire at a consistent time before collision, regardless of the size and velocity of the object. This, and the fact that near misses don't trigger a response, shows that path and velocity information is extracted from the rate and shape of expansion.
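One standard way to turn nothing but image expansion into a collision deadline is the "tau" approximation: time-to-collision is roughly the object's current angular size divided by its rate of angular expansion. The sketch below is not the book's model of the pigeon neurons; it's a minimal, assumption-laden illustration (hypothetical object size, distance, and speed) of how rate of expansion alone, with no depth information, yields a usable time-to-collision estimate.

```python
import math

def time_to_collision_s(diameter_m, distance_m, speed_m_s, dt=0.01):
    """Estimate time-to-collision from angular expansion alone (tau ~ theta / dtheta/dt)."""
    theta_now = 2 * math.atan(diameter_m / (2 * distance_m))
    theta_next = 2 * math.atan(diameter_m / (2 * (distance_m - speed_m_s * dt)))
    dtheta_dt = (theta_next - theta_now) / dt
    return theta_now / dtheta_dt

# Hypothetical ball, 0.2 m across, 10 m away, approaching at 5 m/s:
# true time to impact is 2.0 s; the purely optical estimate comes out close.
print(f"tau estimate: {time_to_collision_s(0.2, 10.0, 5.0):.2f} s")
```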
Now, this kind of calculation can be done cortically, using the comparison of information from both eyes, but for high-speed, non-tiny objects at anything more than 2 m away, it isn't.5 You don't need to compare information from both eyes; the looming hack is quick and works well enough.

End Notes

1. Busettini, C., Masson, G. S., & Miles, F. A. (1997). Radial optic flow induces vergence eye movements with ultra-short latencies. Nature, 390(6659), 512-515.

2. Nanez, J. E. (1988). Perception of impending collision in 3- to 6-week-old human infants. Infant Behaviour and Development, 11, 447-463.

3. Caviness, J. A., Schiff, W., & Gibson, J. J. (1962). Persistent fear responses in rhesus monkeys to the optical stimulus of "looming." Science, 136, 982-983.

4. Wang, Y., & Frost, B. J. (1992). Time to collision is signalled by neurons in the nucleus rotundus of pigeons. Nature, 356, 236-238.

5. Rind, F. C., & Simmons, P. J. (1999). Seeing what is coming: Building collision-sensitive neurones. Trends in Neurosciences, 22, 215-220. (This reference contains some calculations showing exactly what size of approaching objects, at what distances, are suitable for processing using the looming system and what are suitable for processing by the stereo-vision system.)

Hack 33. Neural Noise Isn't a Bug; It's a Feature

Neural signals are innately noisy, which might just be a good thing.

Neural signals are always noisy: the timings of when they fire, or even whether they fire at all, are subject to random variation. We make generalizations at the psychological level, such as saying that the speed of response is related to intensity by a certain formula (Piéron's Law [Hack #11]). And we also say that cells in the visual cortex respond to different specific motions [Hack #25]. But both of these are true only on average. For any single cell, or any single test of reaction time, there is variation each time it is measured. Not all the cells in the motion-sensitive parts of the visual cortex will respond to motion, and those that do won't do it exactly the same each time we experience a particular movement.

In the real world, we take averages to make sense of noisy data, and somehow the brain must be doing this too. We know that the brain is pretty accurate, despite the noisiness of our neural signals. A prime mechanism for compensating for neural noise is the use of lots of neurons, so that the average response can be taken, canceling out the noise. But it may also be the case that noise has some useful functions in the nervous system. Noise could be a feature, rather than just an inconvenient bug.

In Action

To see how noise can be useful, visit Visual Perception of Stochastic Resonance (http://neurodyn.umsl.edu/~simon/sr.html; Java), designed by Enrico Simonotto,1 which includes a Java applet. A grayscale picture has noise added and the result filtered through a threshold. The process is repeated and the results played like a video. Compare the picture with various levels of noise included.

With a small amount of noise, you see some of the gross features of the picture (these are the parts with high light values, so they always cross the threshold, whatever the noise, and produce white pixels), but the details don't show up often enough for you to make them out. With lots of noise, most of the pixels of the picture are frequently active, and it's hard to make out any distinction between true parts of the picture and pixels randomly activated by noise. But with the right amount of noise, you can clearly see what the picture is, and all the details. The gross features are always there (white pixels), the fine features are there consistently enough (with time smoothing they look gray), and the pixels that are supposed to be black aren't activated enough to distract you.

How It Works

Having evolved to cope with noisy internal signals gives you a more robust system. The brain has developed to handle the odd anomalous data point, to account for random inputs thrown its way by the environment. We can make sense of the whole even if one of the parts doesn't entirely fit (you can see this in our ability to simultaneously process information [Hack #52], as well). "Happy Birthday" sung down a crackly phone line is still "Happy Birthday." Compare this with your precision-designed PC; the wrong instruction at the wrong time and the whole thing crashes. The ubiquity of noise in neural processing means your brain is more of a statistical machine than a mechanistic one.

That's just a view of noise as something to be worked around, however. There's another function that noise in neural systems might be performing: a phenomenon from control theory called stochastic resonance. This says that adding noise to a signal raises the maximum possible combined signal level.
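Here is a minimal numerical sketch of that idea (my own one-dimensional toy, not the applet's code): a sine wave that never crosses a detection threshold on its own is passed through the threshold with different amounts of added Gaussian noise. Detection, measured crudely as the correlation between the pattern of threshold crossings and the hidden signal, is poor with too little or too much noise and best somewhere in between. The signal amplitude, threshold, and noise levels are all arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 20 * np.pi, 20_000)
signal = 0.8 * np.sin(t)        # sub-threshold signal...
threshold = 1.0                 # ...never reaches the detection threshold alone

for noise_sd in (0.1, 0.4, 3.0):
    noisy = signal + rng.normal(0.0, noise_sd, size=t.shape)
    detected = (noisy > threshold).astype(float)   # 1 wherever the threshold is crossed
    # How well does the pattern of crossings track the hidden signal?
    r = np.corrcoef(detected, signal)[0, 1]
    print(f"noise sd {noise_sd:4.2f} -> correlation with signal {r:.2f}")
# Typical output: little correlation with tiny noise, a clear peak at a
# moderate noise level, and a weaker correlation again when noise dominates.
```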
Counterintuitively, then, adding the right amount of noise to a weak signal can raise it above the threshold for detection, making it easier to detect rather than harder. Figure 2-32 shows this in graphical form. The smooth curve is the varying signal, but it never quite reaches the activation threshold. Adding noise to the signal produces the jagged line that, although it's messy, still has the same average values and raises it over the threshold for detection at certain points.

Figure 2-32. Adding noise to a signal brings it above threshold, without changing the mean value of the signal

Just adding noise doesn't always improve things, of course: you might now have a problem with your detection threshold being crossed even though there is no signal. A situation in which stochastic resonance works best is one in which you have another dimension, such as time, across which you can compare signals. Since noise changes with time, you can make use of the frequency at which the detection threshold is crossed too. In Simonotto's applet, white pixels correspond to where the detection threshold has been crossed, and a flickering white pixel averages to gray over time. In this example, you are using time and space to constrain your judgment of whether you think a pixel has been correctly activated, and you're working in cooperation with the noise being added inside the applet, but this is exactly what your brain can do too.

End Note

1. Simonotto, E., Riani, M., Seife, C., Roberts, M., Twitty, J., & Moss, F. (1997). Visual perception of stochastic resonance. Physical Review Letters, 78(6), 1186-1189.

See Also

· An example of a practical application of stochastic resonance theory, in the form of a hearing aid: Morse, R. P., & Evans, E. F. (1996). Enhancement of vowel coding for cochlear implants by addition of noise. Nature Medicine, 2(8), 928-932.

Chapter 3. Attention

Section 3.1. Hacks 34-43
Hack 34. Detail and the Limits of Attention
Hack 35. Count Faster with Subitizing
Hack 36. Feel the Presence and Loss of Attention
Hack 37. Grab Attention
Hack 38. Don't Look Back!
Hack 39. Avoid Holes in Attention
Hack 40. Blind to Change
Hack 41. Make Things Invisible Simply by Concentrating (on Something Else)
Hack 42. The Brain Punishes Features that Cry Wolf
Hack 43. Improve Visual Attention Through Video Games

3.1. Hacks 34-43

It's a busy world out there, and we take in a lot of input, continuously. Raw sense data floods in through our eyes, ears, skin, and more, supplemented by memories and associations both simple and complex. This makes for quite a barrage of information; we simply haven't the ability to consider all of it at once. How, then, do we decide what to attend to and what else to ignore (at least for now)?

Attention is what it feels like to give more resources over to some perception or set of perceptions than to others. When we talk about attention here, we don't mean the kind of concentration you give to a difficult book or at school. It's the momentary extra importance you give to whatever's just caught your eye, so to speak. Look around the room briefly. What did you see? Whatever you recall seeing (a picture, a friend, the radio, a bird landing on the windowsill), you just allocated attention to it, however briefly.

Or perhaps attention isn't a way of allocating the brain's scarce processing resources. Perhaps the limiting factor isn't our computational capacity at all but, instead, a physical limit on action. As much as we can perceive simultaneously, we're able to act in only one way at any one time. Attention may be a way of throwing away information, of narrowing down all the possibilities, to leave us with a single conscious experience to respond to, instead of millions.

It's hard to come up with a precise definition of attention. Psychologist William James,1 in his 1890 The Principles of Psychology, wrote: "Everyone knows what attention is." Some would say that a more accurate and useful definition has yet to be found. That said, we can throw a little light on attention to see how it operates and feels.

The hacks in this chapter look at how you can voluntarily focus your visual attention [Hack #34], what it feels like when you do (and when you remove it again) [Hack #36], and what is capable of overriding your voluntary behavior and grabbing attention [Hack #37] automatically. We'll do a little counting [Hack #35] too. We'll also test the limits of shifting attention [Hack #38] and [Hack #39] and run across some situations in which attention lets you down [Hack #40] and [Hack #41]. Finally, we'll look at a way your visual attention capacity can be improved [Hack #43].

End Note

1. The Stanford Encyclopedia of Philosophy has a good biography of William James (http://plato.stanford.edu/entries/james).

Hack 34. Detail and the Limits of Attention

Focusing on detail is limited by both the construction of the eye and the attention systems of the brain.

What's the finest detail you can see? If you're looking at a computer screen from about 3 meters away, 2 pixels have to be separated by about a millimeter or more for them not to blur into one. That's the highest your eye's resolution goes. But making out detail in real life isn't just a matter of discerning the difference between 1 and 2 pixels. It's a matter of being able to focus on fine-grain detail among enormously crowded patterns, and that's more to do with the limits of the brain's visual processing than what the eye can do. What you're able to see and what you're able to look at aren't the same.

In Action

Figure 3-1 shows two sets of bars. One set of bars is within the resolution of attention, allowing you to make out details. The other obscures your ability to differentiate particularly well by crowding.1

Figure 3-1. One set of bars is within the resolution of attention (right), the other is too detailed (left)1

Hold this book up and fix your gaze on the cross in the middle of Figure 3-1. To notice the difference, you have to be able to move your focus around without moving your eyes; it does come naturally, but it can feel odd doing it deliberately for the first time. Be sure not to shift your eyes at all, and notice that you can count how many bars are on the righthand side easily. Practice moving your attention from bar to bar while keeping your eyes fixed on the cross in the center. It's easy to focus your attention on, for example, the middle bar in that set.

Now, again without removing your gaze from the cross, shift your attention to the bars on the lefthand side. You can easily tell that there are a number of bars there; the basic resolution of your eyes is more than good enough to tell them apart. But can you count them or selectively move your attention from the third to the fourth bar from the left? Most likely not; they're just too crowded.

How It Works

The difference between the two sets of bars is that one is within the resolution of visual selective attention because it's spread out, while the other is too crowded with detail. "Attention" in this context doesn't mean the sustained concentration you give (or don't give) the speaker at a lecture. Rather, it's the prioritization of some objects at the expense of others. Capacity for processing is limited in the brain, and attention is the mechanism to allocate it. Or, putting it another way, you make out more detail in objects that you're paying attention to than in those you aren't. Selective attention is being able to apply that processing to a particular individual object voluntarily.

While it feels as if we should be able to select anything we can see for closer inspection, the diagram with the bars shows that there's a limit on what can be picked out, and the limit is based on how fine the detail is. We can draw a parallel with the resolution of the eye. In the same way that the resolution of the eye is highest in the center [Hack #14] and decreases toward the periphery, it's easier for attention to select and focus on detail in the center of vision than it is further out. Figure 3-2 illustrates this limit.

Figure 3-2. Comparing a pattern within the resolution of attention (left) with one that is too fine (right)

On the left, all the dots are within the resolution required to select any one for individual attention.
Fix your gaze on the central cross, and you can move your attention to any dot in the pattern. Notice how the dots have to be larger the further out from the center they are in order to still be made out. Away from the center of your gaze, your ability to select a dot deteriorates, and so the pattern has to be much coarser. The pattern on the right shows what happens if the pattern isn't that much coarser. The dots are crowded together just a little too much for attention to cope, and if you keep your eyes on the central cross, you can't voluntarily focus your attention on any particular dot any more. (This is similar to the more crowded set of bars in the first diagram, Figure 3-1.)

Also notice, in Figure 3-2, left, that the dots are closer together at the bottom of the pattern than at the top. They're able to sit tighter because we're better at making out detail in the lower half of vision; the resolution of attention is higher there. Given that eye level and below is where all the action takes place, compared to the boring sky in the upper visual field, it makes sense to be optimized that way round. But precisely where this optimization arises in the structure of the brain, and how the limit on attentional resolution in general arises, isn't yet known.

Why is selective attention important, anyway? Attention is used to figure out what to look at next. In the dot pattern on the left, you can select a given dot before you move your eyes, so it's a fast process. But in the other diagram, on the right, moving your eyes to look directly at a dot involves more hunting. It's a hard pattern to examine, and that makes examination a slow process.

In Real Life

Consider attentional resolution when presenting someone with a screen full of information, like a spreadsheet. Does he have to examine each cell laboriously to find his way around it, like the crowded pattern on the right of Figure 3-2? Or, like the one on the left, is it broken up into large areas, perhaps using color and contrast to make it comprehensible away from the exact center of the gaze and to help the eyes move around?

End Note

1. Figures reprinted from: He, S., Cavanagh, P., & Intriligator, J. (1997). Attentional resolution. Trends in Cognitive Sciences, 1(3), 115-121. Copyright (1997), with permission from Elsevier.

Hack 35. Count Faster with Subitizing

You don't need counting if a group is small enough; subitizing will do the job, and it's almost instant.

The brain has two methods for counting, and only one is officially called counting. That's the regular way: when you look at a set of items and check them off, one by one. You have some system of remembering which have already been counted (you count from the top, perhaps) and then increment: 7, 8, 9... The other way is faster, up to five times faster per item. It's called subitizing. The catch: subitizing works only for really small numbers, up to about 4. But it's fast! So fast that until recently it was believed to be instantaneous.

In Action

See how many stars there are in the two sets in Figure 3-3. You can tell how many are in set A just by looking (there are three), whereas it takes a little longer to see there are six in set B.

Figure 3-3. The set of stars on the left can be subitized; the one on the right cannot

I know this feels obvious, that it takes longer to see how many stars there are in the larger set. After all, there are more of them. But that's exactly the point. If you can tell, and it feels like immediately, how many stars there are when there are three of them, why not when there are six? Why not when there are 100?

How It Works

Subitizing and counting do seem like different processes. If you look at studies of how long it takes for a person to look at some shapes on a screen and report how many there are, the time grows at 40-80 ms per item up to four, then at 250-350 ms per item beyond that.1 Or to put it another way, assessing the first four items takes only about a quarter of a second. It takes another second for every four items after that. That's a big jump.

The difference between the two is borne out by the subjective experience. Counting feels like a very deliberate act. You must direct your attention to each item. Your eyes move from star to star. Subitizing, on the other hand, feels preattentive. Your eyes don't need to move from star to star at all. There's no deliberate act required; you just know that there are four coffee mugs on the table or three people in the lobby, without having to check. You just look.

It's this that leads some researchers to believe that subitizing isn't an act in itself, but rather a side effect of visual processing. We know that we are able to keep track of a limited number of objects automatically and follow them as they move around and otherwise change. Like looking at shadows to figure out the shape of the environment [Hack #20], object tracking seems to be a built-in feature of visual processing: an almost involuntary ability to keep persistent files open for objects in vision [Hack #36]. The limit on how many objects can be tracked and how many items can be subitized is curiously similar. Perhaps, say some, the reason subitizing is so quick is that the items to be "counted" have already been tagged by the visual processing system, and so there's no further work required to figure out how many there are.2 In this view, counting is an entirely separate process that occurs only when the object-tracking capacity is reached. Counting then has to remember which items have been enumerated and proceed in a serial way from item to item to see how many there are.
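The timing figures quoted above are enough for a toy model of enumeration speed, which also previews why counting in small clusters pays off. The per-item rates come from the studies cited; treating them as fixed constants, and the particular cluster arithmetic, are simplifying assumptions for illustration only.

```python
# Toy enumeration-time model using the rates quoted above:
# roughly 60 ms per item for the first four (subitizing) and
# roughly 300 ms per item for every item after that (counting).
SUBITIZE_MS_PER_ITEM = 60
COUNT_MS_PER_ITEM = 300

def enumeration_ms(n_items):
    subitized = min(n_items, 4)
    counted = max(n_items - 4, 0)
    return subitized * SUBITIZE_MS_PER_ITEM + counted * COUNT_MS_PER_ITEM

# 30 items one by one versus 10 clusters of 3 (each cluster subitized at a
# glance, then the clusters themselves enumerated):
one_by_one = enumeration_ms(30)
clusters = 10 * enumeration_ms(3) + enumeration_ms(10)
print(f"one by one: ~{one_by_one / 1000:.1f} s, in clusters of 3: ~{clusters / 1000:.1f} s")
```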
Unfortunately, there's no confirmation of this side-effect view when looking at which parts of the brain are active while each of the two mechanisms is in use.1 Subitizing doesn't appear to use any separate part of the brain that isn't also used when counting is employed. That's not to say the viewpoint of fast subitizing as a side effect is incorrect, only that it's still a conjecture.

Regardless of the neural mechanism, this does give us a hint as to why it's quicker to count in small clusters rather than one by one. Say you have 30 items on the table. It's faster to mentally group them into clusters of 3 each (using the speedy subitizing method to cluster) and slowly count the 10 clusters than it is to use no subitizing and count every one of the 30 individually. And indeed, counting in clusters is what adults do.

In Real Life

You don't have to look far to see the real-life impact of the speed difference between sensing the quantity of items and having to count them. Some abaci have 10 beads on a row. These would be hard (and slow) to use if it weren't for the Russian design of coloring the two central beads.3 This visual differentiation divides a row into three groups with a top size of four beads, perfect for instantly subitizing with no need for actual counting. It's a little design assistance to work around a numerical limitation of the brain.

We also subitize crowds of opponents in fast-moving, first-person shooter video games to rapidly assess what we're up against (and back off if necessary). The importance of sizing up the opposition as fast as possible in these types of games has the nice side effect of training our subitizing routines [Hack #43].

End Notes

1. Piazza, M., Mechelli, A., Butterworth, B., & Price, C. J. (2002). Are subitizing and counting implemented as separate or functionally overlapping processes? NeuroImage, 15, 435-446.

2. Trick, L. M., & Pylyshyn, Z. W. (1994). Why are small and large numbers enumerated differently? A limited-capacity preattentive stage in vision. Psychological Review, 101(1), 80-102.

3. "The Material Culture of Mathematics in a Historical Perspective." University of Cambridge, Department of History and Philosophy of Science (http://www.hps.cam.ac.uk/readinglists/p5mcm-2.html; includes illustrations).

