
Processing with Built-in Assumptions

The wiring diagram for all the subsequent motion detection and object recognition modules is enormously complex. After basic feature extraction, there's still number judgment, following moving objects, and spotting biological motion [Hack #77] to be done. At a certain point, the defining characteristic of the cortex as a whole must come into play, and visual information is processed enough to be associated with memory, language, and reading emotions. This is where it blends into the higher-order functions of the whole brain.

In the hacks that follow, we'll explore the effects of early and late visual processing. A common thread through these effects will be the assumptions the visual system has made about the visual world to expedite its computation, and by looking at the quirks of vision, we can draw some of these out. These include assumptions like the visual world remaining relatively stable from second to second (so we don't notice if it doesn't [Hack #40]) and supposing that dark areas are shadows, which is the quirk that makeup takes advantage of [Hack #20].

In a sense, the fact that we can observe these assumptions suggests that the visual system assumes as much about the external environment as about its own modules. The visual system's expectation that the motion module will report motion correctly (and therefore our confusion when the module doesn't identify motion correctly [Hack #25]) is much the same as the visual system's expectation that a shadow is reporting 3D shape correctly. While we could think of the visual system as entirely in the brain, really we should include the eyes, the head, the body, and the environment as components in this big, messy, densely connected human visual processing system, all of which report their conclusions into the mix.

And somehow, in all of this, the visual perception we know and love springs into existence. There doesn't seem to be a single place where all this visual processing is reassembled, no internal television screen that we watch (and even if there were, who would watch it?). It's distributed over the whole visual system, and over the environment too. Not just a picture at the retina, after all.

 

 


 

 

Hack 14. See the Limits of Your Vision

The high-resolution portion of your vision is only the size of your thumbnail at arm's length. The rest of your visual input is low resolution and mostly colorless, although you seldom realize it.

Your vision isn't of uniform resolution. What we generally think of as our visual ability, the sharpness with which we see the world, is really only the very center of vision, where resolution is at its highest. From this high-resolution center and the lower-resolution periphery, using continual movements of our head and eyes [Hack #15], we construct a seamless, and uniformly sharp, picture of the universe. But how much are we compensating? What is the resolution of vision?

The eye's resolution is determined by the density of light-sensitive cells on the retina, a layer of these cells at the back of the eye (which also includes several layers of cells that process and aggregate the visual signals before sending them on to the rest of the brain). If the cells were spread evenly, we would see as well out of the corners of our eyes as we do straight ahead, but they're not. Instead, the cells are packed most densely right in the center of the retina, in a small region called the fovea, so the highest-resolution part of vision is in the middle of your visual field. The corresponding area is small: if you look up at the night sky, out of everything you see, your fovea just about covers the full moon. Away from this, in your peripheral vision, resolution is much coarser.

Color also falls off in peripheral vision. The light-sensitive cells, called photoreceptors, come in different types according to what kinds of light they convert into neural signals. Almost all the photoreceptors that can discriminate colors of light are in the fovea. Outside this central area you can still make out color, but it's harder; the other type of cell, which is more sensitive but can register only brightness, is more abundant there.

2.3.1. In Action

Figure 2-1 is a variant of the usual eye chart you will have encountered at the optometrist, constructed by Stuart Anstis. Hold it in front of you and rest your gaze on the central dot. The letters in the chart are smallest in the middle and largest at the outer edge; they scale up at a rate that exactly compensates for your eyes' decrease in resolution from the central fovea to the periphery.

Figure 2-1. When you fixate on the center of this chart, all the letters are scaled to have the same resolution [1]

That means that, holding your gaze on the center of the chart, it should be as easy for you to read one of the letters near the middle as one of the bigger ones at the edge.

What this eye chart doesn't show is our relative decrease in color-sensing ability as we edge toward peripheral vision. Have a friend hold pieces of colored card up to the side of your face while you keep your head, and eyes, looking forward. Notice that, while you can see that she's moving the card off in the corner of your eye, you can't tell what color the card is. Because peripheral vision is still good at brightness, you'll need to use pieces of card that won't give you any clues simply from how bright they look. A dull yellow and a bright blue will do. If you'd like to perform a more rigorous experiment, the Exploratorium museum provides instructions on how to make yourself a collar for measuring the angles at which your color vision becomes useful (http://www.exploratorium.edu/snacks/peripheral_vision.html).
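To put rough numbers on the "thumbnail at arm's length" and "full moon" comparisons above: the visual angle of an object is 2 × arctan(size / (2 × distance)). The short Python sketch below works this out for both cases; the thumbnail width and arm length are assumed ballpark figures, not measurements from the text.

import math

def visual_angle_deg(size, distance):
    """Visual angle (in degrees) subtended by an object of a given
    size viewed from a given distance (same units for both)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# Assumed ballpark figures: a thumbnail ~1.7 cm wide held ~65 cm away.
print("Thumbnail at arm's length: %.1f degrees" %
      visual_angle_deg(1.7, 65))        # roughly 1.5 degrees

# The moon: ~3,474 km across at a distance of ~384,400 km.
print("Full moon:                 %.1f degrees" %
      visual_angle_deg(3474, 384400))   # roughly 0.5 degrees

Both land in the region of a degree of visual angle, which is the scale the comparisons above are getting at.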
Since trying the colored-card experiment, I've been playing a similar game while walking along the side of the road. When cars are coming from behind me and I'm looking strictly ahead, at what point can I see there's something there, and how much later can I tell its color? I know a car's in my peripheral vision for a surprisingly long time before I can make the color out. Even though it would be in the name of science, please do be careful not to get run over. M.W.

2.3.2. How It Works

When you're looking at Anstis' eye chart (Figure 2-1), all the letters are equally legible because the light from each falls on the same number of photoreceptors at the back of the eye. The central letters fall in the center of your retina, where the photoreceptors are densest; the outer letters fall in the periphery, where the cells are spread more thinly, but the letters are larger, so the same number of cells are covered.

The distribution of light-sensitive cells across the retina is shown in Figure 2-2. There are two different curves, one for rods and one for cones, corresponding to the two types of photoreceptor cells we possess, so named because of their shapes. You can see how they're both densest toward the center of the eye and drop away toward the periphery, although at different rates. Assuming you're reading this book in anything above dim light, you'll have been using your cones to look at the eye chart; they're the ones that drop away fastest, and that rate determines the resolution of vision.

Figure 2-2. The distribution of different photoreceptors on the retina [2]

That's why our color vision suffers outside the fovea. Cones work best in normal, daytime light levels, and they also respond to color. But the rods are relatively more numerous outside the fovea, and they don't respond to color. They're also extremely sensitive to light, so during the day they're not too much help at all, but you can still see how they're useful where cones are sparse. They're why you could see your friend moving the colored card in the earlier experiment, but they couldn't help you figure out whether the card was yellow, blue, or whatever.

Rods, because of their sensitivity to light, are also handy when light is very poor. In dim conditions, our cones shut down (over a period of about 5 minutes) and we use our rods to see (the rods reach maximum sensitivity after about half an hour). But notice that rods are actually densest just outside the fovea, which means the best way to spot a really faint light is to look at it slightly off-center. You can use this to look for faint stars on a dark night: you'll see slightly more stars just outside the exact center of your vision.

Curiously, aside from experiments like the colored-card one, you don't normally notice that not all of your visual world is high resolution. This is because you move your eyes to whatever you want to look at, and as you move your eyes, the area of high resolution follows. This process of active vision [Hack #15] is much more efficient than having high resolution everywhere. Of course, before you move your eyes to something, your visual system has to preconsciously spot it using your peripheral vision and move your attention there. The events best noticed by peripheral vision are described in [Hack #37]; they're mainly sudden changes of movement and light. These are events that signal something needing an urgent response could be happening, so it's not surprising we are designed to notice them even outside the high-resolution center of the eye.
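The scaling behind a chart like Anstis' can also be sketched numerically. As a simplification (an assumption here, not Anstis' published measurements), suppose the letter size needed for equal legibility grows linearly with distance from fixation: height(E) = H0 × (1 + E / E2), where E2 is the eccentricity at which letters must double in size. The constants in the sketch below are placeholder values.

# A minimal sketch of the scaling used in eccentricity-compensated eye
# charts. Assumes legible letter height grows linearly with eccentricity;
# H0 and E2 are placeholder values, not Anstis' published figures.
H0 = 0.1   # letter height (degrees of visual angle) legible at the fovea
E2 = 2.5   # eccentricity (degrees) at which letters must double in size

def letter_height(eccentricity_deg):
    """Letter height (degrees) needed for equal legibility at a given
    eccentricity, under the linear-scaling assumption."""
    return H0 * (1 + eccentricity_deg / E2)

for ecc in (0, 5, 10, 20, 40):
    print("%2d deg from fixation -> letter height %.1f deg" %
          (ecc, letter_height(ecc)))

With these placeholder numbers, a letter 40 degrees from fixation comes out roughly 17 times the size of one at the center, which is why the outermost letters on the chart look so enormous when you inspect them directly.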
2.3.3. End Notes

1. Reprinted from Vision Research, volume 14, Anstis, S., "A chart demonstrating variations in acuity with retinal position," p. 591, copyright (1974), with permission from Elsevier.
2. For a diagram that shows the detail rather than the general features, see: Østerberg, G. A. (1935). Topography of the layer of rods and cones in the human retina. Acta Ophthalmologica, 13 (Supplement 6), 1-97.

2.3.4. See Also

· Illustrations of how resolution decreases in the periphery (http://psy.ucsd.edu/~sanstis/SABlur.html).
· A brief introduction to the human eye and the implications for page design (http://www.awpa.asn.au/tipstrix/eyeball1.htm and http://www.awpa.asn.au/tipstrix/eyeball2.htm).
· "The Rods and Cones of the Human Eye" (http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html), a good introduction and resource, which is part of the innovative and informative HyperPhysics hypertext (http://hyperphysics.phy-astr.gsu.edu/hbase/hph.html).
· A list of facts and figures on the eye, its capabilities, and a little about visual processing (http://white.stanford.edu/~brian/numbers/node1.html).
· More facts and figures concerning the human retina (http://webvision.med.utah.edu/facts.html), with references.

 



 


 

 

Hack 15. To See, Act

Think of perception as a behavior, as something active, rather than as something passive. Perception exists to guide action, and being able to act is key to the construction of the high-resolution illusion of the world we experience.

The other hacks in this chapter could give the impression that seeing is just a matter of your brain passively processing the information that comes in through the eyes. But perception is far more of an active process. The impression we have of the world is made up by sampling across time, as well as by sampling across the senses. The sensation we receive at any moment prompts us to change our head position or our attention, and perhaps to act on something out in the world, and this gives us different sensations in the next moment with which to update our impression of the world. It's easier for your brain to take multiple readings and then interpolate the answers than it is to spend a long time processing a single scene. Equally important, if you know what you want to do, maybe you don't need to completely interpret a scene; you may need to process it just enough to let you decide what to do next, and in acting give yourself a different set of sensations that make the scene more obvious.

This school of thought is an "ecological" approach to perception and is associated with the psychologist J. J. Gibson [1]. He emphasized that perception is a cognitive process and, like other cognitive processes, depends on interacting with the world. The situations used by vision scientists, in which people look at things without moving or reaching out to touch them, are extremely unnatural; the difference is as large as that between a movie at the theater, directed by someone else, and the free-will experience of regular real life. If you want people to see something clearly, give them the chance to move it around and see how it interacts with other objects. Don't be fooled into thinking that perception is passive.

2.4.1. In Action

One example of active vision that always happens, but that we don't normally notice, is moving our eyes. We don't normally notice our blind spots [Hack #16] or our poor peripheral vision [Hack #14], because our gaze constantly flits from place to place. We sample constantly from the visual world using the high-resolution center of the eye, the fovea, and our brain constructs a constant, continuous, consistent, high-resolution illusion for us.

Constant sampling means constant eye movement: automatic, rapid shifts of gaze called saccades. We saccade up to five times a second, usually without noticing, even though each saccade creates a momentary gap in the flow of visual information into our brains [Hack #17]. Although the target destination of a saccade can be chosen consciously, the movement of the eyes isn't itself consciously controlled. A saccade can also be triggered by an event we're not even consciously aware of, at least not until we shift our gaze, placing it at the center of our attention. In this case, our attention has been captured involuntarily, and we had no choice but to saccade to that point [Hack #37].

Each pause in the chain of saccades is called a fixation. Fixations happen so quickly and so automatically that it's hard to believe we don't actually hold our gaze on anything. Instead, we look at small parts of a scene for just fractions of a second and use the samples to construct an image.
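Eye trackers make this structure visible by splitting the gaze stream into saccades and fixations, most simply by thresholding gaze velocity: samples moving faster than some cutoff are treated as saccades, and the slow stretches in between are grouped into fixations. The Python sketch below is a minimal, generic version of that idea; the sample rate and velocity threshold are assumed placeholder values, not settings from any particular tracker.

import math

# Minimal sketch of velocity-threshold fixation detection.
# gaze: list of (x, y) positions in degrees of visual angle, sampled at
# SAMPLE_HZ. Both constants are assumed placeholder values.
SAMPLE_HZ = 250
SACCADE_THRESHOLD = 100.0   # degrees per second

def find_fixations(gaze):
    """Group consecutive slow samples into fixations; return a list of
    (start_index, end_index, mean_x, mean_y) tuples."""
    fixations, current = [], []

    def close(run):
        xs = [gaze[j][0] for j in run]
        ys = [gaze[j][1] for j in run]
        fixations.append((run[0], run[-1],
                          sum(xs) / len(xs), sum(ys) / len(ys)))

    for i in range(1, len(gaze)):
        dx = gaze[i][0] - gaze[i - 1][0]
        dy = gaze[i][1] - gaze[i - 1][1]
        velocity = math.hypot(dx, dy) * SAMPLE_HZ  # degrees per second
        if velocity < SACCADE_THRESHOLD:
            current.append(i)       # still within a fixation
        elif current:
            close(current)          # a saccade ends the current fixation
            current = []
    if current:
        close(current)
    return fixations

Averaging the position of each slow stretch gives the fixation locations that scanpath and heatmap images, like the ones that follow, are built from.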
Using eye-tracking devices, it is possible to construct images of where people fixate when looking at different kinds of objects, a news web site, for instance. The Poynter Institute's Eyetrack III project (http://www.poynterextra.org/eyetrack2004/) investigated how Internet news readers go about perusing news online (Figure 2-3) and shows the results of the study as a pattern of where eye gaze lingers while looking over a news web site.

Figure 2-3. The pattern of eye fixations looking over a news web site; the brighter patches show where eyes tend to fixate [2]

Part of developing speed-reading skills is learning to make fewer fixations on each line of text and take in more words at each fixation. If you're good, and the lines are short enough, you can get to the point of one fixation per line, scanning the page from top to bottom rather than from side to side. Figure 2-4 shows typical fixation patterns while reading.

Figure 2-4. A typical pattern of eye fixations when reading [3]

Figure 2-5 shows a typical pattern of what happens when you look at a face. You fixate enough to get a good idea of the shape of the whole face with your peripheral vision, fixating most on the details that carry the most information: the eyes.

Figure 2-5. A pattern of fixations over 8 seconds when looking at a face (Matt's, in this case) [4]

2.4.2. End Notes

1. Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
2. Heatmap image produced by Eyetools Inc. as part of the Poynter Institute's Eyetrack III project (http://www.poynter.com/eyetrack).
3. Scanpath produced using BeGaze software from eye movements recorded with the iView X Hi-Speed system, courtesy of SensoMotoric Instruments GmbH.
4. Photo of Matt by Dorian Mcfarland. Many thanks to Lizzie Crundall for creating this scanpath image.

2.4.3. See Also

· Eye tracking and visual attention demos and movies from the University of Southern California (http://ilab.usc.edu/bu).
· A tutorial on the mechanics of saccades (http://www.personal.psu.edu/users/e/l/elm173/schlwork/semester3/psych/complete.htm).

 

 


 

 

