
Hack 18. When Time Stands Still

Our sense of time lends a seamless coherence to our conscious experience of the world. We are able to effortlessly distinguish between the past, present, and future. Yet, subtle illusions show that our mental clock can make mistakes.

You only have to enjoy the synchrony achieved by your local orchestra to realize that humans must be remarkably skilled at judging short intervals of time. However, our mental clock does make mistakes. These anomalies tend to occur when the brain is attempting to compensate for gaps or ambiguities in available sensory information.

Such gaps can be caused by self-generated movement. For example, our knowledge of how long an object has been in its current position is compromised by the suppression of visual information [Hack #17] that occurs when we move our eyes toward that object: we can have no idea what that object was actually doing for the time our eyes were in motion. This uncertainty of position, and the guess the brain subsequently makes, can be felt in action by saccading the eyes toward a moving object.

In Action

Sometimes you'll glance at a clock and the second hand appears to hang, remaining stationary for longer than it ought to. For what seems like a very long moment, you think the clock may have stopped. Normally you keep looking to check and see that shortly afterward the second hand starts to move again as normal; unless, that is, it truly has stopped.

This phenomenon has been dubbed the stopped clock illusion. You can demonstrate it to yourself by getting a silently moving clock and placing it off to one side. It doesn't need to be an analog clock with a traditional second hand; it can be a digital clock or watch, just so long as it shows seconds. Position the clock so that you aren't looking at it at first but can bring the second hand or digits into view just by moving your eyes. Now, flick your eyes over to the clock (i.e., make a saccade [Hack #15] ). The movement needs to be as quick as possible, much as might happen if your attention had been grabbed by a sudden sound or thought [Hack #37] ; a slow, deliberate movement won't cut it. Try it a few times and you should experience the "stopped clock" effect on some attempts at least.

Whether or not this works depends on exactly when your eyes fall on the clock. If your eyes land on the clock just when the second hand is on the cusp of moving (or second digits are about to change), you're less likely to see the illusion. On the other hand, if your eyes land the instant after the second hand has moved, you're much more likely to experience the effect.

 

How It Works

When our gaze falls on an object, it seems our brain makes certain assumptions about how long that object has been where it is. It probably does this to compensate for the suppression of our vision that occurs when we move our eyes [Hack #17] . This suppression means vision can avoid the difficult job of deciphering the inevitable and persistent motion blur that accompanies each of the hundred thousand rapid saccadic eye movements that we make daily. So when our gaze falls on an object, the brain assumes that object has been where it is for at least as long as it took us to lay eyes on it. Our brain antedates the time the object has been where it is. When we glance at stationary objects like a lamp or table, we don't notice this antedating process. But when we look at a clock's second hand or digits, knowing as we do that they ought not be in one place for long, this discord triggers the illusion.



This explanation was supported and quantified in an experiment by Kielan Yarrow and colleagues at University College London and Oxford University.1 They asked people to glance at a number counter. The participants' eye movements triggered the counter, which then began counting upward from 1 to 4. Each of the numerals 2, 3, and 4 was displayed for 1 second, but the initial numeral 1 was displayed for a range of different intervals, from 400 ms to 1600 ms, starting the moment subjects moved their eyes toward the counter. The participants were asked to state whether the time they saw the numeral 1 was longer or shorter than the time they saw the subsequent numerals. Consistent with the stopped clock illusion, the participants consistently overestimated how long they had seen the number 1. And crucially, the larger the initial eye movement made to the counter, the more participants tended to overestimate the duration for which the number 1 was visible. This supports the saccadic suppression hypothesis, because larger saccades are inevitably associated with a longer period of visual suppression. If the brain assumes a newly fixated target has been where it is for at least as long as it took to make the orienting saccade, then it makes sense that longer saccades led to greater overestimation. Moreover, the stopped clock illusion occurred only when people made eye movements to the counter, not when the counter jumped into position before their eyes; again, this is consistent with the saccadic suppression explanation.
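The antedating account lends itself to a toy calculation. This is a sketch under my own simplifying assumption (not a model from Yarrow et al.): the perceived duration of the first interval is just the actual duration plus the time the eyes were in flight.

```python
# Toy model of saccadic antedating (illustrative assumption, not the
# experimenters' model): the onset of the newly fixated target is
# backdated to roughly the start of the saccade, so the first interval
# is overestimated by about the saccade's duration.

def perceived_first_interval(actual_ms, saccade_ms):
    """Perceived duration of the first numeral, in milliseconds."""
    return actual_ms + saccade_ms

def seems_longer_than_later_numerals(actual_ms, saccade_ms):
    """Does numeral 1 seem to outlast the 1-second numerals 2-4?"""
    return perceived_first_interval(actual_ms, saccade_ms) > 1000

# A 900 ms display already feels longer than a second after a 120 ms
# saccade, and longer saccades inflate the estimate more.
print(seems_longer_than_later_numerals(900, 120))   # True
print(perceived_first_interval(1000, 40))
print(perceived_first_interval(1000, 120))
```

Under this sketch, bigger saccades mean bigger overestimates, which is the pattern the experiment found.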

You'll experience an effect similar to the stopped clock illusion when you first pick up a telephone handset and get an intermittent tone (pause, beeeep, pause, beeeep, repeat). You might find that the initial silence appears to hang for longer than it ought to. The phone can appear dead and, consequently, the illusion has been dubbed the dead phone illusion.

The saccadic suppression explanation, however, cannot account for the dead phone illusion, since that illusion doesn't depend on eye movements.2 Nor can it account for another recent observation: people tend to overestimate how long they have been holding a newly grasped object,3 which seems like a similar effect, in that the initial encounter appears to last longer.

One suggestion for the dead phone illusion is that shifting our attention to a new auditory focus creates an increase in arousal, or mental interest. Because previous research has shown that increased arousal (when we're stressed, for instance) speeds up our sense of time, this could lead us to overestimate the duration of a newly attended-to sound. Of course, this doesn't fit with the observation mentioned before, that the stopped clock illusion fails to occur when the clock or counter moves in front of our eyes; surely that would lead to increased arousal just as much as glancing at a clock or picking up a telephone.

So, a unifying explanation for "when time stands still" remains elusive. What is clear is that most of the time our brain is extraordinarily successful at providing us with a coherent sense of what happened when.

End Notes

1. Yarrow, K., Haggard, P., Heal, R., Brown, P., & Rothwell, J. C. (2001). Illusory perceptions of space and time preserve cross-saccadic perceptual continuity. Nature, 414(6861), 302-305.

2. Hodinott-Hill, I., Thilo, K. V., Cowey, A., & Walsh, V. (2002). Auditory chronostasis: Hanging on the telephone. Current Biology, 12, 1779-1781.

3. Yarrow, K., & Rothwell, J. C. (2003). Manual chronostasis: Tactile perception precedes physical contact. Current Biology, 13(13), 1134-1139.

Christian Jarrett

Hack 19. Release Eye Fixations for Faster Reactions

It takes longer to shift your attention to a new object if the old object is still there.

Shifting attention often means shifting your eyes. But we're never fully in control of what our eyes want to look at. If they're latched on to something, they're rather stubborn about moving elsewhere. It's faster for you to look at something new if you don't have to tear your eyes away: if what you were originally looking at disappears and then there's a short gap, it's as if your eyes become unlocked, and your reaction time improves. This is called the gap effect.

In Action

The gap effect can be spotted if you're asked to stare at some shape on a screen, then switch your gaze to a new shape that will appear somewhere else on the screen. Usually, switching to the new shape takes about a fifth of a second. But if the old shape vanishes shortly before the new shape flashes up, moving your gaze takes less time, about 20% less.

It has to be said: the effect, on the order of just hundredths of a second, is tiny in the grand scheme of things. You're not going to notice it easily around the home. It's a feature of our low-level cognitive control: voluntarily switching attention takes a little longer under certain circumstances. In other words, voluntary behavior isn't as voluntary as we'd like to think.

How It Works

We take in the world piecemeal, focusing on a tiny part of it with the high-resolution center of our vision for a fraction of a second before our eyes move on to focus on another part. Each of these mostly automatic moves is called a saccade [Hack #15]. We make saccades continuously, up to about five every second, but that's not to say they're fluid or all the same. While you're taking in a scene, your eyes are locked in. They're resistant to moving away, just for a short time. So what happens when another object comes along and you want to move your eyes toward it? You have to overcome that inhibition, and that takes a short amount of time.

Having to overcome resistance to saccades is one way of looking at why focusing on a new shape takes longer if the old one is still there. Another way to look at it is to consider what happens when the old shape disappears. Then the eyes are automatically released from their fixation, and no longer so resistant to making a saccade, which is why, when the old shape disappears before the new shape flashes up, it's faster to shift your gaze. In addition, the disappearing shape acts as a warning signal to the early visual system ("There's something going on, get ready!"), which serves to speed up the eyes' subsequent reaction times. It's a combination of both of these factors, the warning and the eyes no longer being held back from moving, that results in the speedup.

In Real Life

Just for completeness, it's worth knowing that the old point of fixation should disappear 200 milliseconds (again, a fifth of a second) before the new object appears, to get maximum speedup. This time is used by the brain to notice the old object has vanished and to get the eyes ready to move again.

Now, in the real world, objects rarely just vanish like this, but it happens a lot on computer screens. So it's worth knowing that if you want someone to shift his attention from one item to another, you can make it an easier transition by having the first item disappear shortly before the second appears (actually vanish, not just disappear behind something, because we keep paying attention to objects even when they're temporarily invisible [Hack #36]). This will facilitate your user's disengagement from the original item, which might be a dialog box or some other preparatory display, and put her into a state ready for whatever's going to need her attention next.

See Also

· Taylor, T. L., Kingstone, A., & Klein, R. M. (1998). The disappearance of foveal and non-foveal stimuli: Decomposing the gap effect. Canadian Journal of Experimental Psychology, 52(4), 192-199.
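The timings in this hack can be captured in a toy model. The 200 ms baseline latency, the roughly 20% maximum speedup, and the 200 ms optimal gap come from the text; the linear ramp between no gap and the optimal gap is my own simplification, not a fitted curve.

```python
# Toy timing model of the gap effect. Baseline latency, maximum speedup,
# and optimal gap are taken from the text; the linear interpolation for
# intermediate gaps is an illustrative assumption.

BASELINE_MS = 200      # typical saccade latency with the old target visible
MAX_SPEEDUP = 0.20     # roughly 20% faster at the optimal gap
OPTIMAL_GAP_MS = 200   # gap duration giving the maximum speedup

def saccade_latency(gap_ms):
    """Predicted latency (ms) to fixate a new target after a given gap."""
    benefit = MAX_SPEEDUP * min(gap_ms, OPTIMAL_GAP_MS) / OPTIMAL_GAP_MS
    return BASELINE_MS * (1 - benefit)

print(saccade_latency(0))    # no gap: 200.0 ms
print(saccade_latency(200))  # optimal gap: 160.0 ms
print(saccade_latency(400))  # longer gaps add no further benefit: 160.0 ms
```

Even in this crude form, the model makes the design point concrete: removing the old fixation target about 200 ms early buys you a few dozen milliseconds of faster orienting.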


Hack 20. Fool Yourself into Seeing 3D

How do you figure out the three-dimensional shape of objects, just by looking? At first glance, it's by using shadows.

Looking at shadows is one of many tricks we use to figure out the shape of objects. As a trick, it's easy to fool: shading alone is enough for the brain to assume what it's seeing is a real shadow. This illusion is so powerful and so deeply ingrained, in fact, that we can actually feel depth in a picture despite knowing it's just a flat image.

In Action

Have a look at the shaded circles in Figure 2-8, which follows a similar illustration in Kleffner and Ramachandran's "On the Perception of Shape from Shading."1

Figure 2-8. Shaded figures give the illusion of three-dimensionality

I put together this particular diagram myself, and there's nothing to it: just a collection of circles on a medium gray background. All the circles are gradient-filled black and white, some with white at the top and some with white at the bottom. Despite the simplicity of the image, there's already a sense of depth. The shading seems to make the circles with white at the top bend out of the page, as though they're bumps. The circles with white at the bottom look more like depressions or even holes.

To see just how strong the sense of depth is, compare the shaded circles to the much simpler diagram in Figure 2-9, also following Kleffner and Ramachandran's paper.

Figure 2-9. Binary black-and-white "shading" doesn't provide a sense of depth

The only difference is that, instead of being shaded, the circles are divided into solid black and white halves. Yet the depth completely disappears.

How It Works

Shadows are identified early in visual processing in order to get a quick first impression of the shape of a scene. We can tell it's early because the mechanism used to resolve light source ambiguities is rather hackish.

Ambiguities occur all the time. For instance, take one of the white-at-top circles from Figure 2-8. Looking at it, you could be seeing one of two shapes, depending on whether you imagine the shape is lit from the top or the bottom of the page. If light's coming from above, you can deduce it's a bump because it's black underneath, where the shadows are. On the other hand, if the light's coming from the bottom of the page, only a dent produces the same shading pattern. Bump or dent: two different shapes can make the same shadow pattern when lit from opposite angles.

There's no light source in the diagram, though, and the flat gray background gives no clues as to where the light might be coming from. That white-at-top circle should, by rights, be ambiguous. You should sometimes see a bump and sometimes see a dent. What's remarkable is that people see the white-at-top circles as bumps, not dents, despite the two possibilities. Instead of leaving us in a state of confusion, the brain has made a choice: light comes from above.2 Assuming scenes are lit from above makes a lot of sense: if it's light, it's usually because the sun is overhead.

So why describe this as a hackish mechanism? Although the light source assumption seems like a good one, it's actually not very robust. Try looking at Figure 2-8 again. This time, prop the book against a wall and turn your head upside-down. The bumps turn into dents and the dents turn into bumps. Instead of assuming the light comes from high up in the sky, your brain assumes it comes from the top of your visual field. Rather than spend time figuring out which way up your head is and then deducing where the sun is likely to be, your brain has opted for the "good enough" solution. This solution works most, not all, of the time (not if you're upside-down), but it also means the light source can be hardcoded into shape perception routines, allowing rapid processing of the scene.

It's this rapidity that allows the deduction of shape from shadows to occur so early in processing. That's important for building a three-dimensional mental scene rather than a flat image like a photograph. But the shaded circles have been falsely tagged as three-dimensional, which gives them a compelling sense of depth.

What's happened to the shaded circles is called "pop-out." Pop-out means that the circles jump out from the background at you: they're easier to notice or give attention to than similar flat objects. Kleffner and Ramachandran, in the same paper as before, illustrate this special property by timing how long it takes to spot a single bump-like circle in a whole page of dents. It turns out not to matter how many dents are on the page hiding the bump. Due to pop-out, the bump is immediately seen.

If the page of dents and one bump is turned on its side, however, spotting the odd one out takes much longer. Look one more time at Figure 2-8, this time holding the book on its side. The sense of depth is much reduced and, because the light-from-above assumption favors neither type of circle, it's pretty much random which type appears indented and which appears bent out of the page. In fact, timings show that spotting the one different circle is no longer immediate. It takes longer the more circles there are on the page.

The speed advantage for pop-out is so significant that some animals change their coloring to avoid popping out in the eyes of their predators. Standing under a bright sun, an antelope would be just like one of the shaded circles, with a lit-up back and shadows underneath. But the antelope is dark on top and has a white belly. Called "countershading," this pattern opposes the shadows and turns the animal an even shade, weakening the pop-out effect and letting it fade into the background.

In Real Life

Given that pop-out is so strong, it's not surprising we often use the shading trick to produce it in everyday life. The 3D beveled button on the computer desktop is one such way. I've not seen any experiments about this specifically, but I'd speculate that Susan Kare's development of the beveled button in Windows 3.0 (http://www.kare.com/MakePortfolioPage.cgi?page=6) is more significant than we'd otherwise assume for making obvious what to click.

My favorite examples of shape from shading are in Stuart Anstis' lecture on the use of this effect in the world of fashion (http://psy.ucsd.edu/~sanstis/SAStocking.htm). Anstis points out that jeans faded white along the front of the legs are effectively artificially shadowing the sides of the legs, making them look rounder and shapelier (Figure 2-10). The same is true of stockings, which are darker on the sides whichever angle you see them from.

Figure 2-10. Shaded jeans add shape to legs

Among many examples, the high point of his presentation is how the apparent shape of the face is changed with makeup, or in his words, "painted-on shadows." The with and without photographs (Figure 2-11), with well-defined cheekbones and a sculpted face, demonstrate just how compelling shape from shading really is.

Figure 2-11. With only half the face in makeup, the apparent shape difference is easy to see

End Notes

1. Kleffner, D. A., & Ramachandran, V. S. (1992). On the perception of shape from shading. Perception and Psychophysics, 52(1), 18-36.

2. Actually, more detailed experiments show that the brain's default light source isn't exactly at the top of the visual field, but to the top left. These experiments involve more complex shadowed shapes than circles, testing whether they pop out or appear indented when glanced at; over a series of trials, the position of the assumed light source can be deduced. Unfortunately, why that position is top left rather than anywhere else along the top is still unknown. See Mamassian, P., Jentzsch, I., Bacon, B. A., & Schweinberger, S. R. (2003). Neural correlates of shape from shading. NeuroReport, 14(7), 971-975.
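If you'd like to build stimuli like Figure 2-8 yourself, here is a minimal sketch that renders a shaded disc as ASCII art, approximating light-from-above shading with a simple top-to-bottom gray ramp. The rendering scheme is mine; Kleffner and Ramachandran used proper gradient-filled circles.

```python
# Render a light-from-above shaded disc ("bump") as ASCII grayscale,
# or flip the ramp to get the "dent" version. A crude stand-in for the
# gradient-filled circles of Figure 2-8.

SIZE = 21
SHADES = ".:-=+*#%@"   # lightest -> darkest

def shaded_disc(light_on_top=True):
    rows = []
    r = SIZE // 2
    for y in range(SIZE):
        row = ""
        for x in range(SIZE):
            if (x - r) ** 2 + (y - r) ** 2 <= r * r:
                t = y / (SIZE - 1)        # 0 at top, 1 at bottom
                if not light_on_top:
                    t = 1 - t             # flipped shading: reads as a dent
                row += SHADES[int(t * (len(SHADES) - 1))]
            else:
                row += " "                # background
        rows.append(row)
    return "\n".join(rows)

print(shaded_disc(light_on_top=True))    # light top, dark bottom: a bump
print(shaded_disc(light_on_top=False))   # the same disc flipped: a dent
```

Printing both discs side by side on screen (or rotating your head) is a quick way to replay the bump/dent flip described above.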


Hack 21. Objects Move, Lighting Shouldn't

Moving shadows make us see moving objects rather than assume moving light sources.

Shadows get processed early when we try to make sense of objects; they're one of the first things our visual system uses when working out shape. [Hack #20] further showed that our visual system makes the hardwired assumption that light comes from above. Another way shadows are used is to infer movement, and here our visual system makes the further assumption that a moving shadow is the result of a moving object rather than a moving light source. In theory, of course, the movement of a shadow could be due to either cause, but we've evolved to ignore one of those possibilities: rapidly moving objects are much more likely than rapidly moving lights, not to mention more dangerous.

In Action

Observe how your brain uses shadows to construct the 3D model of a scene. Watch the ball-in-a-box movie at:

· http://gandalf.psych.umn.edu/~kersten/kersten-lab/images/ball-in-a-box.mov (small version)

· http://gandalf.psych.umn.edu/~kersten/kersten-lab/demos/BallInaBox.mov (large version, 4 MB)

If you're currently without Internet access, see Figure 2-12 for movie stills.

 

The movie is a simple piece of animation involving a ball moving back and forth twice across a 3D box. Both times, the ball moves diagonally across the floor plane. The first time, it appears to move along the floor of the box with a drop shadow directly beneath and touching the bottom of the ball. The second time the ball appears to move horizontally and float up off the floor, the shadow following along on the floor. The ball actually takes the same path both times; it's just the path of the shadow that changes (from diagonal along with the ball to horizontal). And it's that change that alters your perception of the ball's movement. (Figure 2-12 shows stills of the first (left) and second (right) times the ball crosses the box.)

Figure 2-12. Stills from the "ball-in-a-box" movie

 

Now watch the more complex "zigzagging ball" movie (http://www.kyb.tue.mpg.de/links/demo.html; Figure 2-13 shows a still from the movie), again of a ball in motion inside a 3D box.

Figure 2-13. A still from the "zigzagging ball" movie1

 

This time, while the ball is moving in a straight line from one corner of the box to the other (the proof is in the diagonal line it follows), the shadow is darting about all over the place. This time, there is even strong evidence that it's the light source (and thus the shadow) that's moving: the shading and colors on the box change continuously and in a way that is consistent with a moving light source rather than a zigzagging ball (which doesn't produce any shading or color changes!). Yet still you see a zigzagging ball.

How It Works

Your brain constructs an internal 3D model of a scene as soon as you look at one, with the influence of shadows on the construction being incredibly strong. You can see this in action in the first movie: your internal model of the scene changes dramatically based solely on the position and motion of a shadow.

I feel bad saying "internal model." Given that most of the information about a scene is already out there in the universe, accessible if you move your head, why bother storing it inside your skull too? We probably store internally only what we need to, such as where ambiguities have had to be resolved. Visual data inside the head isn't a photograph, but a structured model existing in tandem with extelligence: information that we can treat as intelligence but that isn't kept internally.

T.S.

The second movie shows a couple more of the assumptions (of which there are many) the brain makes in shadow processing. One assumption is that darker coloring means shadow. Another is that light usually comes from overhead (these assumptions are so natural we don't even notice they've been made). Both of these come into play when two-dimensional shapesordinary picturesappear to take on depth with the addition of judicious shading [Hack #20] .

Based on these assumptions, the brain prefers to believe that the light source is keeping still and the moving object is jumping around, rather than that the light source is moving. And this despite all the cues to the contrary: the lighting pattern on the floor and walls, the sides of the box being lit up in tandem with the shifting shadow; these should be more than enough proof. Still, the shadow of the ball is all that the brain takes into account. In its quest to produce a 3D understanding of a scene as fast as possible, the brain doesn't bother to assimilate information from across the whole visual field. It simplifies things markedly by just assuming the light source stays still.
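The brain's preferred reading can be put in toy geometric terms. Assuming a stationary, roughly overhead light (my simplification of the general case), the ball's height off the floor is read from the image-plane gap between the ball and its shadow:

```python
# Toy reconstruction of why the shadow's path changes the ball's
# perceived 3D position. With an overhead light assumed fixed, the
# shadow sits directly below the ball on the floor, so the vertical
# image gap between ball and shadow is read as height. Units arbitrary.

def perceived_height(ball_image_y, shadow_image_y):
    """Height off the floor implied by the ball/shadow gap (overhead light)."""
    return max(0, shadow_image_y - ball_image_y)   # image y grows downward

# First pass of the movie: shadow touches the ball -> ball on the floor.
print(perceived_height(ball_image_y=50, shadow_image_y=50))   # 0
# Second pass: same ball path, but the shadow stays on the floor line
# below -> the same ball now appears to float.
print(perceived_height(ball_image_y=50, shadow_image_y=80))   # 30
```

The same image of the ball yields two different perceived heights, driven entirely by where the shadow is, which is exactly what the ball-in-a-box movie exploits.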

It's the speed of shadow processing you have to thank for this illusion. Conscious knowledge is slower to arise than the hackish-but-speedy early perception and remains influenced by it, despite your best efforts to see it any other way.

End Note

1. Zigzagging ball animation thanks to D. Kersten (University of Minnesota, U.S.) and I. Bülthoff (Max-Planck-Institut für biologische Kybernetik, Germany)

See Also

· The Kersten Lab (http://gandalf.psych.umn.edu/~kersten/kersten-lab) researches vision, action, and the computational principles behind how we turn vision into an understanding of the world. As well as publications on the subject, their site houses demos exploring what information we can extract from what we see and the assumptions made. One demo of theirs, Illusory Motion from Shadows (http://gandalf.psych.umn.edu/~kersten/kersten-lab/images/kersten-shadow-cine.mov), demonstrates how the assumption that light sources are stationary can be exploited to provide another powerful illusion of motion.

· Kersten, D., Knill, D., Mamassian, P., & Buelthoff, I. (1996). Illusory motion from shadows. Nature, 379(6560), 31.


Hack 22. Depth Matters Our perception of a 3D world draws on multiple depth cues as diverse as atmospheric haze and preconceptions of object size. We use all together in vision and individually in visual design and real life. Our ability to see depth is an amazing feature of our vision. Not only does depth make what we see more interesting, it also plays a crucial, functional role. We use it to navigate our 3D world and can employ it in the practice of visual communication design to help organize what we see through depth's ability to clarify through separation1. Psychologists call a visual trigger that gives us a sense of depth a depth cue. Vision science suggests that our sense of depth originates from at least 19 identifiable cues in our environment. We rarely see depth cues individually, since they mostly appear and operate in concert to provide depth information, but we can loosely organize them together into several related groups:   Binocular cues (stereoscopic depth, eye convergence) With binocular (two-eye) vision, the brain sees depth by comparing angle differences in the images from each eye. This type of vision is very important to daily life (just try catching a ball with one eye closed), but there are also many monocular (single-eye) depth cues. Monocular cues have the advantage that they are easier to employ for depth in images on flat surfaces (e.g., in print and on computer screens).   Perspective-based cues (size gradient, texture gradient, linear perspective) The shape of a visual scene gives cues to the depth of objects within it. Perspective lines converging/diverging or a change in the image size of patterns that we know to be at a constant scale (such as floor tile squares) can be used to inform our sense of depth.   Occlusion-based cues (object overlap, cast shadow, surface shadow) The presence of one object partially blocking the form of another and the cast shadows they create are strong cues to depth. See [Hack #20] for examples.   
Focus-based cues (atmospheric perspective, object intensity, focu Greater distance usually brings with it a number of depth cues associated with conditions of the natural world, such as increased atmospheric haze and physical limits to the eye's focus range. We discuss one of these cues, object intensity, next.   Motion-based cues (kinetic depth, a.k.a. motion parallax) As you move your head, objects at different distances move at different relative speeds. This is a very strong cue and is also the reason a spitting cobra sways its head from side to side to work out how far away its prey is from its position. There isn't room to discuss all of these cues here, so we'll look in detail at just two depth cues: object intensity and known size (a cue that is loosely connected to the prespective-based cue family). More information on depth cues and their use in information design can be found in the references at the end of this hack. 2.11.1. Object Intensity Why do objects further away from us appear to be faded or faint? Ever notice that bright objects seem to attract our attention? It's all about intensity. If we peer into the distance, we notice that objects such as buildings or mountains far away appear less distinct and often faded compared to objects close up. Even the colors of these distant objects appear lighter or even washed out. The reason for this is something psychologists call atmospheric perspective or object intensity. It is a visual cue our minds use to sense depth; we employ it automatically as a way to sort and prioritize information about our surroundings (foreground as distinct from background). Designers take advantage of this phenomenon to direct our attention by using bold colors and contrast in design work. Road safety specialists make traffic safety signs brighter and bolder in contrast than other highway signs so they stand out, as shown in Figure 2-14. 
You too, in fact, employ the same principle when you use a highlighter to mark passages in a book. You're using a depth cue to literally bring certain text into the foreground, to prioritize information in your environment. Figure 2-14. Important street signs often use more intense colors and bolder contrast elements so they stand out from other signage2   2.11.1.1 In action Close one eye and have a look at the two shaded blocks side by side in Figure 2-15. If you had to decide which block appears to be visually closer, which would you choose? The black block seems to separate and appear forward from the gray block. It is as if our mind wants it to be in front. Figure 2-15. Which block appears closer?   2.11.1.2 How it works The reason for this experience of depth, based on light-dark value differences, is atmospheric perspective and the science is actually quite simple. Everywhere in the air are dust or water particles that partially obscure our view of objects, making them appear dull or less distinct. Up close, you can't see these particles, but as the space between you and an object increases, so do the numbers of particles in the air. Together these particles cause a gradual haze to appear on distant objects. In the daytime, this haze on faraway objects appears to be colored white or blue as the particles scatter the natural light. Darker objects separate and are perceived as foreground and lighter ones as background. At night, the effect is the same, except this time the effect is reversed: objects that are lit appear to be closer, as shown in Figure 2-16. So as a general rule of thumb, an object's intensity compared to its surroundings helps us generate our sense of its position. Even colors have this same depth effect because of comparative differences in their value and chroma. The greater the difference in intensity between two objects, the more pronounced the sense of depth separation between them. Figure 2-16. 
At night, lit objects appear closer   So how does intensity relate to attention? One view is that we pay more attention to objects that are closer, since they are of a higher concern to our physical body. We focus on visually intense objects because their association with the foreground naturally causes us to assign greater importance to them. Simply put, they stand out in front. 2.11.1.3 In real life Since weather can affect the atmosphere's state, it can influence perceived depth: the more ambient the air particles, the more acute the atmospheric perspective. Hence, a distance judged in a rainstorm, for example, will be perceived as further than that same distance judged on a clear, sunny day. 2.11.2. Known Size How do we tell the distance in depth between two objects if they aren't the same? We all know that if you place two same-size objects at different distances and look at them both, the object further away appears smaller. But have you ever been surprised at an object's size when you see it for the first time from afar and discover it is much bigger up close? Psychologists call this phenomenon size gradient and known size. Size gradient states that as objects are moved further away, they shrink proportionally in our field of view. From these differences in relative size, we generate a sense of depth. This general rule holds true, but our prior knowledge of an object's size can sometime trip us up because we use the known size of an object (or our assumptions of its size) to measure the relative size of objects we see. Being aware of a user's knowledge of subjects and objects is key if comparative size is an important factor. Many visual communication designers have discovered the peril of forgetting to include scale elements in their work for context reference. A lack of user-recognizable scale can render an important map, diagram, or comparative piece completely useless. 
An unexpected change in scale can disorient a user or, if employed right, can help grab attention.

In Action

Have a look at the mouse and elephant in Figure 2-17. We know their true relative sizes from memory, even though the mouse appears gigantic in comparison.

Figure 2-17. An elephant and a mouse: you know from memory that elephants are bigger

But what about Figure 2-18, which shows a mouse and a zerk (a made-up animal)? Since we've never seen a zerk before, do we know which is truly bigger, or do we assume the scale we see is correct?

Figure 2-18. A zerk and a mouse: since a zerk is made up, you can use only comparison with the mouse to judge size

How It Works

Our knowledge of objects and their actual sizes plays a hidden role in our perception of depth. Whenever we look at an object, our mind recalls memories of its size, shape, and form, then compares this memory to what we see, using scale to calculate a sense of distance. This quick-and-dirty comparison can sometimes trip us up, however, especially when we encounter something unfamiliar. The psychologist Bruce Goldstein offers a cultural example of an anthropologist who met an African bushman living in dense rain forest. The anthropologist led the bushman out onto an open plain and showed him some buffalo from afar. The bushman refused to believe that the animals were large and insisted they must be insects. But when he approached them up close, he was astounded as they appeared to grow in size, and he attributed it to magic. The dense rain forest, with its limits on viewing distance, together with the unfamiliar animal, had distorted his ability to sense scale.

In Real Life

Some designers have captured this magic to their benefit. The movie industry has often taken our assumptions of known size and captivated us by breaking them, making the familiar appear monstrous and novel. For example, through a distortion of scale and juxtaposition, we can be fooled into thinking that 50-foot ants are wreaking havoc on small towns and cities.

End Notes

1. Bardel, W. (2001). "Depth Cues for Information Design." Thesis, Carnegie Mellon University (http://www.bardel.info/downloads/Depth_cues.pdf).

2. Street sign symbols courtesy of Ultimate Symbol Inc. (http://www.ultimatesymbol.com).

See Also

· Goldstein, E. B. (1989). Sensation & Perception. Pacific Grove: Brooks/Cole Publishing.

· Ware, C. (1999). Information Visualization. London: Academic Press.

· Tufte, E. (1999). Envisioning Information. Cheshire: Graphics Press.

· Braunstein, M. L. (1976). Depth Perception Through Motion. London: Academic Press.

· Regan, D. (2000). Human Perception of Objects. Sunderland: Sinauer Assoc.

William Bardel

 

 


 

 

Hack 23. See How Brightness Differs from Luminance: The Checker Shadow Illusion

A powerful illusion of brightness shows how our brain takes scene structure and implied lighting into account when calculating the shade of things.

A major challenge for our vision is the reconstruction of a three-dimensional visual world from a two-dimensional retinal picture. The projection from three to two dimensions irrevocably loses information, which somehow needs to be reconstructed by the vision centers in our brain. True, we have two eyes, which helps a bit in the horizontal plane, but the vivid self-experience of seeing a 3D world clearly persists after covering one eye [Hack #22] .

In the process of reconstructing 3D from 2D, our brain cleverly relies on previous experience and assumptions on the physics of the real world. Since information is thus fabricated, the process is prone to error, especially in appropriately manipulated pictures, which gives rise to various large classes of optical illusions. We will concentrate here on a fairly recent example, Ted Adelson's checker shadow illusion.1

In Action

Take a look at Adelson's checker shadow illusion in Figure 2-19.

Figure 2-19. Adelson's checker shadow: which is brighter, A or B?

 

We would all agree that one sees a checkerboard with a pillar standing in one corner. Illumination obviously comes from the top-right corner, as the shadow on the checkerboard tells us immediately (and we know how important shadows are for informing what we see [Hack #20] ). All of this is perceived in one rapid glance, much faster than this sentence can be read (let alone written!).

Now let's ask the following question: which square is brighter, A or B? The obvious answer is B, and I agree. But now change the context by looking at Figure 2-20. The unmasked grays are from the two squares A and B, and unquestionably the two shades of gray are identical (in fact, the entire figure was constructed just so).

Figure 2-20. This checkerboard is the same as the first, except for the added bars: now does A look brighter than B?

 

You can prove it to yourself by cutting out a mask with two checker square-size holes in it, one for A and one for B, and putting it over the original checkerboard (Figure 2-19).

How It Works

If squares A and B clearly differ in brightness in the first case and have the same brightness in the second, what gives? Surely the two alternatives exclude each other? The solution in a nutshell: brightness depends on context.

There is a good reason that visual scientists describe their experiments using the term luminance rather than brightness. Luminance is a physical measure, effectively counting the number of light quanta coming from a surface, weighted by wavelength according to their visibility. (The unit of measurement, by the way, is candela per square meter, cd/m2; a candela corresponds roughly to the luminous intensity of one standard candle.)

Brightness, on the other hand, is a subjective measuresomething your brain constructs for your conscious experience. It depends on previous history (light adaptation), the immediate surroundings (contrast effects), and context (as here). It has no dimension but can be measured using psychophysical techniques.

Contrast in vision science has two meanings. First, it can refer to the perceptual effect that the brightness of a region in the visual field depends on the luminance of the adjacent regions (mediated by "lateral inhibition," a sort of spatial high-pass filtering of the scene). Second, it is the technical term for how luminance differences are measured. With the term "context" here, we denote the interpretation of figural elementsor scene structurewhich here is changed by the gray bars.
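The second, technical sense of contrast comes with standard formulas. As an aside, here is a minimal sketch of the two most common measures; the luminance values are arbitrary example numbers of my own:

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast, conventionally used for periodic patterns such
    as gratings. Ranges from 0 (uniform field) to 1 (full modulation)."""
    return (l_max - l_min) / (l_max + l_min)

def weber_contrast(l_target, l_background):
    """Weber contrast, conventionally used for a small patch seen against
    a large uniform background."""
    return (l_target - l_background) / l_background

# Luminances in cd/m^2: a 120 cd/m^2 patch on a 100 cd/m^2 background.
print(michelson_contrast(120.0, 100.0))  # 20/220, about 0.09
print(weber_contrast(120.0, 100.0))      # 20/100 = 0.2
```

Note that both are dimensionless ratios of luminances, which is what makes them usable across very different absolute light levels.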

 

What exactly is happening when comparing Figure 2-19 and Figure 2-20? Well, when I initially asked, "Which square is brighter?", I knew you would give the deeper answer, namely the lightness of the substance the squares are made of. I knew you, or your smart visual system, would assess the scene, interpret it as a 3D scene, guess the shadowed and lit parts, infer an invisible light source, measure the incoming light from the squares, subtract the estimated effect of light versus shadow, and make a good guess at the true lightness: the lightness we would expect the checker squares to really have, given the way they appear in the scene. With the mask applied (Figure 2-20), however, we create a very different context, in which a 3D interpretation does not apply. Now the two squares are not assumed to be lit differently, no correction for light and shadow needs to be applied, and the brightnesses become equal. The luminance of squares A and B is always identical, but because the context differs, the perceived brightness changes.
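The light-and-shadow correction just described can be caricatured in a few lines of code. This is only an illustrative sketch with made-up numbers (the real computation is far richer): if the visual system estimates the illumination falling on each square, it can divide that estimate out to recover surface lightness (reflectance).

```python
def inferred_lightness(luminance, estimated_illumination):
    """Recover surface reflectance by discounting the estimated illuminant
    (roughly, luminance = reflectance * illumination, so divide it out)."""
    return luminance / estimated_illumination

# Both squares send the same luminance to the eye...
luminance_a = luminance_b = 0.4

# ...but A is judged to sit in direct light, B in the pillar's shadow.
lightness_a = inferred_lightness(luminance_a, estimated_illumination=1.0)  # 0.4: a "dark" check
lightness_b = inferred_lightness(luminance_b, estimated_illumination=0.5)  # 0.8: a "light" check

# Mask the scene off (Figure 2-20) and the illumination estimates become
# equal, so the inferred lightnesses - and hence brightness - match too.
print(lightness_a, lightness_b)
```

Same input luminance, different assumed illumination, different perceived lightness: that is the illusion in miniature.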

By the way: there are more places in that figure where luminances are equal, but brightness differs, and hunting for those is left as an exercise for the gentle reader.

This striking checker shadow illusion by Ted Adelson teaches us quite a number of things. It demonstrates how much unconscious scene computation goes on in our visual brain when it applies inverse perspective and inverse lighting models. It shows us how strongly luminance and brightness can differ, giving rise to perceptual constancies, here lightness constancy. It also demonstrates the "unfairness" of the term "optical illusion": the first answer you gave was not wrong at all; in fact, it was the answer one would be interested in most of the time. Imagine the checkerboard were a puzzle with missing pieces and you had to hunt for a matching piece. Material properties are what we need then, independent of lighting. In fact, estimating the "true" material properties of a surface independent of context is a very hard computational problem, and one that hasn't been solved to a satisfying degree by computer vision systems.

In Real Life

Correction of surface perception for light and shadow conditions is such a basic mechanism of our perception, and one that normally operates nearly perfectly, that very artificial situations, like the accompanying figures, must be created for it to reveal itself. That is also why we need technical help when taking photographs: since photos are normally viewed under lighting conditions different from those of the original scene, professional photographers go to great lengths to arrange the lighting so that the impression at viewing time is the one desired.

End Note

1. The checker shadow illusion, together with Ted Adelson's explanation, is online (http://web.mit.edu/persci/people/adelson/checkershadow_illusion.html).

See Also

· You can also use an interactive version of the illusion to verify the colors of the checks do indeed correspond (http://www.michaelbach.de/ot/lum_adelson_check_shadow).

· Adelson, E. H. (1993). Perceptual organization and the judgment of brightness. Science 262, 2042-2044.

· Adelson, E. H. (2000). Lightness Perception and Lightness Illusions. In The New Cognitive Neurosciences, 2nd edition, 339-351. M. Gazzaniga (ed.). Cambridge, MA: MIT Press.

· Todorovic, D. (1997). Lightness and junctions. Perception 26, 379-395.

· Blakeslee, B. & McCourt, M. E. (2003). A multiscale spatial filtering account of brightness phenomena. In: L. Harris & M. Jenkin (eds.), Levels of Perception. New York: Springer-Verlag.

Michael Bach

 

 


 

 

Hack 24. Create Illusionary Depth with Sunglasses

We can use a little-known illusion called the Pulfrich Effect to hack the brain's computation of motion, depth, and brightness. All it takes is a pair of shades and a pendulum.

This is a journey into the code the visual system uses to work out how far away things are and how fast they are moving. Both of these variables, depth and velocity, can be calculated by comparing measurements of object position over time. Rather than have separate neural modules figure out each variable by performing the same fundamental processing, the brain combines the two pieces of work and uses some of the same cells in calculating both measures. Because depth and motion are jointly encoded in these cells, it's possible (under the right circumstances) to convert changes in one into changes in the other. An example is the Pulfrich Effect, in which a moving pendulum and some sunglasses create an illusion of the pendulum swinging in ellipses rather than in straight lines. It works because the sunglasses create an erroneous velocity perception, which gets converted into a depth change by the time it reaches your perception. It's what we'll be trying out here.

In Action

Make a pendulum out of a piece of string and something heavy to use as a weight, like a bunch of keys. You'll also need a pair of sunglasses or any shaded material. Ask a friend to swing the pendulum in front of you in a plane perpendicular to your line of sight, making sure it goes in a straight line, left to right. Now cover one of your eyes with the shades (this is easiest if you have old shades and can poke one of the lenses out). Keep both eyes open! You'll see that the pendulum now seems to be swinging back and forth as well as side to side, so that it appears to move in an ellipse. The two of you will look something like Figure 2-21.

Figure 2-21. Matt and Tom use sunglasses and a pendulum made out of a bootlace to test the Pulfrich Effect

Show your friend swinging the pendulum how you see the ellipse, and ask her to swing the pendulum in the opposite manner to counteract the illusion. Now the pendulum appears to swing in a straight line, and the thing that seems odd is not its distance from you but its velocity. Because it really is swinging in an elliptical pattern, it covers perceived distance at an inconsistent rate. This makes it seem as if the pendulum is making weird accelerations and decelerations.

How It Works

The classic explanation for the Pulfrich Effect is this: the shading slows down the processing of the image of the object in one eye (lower brightness means the neurons are less stimulated and pass on the signal at a slower rate [Hack #11] ); in effect, the image reaches one eye at a delay compared to when it reaches the other. Because the object is moving, this means the position of the image on the retina is slightly shifted. The difference between the images on the two retinas is used by the visual system to compute depth [Hack #22] . The slight displacement of the image on the retina of the shaded eye is interpreted as an indication of depth, as in Figure 2-22.

Figure 2-22. The geometry of the Pulfrich Effect: although the pendulum is, in reality, at point 1, the delay in processing makes it appear to be at point 2 to the shaded eye. When the eyes are taken together, the pendulum therefore appears to be at point 3, at a different depth.

This explanation puts the confounding of depth and motion down to the geometry of the situation: the point of confusion lies in the world, not in the brain. But by taking recordings of the responses of individual brain cells, Akiyuki Anzai and colleagues have shown that this isn't the whole story.
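The classic geometric account lends itself to a quick back-of-the-envelope calculation. Assuming the shaded eye effectively sees the pendulum where it was a few milliseconds earlier, intersecting the two eyes' lines of sight gives the apparent depth. This sketch is mine, not the book's; the 65 mm eye separation is a typical adult value and the 10 ms delay is merely an illustrative guess:

```python
def pulfrich_apparent_depth(true_depth_m, velocity_m_s, delay_s, eye_sep_m=0.065):
    """Apparent depth of a laterally moving target when the left eye's
    image is delayed. With the eyes at (+/- b/2, 0) and the shaded left
    eye seeing the target displaced back along its path by v * dt,
    intersecting the two lines of sight gives:

        depth' = depth * b / (b - v * dt)

    so motion in one direction pushes the target apparently farther away,
    and motion in the other pulls it nearer - hence the ellipse."""
    b = eye_sep_m
    return true_depth_m * b / (b - velocity_m_s * delay_s)

# Pendulum 1 m away, moving right at 0.5 m/s, 10 ms extra delay in the shaded eye:
print(pulfrich_apparent_depth(1.0, 0.5, 0.010))   # roughly 1.08 m: seems farther
# On the return swing (moving left) the sign flips and it seems nearer:
print(pulfrich_apparent_depth(1.0, -0.5, 0.010))  # roughly 0.93 m
```

A few centimeters of apparent depth change per swing direction is plenty to read as an ellipse, which matches how compelling the illusion feels with ordinary sunglasses.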
The confounding of motion and depth goes deeper than a mathematical ambiguity that arises from computing real-world interpretations from the visual images on the retinas. It seems that most of the neurons in the primary visual cortex are sensitive to motion and depth in combination. Each of these neurons responds optimally to some combination of motion and depth, and that optimum can be made up of varying amounts of each. This means that when you see something and judge its distance, your brain always also makes a judgment about its velocity, and vice versa. From the first point in your primary visual cortex at which information from the two eyes is combined (i.e., very early in visual processing), motion and depth are coupled. You don't get a sense of one without getting a sense of the other.

This may result from the use of motion parallax to detect depth [Hack #22] . Moving your head is one of the basic ways of telling how far away something is (you can see spitting cobras using motion parallax, shifting their heads from side to side to work out how far to spit). It works even if you have the use of only one eye.

The joint encoding theory explains why you can get Pulfrich-like effects in situations with less obvious geometry. If you watch television snow with one eye shaded, you will see two sheets of dots, one in front of the other, one moving to the left and one moving to the right. The reasons for this are complex, but they rest on the way our eyes try to match dots in the images from both eyes and use this matching to calculate depth (stereoscopic vision). Shading the image in one eye creates a bias, so that instead of perceiving all the dots at a single average depth, we see two sets of skewed averages; and because depth and motion are jointly encoded, these two planes move as well (in opposite directions).
In Real Life

The Pulfrich Effect can be used to create 3D effects for television, as long as people are willing to watch with one eye shaded. It's hard to do, since the motion of the image/camera has to be smooth to create a consistent illusion of depth, but it has been done.1

End Note

1. Descriptions of some TV shows that have included applications of the Pulfrich Effect (http://www.combsmusic.com/RosesReview.html).

See Also

· Anzai, A., Ohzawa, I., & Freeman, R. D. (2001). Joint-encoding of motion and depth by visual cortical neurons: Neural basis of the Pulfrich Effect. Nature Neuroscience, 4, 513-518.

· The Psychology Department at Southern Illinois University Carbondale's Pulfrich Effect page (http://www.siu.edu/~pulfrich) has many links for further information.

 

 


 

 

Hack 25. See Movement When All Is Still

Aftereffect illusions are caused by the way cells represent motion in the brain.

Why, when the train stops, does the platform you are looking at out the window appear to creep backward? The answer tells us something important about the architecture of the visual system and about how, in general, information is represented in the brain. The phenomenon is the motion aftereffect. Just as everything looks dark when you step indoors out of bright sunlight, or loud noises seem even louder after a very quiet environment, continuous motion in one direction leaves us with a bias in the other: an aftereffect.

In Action

Watch the video of a waterfall (http://www.biols.susx.ac.uk/home/George_Mather/Motion/MAE.HTML; QuickTime) for a minute or so, staring at the same position, then hit pause. You'll have the illusion of the water flowing upward. It works best with a real waterfall, if you can find one, although pausing at the end is harder, so look at something that isn't moving instead, like the cliff next to the waterfall. The effect doesn't work just for continuous downward motion. Any continuous motion will create an opposite aftereffect; that includes spiral motion, such as in the Flash demo at http://www.at-bristol.org.uk/Optical/AfterEffects_main.htm. The effect works only if just part of your visual field is moving (like the world seen through the window of a train). It doesn't occur if everything is moving, which is why, along with the fact that your motion is rarely continuous in a car, you don't suffer an aftereffect after driving.

How It Works

Part of what makes this effect so weird is the experience of motion without any experience of things actually changing location. Not only does this feel pretty funny, it suggests that motion and location are computed separately within the architecture of the brain. Brain imaging confirms this.

In some areas of the visual cortex, cells respond to movement, with different cells responding to different types of movement. In other areas of the visual cortex, cells respond to the location of objects in different parts of the visual field. Because the modules responsible for the computation of motion and the computation of location are separate, it is possible to experience motion without anything actually moving. The reverse is also possible: perceiving static images while being unable to experience motion. This happens to some stroke victims whose motion module is damaged; their life is experienced as a series of strobe-like scenes, even though, theoretically, their visual system is receiving all the information it would need to compute motion (that is, location and time).

You don't need brain imaging to confirm that this effect takes place in the cortex, integrating all kinds of data, rather than being localized at each eye. Look at the movie of the waterfall again, but with one eye closed. Swap eyes when you pause the video: you'll still get the effect, even with the eye that was never exposed to motion. That shows that the effect is due to some kind of central processing and is not happening at the retina.

To understand why you get aftereffects, you need to know a little about how information is represented in the brain. Different brain cells in the motion-sensitive parts of the visual system respond, or "fire," to different kinds of motion. Some fire most for quick sideway
