Enter the Visual Cortex

From the LGN, the signals are sent directly to the visual cortex. At the lower back of the cerebrum (so about a third of the way up your brain, on the back of your head, and toward the middle) is an area of the cortex called either the striate or primary visual cortex. It's called "striate" simply because it contains a dark stripe when closely examined.

Why the stripes? The primary visual cortex is literally six layers of cells, with a thicker and subdivided layer four where the two different pathways from the LGN land. These projections from the LGN create the dark band that gives the striate cortex its name. As visual information moves through this region, cells in all six layers play a role in extracting different features. It's way more complex than the LGN: the striate cortex contains about 200 million cells.

The first batch of processing takes place in a module called V1. V1 holds a map of the retina as source material, which looks more or less like the area of the eye it's dealing with, only distorted. The part of the map that represents the fovea (the high-resolution center of the eye) is all out of proportion because of the number of cells dedicated to it. It's as large as the rest of the map put together.
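To get a feel for how the fovea can hog so much of the map, here is a rough Python sketch of the classic complex-log style of retinotopic mapping, which is one common way to illustrate cortical magnification. Everything in it (the function name, the constant a, the sample eccentricities) is invented for illustration, not a description of the actual anatomy.

    import numpy as np

    # Rough illustration only: a complex-log mapping is one classic way to show
    # how equal steps across the retina get very unequal shares of cortical map.
    # The function name and the constant `a` are assumptions for this sketch.
    def retina_to_cortex(eccentricity_deg, angle_rad, a=0.5):
        """Map a retinal location (eccentricity, polar angle) to a 2-D
        'cortical' coordinate using w = log(z + a)."""
        z = eccentricity_deg * np.exp(1j * angle_rad)  # retinal position as a complex number
        w = np.log(z + a)                              # small |z| (the fovea) gets stretched out
        return w.real, w.imag

    # Doubling the eccentricity buys less and less extra cortex as we move
    # away from the fovea:
    for ecc in (0.5, 1, 2, 4, 8, 16, 32):
        x, _ = retina_to_cortex(ecc, 0.0)
        print(f"{ecc:5.1f} deg from center -> cortical x ~= {x:.2f}")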

Physically standing on top of this map are what are called hypercolumns. A hypercolumn is a stack of cells performing processing that sits on top of an individual location and extracts basic information. So some neurons will become active when they see a particular color, others when they see a line segment at a particular angle, and other more complex ones when they see lines at certain angles moving in particular directions. This first map and its associated hypercolumns constitute the area V1 (V for "vision"); it performs really simple feature extraction.
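As a loose software analogy (and no more than that), an orientation-tuned cell is often modelled as a Gabor filter: a small patch that responds strongly when a line at its preferred angle lands on its bit of the map. The Python sketch below uses made-up sizes and a toy image to show how a small bank of such units would single out a vertical bar.

    import numpy as np

    # Analogy only: a Gabor filter is the standard textbook stand-in for an
    # orientation-tuned V1 unit. Sizes, wavelengths, and the toy image are
    # invented for illustration.
    def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # rotate the coordinate frame
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * xr / wavelength)
        return envelope * carrier                    # theta=0 gives vertical stripes

    # A toy patch of the visual map containing a single vertical bar:
    patch = np.zeros((15, 15))
    patch[:, 7] = 1.0

    # Each unit in the stack is tuned to a different angle; the vertically
    # tuned unit (theta=0 here) gives by far the biggest response.
    for deg in (0, 45, 90, 135):
        response = np.sum(patch * gabor(theta=np.deg2rad(deg)))
        print(f"unit tuned at {deg:3d} deg: response {response:6.2f}")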

The subsequent visual processing areas named V2 and V3 (again, V for "vision," the number just denotes order), also in the visual cortex, are similar. Information gets bumped from V1 to V2 by dumping it into V2's own map, which acts as the center for its batch of processing. V3 follows the same pattern: at the end of each stage, the map is recombined and passed on.
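If it helps to think of this in programming terms, the pattern is a pipeline: each stage keeps its own copy of the map, pulls something out of it, and hands the result to the next stage. The Python sketch below is purely an analogy; the stage functions are invented placeholders, not what V1, V2, and V3 actually compute.

    import numpy as np

    # Pure analogy: each "area" owns its own map, extracts something from it,
    # and forwards the result. The stage contents are placeholders.
    def v1_stage(image):
        # crude edge map: local left-right intensity differences
        return np.abs(np.diff(image, axis=1, prepend=image[:, :1]))

    def v2_stage(edge_map):
        # pool the edge map over 2x2 neighbourhoods into a coarser map
        h, w = edge_map.shape
        return edge_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def v3_stage(pooled):
        # recombine and normalize before passing the map onward
        return pooled / (pooled.max() + 1e-9)

    retina = np.random.rand(8, 8)            # stand-in for the retinal input
    out = v3_stage(v2_stage(v1_stage(retina)))
    print(out.shape)                          # each stage re-maps and forwards its result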

2.2.4. "What" and "Where" Processing Streams

So far visual processing has been mostly linear. There are feedback loops (the LGN gets information from elsewhere in the cortex, for example) and crossovers, but mostly the coarse and fine visual pathways have been processed separately, and there's been a reasonably steady progression from the eye to the primary visual cortex.

From V3, visual information is sent to dozens of areas all over the cortex. These modules send information to one another and draw from and feed other areas. It stops being a production line and turns into a big construction site, with many areas extracting and associating different features, all simultaneously.

There's still a broad distinction between the two pathways though. The coarse visual information, the magnocellular pathway, flows up to the top of the head. It's called the dorsal stream, or, more memorably, the "where" stream. From here on, there are modules to spot motion and to look for broad features.



The fine detail of vision from the parvocellular pathway comes out of the primary visual cortex and flows down the ventral stream, the "what" stream. The destination for this stream is the inferior temporal lobe, the underside of the cerebrum, above and behind the eyes.

As the name suggests, the "what" stream is all about object recognition. On the way to the temporal lobe, there's a stop-off for a little further processing at a unit called the lateral occipital complex (LOC). What happens here is key to what'll happen at the final destination points of the "what" stream. The LOC looks for similarity in color and orientation and groups parts of the visual map together into objects, separating them from the background.
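Grouping by similarity is easy to caricature in code. The sketch below (plain Python/NumPy, with a made-up scene and threshold) gives neighbouring locations the same label whenever their values are close, which is roughly the flavour of pulling an object out of its background; it is not a model of the LOC itself.

    import numpy as np
    from collections import deque

    # Caricature of grouping-by-similarity: a flood fill that gives neighbouring
    # pixels the same label when their values are within `tol` of each other.
    def group_by_similarity(values, tol=0.2):
        h, w = values.shape
        labels = -np.ones((h, w), dtype=int)
        current = 0
        for i in range(h):
            for j in range(w):
                if labels[i, j] != -1:
                    continue
                labels[i, j] = current            # start a new group at this seed
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                                and abs(values[ny, nx] - values[y, x]) < tol):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                current += 1
        return labels

    # A toy "scene": a bright patch sitting on a dark background.
    scene = np.array([[0.1, 0.1, 0.9, 0.9],
                      [0.1, 0.1, 0.9, 0.9],
                      [0.1, 0.1, 0.1, 0.1]])
    print(group_by_similarity(scene))   # two labels: the object and the background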

Later on, these objects will be recognized as faces or whatever else. The LOC illustrates a common method: the visual information is processed to look for features. When found, information about those features is added to the pool of data, and the whole lot is sent on.

