Hack 50. Give Big-Sounding Words to Big Concepts

The sounds of words carry meaning too, as big words for big movements demonstrate.

Steven Pinker, in his popular book on the nature of language, The Language Instinct,1 encounters the frob-twiddle-tweak continuum as a way of talking about adjusting settings on computers or stereo equipment. The Jargon File, the longtime glossary of hacker language, has the following under frobnicate (http://www.catb.org/~esr/jargon/html/F/frobnicate.html):

Usage: frob, twiddle, and tweak sometimes connote points along a continuum. `Frob' connotes aimless manipulation; twiddle connotes gross manipulation, often a coarse search for a proper setting; tweak connotes fine-tuning. If someone is turning a knob on an oscilloscope, then if he's carefully adjusting it, he is probably tweaking it; if he is just turning it but looking at the screen, he is probably twiddling it; but if he's just doing it because turning a knob is fun, he's frobbing it.2

Why frob first? Frobbing is a coarse action, so it has to go with a big lump of a word. Twiddle is smaller, more delicate. And tweak, the finest adjustment of all, feels like a tiny word. It's as if the actual sound of the word, as it's spoken, carries meaning too.

In Action

The two shapes in Figure 4-3 are a maluma and a takete. Take a look. Which is which?

Don't spoil the experiment for yourself by reading the next paragraph! When you try this out on others, you may want to cover up all but the figure itself.

 

Figure 4-3. One of these is a "maluma," the other a "takete." Which is which?

If you're like most people who have looked at shapes like these since the late 1920s, when Wolfgang Köhler devised the experiment, you said that the shape on the left is a "takete," and the one on the right is a "maluma." Just like "frob" and "tweak," in which the words relate to the movements, "takete" has a spiky character and "maluma" feels round.

How It Works

Words are multilayered in meaning, not just indices to some kind of meaning dictionary in our brains. Given the speed of speech, we need as many clues to meaning as we can get, to make understanding faster. Words that are just arbitrary noises would be wasteful. Clues to the meaning of speech can be packed into the intonation of a word, what other words are nearby, and the sound itself.

Brains are association machines, and communication makes full use of that fact to impart meaning.

In Figure 4-3, the more rounded shape is associated with big, full objects, objects that tend to have big resonant cavities, like drums, that make booming sounds if you hit them. Your mouth becomes a big, hollow, resonant cavity when you say the word "maluma." The word rolls around your mouth.

On the other hand, a spiky shape is more like a snare drum or a crystal. It clatters and clicks. The corresponding sound is full of what are called plosives, sounds like t- and k- that involve popping air out.



That's the association engine of the brain in action. The same goes for "frob" and "tweak." The movement your mouth and tongue go through to say "frob" is broad and coarse like the frobbing action it communicates. You put your tongue along the base of your mouth and make a large cavity to make a big sound. To say "tweak" doesn't just remind you of finely controlled movement, it really entails more finely controlled movement of the tongue and lips. Making the higher-pitched noise means making a smaller cavity in your mouth by pushing your tongue up, and the sound at the end is a delicate movement.

Test this by saying "frob," "twiddle," and "tweak" first thing in the morning, when you're barely awake. Your muscle control isn't as good as it usually is when you're still half-asleep, so while you can say "frob" easily, saying "tweak" is pretty hard. It comes out more like "twur." If you're too impatient to wait until the morning, just imagine it is first thing in the morning: as you stretch, say the words to yourself with a yawn in your voice. The difference is clear: frobbing works while you're yawning; tweaking doesn't.

Aside from denser meaning, these correlations between motor control (either moving your hands to control the stereo or saying the word) and the word itself may give some clues to what language was like before it was really language. Protolanguage, the system of communication before any kind of syntax or grammar, may have relied on these metaphors to impart meaning.3 For humans now, language includes a sophisticated learning system in which, as children, we figure out what words mean what, but there are still throwbacks to the earlier time: onomatopoeic words are ones that sound like what they mean, like "boom" or "moo." "Frob" and "tweak" may be similar to that, only drawing in bigness or roundness from the visual (for shapes) or motor (for mucking around with the stereo) parts of the brain.

In Real Life

Given the relationship between the sound of a word (due to its component phonemes) and its feel (some kind of shared subjective experience), sound symbolism is one of the techniques used in branding. Naming consultants take into account the maluma and takete aspects of word meaning, not just dictionary meaning, and come up with names for products and companies on demand (for a price, of course). One of the factors that influenced the naming of the BlackBerry wireless email device was the b- sound at the beginning. According to the namers, it connotes reliability.4

End Notes

1. Pinker, S. (1994). The Language Instinct: The New Science of Language and Mind. London: Penguin Books Ltd.

2. The online hacker Jargon File, Version 4.1.0, July 2004 (http://www.catb.org/~esr/jargon/index.html).

3. This phenomenon is called phonetic symbolism, or phonesthesia. Some people experience the perception of color when reading words or numbers, a phenomenon called synaesthesia. Ramachandran and Hubbard suggest that synaesthesia is how language started in the first place. See: Ramachandran, V. S., & Hubbard, E. M. (2001). Synaesthesia: A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34. This can also be found online at http://psy.ucsd.edu/chip/pdf/Synaesth_JCS.pdf.

4. Begley, Sharon. "Blackberry and Sound Symbolism" (http://www.stanford.edu/class/linguist34/Unit_08/blackberry.htm), reprinted from the Wall Street Journal, August 26, 2002.

See Also

· Naming consultancies were especially popular during the 1990s dot-com boom. Alex Frankel took a look for Wired magazine in June 1997 in "Name-o-rama" (http://www.wired.com/wired/archive/5.06/es_namemachine.html).

Hack 51. Stop Memory-Buffer Overrun While Reading

The length of a sentence isn't what makes it hard to understand; it's how long you have to wait for a phrase to be completed.

When you're reading a sentence, you don't understand it word by word, but rather phrase by phrase. Phrases are groups of words that can be bundled together, and they're related by the rules of grammar. A noun phrase will include nouns and adjectives, and a verb phrase will include a verb and a noun, for example. These phrases are the building blocks of language, and we naturally chunk sentences into phrase blocks just as we chunk visual images into objects. What this means is that we don't treat every word individually as we hear it; we treat words as parts of phrases and have a buffer (a very short-term memory) that stores the words as they come in, until they can be allocated to a phrase. Sentences become cumbersome not if they're long, but if they overrun the buffer required to parse them, and that depends on how long the individual phrases are.

In Action

Read the following sentence to yourself:

· While Bob ate an apple was in the basket.

Did you have to read it a couple of times to get the meaning? It's grammatically correct, but the commas have been left out to emphasize the problem with the sentence. As you read about Bob, you add the words to an internal buffer to make up a phrase. On first reading, it looks as if the whole first half of the sentence is going to be your first self-contained phrase ("While Bob ate an apple"), but you're being led down the garden path. The sentence is constructed to dupe you. After the first phrase, you mentally add a comma and read the rest of the sentence...only to find out it makes no sense. Then you have to think about where the phrase boundary falls (aha, the comma is after "ate," not "apple"!) and read the sentence again to reparse it.
Note that you have to read again to break it into different phrases; you can't just juggle the words around in your head. Now try reading these sentences, which all have the same meaning and increase in complexity:

· The cat caught the spider that caught the fly the old lady swallowed.

· The fly swallowed by the old lady was caught by the spider caught by the cat.

· The fly the spider the cat caught caught was swallowed by the old lady.

The first two sentences are hard to understand, but make some kind of sense. The last sentence is merely rearranged but makes no natural sense at all. (This is all assuming it makes some sort of sense for an old lady to be swallowing cats in the first place, which is patently absurd, but it turns out she swallowed a goat too, not to mention a horse, so we'll let the cat pass without additional comment.)

How It Works

Human languages have the special property of being recombinant. This means a sentence isn't woven like a scarf, where if you want to add more detail you have to add it at the end. Sentences are more like Lego. The phrases can be broken up and combined with other sentences or popped open in the middle and more bricks added. Have a look at these rather unimaginative examples:

· This sentence is an example.

· This boring sentence is a simple example.

· This long, boring sentence is a simple example of sentence structure.

The way sentences are understood is that they're parsed into phrases. One type of phrase is a noun phrase, the subject of the sentence. In "This sentence is an example," the noun phrase is "this sentence." For the second, it's "this boring sentence." Once a noun phrase is fully assembled, it can be packaged up and properly understood by the rest of the brain. During the time you're reading the sentence, however, the words sit in your verbal working memory (a kind of short-term buffer) until the phrase is finished.

There's an analogy here with visual processing. It's easier to understand the world in chunks; hence the Gestalt Grouping Principles [Hack #75]. With language, which arrives serially rather than in parallel like vision, you can't be sure what the chunks are until the end of the phrase, so you have to hold it unchunked in working memory until you know where the phrase ends. M.W.

Verb phrases work the same way. When your brain sees "is," it knows there's a verb phrase starting and holds the subsequent words in memory until that phrase has been closed off (with the word "example," in the first sentence in the previous list). Similarly, the last part of the final sentence, "of sentence structure," is a prepositional phrase, so it's also self-contained.

Phrase boundaries make sentences much easier to understand. Rather than the subject of the third example sentence being three times more complex than the first (it's three words, "long, boring sentence," versus one, "sentence"), it can be understood as the same object, but with modifiers. It's easier to see this if you look at the tree diagrams shown in Figure 4-4. A sentence takes on a treelike structure, for these simple examples, in which phrases are smaller trees within that. To understand a whole phrase, its individual tree has to join up. These sentences are all easy to understand because they're composed of very small trees that are completed quickly.

Figure 4-4. How the example sentences form trees of phrases

We don't use just grammatical rules to break sentences into chunks. One of the reasons the sentence about Bob was hard to understand is that you expect, after seeing "Bob ate," to learn about what Bob ate. When you read "an apple," it's exactly what you expect to see, so you're happy to assume it's part of the same phrase. To find phrase boundaries, we check individual word meaning and likelihood of word order, continually revise the meaning of the sentence, and so on, all while the buffer is growing.
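The buffer load described here can be sketched as a toy stack model. This is an illustration invented for this point, not anything from the text: each phrase that opens but can't yet be completed is pushed onto a stack, and popped when it is resolved; the peak stack depth stands in for the working-memory demand. The "open"/"close" encodings of the two old-lady sentences below are hand-made simplifications.

```python
# Toy model: verbal working memory as a stack of unfinished phrases.
# "open" = a phrase starts and must wait for completion; "close" = resolved.
# Peak stack depth approximates the buffer load a sentence imposes.

def peak_buffer_load(events):
    """Return the maximum number of unfinished phrases held at once."""
    depth = peak = 0
    for ev in events:
        if ev == "open":
            depth += 1
            peak = max(peak, depth)
        else:  # "close"
            depth -= 1
    return peak

# "The cat caught the spider that caught the fly the old lady swallowed."
# Each phrase is resolved almost as soon as it opens:
right_branching = ["open", "close", "open", "close", "open", "close"]

# "The fly the spider the cat caught caught was swallowed by the old lady."
# Three noun phrases pile up before any verb resolves them:
center_embedded = ["open", "open", "open", "close", "close", "close"]

print(peak_buffer_load(right_branching))  # 1
print(peak_buffer_load(center_embedded))  # 3
```

The center-embedded sentence is the one readers choke on, and in this sketch it is exactly the one with the deeper stack.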
But holding words in memory until phrases complete has its own problems, even apart from sentences that deliberately confuse you, which is where the old lady comes in. Both of the first two remarks on the old lady's culinary habits require only one phrase to be held in the buffer at a time. Think about what phrases are left incomplete at any given word. There's no uncertainty over what any given "caught" or "by" refers to: it's always resolved by the very next words. For instance, your brain reads "The cat" (in the first sentence) and immediately asks, "did what?" Fortunately the answer is the very next phrase: "caught the spider." "OK," says your brain, and it pops that phrase out of working memory and gets on with figuring out the rest of the sentence.

The last example about the old lady is completely different. By the time your brain gets to the words "the cat," three questions are left hanging. What about the cat? What about the spider? What about the fly? Those questions are answered in quick succession: the fly the old lady swallowed, the spider that caught the fly, and so on. But because all of these questions are of the same type, the same kind of phrase, they clash in verbal working memory, and that's the limit on sentence comprehension.

In Real Life

A characteristic of good speeches (or anything passed down in an oral tradition) is that they minimize the amount of working memory, or buffer, required to understand them. This doesn't matter so much for written text, in which you can skip back and read the sentence again to figure it out; but you have only one chance to hear and comprehend the spoken word, so you'd better get it right the first time around. That's why speeches written down always look so simple.

That doesn't mean you can ignore the buffer size for written language. If you want to make what you say, and what you write, easier to understand, consider the order in which you are giving information in a sentence.
See if you can group together the elements that go together, so as to reduce the demand on the reader's concentration. More people will get to the end of your prose with the energy to think about what you've said or do what you ask.

See Also

· Caplan, D., & Waters, G. (1998). Verbal working memory and sentence comprehension (http://cogprints.ecs.soton.ac.uk/archive/00000623).

· Steven Pinker discusses parse trees and working memory extensively in The Language Instinct. Pinker, S. (1994). The Language Instinct: The New Science of Language and Mind. London: Penguin Books Ltd.

Hack 52. Robust Processing Using Parallelism

Neural networks process in parallel rather than serially. This means that as processing of different aspects proceeds, previously processed aspects can be used quickly to disambiguate the processing of others.

Neural networks are massively parallel computers. Compare this to your PC, which is a serial computer. Yeah, sure, it can emulate a parallel processor, but only because it is really quick. However quickly it does things, though, it does them only one at a time.

Neural processing is glacial by comparison. A neuron in the visual cortex is unlikely to fire more than once every 5 milliseconds, even at its maximum activation. Auditory cells have higher firing rates, but even they have an absolute minimum gap of 2 ms between sending signals. This means that for actions that take 0.5 to 1 second (such as noticing a ball coming toward you and catching it, and many of the things cognitive psychologists test) there is a maximum of around 100 consecutive computations the brain can do in this time. This is the so-called 100-step rule.1

The reason your brain doesn't run like a PC with a 0.0001 MHz processor is that the average neuron connects to between 1,000 and 10,000 other neurons. Information is routed, and routed back, between multiple interconnected neural modules, all in parallel. This allows the slow speed of each neuron to be overcome, and also makes it natural, and necessary, that all aspects of a computational job be processed simultaneously, rather than in stages. Any decision you make or perception you have (because what your brain decides to provide you with as a coherent experience is a kind of decision too) is made up of the contributions of many processing modules, all running simultaneously. There's no time for them to run sequentially, so they all have to be able to run with raw data and whatever else they can get hold of at the time, rather than waiting for the output of other modules.
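The arithmetic behind the 100-step rule is worth making explicit. This back-of-the-envelope calculation just uses the figures quoted above (a half-second action, roughly 5 ms between spikes):

```python
# Rough arithmetic behind the "100-step rule": with about 5 ms between
# spikes, a strictly serial chain of neurons can only be ~100 links deep
# within a half-second action.

action_time = 0.5        # seconds: e.g., seeing and catching a ball
firing_interval = 0.005  # seconds: ~5 ms minimum between spikes

max_serial_steps = action_time / firing_interval
print(round(max_serial_steps))  # 100
```

A full second at the same firing rate only doubles this to around 200 steps, which is still nothing like the billions of serial steps a PC gets through in the same time.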
In Action

A good example of simultaneous processing is in understanding language. As you hear or read, you use the context of what is being said, the possible meanings of the individual words, the syntax of the sentences, and how the sounds of each word (or the letters of each word) look, to figure out what is being said.

Consider the next sentence: "For breakfast I had bacon and ****." You don't need to know the last word to understand it, and you can make a good guess at the last word. Can you tell the meaning of "Buy v!agra" if I email it to you? Of course you can; you don't need to have the correct letter in the second word to know what it is (if it doesn't get stopped by your spam filters first, that is).

How It Works

The different contributions (the different clues you use in reading) inform one another, to fill in missing information and correct mismatched information. This is one of the reasons typos can be hard to spot in text (particularly your own, in which the contribution of your understanding of the text autocorrects, in your mind, the typos before you notice them), but it's also why you're able to have conversations in loud bars. The parallel processing of different aspects of the input provides robustness to errors and incompleteness and allows information from different processes to interactively disambiguate each other.

Do you remember the email circular that went around (http://www.mrc-cbu.cam.ac.uk/personal/matt.davis/Cmabrigde) saying that you can write your sentences with the internal letters rearranged and still be understood just as well?

Apparently, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm.

It's not true, of course. You understand such scrambled sentences only nearly as well as unscrambled ones.
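You can play with this yourself: generate scrambled sentences, and then see how little machinery it takes to undo them when the first and last letters survive. The sketch below is purely illustrative (the tiny vocabulary is a stand-in for a real dictionary): it shuffles interior letters, then recovers words by loose matching on first letter, last letter, and the multiset of letters in between, which is exactly the information a scrambled word still carries.

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters; first and last letters stay put."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble(sentence, seed=1):
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in sentence.split())

# Loose matching: index words by (first, last, sorted interior letters).
def key(word):
    return (word[0], word[-1], "".join(sorted(word[1:-1])))

vocabulary = ["important", "letters", "total", "right", "place", "problem"]
index = {key(w): w for w in vocabulary}

def unscramble_word(word):
    return index.get(key(word), word)  # fall back to the input if no match

scrambled = scramble("important letters")
print(scrambled)  # e.g., "iprmtnaot ltteers" (depends on the seed)
print(" ".join(unscramble_word(w) for w in scrambled.split()))  # important letters
```

Note that exact-template matching would fail on every scrambled word here; relaxing the match to the surviving cues is what makes recovery trivial, which is the point the hack makes about parallel constraint satisfaction.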
We can figure out what the sentence is in this context because of the high redundancy of the information we're given. We know the sentence makes sense, so that constrains the range of possible words that can be in it, just as the syntax does: the rules of grammar mean only some words are allowed in some positions. The word-length information is also there, as are the letters in the word. The only missing thing is the position information for the internal letters. And compensating for that is an easy bit of processing for your massively parallel, multiple-constraint-satisfying language faculty.

Perhaps the reason it seems surprising that we can read scrambled sentences is that a computer faced with the same problem would be utterly confused. Computers have to have each word fit exactly to their template for that word. No exact match, no understanding. OK, so Google can suggest correct spellings for you, but type in i am cufosned and it's stuck, whereas a human could take a guess (they face off in Figure 4-5).

Figure 4-5. Google and my friend William go head to head

This same kind of process works in vision. You have areas of visual cortex responsible for processing different elements. Some provide color information, some information on motion or depth or orientation. The interconnections between them mean that when you look at a scene they all start working and cooperatively figure out what the best fit to the incoming data is. When a fit is found, your perception snaps to it and you realize what you're looking at. This massive parallelism and interactivity mean that it can be misleading to label individual regions as "the bit that does X"; truly, no bit of the brain ever operates without every other bit of the brain operating simultaneously, and outside of that environment single brain regions wouldn't work at all.

End Note

1. Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254 (http://cognitrn.psych.indiana.edu/rgoldsto/cogsci/Feldman.pdf).

Chapter 5. Integrating

Section 5.1. Hacks 53-61

Hack 53. Put Timing Information into Sound and Location Information into Light
Hack 54. Don't Divide Attention Across Locations
Hack 55. Confuse Color Identification with Mixed Signals
Hack 56. Don't Go There
Hack 57. Combine Modalities to Increase Intensity
Hack 58. Watch Yourself to Feel More
Hack 59. Hear with Your Eyes: The McGurk Effect
Hack 60. Pay Attention to Thrown Voices
Hack 61. Talk to Yourself

5.1. Hacks 53-61

This chapter looks at how we integrate our perceptions, images (Chapter 2), sounds (Chapter 4), our own mechanisms of attention (Chapter 3), and our other senses [Hack #12], into a unified perceptual experience. For instance, how do we use our eyes and ears together? (We prefer to use our ears for timing and eyes for determining location [Hack #53].) And what are the benefits of doing so? (We feel experiences that happen in two senses simultaneously as more intense [Hack #57] and [Hack #58].)

Sometimes, we overintegrate. The Stroop Effect [Hack #55], a classic experiment, shows that if we try to respond linguistically, irrelevant linguistic input interferes. In its eagerness to assimilate as much associated information, as much context, as possible, the brain makes it very hard to ignore even what we consciously know is unimportant.

We'll also look at one side effect and one curious limitation of the way we integrate sense information. The first goes to show that even the brain's errors can be useful, and that we can actually use a mistaken conclusion about a sound's origin to better listen to it [Hack #60]. The second answers the question: do we really need language to perform what should be a basic task, of making a simple deduction from color and geometry? In some cases, it would appear so [Hack #61].
