
MORPHOLOGY AND LEXICAL SEMANTICS

In his comprehensive descriptive work on English word formation, Hans Marchand expressed the following opinion about the meaning of derivational suffixes (1969, 215): “Unlike a free morpheme a suffix has no meaning in itself, it acquires meaning only in conjunction with the free morpheme which it transposes.” In context, what Marchand means does not seem nearly so radical. He goes on in the same passage to explain that derivational suffixes change either syntactic or semantic class, and his prime example is the suffix -er (1969, 215):

As a word class transposer, -er plays an important part in deverbal derivatives, while in denominal derivatives its role as a word class transposer is not important, since basis and derivative in the majority of cases belong to the same word class "substantive" ...; its role as a semantic transposer, however, is different in this case. Although most combinations denote a person, more specifically a male person (types potter, Londoner, banqueter, weekender), many other semantically unrelated senses are possible. Derivatives with -er may denote a banknote, bill (fiver, tenner), a blow (backhander), a car, a bus (two-seater, two-decker), a collar (eight-incher), a gun (six-pounder), a gust of wind (noser, souther), a lecture at a certain hour (niner "a class at nine o'clock"), a line of poetry (fourteener), a ship (three-decker, freighter, …).

Marchand of course does not mean to say that -er actually means “car,” “bus,” “banknote,” or “gust of wind” in these forms. Rather he suggests that the meaning of the affix is fluid enough to allow all of these meanings in combination with particular bases. But why should this be? What, if anything, does -er add to a base to give rise to these meanings?

This book is about the semantics of word formation. More specifically, it is about the meaning of morphemes and how they combine to form meanings of complex words, including derived words (writer, unionize), compounds (dog bed, truck driver), and words formed by conversion. To my knowledge there is no comprehensive treatment of the semantics of word formation in the tradition of generative morphology. One reason for this is perhaps the late start morphology got in the history of generative grammar; generative morphology has arguably come into its own as a legitimate field of study only since the mid-1970s and has concentrated on structural and phonological issues concerning word formation to the neglect of semantic issues.

But another reason, and I would argue a more important one, is that until now a systematic way of talking about the lexical semantics of word formation (as opposed to words) has largely been lacking. Yet questions like the following concerning the meaning of word-formation processes have continued to be raised sporadically:

o The polysemy question: for example, why does the affix -ize in English sometimes seem to mean "cause to become X" (unionize, randomize), sometimes "cause to go into X" (containerize), and sometimes "perform X" (anthropologize); why does the affix -er sometimes create agent nouns (writer), sometimes instrument nouns (opener), and sometimes patient nouns (loaner)? Do these affixes have any unitary core of meaning at all, and if so, what is it?



o The multiple-affix question: why does English often have several affixes that perform the same function or create the same kind of derived word (e.g., -ize, -ify for causative verbs; -er, -ant for agent nouns)?

o The zero-derivation question: how do we account for word formation in which there is semantic change without any concomitant formal change (e.g., in so-called conversion or zero derivation)?

o The semantic mismatch question: why is the correspondence between form and meaning in word formation sometimes not one-to-one? On the one hand, why do there sometimes seem to be morphemes that mean nothing at all (e.g., the -in- in longitudinal or the -it- in repetition)? On the other hand, why do we sometimes find “derivational redundancy,” that is, cases in which the same meaning seems to be expressed more than once in a word (e.g., in dramatical or musicianer)? Finally, why does the sense of a morpheme sometimes seem to be subtracted from the overall meaning of the word (e.g., realistic does not mean "pertaining to a realist")?

Such questions are related: all are part of a larger question of how we characterize the meanings of complex words. The goal of this work is to develop and justify a framework in which such questions can fruitfully be raised, discussed, and answered.

I am, of course, not the first to raise these questions. They have their origins at least as far back as American Structuralist debates about the architecture of the theory of word formation. Hockett perhaps first framed the question in Structuralist theory, contrasting Item and Arrangement (IA) theories of word formation with Item and Process (IP) theories. In a classic Item and Arrangement theory, a word is built up by addition of morphemes, each of which contributes a distinct meaning to the complex word; the relationship between form and meaning is presumed most often to be one-to-one. Item and Process theorists look at word formation as the operation of processes or rules on base morphemes or words, each rule adding to or changing the form of the base, and concomitantly having some characteristic semantic or morphosyntactic effect; but again, the relationship between process and semantic or morphosyntactic effect is typically one-to-one. Contrasting with IA and IP theories are so-called Word and Paradigm (WP) theories, which map semantic and morphosyntactic properties onto words in a many-to-one fashion. IA, IP, and WP frameworks have all had their advocates within generative traditions. My own work has rightly been characterized as falling within the IA camp, as has been the work of Selkirk, Williams, and others. The theory of Aronoff falls into the IP camp, and that of Anderson into the Word and Paradigm camp.

Further, the question of form-meaning correspondence in word formation has led in recent years to the "Separation Hypothesis," most prominently advocated in Beard's Lexeme Morpheme Base Morphology. Beard, and also Corbin and Szymanek, have argued that since the form-meaning correspondence in morphology is rarely one-to-one, the semantic effects of word formation should be strictly separated from its formal effects. In such theories, word formation consists of a semantic or morphosyntactic process (for example, formation of causative verbs or agent nouns) which is strictly separated from the addition of formal morphological markers (e.g., -ize or -er). There is no expectation within such a theory that the correspondence between meaning and form should be one-to-one.

On its surface, this debate seems to be about the architecture of a morphological theory, specifically about whether morphemes – units smaller than the word – should be treated as Saussurian signs, that is, pairings of sound and meaning, and if so what we should expect about the pairing of sound and meaning. The discussion, that is, has largely been over the issue of correspondence. But at the heart of the problem is a more fundamental question: how do we talk about the meanings which can be said to be in correspondence (one-to-one, one-to-many, many-to-one) with structural units?

I argue in this book that this issue will not be resolved by looking at the architecture of a morphological theory, at least not until we have a way of talking about (describing, comparing) the semantic effects of word-formation processes in some detail and depth. We will not be able to talk about the correspondence of meaning and form until we can say in some useful way what complex words mean - what the meaning or meanings of the suffix -ize is (is it one or many meanings, and if many are they related meanings?), whether this meaning is the same as that of -ify, and so on.

I suggest that we do not yet have the theoretical apparatus to conduct such discussions. In order to talk about the semantics of word formation we need a framework of lexical semantic description which has several distinctive properties. First, it must be decompositional; it must involve some relatively small number of primitives or atoms, and the primitives or atoms it makes available should be of the right “grain size” to allow us to talk about the meanings of complex words. Further, such a descriptive framework must allow us to concentrate on lexical semantic properties, rather than semantic properties that manifest themselves at higher levels of syntactic structure (i.e., phrases, sentences, propositions, discourses). It must also be thoroughly cross-categorial, allowing us to discuss in equal depth the semantic characteristics of nouns, verbs, adjectives, and perhaps other categories. Finally, if we agree that word formation often creates new lexemes, our theory must allow us to talk about the meanings of complex words in the same terms that we use to talk about the meanings of simplex lexemes.

Let me start first with why such a theory of semantic description must be decompositional. This is a controversial choice in light of Fodor's extensive arguments that decompositional semantics is a waste of time. Fodor defends a position he calls Informational Atomism, consisting of two parts:

Informational semantics: content is constituted by some sort of nomic, mind-world relation. Correspondingly, having a concept (concept possession) is constituted, at least in part, by being in some sort of nomic, mind-world relation.

Conceptual atomism: most lexical concepts have no internal structure.

I have no quibble with Fodor's doctrine that nomic mind–world relations are the fundamental stuff of meanings at some level, that is, that meaning must ultimately be grounded in a lawful relation between language and the world. I understand from the philosophers that the only game in town is to anchor meaning in truth conditions at some level. But I find Fodor’s notion of conceptual atomism to be question-begging, especially if one is interested in questions concerning the meanings of complex words. Fodor argues that there is no sound justification for lexical decomposition: “I know of no reason, empirical or a priori, to suppose that the expressive power of English can be captured in a language whose stock of morphologically primitive expressions is interestingly smaller than the lexicon of English.” The process of decomposing words – so the argument goes – merely defers the problem of meaning by passing it on to a metalanguage whose semantics generally remains unexplored. Nevertheless, Fodor believes in the compositionality of meaning – meanings are built up of smaller pieces.

Fodor is right to question the nature of primitives. But in doing so, he declares that we have no grounds for preferring one set of primitives to another, and that the default set of primitives is “the lexicon of English,” that is, the set of words of which the lexicon is constituted. But surely we must consider carefully what constitutes the lexicon – what its parts are, what makes up words – before we decide that the word is the correct grain size for conceptual primitives. If words are themselves formally complex, can’t they be semantically complex, and therefore might not the right grain size for semantic primitives be smaller than the concepts embodied in words? In other words, there may be nowhere to go but decomposition if one wants to talk about the meanings of complex words; I therefore take the leap into decompositional semantics in full knowledge of the philosophical problems it engenders.

There are, of course, many systems of semantic description in the literature which are decompositional in one way or another and which we might bend to our purposes. Nevertheless, I suggest that none of the currently available theories of semantic analysis has all the right properties for the job at hand.

First, Logical or Model Theoretic Semantics is not suitable for my purposes, as it does not yet allow for a sufficient focus on lexical aspects of meaning. Model Theoretic Semantics has concentrated primarily on aspects of propositional meaning including predication, quantification, negation, and the semantics of complementation. There has, of course, been work on lexical meaning, most notably the work of Dowty and Verkuyl on verbal aspect and verb classes generally. Dowty is especially notable in that he directly addresses issues of derivation as well as issues concerning the simplex verbal lexicon. Other researchers in this tradition have contributed enormously to our understanding of other lexical classes; see, for example, Carlson and Kratzer on the individual/stage level distinction in adjectives and nouns; Landman, Gillon, Schwarzschild, Schein, among others on plurals; Bierwisch on prepositions; and Bierwisch on adjectives. Nevertheless, at this point Model Theoretic Semantics has not yet produced a system of decomposition that is sufficiently broad and cross-categorial, and at the same time fine-grained enough to address the questions I raise here.

Also available for our purposes are semantic systems such as those of Szymanek, Jackendoff, Pustejovsky, and Wierzbicka, all of which are decompositional in one way or another and more closely concentrated on the lexical domain. Although each of these systems has some attractive characteristics, none of them has all the characteristics that I believe are necessary to the task at hand.

Ray Jackendoff has, since the early seventies, developed a decompositional system of semantic representation or Lexical Conceptual Structure, as he calls it, which has many of the characteristics I mention above. Jackendoff’s Lexical Conceptual Structures (LCSs) are hierarchical arrangements of functions and arguments. The primitives of the system are semantic functions such as BE, GO, STAY, ORIENT, CAUSE, TO, FROM, THING, and PATH, and in some later work increasingly smaller atoms of meaning represented as features (e.g., [bounded], [internal structure]) which allow for the discussion of aspectual characteristics of verbs and quantificational characteristics of nouns. I see my own work largely as an outgrowth and extension of the work of Jackendoff and related theorists, and I owe a great debt to their pioneering work. Nevertheless, Jackendoff’s system as it stands is not entirely suitable to tackle the issues of morphological semantics I raised above. For one thing, his work has been heavily weighted towards the description of verbal meanings, and as yet is insufficiently cross-categorial to allow for a full discussion of the semantics of nouns and adjectives, which we would need in a full consideration of word-formation processes such as derivation, compounding, and conversion. Secondly, as I will argue in what follows, the "grain size" of many of Jackendoff’s primitives is not quite right for our purposes. So although much of what follows will be couched in terms similar to those of Jackendoff, the system I will develop below will differ from his in significant ways.

Similarly, I cannot simply adopt the system of semantic description that has been developed in the work of Anna Wierzbicka. Her framework is decompositional, and unlike Jackendoff’s, it is very broadly cross-categorial. It is also admirably comprehensive. Wierzbicka, unlike most other semantic theorists, claims that the primitives of lexical semantics are a Natural Semantic Metalanguage comprising word-sized chunks such as I, YOU, HERE, NOW, DO, HAPPEN, MANY, and the like (in Wierzbicka the number of primitives is set tentatively at fifty-six):

Semantic primitives are, by definition, indefinable: they are Leibniz’s ultimate “simples”, Aristotle's “prioria”, in terms of which all the complex meanings can be articulated, but which cannot be decomposed themselves. They can, of course, be represented as bundles of some artificial features, such as "+Speaker, – Hearer" for "I", but this is not the kind of decomposition which leads from complex to simple and from obscure to clear. As pointed out earlier, the meaning of a sentence like "I know this" cannot be clarified by any further decomposition – not even by decomposition into some other meaningful sentences; and "features", which have no syntax and which are not part of natural language, have no meaning at all; they have to be assigned meaning by sentences in natural languages, rather than the other way around.

In other words, the only candidates for primitives in Wierzbicka’s framework are chunks of meaning that cannot be explicated in simpler words; these chunks of meaning are themselves word-sized.

While I agree with Wierzbicka’s judgment that putative primitives must be simple, I also believe, and hope to show in what follows, that the particular word-sized chunks that she deems to be primitives sometimes do not allow us to answer the questions about the semantics of complex words that I have raised above. The problem with Wierzbicka’s system of lexical semantic description is therefore the one of “grain size.”

Another attractive theory of lexical semantic representation is Pustejovsky's theory of the Generative Lexicon. This theory, like Wierzbicka’s, is broadly cross-categorial, and allows us to represent many aspects of the meanings of lexical items. A lexical semantic representation for Pustejovsky consists of four parts:

These include the notion of argument structure, which specifies the number and type of arguments that a lexical item carries; an event structure of sufficient richness to characterize not only the basic event type of a lexical item, but also internal, subeventual structure; a qualia structure, representing the different modes of predication possible with a lexical item; and a lexical inheritance structure, which identifies how a lexical structure is related to other structures in the dictionary, however it is constructed.

The qualia part of the lexical semantic structure can in turn include several types of information about the meaning of a word: constitutive information (“the relation between an object and its constituent parts”); formal information (“that which distinguishes it within a larger domain”); telic information (“its purpose and function”); and agentive information (“factors involved in its origin or ‘bringing it about’”).

Pustejovsky’s theory is decompositional, but he does not argue for a fixed number of primitives. Indeed, it is not clear that the descriptive elements in his lexical entries are primitives at all. What matters more for Pustejovsky is the process by which lexical items are combined – the ways in which their composition into larger units determines the meaning of each item in situ. His primary goal is to account for the polysemy of lexical items in the larger sentential context, for example, why we understand the word window to refer to an object in the sentence She broke the window, but an aperture in She climbed through the window.

With its emphasis on polysemy, the Generative Lexicon might seem to afford a possible framework in which to discuss the semantics of word formation. However, we will see that this system of description does not provide us with the means to discuss all the questions raised above – in particular the multiple-affix question – and that this latter question can in fact be answered only within a representational system that relies on a fixed (and presumably relatively small) number of primitives.

Finally, I must consider the descriptive system developed by Szymanek and adopted in large part by Beard for his Lexeme Morpheme Base Morphology. Unlike the descriptive systems provided by Jackendoff, Wierzbicka, and Pustejovsky, Szymanek’s system is specifically intended to address questions of meaning in word formation. It therefore might seem the best place to start for the present endeavor. Further, Szymanek’s system has several of the characteristics that we seek: it is broadly cross-categorial and decompositional, and relies on a (perhaps fixed) number of primitives. The problem, however, is with the primitives themselves.

These include semantic categories like the following: OBJECT, SUBSTANCE, PERSON, NUMBER, EXISTENCE, POSSESSION, NEGATION, PROPERTY, COLOR, SHAPE, DIMENSION, SIMILARITY, SEX, SPACE, POSITION, MOVEMENT, PATH, TIME, STATE, PROCESS, EVENT, ACTION, CAUSATION, AGENT, INSTRUMENT. Szymanek suggests a condition which he calls the Cognitive Grounding Condition: “The basic set of lexical derivational categories is rooted in the fundamental concepts of cognition.” In other words, word formation is typically based on one or more of the semantic/conceptual categories above. I believe that Szymanek is right about the issue of cognitive grounding: derivation must be rooted in the basic concepts of cognition, as he puts it. But again it will become apparent that Szymanek’s categories do not exhibit the right “grain size” needed to give interesting answers to the questions at the heart of this book. In fact, it appears that Szymanek adopts this list not so much for its intrinsic merit, but as a sort of first approximation, a useful heuristic: "Up to a point, then, the categorial framework to be developed will constitute a cumulative list of the fundamental categories of cognition as discussed by other authors. It should be noted, however, that we omit from the inventory a few concepts whose status seems rather dubious or simply non-essential from the point of view of the present Study”. In other words, unlike Jackendoff and Wierzbicka, who are interested in establishing the nature and necessity of the primitives themselves, Szymanek is content with a list of provisional labels. These are, of course, labels that are useful in describing derivational processes, but I will try to show that answers to our basic questions begin to emerge only when we leave behind provisional labels such as AGENT and CAUSATION and try to establish the precise nature of the descriptive primitives in our system of lexical semantic representation.

Let me briefly outline the sort of framework of lexical semantic description which I think we need, and which I will develop in this book. As I mentioned above, I see my own work in some ways as an outgrowth and extension of that of theorists like Jackendoff, Wierzbicka, Pustejovsky, and Szymanek. But I distinguish my theory from theirs. First, I believe that noninflectional word formation – derivation, compounding, and conversion – serves to create lexemes and to extend the simplex lexicon; for that reason, I believe that the meanings it expresses ought to reflect the semantic distinctions that are salient in the simplex lexicon. That is, to the extent that we find semantic classes that are significant in distinguishing the behavior of underived lexemes, we might expect derivation, compounding, and conversion to extend those classes. And to the extent that we find polysemy in complex words, it ought to be like the polysemy we see in simplex lexical items.

Second, I conceive of lexical semantic representations as being composed of two parts, what I will call the Semantic/Grammatical Skeleton (or skeleton, for short) and the Semantic/Pragmatic Body (body, for short). The distinction I make here between skeleton and body is not particularly new, although some elements of both skeleton and body are designed in this theory to allow discussion of problems associated with the semantics of word formation. But the skeleton and body I develop in what follows do have elements in common with what Rappaport Hovav and Levin call respectively the “event structure template” and the “constant,” or what Mohanan and Mohanan call “Grammatical Semantic Structure” and “Conceptual Structure.”

The skeleton in my framework will be comparable in some but not all ways to Jackendoff’s Lexical Conceptual Structures. It will be the decompositional part of the representation, hierarchically arranged, as Jackendoff’s LCSs are. It will seek to isolate all and only those aspects of meaning which have consequences for the syntax. This part of the representation will be relatively rigid and formal. It is here that I will try to establish primitives, and specifically a small number of primitives of the right “grain size” to allow us to address issues of the semantics of derivation, compounding, and conversion. Instead of Jackendoff’s semantic functions (BE, GO, CAUSE, etc.), Wierzbicka’s simple concepts, or Szymanek’s cognitive categories, I will propose a broadly cross-categorial featural system for decomposing meanings of morphemes.

The other part of the semantic representation, the body, will be encyclopedic, holistic, nondecompositional, not composed of primitives, and perhaps only partially formalizable. It will comprise those bits of perceptual and cultural knowledge that form the bulk of the lexical representation. The body will include many of the aspects of meaning that Pustejovsky encodes in his Qualia Structure – information concerning material composition, part structure, orientation, shape, color, dimensionality, origin, purpose, function, and so on.

My theory is consciously based on an anatomical metaphor. The skeleton forms the foundation of what we know about morphemes and words. It is what allows us to extend the lexicon through various word-formation processes. The body fleshes out this foundation. It may be fatter or thinner from item to item, and indeed from the lexical representation of a word in one person’s mental lexicon to the representation of that “same” word in another individual’s mental lexicon. But the body must be there in a living lexical item. Bodies can change with the life of a lexical item – gain or lose weight, as it were. Skeletons, however, are less amenable to change.

My main claim is that the semantics of word formation involves the creation of a single referential unit out of two distinct semantic skeletons that have been placed in a relationship of either juxtaposition or subordination to one another. The primary mechanism for creating a single referential unit will be the co-indexation of semantic arguments. Compound formation will involve juxtaposition of skeletons with concomitant co-indexing. Derivational affixation will involve the addition of skeletal material to a base whose own skeleton is subordinated; in other words, the semantic representation of a derivational affix will be a bit of semantic skeleton which subordinates a lexical base. The skeletons of which compounds are formed will typically have accompanying bodies, but derivational affixes will often have little or nothing in the way of semantic bodies. Both derived words and compounds may, however, over time develop substantial and distinctive bodies as a function of their lexicalization. Lexicalization, we shall see, proceeds on an item-by-item basis, thus allowing a wide range of meanings to exist in items formed by the same process of derivation or compounding.
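The two mechanisms described above – subordination of a base skeleton under an affixal skeleton, and co-indexation of semantic arguments to yield a single referential unit – can be pictured with a small toy model. This is a purely illustrative sketch: the representation of skeletons as dictionaries, the labels DO, THING, and R, and the helper names are my own placeholders, not the formalism developed in this book.

```python
# Toy sketch of skeleton subordination and argument co-indexation.
# All structures and labels here are illustrative placeholders.

def skeleton(head, args, body=None):
    """A 'skeleton' pairs a semantic head with indexed argument slots;
    the optional 'body' holds encyclopedic (nondecompositional) content."""
    return {"head": head, "args": list(args), "body": body or {}}

# A simplex base: the verb 'write', with two indexed arguments.
write = skeleton("DO", ["x1", "y2"], body={"activity": "writing"})

def derive_er(base):
    """Derivational affixation: the affix (here, -er) contributes its own
    bit of skeleton with a referential argument R, subordinates the base
    skeleton, and co-indexes R with the base's highest argument."""
    affix = skeleton("THING", ["R1"])        # -er: skeleton, little or no body
    affix["subordinate"] = base              # the base skeleton is subordinated
    # Co-indexation: R shares index 1 with the base's first argument,
    # so the derived word refers to that participant (the 'doer').
    affix["coindexed"] = ("R1", base["args"][0])
    return affix

writer = derive_er(write)
print(writer["coindexed"])   # -> ('R1', 'x1')
```

Compounding, on this picture, would juxtapose two complete skeletons (each with its body) and co-index their arguments, rather than subordinating one under an affixal fragment.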

Semantic variation among items formed by the same process of derivation or compounding will not merely be a function of the lexicalization process, however. In fact, a concomitant of the claim that the semantics of derivation should reflect the semantics of the simplex lexicon is that the sorts of polysemy we find in the simplex lexicon should also be found in derived words. I will show in what follows that both of the main types of polysemy that are manifested in the simplex lexicon – what Pustejovsky and Boguraev call “logical polysemy” and “sense extensions” – are to be found in derivational affixes as well. Logical polysemy will be seen to arise from the composition of skeletons, and specifically from the effects of underdetermination in skeletal meanings. It is here that the choice of primitives in our system will receive its justification: only a featural system such as the one to be proposed in this book will give rise to the right level of underdetermination of meaning to account for affixal polysemy. We will see that sense extensions sometimes arise in affixation, as well, although not as frequently as logical polysemy.

A word about the scope and limits of this book. I cannot hope to cover everything that needs to be said about the semantics of all sorts of word formation in all sorts of languages without promising to write a book I would never finish or could never hope to get published. I have chosen to narrow the scope of this work to three types of word formation that are well represented and fairly well understood – derivation, compounding, and conversion – and to limit my discussion in most cases to these processes in English.

This is not to say that inflection is unimportant, or to deny that there is an enormous amount that we could learn from scrutinizing word formation in languages other than English. In this work, I propose to confine myself to bona fide processes of lexeme formation in the hopes that the foundation of lexical semantics developed here will eventually allow us to proceed to a fruitful discussion of inflection. Other theorists such as Anderson, Aronoff, and Stump have tended to take the opposite route, building their theories primarily on the basis of a study of inflectional phenomena and giving shorter shrift to derivation, compounding, and conversion.

Similarly, these theorists have tended to look at inflection in a wide variety of the world’s languages, a methodological choice that has certainly borne fruit in the study of inflection. But specifically because of my concentration on processes of lexeme formation in this work, I will tend to focus attention on a single language – English. My justification is the following: the sort of semantic work that I hope to do requires a detailed and intimate look at the meanings of lots of words formed with the same affix, or by the same type of compounding or conversion. Indeed, as will become apparent in the chapters that follow, I cannot even hope to provide an exhaustive description of the semantics of all of English word formation. Rather, I must narrow discussion to a series of case studies of particular realms of word formation: formation of personal/instrumental nouns; root and synthetic compounding; formation of verbs by affixation and conversion; negative affixation; and a few select others. These case studies are carefully chosen to reveal answers to the four central questions with which I began this introduction. So I beg the reader's indulgence on what might initially seem to be a rather narrow range of analysis. I cannot hope to do such detailed work with languages of which I am not a native speaker. I would hope that native speakers of other languages will eventually help to corroborate or criticize any of the theoretical apparatus that I build here.

 

