Topic 3. The main categories of cognitive linguistics
Lecture 7. Concept
· Give a definition of the notion of “concept”
· Examine the structure of the “concept”
1.1 The notion of the “concept”
Concepts are the constituents of thoughts. Consequently, they are crucial to such psychological processes as categorization, inference, memory, learning, and decision-making. This much is relatively uncontroversial. But the nature of concepts—the kind of things concepts are—and the constraints that govern a theory of concepts have been the subject of much debate. This is due, at least in part, to the fact that disputes about concepts often reflect deeply opposing approaches to the study of the mind, to language, and even to philosophy itself.
Concepts can be treated as psychological entities, a view that takes as its starting point the representational theory of mind (RTM). According to RTM, thinking occurs in an internal system of representation. Beliefs, desires, and other propositional attitudes enter into mental processes as internal symbols. For example, Sue might believe that Dave is taller than Cathy, and also believe that Cathy is taller than Ben, and together these may cause Sue to believe that Dave is taller than Ben. Her beliefs would be constituted by mental representations that are about Dave, Cathy, and Ben and their relative heights. What makes these states beliefs, as opposed to desires or other psychological states, is that the symbols have the characteristic causal-functional role of beliefs. (RTM is usually presented as taking beliefs and other propositional attitudes to be relations between an agent and a mental representation (e.g., Fodor 1987). But given that the relation in question is a matter of having a representation with a particular type of functional role tokened in one's mind, it is simpler to say that occurrent beliefs just are mental representations with a characteristic type of functional role.)
Many advocates of RTM take the mental representations involved in beliefs and other propositional attitudes to have internal structure. Accordingly, the representations that figure in Sue's beliefs would be composed of more basic representations. For theorists who adopt the mental representation view of concepts, concepts are identified with these more basic representations.
Early advocates of RTM (e.g., Locke (1690/1975) and Hume (1739/1978)) called these more basic representations ideas, and took them to be mental images. But modern versions of RTM assume that much thought is not grounded in mental images. The classic contemporary treatment maintains, instead, that the internal system of representation has a language-like syntax and a compositional semantics. According to this view, much of thought is grounded in word-like mental representations. This view is often referred to as the language of thought hypothesis (Fodor 1975).
Some philosophers maintain that possession of natural language is necessary for having any concepts (Brandom 1994, Davidson 1975, Dummett 1993) and that the tight connection between the two can be established on a priori grounds. In a well known passage, Donald Davidson summarizes his position as follows:
We have the idea of belief only from the role of belief in the interpretation of language, for as a private attitude it is not intelligible except as an adjustment to the public norm provided by language. It follows that a creature must be a member of a speech community if it is to have the concept of belief. And given the dependence of other attitudes on belief, we can say more generally that only a creature that can interpret speech can have the concept of a thought.
Can a creature have a belief if it does not have the concept of belief? It seems to me it cannot, and for this reason. Someone cannot have a belief unless he understands the possibility of being mistaken, and this requires grasping the contrast between truth and error—true belief and false belief. But this contrast, I have argued, can emerge only in the context of interpretation, which alone forces us to the idea of an objective, public truth. (Davidson 1975, p. 170)
The argument links having beliefs and concepts with having the concept of belief. Since Davidson thinks that non-linguistic creatures can't have the concept of belief, he concludes that they can't have any other concepts either. Why the concept of belief is needed in order to have other concepts is somewhat obscure in Davidson's writings (Carruthers 1992). And whether language is necessary for this particular concept is not obvious.
1.2 The structure of concepts
Just as thoughts are composed of more basic, word-sized concepts, so these word-sized concepts—known as lexical concepts—are generally thought to be composed of even more basic concepts.
The classical theory
In one way or another, all theories regarding the structure of concepts are developments of, or reactions to, the classical theory of concepts. According to the classical theory, a lexical concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C. The stock example is the concept BACHELOR, which is traditionally said to have the constituents UNMARRIED and MAN. If the example is taken at face value, the idea is that something falls under BACHELOR if and only if it is an unmarried man. According to the classical theory, lexical concepts generally will exhibit this same sort of definitional structure. This includes such philosophically interesting concepts as TRUTH, GOODNESS, FREEDOM, and JUSTICE.
Before turning to other theories of conceptual structure, it's worth pausing to see what's so appealing about classical or definitional structure. Much of its appeal comes from the way it offers unified treatments of concept acquisition, categorization, and reference determination. In each case, the crucial work is done by the very same components. Concept acquisition can be understood as a process in which new complex concepts are created by assembling their definitional constituents. Categorization can be understood as a psychological process in which a complex concept is matched to a target item by checking to see if each and every one of its definitional constituents applies to the target. And reference determination, on this picture, is simply a matter of whether the definitional constituents do apply to the target.
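The classical picture can be summarized in a few lines of code: a lexical concept is a conjunction of simpler predicates, and categorization is a matter of checking each one against the target. The following is only an illustrative sketch (the feature names and the dictionary representation are our own invention, not a serious cognitive model):

```python
# A sketch of the classical theory: BACHELOR as a conjunction of
# necessary-and-sufficient conditions. The feature names are illustrative.

def is_man(x):
    return x.get("sex") == "male" and x.get("adult", False)

def is_unmarried(x):
    return not x.get("married", False)

# A classical lexical concept is just a list of definitional constituents.
BACHELOR = [is_man, is_unmarried]

def falls_under(concept, x):
    """Categorization as checking each and every definitional constituent."""
    return all(constituent(x) for constituent in concept)

dave = {"sex": "male", "adult": True, "married": False}
cathy = {"sex": "female", "adult": True, "married": False}

print(falls_under(BACHELOR, dave))   # → True
print(falls_under(BACHELOR, cathy))  # → False
```

On this picture, acquisition is assembling the constituent list, categorization is running `falls_under`, and reference determination is whether the constituents in fact apply, exactly the unified treatment described above.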
The classical theory has come under considerable pressure in the last thirty years or so, not just in philosophy but in psychology and other fields as well. For psychologists, the main problem has been that the classical theory has difficulty explaining a robust set of empirical findings. At the center of this work is the discovery that certain members of a category are taken to be more representative or typical than others, and that typicality scores correlate with a wide variety of psychological data (for reviews, see Smith & Medin 1981, Murphy 2002). For instance, apples are judged to be more typical than plums with respect to the category of fruit, and correspondingly apples are judged to have more features in common with fruit. There are many other findings of this kind. One is that more typical items are categorized more efficiently. For example, subjects are quicker to judge that apples are a kind of fruit than to judge that plums are.
What other type of structure could concepts have? A non-classical alternative that emerged in the 1970s is the prototype theory. According to this theory, a lexical concept C doesn't have definitional structure but has probabilistic structure, in that something falls under C just in case it satisfies a sufficient number of the properties encoded by C's constituents. The prototype theory has its philosophical roots in Wittgenstein's (1953/1958) famous remark that the things covered by a term often share a family resemblance, and its psychological roots in Eleanor Rosch's experimental treatment of much the same idea (Rosch & Mervis 1975, Rosch 1978). The prototype theory is especially at home in dealing with the typicality effects that were left unexplained by the classical theory. One standard strategy is to maintain that, on the prototype theory, categorization is to be understood as a similarity comparison process, where similarity is computed as a function of the number of constituents that two concepts hold in common. On this model, the reason apples are judged to be more typical than plums is that the concept APPLE shares more of its constituents with FRUIT. Likewise, this is why apples are judged to be a kind of fruit faster than plums are.
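The contrast with the classical theory can again be made concrete. On the prototype model, membership is a matter of sharing enough features with the prototype rather than satisfying every one of a set of defining conditions. The feature sets below are invented purely for illustration:

```python
# A minimal sketch of prototype-based categorization: similarity is the
# number of features a concept shares with the category prototype.
# All feature sets here are invented for illustration only.

FRUIT_PROTOTYPE = {"sweet", "round", "grows_on_trees", "eaten_raw", "has_seeds"}

APPLE = {"sweet", "round", "grows_on_trees", "eaten_raw", "has_seeds"}
PLUM = {"sweet", "round", "grows_on_trees", "has_seeds"}  # one fewer shared feature

def similarity(concept, prototype):
    """Similarity as the count of shared constituents."""
    return len(concept & prototype)

def falls_under(concept, prototype, threshold=3):
    """Probabilistic structure: membership requires 'enough' shared features,
    not the satisfaction of every defining condition."""
    return similarity(concept, prototype) >= threshold

# Both count as fruit, but APPLE is more typical than PLUM on this toy model,
# mirroring the typicality and reaction-time findings described above:
print(similarity(APPLE, FRUIT_PROTOTYPE) > similarity(PLUM, FRUIT_PROTOTYPE))  # → True
```

The design choice worth noticing is that `falls_under` uses a threshold rather than `all(...)`: nothing in the structure is individually necessary, which is exactly what allows family-resemblance categories with no common defining feature.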
The prototype theory does well in accounting for a variety of psychological phenomena, and it helps to explain why definitions may be so hard to produce. But the prototype theory has its own problems and limitations. One is that its treatment of categorization works best for quick and unreflective judgments. When it comes to more reflective judgments, people go beyond the outcome of a similarity comparison. If asked whether a dog that is surgically altered to look like a raccoon is a dog or a raccoon, the answer for most of us, and even for children, is that it remains a dog (see Keil 1989, Gelman 2003 for discussion). Another criticism that has been raised against taking concepts to have prototype structure concerns compositionality. When a patently complex concept has a prototype structure, it often has emergent properties, ones that don't derive from the prototypes of its constituents (e.g., PET FISH encodes properties such as brightly colored, which have no basis in the prototype structure for either PET or FISH). Further, many patently complex concepts don't even have a prototype structure (e.g., CHAIRS THAT WERE PURCHASED ON A WEDNESDAY) (Fodor & Lepore 1996, Fodor 1998; for responses to the arguments from compositionality, see Prinz 2002, Robbins 2002, Hampton & Jönsson 2011).
One general solution that addresses all of these problems is to hold that a prototype constitutes just part of the structure of a concept. In addition, concepts have conceptual cores, which specify the information relevant to more considered judgments and which underwrite compositional processes. Of course, this just raises the question of what sort of structure conceptual cores have.
Another, and currently more popular, suggestion is that cores are best understood in terms of the theory theory of concepts. This is the view that concepts stand in relation to one another in the same way as the terms of a scientific theory, and that categorization is a process that strongly resembles scientific theorizing (see, e.g., Carey 1985, 2009, Gopnik & Meltzoff 1997, Keil 1989). It's generally assumed, as well, that the terms of a scientific theory are interdefined, so that a theoretical term's content is determined by its unique role in the theory in which it occurs.
The theory theory is especially well-suited to explaining the sorts of reflective categorization judgments that proved to be difficult for the prototype theory. For example, theory theorists maintain that children override perceptual similarity in assessing the situation where the dog is made to look like a raccoon, claiming that even children are in possession of a rudimentary biological theory. This theory, an early form of folk biology, tells them that being a dog isn't just a matter of looking like a dog. More important is having the appropriate hidden properties of dogs—the dog essence (see Atran & Medin 2008 on folk biology). Another advantage of the theory theory is that it is supposed to help explain important aspects of conceptual development. Conceptual change in childhood is said to follow the same pattern as theory change in science.
One problem that has been raised against the theory theory is that it has difficulty allowing for different people to possess the same concepts (or even for the same person to have the same concept over time). The reason is that the theory theory is holistic. A concept's content is determined by its role in a theory, not by its being composed of just a handful of constituents. Since the beliefs that enter into people's mental theories are likely to differ from one another (and are likely to change), there may be no principled basis for comparison (Fodor & Lepore 1992). Another problem concerns the analogy to theory change in science. The analogy suggests that children undergo radical conceptual reorganization in development, but many of the central case studies have proved controversial on empirical grounds, with evidence that the relevant concepts are implicated in core knowledge systems that are enriched in development but not fundamentally altered (see Spelke 1994 on core knowledge).
A radical alternative to all of the theories we've mentioned so far is conceptual atomism, the view that lexical concepts have no semantic structure (Fodor 1998, Millikan 2000). According to conceptual atomism, the content of a concept isn't determined by its relation to other concepts but by its relation to the world.
Conceptual atomism follows in the anti-descriptivist tradition that traces back to Saul Kripke, Hilary Putnam, and others working in the philosophy of language (see Kripke 1972/80, Putnam 1975, Devitt 1981). Kripke, for example, argues that proper names function like mere tags in that they have no descriptive content (Kripke 1972/80). On a description theory, one might suppose that “Gödel” means something like the discoverer of the incompleteness of arithmetic. But Kripke points out that we could discover that Schmitt really discovered the incompleteness of arithmetic and that Gödel could have killed Schmitt and passed the work off as his own. The point is that if the description theory were correct, we would be referring to Schmitt when we say “Gödel”. But intuitively that's not the case at all. In the imagined scenario, the sentence “Gödel discovered the incompleteness of arithmetic” says something false about Gödel, not something trivially true about the discoverer of the incompleteness of arithmetic, whoever that might be (though see Machery et al. 2004 on whether this intuition is universal). Kripke's alternative account is that names achieve their reference by standing in a causal relation to their referents. Conceptual atomism employs a similar strategy while extending the model to all sorts of concepts, not just concepts for proper names.
At present, the nature of conceptual structure remains unsettled. Perhaps part of the problem is that more attention needs to be given to the question of what explanatory work conceptual structure is supposed to do and the possibility that there are different types of structure associated with different explanatory functions. We've seen that conceptual structure is invoked to explain, among other things, typicality effects, reflective categorization, cognitive development, reference determination, and compositionality. But there is no reason to assume that a single type of structure can explain all of these things. As a result, there is no reason why philosophers shouldn't maintain that concepts have different types of structure. For example, notice that atomism is largely motivated by anti-descriptivism. In effect, the atomist maintains that considerable psychological variability is consistent with concepts entering into the same mind-world causal relations, and that it's the latter that determines a concept's reference. But just because the mechanisms of reference determination permit considerable psychological variability doesn't mean that there aren't, in fact, significant patterns for psychologists to uncover. On the contrary, the evidence for typicality effects is impressive by any measure. For this reason, it isn't unreasonable to claim that concepts do have prototype structure even if that structure has nothing to do with the determination of a concept's referent. Similar considerations suggest that concepts may have theory-structure and perhaps other types of structure as well.
One way of responding to the plurality of conceptual structures is to suppose that concepts have multiple types of structure. This is the central idea behind conceptual pluralism. According to one version of conceptual pluralism, suggested by Laurence & Margolis (1999), a given concept will have a variety of different types of structure associated with it as components of the concept in question. For example, concepts may have atomic cores that are linked to prototypes, internalized theories, and so on. On this approach, the different types of structure that are components of a given concept play different explanatory roles. Reference determination and compositionality have more to do with the atomic cores themselves and how they are causally related to things outside of the mind, while rapid categorization and certain inferences depend on prototype structure, and more considered inferences and reasoning depend upon theory structure. Many variants on this general proposal are possible, but the basic idea is that, while concepts have a plurality of different types of structure with different explanatory roles, this differing structure remains unified through its links to an atomic representation that provides a concept's reference. One challenge for this type of account is to delineate which of the cognitive resources associated with a concept should be counted as part of its structure and which should not. As a general framework, the account is neutral regarding this question, but as the framework is filled in, clarification will be needed regarding the status of potential types of structure.
Problem questions: Do concepts and their components differ across languages? Why? Are there any universal concepts shared by most languages of the world?
Lecture 8. Sphere of concepts (Conceptual system of a language)
· Define the “sphere of concepts”
· Consider the relation between a concept and the sphere of concepts
· Study how the sphere of concepts is organized
The human conceptual system is not open to direct investigation. Nevertheless, cognitive linguists maintain that the properties of language allow us to reconstruct the properties of the conceptual system and to build a model of that system. The logic of this claim is as follows: since language structure and organization, as revealed in the previous section, reflect various known aspects of cognitive structure, by studying language, which is observable, we thereby gain insight into the nature of the conceptual system. The sub-branch of cognitive linguistics concerned with employing language as a lens, in order to study otherwise hidden aspects of conceptual structure, is often referred to as cognitive semantics.
One of the earliest, and perhaps best-known, cognitive semantic theories is conceptual metaphor theory, developed by Lakoff and Johnson (1980, 1999). The central insight of this approach is that figurative patterns in language reflect underlying, highly stable associations, known as mappings, which hold between domains in the conceptual system. Sets of mappings holding between two distinct conceptual domains are referred to as conceptual metaphors, which is what gives the theory its name. For instance, one particularly common way in which we talk and think about a love relationship is in terms of journeys. To illustrate, consider the following everyday expressions, drawn from Lakoff and Johnson (1980), which we might use to describe aspects of a love relationship:
a. Look how far we’ve come.
b. We’re at a crossroads.
c. We’ll just have to go our separate ways.
d. We can’t turn back now.
e. I don’t think this relationship is going anywhere.
f. This relationship is a dead-end street.
g. Our marriage is on the rocks.
h. This relationship is foundering.
According to Lakoff and Johnson, utterances such as these are motivated by an entrenched pattern in our conceptual system: a conceptual metaphor, which can be stated as Love is a Journey. This conceptual metaphor is made up of a fixed set of established mappings which structure concepts located in the more abstract domain of love in terms of concepts belonging to the more concrete domain of journey. For instance, in the domain of love we have concepts for the lovers; the love relationship; events that take place in the relationship; difficulties that arise in it; the progress we make in resolving those difficulties and in developing the relationship; choices about what to do in the relationship, such as moving in together or whether to split up; and the shared and separate goals we might have for ourselves in the relationship, and for the relationship itself. Similarly, we represent a range of concepts relating to the domain of journeys. These include concepts for the travellers; the vehicle used for the journey, whether plane, train, or automobile; the distance covered; obstacles encountered, such as traffic jams, that lead to delays and hence impede the progress of the journey; decisions about the direction and the route to be taken; and knowledge about destinations.
The conceptual metaphor, Love is a Journey, provides a means of systematically mapping these knowledge slots from the domain of journey onto corresponding slots in the domain of love. This means that slots in the love domain are structured in terms of knowledge from the domain of journey. For instance, the lovers in the domain of love are structured in terms of travellers such that we understand lovers in terms of travellers.
Similarly, the love relationship itself is structured in terms of the vehicle used on the journey. For this reason we can talk about a marriage foundering, being on the rocks, or stuck in a rut, and understand such expressions as relating not literally to a journey but to two people in a long-term love relationship that is troubled in some way. In other words, we must have knowledge of the sort specified by the conceptual metaphor stored in our heads if we are to be able to understand these English expressions: to understand lovers in terms of travellers, the relationship in terms of the vehicle, and so on. The linguistic expressions provide compelling evidence for the conceptual metaphors. The mappings implicated by the linguistic evidence can be set out as a table of correspondences between the two domains.
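The systematic character of these correspondences can be made vivid as a simple lookup table from source-domain slots to target-domain slots. The slot names below are our own shorthand for the correspondences described above, not Lakoff and Johnson's notation:

```python
# A sketch of the Love is a Journey mapping as a slot-to-slot lookup table.
# Slot names are illustrative shorthand, not an established formalism.

LOVE_IS_A_JOURNEY = {
    "travellers": "lovers",
    "vehicle": "the love relationship",
    "distance covered": "progress made in the relationship",
    "obstacles encountered": "difficulties in the relationship",
    "decisions about the route": "choices about what to do",
    "destination": "goals of the relationship",
}

def interpret(source_slot):
    """Map a slot in the concrete JOURNEY domain onto the abstract LOVE domain."""
    return LOVE_IS_A_JOURNEY.get(source_slot, "no conventional mapping")

# 'Our marriage is on the rocks' evokes the vehicle slot:
print(interpret("vehicle"))  # → the love relationship
```

The point of the sketch is the fixedness of the dictionary: the same small set of mappings licenses all the sentences (a)–(h) above, which is what "systematic mapping" amounts to in conceptual metaphor theory.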
In essence, the claim at the heart of conceptual metaphor theory is that the mappings, which lie at the level of conceptual structure, are revealed by evidence from language, as exemplified by the sentences above. Language can thus be employed as a key methodological tool for revealing conceptual patterns that underlie language use.
Philosophers, psychologists, and computer scientists have proposed that semantic knowledge is best understood as a system of relations. Two questions immediately arise: how can these systems be represented, and how are these representations acquired? A possible answer can be found in ‘domain theory’.
Suppose that a domain includes several types, or sets of entities. One role of a domain theory is to specify the kinds of entities that exist in each set and the possible or likely relationships between those kinds. Consider the domain of medicine, and a single type defined as the set of terms that might appear on a medical chart. A theory of this domain might specify that cancer and diabetes are both disorders, that asbestos and arsenic are both chemicals, and that chemicals can cause disorders. One formal model of this idea, the Infinite Relational Model (IRM), assumes that each entity belongs to exactly one kind, or cluster, and simultaneously discovers the clusters and the relationships between clusters that are best supported by the data. A key feature of this approach is that it does not require the number of clusters to be fixed in advance. The number of clusters used by a theory should be able to grow as more and more data are encountered, but a theory-learner should introduce no more clusters than are necessary to explain the data. The IRM automatically chooses an appropriate number of clusters using a prior that favors small numbers of clusters but has access to a countably infinite collection of them. Previous infinite models (Rasmussen 2002; Antoniak 1974) focused on feature data; the IRM extends these approaches to work with arbitrary systems of relational data.
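The "prior that favors small numbers of clusters but has access to a countably infinite collection of them" can be illustrated with a Chinese Restaurant Process, the standard device behind such infinite models. This is a generic sketch of that prior, not the IRM itself; the concentration parameter `alpha` and the random seed are arbitrary choices:

```python
import random

def crp_partition(n, alpha, rng):
    """Chinese Restaurant Process: each new item joins an existing cluster
    with probability proportional to that cluster's size, or opens a new
    cluster with probability proportional to alpha. Larger clusters attract
    more members, so the prior favors few clusters while never ruling out
    introducing another one."""
    clusters = []  # list of cluster sizes
    for _ in range(n):
        weights = clusters + [alpha]  # last slot = open a new cluster
        r = rng.random() * sum(weights)
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if k == len(clusters):
            clusters.append(1)   # open a new cluster
        else:
            clusters[k] += 1     # join an existing cluster
    return clusters

rng = random.Random(0)
sizes = crp_partition(100, alpha=1.0, rng=rng)
# With 100 items the process typically produces only a handful of clusters,
# far fewer than 100: the number of clusters grows roughly like log(n).
print(len(sizes), sum(sizes))
```

This "rich get richer" dynamic is what lets a theory-learner add clusters only as the data demand them, rather than fixing the number of kinds in advance.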
The concepts that govern our thought are not just matters of the intellect. They also govern our everyday functioning, down to the most mundane details. Our concepts structure what we perceive, how we get around in the world, and how we relate to other people. Our conceptual system thus plays a central role in defining our everyday realities. If we are right in suggesting that our conceptual system is largely metaphorical, then the way we think, what we experience, and what we do every day is very much a matter of metaphor.
But our conceptual system is not something we are normally aware of. In most of the little things we do every day, we simply think and act more or less automatically along certain lines. Just what these lines are is by no means obvious. One way to find out is by looking at language. Since communication is based on the same conceptual system that we use in thinking and acting, language is an important source of evidence for what that system is like.
Lakoff and Johnson have found a way to begin to identify in detail just what the metaphors are that structure how we perceive, how we think, and what we do.
The sphere of concepts is structured knowledge, an information base of mental images consisting of universal object-code units. The semantic space of a language, as a part of the sphere of concepts, is verbalized in the system of linguistic signs: words, phrase combinations, syntactic structures, and frames. It is formed by the meanings of linguistic units. A concept, conceived as a unit of the sphere of concepts, reflects peculiarities of the thinking, worldview, and culture of a people. Any person can be a “bearer of concepts”, since he or she has his or her own cultural experience and cultural identity. Thus, individual verbal activity is determined both by the sphere of concepts of the language and by the national sphere of concepts.
Constructs such as frames, Idealized Cognitive Models (ICMs), and domains have been central to various methods of analysis in Cognitive Linguistics. Each of them provides a way of characterizing the structured encyclopedic knowledge which is inextricably connected with linguistic knowledge, an assertion that is an important tenet in much cognitive linguistic research. Frames, ICMs, and domains all derive from an approach to language as a system of communication that reflects the world as it is construed by humans, rather than as it might be represented from some god’s-eye point of view.
Charles J. Fillmore began using the term “frame” solely at the level of linguistic description; later, he and others extended its use to the characterization of knowledge structures, thus linking the analysis of language to the study of cognitive phenomena.
In his papers ‘‘Frame semantics’’ and ‘‘A private history of the concept ‘frame’,’’ Fillmore reveals the influences which led to his formulation and development of the notion.
In the 1950s, he was exploring the principles behind the co-occurrence of strings of words, influenced by Fries, and later by Pike’s work on ‘‘tagmemic formulas.’’ Fillmore’s early work on transformational syntax led him into researching the distributional properties of individual verbs. This research involved looking at the substitutability of words, within what could be called syntactic frames, while preserving the meaning of the utterance. But soon the use of ‘‘frame’’ extended from syntax to semantics. Fillmore reflects that by the late 1960s, ‘‘I began to believe that certain kinds of groupings of verbs and classifications of clause types could be stated more meaningfully if the structures with which verbs were initially associated were described in terms of the semantic roles of their associated arguments.’’ He explains: ‘‘I use the word frame for any system of linguistic choices—the easiest cases being collections of words, but also including choices of grammatical rules or linguistic categories—that can get associated with prototypical instances of scenes.’’ Though frames are talked about from a linguistic viewpoint, it is noteworthy that they are not presented as an independent approach to linguistic analysis, but rather as one part of a paradigm, integrally linked to the idea of scenes. Influenced by work in the 1970s on pragmatics and speech acts, Fillmore also claimed that we not only employ cognitive frames to produce and understand language, but also to conceptualize what is going on between the speaker and addressee, or writer and reader. This introduced the idea of framing on another level, in terms of ‘‘interactional frames.’’ Such interactional frames provide a tool for talking about the background knowledge and expectations one brings to bear for the production, and interpretation, of oral or written discourse, particularly in relation to accepted genre types. 
Knowing that a text is a business contract, a folktale, or a marriage proposal, one employs specific structures of expectations which help lead to a full interpretation of its meaning, and which also help one know when the text is ending and how to respond, if a response is appropriate.
The notion of Idealized Cognitive Models was preceded by a theoretical exploration of the application of Gestalts in linguistics, namely in a new approach dubbed ‘‘experiential linguistics.’’ The basic claim of experiential linguistics, as Lakoff proposes, is that ‘‘a wide variety of experiential factors—perception, reasoning, the nature of the body, the emotions, memory, social structure, sensorimotor and cognitive development, etc.—determine in large measure, if not totally, universal structural characteristics of language.’’
The following are some of the many properties which Lakoff ascribes to Gestalts:
o Gestalts are structures that are used in cognitive processing;
o Gestalts are wholes whose component parts take on additional significance by virtue of being within those wholes;
o Gestalts have internal relations among parts, which may be of different types;
o Gestalts may have external relations to other Gestalts;
o there may be partial mappings of one Gestalt onto another, or embedding of one within another;
o a Gestalt analysis need not necessarily make claims about the ultimate parts into which something can be decomposed, since such analysis would be guided by cognitive purposes and viewpoints, and thus different analyses may be possible; but
o Gestalts must distinguish prototypical from nonprototypical properties; and
o Gestalts are often cross-modal.
Instantiations of Gestalts in language may involve a number of types of properties, such as grammatical, pragmatic, semantic, and/or phonological ones.
This notion of Gestalts provided the underpinnings for the development of Idealized Cognitive Models (ICMs) in Cognitive Linguistics. The first detailed explication of ICMs appeared in Lakoff, as part of a synthesis of existing research on categorization within the various branches of cognitive science. ICMs are proposed as a way in which we organize knowledge, not as a direct reflection of an objective state of affairs in the world, but according to certain cognitive structuring principles. The models are idealized, in that they involve an abstraction, through perceptual and conceptual processes, from the complexities of the physical world.
At the same time, these processes impart organizing structure—for example, in the form of conceptual categories. They provide an advantageous means of processing information because they are adapted to human neurobiology, human embodied experience, human actions and goals, and human social interaction.
Johnson characterizes an image schema as ‘‘a recurring, dynamic pattern of our perceptual interactions and motor programs that gives coherence and structure to our experience’’.
Problem questions: How are a sphere of concepts and a national mentality related? Can one of these phenomena take part in forming the other? In what way? What triggers the appearance of cultural stereotypes? Can they be a reliable source of information about the mentality of people of different nationalities?