
A tear is a universal sign. Since ancient times, philosophers and scientists have tried to explain weeping as part of a shared human language of emotional expression. But, in fact, a tear on its own means nothing. As tears well up in our eyes or dribble down our cheeks, their meanings can only be tentatively inferred by others, and then only when those others know much more about the particular mental, social, and narrative contexts that gave rise to them.

We cry in sadness, grief and mourning, but also from joy and laughter. Some are moved to tears of pity by human suffering; others have wept the enraged tears of the oppressed. A tear-streaked cheek might be produced by nothing more than a yawn or a chopped onion. The Victorian journalist Harriet Martineau had tears of intellectual ecstasy running down her cheeks as she translated the ponderous tomes of the French sociologist Auguste Comte. A friend of mine, a steam enthusiast, told me that when he first saw the record-breaking locomotive, the Mallard, at the National Railway Museum, he cried. A tear is a universal sign not in the sense that it has the same meaning in all times and all places. It is a universal sign because it can signify just about anything.

If weeping were a gesture with a single meaning, part of a universal language of feeling, then it would surely signify grief. That is the state with which it has been most frequently connected. Yet there were countless examples of joyful crying on display in London last summer. Streams of Olympic and Paralympic emotion spilled out by the bucketful. On the winner’s podium, as national anthems surged, so too did the lachrymal effluvia. Pride and joy expressed themselves in copious tears. Boris Johnson, the Mayor of London, bragged about his ‘hot tears of patriotic pride’ at the opening ceremony and proclaimed the end of the games a ‘tear-sodden juddering climax’. In 1872, when Charles Darwin wrote The Expression of the Emotions in Man and Animals, it might have been true that ‘Englishmen rarely cry’, but by 2012 the mayor and others had done their best finally to scotch that idea.

I can add my own personal example too: when my son was born at St Thomas’ Hospital, with the Diamond Jubilee flotilla of a thousand vessels bobbing down the Thames outside, I wept tears of joy and relief that a truly alarming emergency caesarean had ended successfully.

Theories of tears have always struggled to do justice to their threefold nature, as secretions, symptoms and signs. Are tears to be treated like urination, like a rash, or like a work of art? Does their interpretation require the expertise of the physiologist, the physician, or the metaphysician?


The suggestive language used by Boris Johnson to describe his own ocular ejaculations deliberately confused one bodily secretion with another. Those who object to public weeping often refer to it as a kind of ‘emotional incontinence’ — a phrase with origins in the psychiatric literature of the late 19th century, which implies that a similar shame should attach to a public stream of tears as to a public stream of urine. In 2011, BBC Four screened a documentary about public weeping, presented by the comedian Jo Brand. She was against it, saying that crying should be reserved for rare occasions and then take place in private. Online comments responding to the programme proved she was not alone. One remark came from someone calling himself — and I speculate here about gender — Algol60, which is also the name of an early programming language. Algol60 wrote:

If you need to blub, go into the bog and do it privately. Small children and effeminate foreigners might be expected to do otherwise but any Briton over the age of eight should have self-control.

This kind of comment seems out of keeping with 21st-century attitudes, but it is a pungent reminder of the ideology of the British ‘stiff upper lip’, which had begun to take root when Darwin wrote his book on emotion, and had its heyday during the two world wars. The associated metaphor of weeping as incontinence suggested that tears should be an occasion for disgust and shame. For several decades in the mid-20th century, a wide-ranging social research programme called Mass Observation investigated ordinary British life. One Mass Observation questionnaire in the 1950s asked panel members, in a series of queries that also canvassed their views on margarine and foreigners: ‘Do you ever cry in the pictures? Which films, if any, have made you cry, how much, and — if you remember — which part of the film? How far, if at all, do you feel ashamed on such occasions?’

Many respondents denied feeling shame, but one abrasive participant (a man in his forties) got right to the nub of the question: ‘I have never cried “in the pictures” — I’ve sometimes urinated. “Ashamed” — yes — that I’ve chucked my money away.’ An unmarried male clerk of a similar age wrote: ‘I feel no shame for my feeling, rather a breadth of thankfulness that I can still be moved to the extent described. Possibly most of us — under cover of a dark theatre — can indulge in a little sentimentality in a similar way as we react to great sorrow — in the quietness of one’s own room.’ For both the ashamed and the shameless, tears were comparable to urination or even perhaps to sexual secretions: something to be produced and enjoyed under cover of darkness, whether in the semi-public space of the cinema, or the ‘quietness of one’s own room’, with the more luxurious sensory possibilities that suggests.

This connection between weeping and excretion, while it seems to have come into its own in the 20th century, is by no means new. In 1586 the English clergyman and physician Timothie Bright wrote an influential Treatise of Melancholie (whose many readers probably included Shakespeare), which described tears as a ‘kinde of excrement not much unlike’ urine. In a poem called ‘A Lady Who P-st at the Tragedy of Cato’, Alexander Pope lampooned Joseph Addison’s celebrated play Cato: A Tragedy (1712) by describing a woman who responds to the drama with copious urine rather than the expected tears:

While maudlin Whigs deplor’d their Cato’s Fate,
Still with dry Eyes the Tory Celia sate,
But while her Pride forbids her Tears to flow,
The gushing Waters find a Vent below:
Tho’ secret, yet with copious Grief she mourns,
Like twenty River-Gods with all their Urns.
Let others screw their Hypocritick Face,
She shews her Grief in a sincerer Place;
There Nature reigns, and Passion void of Art,
For that Road leads directly to the Heart.

And there is a traditional Yiddish phrase for crying that translates literally as ‘pissing from the eyes’.

This old idea has been reinforced by modern science in the last century and a half. In recent decades, the most widely quoted theorist of tears has been the American biochemist William H Frey II who, since the 1980s, has been arguing that the metaphor of weeping as excretion should be taken quite literally. In an interview with The New York Times in 1982, Frey claimed that crying is ‘an exocrine process’ which, ‘like exhaling, urinating, defecating and sweating’ releases toxic substances from the body — in this case, so-called ‘stress hormones’. But Frey’s biochemical version of the incontinence theory of weeping is just a recent spin-off from a much more influential underlying set of ideas, one generated in the 19th century by the psychoanalytic model of the mind.

There are two ideas at the heart of the psychoanalytic approach to tears, ideas that, during the middle decades of the 20th century, entered into psychological orthodoxy among professionals and the lay public alike: repression and regression. The first implies that tears are a kind of overflow or discharge of previously repressed emotion, while the second presents the phenomenon of adult weeping as some sort of return to infantile, even prenatal, experiences and emotions.

In their ‘preliminary communication’ on the ‘Psychical Mechanism of Hysterical Phenomena’ of 1893, Josef Breuer and Sigmund Freud explained how repressed memories of traumatic events could, for years afterwards, give rise to hysterical symptoms. They believed that hypnosis could access these traumatic memories, which they thought of as ‘foreign bodies’ that needed to be flushed out of the psyche. Freud and Breuer reported that once a patient had put the memory into words, given it utterance, the hysterical symptoms would disappear.

Tears feature in this model of the psyche in several ways, both healthy and pathological. The proper and healthy role of tears, along with other voluntary and involuntary reactions to traumatic events, was to serve as a channel for the discharge of affect or strong feeling. Affect was conceived as a psychic fluid that needed to be drained out of the system; weeping was one way to achieve that. As an example of another such expedient, Breuer and Freud suggested acts of revenge. Tears, then, alongside words and deeds, are affect-discharge mechanisms, overflow channels, release valves.

While tears can be a sign of healthy catharsis in the Freudian model, they could, in other circumstances, be pathological, as in the case of Frau Emmy von N. She came to Freud complaining of confusion, sleeplessness and bouts of tears, which lasted for hours at a time. Freud decided that her tears were a hysterical symptom. Another of his cases, however, involved a woman who wept regularly on the anniversaries of her husband’s illness, decline and death. Freud described these as private, tearful ‘annual festivals of remembrance’. In this case, he insisted that the weeping was not hysterical, but was a ‘postponed abreaction’ — a delayed but healthy working out of affect, a belated expulsion of a traumatic foreign body.

Freud’s theories echo certain ideas proposed by Darwin and other evolutionary theorists in the 19th century, according to which weeping was one of many channels through which excess nervous energy could overflow. Tears, for Darwin, were never more than a side-effect of some other, useful behaviour. He started from the observation that the reflex secretion of tears was initially caused by ‘the irritation of any foreign body in the eye’. He then hypothesised that in cases of loud infant screaming, during which the eyes were closed tightly, that same reflex could be brought into action by pressure on the lachrymal glands. Over many generations, Darwin speculated, the association of tears with infant screams of pain and hunger gradually became extended to painful mental states of all kinds, so that tears could be produced even in the absence of irritating foreign bodies, or of screams. And thus, in the Freudian picture of tears washing away psychic foreign bodies, as well as in the imagery of mental fluids and bodily overflows, Darwin’s influence is clearly visible. Freud’s account is reminiscent of Darwin in one other way too, since it emphasises that weeping serves ‘no purpose whatever’ behaviourally speaking, other than to get rid of ‘increased cerebral excitation’ and to allow the excitation to ‘flow away’.


If Freud and Breuer understood weeping as essentially an excretory function, one in which tears could be associated symbolically with other bodily fluids, the psychoanalytic theorists who came after extended this framework in a multitude of weird and wonderful ways. In a couple of articles in the 1940s, the influential American Freudian Phyllis Greenacre put forward the view that neurotic weeping in women was to be understood as a displacement of urination. Involved in this theory was the idea of ‘body-phallus identification’ and the production of tears by women as an attempt to simulate male urination.

Greenacre subdivided the phenomenon into those women who exhibited ‘shower weeping’ and those who displayed ‘stream weeping’. The first type weeps inordinately, shedding floods of tears; the second allows a quiet stream to trickle down the cheek. Both types were explained with reference to a ‘struggle about urination in the infantile period of life’, including a strong element of penis envy. The difference between the psyches of these two kinds of women, roughly speaking, was that the ‘shower’ weeper was sadly resigned to her lack of a penis while the ‘stream’ weeper was still in revolt, harbouring illusional ideas of possessing a male organ and weeping in neurotic imitation of the longed-for male urination observed in childhood.

Not everyone put such emphasis on urination as a template for weeping. For other psychoanalysts, the key identification was between tears and amniotic fluid. In a lecture to the Society of Medical Psychoanalysts in New York in 1959, Thomas Szasz postulated that weeping represented an unconscious regression to the prenatal state in which the body is bathed in amniotic fluid. Weeping, then, was a regressive fantasy of return to the saline wetness of the womb.

But what would a psychoanalyst say about those tears of joy and pride that were so much on display at London 2012? ‘Crying at the Happy Ending’ is the title of a classic paper by the analyst Sandor Feldman, published in 1956. Contrary to appearances, he said, there was no such thing as weeping for joy. Those who cry at the happy ending of a film or at a moment of pride or joy in their own lives — at the birth of a child, or when reunited with a loved one who had been away or in danger, or, we might add, when receiving an Olympic gold medal — might think they shed tears of joy. In fact, on Feldman’s view, these are all merely cases of a delayed or displaced discharge of negative affect.

Underlying moments of pride or joy, Feldman claimed, was an awareness of the transitory nature of life and happiness. Seeing small children might make us cry tenderly, but it is because we know that they, like us, will lose their innocence, and that the infant idyll will pass, to be replaced by the ugly adult world. ‘Small children’ themselves, Feldman observed, ‘do not cry at the happy ending: they smile because they do not yet accept the fact of death. Crying at the happy ending probably starts when death is accepted as an inevitable fact.’ We cry, Feldman concluded, at the sad end that is sure to come: ‘There are no tears of joy, only tears of sadness.’

The incontinence theory of weeping is not currently in scientific vogue, despite its continuing popularity with some of the wider public. William H Frey II’s experiments, purporting to demonstrate that emotional tears serve as vehicles for the excretion of stress hormones, have not been successfully replicated by others. Freudian concepts of repression and regression no longer reign supreme. The idea that when women weep they are seeking to replicate the act of male urination, longed for since infancy, is a doctrine as quaint and incredible as anything produced by ancient physicians or medieval theologians. And the most recent research on the science of crying — surveyed in books such as Why Humans Like to Cry (2012) by the neuroscientist Michael Trimble and Why Only Humans Weep (2013) by the psychologist Ad Vingerhoets — does not support the idea that crying is an overflow of affect, an excretion, or a kind of catharsis.

Trimble and Vingerhoets both look to the history and evolution of cultural forms, including music, drama, literature and religious ritual, as well as to their own scientific disciplines, in search of a better understanding of this mysterious human phenomenon. Both conclude that the mental states which make humans cry are universal. But the categories they use to explain those triggers are so wide and vague as to include almost anything.


For Trimble, the emphasis is on tragedy, grief, empathy, compassion, and hope. However, his account of the underlying neurology relies on connecting tears with that most nebulous of psychological categories, ‘emotion’. Now that scientists of the mind have conclusively rejected a division, psychological or neurological, between cognitive and affective processes, ‘emotion’ can mean, more or less, any mental state.

Vingerhoets’s list of the key antecedents of tears is similarly broad. He talks about states of helplessness and loss, but also includes personal conflict, anger, rejection, feelings of inadequacy, self-pity, joy, and the emotions produced by music and films. For Vingerhoets, almost any emotional state generated in the context of infantile isolation, maternal bonding, romantic relationships, and social connectedness can provide an occasion for weeping. In other words, tears can be produced either by emotional isolation, or by an emotional encounter with another; either by loss and sorrow, or by success and joy.

Darwin rightly noted that tears could not be neatly associated with any single kind of mental state. They can be secreted ‘in sufficient abundance to roll down the cheeks’, he wrote, ‘under the most opposite emotions, and under no emotion at all’. A tear on its own means nothing. A tear shed in a particular mental, social, and narrative context can mean anything. ‘Tears, idle tears,’ wrote Alfred Tennyson, ‘I know not what they mean.’ Yet he, and we, continue to feel compelled to interpret them, to try to distil their meaning.



Thomas Dixon is director of the Centre for the History of the Emotions at Queen Mary University of London. His latest book is The Invention of Altruism (2008).


## 1. Basics

The notions of word and word meaning are problematic to pin down, and this is reflected in the difficulties one encounters in defining the basic terminology of lexical semantics. In part, this depends on the fact that the words ‘word’ and ‘meaning’ themselves have multiple meanings, depending on the context and the purpose they are used for (Matthews 1991). For example, in ordinary parlance ‘word’ is ambiguous between lexeme (as in “Color and colour are spellings of the same word”) and lexical unit (as in “there are thirteen words in the tongue-twister How much wood would a woodchuck chuck if a woodchuck could chuck wood?”). Let us then elucidate the notion of word in a little more detail, and specify what key questions will guide our discussion of word meaning in the rest of the entry.

### 1.1 The Notion of Word

The notion of word can be defined in two fundamental ways. On one side, we have linguistic definitions, which attempt to characterize the notion of word by illustrating the explanatory role words play or are expected to play in the context of a formal grammar. These approaches often end up splitting the notion of word into a number of more fine-grained and theoretically manageable notions, but still tend to regard ‘word’ as a term that zeroes in on a scientifically respectable concept (e.g., Di Sciullo & Williams 1987). For example, words are the primary locus of stress and tone assignment, the basic domain of morphological conditions on affixation, cliticization, compounding, and the theme of phonological and morphological processes of assimilation, vowel shift, metathesis, and reduplication (Bromberger 2011). On the other side, we have metaphysical definitions, which attempt to elucidate the notion of word by describing the metaphysical type of words. This implies answering such questions as “what are words?”, “how should words be individuated?”, and “on what conditions two utterances count as utterances of the same word?”. For example, Kaplan (1990, 2011) has proposed to replace the orthodox type-token account of the relation between words and word occurrences with a “common currency” view on which words relate to their occurrences as continuants relate to stages in four-dimensionalist metaphysics (see the entries on types and tokens and identity over time). For alternative views, see McCulloch (1991), Cappelen (1999), Alward (2005), and Hawthorne & Lepore (2011).

For the purposes of this entry, we can proceed as follows. Every natural language has a lexicon organized into lexical entries, which contain information about lexemes. These are the smallest linguistic expressions that are conventionally associated with a non-compositional meaning and can be uttered in isolation to convey semantic content. Lexemes relate to words just like phonemes relate to phones in phonological theory. To understand the parallelism, think of the variations in the place of articulation of the phoneme /n/, which is pronounced as the voiced bilabial nasal [m] in “ten bags” and as the voiced velar nasal [ŋ] in “ten gates”. Just as phonemes are abstract representations of sets of phones (each defining one way the phoneme can be instantiated in speech), lexemes can be defined as abstract representations of sets of words (each defining one way the lexeme can be instantiated in sentences). Thus, ‘do’, ‘does’, ‘done’ and ‘doing’ are morphologically and graphically marked realizations of the same abstract lexeme do. To wrap everything into a single formula, we can say that the lexical entries listed in a lexicon set the parameters defining the instantiation potential of lexemes as words in utterances and inscriptions (Murphy 2010). In what follows, we shall rely on an intuitive notion of word. However, the reader should bear in mind that, unless otherwise indicated, our talk of ‘word meaning’ should be understood as talk of ‘lexeme meaning’, in the above sense.

### 1.2 Theories of Word Meaning

As with general theories of meaning (see the entry on theories of meaning), two kinds of theory of word meaning can be distinguished. The first type of theory, which we can label a semantic theory of word meaning, is interested in clarifying what meaning-determining information is encoded by the lexical items of a natural language. A framework establishing that the word ‘bachelor’ encodes the lexical concept adult unmarried male would be an example of a semantic theory of word meaning. The second type of theory, which we can label a foundational theory of word meaning, is interested in singling out the facts whereby lexical expressions come to have the semantic properties they have for their users. A framework investigating the dynamics of linguistic change and social coordination in virtue of which the word ‘bachelor’ has been assigned the function of expressing the lexical concept adult unmarried male would be an example of a foundational theory of word meaning. Obviously, the endorsement of a given semantic theory is bound to place important constraints on the claims one might propose about the foundational attributes of word meaning, and vice versa. Semantic and foundational concerns are often interdependent, and it is difficult to find theories of word meaning which are either purely semantic or purely foundational. For example, Ludlow (2014) establishes a strong correlation between the underdetermination of lexical concepts (a semantic matter) and the processes of linguistic entrenchment whereby discourse partners converge on the assignation of shared meanings to lexical expressions (a foundational matter). However, semantic and foundational theories remain in principle different and designed to answer partly non-overlapping sets of questions. Our focus will be on semantic theories of word meaning, i.e., on theories that try to provide an answer to such questions as “what is the nature of word meaning?”, “what do we know when we know the meaning of a word?”, and “what (kind of) information must an agent associate with the words of a language L in order to be a competent user of the lexicon of L?”. However, we will engage in foundational considerations whenever necessary to clarify how a given theoretical framework addresses issues in the domain of a semantic theory.

## 2. Historical Background

The study of word meaning acquired the status of a mature academic enterprise in the 19th century, with the birth of historical-philological semantics (Section 2.2). Yet, matters related to word meaning had been the subject of much debate in earlier times. Word meaning constituted a prominent topic of inquiry in three classical traditions: speculative etymology, rhetoric, and lexicography (Meier-Oeser 2011; Geeraerts 2013).

### 2.1 Classical Traditions

To understand what speculative etymology amounts to, it is useful to refer to the Cratylus (383a-d), where Plato presents his well-known naturalist thesis about word meaning: natural kind terms express the essence of the objects they name and words are appropriate to their referents insofar as they describe what their referents are (see the entry on Plato’s Cratylus). The task of speculative etymology is to break down the surface features of word forms and recover the descriptive (often phonoiconic) rationale that motivated their genesis. For example, the Greek word ‘anthrôpos’ can be broken down into anathrôn ha opôpe, which translates as “one who reflects on what he has seen”: the word used to denote humans reflects their being the only animal species which possesses the combination of vision and intelligence. More in Malkiel (1993), Fumaroli (1999), and Del Bello (2007).

The primary aim of the rhetorical tradition was the study of figures of speech. Some of these affect structural variables such as the linear order of the words occurring in a sentence (e.g., parallelism, climax, anastrophe); others are semantic and arise upon using lexical expressions in a way not intended by their normal meaning (e.g., metaphor, metonymy, synecdoche). Although originated for stylistic and literary purposes, the identification of regular patterns in the figurative use of words initiated by classical rhetoric provided a first organized framework to investigate the semantic flexibility of words, and stimulated an interest in our ability to use lexical expressions beyond the boundaries of their literal meaning. More in Kennedy (1994), Herrick (2004), and Toye (2013).

Finally, lexicography and the practice of writing dictionaries played an important role in systematizing the descriptive data on which later inquiry would rely to illuminate the relationship between words and their meaning. Putnam’s (1970) claim that it was the phenomenon of writing (and needing) dictionaries that gave rise to the idea of a semantic theory is probably an overstatement. But lexicography certainly had an impact on the development of modern theories of word meaning. The practice of separating dictionary entries via lemmatization and defining them through a combination of semantically simpler elements provided a stylistic and methodological paradigm for much subsequent research on lexical phenomena, such as decompositional theories of word meaning. More in Béjoint (2000), Jackson (2002), and Hanks (2013).

### 2.2 Historical-Philological Semantics

Historical-philological semantics incorporated elements from all the above classical traditions and dominated the linguistic scene roughly from 1870 to 1930, with the work of scholars such as Michel Bréal, Hermann Paul, and Arsène Darmesteter (Gordon 1982). In particular, it absorbed from speculative etymology an interest in the conceptual decomposition of word meaning, it acquired from rhetoric a toolkit for the classification of lexical phenomena, and it assimilated from lexicography and textual philology a basis of descriptive data for lexical analysis (Geeraerts 2013). On the methodological side, the key features of the approach to word meaning introduced by historical-philological semantics can be summarized as follows. First, it had a diachronic and contextualist orientation: that is, it was primarily concerned with the historical evolution of word meaning rather than with word meaning statically understood, and attributed major importance to the pragmatic flexibility of word meaning (e.g., witness Paul’s (1920 [1880]) distinction between usuelle Bedeutung and okkasionelle Bedeutung, or Bréal’s (1924 [1897]) account of polysemy as a byproduct of semantic change). Second, it considered word meaning a psychological phenomenon: it assumed that the semantic properties of words should be defined in mentalistic terms (i.e., words signify “concepts” or “ideas” in a broad sense), and that the dynamics of sense modulation, extension, and contraction that underlie lexical change correspond to patterns of conceptual activity in the human mind. Interestingly, while the rhetorical tradition had looked at tropes as devices whose investigation was motivated by stylistic concerns, historical-philological semantics regarded the psychological mechanisms underlying the production and the comprehension of figures of speech as part of the ordinary life of languages, and as engines of the evolution of all aspects of lexical systems (Nerlich 1992).

The contribution made by historical-philological semantics to the study of lexical phenomena had a long-lasting influence. First, with its emphasis on the principles of semantic change, historical-philological semantics was the first systematic framework to focus on the dynamic nature of word meaning, and to see the contextual flexibility of words as the primary phenomenon that a lexical semantic theory should aim to account for (Nerlich & Clarke 1996, 2007). This feature of historical-philological semantics makes it a forerunner of the stress on context-sensitivity encouraged by many subsequent approaches to word meaning in philosophy (Section 3) and linguistics (Section 4). Second, the psychological conception of word meaning fostered by historical-philological semantics added to the agenda of linguistic research the question of how word meaning relates to cognition at large (Geeraerts 2010). If word meaning is essentially a psychological phenomenon, how can we characterize it? What is the dividing line separating the aspects of our mental life that are relevant to the knowledge of lexical meaning from those that are not? As we shall see, this question will constitute a central concern for cognitive theories of word meaning (Section 5).

## 3. Philosophy of Language

In this section we shall review some semantic and metasemantic theories in analytic philosophy that bear on how lexical meaning should be conceived and described. We shall follow a roughly chronological order. Some of these theories, such as Carnap’s theory of meaning postulates and Putnam’s theory of stereotypes, have a strong focus on lexical meaning, whereas others, such as Montague semantics, regard it as a side issue. However, such negative views form an equally integral part of the philosophical debate on word meaning.

### 3.1 Early Contemporary Views

By taking the connection of thoughts and truth as the basic issue of semantics and regarding sentences as “the proper means of expression for a thought” (Frege 1979a [1897]), Frege paved the way for the 20th century priority of sentential meaning over lexical meaning: the semantic properties of subsentential expressions such as individual words were regarded as derivative, and identified with their contribution to sentential meaning. Sentential meaning was in turn identified with truth conditions, most explicitly in Wittgenstein’s Tractatus logico-philosophicus (1922). However, Frege never lost interest in the “building blocks of thoughts” (Frege 1979b [1914]), i.e., in the semantic properties of subsentential expressions. Indeed, his theory of sense and reference for names and predicates may be counted as the inaugural contribution to lexical semantics within the analytic tradition (see the entry on Gottlob Frege). It should be noted that Frege did not attribute semantic properties to lexical units as such, but to what he regarded as a sentence’s logical constituents: e.g., not to the word ‘dog’ but to the predicate ‘is a dog’. In later work this distinction was obliterated and Frege’s semantic notions came to be applied to lexical units.

Possibly because of lack of clarity affecting the notion of sense, and surely because of Russell’s (1905) authoritative criticism of Fregean semantics, word meaning disappeared from the philosophical scene during the 1920s and 1930s. In Wittgenstein’s Tractatus the “real” lexical units, i.e., the constituents of a completely analyzed sentence, are just names, whose semantic properties are exhausted by their reference. In Tarski’s (1933) work on formal languages, which was taken as definitional of the very field of semantics for some time, lexical units are semantically categorized into different classes (individual constants, predicative constants, functional constants) depending on the logical type of their reference, i.e., according to whether they designate individuals in a domain of interpretation, classes of individuals (or of n-tuples of individuals), or functions defined over the domain. However, Tarski made no attempt nor felt any need to represent semantic differences among expressions belonging to the same logical type (e.g., between one-place predicates such as ‘dog’ and ‘run’, or between two-place predicates such as ‘love’ and ‘left of’). See the entry on Alfred Tarski.

Quine (1943) and Church (1951) rehabilitated Frege’s distinction of sense and reference. Non-designating words such as ‘Pegasus’ cannot be meaningless: it is precisely the meaning of ‘Pegasus’ that allows speakers to establish that the word lacks reference. Moreover, as Frege (1892) had argued, true factual identities such as “Morning Star = Evening Star” do not state synonymies; if they did, any competent speaker of the language would be aware of their truth. Along these lines, Carnap (1947) proposed a new formulation of the sense/reference dichotomy, which was translated into the distinction between intension and extension. The notion of intension was intended to be an explicatum of Frege’s “obscure” notion of sense: two expressions have the same intension if and only if they have the same extension in every possible world or, in Carnap’s terminology, in every state description (i.e., in every maximal consistent set of atomic sentences and negations of atomic sentences). Thus, ‘round’ and ‘spherical’ have the same intension (i.e., they express the same function from possible worlds to extensions) because they apply to the same objects in every possible world. Carnap later suggested that intensions could be regarded as the content of lexical semantic competence: to know the meaning of a word is to know its intension, “the general conditions which an object must fulfill in order to be denoted by [that] word” (Carnap 1955). However, such general conditions were not spelled out by Carnap (1947). Consequently, his system did not account, any more than Tarski’s, for semantic differences and relations among words belonging to the same semantic category: there were possible worlds in which the same individual a could be both a married man and a bachelor, as no constraints were placed on either word’s intension. One consequence, as Quine (1951) pointed out, was that Carnap’s system did not capture our intuitive notion of analyticity, on which “Bachelors are unmarried” is not just true but true in every possible world.
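
In later possible-worlds notation (a schematic reconstruction for illustration, not Carnap’s own state-description formalism), the intension of an expression e can be written as a function from worlds to extensions, so that sameness of intension amounts to sameness of extension at every world:

• $$\mathrm{int}(e) = \lambda w.\ \mathrm{ext}_w(e)$$, and $$\mathrm{int}(e) = \mathrm{int}(e') \text{ iff } \forall w\ (\mathrm{ext}_w(e) = \mathrm{ext}_w(e'))$$

On this rendering ‘round’ and ‘spherical’ receive the same intension, while nothing prevents a world in which $$\mathrm{ext}_w(\text{‘bachelor’})$$ and $$\mathrm{ext}_w(\text{‘married’})$$ overlap.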

To remedy what he agreed was an unsatisfactory feature of his system, Carnap (1952) introduced meaning postulates, i.e., stipulations on the relations among the extensions of lexical items. For example, the meaning postulate

• (MP) $$\forall x (\mbox{bachelor}(x) \supset \mathord{\sim}\mbox{married}(x))$$

stipulates that any individual that is in the extension of ‘bachelor’ is not in the extension of ‘married’. Meaning postulates can be seen either as restrictions on possible worlds or as relativizing analyticity to possible worlds. On the former option we shall say that “If Paul is a bachelor then Paul is unmarried” holds in every admissible possible world, while on the latter we shall say that it holds in every possible world in which (MP) holds. Carnap regarded the two options as equivalent; nowadays, the former is usually preferred. Carnap (1952) also thought that meaning postulates expressed the semanticist’s “intentions” with respect to the meanings of the descriptive constants, which may or may not reflect linguistic usage; again, today postulates are usually understood as expressing semantic relations (synonymy, analytic entailment, etc.) among lexical items as currently used by competent speakers.
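
Schematically (again a reconstruction rather than Carnap’s notation), with $$\varphi$$ standing for “If Paul is a bachelor then Paul is unmarried”, the two readings come to:

• $$\forall w \in W_{\mathrm{adm}}\ (w \vDash \varphi)$$, where $$W_{\mathrm{adm}} = \{w : w \vDash \mathrm{MP}\}$$ (the postulate restricts the admissible worlds);

• $$\forall w\ (w \vDash \mathrm{MP} \rightarrow w \vDash \varphi)$$ (analyticity is relativized to the worlds where the postulate holds).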

In the late 1960s and early 1970s, Montague (1974) and other philosophers and linguists (Kaplan, Kamp, Partee, and D. Lewis among others) set out to apply to the analysis of natural language the notions and techniques that had been introduced by Tarski and Carnap and further developed in Kripke’s possible worlds semantics (see the entry on Montague semantics). Montague semantics can be represented as aiming to capture the inferential structure of a natural language: every inference that a competent speaker would regard as valid should be derivable in the theory. Some such inferences depend for their validity on syntactic structure and on the logical properties of logical words, like the inference from “Every man is mortal and Socrates is a man” to “Socrates is mortal”. Other inferences depend on properties of non-logical words that are usually regarded as semantic, like the inference from “Kim is pregnant” to “Kim is not a man”. In Montague semantics, such inferences are taken care of by supplementing the theory with suitable Carnapian meaning postulates. Yet, some followers of Montague regarded such additions as spurious: the aims of semantics, they said, should be distinguished from those of lexicography. The description of the meaning of non-logical words requires considerable world knowledge: for example, the inference from “Kim is pregnant” to “Kim is not a man” is based on a “biological” rather than on a “logical” generalization. Hence, we should not expect a semantic theory to furnish an account of how any two expressions belonging to the same syntactic category differ in meaning (Thomason 1974). From such a viewpoint, Montague semantics would not differ significantly from Tarskian semantics in its account of lexical meaning. But not all later work within Montague’s program shared such a skepticism about representing aspects of lexical meaning within a semantic theory, using either componential analysis (Dowty 1979) or meaning postulates (Chierchia & McConnell-Ginet 2000).

For those who believe that meaning postulates can exhaust lexical meaning, the issue arises of how to choose them, i.e., of how—and whether—to delimit the set of meaning-relevant truths with respect to the set of all true statements in which a given word occurs. As we just saw, Carnap himself thought that the choice could only be the expression of the semanticist’s intentions. However, we seem to share intuitions of analyticity, i.e., we seem to regard some, but not all sentences of a natural language as true by virtue of the meaning of the occurring words. Such intuitions are taken to reflect objective semantic properties of the language, that the semanticist should describe rather than impose at will. Quine (1951) did not challenge the existence of such intuitions, but he argued that they could not be cashed out in the form of a scientifically respectable criterion separating analytic truths (“Bachelors are unmarried”) from synthetic truths (“Aldo’s uncle is a bachelor”), whose truth does not depend on meaning alone. Though Quine’s arguments were often criticized (for recent criticisms, see Williamson 2007), the analytic/synthetic distinction was never fully vindicated, at least within philosophy (for an exception, see Russell 2008). Hence, it was widely believed that lexical meaning could not be adequately described by meaning postulates. Fodor and Lepore (1992) argued that this left semantics with two options: lexical meanings were either atomic (i.e., they could not be specified by descriptions involving other meanings) or they were holistic, i.e., only the set of all true sentences of the language could count as fixing them.

Neither alternative looked promising. Holism incurred objections connected with the acquisition and the understanding of language: how could individual words be acquired by children, if grasping their meaning involved, somehow, semantic competence on the whole language? And how could individual sentences be understood if the information required to understand them exceeded the capacity of human working memory? (For an influential criticism of several varieties of holism, see Dummett 1991; for a review, Pagin 2006). Atomism, in turn, ran against strong intuitions of (at least some) relations among words being part of a language’s semantics: it is because of what ‘bachelor’ means that it doesn’t make sense to suppose we could discover that some bachelors are married. Fodor (1998) countered this objection by reinterpreting allegedly semantic relations as metaphysically necessary connections among extensions of words. However, sentences that are usually regarded as analytic, such as “Bachelors are unmarried”, are not easily seen as just metaphysically necessary truths like “Water is H2O”. If water is H2O, then its metaphysical essence consists in being H2O (whether we know it or not); but there is no such thing as a metaphysical essence that all bachelors share—an essence that could be hidden to us, even though we use the word ‘bachelor’ competently. On the contrary, on acquiring the word ‘bachelor’ we acquire the belief that bachelors are unmarried (Quine 1986); by contrast, many speakers that have ‘water’ in their lexical repertoire do not know that water is H2O. The difficulties of atomism and holism opened the way to vindications of molecularism (e.g., Perry 1994; Marconi 1997), the view on which only some relations among words matter for acquisition and understanding (see the entry on meaning holism).

While mainstream formal semantics went with Carnap and Montague, supplementing the Tarskian apparatus with the possible worlds machinery and defining meanings as intensions, Davidson (1967, 1984) put forth an alternative suggestion. Tarski had shown how to provide a definition of the truth predicate for a (formal) language L: such a definition is materially adequate (i.e., it is a definition of truth, rather than of some other property of sentences of L) if and only if it entails every biconditional of the form

• (T) S is true in L iff p,

where S is a sentence of L and p is its translation into the metalanguage of L in which the definition is formulated. Thus, Tarski’s account of truth presupposes that the semantics of both L and its metalanguage is fixed (otherwise it would be undetermined whether S translates into p). On Tarski’s view, each biconditional of form (T) counts as a “partial definition” of the truth predicate for sentences of L (see the entry on Tarski’s truth definitions). By contrast, Davidson suggested that if one took the notion of truth for granted, then T-biconditionals could be read as collectively constituting a theory of meaning for L, i.e., as stating truth conditions for the sentences of L. For example,

• (W) “If the weather is bad then Sharon is sad” is true in English iff either the weather is not bad or Sharon is sad

states the truth conditions of the English sentence “If the weather is bad then Sharon is sad”. Of course, (W) is intelligible only if one understands the language in which it is phrased, including the predicate ‘true in English’. Davidson thought that the recursive machinery of Tarski’s definition of truth could be transferred to the suggested semantic reading, with extensions to take care of the forms of natural language composition that Tarski had neglected because they had no analogue in the formal languages he was dealing with. Unfortunately, few of such extensions were ever spelled out by Davidson or his followers. Moreover, it is difficult to see how, giving up possible worlds and intensions in favor of a purely extensional theory, the Davidsonian program could account for the semantics of propositional attitude ascriptions of the form “A believes (hopes, imagines, etc.) that p”.

Construed as theorems of a semantic theory, T-biconditionals were often accused of being uninformative (Putnam 1975; Dummett 1976): to understand them, one has to already possess the information they are supposed to provide. This is particularly striking in the case of lexical axioms such as the following:

• (V1) Val(x, ‘man’) iff x is a man;
• (V2) Val($$\langle x,y\rangle$$, ‘knows’) iff x knows y.

(To be read, respectively, as “the predicate ‘man’ applies to x if and only if x is a man” and “the predicate ‘know’ applies to the pair $$\langle x, y\rangle$$ if and only if x knows y”). Here it is apparent that in order to understand (V1) one must know what ‘man’ means, which is just the information that (V1) is supposed to convey (as the theory, being purely extensional, identifies meaning with reference). Some Davidsonians, though admitting that statements such as (V1) and (V2) are in a sense “uninformative”, insist that what (V1) and (V2) state is no less “substantive” (Larson & Segal 1995). To prove their point, they appeal to non-homophonic versions of lexical axioms, i.e., to the axioms of a semantic theory for a language that does not coincide with the (meta)language in which the theory itself is phrased. Such would be, e.g.,

• (V3) Val(x, ‘man’) si et seulement si x est un homme.

(V3), they argue, is clearly substantive, yet what it says is exactly what (V1) says, namely, that the word ‘man’ applies to a certain category of objects. Therefore, if (V3) is substantive, so is (V1). But this is beside the point. The issue is not whether (V1) expresses a proposition; it clearly does, and it is, in this sense, “substantive”. But what is relevant here is informative power: to one who understands the metalanguage of (V3), i.e., French, (V3) may communicate new information, whereas there is no circumstance in which (V1) would communicate new information to one who understands English.

### 3.2 Grounding and Lexical Competence

In the mid-1970s, Dummett raised the issue of the proper place of lexical meaning in a semantic theory. If the job of a theory of meaning is to make the content of semantic competence explicit—so that one could acquire semantic competence in a language L by learning an adequate theory of meaning for L—then the theory ought to reflect a competent speaker’s knowledge of circumstances in which she would assert a sentence of L, such as “The horse is in the barn”, as distinct from circumstances in which she would assert “The cat is on the mat”. This, in turn, appears to require that the theory yields explicit information about the use of ‘horse’, ‘barn’, etc., or, in other words, that it includes information which goes beyond the logical type of lexical units. Dummett identified such information with a word’s Fregean sense. However, he did not specify the format in which word senses should be expressed in a semantic theory, except for words that could be defined (e.g., ‘aunt’ = “sister of a parent”): in such cases, the definiens specifies what a speaker must understand in order to understand the word (Dummett 1991). But of course, not all words are of this kind. For other words, the theory should specify what it is for a speaker to know them, though we are not told how exactly this should be done. Similarly, Grandy (1974) pointed out that by identifying the meaning of a word such as ‘wise’ as a function from possible worlds to the sets of wise people in those worlds, Montague semantics only specifies a formal structure and eludes the question of whether there is some possible description for the functions which are claimed to be the meanings of words. Lacking such descriptions, possible worlds semantics is not really a theory of meaning but a theory of logical form or logical validity. Again, aside from suggesting that “one would like the functions to be given in terms of computation procedures, in some sense”, Grandy had little to say about the form of lexical descriptions.

In a similar vein, Partee (1981) argued that Montague semantics, like every compositional or structural semantics, does not uniquely fix the intensional interpretation of words. The addition of meaning postulates does rule out some interpretations (e.g., interpretations on which the extension of ‘bachelor’ and the extension of ‘married’ may intersect in some possible world). However, it does not reduce them to the unique, “intended” or, in Montague’s words, “actual” interpretation (Montague 1974). Hence, standard model-theoretic semantics does not capture the whole content of a speaker’s semantic competence, but only its structural aspects. Fixing “the actual interpretation function” requires more than language-to-language connections as encoded by, e.g., meaning postulates: it requires some “language-to-world grounding”. Arguments to the same effect were developed by Bonomi (1983) and Harnad (1990). In particular, Harnad had in mind the simulation of human semantic competence in artificial systems: he suggested that symbol grounding could be implemented, in part, by “feature detectors” picking out “invariant features of objects and event categories from their sensory projections” (for recent developments see, e.g., Steels & Hild 2012). Such a cognitively oriented conception of grounding differs from Partee’s Putnam-inspired view, on which the semantic grounding of lexical items depends on the speakers’ objective interactions with the external world in addition to their narrow psychological properties.

A resolutely cognitive approach characterizes Marconi’s (1997) account of lexical semantic competence. In his view, lexical competence has two aspects: an inferential aspect, underlying performances such as semantically based inference and the command of synonymy, hyponymy and other semantic relations; and a referential aspect, which is in charge of performances such as naming (e.g., calling a horse ‘horse’) and application (e.g., answering the question “Are there any spoons in the drawer?”). Language users typically possess both aspects of lexical competence, though in different degrees for different words: a zoologist’s inferential competence on ‘manatee’ is usually richer than a layman’s, though a layman who spent her life among manatees may be more competent, referentially, than a “bookish” scientist. However, the two aspects are independent of each other, and neuropsychological evidence appears to show that they can be dissociated: there are patients whose referential competence is impaired or lost while their inferential competence is intact, and vice versa (see Section 5.3). Being a theory of individual competence, Marconi’s account does not deal directly with lexical meanings in a public language: communication depends both on the uniformity of cognitive interactions with the external world and on communal norms concerning the use of language, together with speakers’ deferential attitude toward semantic authorities.

### 3.3 The Externalist Turn

Beginning in the early 1970s, views on lexical meaning were revolutionized by semantic externalism. Initially, externalism was limited to proper names and natural kind words such as ‘gold’ or ‘lemon’. In slightly different ways, both Kripke (1972) and Putnam (1970, 1975) argued that the reference of such words was not determined by any description that a competent speaker associated with the word; more generally, and contrary to what Frege may have thought, it was not determined by any cognitive content associated with it in a speaker’s mind (for arguments to that effect, see the entry on names). Instead, reference is determined, at least in part, by objective (“causal”) relations between a speaker and the external world. For example, a speaker refers to Aristotle when she utters the sentence “Aristotle was a great warrior”—so that her assertion expresses a false proposition about Aristotle, not a true proposition about some great warrior she may “have in mind”—thanks to her connection with Aristotle himself. In this case, the connection is constituted by a historical chain of speakers going back to the initial users of the name ‘Aristotle’, or its Greek equivalent, in baptism-like circumstances. To belong to the chain, speakers (including present-day speakers) are not required to possess any precise knowledge of Aristotle’s life and deeds; they are, however, required to intend to use the name as it is used by the speakers they are picking up the name from, i.e., to refer to the individual those speakers intend to refer to.

In the case of most natural kind names, it may be argued, baptisms are hard to identify or even conjecture. In Putnam’s view, for such words reference is determined by speakers’ causal interaction with portions of matter or biological individuals in their environment: ‘water’, for example, refers to this liquid stuff, stuff that is normally found in our rivers, lakes, etc. The indexical component (this liquid, our rivers) is crucial to reference determination: it wouldn’t do to identify the referent of ‘water’ by way of some description (“liquid, transparent, quenches thirst, boils at 100°C, etc.”), for something might fit the description yet fail to be water, as in Putnam’s famous Twin Earth thought experiment (see the entry on reference). It might be remarked that, thanks to modern chemistry, we now possess a description that is sure to apply to water and only to water: “being H2O” (Millikan 2005). However, even if our chemistry were badly mistaken (as it could, in principle, turn out to be) and water were not, in fact, H2O, ‘water’ would still refer to whatever has the same nature as this liquid. Something belongs to the extension of ‘water’ if and only if it is the same substance as this liquid, which we identify—correctly, as we believe—as being H2O.
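
Putnam (1975) gave this indexical account a quasi-formal statement; in slightly simplified notation, with $$\mathrm{same}_L$$ the cross-world same-liquid relation:

• $$\forall w\ \forall x\ (x \in \mathrm{ext}_w(\text{‘water’}) \leftrightarrow \mathrm{same}_L(x, \text{this liquid in the actual world}))$$

The right-hand side contains no qualitative description that a superficially similar substance could satisfy; it anchors the extension, at every world, to the stuff actually picked out by the demonstrative.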

Let it be noted that in Putnam’s original proposal, reference determination is utterly independent of speakers’ cognition: ‘water’ on Twin Earth refers to XYZ (not to H2O) even though the difference between the two substances is cognitively inert, so that before chemistry was created nobody on either Earth or Twin Earth could have told them apart. However, the label ‘externalism’ has been occasionally used for weaker views: a semantic account may be regarded as externalist if it takes semantic content to depend in one way or another on relations a computational system bears to things outside itself (Rey 2005; Borg 2012), irrespective of whether such relations affect the system’s cognitive state. Weak externalism is hard to distinguish from forms of internalism on which a word’s reference is determined by information stored in a speaker’s cognitive system—information of which the speaker may or may not be aware (Evans 1982). Be that as it may, in what follows ‘externalism’ will be used to mean strong, or Putnamian, externalism.

Does externalism apply to other lexical categories besides proper names and natural kind words? Putnam (1975) extended it to artifactual words, claiming that ‘pencil’ would refer to pencils—those objects—even if they turned out not to fit the description by which we normally identify them (e.g., if they were discovered to be organisms, not artifacts). Schwartz (1978, 1980) pointed out, among many objections, that even in such a case we could make objects fitting the original description; we would then regard the pencil-like organisms as impostors, not as “genuine” pencils. Others sided with Putnam and the externalist account: for example, Kornblith (1980) pointed out that artifactual kinds from an ancient civilization could be re-baptized in total ignorance of their function. The new artifactual word would then refer to the kind those objects belong to independently of any beliefs about them, true or false. Against such externalist accounts, Thomasson (2007) argued that artifactual terms cannot refer to artifactual kinds independently of all beliefs and concepts about the nature of the kind, for the concept of the kind’s creator(s) is constitutive of the nature of the kind. Whether artifactual words are liable to an externalist account is still an open issue, as is, more generally, the scope of application of externalist semantics.

There is another form of externalism that does apply to all or most words of a language: social externalism (Burge 1979), the view on which the meaning of a word as used by an individual speaker depends on the semantic standards of the linguistic community the speaker belongs to. In our community the word ‘arthritis’ refers to arthritis—an affliction of the joints—even when used by a speaker who believes that it can afflict the muscles as well and uses the word accordingly. If the community the speaker belongs to applied ‘arthritis’ to rheumatoid ailments in general, whether or not they afflict the joints, the same word form would not mean arthritis and would not refer to arthritis. Hence, a speaker’s mental contents, such as the meanings associated with the words she uses, depend on something external to her, namely the uses and the standards of use of the linguistic community she belongs to. Thus, social externalism eliminates the notion of idiolect: words only have the meanings conferred upon them by the linguistic community (“public” meanings); discounting radical incompetence, there is no such thing as individual semantic deviance, there are only false beliefs (for criticisms, see Bilgrami 1992, Marconi 1997; see also the entry on idiolects).

Though both forms of externalism focus on reference, neither is a complete reduction of lexical meaning to reference. Both Putnam and Burge make it a necessary condition of semantic competence on a word that a speaker commands information that other semantic views would regard as part of the word’s sense. For example, if a speaker believes that manatees are a kind of household appliance, she would not count as competent on the word ‘manatee’, nor would she refer to manatees by using it (Putnam 1975; Burge 1993). Beyond that, it is not easy for externalists to provide a satisfactory account of lexical semantic competence, as they are committed to regarding speakers’ beliefs and abilities (e.g., recognitional abilities) as essentially irrelevant to reference determination, hence to meaning. Two main solutions have been proposed. Putnam (1973) suggested that a speaker’s semantic competence consists in her knowledge of stereotypes associated with words. A stereotype is an oversimplified theory of a word’s extension: the stereotype associated with ‘tiger’ describes tigers as cat-like, striped, carnivorous, fierce, living in the jungle, etc. Stereotypes are not meanings, as they do not determine reference in the right way: there are albino tigers and tigers that live in zoos. What the ‘tiger’-stereotype describes is (what the community takes to be) the typical tiger. Knowledge of stereotypes is necessary to be regarded as a competent speaker, and—one surmises—it can also be considered sufficient for the purposes of ordinary communication. Thus, Putnam’s account does provide some content for semantic competence, though it dissociates it from knowledge of meaning.

On an alternative view (Devitt 1983), competence on ‘tiger’ does not consist in entertaining propositional beliefs such as “tigers are striped”, but rather in being appropriately linked to a network of causal chains for ‘tiger’ involving other people’s abilities, groundings, and reference borrowings. In order to understand the English word ‘tiger’ and use it in a competent fashion, a subject must be able to combine ‘tiger’ appropriately with other words to form sentences, to have thoughts which those sentences express, and to ground these thoughts in tigers. Devitt’s account appears to make some room for a speaker’s ability to, e.g., recognize a tiger when she sees one; however, the respective weights of individual abilities (and beliefs) and objective grounding are not clearly specified. Suppose a speaker A belongs to a community C that is familiar with tigers; unfortunately, A has no knowledge of the typical appearance of a tiger and is unable to tell a tiger from a leopard. Should A be regarded as a competent user of ‘tiger’ on account of her being “part of C” and therefore linked to a network of causal chains for ‘tiger’?

### 3.4 Internalism

Some philosophers (e.g., Loar 1981; McGinn 1982; Block 1986) objected to the reduction of lexical meaning to reference, or to non-psychological factors that are alleged to determine reference. In their view, there are two aspects of meaning (more generally, of content): the narrow aspect, which captures the intuition that ‘water’ has the same meaning in both Earthian and Twin-Earthian English, and the wide aspect, which captures the externalist intuition that ‘water’ picks out different substances in the two worlds. The wide notion is required to account for the difference in reference between English and Twin-English ‘water’; the narrow notion is needed, first and foremost, to account for the relation between a subject’s beliefs and her behavior. The idea is that how an object of reference is described (not just which object one refers to) can make a difference in determining behavior. Oedipus married Jocasta because he thought he was marrying the queen of Thebes, not his mother, though as a matter of fact Jocasta was his mother. This applies to words of all categories: someone may believe that water quenches thirst without believing that H2O does; Lois Lane believed that Superman was a superhero but she definitely did not believe the same of her colleague Clark Kent, so she behaved one way to the man she identified as Superman and another way to the man she identified as Clark Kent (though they were the same man). Theorists who countenance these two components of meaning and content usually identify the narrow aspect with the inferential or conceptual role of an expression e, i.e., with the aspect of e that helps determine the inferential relations between sentences containing an occurrence of e and other sentences. Crucially, the two aspects are independent: neither determines the other. The stress on the independence of the two factors also characterizes more recent versions of so-called “dual aspect” theories, such as Chalmers (1996, 2002).

While dual theorists agree with Putnam’s claim that some aspects of meaning are not “in the head”, others have opted for plain internalism. For example, Segal (2000) rejected the intuitions that are usually associated with the Twin-Earth cases by arguing that meaning (and content in general) “locally supervenes” on a subject’s intrinsic physical properties. But the most influential critic of externalism has undoubtedly been Chomsky (2000). First, he argued that much of the alleged support for externalism comes in fact from “intuitions” about words’ reference in this or that circumstance. But ‘reference’ (and the verb ‘refer’ as used by philosophers) is a technical term, not an ordinary word, hence we have no more intuitions about reference than we have about tensors or c-command. Second, if we look at how words such as ‘water’ are applied in ordinary circumstances, we find that speakers may call ‘water’ liquids that contain a smaller proportion of H2O than other liquids they do not call ‘water’ (e.g., tea): our use of ‘water’ does not appear to be governed by hypotheses about microstructure. According to Chomsky, it may well be that progress in the scientific study of the language faculty will allow us to understand in what respects one’s picture of the world is framed in terms of things selected and individuated by properties of the lexicon, or involves entities and relationships describable by the resources of the language faculty. Some semantic properties do appear to be integrated with other aspects of language. However, so-called “natural kind words” (which in fact have little to do with kinds in nature, Chomsky claims) may do little more than indicate “positions in belief systems”: studying them may be of some interest for “ethnoscience”, surely not for a science of language. Along similar lines, others have maintained that the genuine semantic properties of linguistic expressions should be regarded as part of syntax, and that they constrain but do not determine truth conditions (e.g., Pietroski 2005, 2010). Hence, the connection between meaning and truth conditions (and reference) may be significantly looser than assumed by many philosophers.

### 3.5 Contextualism, Minimalism, and the Lexicon

“Ordinary language” philosophers of the 1950s and 1960s regarded work in formal semantics as essentially irrelevant to issues of meaning in natural language. Following Austin and the later Wittgenstein, they identified meaning with use and were inclined to regard the different patterns of use of an individual expression as giving rise to different meanings of the word. Grice (1975) argued that such a proliferation of meanings could be avoided by distinguishing between what is asserted by a sentence (to be identified with its truth conditions) and what is communicated by it in a given context (or in every “normal” context). For example, consider the following exchange:

• A: Will Kim be hungry at 11am?
• B: Kim had breakfast.

Although B does not literally assert that Kim had breakfast on that particular day (see, however, Partee 1973), she does communicate as much. More precisely, A could infer the communicated content by noticing that the asserted sentence, taken literally (“Kim had breakfast at least once in her life”), would be less informative than required in the context: thus, it would violate one or more principles of conversation (“maxims”) whereas there is no reason to suppose that the speaker intended to opt out of conversational cooperation (see the entries on Paul Grice and pragmatics). If the interlocutor assumes that the speaker intended him to infer the communicated content—i.e., that Kim had breakfast that morning, so presumably she would not be hungry at 11—cooperation is preserved. Such non-asserted content, called ‘implicature’, need not be an addition to the overtly asserted content: e.g., in irony asserted content is negated rather than expanded by the implicature (think of a speaker uttering “Paul is a fine friend” to implicate that Paul has wickedly betrayed her).

Grice’s theory of conversation and implicatures was interpreted by many (including Grice himself) as a convincing way of accounting for the variety of contextually specific communicative contents while preserving the uniqueness of a sentence’s “literal” meaning, which was identified with truth conditions and regarded as determined by syntax and the conventional meanings of the occurring words, as in formal semantics. The only semantic role context was allowed to play was in determining the content of indexical words (such as ‘I’, ‘now’, ‘here’, etc.) and the effect of context-sensitive structures (such as tense) on a sentence’s truth conditions. However, at about the same time Travis (1975) and Searle (1979, 1980) pointed out that the semantic relevance of context might be much more pervasive, if not universal: intuitively, the same sentence type could have very different truth conditions in different contexts, though no indexical expression or structure appeared to be involved. Take the sentence “There is milk in the fridge”: in the context of morning breakfast it will be considered true if there is a carton of milk in the fridge and false if there is a patch of milk on a tray in the fridge, whereas in the context of cleaning up the kitchen truth conditions are reversed. Examples can be multiplied indefinitely, as indefinitely many factors can turn out to be relevant to the truth or falsity of a sentence as uttered in a particular context. Such variety cannot be plausibly reduced to traditional polysemy such as the polysemy of ‘property’ (meaning quality or real estate), nor can it be described in terms of Gricean implicatures: implicatures are supposed not to affect a sentence’s truth conditions, whereas here it is precisely the sentence’s truth conditions that are seen as varying with context.

The traditionalist could object by challenging the contextualist’s intuitions about truth conditions. “There is milk in the fridge”, she could argue, is true if and only if there is a certain amount (a few molecules will do) of a certain organic substance in the relevant fridge (for versions of this objection, see Cappelen & Lepore 2005). So the sentence is true both in the carton case and in the patch case; it would be false only if the fridge did not contain any amount of any kind of milk (whether cow milk or goat milk or elephant milk). The contextualist’s reply is that, in fact, neither the speaker nor the interpreter is aware of such alleged literal content (the point is challenged by Fodor 1983, Carston 2002); but “what is said” must be intuitively accessible to the conversational participants (Availability Principle, Recanati 1989). If truth conditions are associated with what is said—as the traditionalist would agree they are—then in many cases a sentence’s literal content, if there is such a thing, does not determine a complete, evaluable proposition. For a genuine proposition to arise, a sentence type’s literal content (as determined by syntax and conventional word meaning) must be enriched or otherwise modified by primary pragmatic processes based on the speakers’ background knowledge relative to each particular context of use of the sentence. Such processes differ from Gricean implicature-generating processes in that they come into play at the sub-propositional level; moreover, they are not limited to saturation of indexicals but may include the replacement of a constituent with another. These tenets define contextualism (Recanati 1993; Bezuidenhout 2002; Carston 2002; relevance theory (Sperber & Wilson 1986) is in some respects a precursor of such views). Contextualists take different stands on the existence and nature of the contribution of the semantic properties of words and sentence-types, though they all agree that it is insufficient to fix truth conditions (Stojanovic 2008).

Even if sentence types have no definite truth conditions, it does not follow that lexical types do not make definite or predictable contributions to the truth conditions of sentences (think of indexical words). It does follow, however, that conventional word meanings are not the final constituents of complete propositions (see Allot & Textor 2012). Does this imply that there are no such things as lexical meanings understood as features of a language? If so, how should we account for word acquisition and lexical competence in general? Recanati (2004) does not think that contextualism as such is committed to meaning eliminativism, the view on which words as types have no meaning; nevertheless, he regards it as defensible. Words could be said to have, rather than “meaning”, a semantic potential, defined as the collection of past uses of a word w on the basis of which similarities can be established between source situations (i.e., the circumstances in which a speaker has used w) and target situations (i.e., candidate occasions of application of w). It is natural to object that even admitting that long-term memory could encompass such an immense amount of information (think of the number of times ‘table’ or ‘woman’ are used by an average speaker in the course of her life), surely working memory could not review such information to make sense of new uses. On the other hand, if words were associated with “more abstract schemata corresponding to types of situations”, as Recanati suggests as a less radical alternative to meaning eliminativism, one wonders what the difference would be with respect to traditional accounts in terms of polysemy.

Other conceptions of “what is said” make more room for the semantic contribution of conventional word meanings. Bach (1994) agrees with contextualists that the linguistic meaning of words (plus syntax and after saturation) does not always determine complete, truth-evaluable propositions; however, he maintains that they do provide some minimal semantic information, a so-called ‘propositional radical’, that allows pragmatic processes to issue in one or more propositions. Bach identifies “what is said” with this minimal information. However, many have objected that minimal content is extremely hard to isolate (Recanati 2004; Stanley 2007). Suppose it is identified with the content that all the utterances of a sentence type share; unfortunately, no such content can be attributed to a sentence such as “Every bottle is in the fridge”, for there is no proposition that is stably asserted by every utterance of it (surely not the proposition that every bottle in the universe is in the fridge, which is never asserted). Stanley’s (2007) indexicalism rejects the notion of minimal proposition and any distinction between semantic content and communicated content: communicated content can be entirely captured by means of consciously accessible, linguistically controlled content (content that results from semantic value together with the provision of values to free variables in syntax, or semantic value together with the provision of arguments to functions from semantic types to propositions) together with general conversational norms. Accordingly, Stanley generalizes contextual saturation processes that are usually regarded as characteristic of indexicals, tense, and a few other structures; moreover, he requires that the relevant variables be linguistically encoded, either syntactically or lexically. It remains to be seen whether such solutions apply (in a non-ad hoc way) to all the examples of content modulation that have been presented in the literature.

Finally, minimalism (Borg 2004, 2012; Cappelen & Lepore 2005) is the view that appears (and intends) to be closest to the Frege-Montague tradition. The task of a semantic theory is said to be minimal in that it is supposed to account only for the literal meaning of sentences: context does not affect literal semantic content but “what the speaker says” as opposed to “what the sentence means” (Borg 2012). In this sense, semantics is not another name for the theory of meaning, because not all meaning-related properties are semantic properties (Borg 2004). Contrary to contextualism and Bach’s theory, minimalism holds that lexicon and syntax together determine complete truth-evaluable propositions. Indeed, this is definitional for lexical meaning: word meanings are the kind of things which, if one puts enough of them together in the right sort of way, then what one gets is propositional content (Borg 2012). Borg believes that, in order to be truth-evaluable, propositional contents must be “about the world”, and that this entails some form of semantic externalism. However, the identification of lexical meaning with reference makes it hard to account for semantic relations such as synonymy, analytic entailment or the difference between ambiguity and polysemy, and syntactically relevant properties: the difference between “John is easy to please” and “John is eager to please” cannot be explained by the fact that ‘easy’ means the property easy (see the entry on ambiguity). To account for semantically based syntactic properties, words may come with “instructions” which, unlike meaning postulates (which Borg rejects), are not constitutive of a word’s meaning, though awareness of them is part of a speaker’s competence. Once more, lexical semantic competence is divorced from grasp of word meaning. In conclusion, some information counts as lexical if it is either perceived as such in “firm, type-level lexical intuitions” or capable of affecting the word’s syntactic behavior. Borg concedes that even such an extended conception of lexical content will not capture, e.g., analytic entailments such as the relation between ‘bachelor’ and ‘unmarried’.

## 4. Linguistics

The emergence of modern linguistic theories of word meaning is customarily placed at the transition from historical-philological semantics (Section 2.2) to structuralist semantics.

### 4.1 Structuralist Semantics

The advances introduced by the structuralist conception of word meaning can be best appreciated by contrasting its tenets with those of historical-philological semantics. Let us recall the three most important differences (Lepschy 1970).

• Anti-psychologism. Structuralist semantics views language as a symbolic system whose internal dynamics can be analyzed apart from the psychology of its users. Just as the rules of chess can be expressed without mentioning the mental properties of chess players, so the semantic attributes of words can be investigated simply by examining their relations to other elements in the same lexicon.
• Anti-historicism. Since the primary subject matter of structuralist semantics is the role played by lexical expressions in structured linguistic systems, structuralist semantics privileges synchronic linguistic description. Diachronic accounts of the evolution of a word w presuppose an analysis of the relational properties statically exemplified by w at different stages of the lexical system it belongs to.
• Anti-localism. As the semantic properties of lexical expressions depend on the relations they entertain with other expressions in the same lexical system, word meanings cannot be studied in isolation. This is both an epistemological and a foundational claim, i.e., a claim about how matters related to word meaning should be addressed in the context of a semantic theory, and a claim about the dynamics whereby the elements of a system of signs acquire the meaning they have for their users.

The account of lexical phenomena popularized by structuralism gave rise to a variety of descriptive approaches to word meaning. We can group them in three categories (Lipka 1992; Murphy 2003; Geeraerts 2006).

• Lexical Field Theory. Introduced by Trier (1931), it argues that words should be studied by looking at their relations to other words in the same lexical field. A lexical field is a set of semantically related lexical items whose meanings are mutually interdependent and which together provide a given domain of reality with conceptual structure. Lexical field theory assumes that lexical fields are closed sets with no overlapping meanings or semantic gaps. Whenever a word undergoes a change in meaning (e.g., its range of application is extended or contracted), the whole arrangement of its lexical field is affected (Lehrer 1974).
• Componential Analysis. Developed in the second half of the 1950s by European and American linguists (e.g., Pottier, Coseriu, Bloomfield, Nida), this framework argues that word meaning can be described on the basis of a finite set of conceptual building blocks called semantic components or features. For example, ‘man’ can be analyzed as [+ male], [+ mature], ‘woman’ as [− male], [+ mature], ‘child’ as [+/− male], [− mature] (Leech 1974); a schematic illustration of this style of analysis follows this list.
• Relational Semantics. This approach, prominent in the work of linguists such as Lyons (1963), shares with lexical field theory the commitment to a mode of analysis that privileges the description of lexical relations, but departs from it in two important respects. First, it postulates no isomorphism between sets of related words and domains of reality, thereby eliminating non-linguistic predicates from the theoretical vocabulary that can be used in the description of lexical relations, and dropping the assumption that the organization of lexical fields has to reflect ontology. Second, instead of deriving statements about the meaning relations entertained by a lexical item (e.g., synonymy, hyponymy) from an independent account of its meaning, relational semantics sees word meanings as constituted by the set of semantic relations they participate in (Evens et al. 1980; Cruse 1986).
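
As a schematic illustration of the componential approach mentioned above (the feature inventory and lexical entries are invented for the example, not drawn from any published analysis), word meanings can be modeled as bundles of binary features and compared feature by feature:

```python
# Sketch of componential analysis: word meanings as bundles of binary
# features in the style of [+/- male], [+/- mature]. The feature names
# and lexical entries are illustrative assumptions, not a published inventory.
LEXICON = {
    "man":   {"male": True,  "mature": True},
    "woman": {"male": False, "mature": True},
    "boy":   {"male": True,  "mature": False},
    "girl":  {"male": False, "mature": False},
}

def contrast(word_a: str, word_b: str) -> dict:
    """Return the features on which two lexical entries differ."""
    a, b = LEXICON[word_a], LEXICON[word_b]
    return {f: (a[f], b[f]) for f in a if a[f] != b[f]}

print(contrast("man", "woman"))  # {'male': (True, False)}
print(contrast("man", "boy"))    # {'mature': (True, False)}
```

On this picture, semantic contrasts between words reduce to differences in feature values, which is precisely the assumption later frameworks in the decompositional tradition refine or contest.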

### 4.2 Generativist Semantics

The componential current of structuralism was the first to produce an important innovation in theories of word meaning, namely Katzian semantics (KS; Katz & Fodor 1963; Katz 1972, 1987). KS combined componential analysis with a mentalistic conception of word meaning and developed a method for the description of lexical phenomena in the context of a formal grammar. The psychological component of KS is twofold. First, word meanings are defined in terms of the combination of simpler conceptual components. Second, the subject of semantic theorizing is not identified with the “structure of the language” but, following Chomsky (1957, 1965), with the ability of the language user to interpret sentences. In KS, word meanings are structured entities whose representations are called semantic markers. A semantic marker is a tree with labeled nodes whose structure reproduces the structure of the represented meaning, and whose labels identify the word’s conceptual components. For example, Katz (1987) analyzes the sense of ‘chase’ by means of a marker whose components include, roughly, those of an activity, of following, and of an intention to catch.

Katz (1987) claimed that KS was superior to the kind of semantic analysis that could be provided via meaning postulates. For example, in KS the validation of conditionals such as $$\forall x\forall y (\textrm{chase}(x, y) \to \textrm{follow}(x,y))$$ could be reduced to a matter of inspection: one had simply to check whether the semantic marker of ‘follow’ was a subtree of the semantic marker of ‘chase’. Moreover, the method made it possible to incorporate syntagmatic relations among the phenomena to be considered in the representation of word meanings (in Katz’s markers, grammatical tags such as ‘NP’, ‘VP’ and ‘S’ are attached to the conceptual components). KS was favorably received by the Generative Semantics movement (Fodor 1977; Newmeyer 1980) and boosted an interest in the formal representation of word meaning that would dominate the linguistic scene for decades to come (Harris 1993). Nonetheless, it was eventually abandoned. First, it had no theory of how lexical expressions contributed to the truth conditions of sentences (Lewis 1972). Second, some features that could be easily represented with the standard notation of meaning postulates could not be expressed through semantic markers, such as the symmetry and the transitivity of predicates (e.g., $$\forall x\forall y (\textrm{sibling}(x, y) \to \textrm{sibling}(y, x))$$ or $$\forall x\forall y\forall z (\textrm{louder}(x, y) \mathbin{\&} \textrm{louder}(y, z) \to \textrm{louder}(x, z))$$; see Dowty 1979). Third, the arguments advanced by KS in support of its assumption that lexical meaning should be regarded as having an internal structure turned out to be vulnerable to objections from proponents of an atomistic view of word meaning (Fodor & Lepore 1992).
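
The subtree test Katz appeals to can be given a toy rendering. The marker contents below are simplified placeholders rather than Katz’s actual markers; only the mechanism, namely entailment as subtree inclusion, is at issue:

```python
# Toy sketch of Katz-style entailment checking: a semantic marker is a labeled
# tree, and 'chase' entails 'follow' if the marker of 'follow' occurs as a
# subtree of the marker of 'chase'. Marker contents are simplified placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Marker:
    label: str
    children: tuple = ()

FOLLOW = Marker("Activity", (Marker("Movement", (Marker("Following"),)),))
CHASE = Marker("Activity", (Marker("Movement", (Marker("Following"),)),
                            Marker("Purpose", (Marker("Catching"),))))

def covers(small: Marker, big: Marker) -> bool:
    """True if every node of `small` is matched, structure and all, within `big`."""
    return small.label == big.label and all(
        any(covers(c, d) for d in big.children) for c in small.children)

def is_subtree(small: Marker, big: Marker) -> bool:
    return covers(small, big) or any(is_subtree(small, c) for c in big.children)

print(is_subtree(FOLLOW, CHASE))  # True: chase(x, y) entails follow(x, y)
print(is_subtree(CHASE, FOLLOW))  # False: follow(x, y) does not entail chase(x, y)
```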

After KS, the landscape of linguistic theories of word meaning bifurcated. On one side, we have a group of theories advancing the decompositional agenda established by Katz. On the other, we have a group of theories aligning with the relational approach originated by lexical field theory and relational semantics. Following Geeraerts (2010), we shall briefly characterize the following ones.

| Decompositional Frameworks | Relational Frameworks |
| --- | --- |
| Natural Semantic Metalanguage | Symbolic Networks |
| Conceptual Semantics | Statistical Analysis |
| Two-Level Semantics | |
| Generative Lexicon Theory | |

### 4.3 Decompositional Approaches

The basic idea of the Natural Semantic Metalanguage approach (NSM; Wierzbicka 1972, 1996; Goddard & Wierzbicka 2002) is that word meaning should be described in terms of a small core of elementary conceptual particles, known as semantic primes. According to NSM, primes are primitive, innate, unanalyzable semantic constituents that are lexicalized in all natural languages (in the form of a word, a morpheme, a phraseme) and whose appropriate combination should be sufficient to delineate the semantic properties of any lexical expression in any natural language. Wierzbicka (1996) proposed a catalogue of about 60 primes, to be exploited to spell out the internal structure of word meanings and grammatical constructions using so-called reductive paraphrases: for example, ‘top’ is analyzed as a part of something; this part is above all the other parts of this something. NSM has produced interesting applications in comparative linguistics (Peeters 2006), language teaching (Goddard & Wierzbicka 2007), and lexical typology (Goddard 2012). However, it has been criticized on various grounds. First, it has been argued that the method followed by NSM in the identification of lexical semantic universals is invalid (e.g., Matthewson 2003), and that reductive paraphrases are too vague to be considered full specifications of lexical meanings, since they fail to account for fine-grained differences among words whose semantic attributes are closely related. For example, the definition provided by Wierzbicka for ‘sad’ (i.e., x feels something; sometimes a person thinks something like this: something bad happened; if I didn’t know that it happened I would say: I don’t want it to happen; I don’t say this now because I know: I can’t do anything; because of this, this person feels something bad; x feels something like this) seems to apply equally well to ‘unhappy’, ‘distressed’, ‘frustrated’, ‘upset’, and ‘annoyed’ (Aitchison 2012). In addition, it has been observed that some items in the lists of primes elaborated by NSM theorists fail to comply with the requirement of universality and are not explicitly lexicalized in all known languages (Bohnemeyer 2003; Von Fintel & Matthewson 2008). See Goddard (1998) for some replies and Riemer (2006) for further objections.

For NSM, lexical meaning is a purely linguistic entity that bears no constitutive relation to the domain of world knowledge. Conceptual Semantics (CSEM; Jackendoff 1983, 1990, 2002) proposes a more open-ended approach. According to CSEM, formal semantic representations do not contain all the information on the basis of which lexically competent subjects use and interpret words. Rather, the meaning of lexical expressions is determined thanks to the interaction between the formal representations that constitute the primary level of word knowledge and conceptual structure, which is the domain of non-linguistic modes of cognition such as perceptual knowledge and motor schemas. This interface is reflected in the way CSEM proposes to model word meanings. Below, the semantic representation of ‘drink’ according to Jackendoff.

drink:
V
$$\underline{\hspace{2em}}\ \langle\textrm{NP}_j\rangle$$
$$[_{\textrm{Event}}\ \textrm{CAUSE}\ ([_{\textrm{Thing}}\ \underline{\hspace{1.5em}}\,]_i, [_{\textrm{Event}}\ \textrm{GO}([_{\textrm{Thing}}\ \textrm{LIQUID}]_j, [_{\textrm{Path}}\ \textrm{TO}\ ([_{\textrm{Place}}\ \textrm{IN}\ ([_{\textrm{Thing}}\ \textrm{MOUTH OF}\ ([_{\textrm{Thing}}\ \underline{\hspace{1.5em}}\,]_i)])])])])]$$

Syntactic tags represent the way the word interacts with the grammatical environment where it is used, while the items in subscript come from a set of perceptually grounded primitives (e.g., event, state, thing, path, place, property, amount) which are assumed to be innate, cross-modal and universal categories of human cognition. CSEM elaborates in detail on the interface between syntax and lexical semantics, but some of its claims about the interplay between formal lexical representations and non-linguistic information are more open to question. To begin with, psychologists have observed that speakers tend to use causative predicates and the paraphrases expressing their decompositional structure in different and partially non-interchangeable ways (e.g., Wolff 2003). Furthermore, CSEM provides no well-founded method for the identification of pre-conceptual primitives (Pulman 2005), and the claim that the bits of information to be inserted in the definition of word meaning should be ultimately perception-related looks disputable. For example, how can we account for the difference in meaning between ‘jog’ and ‘run’ without pointing to information about the social characteristics of jogging, which imply a certain leisure setting, the intention to contribute to physical wellbeing, and so on? See Taylor (1996), Deane (1996).

CSEM draws a principled division between word knowledge and world knowledge, but it has little to say about how the two interact dynamically in language use. The Two-Level Semantics (TLS) of Bierwisch (1983a,b) and Lang (Bierwisch & Lang 1989; Lang 1993) aims to provide precisely such a dynamic account. TLS views lexical meaning as the output of the interaction of two systems: semantic form (SF) and conceptual structure (CS). SF is a formalized representation of the basic features of a lexical item. It contains grammatical information that specifies how a word can contribute to the formation of syntactic structures, plus a set of variables and parameters whose value is determined through CS. By contrast, CS consists of language-independent systems of knowledge that mediate between language and the world as construed by the human mind (Lang & Maienborn 2011). According to TLS, polysemous words express variable meanings because their stable SF interacts flexibly with CS. Consider for example the word ‘university’, which can be read as referring either to an institution (as in “the university selected John’s application”) or to a building (as in “the university is located on the North side of the river”). Skipping some technical details, TLS construes the dynamics governing the selection of these readings as follows.

1. The word ‘university’ is assigned to the category $$\lambda x [\textrm{purpose} [x w]]$$ (i.e., ‘university’ belongs to the category of words denoting objects primarily characterized by their purpose).
2. Based on a general understanding of the defining purposes of universities, the SF of ‘university’ is specified as $$\lambda x [\textrm{purpose} [x w] \mathbin{\&} \textit{advanced study and teaching} [w]]$$.
3. The alternative readings obtain as a function of the two ways CS allows $$\lambda x [\textrm{purpose} [x w]]$$ to be specified, i.e., $$\lambda x [\textrm{institution} [x] \mathbin{\&} \textrm{purpose} [x w]]$$ or $$\lambda x [\textrm{building} [x] \mathbin{\&} \textrm{purpose} [x w]]$$.
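
A rough sketch of this two-level mechanism, with invented predicate names and a crude stand-in for conceptual structure, might run as follows:

```python
# Rough sketch of TLS-style reading selection for 'university': the stable
# semantic form (SF) leaves a sortal slot open, and conceptual structure (CS)
# fills it as INSTITUTION or BUILDING depending on what the context requires.
# Predicate names and the context-to-sort mapping are illustrative assumptions.
SF_UNIVERSITY = {"purpose": "advanced study and teaching", "sort": None}

# Crude stand-in for CS: which sort a selecting predicate asks for.
SELECTIONAL_DEMANDS = {
    "selected John's application": "INSTITUTION",
    "is located on the North side of the river": "BUILDING",
}

def specify(sf: dict, context_predicate: str) -> dict:
    """Return a fully specified reading by filling the open sortal slot."""
    reading = dict(sf)
    reading["sort"] = SELECTIONAL_DEMANDS.get(context_predicate, "UNDERSPECIFIED")
    return reading

print(specify(SF_UNIVERSITY, "selected John's application"))
print(specify(SF_UNIVERSITY, "is located on the North side of the river"))
```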

TLS aligns with Jackendoff’s and Wierzbicka’s commitment to a descriptive paradigm that takes into account the plasticity of lexical meaning while anchoring it to a stable semantic template. But even if explaining the contextual flexibility of word uses in terms of access to non-linguistic information were as unavoidable a move as TLS suggests, there may be reasons to doubt that the approach privileged by TLS is the best to provide a detailed account of such dynamics. A first problem has to do, once again, with definitional accuracy: defining the SF of ‘university’ as $$\lambda x [\textrm{purpose} [x w] \mathbin{\&} \textit{advanced study and teaching} [w]]$$ seems inadequate to reflect the subtle differences in meaning among ‘university’ and related terms designating institutions for higher education, such as ‘college’ or ‘academy’. Furthermore, the apparatus of TLS excludes from CS bits of encyclopedic knowledge that would be difficult to represent via lambda expressions, and yet are indispensable to select among the alternative meanings of a word (Taylor 1994, 1995). See also Wunderlich (1991, 1993).

Generative Lexicon Theory (GLT; Pustejovsky 1995) developed out of a goal to provide a computational semantics for the way words modulate their meaning in language use, and proposed to model the contextual flexibility of lexical meaning as the output of formal operations defined over a generative lexicon. According to GLT, the computational resources available to a lexical item w consist of the following four levels.

• A lexical typing structure, giving an explicit type for w positioned within a type system for the language;
• An argument structure, representing the number and nature of the arguments supported by w;
• An event structure, defining the event type denoted by w (e.g., state, process, transition);
• A qualia structure, specifying the predicative force of w.

In particular, qualia structure captures how humans understand objects and relations in the world and provides a minimal explanation for the behavior of lexical items based on some properties of their referents (Pustejovsky 1998). GLT distinguishes four types of qualia:

• constitutive: the relation between an object x and its constituent parts;
• formal: the basic ontological category of x;
• telic: the purpose and the function of x;
• agentive: the factors involved in the origin of x.

For example, the qualia structure of the noun ‘sandwich’ will contain information about the composition of sandwiches, their typical role in the activity of eating, and their nature as physical artifacts. If eat(P, g, x) denotes a process, P, involving an individual g and an object x, then the qualia structure of ‘sandwich’ is as follows.

sandwich(x)
formal = physobj(x)
telic = eat(P, g, x)
agentive = artifact(x)

Qualia structure is the primary explanatory device by which GLT accounts for polysemy: the sentence “Mary finished the sandwich” receives the default interpretation “Mary finished eating the sandwich” because the argument structure of ‘finish’ requires an action as direct object, and the qualia structure of ‘sandwich’ allows the generation of the appropriate sense via type coercion (Pustejovsky 2006). GLT is an ongoing research program (Pustejovsky et al. 2012) that has led to significant applications in computational linguistics (e.g., Pustejovsky & Jezek 2008; Pustejovsky & Rumshisky 2008). But like the theories mentioned so far, it has been subject to criticisms. A first objection is that the decompositional assumptions underlying GLT are unwarranted and should be replaced by an atomist view of word meaning (Fodor & Lepore 1998; see Pustejovsky 1998 for a reply). Second, many have pointed out that while GLT reduces polysemy to a formal mechanism operating on information provided by the sentential context, contextual variations in lexical meaning often depend on non-linguistic factors (e.g., Lascarides & Copestake 1998; Asher 2011) and can conflict with the predictions offered by GLT (Blutner 2002). Third, it has been argued that qualia structure sometimes overgenerates or undergenerates interpretations (e.g., Jayez 2001), and is included in lexical representations by drawing an arbitrary dividing line between linguistic and non-linguistic information (Asher & Lascarides 1995).
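
As an illustration of how the telic quale can drive this kind of coercion (the representation and helper names below are illustrative assumptions, not Pustejovsky’s notation):

```python
# Schematic sketch of GLT-style type coercion for "Mary finished the sandwich":
# 'finish' wants an event as its complement; when it gets an object instead,
# the object's telic quale (here: eating) supplies the missing event.
SANDWICH = {
    "formal": "physobj",               # basic ontological category
    "telic": "eat",                    # purpose/function of the object
    "agentive": "artifact",            # how the object comes into being
    "constitutive": ["bread", "filling"],  # invented illustration of composition
}

def coerce_to_event(noun: str, qualia: dict) -> str:
    """Turn an object-denoting complement into an event via its telic quale."""
    return f"{qualia['telic']}({noun})"

def interpret_finish(subject: str, complement: str, qualia: dict) -> str:
    # 'finish' selects an event; an object complement triggers coercion.
    event = coerce_to_event(complement, qualia)
    return f"finish({subject}, {event})"

print(interpret_finish("Mary", "the sandwich", SANDWICH))
# -> finish(Mary, eat(the sandwich))
```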

### 4.4 Relational Approaches

To conclude this section, we shall mention some contemporary approaches to word meaning that develop the relational component of the structuralist paradigm. We can group them into two categories. On the one hand, we have symbolic approaches, whose goal is to build formalized models of lexical knowledge in which the lexicon is seen as a structured system of entries interconnected by sense relations such as synonymy, antonymy, and meronymy. On the other, we have statistical approaches, whose primary aim is to investigate the patterns of co-occurrence among word forms in linguistic corpora.

The chief example of symbolic approaches is Collins and Quillian’s (1969) hierarchical network model, in which words are represented as entries in a network of nodes comprising a set of conceptual features defining the conventional meaning of the word in question, and connected to other nodes in the network through semantic relations (more in Lehman 1992). Subsequent developments of the hierarchical network model include the Semantic Feature Model (Smith, Shoben & Rips 1974), the Spreading Activation Model (Collins & Loftus 1975; Bock & Levelt 1994), the WordNet database (Fellbaum 1998), as well as the connectionist models of Seidenberg & McClelland (1989), Hinton & Shallice (1991), and Plaut & Shallice (1993) (see the entry on connectionism).
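
A toy version of such a hierarchical network (the lexical fragment is invented for illustration, not taken from the 1969 model) makes the ideas of nodes, IS-A links, and feature inheritance concrete:

```python
# Toy Collins & Quillian-style hierarchical network: each entry stores its own
# features plus an IS-A link, and features are inherited up the hierarchy.
# The fragment below is an invented illustration, not the original network.
NETWORK = {
    "animal": {"isa": None,     "features": {"breathes", "eats"}},
    "bird":   {"isa": "animal", "features": {"has wings", "can fly"}},
    "canary": {"isa": "bird",   "features": {"is yellow", "sings"}},
}

def all_features(word: str) -> set:
    """Collect a word's features, inheriting from its superordinate nodes."""
    feats, node = set(), word
    while node is not None:
        feats |= NETWORK[node]["features"]
        node = NETWORK[node]["isa"]
    return feats

print(all_features("canary"))
# e.g. {'sings', 'is yellow', 'has wings', 'can fly', 'breathes', 'eats'}
```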

Statistical analysis, by contrast, gathers evidence about the distribution of words in corpora and uses this information to account for their meaning. Basically, collecting data about the patterns of preferred co-occurrence among lexical items helps identify their semantic properties and differentiate between their different senses (for overviews, see Atkins & Zampolli 1994; Manning & Schütze 1999; Stubbs 2002; Sinclair 2004). It is important to mention that although network models and statistical analysis share an interest in developing computational tools for language processing, they differ in an important respect. While symbolic networks are models of the architecture of the lexicon that seek to be cognitively adequate and to fit psycholinguistic evidence, statistical analysis is a practical methodology for the analysis of corpora which is not necessarily interested in providing a psychological account of the information that a subject must associate with words in order to master a lexicon (see the entry on computational linguistics).
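
A minimal illustration of the statistical strategy, using a toy corpus, raw co-occurrence counts within a fixed window, and cosine similarity rather than any particular published model:

```python
# Minimal distributional sketch: represent each word by its co-occurrence
# counts within a fixed window and compare words by cosine similarity.
# The toy corpus and window size are arbitrary illustrative choices.
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the cat drinks milk . the dog drinks water . "
          "the cat chases the dog . the dog chases the cat").split()

WINDOW = 2
vectors = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            vectors[w][corpus[j]] += 1

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(vectors["cat"], vectors["dog"]))     # relatively high: similar contexts
print(cosine(vectors["cat"], vectors["drinks"]))  # lower: different distributional profile
```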

## 5. Cognitive Science

As we have seen, most theories of lexical meaning in linguistics attempt to trace a plausible dividing line between word knowledge and world knowledge, and the various ways they tackle this task display some recurrent features. They focus on the structural attributes of lexical meaning rather than on the dynamics of word use, they maintain that words encode distinctively linguistic information about their alternative senses, they see the study of word meaning as an enterprise whose epistemological niche is linguistic theory, and they assume that the lexicon constitutes a system whose properties can be illuminated with a fairly economical appeal to the landscape of factual knowledge and non-linguistic cognition. In this section, we survey a group of theories that adopt a different stance on word meaning. The focus is once again psychological, which means that the overall goal is to provide a cognitively realistic account of the representational repertoire underlying our ability to use words. But unlike the approaches mentioned in Section 4, these theories tend to encourage a view on which the distinction between lexical semantics and pragmatics is highly unstable (or impossible to draw), where word knowledge is richly interfaced with general intelligence, and where lexical activity is not sustained by an autonomous lexicon that operates entirely apart from other cognitive systems (Evans 2010). The first part of this section will examine some cognitive linguistic theories of word meaning, whose primary aim is to shed light on the complexities of lexical phenomena through a characterization of the processes interfacing word knowledge with non-linguistic cognition. The second part will go into some psycholinguistic and neurolinguistic approaches to word meaning, which attempt to identify the representational format and the neural correlates of word knowledge through the experimental study of lexical activity.

### 5.1 Cognitive Linguistics

At the beginning of the 1970s, Eleanor Rosch put forth a new theory of the mental representation of categories. Concepts such as furniture or bird, she claimed, are not represented just as sets of criterial features with clear-cut boundaries, so that an item can be conceived as falling or not falling under the concept based on whether or not it meets some relevant criteria. Rather, items within categories can be considered differentially representative of the meaning of category-terms (Rosch 1975; Rosch & Mervis 1975; Mervis & Rosch 1981). Several experiments seemed to show that the application of concepts was no simple yes-or-no business: some items (the “good examples”) are more easily identified as falling under a concept than others (the “poor examples”). An automobile is perceived as a better example of vehicle than a rowboat, and much better than an elevator; a carrot is more readily identified as falling under the concept vegetable than a pumpkin. If lexical concepts were represented merely by criteria, such differences would be inexplicable when occurring between items that meet the criteria equally well. It is thus plausible to assume that the mental representations of category words are somehow closer to good examples than to bad examples of the category: a robin is perceived as a more “birdish” bird than an ostrich or, as people would say, closer to the prototype of a bird or to the prototypical bird (see the entry on concepts).
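
One crude way to picture graded typicality (with invented feature sets, not Rosch’s experimental materials) is as the degree of overlap between an exemplar’s features and those of a prototype:

```python
# Toy sketch of graded typicality: category membership is not yes-or-no but a
# matter of overlap with a prototype's features. The feature sets are invented
# illustrations, not Rosch's experimental materials.
PROTOTYPE_BIRD = {"flies", "sings", "small", "builds nests", "lays eggs"}

EXEMPLARS = {
    "robin":   {"flies", "sings", "small", "builds nests", "lays eggs"},
    "ostrich": {"lays eggs", "builds nests"},
}

def typicality(features: set, prototype: set = PROTOTYPE_BIRD) -> float:
    """Share of prototype features an exemplar possesses (0 = none, 1 = all)."""
    return len(features & prototype) / len(prototype)

for name, feats in EXEMPLARS.items():
    print(name, typicality(feats))
# robin 1.0, ostrich 0.4
```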

Although nothing in Rosch’s experiments licensed the conclusion that prototypes should be reified and treated as mental entities (what her experiments did support was merely that a theory of the mental representation of categories should be consistent with the existence of prototype effects), prototypes were soon identified with feature bundles in the mind and led to the formulation of a prototype-based approach to word meaning (Murphy 2002). First, prototypes were used for the development of the Radial Network Theory of Brugman (1988 [1981]; Brugman & Lakoff 1988), who proposed to model the sense network of polysemous words by introducing in the architecture of lexical items the center-periphery relation envisaged by Rosch. According to Brugman, the meaning potential of a polysemous word can be modeled as a radial complex where a dominant sense is related to less typical senses by means of semantic relations such as metaphor and metonymy (e.g., the sense network of ‘fruit’ has product of plant growth at its center and a more abstract outcome at its periphery, and the two are connected by a metaphorical relation). Shortly afterwards, the Conceptual Metaphor Theory of Lakoff & Johnson (1980; Lakoff 1987) and the Mental Spaces Approach of Fauconnier (1994; Fauconnier & Turner 1998) combined the assumption that words encode radial categories with the claim that word uses are governed by mechanisms of figurative mapping that integrate lexical categories across different conceptual domains (e.g., “love is war”, “life is a journey”). These associations are creative, perceptually grounded, systematic, cross-culturally uniform, and emerge from pre-linguistic patterns of conceptual activity which correlate with core elements of human embodied experience (see the entries on metaphor and embodied cognition). More in Kövecses (2002), Gibbs (2008), and Dancygier & Sweetser (2014).
