Anti-Lexicalism, and the Status of the Unarticulated

This paper explores the prospect that grammatical expressions are propositionally whole and psychologically plausible, placing the explanatory burden on syntax rather than on pragmatic processes, the latter crucially bearing the feature of optionality. When supposedly unarticulated constituents are added, the resulting expressions are propositionally distinct, and not simply more specific. The ad hoc nature of a number of pragmatic processes carries with it the additional problem of effectively acting as a barrier to implementing language in the brain. The advantages of an anti-lexicalist biolinguistic methodology are discussed, and a bi-phasal model of linguistic interpretation, Phasal Eliminativism, is proposed, carved by syntactic phases and (optionally) enriched by a restricted number of pragmatic processes. In addition, it is shown that the syntactic operation of labeling (departing from standard Merge-centric evolutionary hypotheses) is responsible for a range of semantic and pragmatic phenomena, rendering core aspects of syntax and lexical pragmatics commensurable.


Introduction
The distinction between the uttered and the meant dates back at least to the 4th-century rhetoricians Servius and Donatus (Horn 2004: 3). More recently, divisions between linguistic form and semantic content have been proposed from a number of perspectives, invoking unarticulated constituents and 'completion processes' such as free enrichment to derive and fully specify the supposedly underdetermined conceptual representations delivered by syntax (Carston 2007, 2009, 2012; Fodor 2008). In this paper, the status of unarticulated constituents in pragmatics is claimed to have a much more limited role in linguistic interpretation than standardly assumed, and what the computational system delivers is shown to be propositionally sufficient and psychologically plausible enough to eliminate certain pragmatic operations from the Conceptual-Intentional (CI) system (Chomsky 1995, 2014). The problem of biological adequacy is also addressed in relation to the syntax-pragmatics division of labour, and new directions are suggested for how the study of the computational system and pragmatic competence can embrace the plurality characteristic of the life sciences.

Lexicocentrism and the Structure of CI
This first section presents a basic overview of some standard cases discussed in the pragmatics literature, setting up the main focus of the paper in section 3, which explores how syntax delivers propositionally whole structures which pragmatics can only enrich, and not alter. It is shown that in cases of complex polysemy, nominal reference, and even Case assignment, meaning is determined grammatically, not pragmatically.

Polysemy
To begin, I will assume that questions of meaning and reference should be explored at the grammatical, and not purely lexical, level. This 'internalist' (Hinzen 2006, 2007) perspective can be explored through classic pragmatic thought experiments. For instance, if we take Travis's (1997: 90) sentence "The leaf is green" when spoken by either a child or a botanist, the former would be accessing a representation from the INTUITIVE BIOLOGY core knowledge system (CKS for short; Spelke 2010), while the latter would be using the "science forming faculty" (Chomsky 1975).1 We could notate these meanings as LEAFi and LEAFs for intuitive concept and (natural) science concept. The meaning of leaf would still be atomic, but speakers could employ the respective representations based on appropriateness (Ludlow 2014: 132). Is there a need, then, to appeal to pragmatic processes in this case? I think not. As with complex nominals like book, city, person, appointment, or construction, the word leaf can bear the above multiple senses not because it is an indexical (shifting its meaning based on context) or because its meanings are coerced or because of pragmatic processes, but simply because it is polysemous.2 As Frisson (2009) and Vicente (2015) demonstrate, variations in the truth conditions of a number of standardly explored utterances like "The leaf is green" or "The book was brilliant but weighed a ton" can be systematically explained if we assume the existence of semantic operations forming complex types like book(INFORMATION•PHYSICAL_OBJECT) and school(BUILDING•INSTITUTION).3

A similar situation arises in blue ink due to the polysemous status of the noun: when we apply the concept BLUE to INK we can do so either by modifying what Pustejovsky (1995) terms its 'formal' aspect (which defines the ink as a liquid) or its 'telic' aspect (which defines the ink as a device for writing). This yields the apparent flexibility in describing ink as either appearing blue or being able to write in blue.

1 Perhaps a clearer example would be a generic statement like "A leaf is green", since Travis's case refers to a token leaf (at least, on the most natural reading). As Vicente (2015: 54) points out, "[p]olysemy seems to be a relatively neglected phenomenon within philosophy of language as well as in many quarters in linguistic semantics. Part of this neglect is due to the fact that philosophical and a good part of linguistics semantics have been focused on sentential, truth-conditional, meaning, instead of on lexical meaning for a long time. But another part has to do with […] the idea that, barring homonymy, each word-type has a unique simple denotation".
The topic of co-predication is particularly relevant here. This is the phenomenon of two apparently incompatible properties being attributed to a single object (Murphy 2015b). In (1a), informational and physical predicates apply to book, while in (1b) the bill is simultaneously an abstract monetary amount and a slip of paper:

(1) a. The book was brilliant but weighed a ton.
b. He paid the bill and threw it away.
Philosophers of language and pragmaticists have typically sidelined the importance and intricacy of complex dotted types (Pustejovsky 1995) of the kind found in co-predication (see Carston 2002: 362f., 374f., 2012: 616). Following proposals from Gotham (2012, 2015) and Bosch (2007), treating the meaning of nominals like book and city as reflections of conceptual, and not lexical semantic, complexity allows us to deal with apparent paradoxes which force the multiplication of semantic senses. Consequently, novel simultaneously possesses abstract and concrete conceptual features, a form of productivity which allows it to extend its meaning from a material text to a piece of electronic information on a memory stick, and beyond, licensing different CI representations: (i) ∀x (BOOK1(x) → PRINTED_TEXT(x)…), (ii) ∀x (BOOK2(x) → INFORMATION(x)…).4 It follows that our knowledge of novels being prose "is not lexical knowledge, but literary theory" (Bosch 2014: 45, emphasis his). The NOVEL concepts seen in co-predication are what Bosch terms "contextual"/unsaturated concepts which are enriched by subcategorization and predication information, along with discourse data. This allows us to relocate polysemy, including the verbal-nominal cases of cut and stop discussed by Searle (1980) and Rayo (2013), to the CI-system.5 These results also depart from Fodor's (2008) view that words express Fregean 'senses', or modes of presentation (MOPs), being representations of some mind-external entity 'out there', invoking an indefensible mind-world dualism (see Collins 2015 for related discussion).

4 Co-predication appears to have a number of grammatical and semantic constraints which have not been noticed in the literature (including Gotham 2015 and Chomsky 2000), which remain to be accounted for in model-theoretic or syntactic terms, and which suggest that the qualia and argument structure relations of words like book and newspaper are much more intricate than Pustejovsky originally assumed. Dot-type nominals like translation can refer to a process or a physical text, but while (i) is well-formed, (ii), which reverses the sense order, is not. While the physical and informational senses of newspaper can appear together, as in (iii), adding the institutional sense leads to a licensing failure in (iv), but not when placed in a modificational structure, as in (v), and (vi) even points to the existence of three-way co-predication (see Murphy 2015b for discussion):
(i) The translation that lies on the table was difficult.
(ii) ?The translation was difficult and lies on the table.
(iii) John held the reactionary and large newspaper.
(iv) *That newspaper is owned by a trust and is covered with coffee.
(v) The most provocative newspaper of the year has been sued by the government.
(vi) The well-written newspaper that I held this morning has been sued by the government.
There have been attempts, however, to explain co-predication on pragmatic grounds, which I think can be shown to be empirically inadequate. For instance, Brandtner (2009) proposes that the co-predication yielded by German deverbal -ung nominalization cannot be addressed by compositional operations alone, but additionally requires pragmatic processes. In the case of Übersetzung 'translation', the selectional restrictions of two conflicting interpretations simultaneously apply to one token of the nominal, with predicates like 'tedious' and 'easy' able to refer both to the act of translation (an event, EV) and to its result (RE), and with more difficult translations assumed to have a higher 'pay-off':

(2) German

If co-predication with event and result readings is only possible if there is a salient relation between the two, then co-predication, Brandtner argues, cannot be reduced to semantic principles. He points to 'general ontological constraints' on type combinations, such that we cannot conceive of a firm's management as being simultaneously an event (an act of managing) and an agent (a manager), even though management can independently be either an individual (or individuals) or an event (where AG denotes AGENT):

(4) German
??Die Leitung der Anwaltskanzlei ist [schwierig]EV und hat [angerufen]AG.
the management the.GEN law-firm is difficult and has called
'The management of the law firm is difficult and has called.'

5 […] relatively small compared to the extremely vast number of situations we may encounter and ideas we can entertain about them" (Bouchard 2013: 49).

6
Contrary to Russell, Kripke, and Putnam's semantic externalism, Franz Brentano thought that the default mode of human cognition centered on thoughts about non-existent things, and that thoughts about existent things are 'secondary'. Co-predication and related studies in semantic internalism (Pietroski 2012, forthcoming) appear to support this (admittedly vague) position that we initially think about non-existent things as if they existed; books and cities are thought about 'as if' they were existent entities, but in fact are not.
With derived nominals, event and object readings can be licensed if a causal or salient reading obtains, as in (2), with the difficult translation paying off with higher sales. But in 'The easy translation sold million-fold', Brandtner claims that since expectations (of easy translations selling poorly) have not been met, the unexpected reading must be licensed by local discourse markers, as in the case in which an easy translation is made known to nevertheless sell well:

(5) German
Die einfache Übersetzung verkaufte sich dennoch millionenfach.
the easy translation sold itself still million-fold
'The easy translation still sold million-fold.'

But we do not need to say that pragmatic processes 'save' the co-predication interpretation from crashing, since 'The easy translation sold million-fold' is semantically unexpected but still able to constitute a complex type. Co-predication is licensed in the same way it is in unexpected cases like 'The brilliant newspaper I held this morning has been sued'. Even if Brandtner's pragmatics-based theory were accurate, we would only need to invoke world knowledge to yield the supposedly poor judgment in (3), and the processes of pragmatics and dot-type generation need not interact or influence each other.7
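The behaviour of complex dot types in co-predication can be given a rough computational rendering. The following Python sketch is purely illustrative: the representation of nominals as sense tuples and the licensing conditions are my own expository simplification, not a claim about any existing formalism such as Pustejovsky's or Gotham's.

```python
# Illustrative sketch: a complex (dot-type) nominal carries multiple senses,
# and a predicate applies felicitously if it selects one of those senses.
# Sense labels and licensing conditions are simplifications for exposition.

BOOK = ("INFORMATION", "PHYSICAL_OBJECT")   # book(INFORMATION . PHYSICAL_OBJECT)
SCHOOL = ("BUILDING", "INSTITUTION")        # school(BUILDING . INSTITUTION)

def selects(predicate_sense, nominal_senses):
    """A predicate is licensed if the nominal supplies a matching sense."""
    return predicate_sense in nominal_senses

def copredication_ok(pred_senses, nominal_senses):
    """Co-predication: every conjoined predicate must find some sense of
    the single nominal token (cf. 'brilliant but weighed a ton')."""
    return all(selects(p, nominal_senses) for p in pred_senses)

# 'The book was brilliant but weighed a ton': both senses are available.
assert copredication_ok(("INFORMATION", "PHYSICAL_OBJECT"), BOOK)
# A predicate demanding an AGENT sense fails on book, as with the
# management example: the needed sense is simply not in the complex type.
assert not copredication_ok(("INFORMATION", "AGENT"), BOOK)
```

On this toy view, the ill-formedness of Brandtner's management case falls out of the sense inventory itself, with no pragmatic rescue operation required.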

The Philosophy of Case
While the word-world relation is gradually being severed through the work of such semantic internalism, or 'I-Semantics' (Pietroski 2005, Hinzen 2007), the word-concept relation, used to defend the existence of pragmatic unarticulated constituents, remains strong and well accepted, for dubious reasons. Nouns, for instance, "need not denote objects either when they are used as predicates or parts of predicates" (Hinzen & Sheehan 2013: 73). For instance, 'This is the building' is referential when asserted, but when functioning as an argument ('believed [this is the building]') it is a predicate of a mental event. It may be possible to go so far as to argue that propositional forms of reference are not 'semantic' or 'pragmatic' but are rather grammar-dependent, relying on relations typically designated as structural cases (nominative and accusative). For instance, by showing that such Case cannot be reduced to thematic structure, Person, Tense, or Agreement, Hinzen (2014) argues that Case can receive a rationalization through its role of marking cross-phasal word movement, rather than simply reflecting formal licensing constraints on nominal arguments. Case has also been assigned no philosophical significance (troubling no theorists of denotation or reference), and is seen as a peculiar, even quirky 'syntactic' feature of grammatical structure. Since language is often thought of as merely a mode of expressing thought, and not of organizing it, Case has unsurprisingly been sidelined. But if it can be shown that Case licenses referential arguments and that no other grammatical device can do this, Case would be given a meaningful role in linguistic cognition, sidelining the need for extra-syntactic pragmatic accounts.8

As a way of approaching this possibility, we can begin by noting that lexically organized meaning is distinct from syntactically configured meaning: despite its internal featural complexity, lacking any subjects, predicates, modifiers, definiteness, or assertions, an individual word like lion can never refer to a particular lion, some lions, lion-meat, lion-like characteristics, or lions more generally. Since lion is a lexical item and the lion is not, and only the latter can refer to a lion, reference consequently lies on the side of grammatical organization. Since the carries no substantive lexical content, it also cannot be maintained that reference arises from a somewhat more complex lexical level; even the lion will only become referential when occurring in a suitable grammatical structure. The lion ran is referential, for instance, but I wish to be the lion, in the case of a stage production, is not. In addition, referentiality can arise from other structures like Gold is yellow, in which a common noun becomes referential, possibly through N-to-D movement (see Longobardi 2005; Murphy 2014a).

7 The complex polysemy seen in co-predication also seems to explain why Chomsky's (2000) London can be demolished and rebuilt elsewhere, and why Pietroski's (2005) France can simultaneously be both hexagonal and a republic. Additionally, I think much of the 'lexical underdeterminacy' discussed in Ludlow (2014) can be accounted for when explored through accounts of polysemy.
Further, the KP/nominal 'phase' (Chomsky 2008, Gallego 2012) appears to be mapped to objects, the vP/verbal phase to events, and the CP/clausal phase to propositions, with a containment relation existing from the highest to the lowest phase (hence events necessarily contain objects):

(6) Phasal Hierarchy

These formal distinctions (objects, events, propositions) co-vary with grammar, not with other systems like beliefs or intentions. There are no clauses that are object-referential in the way proper names are; verb phrases and non-finite clauses can denote events but not full propositions with truth-values, and nominals cannot do so either. Referential objects are consequently ordered in a hierarchical fashion, in part-whole terms (events include states, which in turn include objects and their substances), with a core function of grammar appearing to be the regulation of reference.9

A notion of referential strength also seems to mirror this hierarchy. The KP [KP K [NP lions]] in I like lions is an argument, but the phase edge is empty, lacking a determiner. As a result, reference is only to lions in general, and not to any particular lion(s). Reference to an amount of lion (as a substance) requires reducing this structure even further, removing Number marking: I had lion (i.e. I ate lion). When a determiner appears, individual reference is licensed: I had a lion. Definiteness is also licensed in this configuration, but only when the determiner is strong, e.g. the (as in I had the lion). Individual deictic reference requires a deictic morpheme (I had that lion), and the NP itself is no longer required (I had that/this). Finally, with the presence of singular personal deictics, this restriction on NP dropping is no longer optional, but obligatory (*I man).10 This leads to a hierarchy ranging from predicative to non-specific to indefinite-specific to definite-specific to deictic to personal (of the kind Zamparelli 2000 called for):11

Grammatical hierarchies also arise with events (which progress from states to full events) and propositions (progressing from non-finite to finite CPs). Only syntactic theory, and not model-theoretic semantics, can offer an explanation for the emergence of such ontologies, and not just a formal characterization of them. The generation of these hierarchies crucially relies on relations morphologically interpreted as Cases: "In the absence of argument-positions, which do not exist in Minimalist grammar, Case is the only thing that yields argument relations: thematic roles, in particular, exist in the adjunct system, and require no arguments" (Hinzen 2014: 140).12

Assuming that all arguments are introduced by functional heads (indirect objects via an applicative head, agentive or active subjects via a Voice head, and direct objects in the edge of v), this leads to the following schema, which can reveal how particular argument relations are morphologically realized (Kratzer 1996). As noted in Hinzen & Sheehan (2013) and Hinzen (2014), a formal ontology is produced alongside this licensing, with NP1 being licensed in relation to v yielding an event or state, Voice yielding more complex events containing the previous event/state, all of which is morphologically interpreted in terms of Case-marking. Different Voice-v heads generate distinct Case-marking patterns, something which co-varies with the hierarchy of meaning:

8 As Hinzen (2014: 138) puts it: "If grammar is meaningful, and in a different sense of 'meaning' than we find in the lexicon, then grammatical relations matter as an independent input to semantic interpretation, in a way that Agree or Merge, as abstractions from such meaningful relations, do not."

9 This mereological, hierarchical organization lends support to the characterization of the CP, vP, and KP phases in Murphy (2014a) as, respectively, the Russellian phase (co-varying with propositions), the Davidsonian phase (co-varying with events, defined as such regardless of physical or temporal features; atomic decay qualifies as an event just as much as cosmic expansion or a football game), and the Fregean phase (co-varying with objects and their substances).

10 These observations suggest that the term 'definite description' is misleading, since whether or not a phrase like the lion is used as a definite description is determined by its syntactic context.

11 Similar considerations lead Martin & Hinzen (2014: 102) to propose a Grammar-Reference Link Hypothesis: "Referential strength (from predicativity to deixis) is not an intrinsic property of lexical items, but rather of certain grammatical configurations."

12 It remains to be stressed that, contrary to the statements in Hinzen & Sheehan (2013) and Hinzen (2014), grammar itself does not refer; rather, people use grammar to refer in a given circumstance. Words like chair and table do not 'refer' to anything, but when used by a speaker (with intentions) they can potentially refer, given the phasal configurations provided by grammar. Syntax itself is not enough for reference to be established, any more than a functioning olfactory system is enough for someone to smell (the notion of will needs to enter, too). The referential capacities of non-human primates should also not be underestimated (see Murphy 2015c), with Hinzen & Sheehan summarily dismissing primate cognition for supposedly not rising to the level of 'thought' (a term they mystically associate only with grammatical cognition, misleadingly and unhelpfully). Reference cannot be solely reduced to grammatical factors, then, and questions remain about how speaker intention interfaces with syntax, to be explored in section 3.

Case morphology consequently appears to track cross-phasal relations: when a nominal crosses a phase boundary, its Case marking changes, despite the fact that, for instance, in (8a) and (8b) him and he have identical thematic roles. In (8a), ACC expresses a relation between the verb and the internal argument, yielding a predicate with no truth-value (killed him cannot be assessed for truth). But in (8b), NOM expresses a relation between the finite verb and the external argument, corresponding to a proposition.13 Movement for reasons of Case is therefore interpretable, contrary to standard minimalist assumptions (Chomsky 2001).14
There is also no need to invoke deflationary theories of truth (Horwich 1998), according to which to assert that a statement is true is just to assert the statement itself, since grammar delivers the structure which the interpretive systems proceed to substantiate (see Martin & Hinzen 2014 and section 3 for examples). Moreover, the criterion of individuation is heavily influenced by grammar, and is not just an amorphous series of 'sortal concepts' as invoked by Galery (2009). This phasal theory of reference also resolves certain debates in contemporary Millianism (Mousavian 2015), since the strength with which it is possible to refer using some nominal expression is modulated by how 'high' up the phasal hierarchy an object moves, while existence is shown not to be encoded in the grammar (unlike features such as Tense). While we cannot explore the formal ontological structure of the world through physical experimentation or perceptual analysis, linguistic theory provides a way (indeed, perhaps the only way) of doing so.
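The referential-strength hierarchy discussed above can be rendered as a toy ordering. The Python sketch below is my own illustration: the level names follow the text, but the mapping from overt nominal material to levels (the `classify` function and its parameters) is a deliberate expository simplification, not a parser of English.

```python
# Illustrative sketch of the referential-strength hierarchy: grammatical
# configuration, not the lexical item itself, fixes how strongly a nominal
# can refer. The classification rules below are expository simplifications.

HIERARCHY = ["predicative", "non-specific", "indefinite-specific",
             "definite-specific", "deictic", "personal"]

def strength(level):
    """Position in the hierarchy; higher index = stronger reference."""
    return HIERARCHY.index(level)

def classify(has_number=False, determiner=None, deictic=False, personal=False):
    """Rough mapping from overt nominal material to a referential level
    ('I had lion' vs. 'I had a lion' vs. 'I had that lion', etc.)."""
    if personal:
        return "personal"            # personal deictic, NP obligatorily absent
    if deictic:
        return "deictic"             # 'that lion', 'that/this'
    if determiner == "strong":
        return "definite-specific"   # 'the lion'
    if determiner == "weak":
        return "indefinite-specific" # 'a lion'
    if has_number:
        return "non-specific"        # bare plural: 'I like lions'
    return "predicative"             # bare mass/singular: 'I had lion'

# 'the lion' outranks 'a lion', which outranks bare (Number-less) 'lion':
assert strength(classify(determiner="strong")) > strength(classify(determiner="weak"))
assert strength(classify(determiner="weak")) > strength(classify())
```

The point the sketch makes concrete is that adding grammatical material (Number, a determiner, a deictic morpheme) monotonically raises the available referential strength, exactly as the prose hierarchy describes.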
With a phasal syntax, then, concepts like proposition and event co-vary with the grammatical architecture itself (de Villiers 2014), that is, with its computational system and its base of declarative knowledge as opposed to the system processing speech acts, and it is needless to postulate Mentalese or a language of thought (à la Fodor 1998) or pragmatic processes, even if mediation of such syntactic and semantic structures is not identical to reduction, since an understanding of CI bare output conditions and of conceptual representations (through developmental psychology, philosophy of language, and other domains) is, under core minimalist assumptions, all that is needed. The importance of developing syntactic analyses becomes clearer when we acknowledge with Narita (2014: 10) that there is no direct evidence for the nature of CI, and we can access its effects only via "speculations, introspections or theory-internal considerations".15

13 This leads to a possible solution to the perennial problem surrounding why Case is category-sensitive, being primarily assigned to nominals, since it may be that Case is rather sensitive to referentiality, which happens to be related to nominals (but which cannot be reduced to the category Noun).

14 Knott's (2014) model of Case also attributes a semantic role to it, but from the perspective of a particular hypothesis concerning the sensorimotor-LF interface.

15
More worryingly, since the human scientific faculty is, like all organic faculties, limited and constrained (being composed of concepts such as probability, determinacy, and input-output schemas), the conceptual tools available to linguists may be insufficient to explain core (and very simple) aspects of CI.

Lexical Semantics
Since I have argued against any substantial role for pragmatic processes in complex polysemy and nominal reference, I would like to propose two phases of linguistic interpretation: syntactic computation and pragmatic computation. In order to properly address the debates surrounding unarticulated constituents, I will situate the discussion in section 3 within a clear architectural framework, which requires elaboration.
I will assume that the inputs to syntax are flat or atomic 'lexical precursor cells' (LPCs; Boeckx 2014: 27) which acquire their lexical features as the derivation proceeds and phases are transferred to the interfaces (Munakata 2009). Call the set of LPCs the Precursor Lexicon, pLEX for short (as in Murphy 2015a). What lexical features contribute is a unique configuring of other mental systems, providing instructions to them (Chomsky 2012: 191). This idea seems to be supported by the finding that verbal labels influence categorization in infants (Plunkett et al. 2008). Questions remain over how lexical items came about, but, as argued below, a significant step in the right direction is made when we detach syntactic computation from lexical influence.16 The work of Borer (2005), Boeckx (2014), and others suggests that forms of grammatical order arise not from the lexicon but from the dynamics of the derivation.17 In sharp contrast to this, it is possible to detect notable similarities between lexicocentrism, genocentrism, and neo-phrenology, all of which seek explanations based on elementary components rather than deriving structure from processes or forms. But unlike lexicocentrism, the latter two have been surpassed over the last half century by evo-devo agendas, systems neuroscience, and other 'formalist' (Amundson 1998; Hinzen 2006) and dynamic approaches.
In addition to the above model of reference, I think there are other reasons not to rely on pragmatic accounts of natural language meaning. For instance, there is a near-optimal match between phasal derivations and Neo-Davidsonian event representations (Kratzer 2003), with Boeckx (2014: 103) arguing for a causal link between the emergence of phases and the evolution of complex event concepts. C maps to the point of existential closure, v to internal/external thematic role assignment, n to type-lifting turning a predicate into an argument, and p to adjunct introduction. Although these correspondences are admittedly cursory, it nevertheless appears that Collins and many others are likely incorrect in claiming that "syntax fails to match up with content in a principled way" (Collins 2007: 806).

16 This is a change which is already steadily developing in labeling theory and in recent discussions centered on how Merge is no longer seen to be 'driven' by, for instance, feature valuation and θ-role assignment; see Epstein et al. (2015), but also Wurmbrand's (2014) valuation-based Merge Condition and Bošković's (2008) observation that feature checking has a freezing effect on movement, which would be difficult to capture in a 'free Merge' architecture.

17 In this sense, but not others (for instance, see Boeckx 2015), this anti-lexicalist stance is compatible with the present reference-centered rationalization of Case, in which it is grammatical organization which gives rise to propositional reference, and not Case-features rendering Goals active to Agree with Probes, a move which reduces grammatical relations to features: an example of what Boeckx would term 'featuritis', with features in contemporary minimalism often being deployed simply to 'capture the facts', being all-powerful in a similar way to how transformations and parameters were in the 1960s and 1980s. The explanatory power of feature-driven theories is far from awe-inspiring: Nouns are defined as belonging to the category [+Noun], and so on; similar to how Fodor "often seems satisfied to explain a concept by typing the word for it in different cases and fonts" (Pinker 2008: 97), with dog denoting the concept DOG and subprime mortgage denoting the concept MORTGAGE(SUBPRIME).
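The phase-to-event-semantics correspondences just listed can be made concrete with a toy Neo-Davidsonian composition. The sketch below is my own schematic rendering of those correspondences under the stated assumptions (the function names and the string-based logical forms are expository devices, not an implementation of any particular proposal):

```python
# Toy Neo-Davidsonian composition tracked phase by phase:
# n type-lifts nominals into arguments, v/Voice introduces the event
# predicate and thematic roles, and C existentially closes the event
# variable. Logical forms are built as plain strings for readability.

def n_phase(noun):
    """n: type-lift a predicate into an argument (a simple constant here)."""
    return noun.upper()

def v_phase(verb, agent, patient):
    """v/Voice: introduce the event predicate and assign thematic roles."""
    return [f"{verb}(e)", f"AGENT(e, {agent})", f"PATIENT(e, {patient})"]

def c_phase(conjuncts):
    """C: the point of existential closure over the event variable."""
    return "∃e [" + " ∧ ".join(conjuncts) + "]"

lf = c_phase(v_phase("stab", n_phase("brutus"), n_phase("caesar")))
print(lf)  # ∃e [stab(e) ∧ AGENT(e, BRUTUS) ∧ PATIENT(e, CAESAR)]
```

Even in this crude form, the division of labour mirrors the prose: each piece of the final representation is contributed at a determinate phase, which is the sense in which syntax does "match up with content" in a principled way.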
These observations support Borer's (2005) exoskeletal morphological model, which views open-class words as hidden 'conceptual packages' that are simply embedded in the syntactic structure, neither altering it nor being altered by it. Only when the structure is built by phase is the package 'opened' (interpreted). This is one of many reasons why syntax appears to be entirely free of lexical influence, operating independently of the needs of feature matrices (see Epstein et al. 2014 on 'free' Simplest Merge). As discussed in Murphy (2015d), many linguistic modules constructed during the Government and Binding era live on re-cast as features, yielding a form of massive modularity as a historical residue.18 Invoking Merge and roots (Marantz 1997; Borer 2014), though necessary, also fails to illuminate the structure of the objects Merge ultimately operates on, leaving unaddressed the question of how feature-bundles are constructed to begin with (for a proposed mechanism of feature-set binding based on neural oscillations, see Murphy 2015e, forthcoming; Murphy & Benítez-Burraco 2016).19 This kind of lexicocentrism is simultaneously a barrier to developing linking hypotheses between linguistics and neuroscience, since it presupposes that syntax only operates on something as narrowly domain-specific as a lexicon. Instead, we can assume that LPCs are enriched by a small set of featural representations as the derivation proceeds, such as obligatory A-features (φ-features) and optional A'-features (Wh, Top, Rel, and so forth), as they are termed in van Urk (2015).
This modular perspective opens up new avenues for interpretation. The concept BOTTLE, for instance, relies on visual cognition through its shape and colour features, with language contributing its functional properties (container, used to move material masses, and so forth; McGilvray 2005: 308). Similarly, a pile of leaves in a forest becomes a thing if it is put there intentionally, perhaps to act as a signal, where thinghood should be distinguished from the objecthood of visual perception (the point being that the functional criteria of language are not strictly aligned with the structural criteria of the visual system: language seems to deliver a functional role to objects). If the mind is composed of CKSs (Spelke 2010), then language may allow a child to use pre-lexical concepts, which may not be systematically combinable, 'to introduce Lexical Concepts', which, via set-formation/Conjoin, can be combined (Pietroski 2014a). Exploring these issues further, Pietroski (2012) has claimed that the lexicalization of concepts is a necessarily creative process which cannot be reduced to instances of words 'labeling' or 'standing for' particular concepts, with his theory of semantic computation reducing to Conjoin and a limited form of existential closure.20 This seems to support the notion that words are not concepts but rather instructions to build concepts (from their semantic features) (Pietroski 2014a, forthcoming). Given this background, it remains to be seen how the syntactic and pragmatic components operate to derive conceptual structures. This topic will be the concern of section 3.

18 Thus Lakoff (1972: ii): "So linguists fudge, just as has been done in the reflexive rule, by sticking on the arbitrary feature +REFL. Such a feature is a fudge. It might as well be called +CHOCOLATE, which would in fact be a better name, since it would clearly reveal the nature of the fudge."

19 See Svenonius (2012) for a proposed 'Bundle' operation, and Murphy (2015a) for an alternative based on set-formation/Simplest Merge.

20 When the brain acquired the ability to lexicalize concepts (see Murphy 2015e for a neurobiological proposal for how this was achieved), it created "items that could be freely called up, partially independent of perception" (Hinzen & Sheehan 2013: 38). Boeckx (2014: 109) speculates that it may turn out that "some of the properties of human concepts can be derived from the fact that they have been lexicalized".

Unarticulated Constituents and the Syntax-Semantics Interface
While much is known about the computational operations of syntax (Adger 2003; Narita 2014), the computations of the pragmatics module(s) stand on less firm ground. One such proposed computation is saturation, a 'completion process' (Bach 1994: 133; Carston 2009: 15) operating on constructions like "Paracetamol is better [than x]".21 Saturation has been argued by Stanley & Szabó (2000), in their syntactically-motivated rejection of much of the pragmatics architecture, to be the only active pragmatic computation. For others, such as Hall (2008), there are processes such as free enrichment, a lexical process modifying subparts of CI representations: "It's snowing [in location x]" (Sperber & Wilson 1995; Recanati 2004). These are cases of 'unarticulated constituents' (Recanati 2002), differing from those proposed in theories of syntactic ellipsis in that they are motivated on pragmatic grounds. Finally, computations involving lexical pragmatics (Wilson & Carston 2007; Sperber & Wilson 2015) adjust or modulate existing elements of linguistic meaning, as in the case where "David is a man" is interpreted as meaning David is an ideal man.

Phasal Eliminativism
This section will argue that most of these operations can be eliminated in favor of more principled grammatical accounts. As an alternative, the following model will be defended:

(9) Phasal Eliminativism
Syntax supplies instructions to CI to construct conceptual representations which can optionally be pragmatically enriched.
Phasal Eliminativism is a form of eliminativism not because it denies word meanings, but rather because it denies the existence of core components of the

21
Saturation is seen as a central pragmatic process under Wrong Format (WF) theories. Recanati (2004) defines this as the view that linguistic semantics does not yield a truth-conditional component, but there is nonetheless context-independent meaning associated with a word. WF holds that words have meanings, albeit ones which are too semantically rich to be employed in utterance interpretation. On the more extreme end of the spectrum, Meaning Eliminativism (ME) is the most radical form of contextualism (Bezuidenhout 2002), or the view that sentences express content only given a particular speech act context, which triggers relevant memory traces (Hintzman 1986) from a 'grab bag' of maps and images (Rayo 2013), along with other top-down processes (Rumelhart 1993: 78). ME is WF "pushed to the extremes" (Recanati 2004: 141), denying word meaning at the type level and embracing only contextual tokens.
pragmatics system. It should be stressed, however, that while pragmatic processes can enrich syntactic structures (hence the 'optional' status), such structures are not as radically underspecified for conceptual content as is typically claimed in the pragmatics literature. By way of explanation, unarticulated constituents are usually deemed part of presupposed discourse knowledge. They are taken to be propositional elements which do not arise at the sensorimotor interface (SM) but are necessary for a sentence to become truth-evaluable, occurring as part of the top-down free enrichment process.22 The status of such constituents remains a major topic of research. To take a few simple examples, McIntosh (2014) claims that "It is raining in London" really expresses the proposition "It is raining in London now", while Fodor (2001: 12) argues that "It's three o'clock" really represents "It's three o'clock here and now, in the afternoon" (or whenever the sentence is spoken). These cases suggest a degree of misalignment between externalization and thought. Debates in lexical pragmatics often then center on the question of whether one should expand the influence of Logical Form (Chomsky 1995) or pragmatic processes of free enrichment to account for such misalignment. Departing from this focus and putting the explanatory burden on syntax, Hinzen (2015) argues that there is nothing 'missing' in grammatical constructions for them to encode propositionality. When the supposed unarticulated expressions are inserted, completely different propositions result, rather than a simply more 'overt' or 'specific' form of the underlying proposition usually posited. The only hidden constituents are the ones syntactically motivated in theories of movement, control, and so on, which are interpreted at CI but not externalized at SM.23

Focusing on free enrichment, a cross-linguistic exploration of pragmatic processes led Martí (2015) to suggest that most cases typically used to defend free enrichment can be explained through the involvement of grammatical processes. In addition, the present section will show that only lexically and not grammatically specified aspects of meaning are altered by context: The principles of theta-role assignment and agreement relations are unswayed by how much rain is pouring in London. There may consequently exist cases of what could be called 'lexical underdeterminacy', but not at the level of phrase structure building. I will also argue, following numerous others, that Logical Form can be dispensed with, and that syntactic structures are mediated purely through pragmatic processes (extra-grammatical in nature, such as world knowledge), as Phasal Eliminativism suggests.
The cases typically explored in defence of pragmatic unarticulated constituents crucially involve a level of optionality; that is, the enriched meaning is not necessarily tied to the expression, as in "Every time John lights a cigarette, it rains" (Stanley 2000), which can introduce a location variable ("…rains in the place he lights the cigarette"), but does not have to.24

23
The centrality of CI in linguistic structure, a core minimalist assumption, was arguably appreciated by 'Counter-Enlightenment' philosopher Johann Hamann: "To speak is to translate-from a language of angels into a language of men" (Rudd 1994: 197). See Strawson (2008) and Murphy (2014b) for related classical references.

24
This perspective aligns with Chomsky's claim that pragmatics, although being the "wastebin" (2006: 98) of linguistics into which inexplicable puzzles are cast, equates to "principles of action" and comprehension, not structure-building (Andor 2015: 148).

Confusing optional for obligatory processes, perhaps the defining characteristic of contemporary pragmatics, would be deemed a thorough methodological catastrophe in any other scientific domain. It can further be shown that grammar systematically determines, and not merely constrains, the proposition expressed in cases of supposed misalignment between meaning and syntax. For instance, all too often are possible logical deductions about expressions taken to be 'implicatures', when in fact the contents of implicatures derived from a single expression are not only propositionally distinct, but contextually distinct, encoding dissimilar thoughts and being appropriate under different circumstances. To illustrate, the implicature standardly derived from "John has four cars" is "John has exactly four cars," despite the latter construction being propositionally and circumstantially distinct from the former. Following Phasal Eliminativism, assume that semantic interpretations are built via cyclically transferred labeled structures and the operations of the pragmatic/interpretative faculties. Pragmatics can enrich but not conflict with the output of the grammatical Merge-Agree-Label-Transfer process. What's more, to reverse the standard metaphysical assumptions of lexical pragmatics, phases also determine what constitutes a context by constraining how a situation can impact the referential uses of expressions. Contexts are not preexisting, mind-independent states of affairs in the world, but rather amount to different ways in which grammar can orient its delivery of conceptual content around experience, which determines (at most) perceptual schemas and preferences for particular representational retrievals. In short, contexts are nothing more than mental
states, and syntactic structures denote updates of these states. There is no a priori reason why some independently defined 'context' should influence the operations of syntax and CI. The present proposal consequently departs from Wilson's (2014: 144) belief that a "linguistic expression" is interpreted by being "put in systematic correspondence with states of the world." The pragmatics literature rarely unpacks what is meant by "linguistic expression" or "states of the world", and with words like book and city being shown to have no mind-independent referent (Murphy 2015b), the referentialist perspective Wilson adopts becomes unmotivated.
With syntax ultimately being responsible for numerous interpretive phenomena, this leads to a situation similar to the one Strawson (2015) is left in after discussing the philosophical distinction between internal and external content:

The domain of external content has traditionally been taken to be the external world of tables and chairs. It's a vast domain, even when we restrict ourselves to concrete reality and put aside things like numbers and concepts. I've argued that the domain of external content is larger still. It extends further into the mind than is sometimes supposed. It's only at a very late stage, as we travel into the mind from its traditional heartland-the 'external environment' as it is usually understood-that we reach its true border. And the metaphor of the border is misleading. The internal content-external content boundary isn't a straightforward ontological line, because we can think consciously about everything that exists, and everything we can think about can be external content, and is external content when we think about it.
While it is true that lexical features determine conceptual content, what has not been acknowledged by the pragmatics literature is that reference is determined by the grammar, not the lexicon, as noted in section 2. Lexical specifications are thoroughly context-sensitive, but grammar is not. Both the pLEX and the computational system contribute to meaning, though each contributes meaning of a different sort.
Consider the examples below:

(10) a. The colonel wanted to be [a man].
     b. [A man] was seen by the butcher.
In (10a), 'a man' is used predicatively to denote a property the colonel wishes to satisfy, whereas in (10b) the phrase is used referentially. It can also be used generically, as in 'A man would be wise to avoid that river', and the uses of the phrase 'a man' are purely determined by the grammar, not lexical content. That is, use is determined by phasal position, with movement to the phase edge, such as N-to-D movement, yielding stronger referentiality.25 The crucial factor in determining referentiality is not lexical category (N or D), but rather phasal position, so a more adequate term for this would be Interior-to-Edge movement, where [edge [interior]] is the phasal template, and under which descriptive content is found in the Interior (e.g. predicative interpretations like 'I saw [E [I lamb]]' describing the kind of meat witnessed) and referentiality is established at the Edge (e.g. referential interpretations like 'I saw [E the [I man]]' or rigid 3rd person interpretations like 'I saw [E Mary [I t]]', where Interior-to-Edge movement has taken place). A given name's semantic type reflects its phasal position, then, and so claiming (as is standardly done) that names enter the derivation as type <e> is an inaccurate generalization.
These observations are compatible with the ethological literature (Murphy 2015a, 2015c), which reveals that non-human forms of externalization are most likely limited to 'functional reference', and not the elaborate forms of reference made available through the labeling-driven syntactic component. The lexicon (however one formulates it) and syntax play entirely different roles in the construction of interpretations, as cases of specific language impairment and forms of mental disorders appear to illustrate (Hinzen & Sheehan 2013). The computational system therefore allows for a systematic and well-grounded investigation into the nature of meaning, escaping the kind of ad hoc adventurism of many philosophical and semantic theories parodied by Lycan (1984: 272) in his Double Indexical Theory of Meaning:

(11) MEANING =def Whatever aspect of linguistic activity happens to interest me now.
Another major issue I would like to raise with the lexical pragmaticist's reliance on enrichment is that it dodges the more fundamental problem of accounting for the internal complexities of individual lexical items and phrases.

25
In a related discussion, Hinzen (2014: 140-141) writes: "We only know whether a given lexical item functions referentially or predicatively by looking at its grammaticalization, and the question of whether proper names are predicates or referential is ill formed. It wouldn't make sense to class a word like 'Mary' lexically through a feature like REF (for 'referential'); or to specify a given nominal as 'ARG' (for 'argument'); or to class it as 'ACC' and define a derivation through the need to 'check' such a feature." Again, the prospects for feature-driven theories of meaning are bleak.
The logic appears to be as follows: Explain the mysteries of lexical items and their complex conceptual interface properties by invoking further lexical items with their complex conceptual interface properties.
Going somewhat beyond Hinzen's (2015) and Hinzen & Sheehan's (2013) analysis, I would like to propose that it is not just grammar that mediates conceptual content like propositionality and objecthood, but more specifically grammatical sub-operations.26 Set-formation (in its various guises in the literature) is responsible for forming adjuncts, which crucially do not influence the grammaticality (i.e. phrasal/labeled/headed status) or truth-values of the construction they adjoin to. The truth of "It is 6 o'clock" is not affected by adding the PP 'in the afternoon'. The common claim in lexical pragmatics and lexical semantics that the PP is somehow 'hidden', indeed 'obligatorily hidden', does not square with the well-established influences of the computational system on sentence meaning (Pietroski forthcoming; Hornstein & Pietroski 2009). To illustrate further, consider the following: Even though (12a) means, in virtue of its fully specified tense, "It is three o'clock now", we cannot suitably answer the question "When is it three o'clock?" by saying "It's three o'clock"; instead, we have to say "It's three o'clock now". Again, the adjunct is not 'hidden', and a different CI representation arises in the absence and presence of the adjunct (contra McIntosh 2014: 97 and his focus on Dummettian 'ingredient senses' and statements being true "just in case" certain conditions obtain). Adding 'now' only serves to distort the meaning of the sentence in particular circumstances: (12b) can be corrected with (13), but (12a) cannot (in a case in which it's three o'clock when (13) is being spoken):

(13) No, it's three o'clock then.

The supposedly hidden constituents posited by Fodor (2001) and others do not in fact yield a 'more accurate' structure, but simply a propositionally and psychologically distinct representation.27 Among other theories, this weighs against McIntosh's (2014) revival of Evans's (1985) proposal that the truth-value of propositions can vary over time. The grammar of a proposition is an object independent of its truth-value, though one which crucially directs its construction. Syntax builds sentences through procedures which are unrelated to truth, contrary to a number of claims in the philosophical literature (see Pietroski 2014b for discussion). Truth is an epiphenomenon of syntax requiring various kinds of cognitive processes, and to say that a particular expression can vary in its truth-value is, on the one hand, to utter a platitude, but on the other it is to imply that constructs like time and personal taste somehow impinge upon the operations of syntax.28 Evans's use of tense logic, for instance, reveals its shortcomings in representing (let alone explaining) the semantics of (12a), which it would incorrectly notate as (14) (similar to how the decompositional approaches to open class words like open noted in section 2 are inadequate). Further, by placing time on an independent metaphysical plane from all other forms of context (while also exclusively discussing the concerns of 'modal logicians' and 'tense logicians', and not linguists), McIntosh (2014) is indirectly sidelining the importance of other contextual variables, which, as we have seen, are much closer to the content of a given linguistic construction than has typically been appreciated. Quite apart from these empirical considerations, from a purely naturalistic perspective the motivation behind invoking pragmatic operations and unarticulated constituents is a relatively peculiar one, failing to meet universally accepted standards of theoretical simplicity and empirical adequacy taken for granted in other domains of the cognitive and natural sciences. If an investigator of a higher-level science can explain some phenomenon by way of a lower-level account (a systems neuroscientist invoking neurochemical processes, for instance), then the need to construct further higher-level objects or processes becomes redundant.

26
This proposal should be understood in the context of the Decompositionalist Project outlined in Murphy (2015c), which seeks to achieve a finer-grained level of understanding of syntactic computations in an effort to make the theories of linguists relatable to (and perhaps commensurable with) domains outside of the language sciences.

27
Belleri (2016: 29) presents different, complementary arguments that "it is not the case that sentences even generally fail to fully express our thoughts", given "minimal contextual information," countering the claims of Recanati (2004), Carston (1999), Travis (1996), and countless others.
Relatedly, while philosophers of physics and philosophers of biology need to be well versed in physics and biology, it is somewhat peculiar that philosophy of language textbooks are remarkably light on linguistics. Discussions of syntax typically reduce to bullet points about how syntax is the study of 'word order' and the like. Empirical claims about language should be accompanied by a scientific understanding of linguistic structure, just as debates about physicalism and Hox genes need to be supported by an understanding of the relevant area of knowledge. More generally, there is a tendency in philosophical and pragmatic circles to ignore the fact that language has grammatical organization, and to sideline the implications of this for topics ranging from meaning to vagueness to intentionality. The ideological and structural barriers to innovation in this domain are substantial, and often overlooked.
The pragmaticist's desire to appeal to, for instance, essentialism and Atlas's (1989, 2005) and Carston's (2002) 'underdeterminacy thesis' may stem from a more general cognitive tendency to complete missing details, as in the case of Kanizsa's 'incomplete' triangle, which is not too distant from saying that "Mary has arrived" is really a particular manifestation of "Mary has arrived in London". The evidence used to defend, for instance, the silent subject-argument of 'win' in control structures like "Mary wants to win the game", with the argument co-referring with 'Mary', is entirely different from the justifications given in the pragmatics literature for silent elements. The principles of syntactic computation explain the semantic phenomena, and the hidden constituents (e.g. PRO) are not only defended on such grounds, but also crucially interact with other grammatical constructs like Case and c-command. The kind of rationale seen in discussions of pragmatic underdetermination, saturation and free enrichment is similar to the one seen in the case of someone explaining that the reason why "John arrived at the park" really expresses the meaning "John arrived at the nice, sunny, green park" is because they happen to know (via a close friend) how much John loves parks which are nice, sunny, and green. Missing elements need to be independently grammatically licensed, not stipulated as a 'general pragmatic process'. Pragmaticists also often 'over-generate' hidden elements (Sennet 2011), as in the case of the common claim that "Everyone went to London" or "Everyone screamed" really express the propositions that everyone in a given context went to London or screamed (Carston 2009: 7). But to claim that everyone went to London is certainly not to claim that "Everyone alive went to London", and so there is no need to invoke ancillary pragmatic operations, since there is in fact no semantic mismatch between content and linguistic form to begin with. More generally, a tendency to interpret a given utterance in a particular way (e.g. interpreting "It's three o'clock" as "It's three o'clock at the time of utterance") says nothing about the intrinsic content of that utterance. Likewise, a given speaker's communicative intention to express "It's three o'clock now" when uttering "It's three o'clock" does not provide a basis from which to posit obligatorily present silent elements (contra Fodor & Lepore 2004: 10), any more than my communicative intention to express "I don't know" when shrugging my right shoulder in a given context implies that there is some inherent, obligatory semantic content to the act of shrugging one's shoulder, which could mean any number of things.

28
This perspective is in fact similar to J.L. Austin's intuition that questions of truth arise at a different level from natural language expressions.
These observations also apply to supposedly unarticulated instruments. "He took the gun out and shot John" is often taken to include a hidden constituent 'with the gun' after 'John' (Korta & Perry 2008). But not only is it otiose to add the PP 'with the gun' when answering the question "What did he do with the gun?" by saying "He took the gun out and shot John with the gun", but this statement is also false in situations in which John was shot with another instrument, unlike the original utterance in which 'with the gun' is absent. The two constructions yield different truth-values with distinct propositional force. It is also worth noting that many of the justifications for underarticulation and underdeterminacy, such as Neale's (2007) Underarticulation Thesis, rest on an implicit adoption of semantic externalism, through which linguistic meaning is somehow 'tied' and 'connected' to mind-external physicalist objects and processes, often discussed within the context of Twin Earth and Dry Earth scenarios, and driven and reproduced by an unspoken but powerful allegiance to what Sellars (1963: 6) called "the manifest image" of ordinary perceptual content, rather than the underlying conceptual representations and system of computations which sustain it (see Chomsky 2000, 2013; Lau & Deutsch 2014).29 Defending the existence of hidden location variables in utterances like "It's raining", Perry (1998: 9) reasons that the location "is a constituent, because, since rain occurs at a time in a place, there is no truth-evaluable proposition unless a place is supplied." But this argument stands only if syntax (and presumably lexical content, too) is structured and directed by necessarily syntax-external processes. Rain is not the same thing as the complex concept RAIN, any more than a child's (or, for that matter, an adult's) conception of water is the same thing as H2O (contra Belleri 2016: 36). Syntax supplies temporal and spatial associations in the structure "It's raining" through tense and nominal reference, while what Hinzen (2015) calls the Here and Now of a speech act (its given time and location, which necessarily accompany any utterance) also saturate it, and so nothing is underarticulated, and grammar requires no assistance from exotic pragmatic operations.
As noted in section 1, grammar and content align in ways largely unrecognized in the literature on pragmatics and human cognition more generally. Grammar-meaning alignment should be seen as the null hypothesis, with empirical work required to justify deviation from it. Assuming this, we can propose the following guideline:

(15) Syntax-Internal Precedence (SIP)
Invoke syntax-external processes, such as pragmatic procedures, only when the explanatory power of syntax-internal operations reaches its limit.
To push the above argument further, the present evaluation of (12)-(14) leads to a rejection of pragmatic unarticulated constituents of the kind which can also be produced by merely observing the adicity of particular concepts, without even resorting to the level of analysis Phasal Eliminativism operates at. Indeed, instead of invoking unarticulated elements, an urgent and more difficult task faces the linguist and philosopher concerned with why (16) is a full sentence, if stabbed is a concept with a variable corresponding to the stabber:

(16) Caesar was stabbed.
Understanding the valences/adicities of lexicalized concepts requires no 'silent' elements constructed by human pragmatic competence (e.g. "Caesar was stabbed [Op]", with a silent operator standing for the stabber) or even individual differences amongst people of different tongues, but can be achieved through a deeper understanding of the individual set-theoretic operations and representations which host such properties, without the need to bring the grim human being into things. In a similar way that truth cannot purely be reduced to syntactic propositional structures (CPs), and can only be said to operate within these constraints, so too can the differing number of arguments a verb takes (compare the dyadic slept to the highly polyadic, that is, ranging from dyadic to triadic to tetradic, put) be attributed to multiple factors. The adicities of lexical items are plainly one factor in the construction of their meaning. Another appears to be the effects of lexical items being instructions to build monadic concepts which may in turn have been introduced by other concepts with varied adicities. Since non-humans are capable of combining two distinct concepts (Murphy 2015a), the term concept, when used within linguistics and philosophy, should therefore be understood as referring to concepts which display a (yet to be fully determined) level of inter-modular assimilation.

29
[…] walks among the "glaucous-leaved, crimson-stalked marsh-plants" in Lodmoor country park's "dark stretches of gloomy peat-sold" with a young companion, Perdita Wane, and the "vague warmth of diffused well-being that it cast over him seemed to reveal with a culminating vividness that all material objects were unreal compared with the mental activity in which they floated, like rocking driftwood on an intangible tide" (1999: 155; see Murphy 2014b).
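The adicity contrast just mentioned (the dyadic slept versus the polyadic put) can be made explicit under neo-Davidsonian assumptions. The notation below is an illustrative reconstruction, not the paper's own formalism; the predicate names are chosen for the sketch:

```latex
% Adicities counted with an event variable e (an illustrative assumption):
\mathrm{SLEPT}(e, x) \quad \text{(dyadic)} \qquad
\mathrm{PUT}(e, x, y, z) \quad \text{(tetradic)}

% A Pietroski-style recasting: the lexical item fetches a monadic event
% concept, while the label introduces dyadic thematic concepts such as
% EXTERNAL(e,x) and INTERNAL(e,y):
\mathrm{PUT}(e, x, y, z) \;\approx\;
  \mathrm{PUTTING}(e) \wedge \mathrm{EXTERNAL}(e, x)
  \wedge \mathrm{INTERNAL}(e, y) \wedge \mathrm{LOCATION}(e, z)
```

On this sketch the verb's apparent polyadicity is distributed across conjoined predicates, which is one way of cashing out the claim that lexical items are instructions to build monadic concepts.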
Given that thematic concepts are yielded by labeling, and the possibly human-unique nature of this operation, I would correspondingly like to shift the explanatory burden in accounts of linguistic interpretation not just to syntax, but to labeling. While there are a number of reasons to be suspicious of their lexicalist, word-internal definition of 'label' (according to which individual lexical items have a 'relabeling' capacity under certain conditions of movement), Cecchetto & Donati (2015: 31) are nevertheless right to note that "labels belong to the core part of grammar that cannot be dispensed with and cannot be relegated to the interface". The above division in the role of the computational system and the interpretive systems also reflects the blind and free nature of Merge on the one hand (documented by Epstein et al. 2014), and the necessary optionality of the interpretive operations of pragmatics, such as lexical narrowing (where drink is interpreted as alcoholic drink), on the other.
Syntax can also explain a number of entailment phenomena, if we assume, as noted above, that adjuncts are simply concatenated to a structure and do not reconfigure (i.e. re-label) the clause to which they are adjoined. "It's three o'clock in London" entails "It's three o'clock", but "Every man in the building is tall" does not entail "Every man is tall". This is due to the fact that in the former case the PP 'in London' is adjoined to the full structure, whereas in the latter case the PP 'in the building' is adjoined to the NP 'man' before quantification is fixed. Lexically, [NP man [PP in the building]] entails 'man', since a man who is in the building is still a man, but it does not entail that every man is tall since the PP is not adjoined to the QP 'every man'. The reason why an utterance of "Every man is tall" in a given context can entail that every man in the building is tall is because, as discussed, lexical but not syntactic representations are sensitive to context. In addition, meaning has always been justifiably understood as a relation between concepts and one, not two, linguistic structures.30 Those claiming that hidden adjuncts and other material simply constitute the same meaning as the surface utterance therefore have to also provide reasons for abandoning this assumption; no such contemporary argument has been presented. When Recanati (2004: 58) moves beyond lexical underdeterminacy and argues for the existence of 'constructional' (grammatical) underdeterminacy in the case of 'red pen', we can simply point to the trusty computational system and invoke labeling as the operation which ensures that this phrase can never denote a red object which happens to function as a pen (in which case the adjective would label the phrase an AP), and can only denote a pen with the property of being red (an NP). Syntax consequently determines meaning, carving the path pragmatics must blindly follow.31 Let us assume further that nouns, verbs and modifiers are instructions to build monadic concepts, a process achieved via set-formation/Conjoin. In contrast, labeling permits the creation of thematic concepts (Murphy 2015c), and can be appealed to in order to reduce the need for extra pragmatic processes. Labels can introduce dyadic concepts like INTERNAL(E,X), with the necessary information being filled by the monadic concepts fetched by lexical items. Adding to Pietroski's (2012) argument that "lexicalization is a tool for creating concepts that abstract from certain formal distinctions exhibited by prior concepts", the combinability of human concepts may be generated by labeling, amounting to the kind of type-lifting operation proposed by numerous figures (Montague 1974; Kamp 1975) yielding higher-order concepts (than purely monadic ones) able to be saturated, e.g. λY.λX.RED(X) & Y(X) can be saturated by CHAIR(X) to form RED(X) & CHAIR(X).

30
See also the contrast principle put forward in Clark (1988), under which speakers assume that two forms always differ in meaning.
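Spelling out the saturation step in standard λ-notation (added here for explicitness; the intermediate λ-abstract is left implicit in the prose), the derivation proceeds by β-reduction:

```latex
\big(\lambda Y.\lambda X.\,\mathrm{RED}(X) \wedge Y(X)\big)(\mathrm{CHAIR})
  \;\;\Rightarrow_{\beta}\;\;
\lambda X.\,\mathrm{RED}(X) \wedge \mathrm{CHAIR}(X)
```

Note that the output is itself a monadic concept, so it can in turn be saturated by an individual or conjoined further, which is what allows type-lifted concepts to re-enter the Conjoin cycle.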
Given this, the implications for the language sciences are clear. The linguist concerned with exploring topics ranging from ellipsis to co-predication should attempt to construct hypotheses based on SIP, invoking pragmatic processes only in an effort to explain how syntactically determined representations are ultimately enriched. A principled account of syntactic computation can derive many of the truth-conditional effects discussed by pragmaticists, and there is no need to couch a description of such phenomena within theories of 'free enrichment' or 'underarticulation', which are, ultimately, nothing more than elaborate methods of data coding, not explanation.

Labeling Theory
Before concluding, I would like to briefly discuss the actual labeling architecture invoked above, since instead of exclusively seeking semantic or pragmatic accounts of linguistic content, an understanding of syntax, following SIP, can also reveal the structure (or at least the legibility conditions) of CI.
Though phrase structure building has traditionally been seen as categorially anchored, Chomsky (2015) proposes that labeling can also be achieved through less obvious methods. He explores so-called <φ,φ> agreement, through which an {XP, YP} structure is labeled by symmetric φ-features. For instance, φ-features shared by K/D and T can label a phrase in which KP has undergone movement due to Q-feature agreement with T. Questions of semantic content hover in the background, and are related to and constrained by labeling, but naturally do not reduce to it. Epstein et al. (2014: 465) stress that labeling theory is silent about "how the conceptual-intentional systems use the information that, say, {H, XP} is an H-type thing" (where 'H' denotes the phrasal head/label).
What would otherwise be the embedded TP remains unlabeled. But the agreement features (uninterpretable features, uFs, valued by Agree) on T in (17) would have to be visible at CI for the derivation to converge and satisfy Full Interpretation, which seemingly contradicts the uninterpretable (i.e. undetectable at CI) status of uFs in the context of T, since there can be no meaningful Person, Gender, and Number specifications with regard to Tense. As a way of remedying this, suppose that Transfer to CI (either as a property of Transfer or as an interpretive condition) can select which copied feature-bundles are interpreted in the case of featurally symmetric {XP, YP} structures, perhaps through a form of semantically driven minimal head detection, as in Narita (2014), or a modification to the labeling algorithm, as in Adger (2013), allowing the copied (via Agree) φ-features on T to remain invisible and K's feature-bundle to be assigned interpretation. This would effectively turn the above structure into a <φ,φ> object at CI, with the strikethrough denoting selective uninterpretability, and we can assume that agreement features are visible for labeling. This symmetry-breaking perspective on labeling, yielded by cyclic transfer, also speaks to the present anti-lexicalism, since labeling is shown to be independent of the influence of lexical categories. 32

Relatedly, it is worth briefly returning to the similarity discussed above between certain syntactic and semantic operations, since this might shed further light on syntax-CI interactions and disparities. Consider Function Application and theories of type-shifting, which appear (broadly) syntactic since they concern forms of mental computation. Though many other semantic operations (such as Pietroski's Conjoin) may simply be syntax 'in disguise', these operations notably violate principles of minimal computation like the No-Tampering Condition (NTC) and Inclusiveness Condition (IC), and so may have been the result of the kind of Darwinian modification by descent impacting conceptual representations (Hurford 2007; Carey 2009), unlike the 'perfect' system of narrow syntax with its operations of cyclic transfer and labeling by minimal search, which likely arose through a less gradual evolutionary process, perhaps of the Thompsonian kind as discussed in the evo-devo literature (see Hinzen 2006; Murphy 2015a). 33 Such 'semantic' CI operations involve mapping from hierarchical sentential structures to truth-values and sets of functions, violating IC and NTC. This topic has not, to my knowledge, been addressed in the semantics or wider cognitive science literature, but further interdisciplinary collaboration would achieve a richer analysis of both the core computational system and more peripheral semantic conditions and pragmatic operations, perhaps alleviating Tomalin's (2006: 3) concern, in his history of generative grammar and its relation to the formal sciences, that there exist "many areas of research that are not understood with sufficient precision to permit an axiomatic-deductive analysis". 34 The mode of grammar which characterizes certain aspects of human thought, based on a phasal architecture, has the potential to reveal the structure, relations, and development of semantic representations; one of the motivations behind the following claim from Ott (2009: 360): "The particular phases we find in human syntax are thus not a matter of necessity; if the C-I system were structured differently, different structures would be 'picked out'."

32 It also seems to me incorrect to claim, as is standardly done (e.g. in Epstein et al. 2014, 2015), that Collins (2002) attempted to eliminate labels from the grammar. Despite the title of his article, Collins rather worked towards changing traditionally labeled nodes to a lexically defined set of prominence relations.

33 Concerns over simplicity and elegance in the study of language trace back at least to Leonard & Goodman's (1940: 51) 'considerations of economy' in their logical-epistemic work, and to Quine & Goodman's (1940: 109) distinction between 'real and apparent economy', i.e. theory-internal elegance vs. notational simplicity. It was understood that simplicity of theory is tantamount to explanatory depth. See Larson (2015) and Narita (2014) for discussions of computational efficiency and attempts to refine this amorphous notion, and Boeckx (2014: 87) for a critical examination of the standard claim that an increase in phase boundaries necessarily leads to greater computational complexity, which is rejected in favor of a novel phase model on which, if the phase head α labels the singleton set β, it is an 'intransitive' phase, and if δ labels the two-member set {γ,α}, it is a 'transitive' phase; see Murphy (2015d) for discussion and Boeckx (2012) for the original proposal.
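The labeling options discussed above can be summarized in a toy procedure (my illustration, not Chomsky's or the paper's formalism; the feature bundles and category names are invented): a lexical head labels {H, XP}, while an {XP, YP} structure is labelable only via shared, symmetric φ-features, and otherwise remains unlabeled.

```python
# Toy sketch of labeling by minimal search (illustrative assumptions only):
# syntactic objects are dicts with a category, a phrase/head flag, and
# a (possibly empty) bundle of phi-features.

def label(x, y):
    """Return the label of the two-member set {x, y}, or None if unlabelable."""
    if not x['phrase'] and y['phrase']:
        return x['cat']                   # {H, XP}: the head H labels
    if not y['phrase'] and x['phrase']:
        return y['cat']
    if x['phrase'] and y['phrase']:
        shared = x['phi'] & y['phi']
        if shared:
            return ('<phi,phi>', shared)  # symmetric phi-features label
        return None                       # {XP, YP}, no shared features: unlabeled
    return None                           # {H, H}: left unresolved here

# Hypothetical objects (feature bundles invented for the example):
KP = {'cat': 'K', 'phrase': True,  'phi': frozenset({'3', 'sg'})}
TP = {'cat': 'T', 'phrase': True,  'phi': frozenset({'3', 'sg'})}
v  = {'cat': 'v', 'phrase': False, 'phi': frozenset()}
VP = {'cat': 'V', 'phrase': True,  'phi': frozenset()}

print(label(v, VP))                       # 'v': head labels {H, XP}
lbl = label(KP, TP)
print(lbl[0], sorted(lbl[1]))             # <phi,phi> ['3', 'sg']
print(label(VP, TP))                      # None: no symmetric features
```

The sketch makes the symmetry-breaking point concrete: an {XP, YP} structure with no shared features stays unlabeled, which is exactly the configuration the selective-interpretation proposal above is meant to resolve.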
Notice also that at this point the topics of anti-lexicalism, pragmatic competence and biolinguistics make a certain amount of hitherto unnoticed contact. If linguistic structures can amount to φ-labeled, propositionally complete, and cyclically transferred conceptual representations which are only optionally enriched through more general cognitive and pragmatic processes, then the task of achieving a suitable level of granularity from which to meet the demands of biological adequacy not only becomes much clearer, but also carries with it a higher level of falsifiability, given a developing understanding of the neural dynamics of linguistic computation (Bastiaansen & Hagoort 2015; Murphy 2015e, 2016) and cartographic advances in mapping the brain regions implicated in particular semantic representations (Moseley & Pulvermüller 2014). This is a clear advantage over less specific biolinguistic hypotheses, as Lasnik & Kupin (1977) already noted in their discussion of reduced phrase markers and restrictive syntactic theories. In addition, Moseley & Pulvermüller's (2014) study found that topographical differences in brain activation in response to a variety of noun and verb types were modulated by semantics, not lexical category. Combined with the insight that phrases can be labeled and stored in short-term memory (during online structure-building) as objects labeled by non-lexical features (e.g. [φ … α … [γ … β …]]), this consequently broadens the available options for empirically studying language at the implementational level, with those concerned with, for instance, the neural correlates of phrasal comprehension no longer being limited to searching for signs of NP or VP interpretation.
As this section has demonstrated, there is no shortage of constructive ways to explore linguistic computation and the structure of conceptual representations; some well-established, others newly emerging. But the search for unarticulated constituents via pragmatic processes of free enrichment is not one of them. As the bridges between linguistic sub-disciplines become stronger, and the prospects for wider collaboration with the life sciences grow, it should by now be particularly clear that multidisciplinary perspectives, goals and agendas should be pursued. Syntax is not enough; semantics and pragmatics are not enough; mathematics and philosophy are not enough; anthropology and brain dynamics are not enough. Nothing short of everything will really suffice.

34 Even in debates at the syntax-semantics interface, it is often left unaddressed which structures, operations and features qualify as being narrowly syntactic, and which are postsyntactic. This is indirectly intensified by much of the cartographic literature. EvaluativeMoodP, for instance, is likely not a syntactic primitive, rather arising as the output of syntax-CI Transfer operations. In addition, due to the 'syntacticocentric' (to use Jackendoff's term) perspective on CI interpretation adopted here, perhaps the 'interpretive' function of formal semantics, ⟦ ⟧, should be re-analyzed as being concerned with labels.

Conclusion
The ongoing search in the field of pragmatics for unarticulated, missing elements has derailed a generation of inquiry into the interpretive systems, even if certain properties of these systems, such as relevance-seeking, have been exposed. Following SIP, a return to the original concerns of generative grammar centered on the computational system and how it interfaces with the external systems is now needed. As I hope to have shown, reference is achieved through grammatical, not lexical, mechanisms, while syntactic structures are not underspecified for semantic content, as is often claimed by pragmaticists. An understanding of what syntax can and cannot do is required before post-syntactic CI structures and pragmatic mechanisms can be appealed to. By adopting Phasal Eliminativism and viewing word meanings as a combination of pragmatically enriched concepts and pLEX representations, there is consequently no underdeterminacy, no pragmatic compositionality, no signifier-signified semantics, no Twin Earth paradox, no productive polysemy, no essentialism, no problems of reference or eternal sentences. There is only the bi-phasal syntactic and pragmatic computational procedure and its various operations: set-formation, labeling, cyclic transfer, saturation, relevance-seeking, and so on. It remains to be seen how far the pragmatic systems and semantic content can be reduced either to operations of the computational system or the structure of CI.