
Minimal Search (MS henceforth) was introduced as a mechanism underlying syntactic dependencies in

MS has become a key component in the formulation of various syntactic operations. To name but a few,

[MS is]

Or the semi-formal formulation in

In this context, it is essential to address the following questions: how do the properties of MS relate to those of structure building? Is there a way to define MS in a way that is both formally explicit and empirically fruitful? We argue that there are difficulties that arise when we attempt to define MS under the assumption that Merge produces unordered sets, difficulties that are mirrored in other areas of the grammar such as argument structure. Given that a search algorithm seems to be required independently in syntactic theory (e.g., in the treatment of Agree and long-distance dependencies), this constitutes an argument for exploring a format for syntactic representations not based on unordered sets. We will provide an alternative formalism for structural descriptions which departs from the set-theoretic foundations of current Minimalist syntax and which not only allows us to define MS, but also has independent empirical advantages.

In Minimalist theorising, Merge / MERGE (in what follows we will use ‘Merge’ for convenience, unless a technical point hinges on the distinction between ‘Merge’ and ‘MERGE’) are instances of an allegedly irreducible operation of unordered set formation over elements in a workspace (

{{the, man}, {{φ, T}, {fall, {the, man}}}}

We contend that there are reasons to prefer an ^{1}

A reviewer suggests that ‘


We will go back to the problem posed by the set-theoretic status of

One way to address the inadequacy of label-less unordered sets to represent argument structure is to allow Merge to output ordered sets.

L(α, β) = <α, β> (if α is atomic)

L(<α, …>, β) = <α, …, β> (if α is a phrase)

This order is important for defining, among other things, hierarchy and semantic interpretation: given a set of lexical items LI =

We want to take this objection to unordered sets a step further: we require of an adequate format for syntactic structures that they not only encode asymmetric relations (such as selection, which could also be encoded in feature-driven Merge), but also do it in such a way that a search sequence can be defined unambiguously over structural descriptions. For this, we need structure building to impose an order over expressions. The idea that syntactic representations are ordered (in ways other than precedence) has a long tradition in generative grammar, beyond set-theoretic commitments: to give but an example, ^{3}

As observed by a reviewer, ‘

In this paper we argue that a shift from set theory to graph theory as the model for structural descriptions in natural language syntax is a desirable way to deliver structures where MS can be defined. Within Minimalism, ^{4}


Alternatively, we can notate that edge as

The goal of this paper is twofold: to define MS as a search algorithm and to characterise a format for syntactic structure where MS can apply and which can be independently justified. In this light, we will argue that (i) structure building must encode selection and argument structure, which can be accomplished with directed graphs, and (ii) this leads to a natural characterisation of MS as a sequential search algorithm (in addition to avoiding or solving problems that arise independently in set-theoretic syntax).

The paper is structured as follows. Section 3 introduces search algorithms. Section 4 defines graphs and compares the properties of sets and graphs as the output of generative operations. Section 5 discusses the implications of Merge-as-unordered-set-formation for MS in the ‘easy cases’ (i.e., {X, YP}). Section 6 discusses aspects of the labelling algorithm related to MS. Section 7 focuses on the ‘hard cases’ for MS: {XP, YP} and {X, Y}.

In computer science, a search algorithm is a sequence of well-defined, implementable instructions that retrieves some information stored in a data structure; in other words, a sequence of steps to locate a memory address and retrieve the information contained in that address (

We can distinguish between sequential, parallel, and random search. In a sequential search, values in a data structure are read one at a time starting from a root: this corresponds to the characterisation of MS in ^{6}

A comment is in order (pun intended): given {

A simple sequential search algorithm can be exemplified as follows: given values V_{1}, …, V_{n} stored under keys K_{1}, …, K_{n}, and a target key K_{x}, the algorithm compares each K_{i} with K_{x}; if K_{i} = K_{x}, it halts and retrieves V_{i}; otherwise, it proceeds to K_{i+1}, terminating unsuccessfully when i = n. For the search to be well defined, keys must be unique: K_{i} ≠ K_{j} for all i ≠ j (1 ≤ i, j ≤ n).
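The procedure just described can be sketched in Python; the function name and the list representation of keys are ours, chosen purely for illustration:

```python
def sequential_search(keys, target):
    """Scan the keys one at a time from the first; return the position of
    the first key matching the target, or None for an unsuccessful search."""
    i = 0
    while i < len(keys):
        if keys[i] == target:   # compare the scanned key with the target
            return i            # match: halt and report the position
        i += 1                  # no match: advance to the next key
    return None                 # end of input: the search fails
```

Retrieving the associated value V_{i} is then a matter of indexing into the values with the position returned.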

Search algorithms have been devised not only for table-ordered datasets (e.g., a phonebook), but also for datasets structured in tree form, where each node in the tree is assigned a uniquely identifying address: this is sometimes known as a Gorn addressing scheme (
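A Gorn-style addressing scheme can be illustrated with a short sketch, under the common convention that the root receives address ‘0’ and the j-th child of a node with address a receives address a.j (the 0-based numbering and the tuple encoding of trees are our choices here):

```python
def gorn_addresses(tree, address="0"):
    """Assign each node of a (label, children) tree a Gorn-style address:
    the root is '0' and the j-th child (0-based here) of a node with
    address a receives the address a.j."""
    label, children = tree
    table = {address: label}
    for j, child in enumerate(children):
        table.update(gorn_addresses(child, f"{address}.{j}"))
    return table

# A immediately dominates B (left) and C (right):
tree = ("A", [("B", []), ("C", [])])
```

Every node is thereby assigned a unique address, which is what makes unambiguous retrieval possible.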

Sequential tree search algorithms, as widely assumed in the Minimalist literature, are broadly divided into two kinds: ^{7}

Recall that in a

A ^{8}

The order in which nodes are checked depends on the type of traversal algorithm chosen, since there are three possible ways to implement it. Take a branching node A immediately dominating nodes B on the left and C on the right as an example:

A preorder traversal visits A first, then B, then C (the root is visited before its subtrees).

An inorder traversal visits B first, then A, then C (left subtree, then root, then right subtree).

A postorder traversal visits B first, then C, then A (the subtrees are visited before the root).
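The three traversal orders can be reproduced with a short sketch over the A-B-C configuration above (the (label, children) tuple encoding is our illustrative choice):

```python
def preorder(tree):
    """Root first, then the subtrees, left to right."""
    label, children = tree
    out = [label]
    for child in children:
        out += preorder(child)
    return out

def inorder(tree):
    """Left subtree, then the root, then the right subtree (binary trees)."""
    label, children = tree
    if not children:
        return [label]
    left, right = children
    return inorder(left) + [label] + inorder(right)

def postorder(tree):
    """The subtrees first, left to right, then the root."""
    label, children = tree
    out = []
    for child in children:
        out += postorder(child)
    return out + [label]

# A immediately dominating B (left) and C (right):
tree = ("A", [("B", []), ("C", [])])
```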

The choice of

A

Suppose that the search starts from the root, and the target value is Y (a terminal node). A

Σ = <R, YP,

The algorithm, at every node visited, compares the element in its input with its target value (a head or a valued feature of a specific kind). If the scanned symbol matches the target value, the algorithm halts; otherwise, it keeps going. This mechanism underlies both depth-first and breadth-first algorithms; the only thing that changes is the order in which nodes are visited. For a breadth-first algorithm (assuming a

Σ = <R, XP, YP,

In this example both algorithms find Y before any other head; however, a depth-first search finds Y after visiting two nodes, whereas a breadth-first search finds Y after visiting three.

As highlighted above, the Minimalist framework in which MS plays a major role is strongly committed to the idea that structural descriptions are

A graph is a pair G = (V, E), where V is a set of vertices (or

The indegree of a vertex v_{x} is the number of edges arriving at v_{x}; the outdegree of v_{x} is the number of edges departing from v_{x}.

the walk between two vertices v_{1} and v_{2} (a v_{1}-v_{2} walk) is an alternating sequence of vertices and edges beginning at v_{1} and ending at v_{2}

Trees are specific kinds of graphs: a tree is a connected graph in which, for any two vertices v_{x} and v_{y}, there is exactly one path connecting v_{x} and v_{y}.^{9}

This is because Minimalism rejects multidominance (


The tree in

In

A set theoretic representation of

{S, {

In (5), A contains {

The notion of

Set-theoretically, the

Minimalism has sometimes equated set-theoretic representations with graph-theoretic ones; for example,

This alleged correspondence is not unproblematic. In the set-theoretic representation, the status of ZP, X’, and XP is unclear: they seem to be used as proxies for subsets, as informal ways to ‘refer to’ sets (Chomsky says they have ‘no status’, but then it is not easy to see why they are used at all).

Chomsky’s fragment suggests that the choice between sets and graphs as the format of linguistic descriptions is merely notational. We want to emphasise that this is not the case: there are relations and operations that we can define in objects like

W_{b, c} = <

In (6) we have an ordered set of edges: we go from ^{11}

As a reviewer points out,

〈SO_{1}, SO_{2}, ..., SO_{n}〉 where for every adjacent pair SO_{i}, SO_{i+1} of objects in the path, SO_{i+1} ∈ SO_{i} (i.e., SO_{i+1} is immediately contained in SO_{i}).
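This definition can be checked mechanically if SOs are modelled as nested (frozen)sets; the following is an illustrative sketch of ours, not part of the Minimalist formalism itself:

```python
def is_path(sos):
    """A sequence <SO_1, ..., SO_n> is a path iff every SO_(i+1) is
    immediately contained in (i.e., a member of) SO_i."""
    return all(nxt in cur for cur, nxt in zip(sos, sos[1:]))

# {{the, man}, {reads, books}} and its members as frozensets:
the_man = frozenset({"the", "man"})
reads_books = frozenset({"reads", "books"})
root = frozenset({the_man, reads_books})
```

Note that the sequence itself must be supplied: nothing in the unordered sets determines which path the search takes.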

We have emphasised that problems arise when trees are used as diagrams to represent set-theoretic Merge. Suppose that we Merge X and Y (for X and Y arbitrary SOs in the workspace), yielding the set {X, Y}. This is diagrammed in Minimalism using a binary-branching tree:

_{1} <_{2} <

A more accurate representation of Merge(X, Y) = {X, Y} than

Here, no new nodes are introduced^{12}

Graphs like that diagrammed in

An alternative is to define that Merge is asymmetric, driven by the satisfaction of selectional requirements (e.g.,

In Minimalist Grammars, for example, X would have a selector feature =F, matched with a categorial feature F in Y. Implemented as in
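How a selector feature =F is matched against a categorial feature F, with Merge outputting a directed arc from selector to selectee, can be sketched as follows (a simplified Stabler-style fragment; the function name and the arc encoding are our illustrative choices):

```python
def merge(x, y):
    """Feature-driven Merge: x's first feature must be a selector '=F'
    matching y's first (categorial) feature 'F'. On success both features
    are checked (deleted) and Merge outputs the arc (x, y): the selector
    points to the selectee."""
    x_name, x_feats = x
    y_name, y_feats = y
    if not (x_feats and y_feats and x_feats[0] == "=" + y_feats[0]):
        raise ValueError("selection failure: no matching =F / F pair")
    return (x_name, y_name), (x_name, x_feats[1:]), (y_name, y_feats[1:])

# Lexical items in the style of (17): read:: =D =D +case V, books:: D -case
read_vb = ("read", ["=D", "=D", "+case", "V"])
books = ("books", ["D", "-case"])
```

The output arc records the asymmetry of selection directly, which is the property the set-theoretic output lacks.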

Note that in defining Merge graph-theoretically we have not lost any of the classical properties of Merge-based syntax: binarity, recursion, and discrete infinity (what

Trees and sets are neither equivalent nor notational variants of each other, and the choice between one and the other as the basis for syntactic theory has far-reaching consequences in terms of the relations and operations that can be defined in each. This is important, as one of our objectives is to evaluate the feasibility of MS as a search algorithm defined over sets formed by Merge/MERGE and over graphs. If MS is as fundamental an operation as current Minimalist theorising makes it out to be (at the core of labelling, long-distance dependencies, anaphoric binding, and Agree), then we may use MS as part of an argument to decide what the best format for structural descriptions is. And if some of the areas where MS becomes crucial are also shared with other theories of syntactic structure, we take this to be an argument in favour of paying more attention to those domains.

The next section presents the properties of the outputs of the generative operation Merge in the context of recent Minimalist works. We must examine these properties and determine to what extent a search algorithm like the one assumed in the works we have cited here can apply to SO generated by Merge.

It is necessary at this point to characterise the generative operation to evaluate the feasibility of defining a search algorithm for its output.

Merge is iterable

Merge is binary (the input of Merge is always a pair of objects)

Merge is commutative (Merge(X, Y) = Merge(Y, X))^{13}

This property of Merge makes MS algorithms such as

(The output of) Merge is unspecified for linear order (

(The output of) Merge is unlabeled

Merge is not triggered (by a head, a feature, etc.)

Merge is never counter-cyclic

Merge is all there is structure building-wise: there is no Move or Copy

Merge

Merge allows us to dispense with traces, indices, and copies

Merge allows us to dispense with the notion of Chain

These properties constitute the background against which the definition and role of MS can be evaluated. The properties of Collins-style Merge that matter for our purposes also hold for the approach in

Workspace (WS) contains X and Y: [_{WS} X, Y]

MERGE(X, Y) = [_{WS} {X, Y} X, Y]^{14}

Remove X, Y from WS = [_{WS} {X, Y}]

The basic properties of Merge are still there. The removal of X and Y from the workspace is intended to restrict the probing space and the number of elements available for further computations.

This version of Minimalism differs quite substantially from previous stages of the theory in terms of the properties of the outputs of the generative operation. In the first incarnation of Minimalism, the generative operation Merge was defined as follows:

Furthermore, because SO are interpreted at the C-I and A-P interfaces differently depending on whether they are verbal, nominal, etc. (

Since then, Chomsky and others have separated labelling from structure building (e.g.

Now we have a characterisation of the objects that, in current Minimalist theorising, MS is supposed to apply to: Chomsky-Collins sets. How does it work and how do we know if the search is successful? This latter point has been addressed explicitly: MS looks for a

For purposes of labelling,

For things to work as Chomsky and others have suggested, it must be possible for LA, given an SO, to determine whether there is a head: this means that

{

as the output of MERGE(^{15}

As observed by a reviewer,

This approach is somewhat confusing^{16}

It may also lead to inconsistencies: for example, {Ø} is a subset of every set, but Ø is not an element of every set.

Let us explore some of its consequences. If MERGE involves an operation of removal or replacement such that a workspace containing WS: [{

Then, MERGE replaces this pair of objects with a set containing them:

MERGE({

Under Chomskyan assumptions, the sets {^{17}

For purposes of Agree by MS, this characterisation is challenged by

Which is exactly what we sketched above: following

If, however, we do

A further issue that impacts on the implementation of MS in labelling is that the identification of heads seems to be completely independent from subcategorisation. This is surprising if External Merge delivers argument structure, as claimed by Chomsky. The precise nature of the relation between structure building and predicate-argument relations in set-theoretic syntax is unclear. Thus, in determining a label for the output of Merge({

(i) transform a relation between two singletons into a relation between a singleton and a set with a greater cardinality by adding more structure, as in Self-Merge proposals (thus multiplying elements in representations)

(ii) remove one of the objects after additional structure has been introduced, as in labelling-driven movement proposals (multiplying steps in derivations)

Both involve complications and departures from Minimalist desiderata (

An alternative is readily available: to define the output of Merge(

Note first that the digraph is not a new object independent of the nodes

Having an address indexing system allows us to refer to expressions unambiguously, regardless of context (i.e., what they are immediately dominated by or immediately dominate): wherever in a structure we find the address

In our view, Merge delivers asymmetric relations between expressions: the order thereby imposed is not linear precedence, but determined by subcategorisation. We want to capture the fact that

read:: =D =D +case V

books:: D -case

=F is a selector feature, which requires Merge with a category F, +F is a licensor (or ‘probe’) feature, and -F is a licensee (or ‘goal’) feature. In this case, ^{18}

There is a potential redundancy between selector-categorial features and arcs, since they both represent selection. Given the choice, arcs should stay, since they also provide a way to define search sequences for the satisfaction of nonlocal features (e.g., [wh], see Section 7).

The core idea is that we require of a successful syntactic theory that it represent argument structure and modification, both empirically motivated relations. The final difficulty that we will consider, and which underlies not only MS for purposes of labelling but also Internal Merge, is precisely that search algorithms are defined for structured data: there must be an order imposed over the data that search algorithms go through. If a search algorithm can be defined such that paths can be compared and shortest paths can be chosen (^{19}

Incidentally, many of these proposals involve computing at least two paths (sets of terms) and comparing their cardinality, choosing the shortest path (

From a set-theoretic perspective,

{b}

{a, b}

{b_{1}, {a, b_{2}}}

Here, b_{1} is Internally Merged to the set {a, b}. According to _{2} inaccessible to further operations because b_{1} is found first. This entails, however, that the system knows (i) that it is looking for an object of category b, and (ii) that b_{1} and b_{2} are copies of each other (or ‘occurrences’ of b; see _{2} and excludes b_{1}, but there is no set that contains b_{1} and excludes b_{2} (so (12c) must be a multiset; see

{reads, books}

{{the, man}, {reads, books}}

Graph-theoretically, the dependency between the lexical predicate and its arguments can be minimally represented as arcs, as can dependencies between members of each SO (leaving aside for the time being ^{20}

We assume in (14) and

Neither (13b) nor (14) (diagrammed in

G = <

The order between edges can be defined in several ways: one possibility is to have the order represent order of composition. In that case, (15) defines a top-down parse (^{21}

Alternatively, as explored in


See

Set-theoretic Merge only defines two relations: containment and co-containment. If (co-)containment alone does not provide enough information to define a sequential search, can we appeal to some other mechanism? In the original, asymmetric version of Merge, the order in the output is given by labelling, such that {X, Y} is either {X, {X, Y}} or {Y, {X, Y}} (<X, Y> or <Y, X> under the Wiener-Kuratowski definition). However, if order is given by labelling, then MS cannot be a pre-condition for LA (since order is itself a pre-condition for MS), nor can MS be the labelling algorithm itself (since MS underpins other operations, such as Agree). If Merge/MERGE creates Chomsky-Collins sets, and a search algorithm is to be defined over those sets, they must be ordered (and MS itself cannot impose that order).

Assuming that labelling is driven by MS results in a problem: before labelling, the result of Merge is an unordered array and MS should apply either randomly (for sequential search) or in parallel. We will come back to labelling in Section 6; before doing that, we need to revise some basic assumptions about the (a)symmetry of Merge.

Suppose we have a tree as in

where Y and Z may be internally complex. We will first explore the case where Y is a head and Z a non-head. Y is an expression assigned to an indexed category in the grammar, and so is Z. What is the indexed category assigned to an expression of the form [Y Z] (which will determine the kind of syntactic rules that can affect that object, the rules of semantic interpretation that will apply, its distributional properties, etc.)? For example, if Y =

An object embedded in VP cannot provide a label because

The idea is intuitive. It also imposes requirements over specific configurations: for example, an implementation of this idea requires us to assume that


In Multiple Spell-Out (MSO), a complex specifier or an adjunct is derived in parallel, and once introduced in the main structure its internal dynamics are inaccessible. In the ‘radical’ version of MSO (

Labelling in BPS was encoded as part of Merge itself, with some later developments requiring Merge to be triggered by featural requirements (e.g.,

In any case, the situation first explored by Chomsky (namely, {H, XP}) seems straightforward: if an SO contains a head and a non-head, the labelling algorithm, which works by MS, finds that head and labels the SO. However, formalising that is not a trivial task. The first question is how the search would take place; in other words, how to define each step that makes up the algorithm. Above, when defining a search algorithm, we specified that it needs some way to compare inputs with the target of the search (the case of MS applied to Agree, where probes search for matched features that need valuation, is considered in

Given a sequence of syntactic objects SO_{1}, SO_{2}, …SO_{n}

Step 1: Initialise. Set i = 1.

Step 2: Compare. If SO_{i} is a head, terminate.

Step 3: Advance. Increase i by 1; if i ≤ n, go to Step 2.

Step 4: End of input – terminate.
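The Steps above translate directly into code; note that the predicate deciding headedness must be supplied from outside the search itself (the function name and sequence encoding are ours):

```python
def find_head(sos, is_head):
    """The Steps above as code: scan SO_1 ... SO_n in order and return the
    first SO that the externally supplied predicate classifies as a head.
    Headedness is decided by the predicate, not by the search itself."""
    i = 0                          # Step 1: Initialise
    while i < len(sos):
        if is_head(sos[i]):        # Step 2: Compare
            return sos[i]
        i += 1                     # Step 3: Advance
    return None                    # Step 4: End of input

# Heads as bare strings, phrases as tuples (an illustrative convention only):
sequence = [("X", "WP"), "X", "WP"]
```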

A problem with the application of this algorithm to Minimalist syntactic structures (in addition to the issue, noted above, that if Merge yields unordered sets it is not possible to arrange terms in a unique sequence) is that the system must somehow know how to determine, given an SO, whether it is a head or not: an object cannot be identified as a head by the search procedure (as suggested by a reviewer) because the algorithm needs to know what it is looking for in advance (e.g., ^{24}

Minimalist Grammars (e.g.,

Let us preface this section by pointing out that the ‘problem of labelling’ arises only if (i) the derivation proceeds bottom-up, step-by-step

Consider now what a label-less object commits us to, for purposes of MS. In set-theoretic terms, all objects are either sets or members of sets (notation like {_{α} X, Y} has no formal status in set theory). Thus, if we have Merge(X, Y) = {Z, {X, Y}}, with Z the ‘label’ of {X, Y} (

In graph-theoretic terms, in contrast, what we would call a ‘label’ is the root of a graph, at each derivational step. In other words: the label of a SO (a set of nodes and edges) is the node that is the root of the local graph which defines that SO. Thus, if an operation takes that SO as part of its input, the structural description of that operation will refer to the root of the SO, not to every node properly contained in it. In set-theoretic terms, there must be a way to refer to sets in a more abstract way; a ‘variable over sets’ as it were, such that we can formulate structure mapping rules without mentioning specific sets.

In what pertains to MS, it is unclear how approaches with ^{25}

In some Minimalist works (e.g.,

Neither object is a head (both

At this point, we need to refer to the set extensionally (since there is no root address). Here we can clearly distinguish between a graph-theoretic and a set-theoretic approach: set-theoretically, there is no root accessible to the Narrow Syntax in ^{26}

As in

This process involves backtracking: the object {DP, ^{27}

Exocentricity seems to us to be clearly distinct from the lack of a label. A phrase structure tree defined by the phrase structure rule S → NP VP is exocentric, but labelled.

), and counter-cyclic operations to label them is a source of formal difficulties that did not exist in previous stages of Minimalist theorising and which does not seem to be required to provide empirical analyses that would be otherwise impossible. In the literature, the fact that DP movement in structures like ^{28}

We leave aside, for reasons of space, a discussion of the procedure that labels the root of the structure in

Above we introduced the concept of search algorithms for trees, in two variants: breadth-first and depth-first. We also reviewed the difficulties of defining a search for unordered sets. However, what if Minimalist trees were taken as more than just graphical aids, notational variants of sets? The alternative, as emphasised throughout this paper, is to consider them not sets, but graphs (^{29}

More than one cyclic model of syntactic computation can implement MS in these terms, delivering different results. A Multiple Spell-Out model (

Following Minimalist proposals, once a head is found the search terminates. This can be generalised: when the algorithm finds an object in the input that matches its target, the search halts (

In Section 4 we introduced the notions of

The root is a node with indegree 0.

A leaf is a node with outdegree 0.

Intermediate nodes are nodes with indegree non-zero and outdegree non-zero

These definitions are weakly equivalent to saying that a leaf in a tree is a head, and require nothing other than the tools already available to us to characterise the format of syntactic structures as graphs and the concept of tree traversals (
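These graph-theoretic definitions can be computed directly from a set of arcs; the following sketch (the digraph encoding is ours) classifies nodes by indegree and outdegree:

```python
def classify(edges):
    """Classify the nodes of a digraph (a set of (source, target) arcs)
    by indegree and outdegree: roots (indegree 0), leaves (outdegree 0),
    and intermediate nodes (both non-zero)."""
    nodes = {n for edge in edges for n in edge}
    indeg = {n: sum(1 for s, t in edges if t == n) for n in nodes}
    outdeg = {n: sum(1 for s, t in edges if s == n) for n in nodes}
    roots = {n for n in nodes if indeg[n] == 0}
    leaves = {n for n in nodes if outdeg[n] == 0}   # the heads, on this view
    intermediate = {n for n in nodes if indeg[n] > 0 and outdeg[n] > 0}
    return roots, leaves, intermediate

# An illustrative VP digraph: ● → buy, ● → NP, NP → the, NP → book
edges = {("●", "buy"), ("●", "NP"), ("NP", "the"), ("NP", "book")}
```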

the search sequence from the root until the first node with outdegree 0 would be Σ = <●, X> (i.e., Σ = <●, buy>), both under depth-first and breadth-first algorithms, assuming a preorder. By Chomsky’s LA, this identifies ● as XP (VP in (b)).

The first question to address is whether objects of the type {XP, YP} are indeed ambiguous for MS. To this end, we need to examine how the search algorithms defined before would work. Suppose that XP and YP both contain a head and a phrase:

XP = {X, WP} (e.g., {D, NP})

YP = {Y, ZP} (e.g., {

And we Merge XP and YP to create {XP, YP}, represented in tree form in

Suppose that the label of the new node created by this merger is to be determined by MS. The theory in

Consider now what happens if the object in

Σ = <●, XP, X>

And a sequential breadth-first search would follow Σ’:

Σ’ = <●, XP, YP, X>

In neither case is there an ambiguity: when the algorithm finds an object that matches the target (here, a head X), the search stops. An advantage of the present approach is that it is not necessary to compare the distance between the root and X vs. the root and Y (cf. e.g.
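The two search sequences in (19) can be reproduced computationally. In this sketch (our encoding; heads are identified by an externally supplied predicate, as discussed above), both algorithms halt at X, with no ambiguity:

```python
from collections import deque

def depth_first(tree, is_target):
    """Preorder depth-first search over (label, children) trees; returns
    the sequence of visited nodes, halting at the first matching node."""
    label, children = tree
    visited = [label]
    if is_target(label):
        return visited, True
    for child in children:
        sub, found = depth_first(child, is_target)
        visited += sub
        if found:
            return visited, True
    return visited, False

def breadth_first(tree, is_target):
    """Breadth-first search with a left-to-right queue; halts on first match."""
    queue, visited = deque([tree]), []
    while queue:
        label, children = queue.popleft()
        visited.append(label)
        if is_target(label):
            return visited, True
        queue.extend(children)
    return visited, False

# (19): ● immediately dominates XP = {X, WP} and YP = {Y, ZP}; X, Y are heads.
tree = ("●", [("XP", [("X", []), ("WP", [])]),
              ("YP", [("Y", []), ("ZP", [])])])
heads = {"X", "Y"}
```

Depth-first yields Σ = <●, XP, X> and breadth-first yields Σ’ = <●, XP, YP, X>, exactly as in (19a–b).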

In

Finally, if a situation where two non-terminals are in a relation of sisterhood were ambiguous for MS, and there were no external controller, the search procedure would simply halt with no output, regardless of the trigger of the search. But it does not, in formal characterisations of sequential search algorithms. If MS does not simply halt, then we must assume that there is either a bias in the algorithm that determines which branch is to be looked at first or some external factor that dictates which branch is to be probed first. We may define that the search follows a

So far as we can see, essentially the same argument holds for {X, Y} situations. However, some additional considerations must be made. An {X, Y} situation emerges, according to ^{30}

It may be worth noting that in the Minimalist literature cited in this paper the notation for a head is always H, not {H}. That is: a configuration that contains a head and a complex object is notated {H, XP} (where XP is possibly a proxy for a set), and not {{H}, XP}. This suggests that indeed the sequence

I

John

Under BPS assumptions,

{love, her}

Only the arc (21b) allows us to define an ordered sequence without further stipulations or intermediate nodes, while at the same time representing predicate-argument relations. The treatment of {X, Y} situations in the framework of

In addition to Agree and labelling, MS has also been invoked in the analyses of long-distance relations. For example, the interpretation of a filler-gap dependency may be construed as a search^{31}

This is not just the case in transformational analyses. For example, LFG’s treatment of long-distance dependencies, using functional uncertainty, defines a dependency path between filler and gap as a regular expression (

Consider the following intermediate structural description of a

[C [[Mary] [T [[Mary] [

If C searches for a SO to Internally Merge at the root, it is necessary to specify that in addition to being an NP, the target of the search must bear a(n unchecked) ^{32}

If an adequately restrictive meta-theory of features is devised (as of yet a missing part of the Minimalist theory; see

_{wh}, what

_{wh}>

What to do after a target has been found is a different matter from the search itself: in Minimalist terms, the target is in some sense ‘copied’ and re-Merged at the root, so that we obtain a SO with two ‘tokens’ of

How can a graph-theoretic definition of Merge help us solve these problems? We showed above that in binary-branching graph-theoretic trees, search ambiguities do not arise, but even then complications related to copies and repetitions remain. Simply replacing tree diagrams with graph-theoretic trees does not suffice. Recall that our proposal departs from trees as the format of structural descriptions: in our analysis, the syntactic workspace is a lattice of uniquely indexed basic expressions (

The analysis of filler-gap dependencies allows us to emphasise the usefulness of the unique indexing system in simplifying structural descriptions. For example, following standard Minimalist analyses, in

What did Mary read 𝕨𝕙𝕒𝕥?

both

The derivation of (23), under present assumptions, involves the following steps (for concreteness, we follow

Merge(read, what) = _{-wh} > -------

Merge(_{-wh}>>

Merge(_{-wh}>>^{33}

At this point, having saturated the valency of the predicate, we can put the order between arcs in correspondence, e.g., with the grammatical function hierarchy (see fn. 21), and read off grammatical functions from the ordered set of arcs:

Merge(T, _{-wh}>>

Merge(Mary, T) = <_{-wh}>> -------

Merge(C_{wh}, T) = <_{+wh}, T>, _{-wh}>>^{34}

Note that

At this point, we can define a search sequence triggered by C_{+wh}, where C as a probe (or its +wh feature) searches for a suitable goal to check its

Σ = <C_{+wh}, T, _{-wh}>

If expressions are assigned uniquely identifying addresses, there is no need (and indeed no way) to ‘copy’

Merge(C, what) = <_{+wh}, what_{-wh}>, _{-wh}>> -------

Under set-theoretic Minimalist assumptions,

Consider now a structure like (26),

John INFL {_{2} Bill_{1}, {_{1} V, {Bill_{2} to leave}}}} (taken from

where V is an ECM/object raising verb (e.g., expect), and _{1}_{2}_{2}_{1}_{2}_{1}_{2}

G_{1} = <

G_{2} =

The two local domains defined by the lexical predicates contain identically indexed nodes: both lexical predicates dominate a node with the same address. The composition of G_{1} and G_{2} (triggered, assume, by Merge applied to the root of each) delivers a new graph, G_{3}. We may ask how many such nodes there are in G_{3}: graph theory allows us to define the composition of local domains as graph union, where G_{1} ∪ G_{2} = (V_{1} ∪ V_{2}, E_{1} ∪ E_{2}). Recall that nodes are uniquely indexed: as part of graph union, nodes that are assigned the same address are collapsed to one (node identification). The union of G_{1} and G_{2} thus yields G_{3}.
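Graph union with identification of identically addressed nodes comes for free if V and E are modelled as sets, since set union collapses duplicates. A minimal sketch (the ECM fragment is simplified and the encoding is ours):

```python
def graph_union(g1, g2):
    """Graph union: G1 ∪ G2 = (V1 ∪ V2, E1 ∪ E2). Because vertices are
    uniquely addressed, identically addressed vertices are identified
    (collapsed to one) automatically by set union."""
    (v1, e1), (v2, e2) = g1, g2
    return (v1 | v2, e1 | e2)

# Two local domains sharing the address Bill (a simplified ECM fragment):
g1 = ({"expect", "Bill"}, {("expect", "Bill")})
g2 = ({"leave", "Bill"}, {("leave", "Bill")})
```

In the union there is a single node for Bill, with one incoming arc from each predicate: no copies, occurrences, or chains need to be postulated.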

Contrary to set-theoretic Minimalism, we do not start with one ‘occurrence’ of

If each expression is uniquely indexed (by the set of addresses), nodes corresponding to expressions with distinct indices (i.e., distinct addresses and thus distinct semantic values) will

The aim of this paper was to explore MS as a search algorithm and its interaction with structure building operations. We contended that the set-theoretic commitments of Minimalism conspire against a definition of sequential search algorithms over the output of Merge^{35}

Merge(X, Y) = {X, Y}

with

Merge(X, Y) =

where X takes Y as an argument (for X and Y uniquely indexed expressions in the workspace). In this view, structural descriptions are ordered sets of arcs. The historically peripheral role that graph theory has played in the development of generative grammar with respect to operations over strings and sets (despite early work such as ^{36}

The MS procedure argued for here has the same properties as structure building: asymmetry, recursion, and sequentiality. So far as we can see, Ke’s arguments from parallel search in vision do not obviously apply to a syntactic algorithm in stepwise structure building.

If graph-theoretic Merge is adopted, the phenomena that are supposedly characterised in terms of set-theoretic ambiguities and their resolution need to be defined in different terms. Incidentally, the same argument form is used in current Minimalism against so-called ‘extensions of Merge’ such as Multidominance or Sidewards Movement. Interestingly, BPS trees can be mapped to irreducible graphs without intermediate symbols by means of an operation made available by the formalism: edge contraction. In graph theory,

The mapping from
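The edge-contraction operation just mentioned can be sketched as follows (our encoding; contracting the edge between a BPS label node and its head identifies the two, yielding the irreducible arc):

```python
def contract(edges, u, v):
    """Edge contraction: delete the edge between u and v and identify v
    with u, redirecting every remaining edge incident to v onto u."""
    contracted = set()
    for s, t in edges:
        if {s, t} == {u, v}:
            continue                      # the contracted edge disappears
        contracted.add((u if s == v else s, u if t == v else t))
    return contracted

# A BPS-style fragment with an intermediate label node:
# VP → read, VP → books; contracting VP–read identifies VP with read.
edges = {("VP", "read"), ("VP", "books")}
```

Contracting the label-to-head edge leaves only the arc from the head to its argument, i.e., the format argued for here.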

Finally, we want to briefly mention some issues that in our opinion set the agenda for future research. A crucial one pertains to

Another salient question is whether there is a way of determining empirically if MS is best modelled as a

Again, the literature has explored both options.

Finally, we want to emphasise that the graph-theoretic view of structure building proposed here has important advantages over the set-theoretic one. Considering

(i)

(ii)

(iii)

(iv)

I would like to thank Kleanthes Grohmann for his patience and editorial work, and two anonymous

The author has no funding to report.

The author has declared that no competing interests exist.