
What is a negative polarity item?*
Ahti-Veikko Pietarinen
April 6, 2003
Abstract
Traditional approaches to polarity phenomena presuppose that the linguistic environment of a negative
polarity item licenses the item, and that more often than not, the environment contains negation or
some negative expression or implicature. Such theories aim at achieving licensing conditions by
generalising empirically from data, which is a method undermined by too many counterexamples. In
this paper, a different approach is proposed, which is based on game-theoretic semantics and game
rules for polarity items. The NPI-thesis is formulated, which says that the grammaticality condition of
polarity sentences turns on a meaning-comparison between sentences containing negative polarity
items and sentences with suitably defined contrast terms. Consequently, it becomes possible to account
descriptively for a wide range of polarity phenomena and to produce exact licensing conditions for polarity items. This theory has significant repercussions for linguistic methodology, as grammaticality becomes semantically constrained.
Key words: Negative polarity items, game-theoretic semantics, NPI licensing, NPI-thesis.
1. Introduction
The characteristic feature of the class of expressions known as negative polarity items (NPIs)
has been claimed to be the negative polarity property: a negative or affective construction in
the environment, usually a morphologically explicit negation, negative adverb, negative
adjective, implicature, or some other ‘abrogate’ term. It has been claimed that the lack of such a property runs the risk of making sentences ungrammatical, incorrect or ill formed. NPIs are said to be ‘sensitive’ to the material that gives rise to the negative polarity property. This points to the central problem with NPIs: spelling out the precise conditions under which NPIs become permitted has turned out to be elusive, because of the apparently diversified and heterogeneous behaviour of such cross-categorial items in a variety of contexts. This loose category of linguistic items, or any reasonable subset of it, seems to resist any proposed licensing conditions.
Typical examples of NPIs include any, ever, at all, in the least, yet, the slightest and budge (an inch), just to mention a few out of many hundreds. For example:

(1) John hasn’t arrived yet.

* This paper was written in spring 2000. The work has been supported by the Osk. Huttunen Foundation.
It is obvious that licensing is not restricted to the negative polarity property:
(2) Mary can solve any problem.
Accordingly, modal expressions may authorise them. In fact, one can find a number of
‘licensers’, such as interrogatives, conditionals, hypotheticals, comparatives, imperatives,
directives, habituals, grading particles, equatives, adversatives, temporal conjunctions,
restrictors of universal quantifiers, and several others.
In this paper, I will propose necessary and sufficient conditions for NPI licensing by
using the resources of game-theoretic semantics (GTS) [20,21,22,23,51]. Let it be said that
such licensing conditions are nevertheless not entirely new. They have prevailed in the
literature in the guise of Hintikka’s any-thesis [19], originally devised to provide
grammaticality conditions for indefinite morphemes of any and even. The goal of this paper is
thus to revitalise and generalise the any-thesis, so that it can also be made to apply to other NPIs in addition to the polarity-sensitive any. What is seen to emerge is a new theory of the
licensing of a set of negative polarity items that covers a reasonably large fragment of
English. To do this, a number of new game rules are formulated. Together with ordering
principles, these rules answer the so-called ‘status question’ [38]: What is the theoretical status of a structure containing an unlicensed polarity item? Are such strings syntactically well formed but uninterpretable, or do they have a well-defined interpretation that renders them pragmatically available? The NPI-thesis invites one to compare ill-formed strings and grammatical sentences for synonymity. However, such ill-formedness is a consequence of game-theoretic evaluation principles, which themselves may turn on pragmatic considerations. A spin-off is the question of the priority between semantics and the semantics/pragmatics interface in determining grammaticality.
This paper proceeds by first presenting and evaluating some received theories of NPIs
and their licensing. A fragment of GTS for a selection of English polarity items is outlined,
the NPI-thesis presented, and supporting data put forward. Questions concerning the logical behaviour of polarity items with regard to contraposition will also be discussed.
2. Some previous theories
One of the most influential accounts of NPI phenomena has been given by Ladusaw [37], who built on earlier work on scalar predication [9,10,27]. Ladusaw suggested a semantic theory that appeals to downward entailment (monotone decreasingness). A licensing expression is downward entailing precisely when it sanctions inferences from sets to subsets, and NPIs are taken to be licensed in downward entailing environments. A typical example of a downward entailing item is an explicit negation: from “Someone did not watch TV” one may infer that “Someone did not watch Channel 5,” for example. 1
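Schematically (the formulation is mine, following the standard definition rather than anything stated explicitly here): an expression f is downward entailing if and only if, for all A and B,

A ⊆ B implies f(B) ⊨ f(A).

Negation instantiates this pattern, since watching Channel 5 is a special case of watching TV.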
Although downward entailingness agrees with the plausible observation that NPIs typically aim at strengthening negative contexts, and although negative operators alone do not exhaust NPI behaviour, numerous counterexamples remain a problem. There are items that trigger NPIs even though they are not downward entailing, such as the adversative surprised:
(3) I was surprised that he budged an inch. 2
On the other hand, although Ladusaw’s original proposal was devised to give necessary conditions for NPI licensing, one can find downward entailing items that do not license NPIs, as well as non-downward entailing items that do license NPIs. The former case occurs in simple conditional clauses:
(4) ?If you eat any vegetables, you’ll be fine. 3

Footnote 1: The role and significance of this property is exaggerated in linguistic theories, especially if they seek explanations. For instance, it is not obvious how to define downward entailment for interrogative, intensional or imperative contexts.
Footnote 2: Two responses are possible. One is that the presupposition of (3), namely "I did not expect that he would budge", is what does the licensing. The other is that a rational person who is surprised at some fact p is likely to be also surprised that p and q, where q bears some (typically causal) relation to p, that is, the conjunction is resultative.
Footnote 3: According to some informants, this sentence is not marked, especially when of the is inserted between any and vegetables. Similar sentences were discussed in [XX], where it was remarked that in minimal pairs such as (i) and (ii), the latter is bizarre because it rejects the presupposition that the conditional should remain true when the bare existential polarity item is replaced by a stronger existential item. Since the presupposition is that one will like the soup no matter how much pepper is put into it, the sentence becomes absurd.
(i) If you put any pepper in this soup, you won’t like it.
(ii) *If you put any pepper in this soup, you will like it.
It may be possible to make the implicatures in (i) and (ii) those of logical implication, in which case Ladusaw’s downward-entailing analysis may be able to explain these examples.
An example of the latter is given by:
(5) Only those who have ever eaten vegetables know their taste. 4
Similar counterexamples have been proposed by Linebarger [41,42,43]. She argues that
Ladusaw’s theory of downward entailment is in some respects too strict and results in
incorrect predictions, while too permissive in others. Her own account comes in two phases.
First, there needs to be a condition according to which there will be a direct accreditation by
means of a governing negation. Second, an additional immediate scope constraint for such
government needs to be imposed. Any licensed NPI must, according to her, remain within the
immediate scope of negation, which means that no operator may intervene between the negation and the NPI, for otherwise the presence of such ‘harmful’ interveners may render the sentence unacceptable. 5 This immediate scope constraint is assumed to be a sufficient condition for
NPI licensing.
Furthermore, she presents another sufficient condition for residual cases in terms of
negative implicature: since there are cases where an explicit negative construction is not
available, yet such sentences seem acceptable, there must be some implicature of negative
kind conveyed by the speaker, possibly by way of weak negative cues elsewhere in the
sentence. An example is provided by only, which in (5) is taken to convey the message:
(6)
Anyone who hasn’t eaten vegetables cannot know their taste.
Such implicit or tacit licensing by negative implicature constituted an important step towards more general licensing conditions, since it put grammatical considerations in a new, semantic and pragmatic light.
However, Linebarger’s theory is not free from counterexamples, either. It does not explicitly define what makes something count as a permissible negative implicature. One would need conditions also for negative implicatures, for otherwise they may license something ungrammatical:
(7) Exactly one person at the meeting budged an inch to dismiss the proposal.

Footnote 4: Sentences such as (i) "Only A are B" may be interpreted as (ii) "If not A then not B", assuming that the A are in fact B. But can one then interpret e.g. "??Only those with any money need apply"? Alternatively, the meaning of (i) could be "There are no more Bs than there are As", which amounts to a higher-order branching quantifier representation.
Footnote 5: Giannakidou [12] tackles the issue of which interveners count as harmful and which as non-harmful.
Without further reason to choose one negative implicature over another, one oscillates
between the two readings: 6
(8) Not more than one person at the meeting budged an inch to dismiss the proposal.
(9) Not less than one person at the meeting budged an inch to dismiss the proposal.

Footnote 6: These readings are predicted by the generalised-quantifier analysis of the quantifier exactly.
Thus, licensing by means of negative implicature remains a mystery, unless it is agreed that an implicature can remain underspecified and still count as a licenser, which disposes of much of the explanatory virtue of the theory. One lacks an insight as to why many negative implicatures do not license NPIs, and why some contexts that have no negative implicature of any kind, let alone negation, such as the interrogative in "Do you know the answer yet?", do seem to license NPIs.
Quite apart from Ladusaw and Linebarger, Zwarts [57] takes a lexical view of NPIs.
He purports to distinguish between different types of NPIs in terms of the properties of the
negative expression and the linguistic environment within which they can be found. He sets
apart three conditions for three different cases, distinguished from each other in terms of the
type of negation and the properties of linguistic environment. Only one of these conditions
appeals to downward entailment, and hence the theory seeks to circumvent at least those
counterexamples problematic for Ladusaw. Zwarts distinguishes between three types of
negation: sub-minimal (only a few N, not all N, at most), minimal (none of the N, neither N,
no one, not a single, not a), and classical negation (none of the N, no N, or a negative adverb
not as in don’t). These negations act as licensing triggers for weak, strong and superstrong NPIs, respectively. Examples of weak NPIs include can abide and sleep a wink, and strong NPIs include a thing and lift a finger. An example of a superstrong NPI is one bit.
These three types of negative expressions are distinguished from each other by their logical behaviour, characterised by conditions imposed on the functional behaviour of the underlying hierarchy. The functional behaviour is argued to provide licensing conditions for these three classes of NPIs: the first is a downward entailing environment reflecting Ladusaw’s proposal, the second covers anti-additive expressions, and the third covers anti-morphic expressions, corresponding to classical negation. Formal characterisations of these notions can be found in [57]. Since only the first condition appeals to the property of downward entailment, the lexical theory seeks to circumvent at least those well-known counterexamples problematic for previous theories of NPI licensing, including [37].
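For orientation, the algebraic characterisations in question run roughly as follows (my paraphrase; [57] has the precise formulations). With downward entailment as defined above, an expression f is anti-additive if and only if

f(A ∪ B) ≡ f(A) ∧ f(B),

and anti-morphic if and only if it is anti-additive and, in addition,

f(A ∩ B) ≡ f(A) ∨ f(B).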
All three conditions are sufficient, and so there can be expressions that are members
of none of these three types (or do not contain any of the three negation types), but which
nonetheless license some NPIs. This lexical theory also claims that the three licensing
conditions are downwards applicable in the sense that they hold for NPIs that are members of
a class with a weaker condition. That is, anti-morphic environment (classical negation) should
license, in addition to superstrong NPIs, also strong NPIs, and anti-additive environment
(minimal negation) should license, in addition to strong NPIs, also weak NPIs. This falls out
from the algebraic definitions of these negations.
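Indeed, the downward applicability can be verified in one step from the definitions given above (the verification is mine): anti-morphic expressions are anti-additive by definition, and every anti-additive f is downward entailing, since A ⊆ B gives A ∪ B = B, whence f(B) ≡ f(A ∪ B) ≡ f(A) ∧ f(B), which entails f(A).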
The rules are not upward applicable, however, and this in fact raises the question of whether there are violations of these rules. It turns out that the following sentences can indeed be problematic for this theory:
(10) At most two people lifted a finger to help.

This sentence has the sub-minimal negation at most two creating a downward entailing context while licensing the strong NPI lifted a finger. Likewise, in (11) the anti-additive minimal negation not a single licenses the superstrong NPI one bit.

(11) Not a single guest liked the performance one bit.
If the licensing expression none of the N agrees with both the minimal and classical
definitions of negation, as it does according to Zwarts’ theory, upwards applicable licensing
should not be ruled out. Analogously with (11), then, surely the minimal (and classical)
negation none of the N should be able to trigger superstrong NPIs:
(12) None of the guests liked the performance one bit.
Krifka [34,35,36] refines the notion of downward entailment so that it applies only in
environments of the same ‘sort’ as the original NPI contexts. For each NPI, these ‘sorts’
constitute a property lattice in which an NPI is the least element of the lattice (quantitatively it can be as little as ε) and every other element covers it. Dually, PPIs (positive polarity items, such as some) constitute a lattice where the PPI is the greatest element in the lattice denoting the property, covering other properties of the same sort. 7

Footnote 7: Ladusaw [38] speaks of minimum and maximum items, but there are no such elements in a lattice.

One can view Krifka’s proposal as a certain generic model subsuming scalar-based theories of NPI licensing. Israel [30,31] suggests that although it might be disputable whether
NPIs show different sorts of sensitivities, sensitivity reflects the underlying unifying
phenomena of polarity items as scalar operators (cf. the original proposal in [9,10]). Their
scalar nature means that NPIs refer to some quantificational notion such as amount, degree, or
intensity. The scalar model, based on items acting as scalar operators, comprises an ordered
set of elements preserving certain (often pragmatically constrained) inferences among
propositions. Propositions are associated with alternative scales in a model, and NPIs
(typically) denote the least element in the model, because they tend to relate to superlatives
and similar minimal quantitative or informational values. He maintains, however, that all NPIs are scalar operators, even though aspectual adverbs such as the PPI already or the NPI yet do not endorse quantificational or informational scaling. Accordingly, Israel evaluates them on inceptive scales, while evaluating aspectual NPIs such as anymore on continuative scales.
There is little reason to cast doubt on the scalar theory of NPIs, which seems to
capture at least one fundamental property. But the question of the explanatory value of the
scaling technique devoid of theoretical backing remains. In particular, it does not answer to
the question of how the model would differentiate NPIs from other items also behaving as
scalar operators.
Progovac [45,46] has advanced a syntactic binding theory where NPIs are taken to
behave anaphorically, bound to their governing category of negation, negative operator or
conditional. According to her, there is a parallel between anaphora and polarity licensing,
suggesting a reduction of polarity sensitivity to that of anaphoric sensitivity. The proposal is
formulated in the spirit of binding theory, and so is calculated to cope with the absence of any explicit binders, in which case one posits an empty polarity operator. Whether there are
such dummy operators in language is a moot point. A likely rejoinder is that items themselves
may signal tacit operators, which nevertheless leaves something to be desired from the
proposed theory.
In the wake of these diverse opinions, Giannakidou [11] defends the claim that the
licensing conditions have to depend on semantic properties of the linguistic environment.
Since data can be found which are not explained by downward entailing contexts, and since
many of the semantic properties of linguistic environment are vaguely identifiable, the answer
to the licensor question that she proposes is the property of veridicality and nonveridicality of
the linguistic environment. The licensing of NPIs results from semantic dependency of the
items on nonveridical contexts. The negative or downward entailing contexts become special
cases of nonveridical contexts.
The semantic dependency is given by a relation R that must hold between the ‘dependant’ expression α and the ‘dependee’ β. A negated relation would also count, whereby α depends semantically on β if ¬(α R β) (‘antilicensing’). Nonveridicality is a property of a context operator O: applied to α, the truth of Oα leaves the truth of α contingent; in other words, Oα → α is not a logically valid rule. 8 If Oα → ¬α is logically valid, then the operator O is antiveridical. For example, classical negation is a typical antiveridical operator: “John didn’t win the game” entails “It is not the case that John won the game,” that is, the truth of the negated sentence entails the falsity of the clause embedded in the sentence, which is subordinate to the negation.
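In schematic terms (my summary, for reference only): an operator O is veridical if Oα ⊨ α, nonveridical if Oα ⊭ α, and antiveridical if Oα ⊨ ¬α, so that antiveridical operators form a special case of nonveridical ones.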
These notions may be slightly problematic since no logic or calculus is presented
where the notion of logical validity or non-validity could be applied, and so the notion of
nonveridicality has to be taken cum grano salis. In any case, Giannakidou succeeds in
covering a wide range of data, in unifying earlier treatments, and in yielding largely correct
predictions. For example, she argues that NPIs are a proper subclass of affective polarity
items (APIs). Whereas APIs are licensed in nonveridical environments, antiveridicality
suffices for NPIs (but in intensional contexts it has to cope with a possible-worlds notion of truth for propositional attitudes). In a wider perspective, the general case is that syntactic aspects
follow semantic sensitivity features of APIs themselves. One should also note some
similarities with Zwarts’ classification of negation. Accordingly, the earlier remarks and
counterexamples I made concerning his theory carry over to Giannakidou’s theory.
There are other problems, such as long-distance licensing, where the problem is to find theories that would explain how one negation operator, for example, can affect two or more items elsewhere in the sentence, sometimes even across sentence boundaries. Furthermore, how can negation affect items despite the existence of some interveners, as happens with the intensional predicate want to in “I don’t want you to say anything” (see [13])? 9

Footnote 8: In case of O being a belief operator, it is clear that Oα should not entail α, for instance.
Footnote 9: Obviously, long-distance licensing poses problems for Linebarger’s immediateness constraint. More theories and discussion on NPIs and their licensing can be found in [1–5,8,16,24–26,28,29,32,33,39,40,48,49,52,55,56].
3. Game-theoretic semantics for polarity items
1. Basic ideas
Any sentence of English defines a game between two players, the verifier and the falsifier, the
former aiming to show that the sentence is true and the latter aiming to show that the sentence
is false. The game rules for the familiar quantificational expressions such as some, every,
a(n), and any prompt a player to choose an individual from the relevant domain (choice set), giving it a name, and the game continues with respect to an output sentence defined by the game rules.
game rules. Analogously to the game-theoretic semantics for formal languages, the game
terminates when such components (atomic formulas) are reached where further applications
of game rules are not allowed. For instance:
(G.every) If the game has reached the sentence of the form
X – every Y who Z – W,
the falsifier chooses an individual, say b, from the choice set I, giving it a name. The
game is then continued with respect to the sentence
X – b – W, b is a Y, and b Z.
The game rule (G.any) is the same as (G.every) except that every is replaced by any.
See [20] for further conditions that need to be imposed on the rules for connectives in certain cases. In general it is presupposed that any behaves like a universal quantifier, and that
universal readings of the indefinite a(n) are ignored. These presuppositions are not
unproblematic, since any has been argued to have existential manifestations [6,9,10,32,37,47].
(G.neg) If the game has reached a sentence of the form neg(X), the players exchange
roles (also the winning conventions will change), and the game continues with respect
to X.
The operation neg(X) is a sentential negation-forming functor.
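To fix ideas, the way such rules drive an evaluation can be schematised over a finite choice set. The following toy rendering is mine and is not part of the formal theory; it abstracts away from naming, strategies, ordering principles and the modal rules, and treats a sentence simply as a nested structure of quantifiers, negation and atomic tests.

from typing import Callable, List, Union

class Atom:                        # an atomic test: no further rules apply
    def __init__(self, holds: bool):
        self.holds = holds

class Every:                       # (G.every): the current falsifier chooses
    def __init__(self, body: Callable[[str], "Sentence"]):
        self.body = body

class Some:                        # (G.some): the current verifier chooses
    def __init__(self, body: Callable[[str], "Sentence"]):
        self.body = body

class Neg:                         # (G.neg): the players exchange roles
    def __init__(self, sub: "Sentence"):
        self.sub = sub

Sentence = Union[Atom, Every, Some, Neg]

def original_verifier_wins(s: Sentence, choice_set: List[str],
                           is_verifier: bool = True) -> bool:
    # is_verifier records whether the original verifier currently occupies
    # the verifier's role; it flips whenever (G.neg) is applied.
    if isinstance(s, Atom):
        return s.holds if is_verifier else not s.holds
    if isinstance(s, Neg):
        return original_verifier_wins(s.sub, choice_set, not is_verifier)
    outcomes = [original_verifier_wins(s.body(b), choice_set, is_verifier)
                for b in choice_set]
    if isinstance(s, Every):       # the current falsifier picks the individual
        return all(outcomes) if is_verifier else any(outcomes)
    return any(outcomes) if is_verifier else all(outcomes)   # Some

# "Mary has not solved every problem" in a toy model with two problems:
solved = {"p1": True, "p2": False}
sentence = Neg(Every(lambda b: Atom(solved[b])))
print(original_verifier_wins(sentence, ["p1", "p2"]))        # prints True

The original verifier has a winning strategy exactly when the sentence is true in the model, which is the game-theoretic notion of truth appealed to above.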
In addition to game rules for quantifiers, negation and other connectives, such rules
can also be defined for lexical items. I use this possibility in addressing lexical polarity items.
Since the notion of scope does not manifest itself in the syntactic structure of natural
language sentences, ordering principles will tell the order of the application of game rules in
sentences. In the syntactic structure of sentences, a node N1 is said to be in a higher clause
than the node N 2 if the S-node immediately dominating N1 also dominates N 2 , but not vice
versa. The following two general ordering principles are customary:
(O.LR) For any two phrases in the same clause a game rule must not be applied to
the one on the right if a rule can be applied to the one on the left in the clause.
(O.comm) A game rule must not be applied to a phrase in a lower clause if a rule can
be applied to a phrase in a higher clause.
Special ordering principles may override the general ones. In particular, the following special ordering principles are needed:
(O.any) (G.any) has logical priority over (G.neg), (G.cond), (G.or), and some modal
rules such as (G.can), (G.may), (G.must), (G.possible) and (G.likely) (but not over
modalities that are propositional attitudes such as epistemic operators).
(O.some) (G.some) has logical priority over (G.neg).
It can be checked that these principles give the right predictions for a suitable set of English
sentences. For further discussion on GTS, including its notion of strategies and the relation to
meaning and truth, see [17–23,51]. The notion of strategy is particularly appealing, because
the notion of meaning that goes beyond the existence of strategies can be made to
accommodate pragmatic and rhetorical effects, entropy measures, and the associated payoffs.
The latter give content to Grice’s maxims of conversation, including the notions of relevance
and discourse coherence. I will largely ignore these further refinements here.
2. Game rules for polarity items
To formulate game rules for NPIs, it is useful to distinguish between lexical polarity items,
regular NPIs, and aspectual adverbs. Aspectual adverbs can be either NPIs or PPIs (Positive
Polarity Items, e.g. some, already, still, etc.). Lexical NPIs typically denote some minimal
amount, degree, intensity, movement, intention, reaction, manner or similar scalar quantity,
and they include idiomatic expressions such as lift a finger, have a hope in hell, the
superlatives the slightest, the foggiest, and so on. Regular NPIs include any, ever, all that,
long and at all, and aspectual adverbs operate on quantitatively meaningful scales other than those of lexical items, such as inceptive or continuative scales. 10 These items can exhibit either negative or positive sensitivities, which will turn out to be useful at the further stages of our theory of NPI licensing.
The game rule for lexical NPIs (G.l-NPI) actually covers a whole range of rule
instances for various items, primarily distinguished by the ontological nature of the elements
included in the choice set. These elements are categorised according to what they denote, the denotation being some amount, degree, intensity, intention, or movement, for instance, expressed by a special parameter C derived from the main verb of the sentence and included in the rule associated with the lexical item in question.
Footnote 10: Thus regular and aspectual NPIs do not need to denote minimality: “She is not very wise”, “It won’t take long”.
(G.l-NPI) If the game has reached the sentence of the form
X – l-NPI – Z,
the player chooses a minimal amount from the choice set. Let this quantity be b. The game is then continued with respect to the sentence
X – have (has) b, and b is a minimal Z/C.
Sometimes instead of ‘X having b’ it would be more natural to speak of ‘X doing b’, for example. The ‘minimal amount’ can be taken to be an element picked from the choice set, the elements of which stand in a partial order. Various ways of dealing with tenses can also be incorporated into game rules.
An example of an application of this rule for (13) produces (14):
(13) John doesn’t have the slightest idea of the solution.
(14) John doesn’t have b, and b is a minimal idea of the solution.

Likewise, (15) produces (16) and (17) produces (18):

(15) Mary didn’t sleep a wink.
(16) Mary didn’t have b, and b is a minimal amount of sleep.
(17) Bill didn’t budge an inch.
(18) Bill didn’t do b, and b is a minimal movement (action, reaction, manner etc.)
A negation or a negative construction is not necessary in the grammatical sentences that
contain lexical NPIs. For consider the following example with adversative surprised:
(19) John was surprised that Bill budged.

This naturally turns into (20):

(20) John was surprised that Bill did b, and b is a minimal movement (action, reaction, manner etc.)
In addition, emphases replace the need for a negative element in the linguistic environment:
(21) John does give a damn.
(22) John has b, and b is a minimal amount of care (attention, concern, etc.)
The game rule (G.l-NPI) enjoys a special ordering principle:
(O.l-NPI) (G.l-NPI) has priority over (G.neg), (G.cond), as indeed over (G.can),
(G.may) and similar modal operators, but not over modals that are propositional
attitudes.
The game rule (G.ever) is similar to the rule (G.always), except that always is replaced by ever. For these rules, one can safely assume a branching model of time.
(G.always) If the game has reached a sentence of the form
X – always Y,
and the time point t1, the falsifier chooses a possible future time point t2 accessible from t1, and the game continues with respect to the sentence
X – Y at t2.
There is a special ordering principle for (G.ever):
(O.ever) (G.ever) has priority over (G.neg) and (G.cond).
Thus future tenses such as always in the future can be treated as in tense logic, marking a
universal quantification over all future moments in time. These tensed modalities illustrate
how tenses in general can be treated in GTS, and how other modalities such as those
pertaining to knowledge, belief, will, wish, permission and so on acquire a game-theoretic
interpretation in terms of possible-worlds semantics.
The next game rule is formulated for the quantitative PPI adverb totally.
(G.totally) If the game has reached the sentence
X – Y – Z totally,
the player chooses d, where d is the total quantity belonging to the category C, where C is extracted from the main verb Y, and the game continues with respect to the sentence
X – Y – Z, d is Y.

The later occurrences of Y in the output sentences are supposed to be infinitives.
Game rules (G.wholly), (G.altogether), (G.completely), (G.entirely), (G.perfectly), (G.in full) and so on are obvious and unsurprising variations on (G.totally). Perhaps more surprisingly, however, this same game rule can be used for the regular NPI at all, with the minor modification that totally is replaced by at all in the game rule. There is also a special ordering rule for (G.at all):
(O.at all) (G.at all) has priority over negation (G.not), conditional (G.cond), and
adversatives such as (G.amazed), (G.surprised), but not over modals.
Turning now to aspectual adverbs, let us consider items that denote properties on inceptive
scales.
(G.already) If the game has reached
X – already Y – Z,
the verifier chooses a time t1, whereupon the falsifier chooses a time t2 from a reference interval I, t1 < t2, and the game continues with respect to the sentence
X – Y – Z at t1, and X – was expected to Y – Z at t2.

Here t1 < t2 means that the time point t1 occurs earlier than t2.
An example of an application of this rule renders the sentence (23) as (24):
(23) John already did the job.
(24) John did the job on Monday, and John was expected to do the job on Friday.
The game rule for yet is of the following type.
(G.yet) If the game has reached
X – Y – Z yet,
the verifier chooses a time t1, whereupon the falsifier chooses a time t2 from a reference interval I, t1 < t2 or t1 = t2, and the game continues with respect to the sentence
X – Y – Z at t1, and X – was expected to neg(Y – Z) at t2.

In this rule the main verb is taken to be in the past tense. This rule is seen to require minor qualifications when dealing with future tenses.
An application of this rule to (25) can yield (26).
(25) Alice hadn’t stopped talking yet.
(26) By 9 pm, Alice hadn’t stopped talking, and Alice was expected not to talk at nine.

This sentence implies that the action denoted by the main verb Y eventually comes to an end.
The game rule (G.now) is analogous to (G.yet).
In interrogatives, similar scalar behaviour does not occur because they are nonmonotonic, and thus negative items are not needed.
Turning next to continuative scales, the following rules can be formulated and further
rules created similarly.
(G.still) If the game has reached a sentence of the form
X – still Y – Z,
the verifier chooses a time t1, whereupon the falsifier chooses t2 from a reference interval I, where t1 > t2 or t1 = t2, and the game continues with respect to the sentence
At t1, X – Y – Z, and X – was expected to neg(Y’ – Z) at t2.

Here Y’ is otherwise like Y but the main verb is not progressive. It is presupposed that the action denoted by Y took place prior to the falsifier’s selection of t2, which implies that what is expected is that X actually has finished Y by t2. An example:
(27) Alice was still cooking chicken.
(28) At midnight, Alice was cooking chicken, and Alice was expected to have finished the cooking at 10pm.
Some straightforward instances of this rule can also have sentences of the form X – Y – still Z as input. The next rule deals with the NPI anymore.
(G.anymore) If the game has reached a sentence of the form
X – Y – Z anymore,
the verifier chooses time points t1 and t2, t2 < t1, whereupon the falsifier chooses t3, t2 < t3 < t1, and the game continues with respect to the sentence
At t3, X – neg(Y – Z), and at t1, X – Y – Z.
An example:
(29) Alice was not cooking anymore.
(30) At noon, Alice was cooking chicken, and in the afternoon, Alice was not cooking chicken.
This sentence implies that the situation where Alice was cooking is over at the time of the
utterance.
Finally, the following special ordering rules are imposed:
(O.yet) (G.yet) has priority over (G.neg), (G.cond), and the modalities (G.can) and (G.may).
(O.anymore) (G.anymore) has priority over (G.neg).
The rule (G.yet) in fact seems to enjoy a considerably high logical priority with regard to a number of other rules. The fact that the polarity items listed here provide a true cross-section of linguistic categories is shown by recognising that yet and already are suppletives, yet and anymore are NPIs, and already and still are PPIs, for example.
With these rules at hand, let us turn to the construction of the licensing conditions for
NPIs.
4. A new theory
1. One, two, many?
It has frequently been observed that the indefinite any plays a double-agent role: it can act as
a polarity item (“I didn’t notice anything”), or as a free-choice item (common in mathematical parlance, as in: “Take any x from X”). Whether it has more (or less!) than these two roles
has not been conclusively settled. It has also been claimed that in both roles any has a
universal ‘wide scope’ representation over a licenser (typically a modal operator or a
negation, see [40,47]). This analysis was criticised in [6,9,10,32,37,41]. It was proposed
instead that any surfaces in existentially quantified expressions, presumably in the scope of
negation.
In its free-choice incarnation, it is customary to interpret any as a universal quantifier
taking wide scope over other expressions. There are reasons to believe that this bipartite manifestation does not exhaust the behaviour of this item, and that the proposed licensing contexts are not immune to further criticism. For example, there are such methods as pre- and post-nominal modification that may license any in environments which do not create evidently polarity-sensitive or free-choice contexts. There even appear to be licensing methods where polarity-sensitive any occurs but nevertheless connects with methods that typically license a free-choice any [7]. 11

Footnote 11: Modal theories of any are insufficient, because in them the difference between every and any reduces to the difference between actual and possible existence of individuals (or eventualities), a view that turns a blind eye to the presuppositions of many any-sentences, namely ones that do not differ from the corresponding presuppositions of every-sentences, while the latter clearly assume the actual existence of elements within the domain.
2. Hintikka’s any-thesis
It is nonetheless possible to devise techniques for capturing the behaviour of any in a unifying
manner. One early theory is Hintikka’s any-thesis [19]: The word any is acceptable
(grammatical) in a given context X – any Y – Z if and only if an exchange of any for every
results in a grammatical expression which is not identical in meaning with X – any Y – Z.
There are many problematic notions in this thesis, such as acceptability, grammaticality,
identity and meaning. The content of these will become clearer as we proceed.
Instances in which the any-thesis applies are:

(31) a. Mary cannot solve any problem.
b. Mary cannot solve every problem.

Because of (O.any), this pair of sentences is non-identical in meaning, and therefore the any-thesis renders (31a) acceptable. An example of the identity of meaning is given by the following pair:
(32) a. *Mary has solved any problem.
b. Mary has solved every problem.
Because the logical priorities for applying the otherwise identical game rules are the same in these two sentences, they do not receive distinct interpretations, and thus (32a) is ruled unacceptable.
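To display the difference in a compact form (the formalisation is mine and is only illustrative, with Problem and Solve as stand-in predicates): because (O.any) gives (G.any) priority over (G.can) and (G.neg), (31a) comes out roughly as

∀x (Problem(x) → ¬◊ Solve(m, x)),

whereas in (31b) the universal quantifier stays within the scope of the negated modal, roughly

¬◊ ∀x (Problem(x) → Solve(m, x)),

and the two are not equivalent. In (32) there is no negation or modal for the ordering principles to interact with, so both (32a) and (32b) reduce to ∀x (Problem(x) → Solved(m, x)); the sentences are synonymous and (32a) is accordingly ruled out.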
How about the possibility of an ungrammatical every-sentence? Such examples are easy to find; consider a definite NP inside a partitive:

(33) a. *John has any of the apples in the basket.
b. *John has every of the apples in the basket.

Indeed, if the every-sentence is non-grammatical, the meanings do not have to be contrasted, since the respective any-sentence would be, according to the thesis, non-grammatical. Still, one might sense some non-synonymity in (33a,b), and thus there seems to be yet a fourth possibility: identity of meaning, a non-grammatical any-sentence and a non-grammatical every-sentence:
(34) a. *John has any apples in the basket.
b. *John has every apples in the basket.
Synonymity becomes all the more vague when both sentences are ill formed. However, this last example actually suggests a counterexample to the any-thesis:

(35) a. John does not have any apples in the basket.
b. *John does not have every apples in the basket. 12

Footnote 12: No singular noun for every is permitted, because the only change in the sentences takes place between the NPI and its contrast.

Since the meaning is not identical, and (35a) is grammatical while (35b) is not, it may appear that this example constitutes a case where the any-thesis does not hold. For simplicity, the range of the thesis as such ought to be limited to a manageable fragment of English, comprising quantifier phrases, connectives, and some modal elements, for instance, and not cover plural nouns, mass nouns or adverbial occurrences; or else one should apply additional independent criteria to rule some of the every-sentences as ungrammatical. It should be pointed out that the thesis can, however, be extended to cope with the previous counterexample, by assuming that the relevant contrast term for any in a plural or mass noun context can be all, all of the or all the in place of every. Such amendments will be sidestepped.
Furthermore, Hand [14] questions Hintikka’s explanations for the ungrammaticality
of (36a):
(36) a. *You must pick any apple.
b. You must pick every apple.

This is because the any-thesis, contrasting (36a) with (36b), turns on the identity in meaning of these sentences, even though (G.any) has priority over (G.must); yet the same explanation does not work for the following pair:

(37) a. You must pick any apple that squirrels have not damaged.
b. You must pick every apple that squirrels have not damaged.
However, it seems that some light can be thrown on this by observing that the latter pair involves what is known as ‘subtrigging’, viz. a context that has the effect of restricting the available domain and making the indefinite any ‘less indefinite’ by mimicking the force of a demonstrative (as in “You must pick this apple or that apple or that apple...”). What subtrigging logically does, assuming a possible-worlds semantics, is a perspectival cross-world identification of individuals across the alternative states of affairs in the possible-worlds structure introduced by the modal operator must. Since such subtrigging does not exist in (36), the sentences are equivalent even in the presence of the differing ordering principles. This provides an explanation for the phenomenon Hintikka presented in [18].
3. Some data for NPI grammaticality conditions
This much said on the behaviour of any, let me put forward the first group of examples
confirming the behaviour of the temporal suppletive NPI yet. Generally, this is captured in my
yet-thesis:
(YET-THESIS): The word yet is licensed in a given context X – Y – Z yet (X non-empty) if and only if an exchange of yet for now results in a grammatical expression that is non-synonymous with X – Y – Z yet.
Here the meaning of yet relates to so far, thus far, until now, by now etc., but not to the evaluative adverb still, which has a distinct meaning. 13 An example of the application of the yet-thesis is:

(38) a. John doesn’t talk to Mary yet.
b. John doesn’t talk to Mary now.

Because of the logical priorities spelled out by (O.yet), this pair of sentences illustrates a non-identity of meaning, and thus (38a) and (38b) are both grammatical.
Furthermore, the following sentences are rendered synonymous by the associated
game rules, and thus (39a) is not grammatical:
(39) a. *It is evident yet that the proof works.
b. It is evident now that the proof works.
The next three pairs of examples are easily seen to confirm the theory:
(40) a. John doesn’t believe Mary yet.
b. John doesn’t believe Mary now.
(41) a. Mary will defeat John yet.
b. Mary will defeat John now.
(42) a. *John is talking to Mary yet.
b. John is talking to Mary now.
The domain of the yet-thesis does not extend over interrogatives or past tenses. 14 Some other
constraints are also inevitable. Suffice it to mention that, as in (43), yet can occur in a lower
syntactic clause than must and should, as indeed in a lower syntactic clause than might (44),
although (O.yet) renders yet logically prior to these operators:
(43) John must / should yet talk to Mary.
(44) John might yet talk to Mary.

It can now be easily verified that these sentences are grammatical by contrasting them with the respective now-sentences.

Footnote 13: Vernacular English may have no quarrel with sentences such as “John is at work yet”, where the meaning of yet is comparable to that of still.
Footnote 14: For instance, the questions in the pair “Has Mary defeated John already / yet?”, quite commonsensibly, ask for different things and thus are both acceptable.
The prima facie inapplicability of the yet-thesis to past tenses actually suggests that one can pick additional contrast terms. These qualifications do not diminish the importance of the theory, since there is no pre-theoretical reason to expect that contrast terms should be unique modulo target clauses. A candidate for past tenses is the positive polarity item already:

(45) a. *Mary defeated John yet.
b. Mary defeated John already.

A slight modification to the game rule (G.yet) is needed in the context of past tenses, involving the statement of expectation in the output sentence, to stay on a par with (G.already). Such a modification is thoroughly unsurprising, however.
The next examples provide evidence for the NPI at all, the behaviour of which is captured by the at all-thesis:

(AT ALL-THESIS): The word at all is licensed in a given context X – Y – Z at all (X non-empty) if and only if an exchange of at all for totally results in a grammatical expression that is non-synonymous with X – Y – Z at all.
Among others, at all can acquire the meaning of, or be substituted with, the NPIs in any way or to any extent. Likewise, totally can be substituted with, say, altogether, wholly, fully, thoroughly, and possibly others, or even with the PPI in the first place. 15 Some data for the at all-thesis include the pairs (46)–(50):
(46) a. John didn’t see the building at all.
b. John didn’t see the building totally.
(47) a. *Mary lost her control at all.
b. Mary lost her control totally.
(48) a. *Suzy would like to understand the example at all.
b. Suzy would like to understand the example wholly.
(49) a. I am amazed that she came at all.
b. I am amazed that she came in the first place.
(50) a. No one will be interested at all.
b. No one will be interested totally.

Footnote 15: One should note that the thesis does not (and cannot) presuppose that the target and contrast terms have ‘the same distribution’, as such a presupposition would be extremely demanding for a descriptive generalisation. It is the thesis itself that aims to explain the distribution. Consequently, the thesis does not pretend to apply to all possible environments, and so reasonable conditions can be imposed on its range.
One might think that the appropriate contrast term for at all would be a PPI such as somewhat
or slightly, which from the scalar point of view denotes some minimal quantity, amount or
degree (possibly only slightly above the quantity denoted by the related NPI). The problem
with at least these particular PPIs is that they do not naturally occur in negative contexts, and
when they do, they would express denial or be used metalinguistically. Hence they either alter the meaning of the sentence, and so do not count as reliable contrast terms, or else the contrast sentence would be rendered ungrammatical, thereby prescribing the paired counterpart sentence as ungrammatical.
The third group of examples deals with the NPI the slightest, where its contrast term
may be a slight:
(51) a. *?John has the slightest idea about the proof.
b. John has a slight idea about the proof.
(52) a. Mary doesn’t have the slightest idea about the proof.
b. Mary doesn’t have a slight idea about the proof.

A question mark is perhaps the most that can be hoped for in (51a), since the sentence appears grammatical in a similar way to other weakly licensed NPIs (the faintest, the foggiest, and so on). These items may be acceptable (and even grammatical) in contexts that usually do not license many other NPIs. They are often grammatical in the presence of focus or emphasis.

It can now be checked that these sentences would confirm the the slightest-thesis, modelled on the previous ones. Confirming examples can be generated ad nauseam.
4. The NPI-thesis
By this cumulative evidence, the NPI-thesis comes out thus:
(NPI-THESIS): An NPI is licensed (grammatical) in an environment X – NPI Y – Z (X non-empty) if and only if the replacement of the NPI by an appropriate contrast term results in a well-formed expression that is non-synonymous with the original sentence.
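Stated as a decision procedure, the thesis has the following shape. The rendering below is a schematic sketch of my own: the oracles well_formed and synonymous stand in for the game rules and ordering principles that actually do the work, and the naive string substitution is only for illustration.

from typing import Callable

CONTRAST = {                  # sample NPI/contrast pairs, as listed below
    "any": "every",
    "ever": "always",
    "yet": "now",
    "at all": "totally",
    "the slightest": "a slight",
}

def npi_licensed(sentence: str, npi: str,
                 well_formed: Callable[[str], bool],
                 synonymous: Callable[[str, str], bool]) -> bool:
    # The NPI is licensed iff swapping in the contrast term yields a
    # well-formed sentence that is non-synonymous with the original.
    contrast_sentence = sentence.replace(npi, CONTRAST[npi], 1)
    return (well_formed(contrast_sentence)
            and not synonymous(sentence, contrast_sentence))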
In this thesis, nothing is said about the negative constructions that the usual approaches to
NPI licensing advocate. Thus, it may be viewed as a ‘negation-less’ account of NPIs. It may
be suspected that such negative constructs have actually been misleading and destructive. To
wit, a typical sufficient well-formedness condition for NPI sentences says that:
An expression is not well formed if it contains an NPI that is not in the (immediate) scope of a negative construction.
Such conditions turning on syntactically characterised notions of scope do not throw much
light on polarity phenomena, as they merely dictate that the occurrences and the scopes of
negations can be fixed by some set of grammatical rules. And a sentence can be
ungrammatical even when any is syntactically governed by negation, because the negation in
question is constitutive, not game-theoretic:
(53) *Not any child knows the rules. 16
To require licensing by a governing conditional would not help either, for an NPI in the
consequent may be ungrammatical:
(54) *If John has heard the results he has visited any relative.
Furthermore, what often happens is that the negation occurs in a lower clause than its alleged
licensee. A better account of licensing is thus provided by a theory that invokes relevant
comparisons for synonymity.
It is an interesting further question how to find and characterise the appropriate contrast terms for NPIs. I have observed that on many occasions there are options. A good-quality contrast term or phrase is nevertheless not typically found among the PPIs such as most, few, some, even, several, or still. Rather, a good contrast term shares some of the semantic properties of the negated or otherwise licensed NPIs, but stands in opposition to the unnegated ones. Some examples include any — every; ever — always; yet — now (always), already; the slightest — a minor, a slight; at all — totally, wholly; all that — very; lift a finger, budge an inch — barely bother; give a damn — care only a little.

Footnote 16: Another meaning, namely that not just any child knows the rules, with a suitable stress on just, will not be discussed here.
A question can thus be raised about the conditions for the selection and the use of
appropriate contrast terms. To point out a partial answer to this question, consider the
following pair:
(55) a. *The painting is all that impressive.
b. The painting is very impressive.
This prediction naturally arises from the following rule for the regular NPI all that.

(G.all that) If the game has reached a sentence
X – Y all that – Z,
the verifier chooses d, where d is a relatively large quantity of S extracted from Z, if Z is an adjective, and the game continues with respect to the sentence
X – Y – Z, d is S, X has d.
(O.all that) (G.all that) has priority over (G.not) and (G.cond).
The rules (G.very), (G.considerably) and so on closely follow the rule (G.all that). They explain the behaviour of all that in (55) and in other similar cases. The clue that can be extracted from this and other examples as to the relationship between an NPI and its contrast term is that the pairs seem to combine informativeness with quantificational values, such that low informational value pairs with high quantificational value. Following this insight, the sentence “The painting is not all that impressive,” for example, would be roughly similar in meaning to the sentence “The painting is very unimpressive”. But the former can be argued to be less informative than the latter, whereas the quantificational scale of the latter certainly scores higher than what would be accomplished with phrases like not all that.
Some of the game rules, as indeed the corresponding instances of the NPI-thesis, may
introduce relatively surprising aspects of semantic behaviour. There does not seem to be any
general uniformity in the semantics of NPIs in particular. Whereas lexical NPIs usually
invoke selections of some minimal quantity from an ontological category, regular items
actually prompt maximal quantities. The irregularity of the latter is not that surprising, given
that these items are typically adjoined with occurrences of the morpheme all, which can be
implicit. In addition, since they have logical priority over negative expressions, their
denotation has to complement the totality of some domain. Accordingly, they cannot denote
any minimal quantity or degree in the relevant scale, but merely a zero measure.
It is indicative of my theory that it can cope with such irregularities without changing
its principles or fundamental assumptions, working by way of choosing and varying the
contrasts.
Ordinary ways of defining meaning assume that the terms in an underlying language
are grammatical (well formed). Thus, no algebraic approach says anything interesting about
the meaning of ill-formed expressions. However, inherent in the NPI-thesis is that sentences
can be compared for synonymity, even though one of the sentences may be ungrammatical. It remains to be seen what kind of extension principles, for example, could be used to extend the given
semantics (as a meaning function for a restricted class of expressions) to the whole language,
also covering ill-formed expressions.
Those NPI sentences that are marked do not necessarily have to be entirely
unacceptable. In a sense, they are to a small extent meaningful, although they sometimes
receive obscure interpretations. In some restricted sense they are even informative. However,
they are ill formed in the sense that they would not be pragmatically useful or effectively
assertable. The weak or even almost zero informativeness does not provide reason to render
sentences ungrammatical, as witnessed by tautological expressions, for example.
The NPI-thesis appears to have the status of a descriptive generalisation. However, it is unclear, albeit unlikely, whether there is some mechanism of grammar that could derive the effect of the NPI-thesis. Whether there is an ‘isomorphic’ mapping from the syntactic relations governing contexts and NPIs to their semantic interpretation has to be left as an open question. The reason that this is unlikely is shown by the fact that NPIs interact across sentence boundaries, which violates the basic principles of generative grammar.
GTS differs from other semantic theories in that it does not stop at the level of
empirical generalisations. It does not, strictly speaking, even have a very healthy relation to
such ‘honest-to-data’ methodologies. It provides theoretical backing to empirical issues that other theories may lack. For example, in dynamic semantics, the treatment of anaphora is conducted so that components of universal quantification and negation do not ‘pass on bindings’ of variables, thus aiming at describing the conditions under which an anaphoric antecedent is not available for coreference. In terms of semantic games, one additionally has an explanation of why these sentences fail, and why certain components do not pass on bindings (see the discussion in [44,51]).
Another example of this methodological poverty is found in choice function theories [15], which appeal to the concept of choice functions without answering the question of what the choices are choices for. From our perspective, such choices are performed by players using strategy functions. Accordingly, GTS provides theoretic
explanations not only for what is going on in various NPI theories such as in the scalar-based
ones [9,10,34] and in the informational consistency theory of [33], but also for the theories of
choice functions.
5. Further corroboration
Sedivy [53] has argued against theories of negative polarity licensing on the grounds that they falter on the vital separation between lexical and regular NPIs. Regular NPIs such as any, ever and at all cannot, according to Sedivy, be subsumed under the same licensing principles as idiomatic expressions such as lift a finger, give a damn or have a hope in hell. According to Sedivy, there are special contexts such as questions, emphases, modals, superordinate negations and adversatives in which the distinction between these two classes flouts treatments not taking the distinction into consideration, thus yielding incorrect predictions.
The claim that these two would require different licensing conditions is false, which is shown by the GTS approach, in which the game rules for either class of items are essentially the same.
In contexts with explicit emphases, contrastive do can be used. For example, the
following sentence is acceptable:
(56) Jill does give a damn.

Observe also the exclamative use:

(57) Tell it to someone who gives a damn!

The emphasis creates an environment in which the lexical NPI give a damn becomes licensed. However, with the regular NPIs ever or at all, for instance, the emphasis does not function as a licenser:

(58) *Mary does work at all / ever.
Thus, we have yet another reason to be suspicious of the kinds of explanations that resort to
the idea that what properties there are in the environment is the key.
Let us see how the NPI-thesis fares in these environments. By applying the NPI-thesis to these or other similar examples, it is seen at once that the differentiation between lexical and regular NPIs does not pose those problems that undermine Progovac’s syntactic binding, Ladusaw’s semantic downward entailment, or Linebarger’s negative implicatures, just to name a couple of alternatives. For instance, one can effectively assert:

(59) Jill does care only a little.
This is not identical in meaning to (56), for emphasis with respect to NPIs has a curious effect: it seems almost to reverse their usual meaning. In (56), it is legitimate to query the actual quantity of Jill’s caring, for she is asserted to indulge at least in some minimal amount of empathy. In fact, this sentence implies that Jill’s caring is roughly comparable to a quantity substantially above any minimal amount. The scalar model is a useful notion to apply here, since emphasis provides a thrust to the least element, sending it up the scale. Similar things are seen to happen in theories based on polarity lattices.
The following pair of sentences describes a similar situation:
(60) a. I do lift a finger for examinations.
b. I do barely bother for examinations.

These are non-identical, and thus the NPI-thesis predicts (60a) to be grammatical. 17
Turning then to (58), where the emphasis is not manifested by an explicit command, its markedness is due to the contrastive sentence:

(61) Mary does work in the first place.

Are the meanings of (58) and (61) synonymous? A moment’s reflection on the game rules and ordering principles shows that they are, and although (58) is ungrammatical, it is informative. The emphatic does corrects its meaning toward the intended direction, namely, in striving to convey the reading according to which Mary indeed is, more or less enthusiastically, involved in her daily routines. Precisely what her efforts are is not conveyed by (61), and such implications would in any case be irrelevant.
These two examples purport to demonstrate that the distinction between lexical and regular NPIs is meaningful. Although the distinction does not affect the applicability of the NPI-thesis, it portrays a significant element of language, namely that whenever there is emphasis in the sentence, it aims to lift the denotation of lexical NPIs upwards in the quantitative or informative scale of the given model. As far as regular NPIs are concerned, the impact is not necessarily as effective as for lexical items.

Footnote 17: Without emphases, the prediction of course does not hold, and hence sentences such as “*Today, I happened to lift a finger for examinations”, by virtue of their contrast with “Today, I happened to barely bother for examinations”, are marked.
In adversative contexts, the distinction and the NPI-thesis remain in force. Consider
the regular ever:
(62)
a. Mary doubts that John ever loved her.
b. Mary doubts that John always loved her.
These are non-synonymous. For the lexical give a damn, matters are slightly more complicated:
(63)
a. ?John doubts that Mary gives a damn.
b. John doubts that Mary cares only a little.
Why do these sentences differ in meaning? In (63b), John thinks that Mary cares more than a
little, however much that may be. In (63a), on the other hand, the embedded phrase "Mary gives a
damn" might well contribute to the markedness, but it is surely informative in the sense
that it signifies, ceteris paribus, a small amount of caring, though quite enough to be
contrasted with an appropriate contrast term or phrase in another sentence. The adversative
doubt applies to this embedded phrase just as it applies to the latter, grammatical sentence. In both
cases the prediction concerning grammaticality arises from the respective game and ordering rules
(G.give a damn), (O.give a damn), (G.ever) and (O.ever).
The conditions under which question marks are attached should be studied separately,
and I have tried to avoid clouding the issue with too many pragmatic phenomena. Such
pragmatic considerations would not pose problems for the NPI-thesis but would rather confirm it,
given the intimacy of GTS with the field of pragmatics.
5. Contraposition in conditionals
In addressing NPI behaviour, one concern is whether contrapositive
statements can be formed, and what the grammaticality status of the resulting sentences
would be. This is related to the question of which contrapositions preserve the meaning of the
original statements. For example, a downward entailing expression presupposes the existence of
contrapositives. It is not difficult to find expressions that do not respect
contraposition salva veritate, and so one here finds additional counterexamples to
proposed licensing conditions that turn on monotonicity properties.
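For reference, the monotonicity property at issue can be stated in the standard textbook way; this is the usual formulation, not the author's notation. A context f is downward entailing when it reverses entailment, the antecedent position of a material conditional is such a context, and classically a conditional is equivalent to its contrapositive:

\[
  f \ \text{is downward entailing} \quad\Longleftrightarrow\quad
  \forall A, B : \ (A \models B) \ \Rightarrow \ (f(B) \models f(A)).
\]
\[
  \text{With } f(X) = (X \rightarrow C): \quad
  A \models B \ \Rightarrow \ (B \rightarrow C) \models (A \rightarrow C),
  \qquad\text{and}\qquad
  (A \rightarrow C) \ \equiv \ (\neg C \rightarrow \neg A).
\]

A licensing condition built on such monotonicity properties therefore leads one to expect natural-language conditionals to contrapose while preserving both grammaticality and meaning, and it is this expectation that is tested in the examples below.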
In conditionals that license NPIs in the antecedent clause, the formation of the
contraposition appears unproblematic. The resulting sentences are both grammatical and
meaning-preserving, as in the following two pairs:
(64)
a. If he is in the least happy, he has taken some stimuli.
b. If he hasn’t taken some stimuli, he is not in the least happy.
(65)
a. If the medicine is any good at all, she is going to have visible changes.
b. If she is not going to have visible changes, the medicine is not any good at all.
The converse transformations preserve the meanings in precisely the same way.
Contrapositions of ungrammatical conditionals are grammatical, however:
(66)
a. *If he has heard the results yet, he will spend the day working.
b. If he won’t spend the day working, he hasn’t heard the results yet.
As predicted by GTS, (66a) and (66b) have the same meaning even though the former is
ungrammatical.
In contrast, an ungrammatical sentence mixing both PPIs and NPIs does not yield a
grammatical contraposition, since negation usually does not cancel PPIs:
(67)
a. *If she is going to do something at all, the test will be successful.
b. *If the test won’t be successful, she is not going to do something at all.
Yet another interesting situation arises with PPIs alone:
(68)
a. If she is going to do something, the test will be successful.
b. If the test won’t be successful, she is not going to do something.
(69)
a. If John told some secrets to his friend, the girl will be in trouble.
b. If the girl won’t be in trouble, John didn’t tell some secrets to his friend.
Contraposition in (67)–(69) seems predictable insofar as grammaticality is
concerned. However, one senses that (68b) and (69b) do not mean quite the same as (68a) and
(69a), respectively. This is because negations in front of PPIs produce an echo effect
(denial or meta-linguistic use, see [54]): they indicate points where something other than
cancelling a PPI is meant. In particular, by negating PPIs one cancels, not the PPIs
themselves, but presuppositions.
Similar things happen when PPIs reside in the consequent of (70a):
(70)
a. If he considers attending the meeting, he has some ideas worth telling.
b. If he doesn’t have some ideas worth telling, he doesn’t consider attending the
meeting.
Although both are perfectly grammatical, the meanings of these two sentences are not the same.
Sentence (70b) can be read as meaning that the person might indeed have ideas worth telling,
but that they are not the ones that would inspire him to consider attending the meeting.
Depending on stress, it can also be read as meaning that the person, having an excess of ideas,
considers the meeting to be idle.
6. Conclusion
The NPI-thesis states necessary and sufficient 'licensing' conditions for a set of NPIs,
prescribing when a sentence containing NPIs is well formed. The thesis does not turn on the
existence of explicit negation, a negative construction, or any tacit implicature of that kind.
Instead, one is asked to compare the meaning of NPI sentences with that of sentences in which
the NPIs are replaced with proper contrast terms. The thesis no longer refers to negative
constructions in the environment; it is a 'negation-less' theory of NPIs.
However, the NPI-thesis does not pretend to apply to every possible NPI in English.
Such an attempt would be grossly extravagant. Indeed, it is not even known how many
contrast terms there exist in language. Polarity-sensitive items constitute a loose cross-categorial
set drawn from various linguistic domains, and NPIs in particular belong to a variety of
linguistic categories. They do not form a natural class, despite the possibility that their alleged
licensing environments may do so. Even the borderline between NPIs and PPIs is not
demarcated in natural language in a sufficiently clear-cut manner to allow a unified treatment of all NPIs.
Hence any attempt to spell out their licensing conditions ought to draw on cross-categorial
explanations, which are unlikely to be based solely on empirical generalisations.
It should therefore not come as a surprise that the attempt to capture their behaviour only
syntactically is a lost cause, the reason being not so much the specific counterexamples as
the unwarranted presupposition that well-formedness is prior to synonymity.
In a sufficiently similar vein, there are other phenomena, such as informational
independence, that influence items belonging to remote linguistic categories [23,44]. Just as
informational independence cannot be captured by syntactic rules (since it is not syntactically
marked in language), the licensing conditions for NPIs need to be spelled out by using
cross-categorial and semantic resources, not by trying to identify and list the environments in which
they occur.
References
[1] Atlas, J.D. 1998a. "Only" noun phrases, pseudo-negative generalized quantifiers, negative polarity items, and monotonicity. Journal of Semantics 13: 265–329.
[2] Atlas, J.D. 1998b. Negative adverbials, prototypical negation and the de Morgan taxonomy. Journal of Semantics 14: 349–367.
[3] Baker, C.L. 1970. Double negatives. Linguistic Inquiry 1: 169–186.
[4] von Bergen, A. and von Bergen, K. 1993. Negative Polarität im Englischen. Tübingen: Gunter Narr.
[5] Borkin, A. 1971. Polarity items in questions. Chicago Linguistic Society 7: 53–62.
[6] Carlson, G. 1980. Polarity any is existential. Linguistic Inquiry 11: 799–804.
[7] Dayal, V. 1998. Any as inherently modal. Linguistics and Philosophy 21: 433–476.
[8] Dowty, D.R. 1994. The role of negative polarity and concord marking in natural language reasoning. In Proceedings from Semantics and Linguistic Theory 4, ed. by M. Harvey and L. Santelmann. Ithaca, NY: Cornell University, 114–144.
[9] Fauconnier, G. 1975a. Pragmatic scales and logical structure. Linguistic Inquiry 6: 353–375.
[10] Fauconnier, G. 1975b. Polarity and the scale principle. In Papers from the Eleventh Regional Meeting of the Chicago Linguistic Society, ed. by R.E. Grossman et al. Chicago: Chicago Linguistic Society, 188–199.
[11] Giannakidou, A. 1998. Polarity Sensitivity as (Non)Veridical Dependency. Amsterdam: John Benjamins.
[12] Giannakidou, A. 1999. Affective dependencies. Linguistics and Philosophy 22: 367–421.
[13] Giannakidou, A. and Quer, J. 1997. Long-distance licensing of negative indefinites. In Negation and Polarity: Syntax and Semantics, ed. by D. Forget et al. Amsterdam: John Benjamins, 95–113.
[14] Hand, M. 1999. Semantics and pragmatics: ANY in game-theoretical semantics. In The Semantics/Pragmatics Interface from Different Points of View 1, ed. by K. Turner. Oxford: Elsevier, 179–198.
[15] von Heusinger, K. and Egli, U. (eds.) 2000. Reference and Anaphoric Relations. Dordrecht: Kluwer.
[16] Higginbotham, J. 1993. Interrogatives. In The View from Building 20: Essays in Linguistics in Honor of Sylvain Bromberger, ed. by K. Hale and S.J. Keyser. Cambridge: MIT, 195–228.
[17] Hintikka, J. 1979a. Quantifiers in natural language: some logical problems. In [50].
[18] Hintikka, J. 1979b. A rejoinder to Peacocke. In [50].
[19] Hintikka, J. 1980. On the any-thesis and the methodology of linguistics. Linguistics and Philosophy 4: 101–122.
[20] Hintikka, J. and Kulas, J. 1983. The Game of Language: Studies in Game-Theoretical Semantics and its Applications. Dordrecht: D. Reidel.
[21] Hintikka, J. and Kulas, J. 1985. Anaphora and Definite Descriptions. Dordrecht: D. Reidel.
[22] Hintikka, J. and Sandu, G. 1991. On the Methodology of Linguistics. Oxford: Basil Blackwell.
[23] Hintikka, J. and Sandu, G. 1997. Game-theoretical semantics. In Handbook of Logic and Language, ed. by J. van Benthem and A. ter Meulen. Amsterdam: Elsevier, 361–410.
[24] Hoeksema, J. 1983. Negative polarity and the comparative. Natural Language and Linguistic Theory 1: 403–434.
[25] Hoeksema, J. 1986. Monotonicity phenomena in natural language. Linguistic Analysis 16: 25–40.
[26] Hoeksema, J. 1994. On the grammaticalization of negative polarity items. In Proceedings of the Twentieth Annual Meeting of the Berkeley Linguistic Society, ed. by S. Gahl et al. Berkeley: Berkeley Linguistic Society, 273–282.
[27] Horn, L.R. 1972. On the Semantic Properties of Logical Operators in English. Dissertation, UCLA.
[28] Horn, L.R. 1989. A Natural History of Negation. Chicago: University of Chicago Press.
[29] Horn, L.R. 1997. Negative polarity and the dynamics of vertical inference. In Negation and Polarity: Syntax and Semantics, ed. by D. Forget et al. Amsterdam: John Benjamins, 157–182.
[30] Israel, M. 1996. Polarity sensitivity as lexical semantics. Linguistics and Philosophy 19: 619–666.
[31] Israel, M. 1997. The scalar model of polarity sensitivity: the case of aspectual operators. In Negation and Polarity: Syntax and Semantics, ed. by D. Forget et al. Amsterdam: John Benjamins, 209–229.
[32] Kadmon, N. and Landman, F. 1993. Any. Linguistics and Philosophy 16: 353–422.
[33] von Klopp, A. 1998. An alternative view of polarity items. Linguistics and Philosophy 21: 393–432.
[34] Krifka, M. 1991. Some remarks on polarity items. In Semantic Universals and Universal Semantics, ed. by D. Zaefferer. Dordrecht: Foris, 150–189.
[35] Krifka, M. 1994. The semantics and pragmatics of weak and strong polarity items in assertions. In Proceedings from Semantics and Linguistic Theory 4, ed. by M. Harvey and L. Santelmann. Ithaca, NY: Cornell University, 195–219.
[36] Krifka, M. 1995. The semantics and pragmatics of polarity items. Linguistic Analysis 25: 209–257.
[37] Ladusaw, W.A. 1979. Polarity Sensitivity as Inherent Scope Relations. Dissertation. Austin: University of Texas.
[38] Ladusaw, W.A. 1996. Negation and polarity items. In The Handbook of Contemporary Semantic Theory, ed. by S. Lappin. Oxford: Basil Blackwell, 321–341.
[39] Lahiri, U. 1991. Embedded Interrogatives and Predicates that Embed Them. Doctoral dissertation. Cambridge: MIT.
[40] Lasnik, H. 1972. Analyses of Negation in English. Dissertation. Bloomington: Indiana University Linguistics Club.
[41] Linebarger, M.C. 1981. The Grammar of Negative Polarity. Dissertation. Bloomington: Indiana University Linguistics Club.
[42] Linebarger, M.C. 1987. Negative polarity and grammatical representation. Linguistics and Philosophy 10: 325–387.
[43] Linebarger, M.C. 1991. Negative polarity as linguistic evidence. In The 27th Meeting of the Chicago Linguistic Society, Parasession on Negation. Chicago: Chicago Linguistic Society, 165–188.
[44] Pietarinen, A. 2001. Most even budged yet: some cases for game-theoretic semantics in natural language. Theoretical Linguistics 27: 20–54.
[45] Progovac, L. 1993a. Negative polarity: entailment and binding. Linguistics and Philosophy 16: 149–180.
[46] Progovac, L. 1993b. Negative and Positive Polarity: A Binding Approach. Cambridge: Cambridge University Press.
[47] Quine, W. 1960. Word and Object. Cambridge: MIT.
[48] Rohrbaugh, E. 1997. The role of focus in the licensing and interpretation of negative polarity items. In Negation and Polarity: Syntax and Semantics, ed. by D. Forget et al. Amsterdam: John Benjamins, 311–321.
[49] Rullmann, H. 1996. Two types of negative polarity items. In Proceedings of NELS 26, ed. by K. Kusumoto. Amherst: GLSA, 335–350.
[50] Saarinen, E. (ed.) 1979. Game-Theoretical Semantics: Essays on Semantics by Hintikka, Carlson, Peacocke, Rantala, and Saarinen. Dordrecht: D. Reidel.
[51] Sandu, G. 1997. On the theory of anaphora: dynamic predicate logic vs. game-theoretical semantics. Linguistics and Philosophy 20: 147–174.
[52] Schmerling, S. 1971. A note on negative polarity. Papers in Linguistics 4: 200–206.
[53] Sedivy, J. 1990. Against a unified analysis of negative polarity licensing. Cahiers Linguistiques d'Ottawa 18: 95–105.
[54] Seuren, P.A.M. 1985. Discourse Semantics. Oxford: Basil Blackwell.
[55] Van der Wouden, T. 1997. Negative Contexts: Collocation, Polarity, and Multiple Negation. London: Routledge.
[56] Van der Wouden, T. and Zwarts, F. 1993. A semantic analysis of negative concord. In Proceedings from Semantics and Linguistic Theory 3, ed. by U. Lahiri and A. Zachary Wyner. Ithaca, NY: Cornell University, 202–219.
[57] Zwarts, F. 1998. Three types of polarity. In Plurality and Quantification, ed. by F. Hamm and E. Hinrichs. Dordrecht: Kluwer, 177–238.