
How to tidy up a general set-system
by use of uncrossing operations

Alexander V. Karzanov ∗
Institute for System Analysis of Russian Acad. Sci.
9, Prospect 60 Let Oktyabrya, 117312 Moscow, Russia

∗ This research was partially supported by European Union grant INTAS-93-2530.
Abstract. Let V be a finite set, and let S ⊆ 2^V be a cross-closed collection of sets, in the sense that for any two crossing X, Y ∈ S, at least one of the pairs {X ∩ Y, X ∪ Y} and {X − Y, Y − X} is in S. Red and Blue play the following game, starting with a family F ⊆ S. Red chooses two crossing sets X, Y in the current F and replaces them by sets X′, Y′ ∈ S such that either X′ = X ∩ Y, Y′ = X ∪ Y or X′ = X − Y, Y′ = Y − X. Then Blue returns one of X, Y to F. The game terminates (and Red wins) when F no longer contains crossing sets.
[Hurkens et al. 1987] considers a similar game which, reformulated in our terms, deals with S consisting of all sets X ⊂ V such that s ∈ X ∌ t, for given s, t ∈ V. It was shown there that Red can win in time polynomial in |V| and |F|.
Extending that result, we develop an algorithm by which Red wins, in polynomial time, for an arbitrary cross-closed S. A polynomial algorithm for a certain weighted version of the game is also given. The key idea of both methods is that the whole problem can be split into a polynomial number of problems, each dealing with a cyclic family – a family whose members correspond to partitions of a cycle into two connected parts.
The results have applications in combinatorial optimization, e.g., when we deal with packing problems on certain cuts of a graph, such as T-cuts, directed cuts, etc., and we wish to transform a given optimal packing into another one which is free of crossing cuts.
Key words: Cut Packing Problem, Crossing Sets, Uncrossing.
Abbreviated title: How to tidy up a set-system.
1. Introduction
Let V be a finite set, called a ground set. Two sets X, Y ⊆ V are called crossing if none of X − Y, Y − X, X ∩ Y and V − (X ∪ Y) is empty; otherwise they are called laminar. If X, Y are crossing (laminar), we write X ∦ Y (respectively, X ∥ Y).
Suppose we are given a set-system S ⊆ 2^V consisting of subsets of V. If S has no crossing pairs, it is called laminar. We assume that S is cross-closed. This means that for any crossing X, Y ∈ S there are X′, Y′ ∈ S such that either X′ = X − Y and Y′ = Y − X, or X′ = X ∩ Y and Y′ = X ∪ Y; we say that X′, Y′ are obtained by uncrossing X and Y, denoted as X, Y → X′, Y′. For example, the following four set-systems S are cross-closed.
(E1): S is a crossing family, i.e., for any crossing X, Y ∈ S, the sets X ∩ Y, X ∪ Y
are also members of S (cf. [Edmonds, Giles 1977]).
(E2): Given T ⊆ V with |T| even, S consists of the sets X ⊂ V such that |X ∩ T| is odd. Such an S corresponds to the set of T-cuts of a graph G = (V, E), which originally came up in connection with the Chinese postman problem [Guan 1962; Edmonds, Johnson 1973].
(E3): Given an (undirected) graph G = (V, E) and a set U ⊆ E, S consists of all
X ⊂ V such that |δ(X) ∩ U | = 1 (cf. [Seymour 1981]).
(E4): Given a graph G = (V, E) and a mapping a : E → ℤ, S consists of all X ⊆ V such that Σ(a(e) : e ∈ δ_G(X)) is odd (cf. [Karzanov 1984]).
[Here, for a graph G = (V, E) and a subset X ⊆ V, δ(X) = δ_G(X) is the set of edges of G with one end in X and the other in V − X, a cut in G.]
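To make these definitions concrete, here is a small Python sketch (our own illustration, not part of the paper's formal development) that tests whether two sets cross, computes δ(X) for a graph given as an edge list, and checks membership in S for example (E2); all names are ours.

```python
def crossing(X, Y, V):
    """X and Y cross iff all of X-Y, Y-X, X∩Y and V-(X∪Y) are nonempty."""
    X, Y, V = set(X), set(Y), set(V)
    return all((X - Y, Y - X, X & Y, V - (X | Y)))

def delta(X, edges):
    """delta(X): the edges with exactly one end in X, i.e. the cut determined by X."""
    X = set(X)
    return [(u, v) for (u, v) in edges if (u in X) != (v in X)]

def in_S_example_E2(X, T):
    """Example (E2): X belongs to S iff |X ∩ T| is odd (then delta(X) is a T-cut)."""
    return len(set(X) & set(T)) % 2 == 1

V = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(crossing({1, 2}, {2, 3}, V))         # True: the two sets cross
print(delta({1, 2}, edges))                # [(2, 3), (4, 1)]
print(in_S_example_E2({1, 2}, T={1, 3}))   # True, since |{1}| is odd
```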
We consider a game of two players, Red and Blue, as follows. It starts with a family
F ⊆ S. At each step (move),
(1) (i) Red chooses crossing sets X, Y in the current F and makes the uncrossing X, Y → X′, Y′, i.e., Red replaces X, Y by X′, Y′ in F; then
(ii) Blue returns one of X and Y to F.
Multiple sets in F are ignored; if, say, X′ has been a member of F before the move, it simply remains a member. Note that if all of X ∩ Y, X ∪ Y, X − Y, Y − X occur in S, Red has two possibilities to choose X′ and Y′ at his move, namely, {X′, Y′} = {X ∩ Y, X ∪ Y} or {X′, Y′} = {X − Y, Y − X}. The game terminates (and Red wins) when the current F becomes laminar.
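For illustration only (ours, not the paper's notation), one step of game (1) can be coded directly on frozensets; `in_S` below stands for whatever membership test defines the cross-closed S at hand.

```python
def uncross_options(X, Y, in_S):
    """The pairs {X', Y'} permitted in (1)(i): those lying entirely in S."""
    return [(A, B) for (A, B) in ((X & Y, X | Y), (X - Y, Y - X))
            if in_S(A) and in_S(B)]

def red_move(F, X, Y, Xp, Yp):
    """(1)(i): Red replaces the crossing X, Y by the chosen X', Y'."""
    return (F - {X, Y}) | {Xp, Yp}

def blue_move(F, Z):
    """(1)(ii): Blue returns Z (one of X, Y) to F; duplicates are ignored."""
    return F | {Z}

V = frozenset(range(1, 6))
in_S = lambda Z: 0 < len(Z) < len(V)       # e.g. S = all proper nonempty subsets (cross-closed)
X, Y = frozenset({1, 2, 3}), frozenset({3, 4})
F = {X, Y}
Xp, Yp = uncross_options(X, Y, in_S)[0]        # Red picks an allowed pair
F = blue_move(red_move(F, X, Y, Xp, Yp), X)    # Blue returns, say, X
```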
The goal of Red is to win as soon as possible. A priori, it is unclear whether Red can win at all: an unlucky choice of X, Y to uncross at each step may result in cycling, as can be shown by simple examples. Even if the game terminates, and even if the initial F is small, the size of intermediate families may grow significantly during the game, so the number of steps may be large. We denote n = |V| and m = |F| (for the initial F) and take as a measure of the time of the game the number of steps occurring in it, thus ignoring the real complexity of performing (i)-(ii) in (1).
An interesting special case was studied in [Hurkens, Lovász, Schrijver, Tardos 1987]. In the original formulation, it deals with a family F ⊆ 2^{V′}; the first player is allowed to choose arbitrary (not necessarily crossing) X, Y ∈ F and replace them by X ∩ Y and X ∪ Y, and the game terminates when a chain of sets is obtained. We can reduce such a game to one with rule (1) if we add to V′ new elements s and t, forming V, consider S as consisting of all X ⊂ V with s ∈ X ∌ t, and add s to each member of F. It was shown in [Hurkens et al. 1987] that, following a simple rule, Red wins in time polynomial in n and m.
We prove the following theorem.
Theorem 1. In the general case, Red has a strategy to make F laminar in time polynomial in n, m.
Moreover, the number of steps Red uses is O(n^5 m), as shown in Sections 2 and 3, where Theorem 1 is proved.
The algorithm developed for the game is applied to solve the following uncrossing problem. Suppose we are given a nonnegative integer-valued function f : S → ℤ_+. Let F = F(f) denote the support {X ∈ S : f(X) ≠ 0} of f. By the uncrossing operation we mean the following transformation of f (and F):
(2) (i) choose some crossing X, Y ∈ F; then
(ii) choose X′, Y′ ∈ S which are obtained by uncrossing X and Y; then
(iii) for a = min{f(X), f(Y)}, decrease f by a on the sets X, Y and increase f by a on the sets X′, Y′.
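A minimal Python sketch (ours) of a single operation (2), with f stored as a dictionary keyed by frozensets so that F(f) is just the set of keys:

```python
def uncross_f(f, X, Y, Xp, Yp):
    """Operation (2): move a = min(f(X), f(Y)) from X, Y to the uncrossed X', Y'."""
    a = min(f[X], f[Y])
    for Z in (X, Y):
        f[Z] -= a
        if f[Z] == 0:
            del f[Z]                      # keep only the support F(f)
    for Z in (Xp, Yp):
        f[Z] = f.get(Z, 0) + a
    return f

X, Y = frozenset({1, 2}), frozenset({2, 3})
f = {X: 2, Y: 3}
uncross_f(f, X, Y, X & Y, X | Y)          # f becomes {Y: 1, {2}: 2, {1,2,3}: 2}
```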
The uncrossing problem is to arrange a sequence of uncrossing operations which results in a function f^* with F(f^*) laminar. Such a problem looks trivial as, independently of the choice of X, Y, X′, Y′, the process terminates in finite time (in the sense of the number of uncrossing operations we apply). Indeed, let us associate with V the complete graph G = (V, E_V) with vertex-set V, and for e ∈ E_V let f̂(e) denote Σ(f(X) : X ∈ S, e ∈ δ_G(X)). If f′ is obtained from f by the uncrossing operation (2), then it is easy to see that
(3) f̂(e) − f̂′(e) is nonnegative for all e ∈ E_V, and equals 2a for some e.
Therefore, solving the uncrossing problem takes time at most (n(n − 1)/2)·‖f‖/2 for the initial f, where ‖f‖ denotes 1 + max{f̂(e) : e ∈ E_V}. However, a stronger result is true.
Theorem 2. The uncrossing problem defined by uncrossing operation (2) can be solved in time polynomial in n, m.
Indeed, we can think of the uncrossing problem as the above game, interpreting (iii) in (2) as the move of Blue, who always returns the set Z ∈ {X, Y} for which f(Z) remains positive, and then apply Theorem 1. Note that, to be consistent, we have to slightly extend our game by allowing Blue to return neither of X, Y. Nevertheless, we shall see that such an extension does not affect our proof of Theorem 1.
The uncrossing problem comes up in combinatorial optimization when we deal with certain packing problems. E.g., the well-known T-cut packing problem is: given G, T as in (E2) and a function w : E → ℤ_+, find a function g : C → ℝ_+ on the set C of T-cuts such that Σ(g(C) : C ∈ C) is maximum provided that Σ(g(C) : e ∈ C ∈ C) ≤ w(e) for any e ∈ E. [A cut δ(X) in G is called a T-cut if |T ∩ X| is odd.] As mentioned in (E2), the set S = {X : δ(X) is a T-cut} is cross-closed. If g is an optimal solution to this problem, given by its support and the values of g on it, then g can be transformed, in strongly polynomial time, into another optimal solution g′ whose support is laminar (in the sense that {X : g′(δ(X)) ≠ 0} is laminar). To do this, we solve the uncrossing problem for S and f satisfying f(X) + f(V − X) = g(δ(X)), taking into account that uncrossing operation (2) does not change the value of the objective function and maintains the above packing condition (by (3)). Similarly, one can apply our uncrossing method to problems concerning the sets in (E1), (E3), (E4).
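For illustration (ours, on a toy instance), the following check shows the two facts used above: one operation (2) preserves the total weight and, in line with (3), never increases the load on any edge, so feasibility of a packing is maintained.

```python
def edge_load(f, edges):
    """For each edge e, the total f-weight of the sets X with e in delta(X)."""
    return {e: sum(v for X, v in f.items() if (e[0] in X) != (e[1] in X))
            for e in edges}

X, Y = frozenset({1, 2}), frozenset({2, 3})
f = {X: 2, Y: 3}                                   # toy weights on two crossing sets
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
a = min(f[X], f[Y])
g = {Y: f[Y] - a, X & Y: a, X | Y: a}              # f after one operation (2)
print(sum(f.values()) == sum(g.values()))          # True: objective value preserved
print(all(edge_load(g, edges)[e] <= edge_load(f, edges)[e] for e in edges))  # True, cf. (3)
```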
One sort of uncrossing technique was elaborated in [Grötschel, Lovász, Schrijver 1988], Ch. 10.3, for the dual submodular flow problem [Edmonds, Giles 1977]. It transforms, in polynomial time, an optimal solution into one with laminar support, but the uncrossing operations it applies are somewhat different from (2).
Next we consider another, weaker, kind of uncrossing operation. It is described via a game. We assume that S = 2^V. Given f : 2^V → ℤ_+,
(4) (i) Red chooses crossing X, Y in F(f); then
(ii) Blue chooses an integer b between 0 and a = min{f(X), f(Y)}; then
(iii) f is decreased by a on X, Y, increased by b on X ∩ Y and X ∪ Y, and increased by a − b on X − Y and Y − X.
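A sketch of operation (4) in the same dictionary representation (ours); here Blue's choice b is an explicit argument.

```python
def uncross_rule4(f, X, Y, b):
    """Operation (4): with a = min(f(X), f(Y)) and Blue's 0 <= b <= a,
    shift b onto X∩Y and X∪Y and a-b onto X-Y and Y-X."""
    a = min(f[X], f[Y])
    assert 0 <= b <= a
    for Z in (X, Y):
        f[Z] -= a
        if f[Z] == 0:
            del f[Z]
    for Z, inc in ((X & Y, b), (X | Y, b), (X - Y, a - b), (Y - X, a - b)):
        if inc:
            f[Z] = f.get(Z, 0) + inc
    return f
```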
As before, Red tries to minimize the total time, while Blue tries to maximize it. Clearly, for an arbitrary cross-closed S, step (ii) in uncrossing operation (2) can be realized by choosing a proper b in (4)(ii). So the uncrossing problem with (4) takes at least as much time as that with respect to (2). Similarly to the previous case, the game always terminates in finite time (namely, O(n^2 ‖f‖)), since we observe that
(5) f̂(e) − f̂′(e) is nonnegative for all e ∈ E_V, and there are e, e′ ∈ E_V such that f̂′(e) + f̂′(e′) = f̂(e) + f̂(e′) − 2a.
Theorem 3. For game (4), Red has a strategy to make F laminar in time polynomial in n, m and log ‖f‖, where f is the initial function.
Theorem 3 is proved in Section 4. The key idea of the proofs of Theorems 1 and 3 is that both games can be reduced to a polynomial number of games with cyclic set-systems on n′ ≤ n elements. We say that S′ ⊆ 2^W is cyclic if the elements of W can be numbered 1, 2, . . . , r (r = |W|) so that each X ∈ S′ is of the form [i, j] = {i, i + 1, . . . , j} for some 1 ≤ i, j ≤ r (allowing i > j and taking indices modulo r). The advantage of dealing with a cyclic set-system is twofold. First, the sets X′, Y′ obtained by uncrossing X, Y ∈ S′ are obviously of similar form; therefore, any current family F ⊆ S′ that we handle remains cyclic under uncrossing. Second, S′ consists of at most n^2 members; therefore, the size of any intermediate family in the process is polynomially bounded. We explain how both games are reduced to those on cyclic families in Section 2.
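A quick way to see the first advantage (our illustration): represent the members of a cyclic family as circular intervals; intersection, union and the two differences of crossing intervals are again circular intervals.

```python
def interval(i, j, r):
    """The circular interval [i, j] = {i, i+1, ..., j} on {1, ..., r}, indices mod r."""
    out, k = [i], i
    while k != j:
        k = k % r + 1
        out.append(k)
    return frozenset(out)

r = 6
X, Y = interval(1, 3, r), interval(3, 5, r)        # two crossing intervals
print(sorted(X & Y), sorted(X | Y))                # [3]     [1, 2, 3, 4, 5]
print(sorted(X - Y), sorted(Y - X))                # [1, 2]  [4, 5]  -- intervals again
```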
It should be mentioned that Theorems 1–3 strengthen corresponding results in [Karzanov 1990], where self-complementary set-systems were considered. At the same time, the methods we develop to prove Theorems 1 and 3 borrow many ideas from that work.
The proofs of Theorems 1 and 3 will provide polynomial algorithms which arrange the desired efficient uncrossing processes. Regarding computational aspects, we need to specify the input of the problem. In game (1), we may assume that S is given implicitly via a membership oracle (MO) that, being asked about a set X ⊆ V, returns whether or not X belongs to S. In reasonable applications, MO is realized by a procedure polynomial in n. MO enables us to recognize efficiently which of the pairs {X − Y, Y − X} and {X ∩ Y, X ∪ Y} (or both) is contained in S, thus providing the choice of X′, Y′ in (1)(i). In game (4), the initial f is assumed to be given explicitly by listing all members X of the support F(f) and indicating f(X) for these X's. Since in this paper we care only about polynomiality and do not aim at precise running time bounds, we need not go into the details of how procedure (1) or (4) is implemented.
To conclude this section, we give two statements which will be used later on.
Statement 1.1. Let X, Y, Z ⊆ V be such that X ∦ Y, X ∥ Z and Y ∥ Z. Then X′ ∥ Z for any X′ ∈ {X − Y, Y − X, X ∩ Y, X ∪ Y}.
Statement 1.2. If F′ ⊆ 2^{V′} is laminar and |V′| = n′ then |F′| < 4n′.
Statement 1.1 is trivial. To prove Statement 1.2 (see, e.g., [Karzanov 1979]), denote by α(n) the maximum cardinality of a laminar set-system on an n-element set. It is easy to show by induction on n that α(n) ≤ α(n − 1) + 4, whence the statement follows.
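For very small ground sets, the bound of Statement 1.2 can also be confirmed by brute force; the following check (ours, exponential and only meant for n ≤ 3 or 4) enumerates all families of subsets and reports the largest laminar one.

```python
from itertools import combinations

def crossing(X, Y, V):
    return all((X - Y, Y - X, X & Y, V - (X | Y)))

def max_laminar_size(n):
    """Brute force: the largest laminar family of subsets of an n-element set."""
    V = frozenset(range(n))
    subsets = [frozenset(c) for k in range(n + 1) for c in combinations(sorted(V), k)]
    best = 0
    for mask in range(1 << len(subsets)):
        fam = [S for i, S in enumerate(subsets) if mask >> i & 1]
        if all(not crossing(A, B, V) for A, B in combinations(fam, 2)):
            best = max(best, len(fam))
    return best

for n in (1, 2, 3):          # n = 4 also works but is noticeably slower
    print(n, max_laminar_size(n), "< 4n =", 4 * n)
```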
An important corollary of Statement 1.1 is that if, at some step, a set Z in the current F becomes laminar to all other members of F, then, independently of further steps, this property continues to hold for Z up to termination of the game. So we will throughout assume that after each step such Z's are automatically excluded from consideration; i.e., for every current family F,
(6) each member of F is crossing some other of its members.
Also we will usually assume that such a property holds when we deal with the game on a subfamily of F under consideration. Next, given a ground set V′, X̄ denotes the complement V′ − X of X ⊆ V′. For A ⊆ 2^{V′}, sym-A denotes the symmetrization A ∪ {X̄ : X ∈ A}. We say that a set X ⊆ V′ separates elements x, y ∈ V′ if |X ∩ {x, y}| = 1, and separates another set Y ⊆ V′ if both X ∩ Y and Y − X are nonempty.
2. Reduction to cyclic families
We prove the following lemma.
Lemma 2.1. For rule (1), the game is reduced to O(nm) games, each for a cyclic family R on a set W with |W| ≤ n.
Proof. The desired strategy for Red is as follows. Red fixes a laminar family L_1 ⊆ F and chooses a set A_1 ∈ F − L_1. If A_1 ∥ X for all X ∈ L_1, Red simply adds A_1 to L_1. Otherwise Red plays within the family L_1 ∪ {A_1}. As a result, a laminar family L_2 will be constructed. Then Red chooses a set A_2 in the new current F which is not in L_2 and treats L_2 ∪ {A_2} in a similar way, and so on. Eventually, after k ≤ m iterations, the obtained laminar family L_{k+1} will coincide with the current F, and we are done.
We now explain how to play within a family L ∪ {A}, where L is laminar and A is crossing some members of L. By an atom of a family A ⊆ 2^V we mean a maximal subset of V not separated by any member of A. We say that X̃ ⊆ V is 2-partitioned with respect to a laminar family D ⊂ 2^V if X̃ contains at most two atoms of D ∪ {X̃}, i.e., if there exists Z ⊆ X̃ (possibly Z = ∅ or X̃) such that
(7)    X̃ ∩ Q ∈ {Z, X̃ − Z, X̃, ∅}    for any Q ∈ D.
Red plays in such a way that, at each step, the current family has a partition into two laminar families P and D such that
(8) for each X ∈ P, at least one of X, X̄ is 2-partitioned with respect to D.
Obviously, (8) holds for the initial P = L and D = {A}. To maintain this property, Red chooses an X ∈ P such that some X̃ ∈ {X, X̄} is a maximal set in sym-P which is 2-partitioned with respect to D, and plays within D ∪ {X}. Then the laminar family D′ obtained from D ∪ {X} has a similar property, as follows.
Claim. For each Y ∈ P, at least one of Y, Ȳ is 2-partitioned with respect to D′.
Proof. Consider Y ∈ P. Since P is laminar, there is Ỹ ∈ {Y, Ȳ} such that either Ỹ ⊆ X̃, or X̃ ∩ Ỹ = ∅ and X̃ ∪ Ỹ ≠ V. First we observe that Ỹ is 2-partitioned with respect to D. Indeed, if Ỹ ⊆ X̃, this easily follows from the fact that X̃ is 2-partitioned with respect to D. Otherwise Ŷ = V − Ỹ strictly includes X̃; therefore, Ŷ cannot be 2-partitioned because of the maximality of X̃, whence, by (8), Ỹ must be.
Hence, there is Z′ ⊆ Ỹ such that each member of D separates neither Z′ nor Ỹ − Z′. Since X̃ does not separate Ỹ either, none of Z′ and Ỹ − Z′ is separated by any set arising during the game for D ∪ {X}, regardless of the moves applied by Red and Blue. This proves the claim. •
In view of the Claim, the game for L ∪ {A} as above is reduced to at most |L| games, each starting with a family R = D ∪ {X} such that D is laminar and some X̃ ∈ {X, X̄} is 2-partitioned with respect to D. Since |L| ≤ 4n (by Statement 1.2), the total number of games arising for the initial F is O(nm), as required in the lemma.
It remains to show that the game within R is in fact a game on a cyclic family. Indeed, we may assume that each Y ∈ R is crossing some other of its members (otherwise Y is excluded from consideration). Then each set in D is crossing X. Let Z be as in (7) for our X̃ and D. Each Q ∈ D includes one of the nonempty subsets Z and X̃ − Z and does not meet the other one. Let D̃ consist of the members of sym-D that contain Z. Then for distinct Q̃, Q̃′ ∈ D̃, either Q̃ ⊂ Q̃′ or Q̃′ ⊂ Q̃. This means that there is a partition {V_1, V_2, . . . , V_r} of V such that V_1 = Z, V_r = X̃ − Z, and each set Q̃ ∈ D̃ has the form V_1 ∪ V_2 ∪ . . . ∪ V_i for some 1 < i < r − 1. Furthermore, X̃ = V_1 ∪ V_r.
Hence, D ∪ {X} is a cyclic family, and the lemma follows. • •
The above proof shows that every cyclic family appearing during the game has a stronger form; this will be important for the proof of Theorem 1 in the next section.
Corollary 2.2. Every cyclic family occurring in the above process is equivalent to a cyclic family R̃ on a set W = {1, . . . , r} such that
(9) R̃ has one member of the form [r, 1] or [2, r − 1], and each other member of R̃ is of the form [1, i] or [i + 1, r] for some 2 ≤ i ≤ r − 2. •
Next, one can see that the above arguments remain valid if we consider the game with step rule (4) instead of (1). Furthermore, the function f̂ is monotone nonincreasing during the game, by (5). Thus, the following is true.
Statement 2.3. For rule (4), the game for f is reduced to O(nm) games, each for a cyclic family R on W with |W| ≤ n and a function g : R → ℤ_+ with ‖g‖ ≤ ‖f‖, where m = |F(f)|. •
3. Proof of Theorem 1
In view of Lemma 2.1 and Corollary 2.2, it suffices to consider the game that starts with a cyclic family R̃ on W = {1, . . . , r} satisfying (9), and to show that Red can win in a polynomial number of moves. Note that W was formed by shrinking the subsets V_i in a partition {V_1, . . . , V_r} of V. Let S^* be the collection of subsets of W that correspond to members Y ∈ S of the form Y = V_i ∪ V_{i+1} ∪ . . . ∪ V_j (here and later on, indices are taken modulo r). Obviously, S^* is cross-closed and cyclic, and we may think of S^* as the set-system behind R̃.
To prove the theorem, we are forced to consider cyclic families R ⊆ S^* on W as follows:
(10) R is partitioned into two laminar subfamilies D and L such that D consists of sets [1, i] for some 2 ≤ i ≤ r − 2 and L consists of sets [2, j] for some 3 ≤ j ≤ r − 1.
Note that the above R̃ satisfies (10) if it contains neither [r, 1] nor [i + 1, r] for any i. The following lemma is the core of our proof.
Lemma 3.1. For R as in (10), Red can win in time O(r^3).
Proof. As before, we assume that each member of R is crossing some other of its members. Let d = d(R) denote the minimum number j such that L contains [2, j]; then 3 ≤ d ≤ r − 1. We use induction on
ω = ω(r, R) := r^3 + r|L| + d.
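If R is stored as the two interval lists of (10), the quantities d(R) and ω are immediate to compute (a small helper of ours, for readers following the induction):

```python
def omega(r, D, L):
    """The induction potential: omega(r, R) = r^3 + r|L| + d(R), where D holds
    the intervals [1, i] and L the intervals [2, j], each stored as a pair (i, j)."""
    d = min(j for (_, j) in L)          # d(R): the smallest j with [2, j] in L
    return r ** 3 + r * len(L) + d

print(omega(7, D=[(1, 2), (1, 3)], L=[(2, 4), (2, 6)]))   # 343 + 14 + 4 = 361
```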
One may assume that
(11) for i = 1, . . . , r, there is a member of R separating i − 1 and i.
For if (11) is violated for some i, then shrinking i − 1 and i into one element yields an
equivalent problem with smaller ω. Properties (10)-(11) imply
(12)    [2, r − 1] ∈ L    and    [1, i] ∈ D for i = 2, . . . , d − 1.
In what follows, X, Y denote the crossing sets in a current R that Red chooses to uncross; X′, Y′ denote the sets obtained by uncrossing X, Y; and R′, L′, D′ denote the corresponding objects that arise after the answering move of Blue and then by deleting the sets of the resulting family which are laminar to all other sets (the new family R′ will satisfy (10) as well). We denote by r(R′) the number of maximal subsets [i, j] which are separated by no member of R′. If r′ = r(R′) < r, then contracting the corresponding subsets in W results in a family R′′ on an r′-element set. Obviously, ω(r′, R′′) < ω(r, R), and therefore, we can immediately apply induction.
First we suppose that {1} ∉ S^*. Then Red takes X = [2, r − 1] and Y = {1, 2} to uncross. Since Y − X = {1} ∉ S^* and S^* is cross-closed, we have X ∩ Y = {2} ∈ S^* and X ∪ Y = W − {r} ∈ S^*. Red takes just X ∩ Y and X ∪ Y as X′ and Y′, respectively. Since both X′ and W − Y′ are singletons (so they are not in R′) and one of X, Y vanishes after the move of Blue, we conclude that at least one of the pairs {2, 3} and {r − 1, r} is separated by no member of R′. Hence, r(R′) < r, and the result follows by induction.
Thus we may assume that {1} ∈ S^*. We may also assume that
(13) [1, d] belongs to S^*.
Indeed, suppose that [1, d] ∉ S^*. Let X = [2, d], Y = [1, d − 1], X′ = X − Y and Y′ = Y − X. Then Y ∈ D (by (12)), and both X′ = {d} and Y′ = {1} exist in S^* since X ∪ Y = [1, d] ∉ S^*. Red makes the uncrossing X, Y → X′, Y′; so both X′, Y′ vanish in R′. If Blue returns X, then no set in R′ separates d − 1 and d, whence r(R′) < r. And if Blue returns Y, then L′ = L − {X}. In both cases, we get ω(r(R′), R′) < ω(r, R) and apply induction.
Let k be the maximal number such that 1 ≤ k ≤ d and {k} ∈ S^*.
Claim. (i) k ≥ 2. (ii) If k < d then Z = [k + 1, d] ∉ S^*.
Proof. From (11) and the fact that S^* is cross-closed, we observe that any minimal nonempty set in S^* is a singleton. Therefore, {i} ∈ S^* for some 2 ≤ i ≤ d (as [2, d] ∈ R), which implies (i). Next, if k < d and Z ∈ S^*, then there is a j ∈ Z such that {j} ∈ S^*. But k < j ≤ d contradicts the maximality of k. •
Consider three possible cases. In each case Red takes as X the set [2, d].
Case 1. k = d. Then Red plays in a similar way as in the proof of (13).
Case 2. k = 2. Let Y = [1, 2], X′ = X ∩ Y and Y′ = X ∪ Y. Then X′ = {2} ∈ S^* and Y′ = [1, d] ∈ S^* (by (13)). Red makes the uncrossing X, Y → X′, Y′. Then, after the move of Blue, at least one of the following situations takes place: (i) no set in R′ separates 2 and 3, or (ii) L′ = L − {X}. The result follows by induction.
Case 3. 2 < k < d. Let Y = [1, k], X′ = X ∩ Y, Y′ = X ∪ Y and Z = [k + 1, d]. By the Claim, Z = X − Y is not in S^*. Therefore, both X′ (= [2, k]) and Y′ = X ∪ Y (= [1, d]) are in S^*. Make the uncrossing operation X, Y → X′, Y′. Suppose that Blue returns Y. Then X ∉ L′ and X′ ∈ L′, whence |L′| = |L|. But d(R′) = k < d = d(R), so the result follows by induction.
Now suppose that Blue returns X. Then X, X′ ∈ R′; therefore, L′ = L ∪ {X′}, and we cannot apply induction immediately. Nevertheless, we can use the property that
(14) k and k + 1 are separated in R′ only by the set [2, k]
(since Y = [1, k] vanishes by uncrossing). Consider the sets X̃ = [2, k] and Ỹ = [1, k − 1] in R′. Both sets X̃′ = X̃ − Ỹ (= {k}) and Ỹ′ = Ỹ − X̃ (= {1}) are in S^*, so Red can apply to R′ the next uncrossing operation X̃, Ỹ → X̃′, Ỹ′. Let R′′ be the family obtained after the move of Blue. Two cases are possible.
(i) Blue returns Ỹ. Then X̃ ∉ R′′, and now there is no set in R′′ separating k and k + 1, in view of (14).
(ii) Blue returns X̃. Then Ỹ ∉ R′′; therefore, no set in R′′ separates k − 1 and k.
In both cases, we have r(R′′) < r and apply induction.
Now the lemma follows from the fact that ω is O(r^3). • •
Return to the family R̃ as in (9). Let X ∈ R̃ be the set of the form [2, r − 1] or [r, 1]. The family R̃ − {X} is partitioned into two subfamilies Q_1 and Q_2 whose members are of the form [1, i] and [j, r], respectively. Consider two possible cases.
A. Let X = [2, r − 1]. Apply Lemma 3.1 to R = Q_1 ∪ {X}. As a result, R is transformed into a laminar cyclic family P on W. Moreover, it is easy to show by induction that the uncrossing operation applied to an arbitrary crossing pair in R, as well as in each family appearing during the game for R, results in sets X′, Y′ such that X′ is either (i) [1, i] for some 1 ≤ i ≤ r − 1, or (ii) [i, j] for some 2 ≤ i ≤ j ≤ r − 1, and similarly for Y′. Partition P into subfamilies P_1 and P_2 formed by the sets as in (i) and (ii), respectively. Obviously, each member of P_2 is laminar to all members of P ∪ Q_2; therefore, we may ignore P_2 and continue to play within A = P_1 ∪ Q_2.
We proceed by induction on σ = σ(A) := |P_1|. Obviously, σ < 2r. Choose a minimal set Z = [i, j] in P_1, and let Q′ consist of the members of Q_2 crossing Z. We can apply Lemma 3.1 to Q′ ∪ {Z}. This is because, after contracting the intervals [1, i − 1] and [j + 1, r] to single elements x and y, respectively, and then renumbering y, j, j − 1, . . . , i, x as 1′, 2′, . . . , r′, respectively, we obtain a cyclic family R on W′ = {1′, . . . , r′} as in (10). Hence, Red can transform Q′ ∪ {Z} into a laminar family Q′′ in time O(|Z|^3).
Note that each member of Q′′ is of the form either (i) [i′, j′] for some i ≤ i′ ≤ j′ ≤ j, or (ii) [i′, r] for some i ≤ i′ ≤ j + 1. Moreover, each set as in (i) is laminar to all members of P_1 ∪ Q_2 ∪ Q′′; therefore, we may delete such sets. Then the remaining family A′ consists of the members of P_1 − {Z} and Q_2 plus sets as in (ii). Hence, σ(A′) = σ(A) − 1, and we apply induction. Thus, Red can make the initial A laminar in time O(r^4).
B. Let X = [r, 1]. Renumber r, 1, 2, . . . , r − 1 as 1, 2, . . . , r, respectively. Then X becomes [1, 2], a set [1, i] ∈ Q_1 becomes [2, i + 1], and a set [j, r] ∈ Q_2 becomes [j + 1, 1]. So we can apply Lemma 3.1 to R = D ∪ L with D = {X} and L = Q_1 and obtain a laminar family R′ consisting of sets of the form (i) [1, i] for some 1 ≤ i ≤ r − 1 and (ii) [2, j] for some 2 ≤ j ≤ r (in the new numeration). Now we delete the sets as in (ii) (which are laminar to all other sets) and play within Q_2 ∪ B, where B is formed by the members of R′ as in (i). Arguing as in the previous case, we conclude that Red can make Q_2 ∪ B laminar in time O(r^4).
Summing up the above arguments, we obtain Theorem 1.
4. Proof of Theorem 3
In view of Statement 2.3, Theorem 3 is implied by the following lemma.
Lemma 4.1. Let g be a nonnegative integer-valued function whose support forms a
cyclic family on a set W = {1, . . . , r}. For rule (4), Red has a strategy to win in time
polynomial in r and log ‖g‖.
Proof. Let h and R be the current function and its support before a move of Red. We know that R is cyclic and ‖h‖ ≤ ‖g‖ (by (5)).
Red plays as follows. He fixes a set X ∈ R such that h(X) is maximum provided that 2 ≤ |X| ≤ r − 2; if such an X does not exist, R is already laminar, and we are done. Red takes this X and an arbitrary Y ∈ R (Y ∦ X) to uncross, then Red takes X and another Y′ (Y′ ∦ X), and so on until a function h′ is obtained such that h′(X) = 0 or all members of the support R′ of h′ are laminar to X. We call this sequence of moves a big iteration. Let R_X = {Y ∈ R : Y ∦ X}. Consider possible cases.
Case 1. h(X) ≥ h(R_X), where h(R_X) stands for Σ(h(Y) : Y ∈ R_X). Then h′(Y) = 0 for all Y ∈ R_X (and no new set Y′ such that Y′ ∦ X and h′(Y′) > 0 can appear, by Statement 1.1). Therefore, all members of R′ are laminar to X. Thus, we can split the game for R′ into two games, one with the family R_1 = {Y ∈ R′ : Y ⊂ X or W − Y ⊂ X} and the other with R_2 = {Y ∈ R′ : X ⊂ Y or X ⊂ W − Y}. In fact, we may assume that the former (latter) game deals with a cyclic family on the set W_1 (respectively, W_2) obtained from W by contracting W − X (respectively, X) to a single element. Then |W_i| < r (i = 1, 2) and |W_1| + |W_2| = r + 2. Furthermore, if |W_i| ≤ 3 then R_i is obviously laminar. Using these facts, it is easy to show by induction on |W| that the total number of big iterations (for families on W and those occurring on reduced sets) at which Case 1 takes place is bounded by a polynomial in r.
Case 2. h(X) < h(R_X). For e ∈ E_W, let γ(e) be the sum of the h(Y)'s among Y ∈ R with 2 ≤ |Y| ≤ r − 2 and e ∈ δ_G(Y), and let β = Σ(γ(e) : e ∈ E_W), where G = (W, E_W) is the complete graph on W. Let γ′(e) and β′ be the corresponding numbers for h′.
Obviously, β < r^2 ‖h‖. Since X vanishes at the big iteration, we deduce from (5) (with γ instead of f̂) that β′ is at most β − 2h(X). Furthermore, the maximality of h(X) implies that β ≤ |E_W| |R| h(X) < r^4 h(X) (taking into account that |R| < r^2, as R is cyclic). Therefore,
(15)    β′ < β(1 − 2/r^4).
Suppose that Case 2 occurs in k consecutive big iterations, and let β_0 and β_1 be the values of β at the first and last of these iterations, respectively. We may assume that β_1 ≥ 1. Then (15) together with β_0 < r^2 ‖g‖ (for the initial g) implies that k is bounded by a polynomial in r and log ‖g‖, whence the lemma easily follows. •
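To close the section, here is a compact sketch (ours, not the paper's pseudocode) of one big iteration from the proof of Lemma 4.1 under rule (4): Red keeps uncrossing a fixed maximum-weight X with sets crossing it, and Blue's choice of b is supplied as a callback; sets are frozensets over the ground set W.

```python
def crossing(X, Y, W):
    """X, Y cross on W iff X-Y, Y-X, X∩Y and W-(X∪Y) are all nonempty."""
    return all((X - Y, Y - X, X & Y, W - (X | Y)))

def big_iteration(h, X, W, blue_b):
    """One big iteration: uncross X (rule (4)) with sets crossing it until
    h(X) = 0 or nothing in the support crosses X."""
    while X in h:
        crossers = [Y for Y in h if Y != X and crossing(X, Y, W)]
        if not crossers:
            break
        Y = crossers[0]
        a = min(h[X], h[Y])
        b = blue_b(X, Y, a)                        # Blue's answer, 0 <= b <= a
        assert 0 <= b <= a
        for Z in (X, Y):
            h[Z] -= a
            if h[Z] == 0:
                del h[Z]
        for Z, inc in ((X & Y, b), (X | Y, b), (X - Y, a - b), (Y - X, a - b)):
            if inc:
                h[Z] = h.get(Z, 0) + inc
    return h

W = frozenset(range(1, 7))
X = frozenset({1, 2, 3})
h = {X: 5, frozenset({3, 4}): 2, frozenset({2, 3, 4, 5}): 1}
print(big_iteration(h, X, W, lambda X, Y, a: 0))   # Blue always answers b = 0
```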
In conclusion, imagine a game similar to (1) in which S has the weaker property that for any crossing X, Y ∈ S, there are X′, Y′ ∈ S such that either X′ is X − Y or its complement and Y′ is Y − X or its complement, or X′ is X ∩ Y or its complement and Y′ is X ∪ Y or its complement, and Red can choose any such X′, Y′ to uncross X, Y. Can Red win in polynomial time? We conjecture that this is possible. Another open question: can Red win, in polynomial time, in game (1) if the choice of X′, Y′ (when all of X − Y, Y − X, X ∩ Y, X ∪ Y are in S) belongs to Blue?
REFERENCES
[Grötschel, Lovász, Schrijver 1988] M. Grötschel, L. Lovász and A. Schrijver: Geometric Algorithms and Combinatorial Optimization (Springer, 1988).
[Edmonds, Giles 1977] J. Edmonds and R. Giles: A min-max relation for submodular functions on graphs. In: P.L. Hammer, E.L. Johnson, B. Korte, G.L. Nemhauser, eds., Studies in Integer Programming (North-Holland, Amsterdam, 1977) 185-204 (Annals of Discrete Math. 1).
[Edmonds, Johnson 1973] J. Edmonds and E.L. Johnson: Matching, Euler tours and the Chinese postman. Math. Programming 5 (1973) 88-124.
[Guan 1962] M. Guan: Graphic programming using odd and even points. Chinese Mathematics 1 (1962) 273-277.
[Hurkens, Lovász, Schrijver, Tardos 1987] C.A.J. Hurkens, L. Lovász, A. Schrijver, and E. Tardos: How to tidy up your set-system? In: Colloquia Mathematica Societatis János Bolyai, 52. Combinatorics (North-Holland, Amsterdam, 1987), pp. 309-314.
[Karzanov 1979] A.V. Karzanov: Combinatorial methods to solve cut-determined multiflow problems. In: Combinatorial Methods for Flow Problems (Inst. for System Studies, Moscow, 1979, iss. 3), pp. 6-69, in Russian.
[Karzanov 1984] A.V. Karzanov: A generalized MFMC-property and multicommodity cut problems. In: Finite and Infinite Sets (Proc. 6th Hungar. Comb. Coll. (Eger, 1981)) (North-Holland, Amsterdam, 1984), v. 2, pp. 443-486.
[Karzanov 1990] A.V. Karzanov: Polynomial uncrossing processes. Research Report 90647-OR (Institut für Diskrete Mathematik, Bonn, 1990) 11 p.
[Seymour 1981] P.D. Seymour: On odd cuts and plane multicommodity flows. Proc. London Math. Soc., Ser. 3, 42 (1981) 178-192.