Geometric Representations of Graphs

László Lovász
Institute of Mathematics
Eötvös Loránd University, Budapest
e-mail: [email protected]

October 17, 2014
Contents

1 Introduction
   1.1 Why are representations interesting?
   1.2 Introductory example: Approximating the maximum cut

2 Planar graphs
   2.1 Planar graphs
   2.2 Planar separation
   2.3 Straight line representation and 3-polytopes
   2.4 Crossing number

3 Graphs from point sets
   3.1 Skeletons of polytopes
   3.2 Unit distance graphs
      3.2.1 The number of edges
      3.2.2 Chromatic number and independence number
   3.3 Bisector graphs
      3.3.1 The number of edges
      3.3.2 k-sets and j-edges
      3.3.3 Rectilinear crossing number
   3.4 Orthogonality graphs

4 Harmonic functions on graphs
   4.1 Definition and uniqueness
   4.2 Constructing harmonic functions
      4.2.1 Linear algebra
      4.2.2 Random walks
      4.2.3 Electrical networks
      4.2.4 Rubber bands
   4.3 Relating different models

5 Rubber bands
   5.1 Geometric representations, a.k.a. vector labelings
   5.2 Rubber band representation
   5.3 Rubber bands, planarity and polytopes
      5.3.1 How to draw a graph?
      5.3.2 How to lift a graph?
      5.3.3 How to find the starting face?
   5.4 Rubber bands and connectivity
      5.4.1 Degeneracy: essential and non-essential
      5.4.2 Connectivity and degeneracy
      5.4.3 Rubber bands and connectivity testing

6 Bar-and-joint structures
   6.1 Stresses
      6.1.1 Braced stresses
      6.1.2 Nodal lemmas and braced stresses on 3-polytopes
   6.2 Rigidity
      6.2.1 Infinitesimal motions
      6.2.2 Rigidity of convex 3-polytopes
      6.2.3 Generic rigidity
      6.2.4 Generically stress-free structures in the plane

7 Coin representation
   7.1 Koebe's theorem
      7.1.1 Conditions on the radii
      7.1.2 An almost-proof
      7.1.3 A strange objective function
   7.2 Formulation in space
      7.2.1 The Cage Theorem
      7.2.2 Conformal transformations
   7.3 Applications of coin representations
      7.3.1 Planar separators
      7.3.2 Laplacians of planar graphs
   7.4 Circle packing and the Riemann Mapping Theorem
   7.5 And more...
      7.5.1 Tangency graphs of general convex domains
      7.5.2 Other angles

8 Square tilings
   8.1 Electric current through a rectangle
   8.2 Tangency graphs of square tilings

9 Discrete holomorphic forms
   9.1 Circulations and homology
   9.2 Discrete holomorphic forms from harmonic functions
   9.3 Operations
      9.3.1 Integration
      9.3.2 Critical analytic functions
      9.3.3 Polynomials, exponentials and approximation
   9.4 Nondegeneracy properties of rotation-free circulations
   9.5 Geometric representations and discrete analytic functions
      9.5.1 Rubber bands
      9.5.2 Square tilings
      9.5.3 Circle packings
      9.5.4 Touching polygons

10 Orthogonal representations I: the theta function
   10.1 Shannon capacity
   10.2 Definition and basic properties of the theta-function
   10.3 Random graphs
   10.4 The TSTAB body and the weighted theta-function
   10.5 Theta and stability number
   10.6 Algorithmic applications

11 Orthogonal representations II: minimum dimension
   11.1 Minimum dimension with no restrictions
   11.2 General position and connectivity
      11.2.1 G-orthogonalization
   11.3 Faithful orthogonal representations
   11.4 Strong Arnold Property and treewidth
   11.5 The variety of orthogonal representations
      11.5.1 Examples
      11.5.2 Creating orthogonal representations
      11.5.3 Irreducibility
   11.6 And more...
      11.6.1 Other inner product representations
      11.6.2 Unit distance representation

12 The Colin de Verdière Number
   12.1 The definition
      12.1.1 Motivation
      12.1.2 Formal definition
      12.1.3 Examples
      12.1.4 Basic properties
   12.2 Special values
   12.3 Nullspace representation
      12.3.1 Nodal theorems and planarity
      12.3.2 Steinitz representations from Colin de Verdière matrices
      12.3.3 Colin de Verdière matrices from Steinitz representations
      12.3.4 Further geometric properties
   12.4 Inner product representations and sphere labelings
      12.4.1 Gram representation
      12.4.2 Gram representations and planarity
   12.5 And more...
      12.5.1 Connectivity of halfspaces

13 Linear structure of vector-labeled graphs
   13.1 Covering and cover-critical graphs
      13.1.1 Cover-critical ordinary graphs
      13.1.2 Cover-critical vector-labeled graphs
   13.2 Matchings

14 Metric representations
   14.1 Isometric embeddings
      14.1.1 Supremum norm
      14.1.2 Manhattan distance
   14.2 Distance respecting representations
      14.2.1 Dimension reduction
      14.2.2 Embedding with small distortion
      14.2.3 Multicommodity flows and approximate Max-Flow-Min-Cut
   14.3 Respecting the volume
      14.3.1 Bandwidth and density
      14.3.2 Approximating the bandwidth

15 Adjacency matrix and its square
   15.1 Neighborhood representation
   15.2 Second neighbors and regularity partitions
      15.2.1 ε-packings, ε-covers and ε-nets
      15.2.2 Voronoi cells and regularity partitions
      15.2.3 Computing regularity partitions

Appendix: Background material
   15.3 Graph theory
      15.3.1 Basics
      15.3.2 Graph products
      15.3.3 Chromatic number and perfect graphs
      15.3.4 Regularity partitions
   15.4 Linear algebra
      15.4.1 Basic facts about eigenvalues
      15.4.2 Semidefinite matrices
      15.4.3 Cross product
      15.4.4 Matrices associated with graphs
      15.4.5 Norms
   15.5 Convex bodies
      15.5.1 Polytopes and polyhedra
      15.5.2 Polyhedral combinatorics
      15.5.3 Polar, blocker and antiblocker
      15.5.4 Spheres and caps
      15.5.5 Volumes of other convex bodies
   15.6 Matroids
   15.7 Semidefinite optimization
      15.7.1 Semidefinite duality
      15.7.2 Algorithms for semidefinite programs

Bibliography
Chapter 1
Introduction
1.1 Why are representations interesting?
To represent a graph geometrically is a natural goal in itself, but in addition it is an important
tool in the study of various graph properties, including their algorithmic aspects. There are
several levels of this interplay between algorithms and geometry.
— Often the aim is to find a way to represent a graph in a “good” way. We refer to
Kuratowski’s characterization of planar graphs, to its more recent extensions (most notably
the work of Robertson, Seymour and Thomas), and to Steinitz's theorem representing 3-connected planar graphs by 3-dimensional polyhedra. Many difficult algorithmic problems
in connection with finding these representations have been studied.
— In other cases, graphs come together with a geometric representation, and the issue is
to test certain properties, or compute some parameters, that connect the combinatorial and
geometric structure. A typical question in this class is rigidity of bar-and-joint frameworks,
an area whose study goes back to the work of Cauchy and Maxwell.
— Most interesting are the cases when a good geometric representation of a graph not
only gives a useful way of understanding its structure, but it leads to algorithmic solutions of
purely graph-theoretic questions that, at least on the surface, do not seem to have anything
to do with geometry. Our discussions will contain several examples of this (the list is far from
complete): rubber band representations in planarity testing and graph drawing; repulsive
springs in approximating the maximum cut; coin representations in approximating optimal
bisection; nullspace representations for Steinitz representations of planar graphs; orthogonal representations in algorithms for graph connectivity, graph coloring, finding maximum
cliques in perfect graphs, and estimating capacities in information theory; volume-respecting
embeddings in approximation algorithms for bandwidth.
Is there a way to fit the many forms of geometric representations discussed in these notes into a single theory? Perhaps not, considering the variety of ways in which the graph structure can be reflected by the geometry. Nevertheless, there are some general ideas that can be pointed out.
1.2 Introductory example: Approximating the maximum cut
A cut in a graph G = (V, E) is the set of edges connecting a set S ⊆ V to V \ S, where
∅ ⊂ S ⊂ V . The Maximum Cut Problem is to find a cut with maximum cardinality. We
denote by maxcut(G) this maximum. (More generally, we can be given a weighting w : E → R+, and we could be looking for a cut with maximum total weight. To keep things simple, however, we restrict our introductory discussions to the unweighted case.)
The Maximum Cut Problem is NP-hard; one natural approach is to find an "approximately" maximum cut. Formulated in different terms, Erdős in 1967 described the following simple heuristic algorithm for the Maximum Cut Problem: for an arbitrary ordering (v_1, ..., v_n) of the nodes, we color v_1, v_2, ..., v_n successively red or blue. For each i, v_i is colored blue iff the number of edges connecting v_i to blue nodes among v_1, ..., v_{i−1} is less than the number of edges connecting v_i to red nodes in this set. Then the cut formed by the edges between red and blue nodes contains at least half of all edges. In particular, we get a cut that is at least half as large as the maximum cut.
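As an illustration (our own sketch, not part of the text; the graph encoding and the function name are ours), the heuristic takes only a few lines:

```python
def greedy_cut(nodes, adj):
    """Erdos' heuristic: scan the nodes in the given order and color each
    one so that it contributes as many edges as possible to the cut.
    `adj` maps each node to the set of its neighbors."""
    red, blue = set(), set()
    for v in nodes:
        to_red = len(adj[v] & red)    # edges from v to already-red nodes
        to_blue = len(adj[v] & blue)  # edges from v to already-blue nodes
        # v is colored blue iff it has fewer edges to blue than to red
        (blue if to_blue < to_red else red).add(v)
    return red, blue

# On a 4-cycle, the order 1, 2, 3, 4 recovers the full cut of size 4.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(greedy_cut([1, 2, 3, 4], adj))  # ({1, 3}, {2, 4})
```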
There is an even easier randomized algorithm to achieve this approximation, at least in
expected value. Let us 2-color the nodes of G randomly, so that each node is colored red or
blue independently, with probability 1/2. Then the probability that an edge belongs to the
cut between red and blue is 1/2, and the expected number of edges in this cut is |E|/2.
Both of these algorithms show that the maximum cut can be approximated from below
in polynomial time with a multiplicative error of at most 1/2. Can we do better? The
following strong negative result of Håstad [107] shows that we cannot get arbitrarily close to
the optimum:
Proposition 1.2.1 It is NP-hard to find a cut with more than (16/17)·maxcut(G) ≈ 0.94·maxcut(G) edges.
Building on results of Delorme, Poljak and Rendl [58, 195], Goemans and Williamson
[96] give a polynomial time algorithm that approximates the maximum cut in a graph with
a relative error of about 13%:
Theorem 1.2.2 One can find in polynomial time a cut with at least .878 maxcut(G) edges.
What is this strange constant? From the proof below we will see that it can be defined
as 2c/π, where c is the largest positive number for which
arccos t ≥ c(1 − t)   (1.1)

for −1 ≤ t ≤ 1 (Figure 1.1).

Figure 1.1: The constant c in the Goemans–Williamson algorithm
The algorithm of Goemans and Williamson is based on the following idea. Suppose that
we have somehow placed the nodes of the graph in space. (The space may be more than
3-dimensional.) Next, take a random hyperplane H through the origin. This partitions the
nodes into two classes (Figure 1.2). We “hope” that this will give a good partition.
Figure 1.2: A cut in the graph given by a random hyperplane
How good is this cut? If the nodes are placed in a random position (say, their location
is independently selected uniformly from the unit ball centered at the origin), then it is easy
to see that we get half of the edges in expectation.
Can we do better by placing the nodes in a more sophisticated way? Still heuristically, an
edge that is longer has a better chance of being cut by the random hyperplane. So we want
a construction that pushes adjacent nodes apart; one expects that for such a placement, the
random cut will intersect many edges.
How to construct a placement in which edges tend to be long? Clearly we have to
constrain the nodes to a bounded region, and the unit sphere S^{n−1} is a natural choice. It is
not necessary to have all edges long; it suffices that edges are long “on the average”. Finally,
it is more convenient to work with the square of the lengths of the edges than with the lengths
themselves. Based on these considerations, we want to find a representation i ↦ u_i (i ∈ V) of the nodes of the graph in the unit ball in R^d so that the following "energy" is maximized:
E(u) = (1/4) ∑_{ij∈E} (u_i − u_j)² = (1/2) ∑_{ij∈E} (1 − u_i^T u_j).   (1.2)
What is the maximum of this energy? If we work in R^1, then the problem is equivalent
to the Maximum Cut problem: each node is represented by either 1 or −1, which means
that a placement is equivalent to a partition of the node set; the edges between differently
labeled nodes contribute 1 to the energy, the other edges contribute 0, and so the energy
of the placement is exactly the number of edges connecting the two partition classes. So at
least we have been able to relate our placement problem to the Maximum Cut problem.
Unfortunately, the argument above also implies that for d = 1, the optimal embedding
is NP-hard to find. For d > 1, the maximum energy Emax is not equal to the size of the
maximum cut, but it is still an upper bound. While I am not aware of a proof of this, it is
probably NP-hard to find the placement with maximum energy for d = 2, and more generally,
for any fixed d. The surprising fact is that for d = n, such an embedding can be found in
polynomial time using semidefinite optimization (cf. Section 15.7).
Let X denote the V × V matrix defined by X_ij = u_i^T u_j. Then X is positive semidefinite, and its diagonal entries are 1:

X ≽ 0,   (1.3)

X_ii = 1.   (1.4)
The energy E(u) can also be expressed in terms of this matrix as

E(u) = (1/2) ∑_{ij∈E} (1 − X_ij).   (1.5)
Conversely, if X is a V × V matrix satisfying (1.3) and (1.4), then we can write it as the Gram matrix of vectors in R^n; these vectors will have unit length, and (1.5) gives the energy.
The semidefinite optimization problem of maximizing (1.5), subject to (1.3) and (1.4),
can be solved in polynomial time (with an arbitrarily small relative error; let’s not worry
about this error here). So Emax is a polynomial time computable upper bound on the size of
the maximum cut.
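Concretely, the relaxation can be set up in a few lines with an SDP modeling tool. The sketch below is our own illustration using the cvxpy library (an assumption on our part; the text only requires that some polynomial time SDP solver be available), with a hypothetical function name; the vectors u_i are recovered from the optimal X by an eigendecomposition.

```python
import cvxpy as cp
import numpy as np

def maxcut_sdp(n, edges):
    """Maximize (1/2) sum_{ij in E} (1 - X_ij) over matrices X satisfying
    (1.3) (positive semidefinite) and (1.4) (unit diagonal)."""
    X = cp.Variable((n, n), PSD=True)                    # (1.3)
    energy = sum(0.5 * (1 - X[i, j]) for i, j in edges)  # (1.5)
    cp.Problem(cp.Maximize(energy), [cp.diag(X) == 1]).solve()
    # Factor the (numerically) PSD optimum as X = U U^T; the rows of U
    # are then unit vectors u_i with X_ij = u_i^T u_j.
    w, V = np.linalg.eigh(X.value)
    U = V * np.sqrt(np.maximum(w, 0))
    return energy.value, U

# For the 5-cycle, maxcut = 4 while the relaxation gives Emax ~ 4.52.
E_max, U = maxcut_sdp(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```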
How good is this upper bound? Let ij ∈ E and let u_i, u_j ∈ S^{n−1} be the corresponding vectors in the representation constructed above. It is easy to see that the probability that a random hyperplane H through 0 separates u_i and u_j is (arccos u_i^T u_j)/π. Thus the expected number of edges intersected by H is

∑_{ij∈E} (arccos u_i^T u_j)/π ≥ (c/π) ∑_{ij∈E} (1 − u_i^T u_j) = (2c/π) Emax ≥ (2c/π) maxcut(G).   (1.6)

This completes the analysis of the algorithm.
Remark 1.2.3 Often the Max Cut Problem arises in a more general setting, where the edges of the graph have nonnegative weights (or values), and we want to find a cut with maximum total weight. The algorithm described above is easily extended to this case, by using the weights as coefficients in the sums (1.2), (1.5) and (1.6).
Remark 1.2.4 It is not quite obvious how to generate a random hyperplane in a high-dimensional linear space. We want the random hyperplane not to prefer any direction, which means the normal vector of the hyperplane should be uniformly distributed over the unit sphere in R^n.

One way to go is to generate n independent random numbers g_1, ..., g_n from the standard Gaussian (normal) distribution, and use these as coefficients of the hyperplane H:

g_1 x_1 + · · · + g_n x_n = 0.

It is well known from probability theory that (g_1, ..., g_n) is a sample from an n-dimensional Gaussian distribution that is invariant under rotation, which means that the normal vector

(1/√(g_1² + · · · + g_n²)) (g_1, ..., g_n)

of the hyperplane H is uniformly distributed over the unit sphere.
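In code, the recipe of the remark is a few lines of numpy (our own sketch, with a hypothetical name):

```python
import numpy as np

def random_hyperplane_cut(U, edges, rng=None):
    """Round unit vectors (the rows of U) to a cut: draw a standard
    Gaussian vector g, whose direction is uniform on the unit sphere by
    rotation invariance, and split the nodes by the sign of g . u_i."""
    rng = np.random.default_rng() if rng is None else rng
    g = rng.standard_normal(U.shape[1])  # normal vector of the hyperplane
    side = U @ g > 0                     # side of the hyperplane u_i falls on
    cut = sum(1 for i, j in edges if side[i] != side[j])
    return side, cut
```

Applied to the vectors from the optimal relaxation, the expected size of the returned cut is at least (2c/π)·Emax ≥ 0.878·maxcut(G) by (1.6).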
Remark 1.2.5 One objection to the above algorithm could be that it uses random numbers.
In fact, the algorithm can be derandomized by well established but non-trivial techniques.
We do not discuss this issue here; see e.g. [17], Chapter 15 for a survey of derandomization
methods.
Remark 1.2.6 The constant c would seem like a "random" byproduct of a particular proof; however, it turns out that if we accept the complexity theoretic hypothesis called the "Unique Games Conjecture" (stronger than P ≠ NP), then this constant is optimal (Khot, Kindler, Mossel, O'Donnell [136]): no polynomial time algorithm can approximate the maximum cut within a better ratio for all graphs.
Chapter 2
Planar graphs
2.1 Planar graphs
A graph G = (V, E) is planar if it can be drawn in the plane so that its edges are Jordan curves and they intersect only at their endnodes¹. A plane map is a planar graph with a
fixed embedding. We also use this phrase to denote the image of this embedding, i.e., the
subset of the plane which is the union of the set of points representing the nodes and the
Jordan curves representing the edges.
The complement of a plane map decomposes into a finite number of arcwise connected
pieces, which we call the countries of the planar map². If G is a plane map, then the set of
its countries will be denoted V ∗ , and often we use the notation f = |V ∗ |.
Every planar map G = (V, E) has a dual map G∗ = (V∗, E∗) (Figure 2.1). As an abstract
graph, this can be defined as the graph whose nodes are the countries of G, and if two
countries share k edges, then we connect them in G∗ by k edges. So each edge e ∈ E will
correspond to an edge e∗ of G∗ , and |E ∗ | = |E|.
Figure 2.1: A planar map and its dual.
¹ We use the word node for the node of a graph, the word vertex for the vertex of a polytope, and the word point for points in the plane or in other spaces.
² Often the countries are called "faces", but we reserve the word face for the faces of polyhedra, and the word facet for maximum dimensional proper faces.
This dual has a natural drawing in the plane: in the interior of each country F of G we
select a point vF (which can be called its capital), and on each edge e ∈ E we select a point
ue (this will not be a node of G∗ , just an auxiliary point). We connect vF to the points ue for
each edge on the boundary of F by nonintersecting Jordan curves inside F . If the boundary
of F goes through e twice (i.e., both sides of e belong to F ), then we connect vF to ue by
two curves, entering e from two sides. The two curves entering ue form a single Jordan curve
representing the edge e∗ . It is not hard to see that each country of G∗ will contain a unique
node of G, and so (G∗ )∗ = G.
If G = (V, E) is an oriented planar map, then there is a natural way to define an orientation of its dual G∗ = (V∗, E∗), so that if the directed edge pq of G∗ is the dual of the directed edge ij of G, then pq crosses ij from left to right. (The dual of the dual is not the original map, but the original map with its orientation reversed!)
We can associate new maps with a planar map G in several other ways, of which we need
the following. The lozenge map G♢ = (V ♢ , E ♢ ) of G is defined by V ♢ = V ∪ V ∗ , where we
connect p ∈ V to i ∈ V ∗ if p is a node of country i. We do not connect two nodes in V nor
in V ∗ , so G♢ is a bipartite graph. It is also clear that G♢ is planar.
The dual map of the lozenge map is also interesting. It is called the medial map of G, and it can be described as follows: we create a new node on each edge; these will be the nodes of the medial map. For every corner of every country, we connect the two nodes on the two edges bordering this corner by a Jordan curve inside the country. Since G♢ is bipartite, the countries of its dual can be two-colored so that every edge separates countries of different colors (see Figure 2.2).
Figure 2.2: The dual, the lozenge map and the medial map of a part of a planar map.
A planar map is called a triangulation if every country has 3 edges. Note that a triangulation may have parallel edges, but no two parallel edges can bound a country. In every
simple planar map G we can introduce new edges to turn all countries into triangles while
keeping the graph simple.
We often need the following basic fact about planar graphs, which we quote without proof
(see Exercise 2.4.6).
Theorem 2.1.1 (Euler's Formula) For every connected planar map, n − m + f = 2 holds, where n is the number of nodes, m the number of edges, and f the number of countries.
Some important consequences of Euler’s Formula are the following.
Corollary 2.1.2 (a) A simple planar graph with n nodes has at most 3n − 6 edges.
(b) A simple bipartite planar graph with n nodes has at most 2n − 4 edges.
(c) Every simple planar graph has a node with degree at most 5.
(d) Every simple bipartite planar graph has a node with degree at most 3.
From (a) and (b) it follows immediately that the “Kuratowski graphs” K5 and K3,3 (see
Figure 2.3) are not planar. This observation leads to the following characterization of planar
graphs.
Theorem 2.1.3 (Kuratowski's Theorem) A graph G is embeddable in the plane if and only if it does not contain a subgraph homeomorphic to the complete graph K5 or the complete bipartite graph K3,3.
Figure 2.3: The two Kuratowski graphs K5 and K3,3. The two drawings on the right hand side show that both graphs can be drawn in the plane with a single crossing.
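As a quick sanity check of Theorem 2.1.3, a planarity tester confirms that the Kuratowski graphs are non-planar and minimally so. This sketch is our own illustration and assumes the networkx library (any implementation of a linear-time planarity algorithm would do):

```python
import networkx as nx

# K5 and K3,3 fail the planarity test...
print(nx.check_planarity(nx.complete_graph(5))[0])               # False
print(nx.check_planarity(nx.complete_bipartite_graph(3, 3))[0])  # False

# ...and they are minimal: removing any single edge restores planarity.
G = nx.complete_graph(5)
G.remove_edge(0, 1)
print(nx.check_planarity(G)[0])                                  # True
```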
Among planar graphs, 3-connected planar graphs are especially important. We start with
a simple but useful fact. A cycle C in a graph G is called separating, if G \ V (C) has at least
two connected components, where any chord of C is counted as a connected component here.
Proposition 2.1.4 In a 3-connected planar graph a cycle bounds a country if and only if it
is non-separating.
Corollary 2.1.5 Every simple 3-connected planar graph has an essentially unique embedding
in the plane in the sense that the set of cycles that bound countries is uniquely determined.
The following characterization of 3-connected planar graphs was proved by Tutte [236].
Theorem 2.1.6 Let G be a 3-connected graph. Then every edge of G is contained in at least
two non-separating cycles. G is planar if and only if every edge is contained in exactly two
non-separating cycles.
As a further application of Euler’s Formula, we prove the following lemma, which will be
useful in several arguments later on.
Lemma 2.1.7 In a simple planar map whose edges are 2-colored with red and blue there is
always a node where the red edges (and so also the blue edges) are consecutive.
Proof. We may assume that the graph is connected (else, we can apply the lemma to each
component). Define a happy corner as a pair of edges that are consecutive on the boundary
of a country and one of them is red, the other blue. We are going to estimate the number N
of happy corners in two different ways.
Suppose that the conclusion does not hold. Then for every node v, the color of the edge
changes at least four times if we consider the edges incident with v in their cyclic order
according to the embedding. (In particular, every node must have degree at least four.) So
every node is incident with at least 4 happy corners, which implies that
N ≥ 4n.   (2.1)
On the other hand, let fr be the number of countries with r edges on their boundary.
(To be precise: since we did not assume that the graph is 2-connected, it may happen that
an edge has the same country on both sides. In this case we count this edge twice.) The
maximum number of happy corners a country with r edges can have is
r − 1, if r is odd,
r, if r is even.
Thus the number of happy corners is at most
N ≤ 2f3 + 4f4 + 4f5 + 6f6 + 6f7 + . . .
(2.2)
Using the trivial identities
f = f3 + f4 + f5 + . . .
and
2m = 3f3 + 4f4 + 5f5 + . . .
we get by Euler’s formula
N ≤ 2f3 + 4f4 + 4f5 + 6f6 + 6f7 + · · · ≤ 2f3 + 4f4 + 6f5 + 8f6 + · · · = 4m − 4f = 4n − 8,
which contradicts (2.1).
2.2 Planar separation
The following important theorem about planar graphs was proved by Lipton and Tarjan:
Theorem 2.2.1 (Planar Separator Theorem) Every planar graph G = (V, E) on n nodes contains a set S ⊆ V such that |S| ≤ 4√n, and every connected component of G \ S has at most 2n/3 nodes.
We do not prove this theorem here; a proof (with a weaker bound of 3n/4 on the sizes of
the components) based on circle packing will be presented in Section 7.3.
2.3 Straight line representation and 3-polytopes

Theorem 2.3.1 (Fáry–Wagner Theorem) Every simple planar map can be drawn in the plane with straight edges.
Proof. We use induction on the number of nodes. If this is at most 3, the assertion is
trivial. It suffices to prove the assertion for triangulations.
There is an edge xy which is contained in two triangles only. For let x be a node which is
contained inside some triangle T (any point not on the outermost triangle has this property)
and choose x, T so that the number of countries inside T is minimal. Let y be any neighbor
of x. Now if xy belongs to three triangles (x, y, z1 ), (x, y, z2 ) and (x, y, z3 ), then all these
triangles would be properly contained in T and, say, (x, y, z1 ) would contain z3 inside it,
contrary to the minimality of T .
Contract xy to a node p and remove one edge from both arising pairs of parallel edges.
This way we get a new simple triangulation G0 and by the induction hypothesis, there is an
isomorphic triangulation G′0 with straight edges.
Now consider the edges pz1 and pz2 of G′0 . They split the angle around p into two angles;
one of these contains the edges whose pre-images in G are adjacent to x, the other one those
whose pre-images are adjacent to y. Therefore, we can "pull x and y apart" and get a
straight line drawing of G.
Let P be a 3-polytope. (See Section 15.5 for basic facts about polytopes). The vertices
and edges of P form a graph GP , which we call the skeleton of P .
Proposition 2.3.2 The skeleton of every 3-polytope is a 3-connected planar graph.
We describe the simple proof, because this is our first example of how a geometric representation implies a purely graph-theoretic property, namely 3-connectivity.
Proof. Let F be any facet of P , and let x be a point that is outside P but very close to
an interior point of F ; more precisely, assume that the plane Σ of F separates x from P , but
for every other facet F ′ , x is on the same side of the plane of F ′ as P . Let us project the
skeleton of P from x to the plane Σ. Then we get an embedding of GP in the plane (Figure
2.4).
Figure 2.4: Projecting the skeleton of a polytope into one of the facets.
To see that GP is 3-connected, it suffices to show that for any four nodes a, b, c, d there
is a path from a to b which avoids c and d.
If a, b, c, d are not coplanar, then let Π be a plane that separates {a, b} from {c, d}; then
we can connect a and b by a polygon consisting of edges of P that stays on the same side of
Π as a and b, and so avoids c and d (see Appendix 15.5).
If a, b, c, d are coplanar, let Π be a plane that contains them. One of the open halfspaces
bounded by Π contains at least one vertex of P . We can then connect a and b by a polygon
consisting of edges of P that stays on this side of Π (except for its endpoints a and b), and
so avoids c and d.
The embedding of GP in the plane constructed above is called the Schlegel diagram of
the polytope (with respect to the facet F ).
The converse of this last proposition is an important and much more difficult theorem,
proved by Steinitz [224]:
Theorem 2.3.3 (Steinitz’s Theorem) A simple graph is isomorphic to the skeleton of a
3-polytope if and only if it is 3-connected and planar.
A bijection between the nodes of a simple graph G and the vertices of a convex polytope
P in R3 that gives a bijection between the edges of G and the edges of P is called a Steinitz
representation of the graph G. We don’t prove Steinitz’s Theorem here, but later constructions of representations by polytopes, and in fact with special properties, will follow from the
material in sections 5.3, 7.2.1 and 12.3.
There may be many representations of a graph by a 3-polytope; in 3-space the set of these
representations has nice properties, but in higher dimension it can be very complicated. We
refer to [201].
The construction of the planar embedding of GP in the proof of Proposition 2.3.2 gives an
embedding with straight edges. Therefore Steinitz’s Theorem also proves the F´ary–Wagner
theorem, at least for 3-connected graphs. It is easy to see that the general case can be reduced
to this by adding new edges so as to make the graph 3-connected (see Exercise 2.4.4).
Finally, we note that the Steinitz representation is also related to planar duality.
Proposition 2.3.4 Let P be a convex polytope in R3 with the origin in its interior, and let
P∗ be its polar. Then the skeletons GP and GP∗ are dual planar graphs.
2.4 Crossing number
If G is a non-planar graph, we may want to draw it so as to minimize the number of intersections of the curves representing the edges. To be more precise, we assume that the drawing
satisfies the following conditions: none of the edges is allowed to pass through a node, no
three edges pass through the same point, any two edges intersect in a finite number of points,
and every common point of two edges other than a common endpoint is an intersection point
(so edges cannot touch). The minimum number of intersection points in such a drawing is
called the crossing number of the graph, and denoted by cr(G).
The question whether cr(G) = 0, i.e., whether G is planar, is decidable in polynomial
time; but to compute cr(G) in general is NP-hard.
It is not even known what the crossing numbers of complete or complete bipartite graphs are. Guy proved (by constructing an appropriate drawing of Kn in the plane) that

cr(Kn) ≤ (1/4)⌊n/2⌋⌊(n−1)/2⌋⌊(n−2)/2⌋⌊(n−3)/2⌋ < (3/8)\binom{n}{4}.   (2.3)
Zarankiewicz proved that

cr(Kn,m) ≤ ⌊n/2⌋⌊(n−1)/2⌋⌊m/2⌋⌊(m−1)/2⌋ < (1/4)\binom{n}{2}\binom{m}{2}.   (2.4)
In both cases, it is conjectured that equality holds in the first inequality.
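For small n these conjectured values are known to be exact (cr(K5) = 1, cr(K6) = 3, cr(K7) = 9, cr(K3,3) = 1), which makes a quick numeric check of (2.3) and (2.4) easy. The sketch below is our own, with hypothetical function names:

```python
from math import comb

def guy_bound(n):
    """Right-hand side of the first inequality in (2.3)."""
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

def zarankiewicz_bound(n, m):
    """Right-hand side of the first inequality in (2.4)."""
    return (n // 2) * ((n - 1) // 2) * (m // 2) * ((m - 1) // 2)

print(guy_bound(5), guy_bound(6), guy_bound(7))  # 1 3 9 (known to be exact)
print(zarankiewicz_bound(3, 3))                  # 1 (cr(K_{3,3}) = 1)
print(guy_bound(10), 3 * comb(10, 4) // 8)       # 60 < 78, as in (2.3)
```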
We will need lower bounds on the crossing number. We start with an easy observation.
Lemma 2.4.1 Let G be a simple graph with n nodes and m edges. Then
cr(G) ≥ m − 3n + 6.
Proof. Consider a drawing of G with a minimum number of crossings. If m ≤ 3n − 6,
then the assertion is trivial. If m > 3n − 6, then by Corollary 2.1.2(a), there is at least one
crossing. Deleting one of the edges participating in a crossing, the number of crossings drops
by at least one, and the lemma follows by induction.
If m is substantially larger than n, then this bound becomes very weak. The following theorem of Ajtai, Chvátal, Newborn, and Szemerédi [3] and Leighton [148] gives a much better bound when m is large, which is best possible except for the constant.
Theorem 2.4.2 Let G be a simple graph with n nodes and m edges, where m ≥ 4n. Then

cr(G) ≥ m³/(64n²).
Proof. Consider a drawing of G with a minimum number of crossings. Let p = (4n)/m,
and delete each node of G independently with probability 1 − p. The remaining graph H
satisfies
cr(H) ≥ |E(H)| − 3|V (H)| + 6 > |E(H)| − 3|V (H)|.
Take the expectation of both sides:
E(cr(H)) > E(|E(H)|) − 3E(|V (H)|).
Here

E(|V(H)|) = pn,  E(|E(H)|) = p²m,  E(cr(H)) ≤ p⁴ cr(G).

(For the last inequality: a crossing of G survives if and only if all four endpoints of the two participating edges survive, so the drawing of H we get has, in expectation, p⁴ cr(G) crossings. However, this may not be an optimal drawing of H, hence we only get an inequality.)
Substituting these values, we get

cr(G) ≥ m/p² − 3n/p³ = m³/(16n²) − 3m³/(64n²) = m³/(64n²).
Remark 2.4.3 As first observed by Székely [231], this bound on the crossing number can
be used to derive several highly non-trivial results in combinatorial geometry. See the proof
of Theorem 3.2.1.
Exercise 2.4.4 Let G be a simple planar graph. Prove that you can add edges
to G so that you make it 3-connected while keeping it simple and planar.
Exercise 2.4.5 Describe the dual graph of G♢ in terms of the original planar
map G.
Exercise 2.4.6 (a) Prove Euler’s Formula. (b) Find a relationship between the
numbers of nodes, edges and countries of a map in the projective plane and on
the torus.
Exercise 2.4.7 Construct a 2-connected simple planar graph on n nodes which
has an exponential (in n) number of different embeddings in the plane. (Recall:
two embeddings are considered different if there is a cycle that is the boundary
cycle of a country in one embedding but not in the other.)
Exercise 2.4.8 Show that the statement of Lemma 2.1.7 remains valid for maps
on the projective plane, but not for maps on the torus.
Exercise 2.4.9 (a) Let L be a finite set of lines in the plane, not all going through
the same point. Prove that there is a point where exactly two lines in L go through.
(b) Let S be a finite set of points in the plane, not all on a line. Prove that there
is a line which goes through exactly two points in S.
Exercise 2.4.10 (a) Let L be a finite set of lines in the plane, not all going through the same point. Color these lines red and blue. Prove that there is a point where
at least two lines in L intersect and all the lines through this point have the same
color.
(b) Let S be a finite set of points in the plane, not all on a line. Color these points
red and blue. Prove that there is a line which goes through at least two points in
S, all of whose points have the same color.
Exercise 2.4.11 If G is a k × k grid, then every set S of nodes separating G into parts smaller than (1 − c)k² has at least √(2c)·k nodes.
Chapter 3
Graphs from point sets
3.1 Skeletons of polytopes
The vertices and edges of a polytope P form a simple graph GP , which we call the skeleton
of the polytope.
Proposition 3.1.1 Let P be a polytope in R^d, let a ∈ R^d, and let u be a vertex of P. Suppose that P has a vertex v such that a^T u < a^T v. Then P has a vertex w such that uw is an edge of P, and a^T u < a^T w.

Another way of formulating this is that if we consider the linear objective function a^T x on
a polytope P , then from any vertex we can walk on the skeleton to a vertex that maximizes
the objective function so that the value of the objective function increases at every step. This
important fact is the basis for the Simplex Method. For our purposes, however, the following
corollary of Proposition 3.1.1 will be important:
Proposition 3.1.2 Let GP be the skeleton of a d-polytope P, and let H be an (open or closed) halfspace containing an interior point of P. Then the subgraph of GP induced by those vertices of P that are contained in this halfspace is connected.
From Proposition 3.1.2, it is not hard to derive the following (see Proposition 2.3.2 for d = 3):
Proposition 3.1.3 The skeleton of every d-polytope is d-connected.
The following property was proved by Grünbaum and Motzkin [103]:
Proposition 3.1.4 The skeleton of every d-polytope has a Kd+1 minor.
3.2 Unit distance graphs
Given a set S of points in Rd , we construct the unit distance graph on S by connecting two
points by an edge if and only if their distance is 1. Many basic properties of these graphs
have been studied, most of which turn out quite difficult.
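For small examples, the unit distance graph can be built by brute force; the sketch below is our own (the floating-point tolerance stands in for exact distance-1 tests, and the names are hypothetical):

```python
import numpy as np
from itertools import combinations

def unit_distance_graph(points, tol=1e-9):
    """Edges of the unit distance graph on a point set (rows of `points`)."""
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if abs(np.linalg.norm(points[i] - points[j]) - 1.0) < tol]

# The vertices of a unit square determine 4 unit distances (the sides).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(unit_distance_graph(square))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```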
3.2.1 The number of edges
Let S be a finite set of n points in the plane. A natural problem is the following: how many
times can the same distance (say, distance 1) occur between the points of a set S of size n.
In the first paper dealing with this problem [72], Erdős proved that the unit distance graph of n points in the plane has at most n^{3/2} edges. This follows from the observation that a unit distance graph contains no K2,3, together with the simple fact from graph theory that any simple graph on n nodes that contains no K2,3 has at most n^{3/2} edges.
It was not easy to improve this simple bound. The only substantial progress was made by Spencer, Szemerédi and Trotter [221], who proved the following bound, which is still the best known (up to the constant).

Theorem 3.2.1 The unit distance graph G of n points in the plane has at most 2n^{4/3} edges.
Proof. We describe an elegant proof of this bound by Székely [231]. Let S be a set of n
points in the plane. We may assume that the theorem is true for fewer than n points. We
define an auxiliary graph G′ on V (G′ ) = S: we draw a unit circle about each point in S, and
consider the arcs of these circles between two points on them as the edges of G′ .
We had better get rid of some trivial complications. If there is a point v in S such that the
unit circle about v contains fewer than 3 other points of S, then deleting this point destroys
at most 2 unit distances, and so we can use induction on n. Thus we may assume that each
of the circles contains at least 3 points of S. This means that the graph G′ is simple.
The number of nodes of G′ is n. The number of edges that are part of the unit circle about v is the same as the degree of v in G, and hence |E(G′)| = 2|E(G)|. Since any two unit circles intersect in at most 2 points, the number of crossings between edges of G′ is at most 2\binom{n}{2} < n². So the crossing number of G′ is cr(G′) < n². On the other hand,

cr(G′) ≥ |E(G′)|³/(64n²) = |E(G)|³/(8n²)

by Theorem 2.4.2. This gives |E(G)|³/(8n²) < n², from which the theorem follows.
The proof just given also shows that this bound depends not only on graph-theoretic properties of G(S) but, rather, on properties of its actual embedding in the plane. The difficulty of improving the bound n^{4/3} is in part explained by a construction of Brass [36], which shows that in some Banach norm, n points in the plane can determine as many as Θ(n^{4/3}) unit distances.
The situation in R³ is probably even less understood, but in dimensions 4 and higher, an easy construction of Lenz shows that every finite complete k-partite graph can be realized in R^{2k} as the graph of unit distances: place the points of the classes on k pairwise orthogonal planes, at distance 1/√2 from the origin. Hence the maximum number of unit distances between n points in R^d (d ≥ 4) is Θ(n²).
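Lenz's construction is easy to verify numerically. The following sketch (our own illustration) places points on two orthogonal circles of radius 1/√2 in R⁴ and checks that every cross pair is at distance exactly 1:

```python
import numpy as np

# Two circles of radius 1/sqrt(2) in orthogonal coordinate planes of R^4.
# For a in A and b in B, |a - b|^2 = |a|^2 + |b|^2 = 1/2 + 1/2 = 1.
r = 1 / np.sqrt(2)
k = 10
angles = np.linspace(0, 2 * np.pi, k, endpoint=False)
A = np.stack([r * np.cos(angles), r * np.sin(angles),
              np.zeros(k), np.zeros(k)], axis=1)
B = np.stack([np.zeros(k), np.zeros(k),
              r * np.cos(angles), r * np.sin(angles)], axis=1)
dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
print(np.allclose(dists, 1.0))  # True: k*k = 100 unit distances
```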
Another intriguing unsolved question is the case when the points in S form the vertices of a convex n-gon. Erdős and Moser conjectured the upper bound to be O(n). The best construction [68] gives 2n − 7, and the best upper bound is O(n log n) [93].
We get different, and often more accessible questions if we look at the largest distances
among n points. In 1937 Hopf and Pannwitz [121] proved the following.
Theorem 3.2.2 The largest distance among n points in the plane occurs at most n times.
Proof. The proof of this is left to the reader as Exercise 3.4.4.
For 3-space, Heppes and Révész [112] proved that the largest distance occurs at most 2n − 2 times. In higher dimensions the number of maximal distances can be quadratic: Lenz's construction can be carried out so that all the n²/4 unit distances are maximal. The exact maximum is known in dimension 4 [35].
3.2.2 Chromatic number and independence number
The problem of determining the maximum chromatic number of a unit distance graph in the
plane was raised by Hadwiger [105]. He proved the following bounds; we don’t know more
about this even today.
Theorem 3.2.3 Every planar unit distance graph has chromatic number at most 7. There
is a planar unit distance graph with chromatic number 4.
Proof. Figure 3.1(a) shows a coloring of all points of the plane with 7 colors such that
no two points at distance 1 have the same color. Figure 3.1(b) shows a unit distance graph
with chromatic number 4: Suppose it can be colored with three colors, and let the top point
v have color red (say). The other two points in the left hand side triangle containing v must
be green and blue, so the left bottom point must be red again. Similarly, the right bottom
point must be red, so we get two adjacent red points.
For higher dimensions, we know in a sense more. Klee noted that χ(G(R^d)) is finite for every d, and Larman and Rogers [146] proved the bound χ(G(R^d)) < (3 + o(1))^d. Erdős conjectured and Frankl and Wilson [90] proved that the growth of χ(G(R^d)) is indeed exponential in d: χ(G(R^d)) > (2 − o(1))^d. The exact base of this exponential function is not known.
Figure 3.1: (a) The plane can be colored with 7 colors. (b) Three colors are not enough.
One gets interesting problems by looking at the graph formed by largest distances. The chromatic number of the graph of the largest distances is equivalent to the well known Borsuk problem: how many sets with smaller diameter can cover a convex body in R^d? Borsuk conjectured that the answer is d + 1, which is not difficult to prove if the body is smooth. The bound is also proved for d ≤ 3. (Interestingly, the question in small dimensions was settled by simply using the upper bound on the number of edges in the graph of the largest distances, discussed above.) But the general case was settled in the negative by Kahn and Kalai [130], who proved that the minimum number of covering sets (equivalently, the chromatic number of the graph of largest distances) can be as large as 2^{Ω(√d)}.
3.3 Bisector graphs

3.3.1 The number of edges
For a set S of n = 2m points in R², we construct the bisector graph GS on S by connecting two points by an edge if and only if the line through them has m − 1 points of S on each side.
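For small point sets, the bisector graph can be computed by brute force straight from the definition; the sketch below (our own, with hypothetical names) counts, for each pair, the points on the two sides of the connecting line:

```python
import numpy as np
from itertools import combinations

def bisector_edges(points):
    """Edges of the bisector graph: ij is an edge iff the line through
    points i and j has equally many of the remaining points on each side
    (m - 1 of each when |S| = 2m and S is in general position)."""
    pts = np.asarray(points, dtype=float)
    edges = []
    for i, j in combinations(range(len(pts)), 2):
        dx, dy = pts[j] - pts[i]
        # The sign of the cross product tells the side of line ij.
        sides = np.sign(dx * (pts[:, 1] - pts[i, 1])
                        - dy * (pts[:, 0] - pts[i, 0]))
        if np.count_nonzero(sides > 0) == np.count_nonzero(sides < 0):
            edges.append((i, j))
    return edges

# For the 4 corners of a square, the bisector edges are the two diagonals.
print(bisector_edges([(0, 0), (1, 0), (1, 1), (0, 1)]))  # [(0, 2), (1, 3)]
```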
Lemma 3.3.1 Let v ∈ S, let e, f be two edges of GS incident with v that are consecutive in the cyclic order of such edges, let W be the wedge between e and f, and let W′ be the negative of W (the reflection of W through v). Then there is exactly one edge of GS incident with v in the wedge W′.
Proof. Draw a line ℓ, with orientation, through v, and rotate it counterclockwise by π. Consider the number of points of S on its left hand side. This number changes from less than m − 1 to more than m − 1 whenever the negative half of ℓ passes through an edge of GS incident with v, and it changes from more than m − 1 to less than m − 1 whenever the positive half of ℓ passes through an edge of GS incident with v.
Corollary 3.3.2 Let v ∈ S, and let ℓ be a line through v that does not pass through any other point in S. Let H be the halfplane bounded by ℓ that contains at most m − 1 points. Then for some d ≥ 0, d edges incident with v are contained in H and d + 1 of them are on the opposite side of ℓ.
Corollary 3.3.3 Every node in GS has odd degree.
Corollary 3.3.4 Let ℓ be a line that does not pass through any point in S. Let k and n − k
of the points be contained on the two sides of ℓ, where k ≤ m. Then ℓ intersects exactly k
edges of GS .
From this corollary it is easy to deduce that the number of edges of GS is at most 2n^{3/2} (Exercise 3.4.5). If you solve this exercise, you will see that its proof seems to have a lot of room for improvement; however, natural improvements of the argument only lead to an improvement by a constant factor. But using the "crossing number method", a substantially better bound can be derived [60]:

Theorem 3.3.5 The number of edges of the bisector graph GS is at most 4n^{4/3}.
Proof. We prove that

cr(GS) ≤ \binom{n}{2}.   (3.1)

Theorem 2.4.2 then implies that

|E(GS)| ≤ (64n² cr(GS))^{1/3} < 4n^{4/3}.
To prove (3.1), note that we already have a drawing of GS in the plane, and it suffices to
estimate the number of crossings between edges in this drawing. Fix a coordinate system so
that no segment connecting two points of S is vertical. Let P = v0 v1 . . . vk be a path in GS .
We call P a convex chain, if
(i) the x-coordinates of the vi are increasing,
(ii) the slopes of the edges vi vi+1 are increasing,
(iii) for 1 ≤ i ≤ k − 1, there is no edge incident with vi whose slope is between the slopes
of vi−1 vi and vi vi+1 ,
(iv) P is maximal with respect to this property.
Note that we could relax (iii) by excluding such an edge going to the right from vi ; indeed,
if an edge vi u has slope between the slopes of vi−1 vi and vi vi+1 , and the x-coordinate of u
is larger than that of vi , then there is also an edge wvi that has slope between the slopes of
vi−1 vi and vi vi+1 , and the x-coordinate of w is smaller than that of vi . This follows from
Lemma 3.3.1.
Claim 1 Every edge of GS belongs to exactly one convex chain.
Indeed, the edge as a path of length 1 satisfies (i)-(iii), and so it will be contained in a
maximal such path. On the other hand, if two convex chains share an edge, then they must
“branch” somewhere, which is impossible as they both satisfy (iii).
We can use convex chains to charge every intersection of two edges of GS to a pair of
nodes of GS as follows. Suppose that the edges ab and cd intersect at a point p, where the
notation is such that a is to the left of b, c is to the left of d, and the slope of ab is larger
than the slope of cd. Let P = v0 v1 . . . ab . . . vk and Q = u0 u1 . . . cd . . . ul be the convex chains
containing ab and cd, respectively. Let vi be the first point on P for which the semiline vivi+1 intersects Q, and let uj be the last point on Q for which the semiline uj−1uj intersects P. We charge the intersection of ab and cd to the pair (vi, uj).
Claim 2 Both P and Q are contained in the same side of the line vi uj .
Assume (for convenience) that vi uj lies on the x-axis. We may also assume that P goes
at least as deep below the x-axis as Q.
If the slope of vi−1 vi is positive, then the semiline vi−1 vi intersects Q, contrary to the
definition of i. Similarly, the slope of uj uj+1 (if j < l) is positive.
If the slope of vi vi+1 is negative, then the semiline vi vi+1 stays below the x-axis, and P
is above this semiline. So if it intersects Q, then Q must reach farther below the x-axis than
P , a contradiction.
Thus we know that the slope of vi−1 vi is negative but the slope of vi vi+1 is positive, which
implies by convexity that P stays above the x-axis. By our assumption, this implies that Q
also stays above the x-axis, which proves the claim.
Claim 3 At most one convex chain starts at each node.
Assume that P = v0 v1 . . . ab . . . vk and Q = u0 u1 . . . cd . . . ul are two convex chains with
u0 = v0 . The edges v0 v1 and u0 u1 are different, so we may assume that the slope of u0 u1 is
larger. By Lemma 3.3.1, there is at least one edge tu0 entering u0 whose slope is between the
slopes of v0 v1 and u0 u1 . We may assume that tu0 has the largest slope among all such edges.
But then, appending the edge tu0 to Q, conditions (i)-(iii) are preserved, which contradicts
(iv).
To complete the proof of (3.1), it suffices to show:
Claim 4 At most one intersection point is charged to every pair (v, u) of points of S.
Suppose not. Then there are two pairs of convex chains P, Q and P′, Q′ such that P and P′ pass through v, Q and Q′ pass through u, some edge of P to the right of v intersects an edge of Q to the left of u, and similarly for P′ and Q′. We may assume that P ≠ P′. Let vw and vw′ be the edges of P and P′ going from v to the right. Claim 1 implies that these edges are different, and so we may assume that the slope of vw is larger than the slope of vw′; by Claim 2, these slopes are positive. As in the proof of Claim 3, we see that P does not start at v, and in fact it continues to the left by an edge with positive slope. But this contradicts Claim 2.
3.3.2 k-sets and j-edges
A k-set of S is a subset T ⊆ S of cardinality k such that T can be separated from its complement S \ T by a line. An i-set with 1 ≤ i ≤ k is called an (≤ k)-set.

A j-edge of S is an ordered pair uv, with u, v ∈ S and u ≠ v, such that there are exactly j points of S on the right hand side of the line uv. Let e_j = e_j(S) denote the number of j-edges of S; it is well known and not hard to see that for 1 ≤ k ≤ n − 1, the number of k-sets is e_{k−1}. An i-edge with i ≤ j will be called a (≤ j)-edge; we denote the number of (≤ j)-edges by E_j = e_0 + · · · + e_j.
It is not hard to derive a sharp lower bound for each individual e_j. We will use the following theorem from [71], which can be proved by extending the proof of Corollary 3.3.4.
Theorem 3.3.6 Let S be a set of n points in the plane in general position, and let T be a
k-set of S. Then for every 0 ≤ j ≤ (n − 2)/2, the number of j-edges uv with u ∈ T and
v ∈ S \ T is exactly min(j + 1, k, n − k).
As an application, we prove the following bounds on e_j (the upper bound is due to Dey [60]; the lower bound, as we shall see, is an easy consequence of Theorem 3.3.6).

Theorem 3.3.7 For every set of n points in the plane in general position and for every j < (n − 2)/2,

2j + 3 ≤ e_j ≤ 8nj^{1/3}.
For every j ≥ 0 and n ≥ 2j + 3, the lower bound is attained. It is not known how sharp
the upper bound is.
Proof. The proof of the upper bound is a rather straightforward extension of the proof of
Theorem 3.3.5.
To prove the lower bound, consider any j-edge uv, and let e be the line obtained by
shifting the line uv by a small distance so that u and v are also on the right hand side, and
let T be the set of points of S on the smaller side of e. This will be the side containing u and
v unless n = 2j + 3; hence |T | ≥ j + 1. By Theorem 3.3.6, the number of j-edges xy with
x ∈ T , y ∈ S \ T is exactly j + 1. Similarly, the number of j-edges xy with y ∈ T , x ∈ S \ T
is exactly j + 1, and these are distinct from the others since n ≥ 2j + 3. Together with uv,
this gives a total of 2j + 3 such pairs.
The following construction shows that the lower bound in the previous theorem is sharp. Despite significant progress in recent years, the problem of determining the (order of magnitude of the) best upper bound remains open. The current best construction, due to Tóth [235], gives a set of points with e_j ≈ n·e^{Θ(√(log j))}.
Example 3.3.8 Let S0 be a regular (2j + 3)-gon, and let S1 be any set of n − 2j − 3 points
in general position very near the center of S0 .
Every line through any point in S1 has at least j + 1 points of S0 on both sides, so the
j-edges are the longest diagonals of S0 , which shows that their number is 2j + 3.
We now turn to estimating the number Ej of (≤ j)-edges. Theorem 3.3.7 immediately
implies the following lower bound on the number of (≤ j)-edges:
$$\sum_{i=0}^{j} e_i \ge (j+2)^2 - 1. \qquad (3.2)$$
This is, however, not tight, since the constructions showing the tightness of the lower bound
ej ≥ 2j + 3 are different for different values of j. The following bounds can be proved by
more complicated arguments.
Theorem 3.3.9 Let S be a set of n points in the plane in general position, and j < (n−2)/2.
Then
$$3\binom{j+2}{2} \le E_j \le n(j+1).$$
The upper bound, due to Alon and Győri [11], is tight, as shown by a convex polygon; the lower bound [1, 172] is also tight for k ≤ n/3, as shown by the following example:
Example 3.3.10 Let r1 , r2 and r3 be three rays emanating from the origin with an angle of
120◦ between each pair. Suppose for simplicity that n is divisible by 3 and let Si be a set of
n/3 points in general position, all very close to ri but at distance at least 1 from each other
and from the origin. Let S = S1 ∪ S2 ∪ S3 .
Then for 1 ≤ k ≤ n/3, every k-set of S contains the i points farthest from 0 in one Sa, for some 1 ≤ i ≤ k, and the (k − i) points farthest from 0 in another Sb. Hence the number of k-sets is 3k, and the number of (≤ k)-sets equals $3\binom{k+1}{2}$.
3.3.3 Rectilinear crossing number
The rectilinear crossing number of a graph G is the minimum number of crossings in a drawing
of G in the plane with straight edges and nodes in general position. The Fáry–Wagner
Theorem 2.3.1 says that if the crossing number of a graph is 0, then so is its rectilinear
crossing number. Is it true more generally that these two crossing numbers are equal? The
answer is negative: the rectilinear crossing number may be strictly larger than the crossing
number [219]. As a general example, the crossing number of the complete graph $K_n$ is less than $\frac{3}{8}\binom{n}{4} = 0.375\binom{n}{4}$ (see (2.3)), while its rectilinear crossing number is at least $0.37501\binom{n}{4}$
if n is large enough [172].
The rectilinear crossing number of the complete graph $K_n$ can also be defined as follows. Let S be a set of n points in general position in the plane, i.e., no three points are collinear. Four points in S may or may not form the vertices of a convex quadrilateral; if they do, we call this 4-element subset convex. We are interested in the number $c_S$ of convex 4-element subsets. This can of course be as large as $\binom{n}{4}$, if S is in convex position, but what is its minimum?
The graph $K_5$ is not planar, and hence every 5-element subset of S contains at least one convex 4-element subset: drawing all ten segments between the five points gives a straight-line drawing of $K_5$, which must have two crossing edges, and the four endpoints of a crossing pair are in convex position. From this it follows by straightforward averaging that at least 1/5 of the 4-element subsets are convex for every n ≥ 5.
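For small sets, the minimum can be explored by direct enumeration. Here is a Python sketch (an added illustration, not from the original text) that counts the convex 4-element subsets: four points in general position are in convex position iff none of them lies inside the triangle of the other three, which we test with orientation signs.

    import numpy as np
    from itertools import combinations

    def orient(p, q, r):
        # sign of the signed area of the triangle pqr
        return np.sign((q[0] - p[0]) * (r[1] - p[1])
                       - (q[1] - p[1]) * (r[0] - p[0]))

    def count_convex_quadruples(points):
        # counts the 4-element subsets in convex position; assumes
        # general position, so orient never returns 0
        pts = [np.asarray(p, dtype=float) for p in points]
        c = 0
        for quad in combinations(pts, 4):
            concave = False
            for i in range(4):
                a, b, d = (quad[k] for k in range(4) if k != i)
                # quad[i] lies inside the triangle abd iff all three
                # orientations agree
                if orient(a, b, quad[i]) == orient(b, d, quad[i]) \
                        == orient(d, a, quad[i]):
                    concave = True
            c += 0 if concave else 1
        return c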
The best upper bound on the minimum number of convex quadrilaterals is $0.3807\binom{n}{4}$, obtained using computer search by Aichholzer, Aurenhammer and Krasser [2]. As an application of the results on k-sets in Section 3.3.2, we prove the following lower bound [1, 172]:
Theorem 3.3.11 Let S be a set of n points in the plane in general position. Then the number $c_S$ of convex quadrilaterals determined by S is at least $0.375\binom{n}{4}$.
The constant in the bound can be improved to 0.37501 [172]. This tiny improvement is
significant, as we have seen, but we cannot give the proof here.
The first ingredient of our proof is an expression of the number of convex quadrilaterals
in terms of j-edges of S.
Let cS denote the number of 4-tuples of points in S that are in convex position, and let
bS denote the number of those in concave (i.e., not in convex) position. We are now ready
to state the crucial lemma [244, 172], which expresses cS as a positive linear combination of
the numbers ej (one might say, as the second moment of the distribution of j-edges).
Lemma 3.3.12 For every set of n points in the plane in general position,
$$c_S = \sum_{j < \frac{n-2}{2}} e_j \left(\frac{n-2}{2} - j\right)^2 - \frac{3}{4}\binom{n}{3}.$$
Proof. Clearly we have
$$c_S + b_S = \binom{n}{4}. \qquad (3.3)$$
To get another equation between these quantities, let us count, in two different ways, ordered
4-tuples (u, v, w, z) such that w is on the right of the line uv and z is on the left of this line.
First, if {u, v, w, z} is in convex position, then we can order it in 4 ways to get such an
ordered quadruple; if {u, v, w, z} is in concave position, then it has 6 such orderings. Hence
the number of such ordered quadruples is $4c_S + 6b_S$. On the other hand, any j-edge uv can be completed to such a quadruple in $j(n-j-2)$ ways. So we have
$$4c_S + 6b_S = \sum_{j=0}^{n-2} e_j\, j(n-j-2). \qquad (3.4)$$
From (3.3) and (3.4) we obtain
$$c_S = \frac{1}{2}\left(6\binom{n}{4} - \sum_{j=0}^{n-2} e_j (n-j-2)j\right).$$
Using that
$$\sum_{j=0}^{n-2} e_j = n(n-1), \qquad (3.5)$$
we can write
$$6\binom{n}{4} = \sum_{j=0}^{n-2} e_j\, \frac{(n-2)(n-3)}{4}$$
to get
$$c_S = \frac{1}{2}\sum_{j=0}^{n-2} e_j \left(\frac{(n-2)(n-3)}{4} - j(n-j-2)\right),$$
from which the lemma follows by simple computation, using (3.5) and the symmetry $e_j = e_{n-2-j}$.

Proof of Theorem 3.3.11. Having expressed $c_S$ (up to an error term) as a positive
linear combination of the ej ’s, we can substitute any lower estimates for the numbers ej to
obtain a lower bound for cS . Using (3.2), we get
$$c_S \ge \frac{1}{4}\binom{n}{4} + O(n^3).$$
This lower bound for $c_S$ is quite weak. To obtain the stronger lower bound stated in Theorem 3.3.11, we do "integration by parts", i.e., we pass from j-edges to (≤ j)-edges. We substitute $e_j = E_j - E_{j-1}$ in Lemma 3.3.12 and rearrange to get the following:
$$c_S = \sum_{j < \frac{n-2}{2}} E_j (n - 2j - 3) - \frac{3}{4}\binom{n}{3} + \varepsilon_n,$$
where
$$\varepsilon_n = \begin{cases} \dfrac{1}{4}\, E_{\frac{n-3}{2}}, & \text{if } n \text{ is odd},\\[4pt] 0, & \text{if } n \text{ is even}.\end{cases}$$
The last two terms in this formula are $O(n^3)$. Substituting the lower bound $E_j \ge 3\binom{j+2}{2}$ from Theorem 3.3.9, the theorem follows by elementary computation.
3.4 Orthogonality graphs
For a given d, let Gd denote the graph whose nodes are all unit vectors in Rd (i.e., the points
of the unit sphere S d−1 ), and we connect two nodes if and only if they are orthogonal.
Example 3.4.1 The graph G2 consists of disjoint 4-cycles.
Theorem 3.4.2 (Erdős–De Bruijn Theorem) Let G be an infinite graph and k ≥ 1 an integer. Then G is k-colorable if and only if every finite subgraph of G is k-colorable.
Proof. See [154], Exercise 9.14.
Theorem 3.4.3 There exists a constant c > 1 such that
$$c^d \le \chi(G_d) \le 4^d.$$
Exercise 3.4.4 Let S be a set of n points in the plane.
(a) Prove that any two longest segments connecting points in S have a point in
common.
(b) Prove that the largest distance between points in S occurs at most n times.
Exercise 3.4.5 Prove that the bisector graph of a set of n points in the plane (in general position) has at most $2n^{3/2}$ edges.
Chapter 4
Harmonic functions on graphs
The notion of a harmonic function is basic in analysis; it is usually defined as a smooth function f : Rd → R (perhaps defined only in some domain) satisfying the differential equation ∆f = 0, where $\Delta = \sum_{i=1}^{d} \partial^2/\partial x_i^2$ is the Laplace operator. It is a basic fact that such functions can be characterized by the "mean value property": their value at any point equals their average on a ball around this point. Taking this second characterization as our starting point, we will define an analogous notion of harmonic functions defined on the nodes of a graph.
Harmonic functions play an important role in the study of random walks (after all, the
averaging in the definition can be interpreted as expectation after one move). They also
come up in the theory of electrical networks, and also in statics. This provides a connection
between these fields, which can be exploited. In particular, various methods and results from
the theory of electricity and statics, often motivated by physics, can be applied to provide
results about random walks. From our point of view, their main applications will be rubber
band representations and discrete analytic functions. In this chapter we develop this simple
but useful theory.
4.1 Definition and uniqueness
Let G = (V, E) be a connected simple graph and S ⊆ V . For a function f : V → R, we
define its defect at node v by
$$Lf(v) = \sum_{u \in N(v)} \bigl(f(u) - f(v)\bigr). \qquad (4.1)$$
Here L is the Laplacian of the graph: if we consider f as a vector indexed by the nodes, then Lf is just the Laplace matrix of G applied to f.
The function f is called harmonic at a node v ∈ V if Lf(v) = 0. This equation can also
be written in the form
$$\frac{1}{d_v}\sum_{u \in N(v)} f(u) = f(v). \qquad (4.2)$$
A node where a function is not harmonic is called a pole of the function. It is useful to note
that
$$\sum_{v \in V} Lf(v) = \mathbf{1}^T L f = f^T L \mathbf{1} = 0, \qquad (4.3)$$
so every function is harmonic “on the average”.
Remark 4.1.1 We can extend this notion to multigraphs by replacing (4.2) by
$$\frac{1}{d_v}\sum_{\substack{e \in E\\ V(e) = \{u,v\}}} f(u) = f(v) \qquad \forall v \in V \setminus S. \qquad (4.4)$$
Another way of writing this is
$$\sum_{u \in V} a_{uv}\bigl(f(u) - f(v)\bigr) = 0 \qquad \forall v \in V \setminus S, \qquad (4.5)$$
where auv is the multiplicity of the edge uv. We could further generalize this by assigning
arbitrary nonnegative weights βuv to the edges of G (uv ∈ E(G)), and require the condition
$$\sum_{u \in V} \beta_{uv}\bigl(f(u) - f(v)\bigr) = 0 \qquad \forall v \in V \setminus S. \qquad (4.6)$$
We will restrict our arguments to simple graphs (to keep things simple), but all the results
could be extended to multigraphs and weighted graphs in a straightforward way.
Every constant function is harmonic at each node. On the other hand,
Proposition 4.1.2 Every non-constant function has at least two poles.
This implies that if f : V → R is a nonconstant function, then Lf ̸= 0, so the nullspace
of L is one-dimensional: it consists of the constant functions only.
Proof. Let S be the set where the function attains its maximum, and let S′ be the set of those nodes in S that have a neighbor outside S. Then every node in S′ must be a pole, since in (4.2), every value f(u) on the left hand side is at most f(v), and at least one is less, so the average is less than f(v). Since the function is nonconstant, S is a nonempty proper subset of V, and since the graph is connected, S′ is nonempty. So there is a pole where the function attains its maximum. Similarly, there is another pole where it attains its minimum.
For any two nodes there is a nonconstant function that is harmonic at every other node. More generally, we have the following theorem.
Theorem 4.1.3 For a connected simple graph G = (V, E), nonempty set S ⊆ V and function
f0 : S → R, there is a unique function f : V → R extending f0 that is harmonic at each
node of V \ S.
We call this function f the harmonic extension of f0 . Note that if |S| = 1, then the
harmonic extension is a constant function (and so it is also harmonic at S, and it does not
contradict Proposition 4.1.2). If S = {a, b}, then a function that is harmonic outside S is
uniquely determined by two of its values, f (a) and f (b). Scaling the function by a real number
and translating by a constant preserves harmonicity at each node, so if we know the function
f with (say) f (a) = 0 and f (b) = 1, harmonic at v ̸= a, b, then g(v) = A + (B − A)f (v)
describes the harmonic extension with g(a) = A, g(b) = B. In this case equation (4.5) is
equivalent to saying that Fij = f (j) − f (i) defines a flow from a to b.
The uniqueness of the harmonic extension is easy by the argument in the proof of Proposition 4.1.2. Suppose that f and f′ are two harmonic extensions of f0. Then g = f − f′ is harmonic on V \ S, and satisfies g(v) = 0 at each v ∈ S. If g is the identically 0 function, then f = f′ as claimed. Otherwise, its minimum or its maximum is different from 0; but we have seen that g has a pole where it attains its maximum and another where it attains its minimum, so at least one of its poles would lie outside S, where g is harmonic; this is a contradiction.
4.2 Constructing harmonic functions
There are several ways to construct a harmonic function with given poles; we describe four of them. This abundance of constructions makes sense, because each of them illustrates an application area of harmonic functions.
4.2.1 Linear algebra
First we construct a harmonic function with two given poles a and b. Let 1a : V → R denote
the function which is 1 on a and 0 everywhere else. Consider the equation
$$Lf = \mathbf{1}_b - \mathbf{1}_a. \qquad (4.7)$$
(where L is the Laplacian of the connected graph G). Every function satisfying this equation
is harmonic outside {a, b}.
The Laplacian L is not quite invertible, but its nullspace is only one-dimensional, spanned
by the vector 1 = (1, . . . , 1)T . So (4.7) determines f up to adding the same real number to
every entry. Hence we may look for a special solution satisfying 1T f = 0, or equivalently,
Jf = 0 (where J ∈ RV ×V denotes the all-1 matrix). The trick is to observe that L + J is
invertible, and
(L + J)f = Lf = 1b − 1a ,
and so we can express f as
$$f = (L + J)^{-1}(\mathbf{1}_b - \mathbf{1}_a). \qquad (4.8)$$
All other solutions of (4.7) can be obtained from (4.8) by adding the same real number to
every entry; all other functions that are harmonic outside {a, b} can be obtained from these
by multiplying by a scalar.
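In matrix terms this construction is a single linear solve. The following Python sketch (a minimal illustration added here, using numpy and the sign convention of (4.1), under which L acts as A − D) computes the special solution (4.8):

    import numpy as np

    def harmonic_two_poles(adj, a, b):
        # solve Lf = 1_b - 1_a via f = (L + J)^{-1} (1_b - 1_a);
        # the result is the solution normalized so that 1^T f = 0
        A = np.asarray(adj, dtype=float)
        n = A.shape[0]
        L = A - np.diag(A.sum(axis=1))   # Lf(v) = sum of f(u) - f(v), u in N(v)
        J = np.ones((n, n))              # all-1 matrix; L + J is nonsingular
        rhs = np.zeros(n)
        rhs[b], rhs[a] = 1.0, -1.0       # the vector 1_b - 1_a
        return np.linalg.solve(L + J, rhs)

    # On the path 0-1-2-3 with poles a = 0 and b = 3, the values come
    # out equally spaced: the function is "linear" along the path.
    A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
    print(harmonic_two_poles(A, 0, 3))   # [ 1.5  0.5 -0.5 -1.5]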
Second, suppose that |S| ≥ 3, let a, b ∈ S, and let ga,b denote the function which is
harmonic outside {a, b} and satisfies ga,b (a) = 0 and ga,b (b) = 1. We then have
$$Lg_{a,b}(x) \begin{cases} \ne 0, & \text{if } x \in \{a,b\},\\ = 0, & \text{otherwise}.\end{cases}$$
Note that every linear combination of the functions ga,b is harmonic outside S.
Let $g'_{a,b}$ denote the restriction of $g_{a,b}$ to S. Fix $a \in S$. We claim that the functions $g'_{a,b}$ ($b \in S \setminus \{a\}$) are linearly independent. Indeed, a relation $\sum_b \alpha_b g'_{a,b} = 0$ implies that $\sum_b \alpha_b g_{a,b} = 0$ (by the uniqueness of the harmonic extension), which in turn implies that $\sum_b \alpha_b L g_{a,b} = 0$. The value of this last function at $b \in S \setminus \{a\}$ is $\alpha_b Lg_{a,b}(b)$, where $Lg_{a,b}(b) \ne 0$, and so $\alpha_b = 0$.

It follows that the functions $g'_{a,b}$ generate the $(|S|-1)$-dimensional space of all functions $h \in \mathbb{R}^S$ with $h(a) = 0$. This means that $f_0 - f_0(a)$ is a linear combination of the functions $g'_{a,b}$. The corresponding linear combination of the functions $g_{a,b}$ is the harmonic extension of $f_0 - f_0(a)$. Adding the constant $f_0(a)$ at every node, we get the harmonic extension of $f_0$.
4.2.2 Random walks
Let G = (V, E) be a connected graph and v ∈ V . Start a random walk Wv = (w0 , w1 , . . . ) at
w0 = v. This means that given wt , the node wt+1 is chosen randomly and uniformly from
the set of neighbors of wt . The theory of random walks (finite Markov chains) has a large
literature, and we will not be able to provide here even an introduction. I hope that the
following discussion, based on well known and intuitive facts, can be followed even by those
not familiar with the subject. For example, we need the fact that a random walk hits every
node with probability 1.
For any event A expressible in terms of the sequence Wv, we denote by Pv(A) its probability.
For example, Pv (wt = v) is the probability that after t steps, the random walk returns to the
starting node. Expectations Ev (X) are defined similarly for random variables X depending
on the walk Wv .
Fixing a nonempty set S ⊆ V , let fa (v) denote the probability that Wv hits a ∈ S before
it hits any other element of S. Then trivially
$$f_a(v) = \begin{cases} 1, & \text{if } v = a,\\ 0, & \text{if } v \in S \setminus \{a\}.\end{cases}$$
For $v \in V \setminus S$, we have
$$f_a(v) = P_v(W_v \text{ hits } S \text{ at } a) = \sum_{u \in N(v)} P_v(w_1 = u)\, P_u(W_u \text{ hits } S \text{ at } a) = \frac{1}{d(v)}\sum_{u \in N(v)} f_a(u).$$
This shows that fa is harmonic at all nodes outside S. Since every function on S can be written as a linear combination of the functions $f_a|_S$, it follows that every function on S has a harmonic extension.
This harmonic extension can be given a more direct interpretation. Given a function
f0 : S → R, we define f (v) for v ∈ V as the expectation of f0 (av ), where av is the (random)
node where a random walk Wv first hits S. Clearly f (v) = f0 (v) for v ∈ S, and it is easy to
check (by a similar computation as above) that f (v) is harmonic on V \ S.
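This interpretation immediately yields a Monte Carlo method for computing harmonic extensions; the following Python sketch (an added illustration, simple but by no means efficient) estimates f(v) by simulating walks until they first hit S:

    import random

    def harmonic_extension_mc(neighbors, f0, v, trials=100000):
        # estimate f(v) = E[f0(a_v)], where a_v is the node at which a
        # random walk started at v first hits S (the set of keys of f0)
        total = 0.0
        for _ in range(trials):
            w = v
            while w not in f0:               # walk until we first hit S
                w = random.choice(neighbors[w])
            total += f0[w]
        return total / trials

    # On the path 0-1-2-3 with f0(0) = 0 and f0(3) = 1, the harmonic
    # extension is f(v) = v/3, so the estimate should be close to 1/3.
    nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(harmonic_extension_mc(nbrs, {0: 0.0, 3: 1.0}, 1))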
4.2.3 Electrical networks
In this third construction for harmonic functions, we use facts from the physics of electrical
networks like Ohm’s Law and Kirchhoff’s two laws. This looks like stepping out of the realm
of mathematics; however, these results are all purely mathematical.
To be more precise, we consider these laws as axioms. Given a connected network whose
edges are conductors, and an assignment of potentials (voltages) to a subset S of its nodes,
we want to find an assignment of voltages Uij and currents Iij to the edges ij ∈ E so that
Uij = −Uji , Iij = −Iji (so the values are defined if we give the edge an orientation, and the
values are antisymmetric), and the following laws are satisfied:
Ohm's Law: $U_{ij} = R_{ij} I_{ij}$ (where $R_{ij}$ is the resistance of the edge ij; this does not depend on the orientation).

Kirchhoff's Current Law: $\sum_{u \in N(v)} I_{uv} = 0$ for every node $v \notin S$;
Kirchhoff's Voltage Law:
$$\sum_{ij \in E(\vec{C})} U_{ij} = 0 \qquad (4.9)$$
for every oriented cycle $\vec{C}$ in the graph. Note that every cycle has two orientations, but
because of antisymmetry, these two oriented cycles give the same condition. Therefore, in a slightly sloppy way, we can talk about Kirchhoff's Voltage Law for an undirected cycle. As a degenerate case, we could require the Voltage Law for oriented cycles of length 2, consisting of an edge and its reverse, in which case the law says that $U_{ij} + U_{ji} = 0$, which means that U is antisymmetric.
The following reformulation of Kirchhoff’s Voltage Law will be useful for us: there exist
potentials p(i) ∈ R (i ∈ V ) so that Uij = p(j) − p(i). This sounds like a statement in physics,
but in fact it is a rather simple fact about graphs:
Proposition 4.2.1 Given a connected graph G and an antisymmetric assignment Uij of real
numbers to the oriented edges, there exist real numbers p(i) (i ∈ V ) so that Uij = p(j) − p(i)
if and only if (4.9) holds for every cycle C.
Proof. It is trivial that if a “potential” exists for a given assignment of voltages Uij , then
(4.9) is satisfied. Conversely, suppose that (4.9) is satisfied. We start with some consequences.
If $\vec{W}$ is any closed walk on the graph, then $\sum_{ij \in E(\vec{W})} U_{ij} = 0$. This follows by induction on the length of the walk. Consider the first time when we encounter a node already visited. If this is the end of the walk, then the walk is a single oriented cycle, and the assertion follows by Kirchhoff's Voltage Law. Otherwise, this event splits the walk into two closed walks, and since the sum of voltages is zero for each of them by induction, it is zero for the whole walk.
If $\vec{W_1}$ and $\vec{W_2}$ are open walks starting and ending at the same nodes, then $\sum_{ij \in E(\vec{W_1})} U_{ij} = \sum_{ij \in E(\vec{W_2})} U_{ij}$. Indeed, concatenating $\vec{W_1}$ with the reversal of $\vec{W_2}$, we get a closed walk whose total voltage is $\sum_{ij \in E(\vec{W_1})} U_{ij} - \sum_{ij \in E(\vec{W_2})} U_{ij}$, and this is 0 by what we proved before.
Now let us turn to the construction of the potentials. Fix any node v as a root, and assign to it an arbitrary potential p(v). For every other node u, consider any walk $\vec{W}$ from v to u, and define $p(u) = p(v) + \sum_{ij \in E(\vec{W})} U_{ij}$. This value is independent of the choice of $\vec{W}$. Furthermore, for any neighbor u′ of u, considering the walk $\vec{W} + uu'$, we see that $p(u') = p(u) + U_{uu'}$, which shows that p is indeed a potential for the voltages U.
To check condition (4.9) for every cycle in the graph is a lot of work; but these equations are not independent, and it is enough to check whether U satisfies them for a basis. There are several useful ways to select a basis; here we state one that is particularly useful, and another one as Exercise 4.3.6.
Corollary 4.2.2 Let G be a 2-connected planar graph and let Uij be an antisymmetric assignment of real numbers to the oriented edges. There exist real numbers p(i) (i ∈ V ) so
that Uij = p(j) − p(i) if and only if (4.9) is satisfied by the boundary cycle of every bounded
country.
Proof. We want to show that (4.9) is satisfied by every cycle C. Fix any embedding of
the graph in the plane, and (say) the clockwise orientation of C. Let us sum equations (4.9)
corresponding to countries contained inside C; the terms corresponding to internal edges
cancel (by antisymmetry), and we get equation (4.9) corresponding to C.
We have given these simple arguments in detail, because essentially the same argument will be used repeatedly in the book, and we are not going to describe it every time; we will just refer to it as the "potential argument". (Alternatively, we could formulate Proposition 4.2.1 and Corollary 4.2.2 for an arbitrary abelian group instead of the real numbers.)
The main result from the theory of electrical networks we are going to use is that given a connected graph with resistances assigned to the edges and potentials assigned to a nonempty set S of nodes, there is always a unique assignment of potentials to the remaining nodes and currents to the edges satisfying Ohm's and Kirchhoff's Laws.
Returning to the proof of Theorem 4.1.3, consider the graph G as an electrical network,
where each edge represents a conductor with unit resistance. Given a set S ⊆ V and a
function f0 : S → R, let a current flow through the network while keeping each v ∈ S at
potential f0 (v). Then the potential f (v) defined for all nodes v is an extension of f0 , and it
is harmonic at every node $v \in V \setminus S$. Indeed, the current through an edge uv is $f(u) - f(v)$ by Ohm's Law, and hence by Kirchhoff's Current Law, $\sum_{j \in N(i)} (f(j) - f(i)) = 0$ for every free node i.
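Computationally, fixing the potentials on S and imposing Kirchhoff's Current Law at the free nodes is again one linear solve, this time with the usual Laplace matrix D − A restricted to the free nodes. A small Python sketch (an added illustration on a concrete graph):

    import numpy as np

    # Unit-resistance network on the 4-cycle 0-1-2-3-0, with node 0 held
    # at potential 1 and node 2 at potential 0.
    A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # Laplace matrix D - A
    f = np.zeros(4)
    f[0] = 1.0                               # potentials on S = {0, 2}
    free = [1, 3]
    f[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, [0, 2])] @ f[[0, 2]])
    print(f)                                 # [1.  0.5 0.  0.5]
    # current on edge ij is f(i) - f(j); Current Law at node 1:
    print((f[0] - f[1]) - (f[1] - f[2]))     # 0.0: inflow equals outflow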
4.2.4 Rubber bands
Consider the edges of the graph G as ideal rubber bands (or springs) with unit Hooke constant
(i.e., it takes h units of force to stretch them to length h). Grab the nodes a and b and stretch
the graph so that their distance is 1. Let the structure find its equilibrium, and let f (v) be the
distance of node v from a. Since these nodes are at equilibrium, the function f is harmonic
for v ̸= a, b, and satisfies f (a) = 0 and f (b) = 1. To see the last fact, note that the edge uv
pulls v with force f (u) − f (v) (the force is positive if it pulls in the positive direction), and
so (4.5) is just the condition of the equilibrium.
More generally, if we have a set S ⊆ V and we fix the position of each node v ∈ S at a given point f0(v) of the real line, and let the remaining nodes find their equilibrium, then the position f(v) of a node will be a function that extends f0 and that is harmonic at each node $v \notin S$.
We’ll come back to this construction, extending it to higher dimensions. In particular,
we’ll prove that the equilibrium exists and is unique without any reference to physics.
4.3 Relating different models
A consequence of the uniqueness property in Theorem 4.1.3 is that the harmonic functions
constructed in sections 4.2.1, 4.2.2, 4.2.3 and 4.2.4 are the same. As an application of this
idea, we show the following interesting identities (see Nash-Williams [190], Chandra et al. [43]).
Considering the graph G as an electrical network, where every edge has resistance 1, let
R(a, b) denote the effective resistance between nodes a and b. Considering the graph G as a rubber band structure in equilibrium, with the two nodes a and b nailed down at 1 and 0, let Fab
denote the force pulling the nails. Doing a random walk on the graph, let comm(a, b) denote
the commute time between nodes a and b (i.e., the expected time it takes to start at a, walk
until you first hit b, and then walk until you first hit a again).
Theorem 4.3.1 The effective resistance R(a, b) between two nodes a, b of a graph, the force
Fab needed to pull them distance 1 apart, and the commute time comm(a, b) of the random
walk between them, are related by the equations
$$R(a,b) = \frac{1}{F_{ab}} = \frac{\mathrm{comm}(a,b)}{2m}.$$
Proof. Consider the function $f \in \mathbb{R}^V$ satisfying f(a) = 0, f(b) = 1 and harmonic for v ̸= a, b (we know that this is uniquely determined). We are going to express the quantities in the formula by $Lf(a) = \sum_{v \in N(a)} f(v)$ (this will prove the theorem):
$$R(a,b) = \frac{1}{Lf(a)}, \qquad F_{ab} = Lf(a), \qquad \mathrm{comm}(a,b) = \frac{2m}{Lf(a)}. \qquad (4.10)$$
By the discussion in Section 4.2.3, f(v) is equal to the potential of v if we fix the potentials of a and b at 0 and 1. The current through the network is Lf(a). So the effective resistance is R(a, b) = 1/Lf(a).
The equation $F_{ab} = Lf(a)$ is easily derived from Hooke's Law: every edge aj pulls a with a force of $f(j) - f(a) = f(j)$, and hence the total force pulling a is $\sum_{j \in N(a)} f(j) = Lf(a)$.
The assertion about commute time is more difficult to prove. We know that f(u) is the probability that a random walk starting at u visits b before a. Hence $p = \frac{1}{d(a)}\sum_{u \in N(a)} f(u) = Lf(a)/d(a)$ is the probability that a random walk starting at a hits b before returning to a.
Let T be the first time when a random walk starting at a returns to a and let S be the
first time when it returns to a after visiting b. We know from the theory of random walks
that E(T) = 2m/d(a) and by definition, E(S) = comm(a, b). Clearly T ≤ S and T = S
means that the random walk starting at a hits b before returning to a. Thus P(T = S) = p.
If T < S, then after the first T steps, we have to walk from a until we reach b and then return to a. Hence $E(S - T) = (1-p)E(S)$, and so
$$\frac{2m}{d(a)} = E(T) = E(S) - E(S - T) = pE(S) = \frac{Lf(a)}{d(a)}\,\mathrm{comm}(a,b).$$
Simplifying, we get (4.10).
For the rubber band model, imagine that we slowly stretch the graph until nodes a and b are at distance 1. When they are at distance t, the force pulling our hands is $tF_{ab}$, and hence the energy we have to spend is
$$\int_0^1 t F_{ab}\, dt = \frac{1}{2} F_{ab}.$$
This energy accumulates in the rubber bands. By a similar argument, the energy stored in the rubber band ij is $(f(i) - f(j))^2/2$. By conservation of energy, we get the identity
$$\sum_{ij \in E} \bigl(f(i) - f(j)\bigr)^2 = F_{ab}. \qquad (4.11)$$
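Theorem 4.3.1 and the identity (4.11) are easy to check numerically. The following Python sketch (an added sanity check on one concrete graph) computes the harmonic f with f(a) = 0, f(b) = 1 as in Section 4.2.3 and reads off all three quantities from Lf(a):

    import numpy as np

    # graph on {0,1,2,3} with edges 01, 02, 12, 13, 23; a = 0, b = 3
    A = np.array([[0,1,1,0],[1,0,1,1],[1,1,0,1],[0,1,1,0]], dtype=float)
    n, m, a, b = 4, int(A.sum()) // 2, 0, 3
    L = np.diag(A.sum(axis=1)) - A
    free = [1, 2]
    f = np.zeros(n)
    f[b] = 1.0
    f[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, [a, b])] @ f[[a, b]])
    Lf_a = sum(A[a, v] * f[v] for v in range(n))    # Lf(a) as in (4.1), f(a) = 0
    energy = sum(A[i, j] * (f[i] - f[j]) ** 2
                 for i in range(n) for j in range(i + 1, n))
    print(1 / Lf_a)       # R(a,b) = 1.0: two parallel paths of resistance 2
    print(Lf_a, energy)   # F_ab = 1.0, equal to the energy, as in (4.11)
    print(2 * m / Lf_a)   # comm(a,b) = 10.0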
A further application of this equivalence between our three models and this formula for energy
is the following fact.
Theorem 4.3.2 Let G = (V, E) be a connected graph, a, b ∈ V, and let G′ be obtained from G by connecting two nodes by a new edge.

(a) The ratio comm(a, b)/|E| does not increase.

(b) (Rayleigh Monotonicity) The effective resistance R(a, b) does not increase.

(c) If the nodes a and b are nailed down at 0 and 1, the force pulling the nails in the equilibrium does not decrease.
Proof. The three statements are equivalent by Theorem 4.3.1; we prove (c). By (4.11), it suffices to prove that the equilibrium energy does not decrease. Consider the equilibrium position of G′, and delete the new edge. The contribution of this edge to the energy was nonnegative, so the total energy does not increase. The current position of the nodes may not be in equilibrium in G; but the equilibrium position minimizes the energy, so when the nodes move to the equilibrium of G, the energy can only decrease further. Hence the equilibrium energy of G is at most that of G′.
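For such experiments, the effective resistance is conveniently computed from the pseudoinverse of the Laplace matrix (cf. Exercises 4.3.4 and 4.3.8 below). The following sketch (an added illustration) checks Rayleigh Monotonicity on a path:

    import numpy as np

    def eff_res(A, a, b):
        # R(a,b) = (1_a - 1_b)^T L^{-1} (1_a - 1_b), where L^{-1} is the
        # Moore-Penrose pseudoinverse of the Laplace matrix
        Li = np.linalg.pinv(np.diag(A.sum(axis=1)) - A)
        return Li[a, a] + Li[b, b] - 2 * Li[a, b]

    A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
    print(eff_res(A, 0, 3))   # path 0-1-2-3: R(0,3) = 3.0
    A[0, 3] = A[3, 0] = 1     # add the edge 03
    print(eff_res(A, 0, 3))   # 0.75: resistance 3 in parallel with 1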
Exercise 4.3.3 Let L be the Laplacian of a connected graph G. Prove that L+J
is nonsingular.
Exercise 4.3.4 The Laplace matrix L is singular, and hence it does not have an inverse. (a) Prove that it has a "Moore–Penrose pseudoinverse": a V × V matrix $L^{-1}$, such that $LL^{-1}$ and $L^{-1}L$ are symmetric, $LL^{-1}L = L$, and $L^{-1}LL^{-1} = L^{-1}$. (b) Prove that $L^{-1}$ is symmetric and positive semidefinite. (c) Prove that $L^{-1}$ is uniquely determined. (d) Show how the pseudoinverse $L^{-1}$ can be computed by inverting the nonsingular matrix L + J.
Exercise 4.3.5 Prove that for every graph G we can define a family of symmetric
matrices Lt (t ∈ R) so that L1 = L, Lt is a continuous function of t, and Lt+s =
Lt Ls for every two s, t ∈ R. Prove that such a family of symmetric matrices is
unique.
Exercise 4.3.6 Let G be a connected graph and T a spanning tree of G. For each edge e ∈ E(G) \ E(T), let Ce denote the cycle of G consisting of e and the path in T connecting the endpoints of e. Prove that the equations (4.9) corresponding to these cycles Ce form a basis for all equations (4.9).
Exercise 4.3.7 Prove that for every connected graph G, the effective resistances R(i, j) as well as their square roots satisfy the triangle inequality: for any three nodes a, b and c, $R(a,c) \le R(a,b) + R(b,c)$ and $\sqrt{R(a,c)} \le \sqrt{R(a,b)} + \sqrt{R(b,c)}$.
Exercise 4.3.8 Let G = (V, E) be a connected graph and let $u_i \in \mathbb{R}^V$ (i ∈ V) be the i-th column of $L^{-1/2}$. Prove that $|u_i - u_j| = \sqrt{R(i,j)}$ for any two nodes i and j.
Exercise 4.3.9 Let a, b be two nodes of a connected graph G. We define the
hitting time H(a, b) as the expected number of steps before a random walk, started
at a, will reach b. Consider the graph as a rubber band structure as above, but
also attach a weight of d(v) to each node v. Nail the node b to the wall and let
the graph find its equilibrium. Prove that a will be at a distance of H(a, b) below
b.
Exercise 4.3.10 Show by an example that the commute time comm(a, b) can
increase when we add an edge to the graph.
Exercise 4.3.11 Let G′ denote the graph obtained from the graph G by identifying nodes a and b, and let T(G) denote the number of spanning trees of G. Prove that R(a, b) = T(G′)/T(G).
Exercise 4.3.12 Let G be a graph and e, f ∈ E(G), e ̸= f. Prove that T(G)T(G − e − f) ≤ T(G − e)T(G − f).
Chapter 5
Rubber bands
5.1 Geometric representations, a.k.a. vector labelings
In the previous chapter, we already used the idea of looking at the graph in a geometric
or physical way, by placing its nodes on the line and replacing the edges by rubber bands.
Since, however, the positions of the nodes could be described by real numbers, this was
more a visual support for an algebraic treatment than geometry. In this chapter we study
a similar construction in higher dimensions, where the geometric point of view will play a
more important role.
A vector-labeling in d dimensions of a graph G = (V, E) is a map x : V → Rd . We will
also write (xi : i ∈ V ) for such a representation. We can also think of the vectors xi as
the positions of the nodes in Rd . The mapping i 7→ xi can be thought of as a “drawing”,
or “embedding”, or “geometric representation” of the node set of the graph in a euclidean
space. (We think of the edges as straight line segments.) These phrases are useful as a visual
help for following certain arguments, but all three are ambiguous, and we are going to use
“vector labeling” in formal statements. At this time, we don’t assume that the mapping
i 7→ xi is injective; this will be a pleasant property to have, but not always achievable.
For every vector labeling x : V → Rd, it is often useful to consider the matrix X whose columns are the vectors xi. This is a [d] × V matrix (the rows are indexed by 1, . . . , d, the columns are indexed by the nodes). We denote the space of such matrices by $\mathbb{R}^{[d]\times V}$. Of course, this matrix X contains the same information as the labeling itself, and we will call it the matrix of the labeling.
Remark 5.1.1 While “vector labeling” and “geometric representation” (when defined appropriately) mean the same thing, they do suggest two different ways of visualizing the graph.
The former is the computer science view: we have a graph and store additional information
for each node. The latter considers the graph as a structure in euclidean space. The main
point in this book is to relate geometric and graph-theoretic properties, so the second way
Figure 5.1: Two ways of looking at a graph with vector labeling (not all vector
labels are shown).
of visualizing is often very useful (see Figure 5.1).
In this chapter, we study a special kind of vector labeling, obtained from a simple physical
model of a graph.
5.2 Rubber band representation
Let G = (V, E) be a connected graph and ∅ ̸= S ⊆ V . Fix an integer d ≥ 1 and a map
x0 : S → Rd . We extend this to a representation x : V → Rd (a vector-labeling of G) as
follows.
First, let’s give an informal description. Replace the edges by ideal rubber bands (satisfying Hooke’s Law). Think of the nodes in S as nailed to their given position (node i ∈ S
to x0i ∈ Rd ), but let the other nodes settle in equilibrium. (We are going to see that this
equilibrium position is uniquely determined.) We call this equilibrium position of the nodes
the rubber band representation of G in Rd extending x0 . The nodes in S will be called nailed,
and the other nodes, free (Figure 5.2).
Figure 5.2: Rubber band representation of a planar graph and of the Petersen graph.
To be precise, let xi = (xi1 , . . . , xid )T ∈ Rd be the position of node i ∈ V . By definition,
xi = x0i for i ∈ S. The energy of this representation is defined as
$$E(x) = \sum_{ij \in E} |x_i - x_j|^2 = \sum_{ij \in E}\sum_{k=1}^{d} (x_{ik} - x_{jk})^2. \qquad (5.1)$$
We want to find the representation with minimum energy, subject to the boundary conditions:
$$\text{minimize } E(x) \quad \text{subject to } x_i = x^0_i \text{ for all } i \in S. \qquad (5.2)$$
Lemma 5.2.1 The function E(x) is strictly convex.
Proof. In (5.1), every function $(x_{ik} - x_{jk})^2$ is convex, so E is convex. Suppose that it is not strictly convex, which means that $E\bigl(\frac{x+y}{2}\bigr) = \frac{1}{2}\bigl(E(x) + E(y)\bigr)$ for two representations $x \ne y : V \to \mathbb{R}^d$. Then for every edge ij and every 1 ≤ k ≤ d we have
$$\left(\frac{x_{ik}+y_{ik}}{2} - \frac{x_{jk}+y_{jk}}{2}\right)^2 = \frac{(x_{ik}-x_{jk})^2 + (y_{ik}-y_{jk})^2}{2},$$
which implies that $x_{ik} - y_{ik} = x_{jk} - y_{jk}$. Since $x_{ik} = y_{ik}$ for every i ∈ S, using that G is connected and S is nonempty it follows that $x_{ik} = y_{ik}$ for every i ∈ V. So x = y, which is a contradiction.
It is trivial that if any of the $x_i$ tends to infinity, then E(x) tends to infinity (still assuming that the boundary conditions (5.2) hold, where S is nonempty). Together with Lemma 5.2.1, this implies that the representation with minimum energy is uniquely determined. If $i \in V \setminus S$ is a free node, then at the representation with minimal energy the partial derivative of E(x) with respect to every coordinate of $x_i$ must be 0. This means that
$$\sum_{j \in N(i)} (x_i - x_j) = 0. \qquad (5.3)$$
We can rewrite this as
$$x_i = \frac{1}{d_i}\sum_{j \in N(i)} x_j. \qquad (5.4)$$
This equation means that every free node is placed in the center of gravity of its neighbors.
Equation (5.3) has a nice physical meaning: the rubber band connecting i and j pulls i with
force xj − xi , so (5.3) states that the forces acting on i sum to 0 (as they should at the
equilibrium).
We saw in Chapter 4 that every 1-dimensional rubber band representation gives rise to a
function that is harmonic on the free nodes. Equation (5.4) extends this to higher dimensional
rubber band representations: Every coordinate function is harmonic at every free node.
It will be useful to extend the rubber band construction to the case when the edges of G
have arbitrary positive weights (or “strengths”). Let wij > 0 denote the strength of the edge
ij. We define the energy function of a representation i 7→ xi by
$$E_w(x) = \sum_{ij \in E} w_{ij}\,|x_i - x_j|^2. \qquad (5.5)$$
The simple arguments above remain valid: Ew is strictly convex if at least one node is nailed,
there is a unique optimum, and for the optimal representation every i ∈ V \ S satisfies
$$\sum_{j \in N(i)} w_{ij}(x_i - x_j) = 0. \qquad (5.6)$$
This we can rewrite as
$$x_i = \frac{1}{\sum_{j \in N(i)} w_{ij}} \sum_{j \in N(i)} w_{ij} x_j. \qquad (5.7)$$
Thus xi is no longer in the center of gravity of its neighbors, but it is still a convex combination
of them with positive coefficients. In particular, it is in the relative interior of the convex
hull of its neighbors.
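Since each coordinate of the equilibrium satisfies the weighted harmonicity condition (5.7), computing a rubber band representation amounts to d independent linear solves with the weighted Laplace matrix. The following Python sketch illustrates this (an added illustration; when S is the node set of a country of a planar graph, nailed to a convex polygon, it produces exactly the drawings of Section 5.3.1 below):

    import numpy as np

    def rubber_band(W, nailed, d=2):
        # equilibrium positions: W is the symmetric matrix of edge
        # strengths (w_ij > 0 on edges, 0 otherwise); nailed maps a node
        # to its fixed position in R^d; the free nodes are solved for
        W = np.asarray(W, dtype=float)
        n = W.shape[0]
        L = np.diag(W.sum(axis=1)) - W       # weighted Laplace matrix
        S = sorted(nailed)
        F = [v for v in range(n) if v not in nailed]
        X = np.zeros((n, d))
        X[S] = [nailed[v] for v in S]
        # condition (5.6) for the free nodes, all d coordinates at once
        X[F] = np.linalg.solve(L[np.ix_(F, F)], -L[np.ix_(F, S)] @ X[S])
        return X

    # K4 with a nailed triangle: the free node settles at the center of
    # gravity of its neighbors, here the centroid (1/3, 1/3).
    W = np.ones((4, 4)) - np.eye(4)
    print(rubber_band(W, {0: (0, 0), 1: (1, 0), 2: (0, 1)})[3])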
5.3 Rubber bands, planarity and polytopes

5.3.1 How to draw a graph?
The rubber band method was first analyzed by Tutte [236]. In this classical paper he describes
how to use “rubber bands” to draw a 3-connected planar graph with straight edges and convex
faces.
Let G = (V, E) be a 3-connected planar graph, and let p0 be any country of it. Let C0
be the cycle bounding p0 . Let us nail the nodes of C0 to the vertices of a convex polygon P0
in the plane, in the same cyclic order. Let i 7→ vi be the rubber band representation of G in
the plane extending this map. We draw the edges of G as straight line segments connecting
the appropriate endpoints. Figure 5.3 shows the rubber band representation of the skeletons
of the five platonic bodies.
By the above, we know that each node not on C0 is positioned at the center of gravity of
its neighbors. Tutte’s main result about this embedding is the following:
Theorem 5.3.1 If G is a simple 3-connected planar graph, then every rubber band representation of G (with the nodes of a particular country p0 nailed to a convex polygon) gives
an embedding of G in the plane.
Proof. Let G = (V, E), and let x : V → R2 be a rubber band representation of G. Let ℓ
be a line intersecting the interior of the polygon P0 , and let U0 , U1 and U2 denote the sets
of nodes of G mapped on ℓ and on the two (open) sides of ℓ, respectively.
Figure 5.3: Rubber band representations of the skeletons of platonic bodies
The key to the proof is the following claim.
Claim 1. The sets U1 and U2 induce connected subgraphs of G.
Let us prove this for U1 . Clearly the nodes of p0 in U1 form a (nonempty) path P1 . We
may assume that ℓ does not pass through any node (by shifting it very little in the direction of U1) and that it is not parallel to any edge (by rotating it by a small angle). Let a ∈ U1 \ V(C0);
we show that it is connected to P1 by a path in U1 . Since a is a convex combination of its
neighbors, it must have a neighbor a1 such that xa1 is in U1 and at least as far away from ℓ
as xa .
At this point, we have to deal with an annoying degeneracy. There may be several nodes
represented by the same vector xa (later it will be shown that this does not occur). Consider
all nodes represented by xa , and the connected component H of the subgraph of G induced
by them containing a. If H contains a nailed node, then it contains a path from a to P1 , all
in U1 . Else, there must be an edge connecting a node a′ ∈ V (H) to a node outside H (since
G is connected). Since the system is in equilibrium, a′ must have a neighbor a1 such that
xa1 is farther away from ℓ than xa = xa′ (here we use that no edge is parallel to ℓ). Hence
a1 ∈ U1, and thus a is connected with a1 by a path in U1.
Either a1 is nailed (and we are done), or we can find a node a2 ∈ U1 such that a1 is
connected to a2 by a path in U1 , and xa2 is farther from ℓ than xa1 , etc. This way we get a
path Q in G that starts at a, and stays in U1 , and eventually must hit P1 . This proves the
claim (Figure 5.4).
Next, we exclude some possible degeneracies. (Note that we are not assuming any more that no edge is parallel to ℓ: this assumption was made for the proof of Claim 1 only.)
Claim 2. Every node u ∈ U0 has neighbors in both U1 and U2 .
This is trivial if u ∈ V (C0 ), so suppose that u is a free node. If u has a neighbor in U1 ,
Figure 5.4: Left: every line cuts a rubber band representation into connected parts.
Right: Each node on a line must have neighbors on both sides.
then it must also have a neighbor in U2; this follows from the fact that xu is the center of gravity
of the points xv , v ∈ N (u). So it suffices to prove that not all neighbors of u are contained
in U0 .
Let T be the set of nodes u ∈ U0 with N (u) ⊆ U0 , and suppose that this set is nonempty.
Consider a connected component H of G[T ] (H may be a single node), and let S be the set
of neighbors of H outside H. Since V (H) ∪ S ⊆ U0 , the set V (H) ∪ S cannot contain all
nodes, and hence S is a cutset. Thus |S| ≥ 3 by 3-connectivity.
If a ∈ S, then a ∈ U0 by the definition of S, but a has a neighbor not in U0 , and so it has
neighbors in both U1 and U2 by the argument above (see Figure 5.4). The set V (H) induces
a connected graph by definition, and U1 and U2 induce connected subgraphs by Claim 1. So
we can contract these sets to single nodes. These three nodes will be adjacent to all nodes in
S. So G can be contracted to K3,3 , which is a contradiction since it is planar. This proves
Claim 2.
Claim 3. Every country has at most two nodes in U0 .
Suppose that a, b, c ∈ U0 are nodes of a country p. Clearly p ̸= p0 . Let us create a new
node d and connect it to a, b and c; the resulting graph G′ is still planar. On the other hand,
the same argument as in the proof of Claim 2 (with V(H) = {d} and S = {a, b, c}) shows that
G′ has a K3,3 minor, which is a contradiction.
Claim 4. Let p and q be the two countries sharing an edge ab, where a, b ∈ U0. Then V(p) \ {a, b} ⊆ U1 and V(q) \ {a, b} ⊆ U2 (or the other way around).
Suppose not; then p has a node c ̸= a, b and q has a node d ̸= a, b such that (say) c, d ∈ U1. (Note that c, d ∉ U0 by Claim 3.) By Claim 1, there is a path P in U1 connecting c and d (Figure 5.5). Claim 2 implies that both a and b have neighbors in U2, and again by Claim 1, these can be connected by a path in U2. This yields a path P′ connecting a and b whose
inner nodes are in U2 . By their definition, P and P ′ are node-disjoint. But look at any
planar embedding of G: the edge ab, together with the path P ′ , forms a Jordan curve that
does not go through c and d, but separates them, so P cannot exist.
Figure 5.5: Two adjacent countries having nodes on the same side of ℓ (left), and
the supposedly disjoint paths in the planar embedding (right).
Claim 5. The boundary of every country q is mapped onto a convex polygon Pq .
This is immediate from Claim 4, since the line of an edge of a country cannot intersect
its interior.
Claim 6. The interiors of the polygons Pq (q ̸= p0 ) are disjoint.
Let x be a point inside Pp0; we want to show that it is covered by one Pq only. Clearly we may assume that x is not on the image of any edge. Draw a line through x that does not go through the image of any node, and see how many times its points are covered by interiors
of such polygons. As we enter Pp0 , this number is clearly 1. Claim 4 says that as the line
crosses an edge, this number does not change. So x is covered exactly once.
Now the proof is essentially finished. Suppose that the images of two edges have a
common point. Then two of the countries incident with them would have a common interior
point, which is a contradiction except if these countries are the same, and the two edges are
consecutive edges of this country.
Before going on, let’s analyze this proof a little. The key step, namely Claim 1, is very
similar to a basic fact concerning convex polytopes, namely Corollary 3.1.2. Let us call a
vector-labeling of a graph section-connected, if for every open halfspace, the subgraph induced
by those nodes that are mapped into this halfspace is either connected or empty. The skeleton
of a polytope, as a representation of itself, is section-connected; and so is the rubber-band
representation of a planar graph. Note that the proof of Claim 1 did not make use of the
planarity of G; in fact, the same proof gives:
Lemma 5.3.2 Let G be a connected graph, and let u be a vector-labeling of an induced
subgraph H of G (in any dimension). If u is section-connected, then its rubber-band extension
to G is section-connected as well.
5.3.2 How to lift a graph?
An old construction of Cremona and Maxwell can be used to “lift” Tutte’s rubber band
representation to a Steinitz representation. We begin with analyzing the reverse procedure:
projecting a convex polytope on a face. This procedure is similar to the construction of
the Schlegel diagram from Section 2.3, but we consider orthogonal projection instead of
projection from a point.
Let P be a convex 3-polytope, let F be one of its faces, and let Σ be the plane containing F. Suppose that for every vertex v of P, its orthogonal projection onto Σ is an interior point of F; we say that the polytope P is straight over the face F.
Theorem 5.3.3 (a) Let P be a 3-polytope that is straight over its face F , and let G be the
orthogonal projection of the skeleton of P onto the plane of F . Then we can assign positive
strengths to the edges of G so that G will be the rubber band representation of the skeleton
with the vertices of F nailed.
(b) Let G = (V, E) be a 3-connected planar graph, and let T be a triangular country of G.
Let
$$i \mapsto u_i = \begin{pmatrix} u_{i1} \\ u_{i2} \end{pmatrix} \in \mathbb{R}^2$$
be a rubber band representation of G in the plane obtained by nailing T to any triangle in
the plane. Then there is a convex polytope P in 3-space such that T is a face of P , and the
orthogonal projection of P to the plane of T gives the rubber band representation u.
In other words (b) says that we can assign a number ηi ∈ R to each node i ∈ V such that
ηi = 0 for i ∈ V(T), ηi > 0 for i ∈ V \ V(T), and the mapping
$$i \mapsto v_i = \begin{pmatrix} u_i \\ \eta_i \end{pmatrix} = \begin{pmatrix} u_{i1} \\ u_{i2} \\ \eta_i \end{pmatrix} \in \mathbb{R}^3$$
is a Steinitz representation of G.
Example 5.3.4 (Triangular Prism) Consider the rubber band representation of a triangular prism in Figure 5.6. If this is an orthogonal projection of a convex polyhedron, then
the lines of the three edges pass through one point: the point of intersection of the planes of
the three quadrangular faces. It is easy to see that this condition is necessary and sufficient
for the picture to be a projection of a prism. To see that it is satisfied by a rubber band
representation, it suffices to note that the inner triangle is in equilibrium, and this implies
that the lines of action of the forces acting on it must pass through one point.
Now we are ready to prove Theorem 5.3.3.
Figure 5.6: The rubber band representation of a triangular prism is the projection
of a polytope.
Proof. (a) Let’s call the plane of the face F “horizontal”, spanned by the first two coordinate
axes, and the third coordinate direction “vertical”, so that the polytope is “above” the plane
of F . For each face p, let gp be a normal vector. Since no face is vertical, no normal vector
gp is horizontal, and hence we can normalize gp so that its third coordinate is 1. Clearly for
each face p, gp will be an outer normal, except for p = F , when gp is an inner normal.
Write $g_p = \binom{h_p}{1}$. Let ij be any edge of G, and let p and q be the two countries on the left and right of ij. Then
$$(h_p - h_q)^T(u_i - u_j) = 0. \qquad (5.8)$$
Indeed, both gp and gq are orthogonal to the edge vi vj of the polytope, and therefore so is
their difference, and
$$(h_p - h_q)^T(u_i - u_j) = \begin{pmatrix} h_p - h_q \\ 0 \end{pmatrix}^T \begin{pmatrix} u_i - u_j \\ \eta_i - \eta_j \end{pmatrix} = (g_p - g_q)^T(v_i - v_j) = 0.$$
We have $h_F = 0$, since the face F is horizontal.
Let R denote the counterclockwise rotation in the plane by 90°; then it follows that $h_p - h_q$ is parallel to $R(u_j - u_i)$, and so there are real numbers $c_{ij}$ such that
$$h_p - h_q = c_{ij} R(u_j - u_i). \qquad (5.9)$$
We claim that cij > 0. Let k be any node on the boundary of p different from i and j. Then
uk is to the left of the edge ij, and hence
$$(u_k - u_i)^T R(u_j - u_i) > 0. \qquad (5.10)$$
Going up to the space, convexity implies that the point vk is below the plane of the face q,
and hence $g_q^T v_k < g_q^T v_i$. Since $g_p^T v_k = g_p^T v_i$, this implies that
$$c_{ij}\bigl(R(u_j - u_i)\bigr)^T(u_k - u_i) = (h_p - h_q)^T(u_k - u_i) = (g_p - g_q)^T(v_k - v_i) > 0. \qquad (5.11)$$
Comparing with (5.10), we see that $c_{ij} > 0$.
Let us make a remark here that will be needed later. Using that not only gp − gq , but
also $g_p$ is orthogonal to $v_j - v_i$, we get that
$$0 = g_p^T(v_j - v_i) = h_p^T(u_j - u_i) + \eta_j - \eta_i,$$
and hence
$$\eta_j - \eta_i = -h_p^T(u_j - u_i). \qquad (5.12)$$
To complete the proof of (a), we argue that the projection of the skeleton is indeed a
rubber band embedding with strengths cij , with F nailed. We want to prove that every free
node i is in equilibrium, i.e.,
$$\sum_{j \in N(i)} c_{ij}(u_j - u_i) = 0. \qquad (5.13)$$
Using the definition of $c_{ij}$, it suffices to prove that
$$\sum_{j \in N(i)} c_{ij} R(u_j - u_i) = \sum_{j \in N(i)} (h_{p_j} - h_{q_j}) = 0,$$
where $p_j$ is the face to the left and $q_j$ is the face to the right of the edge ij. But this is clear, since every term occurs once with positive and once with negative sign.
(b) The proof consists of going through the steps of the proof of part (a) in reverse order:
given the Tutte representation, we first reconstruct the vectors hp so that all equations (5.8)
are satisfied, then using these, we reconstruct the numbers ηi so that equations (5.12) are
satisfied. It will not be hard to verify then that we get a Steinitz representation.
We need a little preparation to deal with edges on the boundary triangle. Recall that
we can think of Fij = cij (uj − ui ) as the force with which the edge ij pulls its endpoint i.
Equilibrium means that for every free node i,
$$\sum_{j \in N(i)} F_{ij} = 0. \qquad (5.14)$$
This does not hold for the nailed nodes, but we can modify the definition of Fij along the
three boundary edges so that Fij remains parallel to the edge uj − ui and (5.14) will hold
for all nodes (this is the only point where we use that the outer country is a triangle). This
is natural by a physical argument: let us replace the outer edges by rigid bars, and remove
the nails. The whole structure will remain in equilibrium, so appropriate forces must act in
the edges ab, bc and ac to keep balance. To translate this to mathematics, one has to work
a little; this is left to the reader as Exercise 5.4.10.
We claim that we can choose vectors hp for all faces p so that
$$h_p - h_q = RF_{ij} \qquad (5.15)$$
if ij is any edge and p and q are the two faces on its left and right. This follows by the
“potential argument” already familiar from Section 4.2.3. Starting with hT = 0, and moving
from face to adjacent face, this equation will determine the value of hp for every face p. What
we have to show is that we don’t run into contradiction, i.e., if we get to the same face p in
two different ways, then we get the same vector hp . This is equivalent to saying that if we
walk around a closed cycle of faces, then the total change in the vector hp is zero. It suffices
to verify this when we move around countries incident with a single node. In this case, the
condition boils down to
$$\sum_{i \in N(j)} RF_{ij} = 0,$$
which follows by (5.14). This proves that the vectors hp are well defined.
Second, we construct numbers ηi satisfying (5.12) by a similar argument (just working on
the dual graph). We set ηi = 0 if i is a node of the unbounded country. Equation (5.12) tells
us what the value at one endpoint of an edge must be, if we have it for the other endpoint.
One complication is that (5.12) gives two conditions for each difference ηi − ηj , depending
on which face incident with it we choose. But if p and q are the two countries incident with
the edge ij, then
$$h_p^T(u_j - u_i) - h_q^T(u_j - u_i) = (h_p - h_q)^T(u_j - u_i) = (RF_{ij})^T(u_j - u_i) = 0,$$
since $F_{ij}$ is parallel to $u_i - u_j$ and so $RF_{ij}$ is orthogonal to it. Thus the two conditions on
since Fij is parallel to ui − uj and so RFij is orthogonal to it. Thus the two conditions on
the difference ηi − ηj are the same.
As before, equation (5.12) determines the values ηi , starting with ηa = 0. To prove that
it does not lead to a contradiction, it suffices to prove that the sum of changes is 0 if we walk
around a face p. In other words, if C is the cycle bounding a face p (oriented, say, clockwise),
then
$$\sum_{ij \in E(\vec{C})} h_p^T(u_j - u_i) = 0,$$
which is clear. It is also clear that ηb = ηc = 0.
Now define $v_i = \binom{u_i}{\eta_i}$ for every node i, and $g_p = \binom{h_p}{1}$ for every face p. It remains to prove
that i 7→ vi maps the nodes of G onto the vertices of a convex polytope, so that edges go to
edges and countries go to faces. We start with observing that if p is a face and ij is an edge
of p, then
$$g_p^T v_i - g_p^T v_j = h_p^T(u_i - u_j) + (\eta_i - \eta_j) = 0,$$
and hence there is a scalar $\alpha_p$ so that all nodes of p are mapped onto the hyperplane $g_p^T x = \alpha_p$.
We know that the image of p under i 7→ ui is a convex polygon, and so the same follows for
the map i 7→ vi .
Figure 5.7: Lifting a rubber band representation to a polytope.
To conclude, it suffices to prove that if ij is any edge, then the two convex polygons obtained as images of the countries incident with ij "bend" in the right way; more exactly, let p and q be the two countries incident with ij, and let $Q_p$ and $Q_q$ be the two corresponding convex polygons (see Figure 5.7). We claim that $Q_q$ lies on the same side of the plane $g_p^T x = \alpha_p$ as the bottom face. Let $v_k$ be any vertex of the polygon $Q_q$ different from $v_i$ and $v_j$. We want to show that $g_p^T v_k < \alpha_p$. Indeed,
$$g_p^T v_k - \alpha_p = g_p^T v_k - g_p^T v_i = g_p^T(v_k - v_i) = (g_p - g_q)^T(v_k - v_i)$$
(since both $v_k$ and $v_i$ lie on the plane $g_q^T x = \alpha_q$),
$$= \begin{pmatrix} h_p - h_q \\ 0 \end{pmatrix}^T (v_k - v_i) = (h_p - h_q)^T(u_k - u_i) = (RF_{ij})^T(u_k - u_i) < 0$$
(since $u_k$ lies to the right of the edge $u_iu_j$). This completes the proof.
5.3.3 How to find the starting face?
Theorem 5.3.3 proves Steinitz’s theorem in the case when the graph has a triangular face.
We are also home if the dual graph has a triangular face; then we can represent the dual
graph as the skeleton of a 3-polytope, choose the origin in the interior of this polytope, and
consider its polar; this will represent the original graph.
So the proof of Steinitz’s theorem is complete, if we prove the following simple fact:
Lemma 5.3.5 Let G be a 3-connected simple planar graph. Then either G or its dual has a
triangular face.
Proof. If G∗ has no triangular face, then every node in G has degree at least 4, and so
|E(G)| ≥ 2|V (G)|.
If G has no triangular face, then similarly
|E(G∗ )| ≥ 2|V (G∗ )|.
Adding up these two inequalities and using that |E(G)| = |E(G∗ )| and |V (G)| + |V (G∗ )| =
|E(G)| + 2 by Euler’s theorem, we get
2|E(G)| ≥ 2|V (G)| + 2|V (G∗ )| = 2|E(G)| + 4,
a contradiction.
5.4 Rubber bands and connectivity
Rubber band representations can be related to graph connectivity, and can be used to give
a test for k-connectivity of a graph (Linial, Lovász and Wigderson [151]).
5.4.1 Degeneracy: essential and non-essential
We start with a discussion of what causes degeneracy in rubber band embeddings. Consider
the two graphs in Figure 5.8. It is clear that if we nail the nodes on the convex hull, and
then let the rest find its equilibrium, then there will be a degeneracy: the grey nodes will
all move to the same position. However, the reasons for this degeneracy are different: In the
first case, it is due to symmetry; in the second, it is due to the node that separates the grey
nodes from the rest, and thereby pulls them onto itself.
One can distinguish the two kinds of degeneracy as follows: In the first graph, the
strengths of the rubber bands must be strictly equal; varying these strengths it is easy
to break the symmetry and thereby get rid of the degeneracy. However, in the second graph,
no matter how we change the strengths of the rubber bands (as long as they remain positive),
the grey nodes will always be pulled together into one point.
Figure 5.8: Two reasons why two nodes end up on top of each other: symmetry, or
a separating node
Figure 5.9 illustrates a bit more delicate degeneracy. In all three pictures, the grey points
end up collinear in the rubber band embedding. In the first graph, the reason is symmetry
again. In the second, there is a lot of symmetry, but it does not explain why the three grey
nodes are collinear in the equilibrium. (It is not hard to argue though that they are collinear:
a good exercise!) In the third graph (which is not drawn in its equilibrium position, but
before it) there are two nodes separating the grey nodes from the nailed nodes, and the grey
nodes will end up on the segment connecting these two nodes. In the first two cases, changing
the strength of the rubber bands will pull the grey nodes off the line; in the third, this does
not happen.
Figure 5.9: Three reasons why three nodes can end up collinear: symmetry, just
accident, or a separating pair of nodes
5.4.2 Connectivity and degeneracy
Our goal is to prove that essential degeneracy in a rubber band embedding is always due to
low connectivity. A vector labeling in Rd is in general position if any d+1 representing points
are affine independent. We start with the easy direction of the connection, formalizing the
examples from the previous section.
Lemma 5.4.1 Let G = (V, E) be a graph and S, T ⊆ V . Then for every rubber band
representation x of G with S nailed, rk(x(T )) ≤ κ(S, T ).
Proof.
There is a subset U ⊆ V with |U | = κ(S, T ) such that V \ U contains no (S, T )-
paths. Let W be the union of connected components of G \ U containing a vertex from
T . Then x, restricted to W , gives a rubber band representation of G[W ] with boundary U .
Clearly x(W ) ⊆ conv(x(U )), and so
rk(x(T )) ≤ rk(x(W )) = rk(x(U )) ≤ |U | = κ(S, T ).
The Lemma gives a lower bound on the connectivity between two sets S and T . The
following theorem asserts that if we take the best convex representation, this lower bound is
tight:
Figure 5.10: Three nodes accidentally on a line. Strengthening the edges along the
three paths pulls them apart.
Theorem 5.4.2 Let G = (V, E) be a graph and S, T ⊆ V with κ(S, T ) = d + 1. Then G has
a rubber band representation in Rd with S nailed such that rk(xT ) = d + 1.
This theorem has a couple of consequences about connectivity not between sets, but
between a set and any node, and between any two nodes.
Corollary 5.4.3 Let G = (V, E) be a graph, d ≥ 1 and S ⊆ V . Then G has a rubber band
representation in general position in Rd with S nailed if and only if no node of G can be
separated from S by fewer than d + 1 nodes.
Corollary 5.4.4 A graph G is k-connected if and only if for every S ⊆ V with |S| = k, G
has a rubber band representation in Rk−1 in general position with S nailed.
To prove Theorem 5.4.2, we choose generic edge weights.
Theorem 5.4.5 Let G = (V, E) be a graph and S, T ⊆ V with κ(S, T) ≥ d + 1. Choose
algebraically independent edge weights c_ij > 0. Map the nodes of S into R^d in general
position. Then the rubber band extension of this map satisfies rk(x(T)) = d + 1.

If the edge weights are chosen randomly, independently and uniformly from [0, 1], then
the algebraic independence condition is satisfied with probability 1.
Proof.
The proof will consist of two steps: first, we show that there is some choice of the
edge weights for which the conclusion holds; then we use this to prove that the conclusion
holds for every generic choice of edge weights.
For the first step, we use that by Menger’s Theorem, there are d + 1 disjoint paths Pi
connecting a node i ∈ S with a node i′ ∈ T . The idea is to make the rubber bands on
these paths very strong (while keeping the strength of the other edges fixed). Then these
paths will pull each node i′ very close to i. Since the positions of the nodes i ∈ S are affine
independent, so will be the positions of the nodes i′ (Figure 5.10).
To make this precise, let D be the diameter of the set {xi : i ∈ S} and let E ′ = ∪i∈S E(Pi ).
Fix any R > 0, and define a weighting w = wR of the edges by
    w_e = R if e ∈ E′, and w_e = 1 otherwise.
Let x be the rubber band extension of the given mapping of the nodes of S with these
strengths.
Recall that x minimizes the potential E_w (defined by (5.5)) over all representations of G
with the given nodes nailed. Let y be the representation with yj = xi if j ∈ Pi (in particular,
yi = xi if i ∈ S); for any node j not on any Pi , let yj be any point in conv{xi : i ∈ S}. In
the representation y the edges with strength R have 0 length, and so

    E_w(x) ≤ E_w(y) ≤ D²|E|.

On the other hand,

    E_w(x) ≥ ∑_{uv∈E′} R |x_u − x_v|².
By the triangle inequality and the Cauchy–Schwarz inequality we get that

    |x_i − x_{i′}| ≤ ∑_{uv∈E(P_i)} |x_u − x_v| ≤ ( |E(P_i)| ∑_{uv∈E(P_i)} |x_u − x_v|² )^{1/2} ≤ ( n E_w(x) / R )^{1/2} ≤ √(n|E|) D / √R.
Thus x_{i′} → x_i as R → ∞. Since the points {x_i : i ∈ S} are affine independent, it follows
that for R large enough the points {x_{i′} : i ∈ S} are affine independent as well.
This completes the first step. Now we argue that this holds for all possible choices
of generic edge-weights. To prove this, we only need some general considerations. The
embedding minimizing the energy is unique, and so it can be computed from the equations
(5.6) (say, by Cramer's Rule). What is important from this is that the vectors x_i can
be expressed as rational functions of the edge weights. Furthermore, for any d + 1 nodes
t_0, …, t_d ∈ T, the value det((1 + x_{t_i}^T x_{t_j})_{i,j=0}^d) is a polynomial in the coordinates
of the x_i, and so it is also a rational function of the edge weights. We know that this
rational function is not identically 0; hence it follows that it is nonzero for every
algebraically independent substitution.
5.4.3 Rubber bands and connectivity testing
Rubber band representations yield a (randomized) graph connectivity algorithm with good
running time (Linial, Lovász and Wigderson [151]); however, a number of algorithmic ideas
are needed to make it work, which we describe in their simplest form. We refer to the paper
for a more thorough analysis.
Connectivity between given sets. We start by describing a test checking whether or
not two given k-tuples S and T of nodes are connected by k node-disjoint paths. For this,
we assign random weights cij to the edges, and map the nodes in S into Rk−1 in general
position. Then we compute the rubber band representation extending this map, and check
whether the points representing T are in general position.
This procedure requires solving a system of linear equations, which is easy, but the system
is quite large, and it depends on the choice of S and T . With a little work, we can save quite
a lot.
Let G = (V, E) be a connected graph on V = [n] and S ⊆ V , say S = [k]. Let d = k − 1.
Given a map x : S → Rd , we can compute its rubber band extension by solving the system
of linear equations
    ∑_{j∈N(i)} c_ij (x_i − x_j) = 0    (i ∈ V \ S).    (5.16)
This system has (n − k)d unknowns (the coordinates of nodes i ∈ S are considered as
constants) and the same number of equations, and we know that it has a unique solution,
since this is where the gradient of a strictly convex function (which tends to ∞ at ∞) vanishes.
At first sight, solving (5.16) requires inverting an (n − k)d × (n − k)d matrix. However,
we can immediately see that the coordinates can be computed independently, and since they
satisfy the same equations except for the right hand side, it suffices to invert the matrix of
the system once.
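As a concrete illustration (our own sketch, not part of the text), the following numpy fragment solves (5.16); the matrix of the system serves all d coordinates at once, as noted above. The toy graph, weights and nailed positions at the end are assumptions made for the demonstration.

```python
import numpy as np

def rubber_band_extension(n, edges, c, S, xS):
    """Solve (5.16): nodes are 0..n-1, edges a list of pairs, c a dict of
    positive strengths, S the nailed nodes, xS a dict node -> position."""
    d = len(next(iter(xS.values())))
    L = np.zeros((n, n))                       # weighted Laplacian L_c
    for (i, j) in edges:
        w = c[(i, j)]
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    S = sorted(S)
    F = [i for i in range(n) if i not in S]    # free (non-nailed) nodes
    rhs = -L[np.ix_(F, S)] @ np.array([xS[i] for i in S])
    x = np.zeros((n, d))
    for i in S:
        x[i] = xS[i]
    x[F] = np.linalg.solve(L[np.ix_(F, F)], rhs)  # one matrix, d right-hand sides
    return x

# toy run: a path 0-1-2-3 with its ends nailed at 0 and 1 on the line
x = rubber_band_extension(4, [(0, 1), (1, 2), (2, 3)],
                          {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0},
                          [0, 3], {0: [0.0], 3: [1.0]})
print(x.ravel())   # the free nodes settle at 1/3 and 2/3
```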
Below, we will face the task of computing several rubber band representations of the same
graph, changing only the nailed set S. Can we make use of some of the computation done
for one of these representations when computing the others?
The answer is yes. First, we do something which seems to make things worse: We create
new “equilibrium” equations for the nailed nodes, introducing new variables for the forces
that act on the nails. Let y_i = ∑_{j∈N(i)} c_ij (x_j − x_i) be the force acting on node i in the
equilibrium position. Let X and Y be the matrices of the vector labelings x and y. Let L_c
be the symmetric V × V matrix

    (L_c)_ij = −c_ij, if ij ∈ E;   ∑_{k∈N(i)} c_ik, if i = j;   0, otherwise
(the weighted Laplacian of the graph). Then we can write (5.16) as XLc = Y . To simplify
matters a little, let us assume that the center of gravity of the representing nodes is 0, so
that XJ = 0. We clearly also have YJ = 0 and Y e_i = y_i = 0 for i ∉ S.
Now we use the same trick as we did for harmonic functions: L_c is positive semidefinite,
and “almost” invertible in the sense that its nullspace consists of the multiples of 1. Hence
L_c + J is invertible; let M = (L_c + J)^{−1}. Since (L_c + J)J = J² = nJ, it follows that MJ = (1/n)J.
Furthermore, from M(L_c + J) = I it follows that M L_c = I − MJ = I − (1/n)J. Hence

    X = X(I − (1/n)J) = Y M.
It is not clear that we are nearer our goal, since how do we know the matrix Y (i.e., the
forces acting on the nails)? But the trick is this: we can prescribe these forces arbitrarily, as
long as ∑_{i∈S} y_i = 0 and y_j = 0 for j ∉ S are satisfied. Then the rows of X = Y M will give
some position for each node, for which X L_c = Y M L_c = Y(I − (1/n)J) = Y. So in particular
the nodes in V \ S will be in equilibrium, so if we nail the nodes of S, the rest will be in
equilibrium. If Y has rank d, then so does X, which implies that the vectors representing
S will span the space Rd . Since their center of gravity is 0, it follows that they are not on
one hyperplane. We can apply an affine transformation to move the points xi (i ∈ S) to
any other affine independent position if we wish, but this is irrelevant: what we want is to
check whether the nodes of T are in general position, and this is not altered by any affine
transformation. A convenient choice is


    Y = ( −1  1  0  …  0  0  …  0
          −1  0  1  …  0  0  …  0
           ⋮                    ⋮
          −1  0  0  …  1  0  …  0 ).
To sum up, to check whether the graph is k-connected between S and T (where S, T ⊆ V ,
|S| = |T | = k), we select random weights cij for the edges, compute the matrix X =
Y (Lc + J)−1 , and check whether the rows with indices from T are affine independent.
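The following numpy sketch (ours; nodes are 0, …, n−1 and the graph data is illustrative) implements this summary: it builds the weighted Laplacian with random weights, prescribes the convenient Y above, computes X = Y(L_c + J)^{-1}, and tests the columns of X indexed by T (positions sit in the columns here) for affine independence.

```python
import numpy as np

def k_linked(n, edges, S, T, rng=np.random.default_rng(1)):
    """Randomized test for k disjoint (S, T)-paths, k = |S| = |T|."""
    k = len(S); d = k - 1
    L = np.zeros((n, n))                       # weighted Laplacian L_c
    for (i, j) in edges:
        w = rng.random()                       # random weight c_ij
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    Y = np.zeros((d, n))                       # the convenient forces above
    Y[:, S[0]] = -1.0
    for r in range(d):
        Y[r, S[r + 1]] = 1.0
    X = Y @ np.linalg.inv(L + np.ones((n, n)))     # X = Y (L_c + J)^{-1}
    P = X[:, T]                                    # positions of the nodes in T
    return np.linalg.matrix_rank(P[:, 1:] - P[:, [0]]) == d  # affine independence

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(k_linked(4, K4, [0, 1, 2], [1, 2, 3]))   # K4 is 3-connected: True
```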
Connectivity between all pairs. If we want to apply the previous algorithm for connectivity testing, it seems that we have to apply it for all pairs of k-sets. Even though we
can use the same edge weights and we only need to invert Lc + J once, we have to compute
X = Y (Lc + J)−1 for potentially exponentially many different sets S, and then we have to
test whether the nodes of T are represented in general position for exponentially many sets
T . The following lemma shows how to get around this.
Lemma 5.4.6 For every vertex v ∈ V we select an arbitrary k-subset S(v) of N (v). Then
G is k-connected iff S(u) and S(v) are connected by k node-disjoint paths for every u and v.
Proof. The “only if” part follows from the well-known property of k-connected graphs that
any two k-subsets are connected by k node-disjoint paths. The “if” part follows from the
observation that if S(u) and S(v) are connected by k node-disjoint paths, then u and v are
connected by k openly disjoint paths.
Thus the subroutine in the first part needs to be called only O(n²) times. In fact, we do
not even have to check this for every pair (u, v), and further savings can be achieved through
randomization, using Exercises 5.4.16 and 5.4.17.
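A sketch of the resulting all-pairs test (assuming the k_linked helper from the previous sketch is in scope; the function name is ours):

```python
def is_k_connected(n, edges, k):
    """Randomized k-connectivity test via Lemma 5.4.6."""
    nbrs = {v: set() for v in range(n)}
    for (i, j) in edges:
        nbrs[i].add(j); nbrs[j].add(i)
    if any(len(nbrs[v]) < k for v in range(n)):
        return False                                 # a node of degree < k
    S = {v: sorted(nbrs[v])[:k] for v in range(n)}   # an arbitrary S(v) of N(v)
    return all(k_linked(n, edges, S[u], S[v])        # O(n^2) subroutine calls
               for u in range(n) for v in range(u + 1, n))
```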
Numerical issues. The computation of the rubber band representation requires solving a
system of linear equations. We have seen (see Figure 5.11) that for a graph with n nodes,
the positions of the nodes can get exponentially close in a rubber band representation.
Hence we may have to compute with exponentially small numbers (in n), that is, with
linearly many digits, which gives an extra factor of n in the running time. For computational
purposes it makes sense to solve the system in a finite field rather than in R. Of course,
this “modular” embedding has no physical or geometrical meaning any more, but the
algebraic structure remains!
Figure 5.11: The rubber band embedding gives nodes that are exponentially close.
Let G = (V, E) be a graph and let S ⊆ V , |S| = k, and d = k − 1. Let p be a prime and
ce ∈ Fp for e ∈ E. A modular rubber band representation of G (with respect to S, p and c)
is defined as an assignment i 7→ xi ∈ Fdp satisfying
    ∑_{j∈N(i)} c_ij (x_i − x_j) = 0    (∀ i ∈ V \ S).    (5.17)
This is formally the same equation as for real rubber bands, but we work over Fp , so no
notion of convexity can be used. In particular, we cannot be sure that the system has a
solution. But things work if the prime is chosen at random.
Lemma 5.4.7 Let N > 0 be an integer. Choose uniformly at random a prime p < N and
random weights c_e ∈ F_p (e ∈ E).
(a) With probability at least 1 − n2 /N , there is a modular rubber band representation of
G (with respect to S, p and c), such that the vectors xi (i ∈ S) are affine independent. This
representation is unique up to affine transformations of Fdp .
(b) Let T ⊆ V , |T | = k. Then with probability at least 1 − n2 /N , the representation xc
in (a) satisfies rk({xi : i ∈ T }) = κ(S, T ).
Proof. The determinant of the system (5.17) is a polynomial of degree at most n² in the
weights c_ij. The Schwartz–Zippel Lemma [217, 252] gives a bound on the probability that
this determinant vanishes, which gives (a). The proof of (b) is similar.
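To illustrate the modular variant, here is a minimal Gaussian elimination over F_p (our own helper; any computer algebra system would do the same job): it can replace the floating point solver in the earlier sketches to solve (5.17), resampling p and the weights whenever the matrix comes out singular mod p.

```python
def solve_mod_p(A, B, p):
    """Solve A X = B over F_p; A is an n x n integer matrix, B a list of
    n rows. Returns X, or None if A is singular mod p."""
    n = len(A)
    M = [[v % p for v in row + rhs] for row, rhs in zip(A, B)]  # augmented
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return None                       # singular: resample p and c
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)         # modular inverse (Python 3.8+)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

# 2x + y = 1, x + y = 0 over F_7 has the solution x = 1, y = 6
print(solve_mod_p([[2, 1], [1, 1]], [[1], [0]], 7))   # [[1], [6]]
```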
We sum up the complexity of this algorithm without going into fine details. In particular,
we restrict this short analysis to the case when k < n/2. We have to fix an appropriate N;
N = n⁵ will be fine. Then we pick a random prime p < N. Since N has O(log n) digits, the
selection of p and every arithmetic operation in F_p can be done in polylogarithmic time, and
we are going to ignore them. We have to generate O(n²) random weights for the edges. We
have to invert (over F_p) the matrix L_c + J; this takes O(M(n)) operations, where M(n) is the
cost of matrix multiplication (currently known to be O(n^{2.3727}), Vassilevska Williams [237]).
Then we have to compute Y(L_c + J)^{−1} for a polylogarithmic number of different matrices Y
(using Exercise 5.4.17(b) below). For each Y, we have to check affine independence for one
k-set in the neighborhood of every node, in O(n M(k)) time. Up to polylogarithmic factors,
this takes O(M(n) + n M(k)) work.
Exercise 5.4.8 Prove that Emin (w) := minx Ew (x) (where the minimum is taken
over all representations x with some nodes nailed) is a concave function of w.
Exercise 5.4.9 Let G = (V, E) be a connected graph, ∅ ≠ S ⊆ V, and x^0 : S →
R^d. Extend x^0 to x : V \ S → R^d as follows: starting a random walk at j, let i
be the (random) node where S is first hit, and let x_j denote the expectation of
the vector x^0_i. Prove that x is the same as the rubber band extension of x^0.
Exercise 5.4.10 Let u be a rubber band representation of a planar map G in
the plane with the nodes of a country T nailed to a convex polygon. Define
F_ij = u_i − u_j for all edges in E \ E(T). (a) If T is a triangle, then we can
define F_ij ∈ R² for ij ∈ E(T) so that F_ij = −F_ji, F_ij is parallel to u_j − u_i,
and ∑_{j∈N(i)} F_ij = 0 for every node i. (b) Show by an example that (a) does not
remain true if we drop the condition that T is a triangle.
Exercise 5.4.11 Prove that every Schlegel diagram with respect to a face F can
be obtained as a rubber band representation of the skeleton with the vertices of
F nailed (the strengths of the rubber bands must be chosen appropriately).
Exercise 5.4.12 Let G be a 3-connected simple planar graph with a triangular
country p = abc. Let q, r, s be the countries adjacent to p. Let G* be the
dual graph. Consider a rubber band representation x : V(G) → R² of G with
a, b, c nailed down (with unit rubber band strengths). Prove that the segments
representing the edges can be translated so that they form a rubber band
representation of G* − p with q, r, s nailed down (Figure 5.12).
Figure 5.12: Rubber band representation of a dodecahedron with one node deleted,
and of an icosahedron with the edges of a triangle deleted. Corresponding edges are
parallel and have the same length.
Exercise 5.4.13 A convex representation of a graph G = (V, E) (in dimension
d, with boundary S ⊆ V) is a mapping of V → R^d such that every node in V \ S
is in the relative interior of the convex hull of its neighbors. (a) Show that the
rubber band representation extending any map from S ⊆ V to R^d is convex with
boundary S. (b) Show by an example that not every convex representation is
constructible this way.
Exercise 5.4.14 In a rubber band representation, increase the strength of an
edge between two non-nailed nodes (while leaving the other edges invariant).
Prove that the length of this edge decreases.
Exercise 5.4.15 A 1-dimensional convex representation with two boundary
nodes s and t is called an s-t-numbering. (a) Prove that every 2-connected graph
G = (V, E) has an s-t-numbering for every s, t ∈ V , s ̸= t. (b) Show that instead
of the 2-connectivity of G, it suffices to assume that deleting any node, either the
rest is connected or it has two components, one containing s and one containing
t.
Exercise 5.4.16 Let G be any graph, and let H be a k-connected graph with
V (H) = V (G). Then G is k-connected if and only if u and v are connected by k
openly disjoint paths in G for every edge uv ∈ E(H).
Exercise 5.4.17 Let G be any graph and k < n. (a) For t pairs of nodes chosen
randomly and independently, we test whether they are connected by k openly
disjoint paths. Prove that if G is not k-connected, then this test discovers this
with probability at least 1 − exp(−2(n − k)t/n²). (b) For r nodes chosen randomly
and independently, we test whether they are connected by k openly disjoint paths
to every other node of the graph. Prove that if G is not k-connected, then this
test discovers this with probability at least 1 − ((k − 1)/n)^r.
Exercise 5.4.18 Let G = (V, E) be a graph, and let Z be a matrix obtained
from its Laplacian LG by replacing its nonzero entries by algebraically independent transcendentals. For S, T ⊆ V (G), |S| = |T | = k, let ZS,T denote the
matrix obtained from Z by deleting the rows corresponding to S and the columns
corresponding to T . Then det(ZS,T ) ̸= 0 if and only if κ(S, T ) = k.
Chapter 6
Bar-and-joint structures
We can replace the edges of a graph by rigid bars, instead of rubber bands. This way we
obtain a physical model, which is related to rubber bands, but is more important from the
point of view of applications. In fact, this topic has a vast literature, dealing with problems
of this kind arising in architecture, engineering, and (replacing the reference to rigid bars by
distance measurements) in the theory of sensor networks. We have to restrict ourselves to a
few basic results of this rich theory; see e.g. Recski [198] or Graver, Servatius and Servatius
[97] for more.
For this chapter, we assume that the graphs we consider have no isolated nodes; this
excludes some trivial complications here and there.
6.1 Stresses
Let us fix a vector-labeling u : V → Rd of the graph G. A function σ : E → R is called a
stress on (G, u) if for every node i,
    ∑_{j∈N(i)} σ_ij (u_j − u_i) = 0.    (6.1)
(Since the stress is defined on edges, rather than oriented edges, we have tacitly assumed
that σij = σji .)
In the language of physics, we want to keep node i in position ui (think of the case
d = 3), and we can do that by forces acting along the edges of G (which we think of as rigid
bars). Then the force acting along edge ij is parallel to uj − ui , and so it can be written as
σij (uj − ui ). Newton’s Third Law implies that σij = σji . Note that the stress on an edge
can be negative or positive.
Sometimes we consider a version where instead of (6.1), we assume that
    ∑_{j∈N(i)} σ_ij u_j = 0.    (6.2)
Such an assignment is called a homogeneous stress.
Most of the time we will assume that this representation is not contained in a lower
dimensional linear subspace of Rd . This means that the corresponding matrix U has rank d,
or (equivalently) its columns are linearly independent. For the rest of this section, we can
make a stronger assumption, namely that the representing vectors ui are not all contained
in a lower dimensional affine subspace. This is justified because (6.1) is invariant under
translation, so if the vectors ui were contained in a lower dimensional affine subspace, then
we could translate them so that this subspace would contain the origin, and this way we
could reduce the dimension of the ambient space, without changing the geometry of the
representation.
Every stress on the given vector-labeling of graph G gives rise to a matrix S = S[σ] ∈
R^{V×V}, whose entries are defined by

    S_ij = σ_ij, if ij ∈ E;   −∑_{k∈N(i)} σ_ik, if i = j;   0, otherwise.
We call S the stress matrix associated with the stress. Clearly this matrix is symmetric, and
it is a G-matrix (this means that all entries S_ij with i ≠ j and ij ∉ E are zero).
The definition of a stress means that U S = 0, where U is the [d] × V matrix describing
the vector-labeling u. Each row-sum of S is 0, thus the vector 1 is in the nullspace. Together
with the rows of U , we get d + 1 vectors in the nullspace that are linearly independent, due
to the assumption that the representation does not lie in a lower dimensional affine subspace.
Hence
rk(S) ≤ n − d − 1.
(6.3)
Clearly stresses on a given graph G with a given vector-labeling u form a linear subspace
Str = Str(G, u) ⊆ R^E.
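As a small numerical illustration (our own; the stressed configuration, a triangle with its centroid, i.e. K4, and the stress values are chosen for the example), the following fragment assembles S[σ] and confirms the stress equation and the rank bound (6.3).

```python
import numpy as np

pos = np.array([[0.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [0.0, -1/3]])
U = pos.T                                    # the [d] x V matrix of the text
sigma = {(0, 1): -1.0, (0, 2): -1.0, (1, 2): -1.0,   # outer triangle
         (0, 3): 3.0, (1, 3): 3.0, (2, 3): 3.0}      # spokes to the centroid

n = len(pos)
S = np.zeros((n, n))
for (i, j), s in sigma.items():
    S[i, j] = S[j, i] = s
    S[i, i] -= s
    S[j, j] -= s                              # diagonal: minus incident stresses

print(np.allclose(U @ S, 0))                  # the stress equation U S = 0: True
print(np.linalg.matrix_rank(S) <= n - 2 - 1)  # (6.3) with d = 2: True
```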
In a rubber band representation, the strengths of the edges “almost” form a stress: (6.1)
holds for all nodes that are not nailed. In this language, Exercise 5.4.10 said the following:
Every rubber band representation of a simple 3-connected planar graph with the nodes of a
triangular country nailed has a stress that is 1 on the internal edges and negative on the
boundary edges.
An important result about stresses is a classic indeed: it was proved by Cauchy in 1822.
Theorem 6.1.1 (Cauchy’s theorem) The skeleton of a 3-polytope has no nonzero stress.
Proof. Suppose that the edges of a convex polytope P carry a nonzero stress σ. Let G′
be the subgraph of its skeleton formed by those edges which have a nonzero stress, together
with the vertices they are incident with. Color an edge ij of G′ red if σij > 0, and blue
otherwise. The graph G′ is planar, and in fact it comes embedded in the surface of P , which
is homeomorphic to the sphere. By Lemma 2.1.7, G′ has a node i such that the red
edges (and then also the blue edges) incident with i are consecutive in the given embedding
of G′ . This implies that we can find a plane through ui which separates (in their embedding
on the surface of P ) the red and blue edges incident with i. Let e be the normal unit vector
of this plane pointing in the halfspace which contains the red edges. Then for every red edge
ij we have eT (uj − ui ) > 0, and for every blue edge ij we have eT (uj − ui ) < 0. By the
definition of the coloring, this means that we have σij eT (uj − ui ) > 0 for every edge ij of G′ .
Also by definition, we have σ_ij e^T(u_j − u_i) = 0 for those edges ij of G_P that are not edges of
G′. Thus
    ∑_{j∈N(i)} σ_ij e^T (u_j − u_i) > 0.

But

    ∑_{j∈N(i)} σ_ij e^T (u_j − u_i) = e^T ( ∑_{j∈N(i)} σ_ij (u_j − u_i) ) = 0

by the definition of a stress, a contradiction.
The most interesting consequence of Cauchy’s Theorem is that if we make a convex
polyhedron out of cardboard, then it will be rigid (see Section 6.2.2).
Remark 6.1.2 Let us add a purely linear algebraic discussion. For a given simple graph
G = (V, E), we are interested in the equation
    M U^T = 0,    (6.4)
where M is a G-matrix, and U is a [d] × V matrix. The columns of U define a vector-labeling
of G in Rd .
We can study equation (6.4) from two different points of view: either we are given the
vector-labeling U, and want to understand which G-matrices M satisfy it; this has led us to
“stresses”, one of the main topics of this chapter. Or we can fix M and ask for properties of
the vector-labelings that satisfy it; this leads us to “nullspace representations”, an important
tool in Chapter 12. (It is perhaps needless to say that these two points of view cannot be
strictly separated.)
6.1.1 Braced stresses
Let G = (V, E) be a simple graph, and let u : V → Rd be a vector-labeling of G. We say
that a function σ : E → R is a braced stress, if there are appropriate values σii ∈ R such
that for every node i,
    ∑_{j∈{i}∪N(i)} σ_ij u_j = 0.    (6.5)
Clearly homogeneous stresses are special braced stresses, and so are “proper” stresses: if σ
is a stress on (G, u), then
    ∑_{j∈N(i)} σ_ij u_j − ( ∑_{j∈N(i)} σ_ij ) u_i = 0.
We can define the braced-stress matrix as a G-matrix S with

    S_ij = σ_ij, if ij ∈ E or i = j;   0, otherwise,    (6.6)

so that we have the equation SU = 0, where U is the matrix whose rows give the vector-labeling
u. It follows that if the labeling vectors span the space R^d, then the corank of the
matrix S is at least d.
We get a physical interpretation of (6.5) by adding a new node 0 to G and connecting it
to all the old nodes, to get a graph G′. We represent the new node by u_0 = 0, and define
σ_{0i} = −σ_ii − ∑_{j∈N(i)} σ_ij. Then σ will be a stress on this representation of G′: the
equilibrium condition at the old nodes is satisfied by the definition of σ, and this easily
implies that it is also satisfied at the new node.
In the case when all stresses (on the original edges) have the same sign, we can imagine
this structure as follows: the old edges are strings or rubber bands of appropriate strength,
the new edges are rods emanating from the origin, and the whole structure is in equilibrium.
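One can test this condition numerically: the diagonal value σ_ii exists at node i exactly when ∑_{j∈N(i)} σ_ij u_j is parallel to u_i. A sketch (our own helper, reusing the stressed K4 example from above; all positions must be nonzero):

```python
import numpy as np

def braced_diagonal(U, sigma, tol=1e-9):
    """U: n x d positions (all nonzero); sigma: dict {(i, j): value}.
    Returns values sigma_ii making (6.5) hold, or None if none exist."""
    n = U.shape[0]
    diag = np.zeros(n)
    for i in range(n):
        f = np.zeros(U.shape[1])
        for (a, b), s in sigma.items():       # f = sum of sigma_ij u_j
            if a == i:
                f += s * U[b]
            elif b == i:
                f += s * U[a]
        t = -(f @ U[i]) / (U[i] @ U[i])       # candidate sigma_ii
        if not np.allclose(f, -t * U[i], atol=tol):
            return None                       # f not parallel to u_i
        diag[i] = t
    return diag

pos = np.array([[0.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [0.0, -1/3]])
sigma = {(0, 1): -1.0, (0, 2): -1.0, (1, 2): -1.0,
         (0, 3): 3.0, (1, 3): 3.0, (2, 3): 3.0}
print(braced_diagonal(pos, sigma))   # a "proper" stress is braced: succeeds
```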
We have defined three kinds of stress matrices: “ordinary”, homogeneous, and braced.
Each of these versions is a G-matrix satisfying the stress equation SU = 0. Homogeneous
stress matrices are those that, in addition, have 0’s in their diagonal. “Ordinary” stress
matrices have zero row and column sums. “Ordinary” stresses remain stresses when the
geometric representation is translated, but homogeneous and braced stresses do not.
In contrast to Cauchy’s Theorem 6.1.1, convex polytopes do have braced stresses.
Theorem 6.1.3 The skeleton of every convex 3-polytope has a braced stress that is negative
on all edges of the polytope.
We call the special braced stress constructed below the canonical braced stress of the polytope
P , and denote the corresponding matrix by SP .
Proof. Let P be a convex polytope in R3 such that the origin is in the interior of P , and
let G = (V, E) be the skeleton of P . Let P ∗ be its polar, and let G∗ = (V ∗ , E ∗ ) denote the
skeleton of P ∗ .
Let ui and uj be two adjacent vertices of P , and let wf and wg be the endpoints of the
corresponding edge f g ∈ E ∗ , where the labeling of f and g is chosen so that f is on the left
of the edge ij, oriented from i to j, and viewed from the origin. Then by the definition of
polar, we have
    u_i^T w_f = u_i^T w_g = u_j^T w_f = u_j^T w_g = 1.
This implies that the vector wf − wg is orthogonal to both vectors ui and uj . Here ui and
uj are not parallel, since they are the endpoints of an edge and the origin is not on this line.
Hence wg − wf is parallel to the vector ui × uj ̸= 0, and we can write
    w_g − w_f = S_ij (u_i × u_j)    (6.7)
with some real numbers Sij . Clearly Sij = Sji for every edge ij, and our choice for the
orientation of the edges implies that Sij < 0.
We claim that the values Sij form a braced stress on G. Let i ∈ V , and consider the
vector
    u′_i = ∑_{j∈N(i)} S_ij u_j.

Then

    u_i × u′_i = ∑_{j∈N(i)} S_ij (u_i × u_j) = ∑ (w_g − w_f),
where the last sum extends over all edges f g of the facet of P ∗ corresponding to i, oriented
clockwise. Hence this sum vanishes, and we get that ui × u′i = 0. This means that ui and
u′i are parallel, and there are real numbers Sii such that u′i = −Sii ui . These values Sij form
a braced stress.
Let us make a few remarks about this construction. The matrix SP depends on the choice
of the origin. We can even choose the origin outside P, as long as it is not on the plane of any
facet. The only step in the argument above that does not remain valid is that the stresses
on the edges are negative.
Taking the cross product of (6.7) with w_f we get the equation

    w_g × w_f = S_ij (u_i × u_j) × w_f = S_ij ((u_i^T w_f) u_j − (u_j^T w_f) u_i) = S_ij (u_j − u_i).    (6.8)
This shows that the canonical braced stresses on the edges of the polar polytope are simply
the reciprocals of the canonical braced stresses on the original.
The vector v_i = (1/|u_i|²) u_i is the projection of the origin onto the plane of face i of P*.
With the notation above, fg is an edge of this face, and (1/6) v_i^T (w_f × w_g) is the signed
volume of the tetrahedron spanned by the vectors v_i, w_f and w_g. Summing over all edges
of face i, we get that

    (1/6) ∑_{fg∈E(i)} v_i^T (w_f × w_g) = (1/6) ∑_{j∈N(i)} S_ij v_i^T (u_j − u_i) = −(1/6) ∑_j S_ij v_i^T u_i = −(1/6) ∑_j S_ij

is the volume of the convex hull of the face i and the origin. Hence −(1/6) S · J = −(1/6) ∑_{i,j} S_ij
is the volume of P*.
6.1.2 Nodal lemmas and braced stresses on 3-polytopes
The Perron–Frobenius Theorem immediately implies the following fact about the spectrum
of a well-signed G-matrix.
Lemma 6.1.4 Let G be a connected graph. For every well-signed G-matrix M , the smallest
eigenvalue of M has multiplicity 1.
Proof. For an appropriately large constant C > 0, the matrix CI − M is non-negative and
irreducible. Hence it follows from the Perron-Frobenius Theorem that the smallest eigenvalue
of CI − M has multiplicity 1. This implies that the same holds for M .
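A quick numerical illustration of the lemma (our own example; we take a well-signed G-matrix of G to have negative entries on edges, zeros on non-adjacent off-diagonal pairs, and arbitrary diagonal):

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a connected graph
n = 4
M = np.zeros((n, n))
for (i, j) in edges:
    M[i, j] = M[j, i] = -rng.random()              # negative on edges
M += np.diag(rng.standard_normal(n))               # arbitrary diagonal

lam = np.linalg.eigvalsh(M)                        # ascending eigenvalues
print(lam[1] - lam[0] > 1e-9)                      # smallest is simple: True
```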
We need the following lemma about the signature of eigenvectors belonging to the second
smallest eigenvalue of a well-signed G-matrix, proved by van der Holst [116]. It can be
considered as a discrete analogue of the Nodal Theorem of Courant (see e.g. [56]).
Lemma 6.1.5 (Van der Holst) Let G be a connected graph, and let λ be the second smallest
eigenvalue of a well-signed G-matrix M. Let u be an eigenvector belonging to λ with minimal
support, and let S_+ and S_− be the positive and negative supports of u. Then G[S_+] and G[S_−]
are connected.
Proof. We may assume that λ = 0. Let u = (u_i : i ∈ V). Assume that (say) G[S_+] is
disconnected; then it can be decomposed as G[S_+] = H_1 ∪ H_2, where H_1 and H_2 are disjoint
nonempty graphs, and let H_3 = G[S_−]. Let u_i be the restriction of u onto H_i, extended
by 0's so that it is a vector in R^V. Clearly u_1, u_2 and u_3 are linearly independent, and
u = u_1 + u_2 + u_3.
Let us make a few observations about the values u_i^T M u_j. Clearly u_1^T M u_2 = 0 (since
there is no edge between V(H_1) and V(H_2)), u_1^T M u_3 ≥ 0 (since M_ij ≤ 0 and u_i u_j < 0 for
i ∈ V(H_1), j ∈ V(H_3)), and similarly u_2^T M u_3 ≥ 0. Thus u_i^T M u_j ≥ 0 for i ≠ j. Furthermore,
∑_j u_i^T M u_j = u_i^T M ∑_j u_j = u_i^T M u = 0, and hence u_i^T M u_i ≤ 0 for i = 1, 2, 3.

Let v be the (positive) eigenvector belonging to the negative eigenvalue, and consider the
vector w = c_2 u_1 − c_1 u_2, where c_i = u_i^T v > 0. Clearly w ≠ 0, and w^T v = 0. Hence w is in
the span of eigenvectors belonging to the non-negative eigenvalues, and so w^T M w ≥ 0. On
the other hand, using the above observations,

    w^T M w = c_2² (u_1^T M u_1) − 2 c_1 c_2 (u_1^T M u_2) + c_1² (u_2^T M u_2) ≤ 0.

This implies that w^T M w = 0, and since w is in the span of eigenvectors belonging to the
non-negative eigenvalues, this implies that Mw = 0. So w is an eigenvector belonging to the
eigenvalue λ = 0. However, the support of w is smaller than the support of u, which is a
contradiction.
Figure 6.1: Paths connecting three nodes of a face to the negative and positive
support
Several versions and generalizations of this lemma have been proved by Colin de Verdière
[51] and van der Holst, Lovász and Schrijver [119, 120, 169]. For example, instead of assuming
that u has minimal support, one could assume that u has full support. We will return to
these versions in Section 12.3.1.
Proposition 6.1.6 Let G be a 3-connected planar graph and let M be a well-signed G-matrix
with exactly one negative eigenvalue. Then the corank of M is at most 3.
Proof. Suppose that the nullspace of M has dimension more than 3. Let a1 , a2 and a3
be three nodes of the same face (say, the unbounded face), then there is a nonzero vector
u = (ui : i ∈ V ) such that M u = 0 and ua1 = ua2 = ua3 = 0. We may assume that u has
minimal support.
By 3-connectivity, there are three paths Pi connecting the support S of u to ai (i = 1, 2, 3),
such that they have no node in common except possibly their endpoints bi in S. Let ci be
the node on P_i next to b_i; then clearly u_{b_i} ≠ 0 but u_{c_i} = 0. Furthermore, from the equation
M u = 0 it follows that ci must have a neighbor di such that udi is nonzero, and has opposite
sign from ubi . We may assume without loss of generality that ubi > 0 but udi < 0 for all
i = 1, 2, 3 (Figure 6.1).
Using that the positive support S_+ as well as the negative support S_− of u induce connected
subgraphs by Lemma 6.1.5, we find two nodes j_+ ∈ S_+ and j_− ∈ S_−, along with six
openly disjoint paths, three of them connecting j_+ to a_1, a_2 and a_3, and three others connecting
j_− to a_1, a_2 and a_3. Creating a new node in the interior of the face containing a_1, a_2
and a_3 and connecting it to them, we get K_{3,3} embedded in the plane, which is impossible.
Lemma 6.1.7 Let G be the skeleton of a 3-polytope P that contains the origin in its interior.
Then the G-matrix SP of the canonical braced stress on P has exactly three zero eigenvalues
and exactly one negative eigenvalue.
Proof.
From the stress equation SP U = 0 (where U is the matrix whose rows are the
vertices of P ), it follows that SP has at least three zero eigenvalues. By Lemma 6.1.5, it has
exactly three.
Applying the Perron–Frobenius Theorem to the nonnegative matrix cI − SP , where c is
a large real number, we see that the smallest eigenvalue of SP has multiplicity 1. Thus it
cannot be the zero eigenvalue, which means that SP has at least one negative eigenvalue.
The most difficult part of the proof is to show that M has only one negative eigenvalue.
First we show that there is at least one polytope P with the given skeleton for which
M (P ) has exactly one negative eigenvalue.
Observe that if we start with any true Colin de Verdière matrix M of the graph G, and
we construct the polytope P from its null space representation as in Section 12.3.2, and then
we construct a matrix M (P ) from P , then we get back (up to positive scaling) the matrix
M.
Steinitz proved that any two 3-dimensional polytopes with isomorphic skeletons can be
transformed into each other continuously through polytopes with the same skeleton (see
[201]). Each vertex moves continuously, and hence so does their center of gravity. So we can
translate each intermediate polytope so that the center of gravity of the vertices stays 0. Then
the polar of each intermediate polytope is well-defined, and also transforms continuously.
Furthermore, each vertex and each edge remains at a positive distance from the origin, and
therefore the entries of M (P ) change continuously. It follows that the eigenvalues of M (P )
change continuously.
Assume that M(P_0) has more than one negative eigenvalue. Let us transform P_0 into a
polytope P_1 for which M(P_1) has exactly one negative eigenvalue. Along the way there is a
last polytope P_t for which M(P_t) has at least 5 non-positive eigenvalues. By construction,
each M(P) has at least 3 eigenvalues that are 0, and we know that it has at least one that is
negative. Hence it follows that M(P_t) has exactly four 0 eigenvalues and one negative
eigenvalue. But this contradicts Proposition 6.1.6.
The conclusion of the previous lemma holds in fact for every braced stress that is negative on the edges.
Theorem 6.1.8 Let P be a convex polytope in R3 containing the origin, and let G = (V, E)
be its skeleton. Then every braced stress matrix with negative values on the edges has exactly
one negative eigenvalue and exactly three 0 eigenvalues.
Proof.
It follows just as in the proof of Lemma 6.1.7 that every braced stress matrix M
with negative values on the edges has at least one negative and at least three zero eigenvalues. Furthermore, if it has exactly one negative eigenvalue, then it has exactly three zero
eigenvalues by Proposition 6.1.6.
Suppose that M has more than one negative eigenvalue. The braced stress matrix SP has
exactly one. All the matrices Mt = tSP + (1 − t)M are braced stress matrices with negative
values on the edges, and hence they have at least four nonpositive eigenvalues. Let t be the
smallest number for which Mt has at least 5 nonpositive eigenvalues. Then Mt must have
exactly one negative and at least four 0 eigenvalues, contradicting Proposition 6.1.6.
Corollary 6.1.9 If a braced stress on a polytope is nonpositive on every edge, then its stress
matrix has at most one negative eigenvalue.
Proof. Let S be this stress matrix. Note that we have a stress matrix S′ which is negative
on the edges: we can take the matrix constructed in Theorem 6.1.3. Then Theorem 6.1.8
can be applied to the stress matrix S + tS′ (t > 0), and hence S + tS′ has exactly one negative
eigenvalue. Since this holds for every t > 0, it follows that S has at most one negative
eigenvalue.
6.2 Rigidity
Let G = (V, E) be a graph and x : V → Rd , a vector-labeling of G. We think of the edges of
G as rigid bars, and of the nodes, as flexible joints. We are interested in the rigidity of such
a structure, or more generally, in its possible motions.
6.2.1 Infinitesimal motions
Suppose that the nodes of the graph move smoothly in d-space; let xi (t) denote the position
of node i at time t. The fact that the bars are rigid says that for every edge ij ∈ E(G),
|xi (t) − xj (t)| = const.
Squaring and differentiating, we get
    (x_i(t) − x_j(t))^T (ẋ_i(t) − ẋ_j(t)) = 0.
This motivates the following definition. Given a vector-labeling u : V → Rd of G, a map
v : V → Rd is called an infinitesimal motion if the equation
    (u_i − u_j)^T (v_i − v_j) = 0    (6.9)
holds for every edge ij ∈ E. The vector vi can be thought of as the velocity of node i.
There is another way of describing this. Suppose that we move each node i from u_i to
u_i + εv_i for some vectors v_i ∈ R^d. Then the squared length of an edge ij changes by

    ∆_ij(ε) = |(u_i + εv_i) − (u_j + εv_j)|² − |u_i − u_j|² = 2ε (u_i − u_j)^T (v_i − v_j) + ε² |v_i − v_j|².
Hence we can read off that if (v_i : i ∈ V) is an infinitesimal motion, then ∆_ij(ε) = O(ε²)
(ε → 0); else, there is an edge ij for which ∆_ij(ε) = Θ(ε).
There are some trivial infinitesimal motions, called infinitesimal rigid motions, which are
velocities of rigid motions of the whole space.
Example 6.2.1 If vi is the same vector v for every i ∈ V , then (6.9) is trivially satisfied.
This solution corresponds to the parallel translation of the structure with velocity v.
In the plane, let R denote the rotation by 90◦ about the origin in the positive direction.
Then vi = Rui (i ∈ V ) defines an infinitesimal motion; this motion corresponds to the
velocities of the nodes when the structure is rotated about the origin.
In 3-space, vi = w×ui gives a solution for every fixed vector w, corresponding to rotation
with axis w.
We say that the structure is infinitesimally rigid, if every infinitesimal motion is rigid.
Whether or not a structure is infinitesimally rigid can be decided by simple linear algebra.
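This linear algebra is easy to carry out; here is a sketch (our own helpers, with an illustrative triangle at the end). The rigidity matrix below has one row per edge encoding (6.9); its null space consists of the infinitesimal motions, and its left null space consists of the stresses. The rank test assumes that the points affinely span R^d.

```python
import numpy as np

def rigidity_matrix(U, edges):
    """U: n x d positions. The row of edge ij represents equation (6.9):
    (u_i - u_j)^T v_i + (u_j - u_i)^T v_j = 0."""
    n, d = U.shape
    R = np.zeros((len(edges), n * d))
    for r, (i, j) in enumerate(edges):
        R[r, d*i:d*(i+1)] = U[i] - U[j]
        R[r, d*j:d*(j+1)] = U[j] - U[i]
    return R

def is_infinitesimally_rigid(U, edges):
    n, d = U.shape
    rk = np.linalg.matrix_rank(rigidity_matrix(U, edges))
    dim_inf = n * d - rk                    # dimension of Inf(G, u)
    return dim_inf == d * (d + 1) // 2      # nothing beyond the rigid motions

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(is_infinitesimally_rigid(tri, [(0, 1), (1, 2), (0, 2)]))   # True
```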
There are (at least) two other versions of the notion of rigidity. Given a vector-labeling
u : V → Rd of a graph, we can ask the following questions.
• Is there a “finite motion” in the sense that we can specify a continuous orbit xi (t)
(t ∈ [0, 1]) for each node i so that xi (0) = ui , and |xi (t) − xj (t)| is constant for every pair of
adjacent nodes, but not for all pairs of nodes.
• Is there a “different realization” of the same structure: another representation v : V →
Rd such that |vi − vj | = |ui − uj | for every pair of adjacent nodes, but not for all pairs of
nodes. (In this case, it is possible that the representation v cannot be obtained from the
representation u by a continuous motion preserving the lengths of the edges.) If the structure
has no different realization, it is called globally rigid.
Figure 6.2 shows two examples of structures rigid in one sense but not in the other. It is
in general difficult to decide whether a structure is rigid in these senses, and we will say
only a little about these rigidity notions.
Let us conclude this section with a little counting. The system of equations (6.9) has
dn unknowns (the coordinates of the velocities) and m equations. We also know that if the
representation does not lie in a lower dimensional space, then (6.9) has a solution space of
dimension at least \binom{d+1}{2}.
How many of these equations are linearly independent? Let λij (ij ∈ E) be multipliers
such that combining the equations with them we get 0. This means that
    ∑_{j∈N(i)} λ_ij (u_j − u_i) = 0
for every node i; in other words, λ is a stress!
The solutions of this equation in v form a linear space Inf = Inf(G, u), the space of
infinitesimal motions. The trivial solutions mentioned above form a subspace Rig(G, u) ⊆
Inf(G, u) of infinitesimal rigid motions. Recall that stresses on a given graph G with a given
representation u also form a linear space Str = Str(G, u).

Figure 6.2: Structure (a) is infinitesimally rigid and has no finite motion in the
plane, but has a different realization (b). Structure (c) has no finite motion and no
other realization, but it is not infinitesimally rigid.
The dimension of Rig(G, u) is \binom{d+1}{2}, provided the vectors u_i are not contained in an affine
hyperplane. The discussions above imply the following relation between the dimensions of
the spaces of stresses and of infinitesimal motions of a representation of a graph in d-space,
assuming that the representation is not contained in a hyperplane:

    dim(Inf) − dim(Str) = dn − m.    (6.10)

An interesting special case is obtained when m = dn − \binom{d+1}{2}. The structure is infinitesimally
rigid if and only if dim(Inf) = dim(Rig) = \binom{d+1}{2}, which in this case is equivalent to dim(Str) = 0,
which means that there is no nonzero stress.
If the representation is contained in a proper affine subspace, then the space of rigid
motions may have lower dimension, and so instead of (6.10) we get an inequality:
    dim(Inf) − dim(Str) ≤ dn − m.    (6.11)
Let Kn be a complete graph and x : [n] → Rd , a representation of it. To avoid some
uninteresting complications, let us assume that the representing vectors span Rd . The graph
Kn is trivially infinitesimally rigid in any representation x, and hence the space Inf of its
infinitesimal motions has dimension \binom{d+1}{2}.
We say that a set of edges Z ⊆ E(Kn ) is stress-free [infinitesimally rigid ], if the graph
([n], Z) is stress-free [infinitesimally rigid]. Equation (6.1) says that the set Z of edges is
stress-free if and only if the following matrices X^e ∈ R^{d×n} (e = uv ∈ Z) are linearly
independent:

    (X^e)_{ri} = (x_u)_r − (x_v)_r, if i = u;   (x_v)_r − (x_u)_r, if i = v;   0, otherwise.
Let L denote the linear space spanned by the matrices X e (e ∈ E(Kn )). It follows by
(6.10) that dim(L) = dn − \binom{d+1}{2}. Hence all maximal stress-free sets of edges have the same
cardinality dn − \binom{d+1}{2}. Furthermore, a set Z of edges is infinitesimally rigid if and only if
the matrices {X^e : e ∈ Z} span L, and hence the minimal infinitesimally rigid sets are the
same as the maximal stress-free sets of edges (corresponding to bases of L).

Figure 6.3: Rigid and nonrigid representations of the 3-prism in the plane.
Example 6.2.2 (Triangular Prism II) Let us return to the triangular prism in Example
5.3.4, represented in the plane. From (6.10) we get dim(Inf) − dim(Str) = 2 · 6 − 9 =
3, and hence dim(Inf) − dim(Rig) = dim(Str) (assuming that not all nodes are collinear
in the representation). Thus a representation of the triangular prism in the plane has an
infinitesimal motion if and only if it has a stress. This happens if and only if the lines of the
three thick edges pass through one point (which may be at infinity; Figure 6.3).
Not all these representations have a finite motion, but some do. In the first, fixing the
black nodes, the white triangle can “circle” the black triangle. The second is infinitesimally
rigid and stress-free, but it has a different realization (reflect the picture in a vertical axis);
the third has an infinitesimal motion (equivalently, it has a stress) but no finite motion.
6.2.2 Rigidity of convex 3-polytopes
Let G be the skeleton of a convex 3-polytope P , with the representation given by the positions
of the vertices. Cauchy’s Theorem 6.1.1 tells us that the space of stresses has dimension 0,
so we get that the space of infinitesimal motions has dimension dim(Inf) = 3n − m. We know
that dim(Inf) ≥ \binom{4}{2} = 6, and hence m ≤ 3n − 6. Of course, this simple inequality we know
already (Corollary 2.1.2). It also follows that if equality holds, i.e., when all faces of P are
triangles, then the space of infinitesimal motions is 6-dimensional; in other words,
Corollary 6.2.3 The structure consisting of the vertices and edges of a convex polytope with
triangular faces is rigid.
If not all faces of the polytope P are triangles, then it follows by the same computation
that the skeleton is not infinitesimally rigid. However, if the faces are “made out of cardboard”, which means that they are forced to preserve their shape, then the polytope will be
rigid. To prove this, one can put a pyramid over each face (flat enough to preserve convexity),
and apply Corollary 6.2.3 to this new polytope P ′ with triangular faces. Every nontrivial
infinitesimal motion of P ′ would extend to a nontrivial infinitesimal motion of P , but we
know already that P has no such motion.
Corollary 6.2.3 does not remain true if instead of convexity we only assume that the
polytope is simple (the surface of the polytope is homeomorphic to the sphere), as shown by
a tricky counterexample of Connelly [52].
6.2.3 Generic rigidity
In Chapter 5 we used vector-labelings in general position. Here we need a stronger condition:
we say that a vector-labeling is generic, if all coordinates of the representing vectors are
algebraically independent real numbers. This assumption is more restrictive than general
position, and it is generally an overkill, and we could always replace it by “the coordinates
of the representing vectors don’t satisfy any algebraic relation that they don’t have to, and
which is important to avoid in the proof”. Another way of handling the genericity assumption
is to choose independent random values for the coordinates from any distribution that is
absolutely continuous with respect to the Lebesgue measure (e.g. Gaussian, or uniform from
a bounded domain). We call this a random representation of G. The random representation
is generic with probability 1, and we can state our results as probabilistic statements holding
with probability 1.
We say that a graph is generically stress-free in Rd , if every generic representation of
it in Rd is stress-free; similarly, we say that G is generically rigid in Rd , if every generic
representation of it in Rd is infinitesimally rigid. The following fact was observed by Gluck
[95]:
Proposition 6.2.4 Let G be a graph and d ≥ 1. If G has a representation in Rd that is
infinitesimally rigid [stress-free], then G is generically rigid [stress-free] in Rd .
Proof.
Both infinitesimal rigidity and stress-freeness can be expressed in terms of the
non-vanishing of certain determinants composed of the coordinates of the representation. If
this holds for some choice of the coordinates, then it holds for the algebraically independent
choice.
Another way of stating this observation:
Corollary 6.2.5 If G is generically rigid [stress-free] in Rd , then a random representation
of G is infinitesimally rigid [stress-free] in Rd with probability 1.
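This suggests the following randomized test, sketched below under the assumptions of the corollary (the graph data is illustrative): sample a random representation and check infinitesimal rigidity there.

```python
import numpy as np

def generically_rigid(n, edges, d, seed=0):
    U = np.random.default_rng(seed).standard_normal((n, d))  # random representation
    R = np.zeros((len(edges), n * d))                         # rigidity matrix
    for r, (i, j) in enumerate(edges):
        R[r, d*i:d*(i+1)] = U[i] - U[j]
        R[r, d*j:d*(j+1)] = U[j] - U[i]
    return n * d - np.linalg.matrix_rank(R) == d * (d + 1) // 2

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(generically_rigid(4, K4, 2))   # K4 is generically rigid in the plane: True
```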
Generic rigidity and finite motion
In a generic position, there is no difference between infinitesimal non-rigidity and finite motion, as proved by Asimov and Roth [22]:
Theorem 6.2.6 Suppose that a generic representation of a graph in Rd admits a non-rigid
infinitesimal motion. Then it admits a finite motion.
Proof. Let x be a generic representation of G in Rd , and let v be an infinitesimal motion
that is not rigid. For every minimal set C of linearly dependent columns of Mx , there is a
nonzero infinitesimal motion vC whose support is C. Note that vC is uniquely determined
up to a scalar factor, and v is a linear combination of the vectors vC . Therefore there is a
minimal dependent set C for which vC is not rigid. So we may assume that v = vC for some
C.
From Cramer’s Rule it follows that the entries of v can be chosen as polynomials of the
entries of Mx with integral coefficients. Let v(x) denote such a vector. Note that Mx v(x) = 0
is an algebraic equation in the entries xi,t , and so by the assumption that these entries are
algebraically independent, it follows that it holds identically. In other words, My v(y) = 0
for every representation y : V → Rd , and so v(y) is an infinitesimal motion of G (in position
y) for every y.
Consider the differential equations
    (d/dt) y(t) = v(y(t)),    y(0) = x.
Since the right side is a continuous function of y, this system has a solution in a small interval
0 ≤ t ≤ T . This defines a non-rigid motion of the graph. Indeed, for every edge ij,
    (d/dt) |y_i(t) − y_j(t)|² = 2 (y_i(t) − y_j(t))^T (ẏ_i(t) − ẏ_j(t)) = 2 (y_i(t) − y_j(t))^T (v(y(t))_i − v(y(t))_j) = 0,
since v(y(t)) is an infinitesimal motion. On the other hand, this is not a rigid motion, since
then v(x) = y′ (0) would be an infinitesimal rigid motion.
6.2.4 Generically stress-free structures in the plane
Laman [144] gave a characterization of generically stress-free graphs in the plane. To formulate the theorem, we need a couple of definitions. A graph G = (V, E) is called a Laman
graph, if every subset S ⊆ V with |S| ≥ 2 spans at most 2|S| − 3 edges. Note that the Laman
condition implies that the graph has no multiple edges.
To motivate this notion, we note that every planar representation of a graph with n ≥ 2
nodes and more than 2n − 3 edges will admit a nonzero stress, by (6.10). If a graph has
exactly 2n − 3 edges, then a representation of it will be rigid if and only if it is stress-free.
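For small examples, the Laman condition can be checked by brute force, as in the following sketch (ours, exponential in n; an efficient test would use a pebble-game algorithm, which we do not attempt here):

```python
from itertools import combinations

def is_laman(n, edges):
    """Check that every S with |S| >= 2 spans at most 2|S| - 3 edges."""
    for k in range(2, n + 1):
        for S in combinations(range(n), k):
            if sum(1 for (i, j) in edges if i in S and j in S) > 2 * k - 3:
                return False
    return True

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(is_laman(4, K4))                                         # False: 6 > 5 edges
print(is_laman(4, [(0, 1), (1, 2), (0, 2), (0, 3), (1, 3)]))   # True
```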
The Henneberg construction increases the number of nodes of a graph G in one of two
ways: (H1) create a new node and connect it to at most two old nodes, (H2) subdivide an
edge and connect the new node to a third node. If a graph can be obtained by iterated
Henneberg construction, starting with a single edge, then we call it a Henneberg graph.
Theorem 6.2.7 (Laman’s Theorem) For a graph G = (V, E) the following are equivalent:
(a) G is generically stress-free in the plane;
(b) G is a Laman graph;
(c) G is a Henneberg graph.
Proof. (a)⇒(b): Suppose that V has a subset S with |S| ≥ 2 spanning more than 2|S| − 3
edges. Then equation (6.11) (applied to this induced subgraph) implies that there is a stress
on the graph:
dim(Str) ≥ dim(Inf) − 2|S| + (2|S| − 2) ≥ 1.
(b)⇒(c): We prove by induction on the number of nodes that every Laman graph G can
be built up by the Henneberg construction. Since the average degree of G is 2|E|/|V | ≤
(4|V | − 6)/|V | < 4, there is a node i of degree at most 3. If i has degree 2, then we can delete
i and proceed by induction and (H1). So we may assume that i has degree 3. Let a, b, c be
its neighbors.
Call a set S ⊆ V spanning exactly 2|S| − 3 edges of G a tight set. Clearly |S| ≥ 2. The
key observation is:
Claim 1 If two tight sets have more than one node in common, then their union is also
tight.
Indeed, suppose that S1 and S2 are tight, and let E1 and E2 be the sets of edges they
span. Then E1 ∩ E2 is the set of edges spanned by S1 ∩ S2 , and so |E1 ∩ E2 | ≤ 2|S1 ∩ S2 | − 3
(using the assumption that |S1 ∩ S2 | ≥ 2). The set E1 ∪ E2 is a subset of the set of edges
spanned by S1 ∪ S2 , and so the number of edges spanned by S1 ∪ S2 is at least
|E1 ∪ E2 | = |E1 | + |E2 | − |E1 ∩ E2 | ≥ (2|S1 | − 3) + (2|S2 | − 3) − (2|S1 ∩ S2 | − 3)
= 2|S1 ∪ S2 | − 3.
Since G is a Laman graph, we must have equality here, implying that S1 ∪ S2 is tight.
Claim 2 There is no tight set containing a, b and c but not i.
Indeed, adding i to such a set the Laman property would be violated.
We want to prove that we can delete i and create a new edge between two of its neighbors
to get a Laman graph G′ ; then G arises from G′ by Henneberg construction (H2). Let us try
to add the edge ab to G − i; if the resulting graph G′ is not a Laman graph, then there is a
set S ⊆ V \ {i} with |S| ≥ 2 spanning more than 2|S| − 3 edges of G′ . Since S spans at most
2|S| − 3 edges of G, this implies that S is tight, and a, b ∈ S. (If a and b are adjacent in
G, then {a, b} is such a tight set.) If there are several sets S with this property, then Claim
1 implies that their union is also tight. We denote by Sab this union. Similarly we get the
tight sets Sbc and Sca .
Claim 2 implies that the sets Sab , Sbc and Sca are distinct, and hence Claim 1 implies
that any two of them have one node in common. Then the set S = Sab ∪ Sbc ∪ Sca ∪ {i} spans
at least
(2|Sab | − 3) + (2|Sbc | − 3) + (2|Sca | − 3) + 3 = 2|S| − 2
edges, a contradiction. This completes the proof of (b)⇒(c).
(c)⇒(a): We use induction on the number of edges. Let G arise from G′ by one step of
the Henneberg construction, and let σ be a nonzero stress on G. We want to construct a
nonzero stress on G′ (which will be a contradiction by induction, and so it will complete the
proof).
First suppose that we get G by step (H1), and let i be the new node. It suffices to argue
that σ is 0 on the new edges, so that σ gives a stress on G′ . If i has degree 0 or 1, then
this is trivial. Suppose that i has two neighbors a and b. Then by the assumption that the
representation is generic, we know that the points xi , xa and xb are not collinear, so from the
stress condition
σia (xi − xa ) + σib (xi − xb ) = 0
it follows that σia = σib = 0.
Suppose that G arises by the Henneberg construction (H2), by subdividing the edge ab ∈
E(G′ ) and connecting the new node i to c ∈ V (G′ ) \ {a, b}. Let us modify the representation
x as follows:
    x′_j = (1/2)(x_a + x_b) if j = i, and x′_j = x_j otherwise.
This representation of G is not generic any more, but it is generic if we restrict it to G′.

If the generic representation x of G admits a nonzero stress, then by Proposition 6.2.4
so does every other representation; in particular, x′ admits a nonzero stress σ′. Consider the
stress condition for i:

    σ′_ia (x′_i − x′_a) + σ′_ib (x′_i − x′_b) + σ′_ic (x′_i − x′_c) = 0.

Here x′_i − x′_a = (1/2)(x_b − x_a) and x′_i − x′_b = (1/2)(x_a − x_b) are parallel, but x′_i − x′_c is
not parallel to them. So it follows that σ′_ic = 0 and σ′_ia = σ′_ib. Defining σ′_ab = σ′_ia, we get
a nonzero stress on G′, a contradiction.
The discussion at the end of Section 6.2.1 remains valid if we consider a generic representation. Theorem 6.2.7 gives the following combinatorial characterization of stress-free sets
in any generic representation (which we consider fixed for now): a subset Z ⊆ E(Kn ) is
stress-free if and only if every subset S ⊆ [n] with |S| ≥ 2 spans at most 2|S| − 3 edges of Z.
Furthermore, minimal generically rigid sets of edges are exactly the maximal Laman graphs,
and they all have 2n − 3 edges. In other words,
Corollary 6.2.8 A graph with exactly 2n − 3 edges is generically rigid in the plane if and
only if it is a Laman graph.
Laman’s Theorem can be used to prove the following characterization of generically rigid
graphs (not just the minimal ones) in the plane, as observed by Lovász and Yemini [174]. Let
us say that G satisfies the partition condition, if for every decomposition G = G_1 ∪ ⋯ ∪ G_k
into the union of graphs with at least one edge, we have

    ∑_i (2|V(G_i)| − 3) ≥ 2|V| − 3.    (6.12)
Note that this condition implies that G has at least 2|V | − 3 edges (considering the partition
into single edges).
Theorem 6.2.9 A graph G = (V, E) is generically rigid in the plane if and only if it satisfies
the partition condition.
In the special case when G has exactly 2|V | − 3 edges, Corollary 6.2.8 and Theorem 6.2.9
give two different conditions for generic rigidity, which are in a sense dual to each other.
We’ll prove the equivalence of these conditions as part of the proof of Theorem 6.2.9.
Proof.
To prove that the partition condition is necessary, consider any decomposition
G = G_1 ∪ ⋯ ∪ G_k into the union of graphs with at least one edge. Let E′ ⊆ E be a smallest
rigid subset. We know that E′ is stress-free, and hence E′ ∩ E(G_i) is stress-free, which implies
that |E′ ∩ E(G_i)| ≤ 2|V(G_i)| − 3 by Theorem 6.2.7 (here we use the easy direction). It follows
that 2|V| − 3 = |E′| ≤ ∑_i |E′ ∩ E(G_i)| ≤ ∑_i (2|V(G_i)| − 3).
To prove the converse, suppose that G is not generically rigid, and fix any generic representation x of it. Let v : V → R2 be an infinitesimal motion of G that is not a rigid infinitesimal
motion, and define a graph H on node set V , where ij ∈ E(H) if (vi − vj )T (xi − xj ) = 0.
Since v is an infinitesimal motion of G, we have E(G) ⊆ E(H), and since v is not a rigid
motion, H is not complete.
Let V1 , . . . , Vr be the maximal cliques of H. Then |Vi ∩ Vj | ≤ 1, since otherwise Vi ∪ Vj
would be moved by v in a rigid way, and so it would be a clique in H.
We claim that ∑_i (2|V_i| − 3) < 2n − 3. This will show that the partition E(G) =
E_1 ∪ ⋯ ∪ E_r, where E_i = E(G) ∩ \binom{V_i}{2}, violates the partition condition.
Claim 1 If a subgraph H0 of H with at least one edge is rigid, then V (H0 ) is a clique in H.
Indeed, the restriction v|V (H0 ) is an infinitesimal motion of H0 , and since H0 is rigid, it
must be a rigid infinitesimal motion, implying that (vi − vj )T (xi − xj ) = 0 for any two nodes
i, j ∈ V (H0 ). This claim implies that V (H0 ) ⊆ Vi for some i.
Choose a maximal stress-free subset F_i ⊆ \binom{V_i}{2}, and let F = ∪_i F_i.
Claim 2 F is stress-free.
Suppose that this is not the case, and let Z be a minimal subset of F that carries a stress.
Theorem 6.2.7 implies that |Z| = 2|V (Z)| − 2, but |Z ′ | ≤ 2|V (Z ′ )| − 3 for every ∅ ⊂ Z ′ ⊂ Z.
Corollary 6.2.8 implies that Z \ {e} is rigid for every e ∈ Z, and so Z is rigid. By Claim 1,
this implies that V (Z) ⊆ Vi for some i, and so Z ⊆ Fi . But this is a contradiction since Fi
is stress-free.
Now it is easy to complete the proof. Clearly |F_i| = 2|V_i| − 3, so we want to prove that
|F| < 2n − 3. By Claim 2, F is stress-free, and hence |F| ≤ 2n − 3. If equality holds here,
then F is rigid, and this contradicts Claim 1.
Remark 6.2.10 A 3-dimensional analogue of Laman’s Theorem, or of Theorem 6.2.9, is not
known. It is not even known whether there is a positive integer k such that every k-connected
graph is generically rigid in 3-space.
We conclude with an application of Theorem 6.2.9 (Lovász and Yemini [174]):
Corollary 6.2.11 Every 6-connected graph is generically rigid in the plane.
Proof. Suppose that G = (V, E) is 6-connected but not generically rigid, and consider such
a graph with a minimum number of nodes. Then by Theorem 6.2.9 there exist subgraphs
Gi = (Vi , Ei ) (i = 1, . . . , k) such that G = G1 ∪ · · · ∪ Gk and
∑_{i=1}^{k} (2|Vi | − 3) < 2|V | − 3.        (6.13)
We may assume that every Gi is complete, since adding edges inside a set Vi does not change
(6.13). Furthermore, we may assume that every node belongs to at least two of the Vi : indeed,
if v ∈ Vi is not contained in any other Vj , then deleting it preserves (6.13) and also preserves
6-connectivity (deleting a node whose neighbors form a complete graph does not decrease the connectivity of the graph).
We claim that
∑_{Vi ∋ v} (2 − 3/|Vi |) ≥ 2        (6.14)
for each node v. Let (say) V1 , . . . , Vr be those Vi containing v, with |V1 | ≤ |V2 | ≤ · · · ≤ |Vr |. Each term
on the left hand side is at least 1/2, so if r ≥ 4, we are done. Furthermore, the graph is
6-connected, hence v has degree at least 6, and so
∑_{i=1}^{r} (|Vi | − 1) ≥ 6.        (6.15)
If r = 3, then |V3 | ≥ 3, and so the left hand side of (6.14) is at least 1/2 + 1/2 + 1 = 2. If
r = 2 and |V1 | ≥ 3, then the left hand side of (6.14) is at least 1 + 1 = 2. Finally, if r = 2 and
|V1 | = 2, then |V2 | ≥ 6 by (6.15), and so the left hand side of (6.14) is at least 1/2 + 3/2 = 2.
Now summing (6.14) over all nodes v, we get
∑_{v∈V} ∑_{Vi ∋ v} (2 − 3/|Vi |) = ∑_{i=1}^{k} (2 − 3/|Vi |) ∑_{v∈Vi} 1 = ∑_{i=1}^{k} (2|Vi | − 3)
on the left hand side and 2|V | on the right hand side, which contradicts (6.13).
Remark 6.2.12 We can put generic stress-freeness in a more general context, using some
terminology and results from matroid theory. For a vector labeling of Kn in Rd , stress-free
sets of edges of Kn define a matroid on E(Kn ), which we call the d-dimensional rigidity
matroid of (Kn , x). For an explicit representation x, we can do computations in this matroid
using elementary linear algebra.
The set function |V (E)| (E ⊆ E(Kn )) is submodular, and hence so is the set function 2|V (E)| − 3. This latter set function is negative on E = ∅ (it is positive on every nonempty set of edges), and hence we consider the slightly modified set function

f (E) = 0 if E = ∅,  and  f (E) = 2|V (E)| − 3 otherwise.
The set function f is submodular on intersecting pairs of sets (since for these the values have not been adjusted). From such a set function, we can define a matroid, in which a set X ⊆ E is independent if and only if f (Y ) ≥ |Y | for every nonempty Y ⊆ X. The rank function of this matroid is then given by

r(X) = min ∑_{i=1}^{m} f (Xi ),

where the minimum is taken over all partitions X = X1 ∪ · · · ∪ Xm into nonempty sets.
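For a concrete small set X of edges, the independence test f (Y ) ≥ |Y | can be run by brute force. A sketch (our own names; exponential in |X|, so for small sets only):

```python
# X is independent in the count matroid iff f(Y) >= |Y| for every
# nonempty Y subset of X, where f(Y) = 2|V(Y)| - 3.
from itertools import combinations

def independent(X):
    for k in range(1, len(X) + 1):
        for Y in combinations(X, k):
            V = {v for e in Y for v in e}
            if 2 * len(V) - 3 < k:
                return False
    return True

print(independent([(0, 1), (1, 2), (0, 2)]))  # True: a triangle is stress-free
print(independent([(0, 1), (0, 1)]))          # False: two parallel edges
```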
Exercise 6.2.13 Suppose that a vector-labeled graph has a stress. (a) Prove that applying an affine transformation to the positions of the nodes, the structure obtained has a stress. (b) Prove that applying a projective transformation to the positions of the nodes, the structure obtained has a stress.
Exercise 6.2.14 Prove that infinitesimal rigid motions are characterized by the
property that (6.9) holds for every pair of nodes i ̸= j (not just adjacent pairs).
Exercise 6.2.15 Suppose that a structure whose nodes are in general position is
globally rigid in Rd . Prove that (a) it is infinitesimally redundantly rigid: deleting
any edge, it remains infinitesimally rigid; (b) it is (d+1)-connected. [Hendrickson]
Exercise 6.2.16 Let u be a representation of G = (V, E) in Rd . Prove that a
vector τ ∈ RE is in the orthogonal complement of Str(G, u) if and only if there
are vectors vi ∈ Rd such that τij = (vi − vj )T (ui − uj ) for every edge ij.
Exercise 6.2.17 Construct a 5-connected graph that is not generically rigid in
the plane.
Chapter 7
Coin representation
7.1 Koebe’s theorem
We prove Koebe’s important theorem on representing a planar graph by touching circles
[138], and its extension to a Steinitz representation, the Cage Theorem.
Theorem 7.1.1 (Koebe’s Theorem) Let G be a 3-connected planar graph. Then one can
assign to each node i a circle Ci in the plane so that their interiors are disjoint, and two
nodes are adjacent if and only if the corresponding circles are tangent.
Figure 7.1: The coin representation of a planar graph
If we represent each of these circles by their center, and connect two of these centers by a
segment if the corresponding circles touch, we get a planar map, which we call the tangency
graph of the family of circles. Koebe’s Theorem says that every planar graph is the tangency
graph of a family of openly disjoint circular discs.
Koebe’s Theorem was rediscovered and generalized by Andreev [19, 20] and Thurston
[233]. One of these extensions strengthens Koebe’s Theorem in terms of a simultaneous
representation of a 3-connected planar graph and of its dual by touching circles. To be
precise, we define a double circle representation in the plane of a planar map G as two
families of circles, (Ci : i ∈ V ) and (Dp : p ∈ V ∗ ) in the plane, so that for every edge ij,
bordering countries p and q, the following holds: the circles Ci and Cj are tangent at a point
xij ; the circles Dp and Dq are tangent at the same point xij ; and the circles Dp and Dq
intersect the circles Ci and Cj at this point orthogonally. Furthermore, the interiors of the circular discs Ĉi bounded by the circles Ci are disjoint, and so are the disks D̂j , except that the circle Dp0 representing the outer country contains all the other circles Dp in its interior.
The conditions in the definition of a double circle representation imply other useful properties. The proof of these facts is left to the reader as an exercise.
Proposition 7.1.2 (a) Every country p is bounded by a convex polygon, and the circle Dp touches every edge of p. (b) For every bounded country p, the circles Dp and Ci (i ∈ V (p)) cover p. (c) If i ∈ V is not incident with p ∈ V ∗ , then Ĉi and D̂p are disjoint.
Figure 7.2: Two sets of circles, representing (a) K4 and its dual (which is another
K4 ); (b) a planar graph and its dual.
Figure 7.2(a) shows this double circle representation of the simplest 3-connected planar
graph, namely K4 . For one of the circles, the exterior domain could be considered as the
disk it “bounds”. The other picture shows part of the double circle representation of a larger
planar graph.
The main theorem in this chapter is that such representations exist.
Theorem 7.1.3 Every 3-connected planar map G has a double circle representation in the
plane.
In the next sections, we will see several proofs of this basic result.
7.1.1 Conditions on the radii
We fix a triangular country p0 in G or G∗ (say, G) as the outer face; let a, b, c be the nodes of
p0 . For every node i ∈ V , let F (i) denote the set of bounded countries containing i, and for
every country p, let V (p) denote the set of nodes on the boundary of p. Let U = V ∪V ∗ \{p0 },
and let J denote the set of pairs ip with p ∈ V ∗ \ {p0 } and i ∈ V (p).
Figure 7.3: Notation.
Let us start with assigning a positive real number ru to every node u ∈ U . Think of this
as a guess for the radius of the circle representing u (we don’t guess the radius for the circle
representing p0 ; this will be easy to add at the end). For every ip ∈ J, we define
αip = arctan(rp /ri )    and    αpi = arctan(ri /rp ) = π/2 − αip .        (7.1)
Suppose that i is an internal node. If the radii correspond to a correct double circle
representation, then 2αip is the angle between the two edges of the country p at i (Figure
7.3). Since these angles fill out the full angle around i, we have
∑_{V (p)∋i} arctan(rp /ri ) = π        (i ∈ V \ {a, b, c}).        (7.2)
We can derive similar conditions for the external nodes and the countries:
∑_{V (p)∋i} arctan(rp /ri ) = π/6        (i ∈ {a, b, c}),        (7.3)
and
∑_{i∈V (p)} arctan(ri /rp ) = π        (p ∈ V ∗ \ {p0 }).        (7.4)
The key to the construction of a double circle representation is that these conditions are
sufficient.
Lemma 7.1.4 Suppose that the radii ru > 0 (u ∈ U ) are chosen so that (7.2), (7.3) and
(7.4) are satisfied. Then there is a double circle representation with these radii.
Proof. Let us construct two right triangles with sides ri and rp for every ip ∈ J, one with
each orientation, and glue these two triangles together along their hypotenuse to get a kite
Kip . Starting from a node i1 of p0 , put down all kites Ki1 p in the order of the corresponding
countries in a planar embedding of G. By (7.3), these will fill an angle of π/3 at i1 . Now
proceed to a bounded country p1 incident with i1 , and put down all the remaining kites Kp1 i
in the order in which the nodes of p1 follow each other on the boundary of p1 . By (7.4),
these triangles will cover a neighborhood of p1 . We proceed similarly to the other bounded
countries containing i1 , then to the other nodes of p1 , etc. The conditions (7.2), (7.3) and
(7.4) will guarantee that we tile a regular triangle.
Let Ci be the circle with radius ri about the position of node i constructed above, and
define Dp analogously. We still need to define Dp0 . It is clear that p0 is drawn as a regular
triangle, and hence we necessarily have ra = rb = rc . We define Dp0 as the inscribed circle
of the regular triangle abc.
It is clear from the construction that we get a double circle representation of G.
Of course, we cannot expect conditions (7.2), (7.3) and (7.4) to hold for an arbitrary choice
of the radii ru . In the next sections we will see three methods to construct radii satisfying
the conditions in Lemma 7.1.4; but first we give two simple lemmas proving something like
that—but not quite what we want.
For a given assignment of radii ru (u ∈ U ), consider the defects of the conditions in
Lemma 7.1.4. To be precise, for a node i ̸= a, b, c, define the defect by
δi = ∑_{p∈F (i)} αip − π;        (7.5)
for a boundary node i ∈ {a, b, c}, we modify this definition:
δi = ∑_{p∈F (i)} αip − π/6.        (7.6)
If p is a bounded face, we define its defect by
δp = ∑_{i∈V (p)} αpi − π.        (7.7)
(These defects may be positive or negative.) While of course an arbitrary choice of the radii
ru will not guarantee that all these defects are 0, the following lemma shows that this is true
at least “on the average”:
Lemma 7.1.5 For every assignment of radii, we have
∑_{u∈U} δu = 0.
Proof. From the definition,

∑_{u∈U} δu = ∑_{i∈V \{a,b,c}} ( ∑_{p∈F (i)} αip − π ) + ∑_{i∈{a,b,c}} ( ∑_{p∈F (i)} αip − π/6 ) + ∑_{p∈V ∗ \{p0 }} ( ∑_{i∈V (p)} αpi − π ).
Every pair ip ∈ J contributes αip + αpi = π/2. Since |J| = 2m − 3, we get

∑_{u∈U} δu = (2m − 3)·(π/2) − (n − 3)π − 3·(π/6) − (f − 1)π = (m − n − f + 2)π.
By Euler’s formula, this proves the lemma.
7.1.2 An almost-proof
To prove Theorem 7.1.3, we want to prove the existence of an assignment of radii so that all
defects are zero. Let us start with an arbitrary assignment; we measure its “badness” by its
total defect
D = ∑_{u∈U} |δu |.
How to modify the radii so that we reduce the total defect? The key observation is the
following. Let i be a node with positive defect δi > 0. Suppose that we increase the radius
ri , while keeping the other radii fixed. Then αip decreases for every country p ∈ F (i), and
correspondingly αpi increases, but nothing else changes. Hence δi decreases, δp increases for
every p ∈ F (i), and all the other defects remain unchanged.
Since the sum of (signed) defects remains the same by Lemma 7.1.5, we can say that
some of the defect of node i is distributed to its neighbors (in the graph (U, J)). (It is more difficult to describe in what proportion this defect is distributed, and we’ll try to avoid having to describe this explicitly.)
It is clear that the total defect does not increase, and if at least one of the countries in
F (i) had negative defect, then in fact the total defect strictly decreases.
The same argument applies to the defects of nodes in V ∗ , as well as to nodes with negative
defects. Thus if the graph (U, J) contains a node with positive defect, which is adjacent to a
node with negative defect, then we can decrease the total defect.
Unless we already have a double circle packing, there certainly are nodes in U with
positive defect and other nodes with negative defect. What if these positive-defect nodes are
isolated from the negative-defect nodes by a sea of zero-defect nodes? This is only a minor
problem: we can repeatedly distribute some of the defect of positive-defect nodes, so that
more and more zero-defect nodes acquire positive defect, until eventually we reach a neighbor
of a negative-defect node.
Thus if we start with an assignment of radii that minimizes the total defect, then all
defects must be 0, and (as we have seen) we can construct a double circle representation
from this.
Are we done? What remains to be proved is that there is a choice of positive radii
that minimizes the total defect. This kind of argument about the existence of minimizers is
usually straightforward based on some kind of compactness argument, but in this case it is
more awkward to prove this existence directly. Below we will see how to measure the “total
defect” so that this difficulty will be easier to overcome.
7.1.3 A strange objective function
The proof of Theorem 7.1.3 to be described here is due to Colin de Verdière [49]. As a
preparation, we prove a lemma showing that conditions (7.2), (7.3) and (7.4), considered as
linear equations for the angles αip , can be satisfied. (We don’t claim here that the solutions
are obtained in the form (7.1).)
Lemma 7.1.6 Let G be a planar map with a triangular unbounded face p0 = abc. Then
there are real numbers 0 < βip < π/2 (p ∈ V ∗ \ {p0 }, i ∈ V (p)) such that
∑_{V (p)∋i} βip = π        (i ∈ V \ {a, b, c}),        (7.8)

∑_{V (p)∋i} βip = π/6        (i ∈ {a, b, c}),        (7.9)

and

∑_{i∈V (p)} (π/2 − βip ) = π        (p ∈ V ∗ \ {p0 }).        (7.10)
Proof. Consider any straight line embedding of the graph (say, the Tutte rubber band embedding), with p0 nailed to a regular triangle. For i ∈ V (p), let βip denote half the angle of the polygon p at the vertex i. Then the conclusions are easily checked: the angles of the bounded countries fill the full angle 2π around an internal node, fill the angle π/3 at a corner of the regular triangle, and the angles of a convex k-gon sum to (k − 2)π.
Proof of Theorem 7.1.3. For every number x ∈ R, define

ϕ(x) = ∫_{−∞}^{x} arctan(e^t ) dt.
It is easy to verify that the function ϕ is monotone increasing, convex, and
max{0, (π/2)x} ≤ ϕ(x) ≤ 1 + max{0, (π/2)x}        (7.11)
for all x ∈ R.
Let x ∈ R^V , y ∈ R^{V ∗} . Using the numbers βip from Lemma 7.1.6, consider the function

F (x, y) = ∑_{i,p: p∈F (i)} ( ϕ(yp − xi ) − βip (yp − xi ) ).
We want to minimize the function F ; in this case, the existence of the minimum follows
by standard arguments from the next claim:
Claim 1 If |x| + |y| → ∞ while (say) x1 = 0, then F (x, y) → ∞.
We need to fix one of the xi , since if we add the same value to each xi and yp , then the
value of F does not change. To prove the claim, we use (7.11):

F (x, y) = ∑_{i,p: p∈F (i)} ( ϕ(yp − xi ) − βip (yp − xi ) )
        ≥ ∑_{i,p: p∈F (i)} ( max{0, (π/2)(yp − xi )} − βip (yp − xi ) )
        = ∑_{i,p: p∈F (i)} max{ −βip (yp − xi ), ((π/2) − βip )(yp − xi ) }.

Since −βip is negative but π/2 − βip is positive, each term here is nonnegative, and a given term tends to infinity if |xi − yp | → ∞. If x1 remains 0 but |x| + |y| → ∞, then at least one difference |xi − yp | must tend to infinity. This proves the Claim.
It follows from this Claim that F attains its minimum at some point (x, y), and here
∂F/∂xi (x, y) = ∂F/∂yp (x, y) = 0        (i ∈ V, p ∈ V ∗ \ {p0 }).        (7.12)
Claim 2 The radii ri = e^{xi} (i ∈ V ) and rp = e^{yp} (p ∈ V ∗ \ {p0 }) satisfy the conditions of Lemma 7.1.4.
Let i be an internal node; then

∂F/∂xi (x, y) = − ∑_{p∈F (i)} ϕ′ (yp − xi ) + ∑_{p∈F (i)} βip = − ∑_{p∈F (i)} arctan(e^{yp − xi}) + π,

and so by (7.12),

∑_{p∈F (i)} arctan(rp /ri ) = ∑_{p∈F (i)} arctan(e^{yp − xi}) = π.        (7.13)
This proves (7.2). Conditions (7.3) and (7.4) follow by a similar computation.
Applying Lemma 7.1.4 completes the proof.
Remark 7.1.7 The proof described above provides an algorithm to compute a double circle
representation (up to an arbitrary precision). It is in fact quite easy to program: We can
use an “off the shelf” optimization algorithm for smooth convex functions to compute the
minimizer of F and through it, the representation.
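For concreteness, such a computation might be organized as follows. This is only a sketch under assumptions: the incidence pairs J and the angles beta[(i, p)] of Lemma 7.1.6 are taken as precomputed inputs, the indexing of nodes and bounded countries is ours, and ϕ is evaluated numerically from ϕ(0) = ∫_{−∞}^{0} arctan(e^t ) dt, which equals Catalan’s constant.

```python
# Minimize F(x, y) = sum over ip in J of (phi(y_p - x_i) - beta_ip (y_p - x_i))
# with an off-the-shelf optimizer; the radii are then r_i = e^{x_i}, r_p = e^{y_p}.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

CATALAN = 0.9159655941772190   # phi(0) = integral_{-inf}^0 arctan(e^t) dt

def phi(x):
    val, _ = quad(lambda t: np.arctan(np.exp(t)), 0.0, x)
    return CATALAN + val

def F(z, J, beta, nV):
    x, y = z[:nV], z[nV:]
    return sum(phi(y[p] - x[i]) - beta[(i, p)] * (y[p] - x[i]) for i, p in J)

def F_grad(z, J, beta, nV):
    g = np.zeros_like(z)
    x, y = z[:nV], z[nV:]
    for i, p in J:
        d = np.arctan(np.exp(y[p] - x[i])) - beta[(i, p)]  # phi'(y_p - x_i) - beta_ip
        g[i] -= d        # contribution to dF/dx_i
        g[nV + p] += d   # contribution to dF/dy_p
    return g

# res = minimize(F, np.zeros(nV + nF), jac=F_grad, args=(J, beta, nV))
# radii = np.exp(res.x)
```

Since F is invariant under adding the same constant to all coordinates, the minimizer is not unique, but any minimizer yields radii satisfying Lemma 7.1.4.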
7.2 Formulation in space

7.2.1 The Cage Theorem
One of the nicest consequences of the coin representation (more exactly, the double circle
representation in Theorem 7.1.3) is the following theorem, due to Andreev [19], which is a
rather far reaching generalization of the Steinitz Representation Theorem.
Theorem 7.2.1 (The Cage Theorem) Every 3-connected planar graph can be represented
as the skeleton of a convex 3-polytope such that every edge of the polytope touches a given
sphere.
Figure 7.4: A convex polytope in which every edge is tangent to a sphere creates
two families of circles on the sphere.
Proof. Let G = (V, E) be a 3-connected planar map, and let p0 be its unbounded face. Let
(Ci : i ∈ V ) and (Dj : j ∈ V ∗ ) be a double circle representation of G in the plane. Take a
sphere S touching the plane at the center c of Dq0 , where q0 ∈ V ∗ \ {p0 }. Consider the point
of tangency as the south pole of S. Project the plane onto the sphere from the north pole
(the antipode of c). This transformation, called inverse stereographic projection, has many
useful properties: it maps circles and lines onto circles, and it preserves the angle between
them. It will be convenient to choose the radius of the sphere so that the image of Dq0 is
contained in the southern hemisphere, but the image of the interior of Dp0 covers more than
a hemisphere.
Let (Ci′ : i ∈ V ) and (Dj′ : j ∈ V ∗ ) be the images of the circles in the double circle representation. We define caps (Ĉi : i ∈ V ) and (D̂j : j ∈ V ∗ ) with boundaries Ci′ and Dj′ , respectively: we assign the cap not containing the north pole to every circle except to D′p0 , to which we assign the cap containing the north pole. This way we get two families of caps on the sphere. Every cap covers less than a hemisphere, since the caps Ĉi (i ∈ V ) and D̂p (p ∈ V ∗ \ {p0 , q0 }) miss both the north pole and the south pole, and D̂p0 and D̂q0 have this
property by the choice of the radius of the sphere. The caps Ĉi are openly disjoint, and so are the caps D̂j . Furthermore, for every edge ij ∈ E(G), the caps Ĉi and Ĉj are tangent to each other, and so are the caps D̂p and D̂q representing the endpoints of the dual edge pq; the two points of tangency are the same, and Ci′ and Cj′ are orthogonal to Dp′ and Dq′ . The tangency graph of the caps Ĉi , drawn on the sphere by arcs of large circles, is isomorphic to the graph G.
This nice picture translates into polyhedral geometry as follows. Let ui be the point above the center of Ĉi whose “horizon” is the circle Ci′ , and let vp be defined analogously for p ∈ V ∗ . Let ij ∈ E(G), and let pq be the corresponding edge of G∗ . The points ui and uj are contained in the tangent line of the sphere that is orthogonal to the circles Ci′ and Cj′ at their common point x; this is clearly the same as the common tangent of Dp′ and Dq′ at x. The plane vpT x = 1 intersects the sphere in the circle Dp′ , and hence it contains its tangents, in particular the points ui and uj , and similarly, all points uk where k is a node of the facet p. Since the cap D̂p is disjoint from Ĉk if k is not a node of the facet p, we have vpT uk < 1 for every such node.
This implies that the polytope P = conv{ui : i ∈ V } is contained in the polyhedron P ′ = {x ∈ R3 : vpT x ≤ 1 ∀p ∈ V ∗ }. Furthermore, every inequality vpT x ≤ 1 defines a facet Fp of P with vertices ui , where i is a node of p. Every ray from the origin intersects one of the countries p in the drawing of the graph on the sphere, and therefore it intersects Fp . This implies that P has no other facets, and thus P = P ′ . It also follows that every edge of P connects two vertices ui and uj , where ij ∈ E, and hence the skeleton of P is isomorphic to G and every edge is tangent to the sphere.
Conversely, the double circle representation follows easily from this theorem, and the
argument is obtained basically by reversing the proof above. Let P be a polytope as in the
theorem. It will be convenient to assume that the center of the sphere is in the interior of
P ; this can be achieved by an appropriate projective transformation of the space (we come
back to this in Section 7.2.2).
We start with constructing a double circle representation on the sphere: each node is represented by the horizon on the sphere when looking from the corresponding vertex, and each
facet is represented by the intersection of its plane with the sphere (Figure 7.4). Elementary
geometry tells us that the circles representing adjacent nodes touch each other at the point
where the edge touches the sphere, and the two circles representing the adjacent countries
also touch each other at this point, and they are orthogonal to the first two circles. Furthermore, the interiors (smaller sides) of the horizon-circles are disjoint, and the same holds for
the facet-circles. These circles can be projected to the plane by stereographic projection. It
is easy to check that the projected circles form a double circle representation.
Remark 7.2.2 There is another polyhedral representation that can be read off from the
double circle representation. Let (Ci : i ∈ V ) and (Dp : p ∈ V ∗ ) form a double circle
representation of G = (V, E) on the sphere S, and for simplicity assume that interiors (smaller
sides) of the Ci are disjoint, and the same holds for the interiors of the Dp . We can represent
each circle Ci as the intersection of a sphere Ai with S, where Ai is orthogonal to S. Similarly,
we can write Dp = Bp ∩ S, where Bp is a sphere orthogonal to S. Let Âi and B̂p denote the corresponding closed balls.

Let P denote the set of points in the interior of S that are not contained in any Âi or B̂p . This set is open and nonempty (since it contains the origin). The balls Âi and B̂p cover the sphere S, and even their interiors do so, with the exception of the points xij where two circles Ci and Cj touch. It follows that P is a domain whose closure P̄ contains a finite number of points of the sphere S. It also follows that no three of the spheres Ai and Bp have
a point in common except on the sphere S. Hence those points in the interior of S that
belong to two of these spheres form circular arcs that go from boundary to boundary, and
it is easy to see that they are orthogonal to S. For every incident pair (i, p) (i ∈ V, p ∈ V ∗ )
there is such an “edge” of P . The “vertices” of P are the points xij , which are all on the
sphere S, and together with the “edges” as described above they form a graph isomorphic to
the medial graph G+ of G.
All this translates very nicely if we view the interior of S as a Poincaré model of the 3-dimensional hyperbolic space. In this model, spheres orthogonal to S are “planes”, and hence P is a polyhedron. All the “vertices” of P are at infinity, but if we allow them,
then P is a Steinitz representation of G+ in hyperbolic space. Every dihedral angle of P is
π/2.
Conversely, a representation of G+ in hyperbolic space as a polyhedron with dihedral
angles π/2 and with all vertices at infinity gives rise to a double circle representation of G
(in the plane or on the sphere, as you wish). Andreev gave a general necessary and sufficient condition for the existence of a representation of a planar graph by a polyhedron in hyperbolic
3-space with prescribed dihedral angles. From this representation, he was able to derive
Theorems 7.2.1 and 7.1.3.
Schramm [214] proved the following very general extension of Theorem 7.2.1, for whose
proof we refer to the paper.
Theorem 7.2.3 (Caging the Egg) For every smooth strictly convex body K in R3 , every
3-connected planar graph can be represented as the skeleton of a polytope in R3 such that all
of its edges touch K.
7.2.2 Conformal transformations
The double circle representation of a planar graph is uniquely determined, once a triangular
unbounded country is chosen and the circles representing the nodes of this triangular country
are fixed. This follows from Lemma ??. A similar assertion is true for double circle representations on the sphere. However, on the sphere there is no difference between faces, and we may
not want to “normalize” by fixing a face. Often it is more useful to apply a circle-preserving
transformation that distributes the circles on the sphere in a “uniform” way. The following
Lemma shows that this is possible with various notions of uniformity.
Lemma 7.2.4 Let F : (B 3 )n → B 3 be a continuous map with the property that whenever
n − 2 of the vectors ui are equal to v ∈ S 2 then vT F (u1 , . . . , un ) ≥ 0. Let (C1 , . . . , Cn ) be a
family of openly disjoint caps on the sphere. Then there is a circle preserving transformation
τ of the sphere such that F (v1 , . . . , vn ) = 0, where vi is the center of τ (Ci ).
Examples of functions F to which this lemma applies are the center of gravity of u1 , . . . , un ,
or the center of gravity of their convex hull, or the center of the inscribed ball of this convex
hull.
Proof. For every interior point x of the unit ball, we define a conformal (circle-preserving)
transformation τx of the unit sphere such that if xk → p ∈ S 2 , then τxk (y) → −p for every
y ∈ S 2 , y ̸= p.
We can define such maps as follows. For x = 0, we define τx = idB . If x ̸= 0, then we take the tangent plane T at x0 (here y 0 = y/|y| denotes the normalization of a nonzero vector y), and project the sphere stereographically onto T ; blow up the plane from center x0 by a factor of 1/(1 − |x|); and project it back stereographically to the
sphere. Let vi (x) denote the center of the cap τx (Ci ). (Warning: this is not the image of
the center of Ci in general! Conformal transformations don’t preserve the centers of circles.)
We want to show that the range of F (v1 (x), . . . , vn (x)) (as a function of x) contains the
origin. Suppose not. For 0 < t < 1 and x ∈ tB, define
ft (x) = tF (v1 (x), . . . , vn (x))0 .
Then ft is a continuous map tB → tB, and so by Brouwer’s Fixed Point Theorem, it has a
fixed point pt , satisfying
pt = tF (v1 (pt ), . . . , vn (pt ))0 .
Clearly |pt | = t. We may select a sequence of numbers tk ∈ (0, 1) such that tk → 1, ptk → q ∈ B and vi (ptk ) → wi ∈ B for every i. Clearly |q| = 1 and |wi | = 1. By the continuity of F , we have F (v1 (ptk ), . . . , vn (ptk )) → F (w1 , . . . , wn ), and hence q = F (w1 , . . . , wn )0 . From
the properties of τx it follows that if Ci does not contain q, then wi = −q. Since at most
two of the discs Ci contain q, at most two of {w1 , . . . , wn } are different from −q, and hence
(−q)T F (w1 , . . . , wn )0 = −qT q ≥ 0 by our assumption about F . This is a contradiction.

7.3 Applications of coin representations

7.3.1 Planar separators
Koebe’s Theorem has several important applications. We start with a simple proof by Miller
and Thurston [185] of the Planar Separator Theorem 2.2.1 of Lipton and Tarjan [152] (we
present the proof with a weaker bound of 3n/4 on the sizes of the components instead of
2n/3; see [223] for an improved analysis of the method).
We need the notion of the “statistical center”, which is important in many other studies
in geometry. Before defining it, we prove a simple lemma.
Lemma 7.3.1 For every set S ⊆ Rd of n points there is a point c ∈ Rd such that every
closed halfspace containing c contains at least n/(d + 1) elements of S.
Proof. Let H be the family of all closed halfspaces that contain more than dn/(d+1) points
of S. The intersection of any d + 1 of these still contains an element of S, so in particular
it is nonempty. Thus by Helly’s Theorem, the intersection of all of them is nonempty. We
claim that any point c ∈ ∩H satisfies the conclusion of the lemma.
If H is any open halfspace containing c, then Rd \ H ∉ H, which means that H contains
at least n/(d + 1) points of S. If H is a closed halfspace containing c, then it is contained
in an open halfspace H ′ that intersects S in exactly the same set, and applying the previous
argument to H ′ we are done.
A point c as in Lemma 7.3.1 is sometimes called a “statistical center” of the set S. To make
this point well-defined, we call the center of gravity of all points c satisfying the conclusion
of Lemma 7.3.1 the statistical center of the set. (Note: points c satisfying the conclusion of
the lemma form a convex set, whose center of gravity is well defined.)
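In the plane (d = 2), the centerpoint property of a candidate point c can be sanity-checked numerically. The following Monte Carlo sketch is our own: random directions only approximate the true minimum over halfplanes, so it is a check, not a certificate.

```python
# Every closed halfplane containing c should contain at least |S|/3 points.
import numpy as np

def min_halfplane_count(S, c, trials=20000, seed=0):
    S = np.asarray(S, dtype=float)
    c = np.asarray(c, dtype=float)
    rng = np.random.default_rng(seed)
    best = len(S)
    for theta in rng.uniform(0.0, 2 * np.pi, size=trials):
        n = np.array([np.cos(theta), np.sin(theta)])
        # the closed halfplane {x : n.(x - c) >= 0} contains c
        best = min(best, int(np.sum(S @ n >= c @ n)))
    return best   # should be >= len(S)/3 if c is a statistical center
```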
Proof of Theorem 2.2.1. Let (Ci : i ∈ V ) be a Koebe representation of G on the unit
sphere, and let ui be the center of Ci on the sphere, and ρi , the spherical radius of Ci . By
Lemma 7.2.4, we may assume that the statistical center of the points ui is the origin.
Take any plane H through 0. Let S denote the set of nodes i for which Ci intersects H,
and let S1 and S2 denote the sets of nodes for which Ci lies on one side and the other of H.
Clearly there is no edge between S1 and S2 , and so the subgraphs G1 and G2 induced by S1 and S2 are disjoint and their union is G \ S. Since 0 is a statistical center of the ui , it follows that |S1 |, |S2 | ≤ 3n/4.
It remains to make sure that S is small. To this end, we choose H at random, and
estimate the expected size of S.
What is the probability that H intersects Ci ? If ρi ≥ π/2, then this probability is 1, but
there is at most one such node, so we can safely ignore it, and suppose that ρi < π/2 for
every i. By symmetry, instead of fixing Ci and choosing H at random, we can fix H and
choose the center of Ci at random. Think of H as the plane of the equator. Then Ci will intersect H if and only if its center is at a latitude at most ρi (North or South). The area
of this belt around the equator is, by elementary geometry, 4π sin ρi , and so the probability
that the center of Ci falls into this belt is at most 2 sin ρi . It follows that the expected number of caps Ci intersected by H is at most ∑_{i∈V} 2 sin ρi .
To get an upper bound on this quantity, we use that the surface area of the cap Ci is 2π(1 − cos ρi ) = 4π sin2 (ρi /2), and since these caps are disjoint, we have

∑_{i∈V} sin2 (ρi /2) < 1.        (7.14)
Using that sin ρi ≤ 2 sin(ρi /2), we get by Cauchy–Schwarz

∑_{i∈V} 2 sin ρi ≤ 2√n ( ∑_{i∈V} sin2 ρi )^{1/2} ≤ 4√n ( ∑_{i∈V} sin2 (ρi /2) )^{1/2} < 4√n.

So the expected size of S is less than 4√n, and so there is at least one choice of H for which |S| < 4√n.
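The random-plane step of this proof is easy to simulate. A sketch, assuming a Koebe representation on the unit sphere (cap centers U and spherical radii rho) has already been computed and normalized as in Lemma 7.2.4; the names are ours.

```python
# A cap of spherical radius rho_i meets a plane through the origin with
# unit normal n exactly when |u_i . n| <= sin(rho_i).
import numpy as np

def random_plane_split(U, rho, rng=np.random.default_rng()):
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)          # uniform random plane through 0
    dist = U @ n                    # signed distances of the cap centers
    s = np.sin(np.asarray(rho))
    S  = [i for i in range(len(rho)) if abs(dist[i]) <= s[i]]   # separator
    S1 = [i for i in range(len(rho)) if dist[i] >  s[i]]
    S2 = [i for i in range(len(rho)) if dist[i] < -s[i]]
    return S1, S, S2
```

Repeating the draw a few times and keeping the smallest separator mirrors the expectation argument above.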
7.3.2 Laplacians of planar graphs
The Planar Separator theorem was first proved by direct graph-theoretic arguments (and
other elegant graph-theoretic proofs are available [16]). But for the following theorem on
the eigenvalue gap of the Laplacian of planar graphs by Spielman and Teng [222] there is no
proof known avoiding Koebe’s theorem.
Theorem 7.3.2 For every connected planar graph G = (V, E) on n nodes and maximum
degree D, the second smallest eigenvalue of LG is at most 8D/n.
Proof. Let (Ci : i ∈ V ) be a Koebe representation of G on the unit sphere, and let ui be the center of Ci , and ρi , the spherical radius of Ci . By Lemma 7.2.4 we may assume that ∑_i ui = 0.
The second smallest eigenvalue of LG is given by

λ2 = min { ∑_{ij∈E} (xi − xj )2 / ∑_{i∈V} xi2 : x ̸= 0, ∑_i xi = 0 }.

Let ui = (ui1 , ui2 , ui3 ); since ∑_i ui = 0, this implies that

∑_{ij∈E} (uik − ujk )2 ≥ λ2 ∑_{i∈V} u2ik

holds for every coordinate k, and summing over k, we get

∑_{ij∈E} |ui − uj |2 ≥ λ2 ∑_{i∈V} |ui |2 = λ2 n.        (7.15)
On the other hand, we have

|ui − uj |2 = 4 sin2 ((ρi + ρj )/2) = 4 ( sin(ρi /2) cos(ρj /2) + sin(ρj /2) cos(ρi /2) )2
           ≤ 4 ( sin(ρi /2) + sin(ρj /2) )2 ≤ 8 sin2 (ρi /2) + 8 sin2 (ρj /2),
and so by (7.14)

∑_{ij∈E} |ui − uj |2 ≤ 8D ∑_{i∈V} sin2 (ρi /2) ≤ 8D.
Comparison with (7.15) proves the theorem.
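A quick numerical sanity check of the bound, on the k × k grid graph (planar, maximum degree D = 4); the code and names are our own illustration.

```python
# The second smallest Laplacian eigenvalue of the grid should be <= 8D/n.
import numpy as np

def grid_laplacian(k):
    n = k * k
    L = np.zeros((n, n))
    for r in range(k):
        for c in range(k):
            i = r * k + c
            for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < k and cc < k:
                    j = rr * k + cc
                    L[i, i] += 1; L[j, j] += 1
                    L[i, j] -= 1; L[j, i] -= 1
    return L

k = 10
lam2 = np.sort(np.linalg.eigvalsh(grid_laplacian(k)))[1]
print(lam2, 8 * 4 / k**2)   # e.g. 0.098 <= 0.32
```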
This theorem says that planar graphs are very bad expanders. The result does not
translate directly to eigenvalues of the adjacency matrix or the transition matrix of the
random walk on G, but for graphs with bounded degree it does imply the following:
Corollary 7.3.3 Let G be a connected planar graph on n nodes with maximum degree D.
Then the second largest eigenvalue of the transition matrix is at least 1 − 8D/n, and the
mixing time of the random walk on G is at least Ω(n/D).
Among other applications, we mention a bound on the cover time of the random walk on
a planar graph by Jonasson and Schramm [127].
7.4 Circle packing and the Riemann Mapping Theorem
Koebe’s Circle Packing Theorem and the Riemann Mapping Theorem in complex analysis
are closely related. More exactly, we consider the following generalization of the Riemann
Mapping Theorem.
Theorem 7.4.1 (The Koebe–Poincaré Uniformization Theorem) Every connected
open domain in the sphere whose complement has a finite number of connected components
is conformally equivalent to a domain obtained from the sphere by removing a finite number
of disjoint disks and points.
The Circle Packing Theorem and the Uniformization Theorem are mutually limiting cases
of each other (Koebe [138], Rodin and Sullivan [207]). The exact proof of this fact has
substantial technical difficulties, but it is not hard to describe the idea.
1. To see that the Uniformization Theorem implies the Circle Packing Theorem, let G
be a planar map and G∗ its dual. We may assume that G and G∗ are 3-connected, and that
G∗ has straight edges (these assumptions are not essential, just convenient). Let ε > 0, and
let U denote the ε-neighborhood of G∗ . By Theorem 7.4.1, there is a conformal map of U
onto a domain D′ ⊆ S 2 which is obtained by removing a finite number of disjoint caps and
points from the sphere (the removed points can be considered as degenerate caps). If ε is
small enough, then these caps are in one-to-one correspondence with the nodes of G. We
normalize using Lemma 7.2.4 and assume that the center of gravity of the cap centers is 0.
Letting ε → 0, we may assume that the cap representing any given node v ∈ V (G)
converges to a cap Cv . One can argue that these caps are non-degenerate, caps representing
different nodes tend to openly disjoint caps, and caps representing adjacent nodes tend to
caps that are touching.
2. In the other direction, let U = S 2 \ K1 \ · · · \ Kn , where K1 , . . . , Kn are disjoint closed
connected sets which don’t separate the sphere. Let ε > 0. It is not hard to construct a
family C(ε) of openly disjoint caps such that the radius of each cap is less than ε and their
tangency graph G is a triangulation of the sphere.
Let Hi denote the subgraph of G consisting of those edges intersecting Ki′ . If ε is small
enough, then the subgraphs Hi are node-disjoint, and each Hi is nonempty except possibly
if Ki is a singleton. It is also easy to see that the subgraphs Hi are connected.
Let us contract each nonempty connected Hi to a single node wi . If Ki is a singleton set
and Hi is empty, we add a new node wi to G in the triangle containing Ki′ , and connect it
to the nodes of this triangle. The spherical map G′ obtained this way can be represented as
the tangency graph of a family of caps D = {Du : u ∈ V (G′ )}. We can normalize so that
the center of gravity of the centers of Dw1 , . . . , Dwn is the origin.
Now let ε → 0. We may assume that each Dwi = Dwi (ε) tends to a cap Dwi (0). Furthermore, we have a map fε that assigns to each node u of Gε the center of the corresponding
cap Du . One can prove (but this is nontrivial) that these maps fε , in the limit as ε → 0,
give a conformal map of U onto S 2 \ Dw1 (0) \ · · · \ Dwn (0).
7.5 And more...

7.5.1 Tangency graphs of general convex domains
Let H be a family of closed convex domains in the plane, such that their interiors are disjoint.
If we represent each of these domains by a node, and connect two of these nodes if the
corresponding domains touch, we get a GH , which we call the tangency graph of the family.
It is easy to see that if no four of the domains touch at the same point, then this graph is
planar.
In what follows, we restrict ourselves to the case when the members of H are all homothetical copies of a centrally symmetric convex domain. It is natural in this case to represent
each domain by its center.
Schramm [213] proved the following deep converse to this construction.
Theorem 7.5.1 For every smooth strictly convex domain D, every planar graph can be
represented as the tangency graph of a family of homothetical copies of D.
In fact, Schramm proves the more general theorem that if we assign a smooth convex
domain Di to each node i of a triangulation G of the plane, then there is a family D =
(Di′ : i ∈ V (G)) of openly disjoint convex domains such that Di′ is positively homothetic to
Di and the tangency graph of D is G.
Figure 7.5: Straight line embedding of a planar graph from touching convex figures.
The smoothness of the domain is a condition that cannot be dropped. For example, K4
cannot be represented by touching squares. A strong result about representation by touching
squares, also due to Schramm, will be stated and proved in Chapter 8.
7.5.2 Other angles
There are extensions by Andreev [19, 20] and Thurston [233] to circles meeting at other
angles.
The double circle representation of G gives a representation of the lozenge graph G♢ by
circles such that adjacent nodes correspond to orthogonal circles.
One of the many extensions of Koebe’s Theorem characterizes triangulations of the plane that have a representation by orthogonal circles: more exactly, circles representing adjacent nodes must intersect at 90◦ , and other pairs at more than 90◦ (i.e., their centers must be farther apart) (Figure 7.6); see [19, 20, 233, 143].
Figure 7.6: Representing a planar graph by orthogonal circles
Such a representation, if it exists, can be projected to a representation by orthogonal
circles on the unit sphere; with a little care, one can do the projection so that each disk
bounded by one of the circles is mapped onto a “cap” which covers less than half of the
sphere. Then each cap has a unique pole: the point in space from which the part of the
sphere you see is exactly the given cap. The key observation is that two circles are orthogonal
if and only if the corresponding poles have inner product 1 (Figure 7.7). This translates a
representation with orthogonal circles into a representation i ↦ ui ∈ R3 such that

uiT uj = 1 if ij ∈ E,   uiT uj < 1 if ij ∉ E and i ̸= j,   uiT ui > 1 for all i.        (7.16)
Figure 7.7: Poles of circles
To formulate the theorem, we need two simple definitions. We have defined separating
cycles in Section 2.1. The cycle C will be called strongly separating, if G \ V (C) has at
least two connected components, and very strongly separating, if G \ V (C) has at least two
connected components, each of which has at least 2 nodes. The following theorem from [143]
is quoted without proof.
Theorem 7.5.2 Let G be a maximal planar graph different from K4 . Then G has a representation by orthogonal circles on the 2-sphere if and only if it has no strongly separating 3- or 4-cycles.
Exercise 7.5.3 Prove part (a) of Lemma ??
Exercise 7.5.4 Prove that in a representation of a 3-connected planar graph by
orthogonal circles on the sphere the circles cover the whole sphere.
Exercise 7.5.5 Prove that every double circle representation of a planar graph
is a rubber band representation with appropriate rubber band strengths.
Exercise 7.5.6 Show by an example that the bound in Lemma 7.3.1 is sharp.
Exercise 7.5.7 Let H be a family of convex domains in the plane with smooth
boundaries and disjoint interiors. Then the tangency graph of H is planar.
Exercise 7.5.8 Let H be a family of homothetical centrally symmetric convex
domains in the plane with smooth boundaries and disjoint interiors. Then the
centers of the bodies give a straight line embedding in the plane of its tangency
graph (Figure 7.5).
Chapter 8
Square tilings
We can also represent planar graphs by squares, rather than circles, in the plane (with some
mild restrictions). There are in fact two quite different ways of doing this: the squares can
correspond to the edges (a classic result of Brooks, Smith, Stone and Tutte), or the squares
can correspond to the nodes (a quite recent result of Schramm).
8.1 Electric current through a rectangle
A beautiful connection between square tilings and harmonic functions was described in the
classic paper of Brooks, Smith, Stone and Tutte [40]. They considered tilings of squares by
smaller squares, and used a physical model of current flows to show that such tilings can
be obtained from any connected planar graph. Their ultimate goal was to construct tilings
of a square with squares whose edge-lengths are all different; this will not be our concern;
we allow squares that are equal, and the domain being tiled can be a rectangle, not
necessarily a square.
Consider a tiling T of a rectangle R with a finite number of squares, whose sides are parallel
to the coordinate axes. We can associate a planar map with this tiling as follows. Represent
any maximal horizontal segment composed of edges of the squares by a single node (say,
positioned at the midpoint of the segment). Each square “connects” two horizontal segments,
and we can represent it by an edge connecting the two corresponding nodes, directed top-down. We get a directed graph GT (Figure 8.1), with a single source s (representing the upper edge of the rectangle) and a single sink t (representing the lower edge). It is easy to see that the graph GT is planar.
A little attention must be paid to points where four squares meet. Suppose that squares
A, B, C, D share a corner p, where A is the upper left, and B, C, D follow clockwise. In this
case, we may consider the lower edges of A and B to belong to a single horizontal segment,
or to belong to different horizontal segments. In the latter case, we may or may not imagine
that there is an infinitesimally small square sitting at p. What this means is that we have to
Figure 8.1: The Brooks–Smith–Stone–Tutte construction
declare whether the four edges of GT corresponding to A, B, C and D form two pairs of parallel
edges, an empty quadrilateral, or a quadrilateral with a horizontal diagonal. We can orient
this horizontal edge arbitrarily (Figure 8.2).
Figure 8.2: Possible declarations about four squares meeting at a point
If we assign the edge length of each square to the corresponding edge, we get a flow f
from s to t: if a node v represents a segment I, then the total flow into v is the sum of edge lengths of the squares attached to I from the top, while the total flow out of v is the sum of edge lengths of the squares attached to I from the bottom. Both of these sums are equal to the length
of I.
Let h(v) denote the distance of node v from the upper edge of R. Since the edge-length
of a square is also the difference between the y-coordinates of its upper and lower edges, the
function h is harmonic:
h(i) = (1/di ) ∑_{j∈N (i)} h(j)
for every node different from s and t (Figure 8.1).
It is not hard to see that this construction can be reversed:
Theorem 8.1.1 For every connected planar map G with two specified nodes s and t on the
unbounded face, there is a unique tiling T of a rectangle such that G ≅ GT .
Figure 8.3: A planar graph and the square tiling generated from it.
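The electrical part of this correspondence is a single linear solve: fix h at s and t and require harmonicity elsewhere. A sketch with our own conventions; it computes h and the square side lengths only, while placing the squares horizontally takes extra planar bookkeeping not shown here.

```python
# Solve L h = 0 at internal nodes with h(s) = 0 and h(t) = 1; the side
# length of the square of edge uv is then |h(u) - h(v)| (up to scaling).
import numpy as np

def harmonic_levels(n, edges, s, t):
    A = np.zeros((n, n))
    b = np.zeros(n)
    for u, v in edges:
        for x, y in ((u, v), (v, u)):
            if x != s and x != t:
                A[x, x] += 1   # degree term
                A[x, y] -= 1   # neighbor term
    A[s, s] = 1.0; b[s] = 0.0  # boundary value at the source
    A[t, t] = 1.0; b[t] = 1.0  # boundary value at the sink
    return np.linalg.solve(A, b)
```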
8.2 Tangency graphs of square tilings
Let R be a rectangle in the plane, and consider a tiling of R by squares. Let us add four
further squares attached to each edge of R from the outside, sharing the edge with R. We
want to look at the tangency graph of this family of squares, which we briefly call the tangency graph of the tiling of R.
Since the squares do not have a smooth boundary, the assertion of Exercise 7.5.7 does
not apply, and the tangency graph of the squares may not be planar. Let us try to draw the
tangency graph in the plane by representing each square by its center and connecting the
centers of touching squares by a straight line segment. Similarly as in the preceding section,
we get into trouble when four squares share a vertex, in which case two edges will cross at this
vertex. In this case we specify arbitrarily one diametrically opposite pair as “infinitesimally
overlapping”, and connect the centers of these two squares but not the other two centers. We
call this a resolved tangency graph.
Every resolved tangency graph is planar, and it is easy to see that it has exactly one
country that is a quadrilateral (namely, the unbounded country), and its other countries are
triangles; briefly, it is a triangulation of a quadrilateral (Figure 8.4).
Under some mild conditions, this fact has a converse, due to Schramm [215].
Theorem 8.2.1 Every planar map in which the unbounded country is a quadrilateral and all other countries are triangles, and which is not separated by a 3-cycle or a 4-cycle, can be represented as a resolved tangency graph of a square tiling of a rectangle.
Figure 8.4: The resolved tangency graph of a tiling of a rectangle by squares. The
numbers indicate the edge length of the squares.
Schramm proves a more general theorem, in which separating cycles are allowed; the price to pay is that he must allow degenerate squares with edge-length 0. It is easy to see that a separating triangle forces everything inside it to degenerate in this sense, and so we don’t lose anything by excluding these. Separating 4-cycles may or may not force degeneracy (Figure 8.5), and it does not seem easy to tell when they do.
Figure 8.5: A separating 4-cycle that causes degeneracy and one that does not.
Proof. Let G be the planar map with unbounded face abcd. We consider the a-c node-path
and a-c node-cut polyhedra of the graph G \ {b, d} (see Example 15.5.5).
It will be convenient to set V ′ = V \ {a, b, c, d}. Recall that V ′ (P ) denotes the set of inner
nodes of the path P . For a path P and two nodes u and v on the path, let us denote by
P (u, v) the subpath of P consisting of all nodes strictly between u and v. Later on we will
also need the notation P [u, v) for the subpath formed by the nodes between u and v, with u
included but not v, and P [u, v] for the subpath with both included.
The first important fact to notice is the following.
Claim 1 The a-c node-cut polyhedron of G\{b, d} is the same as the b-d node-path polyhedron
of G \ {a, c}.
A special case of this claim is the fact that in every game of Hex, one or the other player
wins, but not both.
Let P be the set of a–c paths in G \ {b, d} and Q the set of b–d paths in G \ {a, c}. From
the planarity of G it is immediate that V ′ (Q) separates a and c in G \ {b, d} for any Q ∈ Q,
and hence it contains an a-c node-cut. Conversely, using that all faces of G except abcd are
triangles, it is not hard to see that every a-c node-cut in G \ {b, d} contains V ′ (Q) for some
Q ∈ Q.
The a-c cut polyhedron is described by the linear constraints

xi ≥ 0        (i ∈ V ′ )        (8.1)
x(V (P )) ≥ 1        (P ∈ P).        (8.2)
Consider the solution x of (8.1)–(8.2) minimizing the objective function ∑_i x2i , and let R2 be the minimum value. By Theorem 15.5.9, y = (1/R2 )x minimizes the same objective function over K bl . Let us rescale these vectors to get z = (1/R)x = Ry. Then we have

z(V ′ (P )) ≥ 1/R        (P ∈ P)        (8.3)
z(V ′ (Q)) ≥ R        (Q ∈ Q)        (8.4)
∑_{i∈V ′} zi2 = 1.        (8.5)
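In computational terms, this optimization step can be tried out on a small graph by enumerating the a–c paths explicitly and handing the program to a generic solver. A toy sketch with our own names (path enumeration is exponential in general):

```python
# Minimize sum_i x_i^2 subject to x >= 0 and x(V'(P)) >= 1 for each a-c path P.
import numpy as np
from scipy.optimize import minimize

def solve_path_qp(paths_inner, num_inner):
    # paths_inner: one list of inner-node indices per a-c path
    cons = [{"type": "ineq",
             "fun": (lambda x, S=tuple(P): sum(x[j] for j in S) - 1.0)}
            for P in paths_inner]
    res = minimize(lambda x: float(np.dot(x, x)), np.ones(num_inner),
                   bounds=[(0.0, None)] * num_inner, constraints=cons)
    return res.x, float(np.dot(res.x, res.x))    # x and R^2

# x, R2 = solve_path_qp(paths_inner, num_inner); R = R2 ** 0.5; z = x / R
```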
We know that y is in the polyhedron defined by (8.1)–(8.2), and so it can be written as a
convex combination of vertices, which are indicator vectors of sets V ′ (P ), where P is an a–c
path. Let 1P denote the indicator vector of V ′ (P ); then z can be written as

z = ∑_{P ∈P} λP 1P ,        (8.6)
where λP ≥ 0 and ∑_P λP = R. Similarly, we have a decomposition

z = ∑_{Q∈Q} µQ 1Q ,        (8.7)

where µQ ≥ 0 and ∑_Q µQ = 1/R. Let P ′ = {P ∈ P : λP > 0}, and define Q′ analogously.
Trivially a node u ∈ V ′ has zu > 0 if and only if it is contained in one of the paths in P ′
(equivalently, in one of the paths in Q′ ).
Claim 2 |V (P ) ∩ V (Q)| = 1 for all P ∈ P ′ , Q ∈ Q′ .
It is trivial from the topology of G that |V (P ) ∩ V (Q)| ≥ 1. On the other hand, (8.5) implies that

1 = ∑_{i∈V ′} zi2 = ( ∑_P λP 1P )T ( ∑_Q µQ 1Q ) = ∑_{P,Q} λP µQ |V (P ) ∩ V (Q)|
  ≥ ∑_{P,Q} λP µQ = ( ∑_P λP )( ∑_Q µQ ) = 1.

We must have equality here, which proves the Claim.
Claim 3 For every path P ∈ P ′ , equality holds in (8.3), and for every path Q ∈ Q′ , equality
holds in (8.4).
Indeed, if (say) Q ∈ Q′ , then

z(V ′ (Q)) = zT 1Q = ∑_{P ∈P ′} λP 1PT 1Q = ∑_{P ∈P ′} λP |V ′ (P ) ∩ V ′ (Q)| = ∑_{P ∈P ′} λP = R.

One consequence of this claim is that the paths in P ′ and Q′ are chordless, since if (say) P ∈ P ′ had a chord uv, then bypassing the part of P between u and v would decrease its length, which would contradict (8.3).
Consider a node u ∈ V ′ . Think of G embedded in the plane so that the quadrilateral abcd
is the unbounded country, with a on top, b at the left, c on the bottom and d at right. Every
path in P goes either “left” of u (i.e., separates u from b), or through u, or “right” of u. Let
Pu− , Pu and Pu+ denote these three sets. Similarly, we can partition Q = Qu− ∪ Qu ∪ Qu+ according to whether a path in Q goes “above” u (separating u from a), or through u, or “below” u. Clearly λ(Pu ) = µ(Qu ) = zu . If P ∈ Pu is any path through u, then it is easy to tell which paths in Q go above u: exactly those whose unique common node with P is between a and u.
Claim 4 Every node u ∈ V ′ satisfies zu > 0.
Suppose that there are nodes with zu = 0, and let H be a connected component of the
subgraph induced by these nodes. Since no path in P ′ goes through any node in H, every
path in P ′ goes either left of all nodes in H or right of all nodes in H. Every neighbor v
of H has zv > 0, and hence it is contained both on a path in P ′ and a path in Q′ . By our
assumption that there are no separating 3- and 4-cycles, H has at least 5 neighbors, so there
must be two neighbors v and v ′ of H that are both contained (say) in paths in P ′ to the left
of H and in paths in Q above H. Let P ∈ P ′ go through v and left of H, and let Q ∈ Q′ go
through v and above H. Let P ′ and Q′ be defined analogously.
The paths P and Q intersect at v and at no other node, by Claim 2. Clearly H must be in the (open) region T bounded by P [v, c] ∪ Q[v, d] ∪ {cd}, and since v ′ is a neighbor of H, it must be in T or on its boundary. If v ′ ∈ V ′ (P [v, c]), then Q′ goes below v, hence Q′ goes below the neighbor of v in H, which contradicts its definition. We get a similar contradiction if v ′ ∈ V ′ (Q(v, d)). Finally, if v ′ ∈ T , then both P ′ and Q′ must enter T (when starting at a and b, respectively). Since P ′ is to the left of v, its unique intersection with Q must be on Q[a, v], and hence P ′ must enter T crossing P [v, c]. Similarly Q′ must enter T crossing Q[v, d]. But then P ′ and Q′ must have an intersection point before entering T , which together with v ′ violates Claim 2. This completes the proof of Claim 4.
Set Xi := λ(Pi− ) and Yi := µ(Qi− ). From the observation above we get another expression
for Xi and Yi : If Q ∈ Q and P ∈ P go through i, then
Xi = z(Q(b, i))    and    Yi = z(P (a, i)).
Claim 5 Let ij ∈ E(G).
(i) If ij lies on one of the paths in P ′ , but on no paths in Q′ , and (say) i is closer to a
than j, then Yj = Yi + zi , and −zj < Xj − Xi < zi .
(ii) If ij lies on one of the paths in Q′ , but on no paths in P ′ , and (say) i is closer to b
than j, then Xj = Xi + zi , but −zj < Yj − Yi < zi .
(iii) If no path in P ′ ∪ Q′ goes through ij, and (say) Xi ≤ Xj , then Xj = Xi + zi and
Yi = Yj + zj .
Claim 2 implies that the fourth possibility, namely that an edge can be contained in a
P ′ -path as well as in a Q′ -path, cannot occur.
To prove (i), assume that ij ∈ E(P ) for some P ∈ P ′ . Then Qj− consists of those paths in Q′ that intersect P either at i or above it; hence Qj− = Qi− ∪ Qi , showing that µ(Qj− ) = µ(Qi− ) + zi . It is also clear that Pj− ⊂ Pi− ∪ Pi (no path can go to the left of j but to the right of i; proper containment follows because the path P is not contained in the set on the right side). Hence λ(Pj− ) < λ(Pi− ) + λ(Pi ). The other inequality in (i) follows similarly. The proof of (ii) is analogous.
Finally, to prove (iii), let P ∈ Pi , and suppose that P ∈ P ′ goes left of j. We claim that every path in Pj goes right of i. Suppose that there is a path P ′ ∈ Pj ∩ Pi− . Then clearly P and P ′ must intersect at a node w such that i ∈ P [a, w] and j ∈ P ′ [w, c] (perhaps with the roles of a and c interchanged). Then P [a, i] ∪ {ij} ∪ P ′ [j, c] and P ′ [a, w] ∪ P [w, c] are two a–c paths with

z(P [a, i] ∪ {ij} ∪ P ′ [j, c]) + z(P ′ [a, w] ∪ P [w, c]) < z(P ) + z(P ′ ) = 2/R,

which contradicts inequality (8.3).

So it follows that Pi− ∪ Pi = Pj− , and hence λ(Pj− ) = λ(Pi− ) + zi . It follows similarly that µ(Qi− ) = µ(Qj− ) + zj . This completes the proof of Claim 5.
We need a similar assertion concerning non-adjacent pairs.
Claim 6 Let i, j ∈ V , ij ∉ E(G), and suppose that (say) Xi ≤ Xj and Yi ≤ Yj . Then either Xj > Xi + zi , or Yj > Yi + zi , or Xj = Xi + zi and Yj = Yi + zi .

If there is a path Q ∈ Q going through both i and j, then Xi ≤ Xj implies that i is closer to b than j. Thus

Xj = z(Q(b, j)) = z(Q(b, i)) + z(Q[i, j)) > z(Q(b, i)) + zi = Xi + zi .

So we may assume that there is no path of Q, and similarly no path of P, through i and j. Similarly as in the proof of Claim 5(iii), we get that Pi− ∪ Pi = Pj− (or the other way around), which implies that λ(Pj− ) = λ(Pi− ) + zi . It follows similarly that µ(Qj− ) = µ(Qi− ) + zi .
We can tell now what will be the squares representing G: every node i ∈ V ′ will be
represented by a square Si with lower left corner at (Xi , Yi ) and side length zi . We represent
node a by the square Sa of side length R attached to the top of the rectangle R, which has one vertex at (0, 0) and the opposite vertex at (R, 1/R). We represent b, c and d similarly by
squares attached to the other edges of R.
After all the preparations above, it is quite easy to verify that the squares Si defined above give a representation of G. Claims 5 and 6 imply that the squares
Si (i ∈ V ) have no interior point in common. The three alternatives (i)–(iii) in Claim 5 imply
that if ij ∈ E, then Si and Sj do have a boundary point in common. Thus we can consider
G as drawn in the plane so that node i is at the center pi of the square Si , and every edge
ij is a straight segment connecting pi and pj .
This gives a planar embedding of G. Indeed, we know from Claim 5 that every edge ij
is covered by the squares Si and Sj . This implies that edges do not cross, except possibly
in the degenerate case when four squares Si , Sj , Sk , Sl share a vertex (in this clockwise order
around the vertex, starting at the Northwest), and ik, jl ∈ E. Since the squares Si and Sj
are touching along their vertical edges, Claim 6 implies that ij ∈ E. Similarly, jk, kl, li ∈ E,
and hence i, j, k, l form a complete 4-graph. But this is impossible in a triangulation of a
quadrilateral that has no separating triangles.
Next, we argue that the squares Si (i ∈ V \ {a, b, c, d}) tile the rectangle R = [0, R] ×
[0, 1/R]. It is easy to see that all these squares are contained in the rectangle R. Every point
of R is contained in a finite country F of G, which is a triangle uvw. The squares Su , Sv and
Sw are centered at the vertices, and each edge is covered by the two squares centered at its
endpoints. Elementary geometry implies that these three squares cover the whole triangle.
Finally, we show that an appropriately resolved tangency graph of the squares Si is equal
to G. By the above, it contains G (where for edges of type (iii), the 4-corner is resolved so
as to get the edge of G). Since G is a triangulation, the containment cannot be proper, so G
is the whole tangency graph.
Exercise 8.2.2 Prove that if G is a resolved tangency graph of a square tiling of
a rectangle, then every triangle in G is a face.
Exercise 8.2.3 Construct a resolved tangency graph of a square tiling of a rectangle, which contains a quadrilateral with a further node in its interior.
Chapter 9
Discrete holomorphic forms
9.1 Circulations and homology
Let S be a closed orientable compact surface, and consider a map on S, i.e., a graph G =
(V, E) embedded in S so that each country is an open disc. We can describe the map as a
triple G = (V, E, V ∗ ), where V is the set of nodes, E is the set of edges, and V ∗ is the set of
countries of G. We fix a reference orientation of G; then each edge e ∈ E has a tail t(e) ∈ V ,
a head h(e) ∈ V , a right shore r(e) ∈ V ∗ , and a left shore l(e) ∈ V ∗ .
The embedding of G defines a dual map G∗ . Combinatorially, we can think of G∗ as the
triple (V ∗ , E, V ), where the meaning of “node” and “face”, “head” and “right shore”, and
“tail” and “left shore” is interchanged. Note, however, that the dual of the dual is not the
original oriented graph, but the original graph with the edges reversed.
Let G be a finite graph with a reference orientation. For each node v, let δv ∈ RE denote the coboundary of v:
$$(\delta v)_e = \begin{cases} 1, & \text{if } h(e) = v,\\ -1, & \text{if } t(e) = v,\\ 0, & \text{otherwise.} \end{cases}$$
Thus |δv|² = dv is the degree of v.
For every country F ∈ V∗, we denote by ∂F ∈ RE the boundary of F:
$$(\partial F)_e = \begin{cases} 1, & \text{if } r(e) = F,\\ -1, & \text{if } l(e) = F,\\ 0, & \text{otherwise.} \end{cases}$$
Then dF = |∂F|² is the length of the cycle bounding F.
A vector φ ∈ RE is a circulation if
$$\varphi \cdot \delta v = \sum_{e:\,h(e)=v} \varphi(e) - \sum_{e:\,t(e)=v} \varphi(e) = 0 \qquad (\forall v \in V).$$
Each vector ∂F is a circulation; circulations that are linear combinations of vectors ∂F
are called null-homologous. Two circulations φ and φ′ are called homologous if φ − φ′ is
null-homologous.
A vector φ ∈ RE is rotation-free if for every country F ∈ V ∗ , we have
φ · ∂F = 0.
This is equivalent to saying that φ is a circulation on the dual map G∗ .
Rotation-free circulations can be considered as discrete holomorphic 1-forms and they are
related to analytic functions. These functions were introduced for the case of the square grid
a long time ago by Ferrand and Duffin [85, 64, 65]. For the case of a general planar graph,
the notion is implicit in [40]. For a detailed treatment see [182].
To explain the connection with complex analytic functions, let φ be a rotation-free circulation on a graph G embedded in a surface. Consider a planar piece of the surface. Then on
the set F ′ of countries contained in this planar piece, we have a function σ : F ′ → R such
that ∂σ = φ, i.e., φ(e) = σ(r(e)) − σ(l(e)) for every edge e. Similarly, we have a function
π : V ′ → R (where V ′ is the set of nodes in this planar piece), such that δπ = φ, i.e.,
φ(e) = π(t(e)) − π(h(e)) for every edge e. We can think of π and σ as the real and imaginary
parts of a (discrete) analytic function, which satisfy
$$\delta\pi = \partial\sigma = \varphi, \tag{9.1}$$
which can be thought of as a discrete analogue of the Cauchy–Riemann equations.
Thus we have two orthogonal linear subspaces: A ⊆ RE generated by the vectors δv (v ∈ V) and B ⊆ RE generated by the vectors ∂F (F ∈ V∗). Vectors in B are the null-homologous circulations. The orthogonal complement A⊥ is the space of all circulations, and B⊥ is the
space of circulations on the dual graph. The intersection C = A⊥ ∩ B ⊥ is the space of
rotation-free circulations. So RE = A ⊕ B ⊕ C. From this picture we conclude the following.
Lemma 9.1.1 Every circulation is homologous to a unique rotation-free circulation.
It also follows that C is isomorphic to the first homology group of S (over the reals), and
hence we get the following:
Theorem 9.1.2 The dimension of the space C of rotation-free circulations is 2g.
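A minimal numerical illustration of the theorem (a sketch assuming Python with numpy; the map below is the k × k grid on the torus, so g = 1): building the vectors δv and ∂F as matrix columns, the space C = A⊥ ∩ B⊥ has dimension E − rank(M) − rank(N) = 2, matching 2g.

```python
import numpy as np

k = 3
vidx = {(i, j): i * k + j for i in range(k) for j in range(k)}

# Edges of the k-by-k torus grid: (tail, head, right shore, left shore),
# a country being named by its lower-left vertex (the shore labels are only
# illustrative bookkeeping; any consistent choice gives the same ranks).
edges = []
for i in range(k):
    for j in range(k):
        edges.append(((i, j), ((i + 1) % k, j), (i, (j - 1) % k), (i, j)))
        edges.append(((i, j), (i, (j + 1) % k), (i, j), ((i - 1) % k, j)))

E = len(edges)
M = np.zeros((E, k * k))   # columns span A: the coboundaries delta(v)
N = np.zeros((E, k * k))   # columns span B: the boundaries partial(F)
for e, (t, h, r, l) in enumerate(edges):
    M[e, vidx[h]] += 1; M[e, vidx[t]] -= 1
    N[e, vidx[r]] += 1; N[e, vidx[l]] -= 1

# Since R^E decomposes orthogonally as A + B + C,
# dim C = E - rank(M) - rank(N); for the torus this is 2 = 2g.
print(E - np.linalg.matrix_rank(M) - np.linalg.matrix_rank(N))   # 2
```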
9.2 Discrete holomorphic forms from harmonic functions
We can use harmonic functions to give a more explicit description of rotation-free circulations
in a special case. For any edge e of G, let ηe be the orthogonal projection of 1e onto C.
Lemma 9.2.1 Let a, b ∈ E be two edges of G. Then (ηa)b is given by
$$(\pi_b)_{h(a)} - (\pi_b)_{t(a)} + (\pi_{b^*})_{r(a)} - (\pi_{b^*})_{l(a)} + 1$$
if a = b, and by
$$(\pi_b)_{h(a)} - (\pi_b)_{t(a)} + (\pi_{b^*})_{r(a)} - (\pi_{b^*})_{l(a)}$$
if a ≠ b.
Proof. Let x1, x2 and x3 be the projections of 1b on the linear subspaces A, B and C, respectively. The vector x1 can be expressed as a linear combination of the vectors δv (v ∈ V), which means that there is a vector y ∈ RV so that x1 = My. Similarly, we can write x2 = Nz. Together with x = x3, these vectors satisfy the following system of linear equations:
$$\begin{cases} x + My + Nz = 1_b,\\ M^Tx = 0,\\ N^Tx = 0. \end{cases} \tag{9.2}$$
Multiplying the first m equations by the matrix MT, and using the second equation and the fact that MTN = 0, we get
$$M^TMy = M^T1_b, \tag{9.3}$$
and similarly,
$$N^TNz = N^T1_b. \tag{9.4}$$
Here MTM is the Laplacian of G and NTN is the Laplacian of G∗, and so (9.3) implies that y = πb + c1 for some scalar c. Similarly, z = πb∗ + c′1 for some scalar c′. Thus
$$x = 1_b - M(\pi_b + c\mathbf{1}) - N(\pi_{b^*} + c'\mathbf{1}) = 1_b - M\pi_b - N\pi_{b^*},$$
which is just the formula in the lemma, written in matrix form.
The case a = b of the previous formula has the following formulation:
Corollary 9.2.2 For an edge e = st of a map G on any surface, let R(e) = R(s, t) denote the effective resistance between the endpoints of e, and let R∗(e) denote the effective resistance of the dual map between the endpoints of the edge e∗ dual to e. Then
$$(\eta_e)_e = 1 - R(e) - R^*(e).$$
As a corollary of the corollary, we get
Corollary 9.2.3 For a map on any surface and any edge of it, R(e)+R∗ (e) ≤ 1. For planar
maps, equality holds.
Proof. We have
$$(\eta_e)_e = \eta_e \cdot 1_e = |\eta_e|^2,$$
since ηe is the orthogonal projection of 1e. Hence (ηe)e ≥ 0, and the inequality follows by Corollary 9.2.2. In the case of a planar map, g = 0, so by Theorem 9.1.2 the space C is the zero subspace, and so (ηe)e = 0.
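A quick numerical check of the planar case (a sketch assuming numpy; the map is the triangle K3, whose dual consists of two countries joined by three parallel edges): effective resistances computed from Laplacian pseudoinverses satisfy R(e) + R∗(e) = 1.

```python
import numpy as np

def eff_resistance(L, s, t):
    # R(s,t) = (e_s - e_t)^T L^+ (e_s - e_t), with L the graph Laplacian
    x = np.zeros(L.shape[0]); x[s], x[t] = 1.0, -1.0
    return x @ np.linalg.pinv(L) @ x

L_primal = 2 * np.eye(3) - (np.ones((3, 3)) - np.eye(3))  # Laplacian of K3
L_dual = np.array([[3.0, -3.0], [-3.0, 3.0]])  # two countries, three edges

R = eff_resistance(L_primal, 0, 1)   # 2/3
Rd = eff_resistance(L_dual, 0, 1)    # 1/3
print(R + Rd)                        # 1.0, as predicted for planar maps
```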
9.3 Operations
9.3.1 Integration
Let f and g be two functions on the nodes of a discrete weighted map in the plane. Integration is easiest to define along a path P = (v0, v1, . . . , vk) in the lozenge graph G♢ (this has the advantage that it is symmetric with respect to G and G∗). We define
$$\int_P f\,dg = \sum_{i=0}^{k-1} \frac{1}{2}\bigl(f(v_{i+1}) + f(v_i)\bigr)\bigl(g(v_{i+1}) - g(v_i)\bigr).$$
The nice fact about this integral is that for analytic functions it is independent of the path P and depends on the endpoints only. More precisely, let P and P′ be two paths on G♢ with the same beginning node and endnode. Then
$$\int_P f\,dg = \int_{P'} f\,dg. \tag{9.5}$$
This is equivalent to saying that
$$\int_P f\,dg = 0 \tag{9.6}$$
if P is a closed walk. It suffices to verify this for the boundary of a country of G♢, which takes only a straightforward computation. It follows that we can write
$$\int_u^v f\,dg$$
as long as the homotopy type of the path from u to v is determined (or understood).
Similarly, it is easy to check the rule of integration by parts: if P is a path connecting u, v ∈ V ∪ V∗, then
$$\int_P f\,dg = f(v)g(v) - f(u)g(u) - \int_P g\,df. \tag{9.7}$$
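Both the integral and the integration-by-parts rule are easy to transcribe directly (a sketch in plain Python; the node labels and function values below are made up for illustration). Note that (9.7) holds for arbitrary f and g, since the summands telescope:

```python
def discrete_integral(f, g, P):
    """Integral of f dg along the path P = (v0, ..., vk) in the lozenge graph:
    sum of (f(v_{i+1}) + f(v_i))/2 * (g(v_{i+1}) - g(v_i))."""
    return sum((f[P[i + 1]] + f[P[i]]) / 2 * (g[P[i + 1]] - g[P[i]])
               for i in range(len(P) - 1))

# Integration by parts (9.7) on an arbitrary path with arbitrary f, g:
P = ['u', 'a', 'b', 'v']
f = {'u': 1.0, 'a': 2.0, 'b': -1.0, 'v': 3.0}
g = {'u': 0.5, 'a': 1.5, 'b': 2.0, 'v': -2.0}
lhs = discrete_integral(f, g, P)
rhs = f['v'] * g['v'] - f['u'] * g['u'] - discrete_integral(g, f, P)
print(abs(lhs - rhs) < 1e-12)   # True
```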
Let P be a closed path in G♢ that bounds a disk D. Let f be an analytic function and g an arbitrary function. Define ĝ(e) = g(he) − g(te) − i(g(le) − g(re)) (the "analyticity defect" of g on edge e). Then it is not hard to verify the following generalization of (9.6):
$$\int_P f\,dg = \sum_{e\subset D} \bigl(f(h_e) - f(t_e)\bigr)\,\hat g(e). \tag{9.8}$$
This can be viewed as a discrete version of the Residue Theorem. For further versions, see
[182].
9.3.2 Critical analytic functions
These have been the good news. Now the bad part: for a fixed starting node u, the function
$$F(v) = \int_u^v f\,dg$$
is uniquely determined, but it is not analytic in general. In fact, a simple computation gives the analyticity defect of the edge e:
$$\hat F(e) = \frac{F(h_e) - F(t_e)}{\ell_e} - i\,\frac{F(l_e) - F(r_e)}{\ell_{e^*}} = i\,\frac{f(h_e) - f(t_e)}{\ell_e}\Bigl[g(t_e) + g(h_e) - g(r_e) - g(l_e)\Bigr]. \tag{9.9}$$
So it would be nice to have an analytic function g such that the factor in brackets in (9.9) is zero for every edge:
$$g(t_e) + g(h_e) = g(r_e) + g(l_e). \tag{9.10}$$
Let us call such an analytic function critical. What we found above is that $\int_u^v f\,dg$ is an analytic function of v for every analytic function f if and only if g is critical.
This notion was introduced in a somewhat different setting by Duffin [65] under the name
of rhombic lattice. Mercat [182] defined critical maps: these are maps which admit a critical
analytic function.
Geometrically, this condition means the following. Consider the critical function g as a mapping of G ∪ G∗ into the complex plane C. This defines embeddings of G, G∗ and G♢ in the plane with the following (equivalent) properties.
(a) The countries of G♢ are rhomboids.
(b) Every edge of G♢ has the same length.
(c) Every country of G is inscribed in a unit circle.
(d) Every country of G∗ is inscribed in a unit circle.
If you want to construct such maps, you can start with a planar map whose countries are
rhombi; this is bipartite, and you can define a critical graph on its white nodes by adding
one diagonal to each rhombus and deleting the original edges (see Figure 9.1).
Figure 9.1: A graph with rhombic countries, and the critical graph constructed from
it.
Consider any country F0 of the lozenge graph G♢ , and a country F1 incident with it.
This is a quadrilateral, so there is a well-defined country F2 so that F0 and F2 are attached
to F1 along opposite edges. Repeating this, we get a sequence of countries (F0 , F1 , F2 . . . ).
Using the country attached to F0 on the opposite side to F1 , we can extend this to a two-way
infinite sequence (. . . , F−1 , F0 , F1 , . . . ). We call such a sequence a track (see Figure 9.2).
Figure 9.2: A track in a rhombic graph.
Criticality can be expressed in terms of holomorphic forms as well. Let φ be a (complex-valued) holomorphic form on a weighted map G. We say that φ is critical if the following condition holds: if e = xy and f = yz are two edges of G bounding a corner at y, directed (say) so that the corner is on their left, then
$$\ell_e\varphi(e) + \ell_f\varphi(f) = \ell_{e^*}\varphi(e^*) - \ell_{f^*}\varphi(f^*). \tag{9.11}$$
Note that both f and f∗ are directed into hf, which explains the negative sign on the right hand side. To digest this condition, consider a plane piece of the map and a primitive function g of φ. Then (9.11) means that
$$g(y') - g(y) = g(q) - g(q'),$$
which we can rewrite in the following form:
g(x) + g(y) − g(p) − g(q) = g(x) + g(y ′ ) − g(p) − g(q ′ ).
This means that the “deficiency” g(he ) + g(te ) − g(le ) − g(re ) is the same for any two adjacent
edges. Since the graph is connected, it follows that this value is the same for every edge.
Since we are free to add a constant to the value of g at every node in V ∗ (say), we can choose
the primitive function g so that g is critical.
Whether or not a weighted map in the plane has a critical holomorphic form depends on
the weighting. Which maps can be weighted this way? Kenyon and Schlenker [135] answered
this question.
Theorem 9.3.1 A planar map has a rhomboidal embedding in the plane if and only if every
track consists of different countries and any two tracks have at most one country in common.
9.3.3 Polynomials, exponentials and approximation
Once we can integrate, we can define polynomials. More exactly, let G be a map in the plane, and let us select any node to be called 0. Let Z denote a critical analytic function on G such that Z(0) = 0. Then we have
$$\int_0^x 1\,dZ = Z(x).$$
Now we can define higher powers of Z by repeated integration:
$$Z^{:n:}(x) = n\int_0^x Z^{:n-1:}\,dZ.$$
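As a rough illustration (a hedged sketch in plain Python, in the same style as the integration sketch above; it assumes Z is a critical analytic function given as a dictionary of complex values on the nodes of one chosen path from 0): iterating the discrete integral along the path yields the values Z:n: at its nodes, and criticality of Z is what makes these values independent of the path chosen.

```python
def discrete_powers(Z, P, nmax):
    """Values of Z^{:n:} (n = 0..nmax) at the nodes of a path P starting at the
    origin, via the repeated integration Z^{:n:} = n * integral of Z^{:n-1:} dZ."""
    powers = {0: {v: 1.0 for v in P}}
    for n in range(1, nmax + 1):
        prev, vals, acc = powers[n - 1], {P[0]: 0.0}, 0.0
        for i in range(len(P) - 1):
            u, v = P[i], P[i + 1]
            acc += n * (prev[v] + prev[u]) / 2 * (Z[v] - Z[u])
            vals[v] = acc
        powers[n] = vals
    return powers
```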
We can define a discrete polynomial of degree n as any linear combination of 1, Z, Z:2:, . . . , Z:n:. The powers of Z of course depend on the choice of the origin, and the formulas describing how they are transformed by shifting the origin are more complicated than in the classical case. However, the space of polynomials of a given degree is invariant under shifting the origin [183].
Further, we can define the exponential function as a discrete analytic function Exp(x) on V ∪ V∗ satisfying
$$d\,\mathrm{Exp}(x) = \mathrm{Exp}(x)\,dZ.$$
More generally, it is worthwhile to define a 2-variable function Exp(λ, x) as the solution of the difference equation
$$d\,\mathrm{Exp}(\lambda, x) = \lambda\,\mathrm{Exp}(\lambda, x)\,dZ.$$
It can be shown that there is a unique such function, and there are various more explicit formulas, including
$$\mathrm{Exp}(\lambda, x) = \sum_{n=0}^{\infty} \frac{\lambda^n Z^{:n:}}{n!}$$
(at least as long as the series on the right hand side is absolutely convergent).
Mercat [183, 184] uses these tools to show that exponentials form a basis for all discrete
analytic functions, and to generalize results of Duffin [64] about approximability of analytic
functions by discrete analytic functions.
9.4 Nondegeneracy properties of rotation-free circulations
We state and prove two key properties of rotation-free circulations: one, that the projection of
a basis vector to the space of rotation-free circulations is non-zero, and two, that rotation-free
circulations are spread out essentially over the whole graph in the sense that every connected
piece of the graph where a non-zero rotation-free circulation vanishes can be isolated from
the rest by a small number of points. These results were proved in [28, 29].
We start with a simple lemma about maps. For every country F, let aF denote the number of times the orientation changes as we move along the boundary of F. For every node v, let bv denote the number of times the orientation of the edges changes in their cyclic order as they emanate from v.
Lemma 9.4.1 Let G = (V, E, V∗) be any directed map on an orientable surface S of genus g such that all countries are homeomorphic to an open disc. Then
$$\sum_{F\in V^*}(a_F - 2) + \sum_{v\in V}(b_v - 2) = 4g - 4.$$
Proof. Counting corners of countries with both incident edges directed in or both directed out, we get the equation
$$\sum_F a_F = \sum_v (d_v - b_v).$$
Using Euler's formula,
$$\sum_F a_F + \sum_v b_v = \sum_v d_v = 2m = 2n + 2f + 4g - 4.$$
Rearranging, we get the equality in the lemma.
If g = 0, then there is no nonzero rotation-free circulation by Theorem 9.1.2, and hence
ηe = 0 for every edge e. But for g > 0 we have:
Theorem 9.4.2 If g > 0, then ηe ̸= 0 for every edge e.
Proof. Suppose that ηe = 0 for some edge e. Then by Lemma 9.2.1, there are vectors π = π(e) ∈ RV and π∗ = π∗(e) ∈ RV∗ such that
$$\pi_{h(a)} - \pi_{t(a)} = \pi^*_{r(a)} - \pi^*_{l(a)} \tag{9.12}$$
for every edge a ≠ e, but
$$\pi_{h(e)} - \pi_{t(e)} = 1 + \pi^*_{r(e)} - \pi^*_{l(e)}. \tag{9.13}$$
We define a convenient orientation of G. Let E(G) = E1 ∪ E2, where E1 consists of the edges a with πh(a) ≠ πt(a), and E2 is the rest. Every edge a ∈ E1 is oriented so that πh(a) > πt(a). Consider any connected component C of the subgraph formed by the edges in E2. Let u1, . . . , uk be the nodes of C that are incident with edges in E1. Add a new node v to C and connect it to u1, . . . , uk to get a graph C′. Clearly C′ is 2-connected, so it has an acyclic orientation such that every node is contained in a path from v to u1. The corresponding orientation of C is acyclic and has no source or sink other than possibly u1, . . . , uk.
Carrying this out for every connected component of the subgraph formed by E2, we get an orientation of G. We
claim this orientation is acyclic. Indeed, if we had a directed cycle, then walking around it
π would never decrease, so it would have to stay constant. But then all edges of the cycle
would belong to E2 , contradicting the way these edges were oriented.
We also claim this orientation has only one source and one sink. Indeed, if a node
v ̸= h(e), t(e) is incident with an edge of E1 , then it has at least one edge of E1 entering it
and at least one leaving it. If v is not incident with any edge of E1 , then it is an internal node
of a component C, and so it is not a source or sink by the construction of the orientation of
C.
Take the union of G and the dual graph G∗ . This gives a graph H embedded in S. Clearly
H inherits an orientation from G and from the corresponding orientation of G∗ .
We are going to apply Lemma 9.4.1. Every country of H will have aF = 2 (this follows from the way the orientation of G∗ was defined). Those nodes of H which arise as the intersection of an edge of G with an edge of G∗ will have bv = 2.
Consider a node v of G. If v = h(e), then clearly all edges are directed toward v, so bh(e) = 0. Similarly, bt(e) = 0. We claim that bv = 2 for every other node. Since obviously v is not a source or a sink, we have bv ≥ 2. Suppose that bv > 2. Then there are edges e1, e2, e3, e4 incident with v, in this cyclic order, such that e1 and e2 form a corner of a country F, e3 and e4 form a corner of a country F′, h(e1) = h(e2) = v and t(e3) = t(e4) = v. Consider the values of π∗ on the countries incident with v. We may assume (interchanging the roles of F and F′ if necessary) that π∗(F) ≥ π∗(F′). From the orientation of the edges e1 and e2 it follows that π∗(F) is larger than the value of π∗ on its neighbors.
Let W be the union of all countries F′′ with π∗(F′′) ≥ π∗(F). The boundary of W is an eulerian subgraph, and so it can be decomposed into edge-disjoint cycles D1, . . . , Dt. Since the boundary goes through v twice (once along e1 and e2, once along two other edges with the corner of F′ on the left hand side), we have t ≥ 2, and so one of these cycles, say D1, does not contain e. But then by the definition of the orientation and by (9.12), D1 is a directed cycle, which is a contradiction.
A similar argument shows that if v is a node corresponding to a country, then bv = 2; this holds also if v comes from r(e) or from l(e). So substituting in Lemma 9.4.1, only two terms on the left hand side are non-zero, yielding −4 = 4g − 4, or g = 0, a contradiction.
Corollary 9.4.3 If g > 0, then there exists a rotation-free circulation that does not vanish on any edge.

Corollary 9.4.4 If g > 0, then for every edge e, $(\eta_e)_e \ge n^{-n}f^{-f}$.

Indeed, combining Theorem 9.4.2 with the remark after Corollary 9.2.2, we see that (ηe)e > 0 if g > 0. But (ηe)e = 1 − R(e) − R∗(e) is a rational number, and it is easy to see that its denominator is not larger than $n^nf^f$.
Theorem 9.4.5 Let G be a graph embedded in an orientable surface S of genus g > 0 so
that all countries are discs. Let φ be a non-zero rotation-free circulation on G and let G′ be
the subgraph of G on which φ does not vanish. Suppose that φ vanishes on all edges incident
with a connected subgraph U of G. Then U can be separated from G′ by at most 4g − 3 points.
The conclusion that the connectivity between U and the rest of the graph must be linear in g is sharp in the following sense. Suppose X is a connected induced subgraph of G separated from the rest of G by at most 2g nodes, and suppose (for simplicity) that X is embedded in a subset of S that is topologically a disc. Contract X to a single point x, and erase the resulting multiplicities of edges. We get a graph G′ still embedded in S so that each country is a disc. Thus this graph has a (2g)-dimensional space of rotation-free circulations, and hence there is a non-zero rotation-free circulation ψ vanishing on 2g − 1 of the edges incident with x. Since ψ is a circulation, it must vanish on all the edges incident with x. Uncontracting X, and extending ψ with 0-s to the edges of X, it is not hard to check that we get a rotation-free circulation.
Proof.
Let W be the connected component of G \ V (G′ ) containing U , and let Y denote
the set of nodes in V (G) \ V (W ) adjacent to W .
Consider an edge e with φ(e) = 0. If e is not a loop, then we can contract e and get a map on the same surface with a rotation-free flow on it. If e is a loop and G − e is still a map, i.e., every country is a disc, then φ restricted to G − e is a rotation-free flow on it. If G − e is not a map, then both shores of e must be the same country. So we can eliminate edges with φ(e) = 0 unless h(e) = t(e) and r(e) = l(e) (we call these edges strange loops). In this latter case, we can change φ(e) to any non-zero value and still have a rotation-free flow.
Applying this reduction procedure, we may assume that W = {w} consists of a single node, and the only edges with φ = 0 are the edges between w and Y, or between two nodes of Y. We do not contract edges between nodes in Y (we don't want to reduce the size of Y), but we can try to delete them; if this does not work, then every such edge must have r(e) = l(e). Also, if more than one edge remains between w and a node y ∈ Y, then each of them has r(e) = l(e) (else, one of them could be deleted). Note that we may have some strange loops attached at w. Let dw denote the number of edges between w and Y.
Re-orient each edge with φ ≠ 0 in the direction of the flow φ, and orient the edges between w and Y alternately in and out from w. Orient the edges with φ = 0 between two nodes of Y arbitrarily. We get a digraph G1.
It is easy to check that G1 has no sources or sinks, so bv ≥ 2 for every node v, and
of course bw ≥ |Y | − 1. Furthermore, every country either has an edge with φ > 0 on its
boundary, or an edge with r(e) = l(e). If a country has at least one edge with φ > 0, then
it cannot be bounded by a directed cycle, since φ would add up to a positive number on its
boundary. If a country boundary goes through an edge with r(e) = l(e), then it goes through
it twice in different directions, so again it is not directed. So we have aF ≥ 2 for every face.
Substituting in Lemma 9.4.1, we get that |Y | − 1 ≤ 4g − 4, or |Y | ≤ dw ≤ 4g − 3. Since
Y separates U from G′ , this proves the theorem.
9.5 Geometric representations and discrete analytic functions
9.5.1 Rubber bands
We apply the method to toroidal graphs.
Let G be a toroidal map. We consider the torus as R2 /Z2 , endowed with the metric
coming from the euclidean metric on R2 . Let us replace each edge by a rubber band, and
let the system find its equilibrium. Topology prevents the map from collapsing to a single
point. In mathematical terms, we are minimizing
$$\sum_{ij\in E(G)} \ell(ij)^2, \tag{9.14}$$
where the length ℓ(ij) of the edge ij is measured in the given metric, and we are minimizing over all continuous mappings of the graph into the torus homotopic to the original embedding.
It is not hard to see that the minimum is attained, and the minimizing mapping is unique
up to isometries of the torus. We call it the rubber band mapping. Clearly, the edges are
mapped onto geodesic curves. A nontrivial fact is that if G is a simple 3-connected toroidal
map, then the rubber band mapping is an embedding.
We can lift this optimizing embedding to the universal cover space, to get a planar map
which is doubly periodic, and the edges are straight line segments. Moreover, every node is at
the center of gravity of its neighbors. This follows simply from the minimality of (9.14). This
means that both coordinate functions are harmonic and periodic, and so their coboundaries
are rotation-free circulations on the original graph. Since the dimension of the space C of
rotation-free circulations on a toroidal map is 2, this construction gives us the whole space
C.
This last remark also implies that if G is a simple 3-connected toroidal map, then selecting
any basis φ1 , φ2 in C, the primitive functions of φ1 and φ2 give a doubly periodic straight-line
embedding of the universal cover map in the plane.
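A compact sketch of the computation behind this (assuming numpy; the per-edge "offsets" recording how an edge wraps around the fundamental domain are hypothetical bookkeeping introduced here, not notation from the text): minimizing (9.14) in the universal cover means each coordinate solves a Laplacian system, with one right-hand side contribution per wrapping edge.

```python
import numpy as np

def rubber_band_torus(n, edges):
    """edges: list of (tail, head, offset), offset in R^2 being the lattice
    vector by which the edge wraps around the torus R^2/Z^2. Returns node
    positions (unique up to translation) with every node at the center of
    gravity of its suitably shifted neighbors."""
    L = np.zeros((n, n)); b = np.zeros((n, 2))
    for t, h, off in edges:
        L[t, t] += 1; L[h, h] += 1; L[t, h] -= 1; L[h, t] -= 1
        b[t] += np.asarray(off); b[h] -= np.asarray(off)
    return np.linalg.pinv(L) @ b

# 3x3 grid on the torus: the equilibrium is the uniform grid, as expected.
idx = lambda i, j: (i % 3) * 3 + (j % 3)
edges = [(idx(i, j), idx(i + 1, j), (1.0 if i == 2 else 0.0, 0.0))
         for i in range(3) for j in range(3)] + \
        [(idx(i, j), idx(i, j + 1), (0.0, 1.0 if j == 2 else 0.0))
         for i in range(3) for j in range(3)]
print(np.round(rubber_band_torus(9, edges), 3))
```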
9.5.2 Square tilings
A beautiful connection between square tilings and rotation-free flows was described in the
classic paper of Brooks, Smith, Stone and Tutte [40]. For our purposes, periodic tilings of
the whole plane are more convenient to use.
Consider a tiling of the plane with squares whose sides are parallel to the coordinate axes. Assume that the tiling is discrete, i.e., every bounded region contains only a finite number of squares. We associate a map in the plane with this tiling as follows. As in Chapter 8,
we represent each maximal horizontal segment composed of edges of the squares by a single
node at the midpoint of the segment. Each square “connects” two horizontal segments, and
we can represent it by an edge connecting the two corresponding nodes, directed top-down.
(We handle points where four squares meet just as in the finite case.) We get an (infinite)
directed graph G (Figure 9.3).
It is not hard to see that G is planar. If we assign the edge length of each square to the
corresponding edge, we get a circulation: If a node v represents a segment I, then the total
flow into v is the sum of edge length of squares attached to I from the top, while the total
flow out of v is the sum of edge length of squares attached to I from the bottom. Both of
these sums are equal to the length of I (let’s ignore the possibility that I is infinite for the
moment). Furthermore, since the edge-length of a square is also the difference between the
y-coordinates of its upper and lower edges, this flow is rotation-free.
Now suppose that the tiling is doubly periodic with period vectors a, b ∈ R2 (i.e., we
consider a square tiling of the torus). Then so will be the graph G, and so factoring out the
period, we get a map on the torus. Since the tiling is discrete, we get a finite graph. This also
fixes the problem with the infinite segment I: it will become a closed curve on the torus, and
Figure 9.3: The Brooks–Smith–Stone–Tutte construction
so we can argue with its length on the torus, which is finite now. The flow we constructed
will also be periodic, so we get a rotation-free circulation on the torus.
We can repeat the same construction using the vertical edges of the squares. It is not
hard to see this gives the dual graph, with the dual rotation-free circulation on it.
This construction can be reversed. Take a toroidal map G and any rotation-free circulation on it. Then this circulation can be obtained from a doubly periodic tiling of the plane by squares, where the edge-length of a square is the flow through the corresponding edge.
(We suppress details.)
If an edge has 0 flow, then the corresponding square will degenerate to a single point.
Luckily, we know (Corollary 9.4.3) that for a simple 3-connected toroidal map, there is always
a nowhere-zero rotation-free circulation, so these graphs can be represented by a square tiling
with no degenerate squares.
9.5.3 Circle packings
We can also relate discrete holomorphic forms to circle representations. It is again best to go to the universal cover map Ĝ. For every 3-connected toroidal graph G we can construct two (infinite, but discrete) families F and F∗ of circles in the plane so that they are doubly periodic modulo a lattice L = Za + Zb, F (mod L) corresponds to the nodes of G, F∗ (mod L) corresponds to the countries of G, and for every edge e, there are two circles C, C′ representing he and te, and two circles D and D′ representing re and le, so that C, C′ are tangent at a point p, D, D′ are tangent at the same point p, and C, D are orthogonal.
If we consider the centers of the circles in F as nodes, and connect two centers by a straight line segment if the circles touch each other, then we get a straight line embedding of the
universal cover map in the plane (periodic modulo L). Let xv denote the point representing
node v of the universal cover map or of its dual.
To get a holomorphic form out of this representation, consider the plane as the complex plane, and define φ(uv) = xv − xu for every edge of Ĝ or Ĝ∗. Clearly φ is invariant under translation by any vector in L, so it can be considered as a function on E(G). By the orthogonality property of the circle representation, φ(e∗)/φ(e) is a positive multiple of i. In other words,
$$\frac{\varphi(e^*)}{|\varphi(e^*)|} = i\,\frac{\varphi(e)}{|\varphi(e)|}.$$
It follows that if we consider the map G with weights ℓe = |φ(e)| and ℓe∗ = |φ(e∗)|, then φ is a discrete holomorphic form on this weighted map.
It would be nice to be able to turn this construction around, and construct a circle
representation using discrete holomorphic forms.
For more on the subject, see the book by Kenneth Stephenson [225].
9.5.4 Touching polygons
Kenyon's ideas in [134] give another geometric interpretation of analytic functions. Let G be a map in the plane and let f be an analytic function on G. Let us assume that there is a straight-line embedding of G in the plane with convex faces, and similarly, a straight-line embedding of G∗ with convex faces. Let Pu denote the convex polygon representing the country of G (or G∗) corresponding to u ∈ V∗ (or u ∈ V). Shrink each Pu from the point f(u) by a factor of 2. Then we get a system of convex polygons where for every edge uv ∈ G♢, the two polygons Pu and Pv share a vertex at the point (f(u) + f(v))/2 (Figure 9.4(a)). There are two kinds of polygons (corresponding to the nodes in V and V∗, respectively). It can be shown that the interiors of the polygons Pu are disjoint (the point f(u) is not necessarily in the interior of Pu). The white quadrilaterals between the polygons correspond to the edges of G: they are rectangles, and the sides of the rectangle corresponding to edge e are f(he) − f(te) and f(le) − f(re).
Take two analytic functions f and g, and construct the polygons f (u)Pu (multiplication
by the complex number f (u) corresponds to blowing up and rotating). The resulting polygons
will not meet at the appropriate vertices any more, but we can try to translate them so that
they do. Now equation (9.6) tells us that we can do that (Figure 9.4(b)). Conversely, every
“deformation” of the picture such that the polygons Pu remain similar to themselves defines
an analytic function on G.
Figure 9.4: The deformation of the touching polygon representation given by another analytic function.
Chapter 10
Orthogonal representations I: the theta function
In the previous chapters, the vector-labelings were mostly in low dimensions, and planar graphs (or at least graphs embeddable in a given surface) played the major role. Now we turn to representations in higher dimensions, where planarity plays no role, or at least only a small one.
An orthogonal representation of a simple graph G = (V, E) in Rd assigns to each i ∈ V a vector ui ∈ Rd such that $u_i^Tu_j = 0$ whenever i and j are distinct nonadjacent nodes. An orthonormal representation is an orthogonal representation in which all the representing vectors have unit length. Clearly we can always scale the nonzero vectors in an orthogonal representation this way, and usually this does not change any substantial feature of the problem.
Note that we did not insist that different nodes are represented by different vectors, nor
that adjacent nodes are mapped on non-orthogonal vectors. If these conditions also hold, we
call the orthogonal representation faithful.
Example 10.0.1 Every graph has a trivial orthonormal representation in RV , in which node
i is represented by the standard basis vector ei . This representation is not faithful unless the
graph has no edges.
Of course, we are interested in “nontrivial” orthogonal representations, which are more
“economical” than the trivial one.
Example 10.0.2 Figure 10.1 below shows that for the graph obtained by adding a diagonal to the pentagon, a simple orthogonal representation in 2 dimensions can be constructed.

This example can be generalized as follows.

Example 10.0.3 Let $k = \overline{\chi}(G)$, and let {B1, . . . , Bk} be a family of disjoint complete subgraphs covering all the nodes. Let {e1, . . . , ek} be the standard basis of Rk. Then mapping every node of Bi to ei gives an orthonormal representation.
Figure 10.1: An (almost) trivial orthogonal representation
A more "geometric" orthogonal representation is described by the following example.

Example 10.0.4 Since $\overline{\chi}(C_5) = 3$, the previous example gives a representation of C5 in 3-space (Figure 10.2, left). To get a less trivial representation, consider an "umbrella" in R3 with 5 ribs of unit length (Figure 10.2, right). Open it up to the point where non-consecutive ribs are orthogonal. This way we get 5 unit vectors u0, u1, u2, u3, u4, assigned to the nodes of C5 so that each ui forms the same angle with the "handle" and any two non-adjacent nodes are labeled with orthogonal vectors. These vectors give an orthogonal representation of C5 in 3-space.
Figure 10.2: Two orthogonal representations of C5 .
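A numerical companion to the umbrella (a sketch assuming numpy; the handle is taken to be e3): opening the umbrella so that non-consecutive ribs are orthogonal forces cos²θ = 1/√5 for the common angle θ with the handle, and the resulting value max 1/(cᵀuᵢ)² is exactly √5, anticipating (10.9) below.

```python
import numpy as np

c2 = np.cos(4 * np.pi / 5) / (np.cos(4 * np.pi / 5) - 1)   # cos^2(theta)
s, t = np.sqrt(1 - c2), np.sqrt(c2)
u = [np.array([s * np.cos(2 * np.pi * k / 5),
               s * np.sin(2 * np.pi * k / 5), t]) for k in range(5)]

print(max(abs(u[k] @ u[(k + 2) % 5]) for k in range(5)))      # ~0: orthogonal
print(max(1 / u[k][2] ** 2 for k in range(5)), np.sqrt(5))    # both ~2.2360
```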
When looking for “economic” orthogonal representations, we can define “economic” in
several ways. For example, we may want to find an orthogonal representation in a dimension
as low as possible (even though this particular way of phrasing the question does not seem
to be the most fruitful; we will return to more interesting versions of the minimum dimension problem in the next chapter). We start with an application that provided the original
motivation for considering orthogonal representations.
10.1 Shannon capacity
We start with the problem from information theory that motivated the introduction of orthogonal representations [157] and several of the results to be discussed in this chapter.
Consider a noisy channel through which we are sending messages over a finite alphabet
V . The noise may blur some letters so that certain pairs can be confused. We want to select
as many words of length k as possible so that no two can possibly be confused. As we shall
see, the number of words we can select grows as $\Theta^k$ for some Θ ≥ 1, which is called the
Shannon zero-error capacity of the channel.
In terms of graphs, we can model the problem as follows. We consider V as the set of
nodes of a graph, and connect two of them by an edge if they can be confused. This way we
obtain a graph G = (V, E), which we call the confusion graph of the alphabet. We denote by
α(G) the maximum number of nonadjacent nodes (the maximum size of a stable set) in the
graph G. If k = 1, then the maximum number of non-confusable messages is α(G).
To describe longer messages, we use the notion of the strong product of two graphs (see the Appendix); Gk denotes the k-fold strong product G ⊠ ⋯ ⊠ G. In terms of the confusion graph, α(Gk) is the maximum number of non-confusable words of length k: words composed of elements of V, so that for every two words there is at least one i (1 ≤ i ≤ k) such that the i-th letters are different and non-adjacent in G, i.e., non-confusable. It is easy to see that
$$\alpha(G \boxtimes H) \ge \alpha(G)\alpha(H). \tag{10.1}$$
This implies that
$$\alpha(G^{k+l}) \ge \alpha(G^k)\alpha(G^l), \tag{10.2}$$
and
$$\alpha(G^k) \ge \alpha(G)^k. \tag{10.3}$$
The Shannon capacity of a graph G is the value
$$\Theta(G) = \lim_{k\to\infty} \alpha(G^k)^{1/k}. \tag{10.4}$$
Inequality (10.2) implies that the limit exists, and (10.3) implies that
$$\Theta(G) \ge \alpha(G). \tag{10.5}$$
Rather little is known about this graph parameter. For example, it is not known whether
Θ(G) can be computed for all graphs by any algorithm (polynomial or not), although there
are several special classes of graphs for which this is not hard. The behavior of Θ(G) and
the convergence in (10.4) are rather erratic; see Alon [10] and Alon and Lubetzky [13].
Example 10.1.1 Let C4 denote the 4-cycle with nodes a, b, c, d. By (10.5), we have Θ(C4) ≥ 2. On the other hand, if we use a word, then all the $2^k$ words obtained from it by replacing (in any set of coordinates) a and b by each other, as well as c and d by each other, are excluded. Hence $\alpha(C_4^k) \le 4^k/2^k = 2^k$, which implies that Θ(C4) = 2.
The argument for bounding Θ from above in this last example can be generalized as follows. Let $\overline{\chi}(G)$ denote the minimum number of complete subgraphs covering the nodes of G (this is the same as the chromatic number of the complementary graph). Trivially,
$$\alpha(G) \le \overline{\chi}(G). \tag{10.6}$$
Any covering of G by $\overline{\chi}(G)$ cliques and of H by $\overline{\chi}(H)$ cliques gives a "product covering" of G ⊠ H by $\overline{\chi}(G)\overline{\chi}(H)$ cliques, and so
$$\overline{\chi}(G \boxtimes H) \le \overline{\chi}(G)\overline{\chi}(H). \tag{10.7}$$
Hence
$$\alpha(G^k) \le \overline{\chi}(G^k) \le \overline{\chi}(G)^k,$$
and thus
$$\Theta(G) \le \overline{\chi}(G). \tag{10.8}$$
It follows that if $\alpha(G) = \overline{\chi}(G)$, then Θ(G) = α(G); for such graphs, nothing better can be done than reducing the alphabet to the largest mutually non-confusable subset.
Example 10.1.2 The smallest graph for which Θ(G) cannot be computed by these means is the pentagon C5. If we set V(C5) = {0, 1, 2, 3, 4} with E(C5) = {01, 12, 23, 34, 40}, then C5² contains the stable set {(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)}. So $\alpha(C_5^{2k}) \ge \alpha(C_5^2)^k \ge 5^k$, and hence $\Theta(C_5) \ge \sqrt{5}$.
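A short machine verification of this stable set (plain Python; two distinct words are confusable exactly when every coordinate pair is equal or adjacent in C5):

```python
adj = lambda x, y: (x - y) % 5 in (1, 4)       # adjacency in C5
confusable = lambda u, v: all(a == b or adj(a, b) for a, b in zip(u, v))
S = [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]
print(any(confusable(u, v) for u in S for v in S if u != v))   # False
```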
We will show that equality holds here [157], but first we need some tools.
10.2 Definition and basic properties of the theta-function
Comparing Example 10.0.3 and the bound (10.6) suggests that using orthogonal representations of G other than those obtained from clique coverings might give better bounds on the Shannon capacity. To this end, we have to figure out what should replace the number of different cliques in this more general setting. It turns out that the smallest half-angle ϕ of a rotational cone (in arbitrary dimension) which contains all vectors in an orthogonal representation of the graph does the job [157]. We will work with a transformed version of this quantity, namely
$$\vartheta(G) = \frac{1}{(\cos\phi)^2} = \min_{(u_i),\,c}\ \max_{i\in V}\ \frac{1}{(c^Tu_i)^2},$$
where the minimum is taken over all orthonormal representations (ui : i ∈ V) of G and all unit vectors c. (We call c the "handle" of the representation; for the origin of the name, see Example 10.0.4. Of course, we could fix c, but this is not always convenient.)
From the trivial orthogonal representation (Example 10.0.1) we get that ϑ(G) ≤ |V|. Tighter inequalities can be proved:

Theorem 10.2.1 For every graph G,
$$\alpha(G) \le \vartheta(G) \le \overline{\chi}(G).$$
Proof. First, let S ⊆ V be a maximum stable set of nodes in G. Then in every orthonormal representation (ui), the vectors {ui : i ∈ S} are mutually orthogonal unit vectors. Hence
$$1 = c^Tc \ge \sum_{i\in S} (c^Tu_i)^2 \ge |S|\,\min_i (c^Tu_i)^2,$$
and so
$$\max_{i\in V} \frac{1}{(c^Tu_i)^2} \ge |S| = \alpha(G).$$
This implies the first inequality.
The second inequality follows from Example 10.0.3, using $c = \frac{1}{\sqrt{k}}(e_1 + \cdots + e_k)$ as the handle.
From Example 10.0.4 we get, using elementary trigonometry, that
$$\vartheta(C_5) \le \sqrt{5}. \tag{10.9}$$
We'll see that equality holds here.
Lemma 10.2.2 For any two graphs G and H, we have ϑ(G ⊠ H) ≤ ϑ(G)ϑ(H).

We will prove later (Proposition 10.2.9) that equality holds here.
Proof. The tensor product of two vectors u = (u1, . . . , un) ∈ Rn and v = (v1, . . . , vm) ∈ Rm is the vector
$$u \circ v = (u_1v_1, \dots, u_1v_m, u_2v_1, \dots, u_2v_m, \dots, u_nv_1, \dots, u_nv_m) \in \mathbb{R}^{nm}.$$
The inner product of two tensor products can be expressed easily: if u, x ∈ Rn and v, y ∈ Rm, then
$$(u \circ v)^T(x \circ y) = (u^Tx)(v^Ty). \tag{10.10}$$
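Identity (10.10) is exactly what numpy's Kronecker product computes for vectors, so it is easy to spot-check numerically (a sanity check, not a proof):

```python
import numpy as np
rng = np.random.default_rng(0)
u, x = rng.standard_normal((2, 4))
v, y = rng.standard_normal((2, 3))
print(np.kron(u, v) @ np.kron(x, y), (u @ x) * (v @ y))   # equal values
```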
Now let (ui : i ∈ V) be an optimal orthonormal representation of G with handle c (ui, c ∈ Rn), and let (vj : j ∈ V(H)) be an optimal orthonormal representation of H with handle d (vj, d ∈ Rm). It is easy to check, using (10.10), that the vectors ui ∘ vj ((i, j) ∈ V(G) × V(H)) form an orthonormal representation of G ⊠ H. Furthermore, taking c ∘ d as its handle, we have by (10.10) again that
$$\bigl((c \circ d)^T(u_i \circ v_j)\bigr)^2 = (c^Tu_i)^2(d^Tv_j)^2 \ge \frac{1}{\vartheta(G)\vartheta(H)},$$
and hence
$$\vartheta(G \boxtimes H) \le \max_{i,j} \frac{1}{\bigl((c \circ d)^T(u_i \circ v_j)\bigr)^2} \le \vartheta(G)\vartheta(H).$$
This inequality has some important corollaries. We have
$$\alpha(G^k) \le \vartheta(G^k) \le \vartheta(G)^k,$$
which implies
Corollary 10.2.3 For every graph G, we have Θ(G) ≤ ϑ(G).
In particular, combining with (10.9) and Example 10.1.2, we get a solution of Shannon's problem:
$$\sqrt{5} \le \Theta(C_5) \le \vartheta(C_5) \le \sqrt{5},$$
and equality must hold throughout.
We can generalize this argument. The product graph $G \boxtimes \overline{G}$ contains the diagonal as a stable set, implying that
$$\vartheta(G \boxtimes \overline{G}) \ge \alpha(G \boxtimes \overline{G}) \ge |V|.$$
Together with Lemma 10.2.2, this yields:

Corollary 10.2.4 For every graph G = (V, E), we have $\vartheta(G)\vartheta(\overline{G}) \ge |V|$.
√
Since C5 ∼
= C5 , Corollary 10.2.4 implies that ϑ(C5 ) ≥ 5, and together with (10.9), we
√
get another proof of the fact that ϑ(C5 ) = 5. Equality does not hold in general in Corollary
10.2.4, but it does when G has a node-transitive automorphism group. We postpone the proof
of this fact until some further formulas for ϑ will be developed (Corollary 10.2.11).
We continue with a number of formulas for ϑ(G), which together have a lot of implications.
The first bunch of these will be proved together. We define a number of parameters, which
will all turn out to be equal to ϑ(G).
The following geometric definition was discovered by Karger, Motwani and Sudan. In terms of the complementary graph, this value is sometimes called the "vector chromatic number". Let
$$\vartheta_1 = \min\Bigl\{t \ge 2 : \exists\, w_i \in \mathbb{R}^d \text{ with } |w_i| = 1,\ w_i^Tw_j = -\frac{1}{t-1}\ (\forall\, ij \in \overline{E})\Bigr\}. \tag{10.11}$$
Next, we give a couple of formulas for ϑ in terms of semidefinite optimization. Consider the following two semidefinite programs, where Y, Z ∈ RV×V are (unknown) symmetric matrices:
$$\begin{aligned} &\text{minimize } t &\qquad &\text{maximize } \textstyle\sum_{i,j\in V} Z_{ij}\\ &\text{subject to } Y \succeq 0 & &\text{subject to } Z \succeq 0\\ &\qquad Y_{ij} = -1 \quad (\forall\, ij \in \overline{E}) & &\qquad Z_{ij} = 0 \quad (\forall\, ij \in E)\\ &\qquad Y_{ii} = t - 1 & &\qquad \mathrm{tr}(Z) = 1 \end{aligned} \tag{10.12}$$
It is not hard to check that these are dual programs. The first program has a strictly feasible
solution (just choose t large enough), so by the Duality Theorem of semidefinite programming,
the two programs have the same objective value; we call this number ϑ2 .
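The second program in (10.12) is easy to hand to a solver. As a hedged illustration (a sketch assuming the cvxpy and numpy packages; any SDP solver would do), here is ϑ(C5) computed numerically; the value comes out as √5 ≈ 2.236, matching (10.9):

```python
import cvxpy as cp
import numpy as np

n = 5
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the pentagon C5

Z = cp.Variable((n, n), symmetric=True)
constraints = [Z >> 0, cp.trace(Z) == 1] + [Z[i, j] == 0 for i, j in E]
theta = cp.Problem(cp.Maximize(cp.sum(Z)), constraints).solve()
print(theta, np.sqrt(5))   # both about 2.2360
```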
Finally, we use orthonormal representations of the complementary graph: define
$$\vartheta_3 = \max \sum_{i\in V} (d^Tv_i)^2, \tag{10.13}$$
where the maximum extends over all orthonormal representations (vi : i ∈ V) of the complementary graph $\overline{G}$ and all unit vectors d.
The main theorem of this section asserts that all these definitions lead to the same value.
Theorem 10.2.5 For every graph G, ϑ(G) = ϑ1 = ϑ2 = ϑ3 .
Proof. We prove that
$$\vartheta(G) \le \vartheta_1 \le \vartheta_2 \le \vartheta_3 \le \vartheta(G). \tag{10.14}$$
To prove the first inequality, let (wi : i ∈ V) be a representation that achieves the minimum in (10.11). Let c be a unit vector orthogonal to all the wi (we increase the dimension of the space if necessary). Let
$$u_i = \frac{1}{\sqrt{\vartheta_1}}\bigl(c + \sqrt{\vartheta_1 - 1}\,w_i\bigr).$$
Then |ui| = 1 and $u_i^Tu_j = 0$ for $ij \in \overline{E}$, so (ui) is an orthonormal representation of G. Furthermore, with handle c we have $c^Tu_i = 1/\sqrt{\vartheta_1}$, which implies that ϑ(G) ≤ ϑ1.
Second, let (Y, t) be an optimal solution of the first program in (10.12). Then ϑ2 = t, and the matrix $\frac{1}{t-1}Y$ is positive semidefinite, and so it can be written as a Gram matrix: there are vectors wi ∈ Rn such that
$$w_i^Tw_j = \frac{1}{t-1}Y_{ij} = \begin{cases} -\frac{1}{t-1}, & \text{if } ij \in \overline{E},\\ 1, & \text{if } i = j \end{cases}$$
(no condition if ij ∈ E). So these vectors satisfy the conditions in (10.11) for the given t.
Since ϑ1 is the smallest t for which this happens, this proves that ϑ1 ≤ t = ϑ2 .
To prove the third inequality in (10.14), let Z be an optimum solution of the dual program in (10.12), with objective function value ϑ2. We can write Z as a Gram matrix: $Z_{ij} = z_i^Tz_j$, where zi ∈ Rk for some k ≥ 1. Let us rescale the vectors zi to get the unit vectors $v_i = z_i^0$ (if zi = 0, then we take a unit vector orthogonal to everything else as vi). Define $d = (\sum_i z_i)^0$. By the properties of Z, the vectors vi form an orthonormal representation of $\overline{G}$, and hence
$$\vartheta_3 \ge \sum_i (d^Tv_i)^2.$$
To estimate the right side, we use the equation
$$\sum_i |z_i|^2 = \sum_i z_i^Tz_i = \mathrm{tr}(Z) = 1$$
and the Cauchy–Schwarz inequality:
$$\sum_i (d^Tv_i)^2 = \Bigl(\sum_i |z_i|^2\Bigr)\Bigl(\sum_i (d^Tv_i)^2\Bigr) \ge \Bigl(\sum_i |z_i|\,d^Tv_i\Bigr)^2 = \Bigl(\sum_i d^Tz_i\Bigr)^2 = \Bigl(d^T\sum_i z_i\Bigr)^2 = \Bigl|\sum_i z_i\Bigr|^2 = \sum_{i,j} z_i^Tz_j = \vartheta_2.$$
This proves that ϑ3 ≥ ϑ2 .
Finally, to prove the last inequality in (10.14), it suffices to prove that if (ui : i ∈ V) is an orthonormal representation of G in Rn with handle c, and (vi : i ∈ V) is an orthonormal representation of $\overline{G}$ in Rm with handle d, then
$$\sum_{i\in V} (d^Tv_i)^2 \le \max_{i\in V} \frac{1}{(c^Tu_i)^2}. \tag{10.15}$$
By a computation similar to the one in the proof of Lemma 10.2.2, we get that the vectors ui ∘ vi (i ∈ V) are mutually orthogonal unit vectors: for i ≠ j, either $u_i^Tu_j = 0$ (if i and j are nonadjacent) or $v_i^Tv_j = 0$ (if they are adjacent). Hence
$$\sum_i (c^Tu_i)^2(d^Tv_i)^2 = \sum_i \bigl((c \circ d)^T(u_i \circ v_i)\bigr)^2 \le 1.$$
On the other hand,
$$\sum_i (c^Tu_i)^2(d^Tv_i)^2 \ge \min_i (c^Tu_i)^2 \sum_i (d^Tv_i)^2,$$
which implies that
$$\sum_i (d^Tv_i)^2 \le \frac{1}{\min_i (c^Tu_i)^2} = \max_i \frac{1}{(c^Tu_i)^2}. \tag{10.16}$$
This proves (10.15) and completes the proof of Theorem 10.2.5.
These formulations have many versions, one of which is stated below without proof (which
is left to the reader as an exercise).
Proposition 10.2.6 For every graph G = (V, E),
$$\vartheta(G) = \min_A \lambda_{\max}(A),$$
where A ranges over all symmetric V × V matrices such that Aij = 1 whenever i = j or $ij \in \overline{E}$.
From the fact that equality holds in (10.14), it follows that equality holds in all the
arguments above. Let us formulate some consequences. Considering the optimal orthogonal
representation constructed in the first step of the proof, we get:
Corollary 10.2.7 Every graph G has an orthonormal representation (ui) with handle c such that for every node i,
$$c^Tu_i = \frac{1}{\sqrt{\vartheta(G)}}.$$
For optimal orthogonal representations in the dual definition of ϑ(G) we also get some
information.
Proposition 10.2.8 For every graph G, every optimal dual orthonormal representation (vi, d) satisfies
$$\sum_i (d^Tv_i)v_i = \vartheta(G)\,d.$$
Proof. Fixing (vi), the best choice of d maximizes the function
$$\sum_i (d^Tv_i)^2 = d^T\Bigl(\sum_i v_iv_i^T\Bigr)d, \tag{10.17}$$
subject to dTd = 1. At the optimum, the gradients of the two functions are parallel, i.e.,
$$\sum_i (v_i^Td)v_i = a\,d$$
with some scalar a. Taking inner product with d, we see that a = ϑ(G).
As a further application of the duality established above, we prove that equality holds in Lemma 10.2.2:

Proposition 10.2.9 For any two graphs G and H,
$$\vartheta(G \boxtimes H) = \vartheta(G)\vartheta(H).$$
Proof. Let (vi, d) be an orthonormal representation of $\overline{G}$ which is optimal in the sense that $\sum_i (d^Tv_i)^2 = \vartheta(G)$, and let (wj, e) be an orthonormal representation of $\overline{H}$ such that $\sum_j (e^Tw_j)^2 = \vartheta(H)$. It is easy to check that the vectors vi ∘ wj form an orthonormal representation of $\overline{G \boxtimes H}$, and so using handle d ∘ e we get
$$\vartheta(G \boxtimes H) \ge \sum_{i,j} \bigl((d \circ e)^T(v_i \circ w_j)\bigr)^2 = \sum_{i,j} (d^Tv_i)^2(e^Tw_j)^2 = \vartheta(G)\vartheta(H).$$
We already know the reverse inequality, which completes the proof.
The matrix descriptions of ϑ imply an important property of optimal orthogonal representations. We say that an orthonormal representation (ui, c) in Rd of a graph G is automorphism invariant if every automorphism γ ∈ Aut(G) can be lifted to an orthogonal transformation Oγ of Rd such that Oγc = c and uγ(i) = Oγui for every node i.
Proposition 10.2.10 Every graph G has an optimal orthonormal representation and an
optimal dual orthonormal representation that are both automorphism invariant.
Proof. We give the proof for the orthonormal representation of the complement. The optimum solutions of the dual semidefinite program in (10.12) form a bounded convex set, which is invariant under the transformations Z ↦ PTZP, where P is the permutation matrix defined by an automorphism of G. The center of gravity of this convex set is a matrix Z which is fixed by these transformations, i.e., it satisfies PTZP = Z for all automorphisms P. The construction of an orthonormal representation of $\overline{G}$ in the proof of ϑ3 ≥ ϑ2 in Theorem 10.2.5 can be done in a canonical way (e.g., choosing the rows of $Z^{1/2}$ as the vectors zi), and so the obtained optimal orthonormal representation will be invariant under the automorphism group of G (acting as permutation matrices).
Corollary 10.2.11 If G has a node-transitive automorphism group, then
$$\vartheta(G)\vartheta(\overline{G}) = |V|.$$
Proof. It follows from Proposition 10.2.10 that G has an optimal dual orthonormal representation (vi, d) (that is, an orthonormal representation of $\overline{G}$ in Rn with $\sum_i (d^Tv_i)^2 = \vartheta(G)$) in which $d^Tv_i$ is the same for every i (see Exercise 10.6.5). So $(d^Tv_i)^2 = \vartheta(G)/|V|$ for all nodes i, and hence
$$\vartheta(\overline{G}) \le \max_i \frac{1}{(d^Tv_i)^2} = \frac{|V|}{\vartheta(G)}.$$
Since we already know the reverse inequality (Corollary 10.2.4), this proves the corollary.
Corollary 10.2.12 If G is a self-complementary graph with a node-transitive automorphism group, then $\Theta(G) = \vartheta(G) = \sqrt{|V|}$.

Proof. The diagonal in $G \boxtimes \overline{G}$ is stable, so $\alpha(G \boxtimes G) = \alpha(G \boxtimes \overline{G}) \ge |V|$, and hence $\Theta(G) \ge \sqrt{|V|}$. On the other hand, $\vartheta(G) \le \sqrt{|V|}$ follows by Corollary 10.2.11.
Example 10.2.13 (Paley graphs) The Paley graph Palp is defined for a prime p ≡ 1 (mod 4). We take {0, 1, . . . , p − 1} as nodes, and connect two of them if their difference is a quadratic residue. It is clear that these graphs have a node-transitive automorphism group, and it is easy to see that they are self-complementary. So Corollary 10.2.12 applies, and gives that $\Theta(\mathrm{Pal}_p) = \vartheta(\mathrm{Pal}_p) = \sqrt{p}$. To determine the stability number of Paley graphs is a difficult unsolved number-theoretic problem, but it is conjectured that α(Palp) = O((log p)²). Supposing this conjecture is true, we get an infinite family for which the Shannon capacity is non-trivial (i.e., Θ > α), and can be determined exactly.
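Self-complementarity is easy to check by machine (plain Python, here for p = 13; multiplication by a fixed quadratic non-residue maps edges of Palp to non-edges and vice versa):

```python
p = 13
QR = {(x * x) % p for x in range(1, p)}          # quadratic residues mod p
edge = lambda a, b: (a - b) % p in QR            # adjacency in Pal_p
g = next(x for x in range(2, p) if x not in QR)  # a quadratic non-residue
print(all(edge(a, b) != edge(g * a % p, g * b % p)
          for a in range(p) for b in range(p) if a != b))   # True
```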
10.3 Random graphs
The Paley graphs are quite similar to random graphs, and indeed, for random graphs ϑ behaves similarly as for the Paley graphs: it is of the order $\sqrt{n}$ (Juhász [128]). (It is not known, however, how large the Shannon capacity of a random graph is.) As usual, G(n, p) denotes a random graph on n nodes with edge density p. Here we assume that p is a constant and n → ∞. The analysis extends to the case when $(\ln n)^{1/6}/n \le p \le 1 - (\ln n)^{1/6}/n$ (Coja-Oghlan and Taraz [55]). See Coja-Oghlan [54] for more results about the concentration of this value.
Theorem 10.3.1 With high probability, $\sqrt{(1-p)n}/3 < \vartheta(G(n,p)) < 3\sqrt{n/p}$.
Proof. First, we prove that with high probability,
$$\vartheta(G(n,p)) < 3\sqrt{n/p}. \tag{10.18}$$
The proof of Juhász uses the result of Füredi and Komlós [94] bounding the eigenvalues of random matrices, but we give the whole argument for completeness. Let G(n, p) = (V, E). Set s = (1 − p)/p, and let A be the matrix defined by
$$A_{ij} = \begin{cases} -s, & \text{if } ij \in E,\\ 1, & \text{otherwise.}\end{cases}$$
Note that these values are chosen so that $\mathsf{E}(A_{ij}) = 0$ and $\mathsf{E}(A_{ij}^2) = s$ for i ≠ j. Then A satisfies the conditions in Proposition 10.2.6, and hence we get that ϑ(G) ≤ λmax(A), where λmax(A) is the largest eigenvalue of A. To estimate λmax(A), we fix an integer $m = \Omega(n^{1/6})$, and note that $\lambda_{\max}(A)^{2m} \le \mathrm{tr}(A^{2m})$.
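An empirical look at this bound (a sketch assuming numpy; one sample with n = 500 and p = 1/2): the largest eigenvalue of A stays far below 3√(n/p).

```python
import numpy as np
rng = np.random.default_rng(1)

n, p = 500, 0.5
s = (1 - p) / p
upper = np.triu(rng.random((n, n)) < p, 1)     # random edges for i < j
adjm = upper | upper.T
A = np.where(adjm, -s, 1.0)                    # -s on edges, 1 elsewhere
print(np.linalg.eigvalsh(A).max(), 3 * np.sqrt(n / p))   # ~45 vs ~95
```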
Our next goal is to estimate the expectation of tr(A2m). This trace can be expressed as
$$\mathrm{tr}(A^{2m}) = \sum_{i_1,\dots,i_{2m}\in V} A_{i_1i_2}\cdots A_{i_{2m-1}i_{2m}}A_{i_{2m}i_1} = \sum_W A(W),$$
where W = (i1, . . . , i2m, i1) ranges over closed walks on the complete graph (with loops) on n nodes.
If we take expectation, then every term in which a non-loop edge occurs an odd number of times gives 0. This implies that the number of different non-loop edges that are traversed is at most m, and since the graph traversed is connected, the number of different nodes that are touched is at most m + 1. Let V(W) denote the set of these nodes.
The main contribution comes from walks with |V(W)| = m + 1; in this case no loop edges are used, the subgraph traversed is a tree, and every edge of it is traversed exactly twice. The traversing walk corresponds to laying out the tree in the plane. It is well known that the number of plane trees with m + 1 nodes is given by the Catalan number $\frac{1}{m+1}\binom{2m}{m}$. We can associate these m + 1 nodes with nodes in V in n(n − 1)⋯(n − m) ways, and the expectation of such a term is $s^m$. So the contribution of these terms is
$$T = \frac{1}{m+1}\binom{2m}{m}\, n(n-1)\cdots(n-m)\, s^m.$$
It is easy to see that $T < (4n/p)^m$ if n is large enough.
Those walks with |V(W)| ≤ m nodes will contribute less, and we need only a rather rough estimate of their contribution. Consider a walk W = (i1, . . . , i2m, i1) which covers a graph G on nodes V(G) = V(W) = [r] (r ≤ m) that is not a tree.

First, suppose that there is a non-loop edge (it, it+1) that is not a cut-edge of G, and it is traversed (at least) twice in the same direction. Then we can write W = W1 + (it, it+1) + W2 + (it, it+1) + W3, where W1 is a walk from i1 to it, W2 is a walk from it+1 to it, and W3 is a walk from it+1 to i1. Define a walk
$$W' = W_1 + (i_t, r+1, i_t) + \overleftarrow{W_2} + W_3,$$
where $\overleftarrow{W_2}$ is the walk W2 in reverse order.

If no such edge exists, then we have W = W1 + (it, it+1) + W2 + (it+1, it) + W3, where W1 is a walk from i1 to it, W2 is a walk from it+1 to it+1, and W3 is a walk from it to i1. Furthermore, the two closed walks W1 + W3 and W2 must have a node v in common, since otherwise itit+1 would be a cut-edge. Reversing the order of the segment of W between these two copies of v, we get a new walk W0 that has the same contribution, and which passes itit+1 twice in the same direction, and so W0′ can be defined as above. Let us define W′ = W0′.
In both cases, the new closed walk W′ covers the nodes [r + 1]. Furthermore, by simple computation,
$$\mathsf{E}(A(W)) \le (s + s^{-1})\,\mathsf{E}(A(W')).$$
Finally, every closed walk covering the nodes 1, . . . , r + 1 arises in at most 4m³ ways in the form of W′: we can get W back by merging r + 1 with one of the other nodes (r ≤ m choices), and then flipping the segment between two occurrences of the same node (fewer than 4m² choices).
Hence
$$\sum_{|V(W)|=r} \mathsf{E}(A(W)) = \binom{n}{r} \sum_{V(W)=[r]} \mathsf{E}(A(W)) \le \binom{n}{r}(s+s^{-1}) \sum_{V(W)=[r]} \mathsf{E}(A(W'))$$
$$\le 4m^3\binom{n}{r}(s+s^{-1}) \sum_{V(W)=[r+1]} \mathsf{E}(A(W)) = 4m^3\binom{n}{r}\binom{n}{r+1}^{-1}(s+s^{-1}) \sum_{|V(W)|=r+1} \mathsf{E}(A(W))$$
$$\le \frac{4m^3(m+1)(s+s^{-1})}{n-m} \sum_{|V(W)|=r+1} \mathsf{E}(A(W)) \le \frac{1}{2} \sum_{|V(W)|=r+1} \mathsf{E}(A(W)),$$
by the choice of m, if n is large enough. This implies that
$$\mathsf{E}\bigl(\mathrm{tr}(A^{2m})\bigr) = \sum_W \mathsf{E}(A(W)) \le \sum_{j=0}^m \frac{T}{2^j} < 2T,$$
and hence by Markov's Inequality,
$$\mathsf{P}\Bigl(\lambda_{\max}(A) \ge 3\sqrt{\tfrac{n}{p}}\Bigr) = \mathsf{P}\Bigl(\lambda_{\max}(A)^{2m} \ge \bigl(\tfrac{9n}{p}\bigr)^m\Bigr) \le \frac{2T}{(9n/p)^m} \le 2\Bigl(\frac{4}{9}\Bigr)^m.$$
This proves that (10.18) holds with high probability.
To prove the lower bound, it suffices to invoke Corollary 10.2.4:
$$\vartheta(G(n,p)) = \vartheta\bigl(\overline{G(n,1-p)}\bigr) \ge \frac{n}{\vartheta(G(n,1-p))} \ge \frac{n}{3\sqrt{n/(1-p)}} = \frac{\sqrt{(1-p)n}}{3}$$
with high probability.
10.4 The TSTAB body and the weighted theta-function
Stable sets and cliques give rise to important polyhedra associated with graphs, namely the
stable set polytope STAB(G) and the fractional stable set polytope QSTAB(G) (see Example
15.5.3). We are going to show that orthogonal representations provide an interesting related
convex body, with nicer and deeper duality properties than those stated in Example 15.5.3.
For every orthonormal representation (vi, d) of G, we consider the linear constraint
$$\sum_{i\in V} (d^Tv_i)^2 x_i \le 1, \tag{10.19}$$
which we call an orthogonality constraint. The solution set of non-negativity and orthogonality constraints is a convex body, which we denote by TSTAB(G). The orthogonality
constraints are valid if x is the indicator vector of a stable set of nodes, and therefore they
are valid for STAB(G); hence STAB(G) ⊆ TSTAB(G). This construction was introduced by
Grötschel, Lovász and Schrijver [100], who also established the properties below. The proofs of these properties are based on extending the previous ideas to a node-weighted version, at the cost of more computation; this is reproduced in the book [101], and hence we only state the main facts without proof.
It is clear that TSTAB is a closed, convex set. The incidence vector of any stable set
S satisfies (10.19) (we have seen this in the proof of Theorem 10.2.1). Furthermore, every
clique constraint is an orthogonality constraint. Indeed, for every clique B, the constraint $\sum_{i\in B} x_i \le 1$ is obtained from the orthogonal representation
$$i \mapsto \begin{cases} e_1, & i \in B,\\ e_i, & \text{otherwise,}\end{cases} \qquad c = e_1.$$
Hence STAB(G) ⊆ TSTAB(G) ⊆ QSTAB(G) for every graph G.
There is a dual characterization of TSTAB. For every orthonormal representation u = (ui : i ∈ V) of $\overline{G}$ with handle c, consider the vector x(u) = ((cTui)² : i ∈ V). Then
$$\mathrm{TSTAB}(G) = \{x(u) : (u, c) \text{ is an orthonormal representation of } \overline{G}\}. \tag{10.20}$$
Is TSTAB(G) a polyhedron? Not in general; to give a characterization, recall the notion of a perfect graph. If G is a perfect graph, then $\alpha(G) = \overline{\chi}(G)$, and so by Theorem 10.2.1:

Proposition 10.4.1 For every perfect graph G, $\Theta(G) = \vartheta(G) = \alpha(G) = \overline{\chi}(G)$.
Not every orthogonality constraint follows from the clique constraints; in fact, the number
of essential orthogonality constraints is infinite in general:
Proposition 10.4.2 TSTAB(G) is polyhedral if and only if the graph is perfect. In this case
TSTAB = STAB = QSTAB.
While TSTAB is a rather complicated set, in many respects it behaves much better than,
say, STAB. For example, it has a very nice connection with graph complementation:
Proposition 10.4.3 TSTAB($\overline{G}$) is the antiblocker of TSTAB(G).
10.5 Theta and stability number
How good an approximation does ϑ provide for the stability number? We have seen (Theorem 10.2.1) that
$$\alpha(G) \le \vartheta(G) \le \overline{\chi}(G).$$
But α(G) and $\overline{\chi}(G)$ can be very far apart, and unfortunately, the approximation of α by ϑ can be just as poor. Feige [85] showed that there are graphs G on n nodes for which $\alpha(G) = n^{o(1)}$ and $\vartheta(G) = n^{1-o(1)}$; in other words, ϑ/α can be larger than $n^{1-\varepsilon}$ for every ε > 0. (The existence of such graphs also follows from the P ≠ NP hypothesis and the results of Håstad [108], asserting that it is NP-hard to determine α(G) with a relative error less than $n^{1-\varepsilon}$, where n = |V|.) By results of M. Szegedy [229], this also implies that ϑ(G) does not approximate the chromatic number within a factor of $n^{1-\varepsilon}$.
Thus, at least in the worst case, the value ϑ(G) can be a very bad approximation for
α(G). The above construction leaves a little room for something interesting, namely in the
cases when either α is very small, or if ϑ is very large. There are indeed (rather week, but
useful) results in both cases.
First, consider the case when α is very small. Koniagin [139] constructed a graph that
has α(G) = 2 and ϑ(G) = Ω(n1/3 ). This is the largest ϑ(G) can be; in fact, Alon and Kahale
[12], improving results of Kashin and Koniagin [133], proved the following theorem.
Theorem 10.5.1 Let G be a graph with n nodes and α(G) = 2. Then ϑ(G) ≤ 1 + n1/3 .
In fact, they prove the more general inequality
( α(G)−1 )
ϑ(G) = O n α(G)+1 ,
(10.21)
valid for graphs with arbitrary stability number. The general result follows by extending the
proof below.
Proof.
The main step in the proof is the following inequality, which is valid for graphs
with arbitrary stability number, and is of interest on its own. Let G = (V, E) be a graph,
and for i ∈ V , let N (i) = V \ N (i) \ {i} and Gi = G[N (i)]. Then
∑
ϑ(G)(ϑ(G) − 1)2 ≤
ϑ(Gi )2 .
(10.22)
i∈V
To prove this inequality, let (vi , d) be an optimal dual orthogonal representation of G, and
let i ∈ V (G). By Proposition 10.2.8, we have
∑
∑
ϑ(G)dT vi =
(dT vj )(viT vj ) = dT vi +
(dT vj )(viT vj )
j∈V
j∈N (i)
(since for j ∈ N (i) we have viT vj = 0). Reordering and squaring, we get
( ∑
)2
(ϑ(G) − 1)2 (dT vi )2 =
(dT vj )(viT vj ) .
j∈N (i)
We can bound the right side using Cauchy–Schwarz:
∑
∑
(ϑ(G) − 1)2 (dT vi )2 ≤
(dT vj )2
(viT vj )2 ≤ ϑ(Gi )2 ,
j∈N (i)
j∈N (i)
146 CHAPTER 10. ORTHOGONAL REPRESENTATIONS I: THE THETA FUNCTION
since (vj : j ∈ N (i)) is a dual orthonormal representation of Gi , which we consider either
with handle d or with handle vi . Summing over i ∈ V , we get inequality (10.22).
Now the theorem follows easily: α(G) ≤ 2 implies that Gi is complete for every node i,
and hence ϑ(Gi ) ≤ 1. So (10.22) implies the theorem.
To motivate the next theorem, let us think of the case when chromatic number is small:
we know that χ(G) ≤ 3. Trivially, α(G) ≥ n/3. From the weaker assumption that ω(G) ≤ 3
we get a much weaker bound: using the result of Ajtai, Koml´os and Szemer´edi [4] in the
theory of off-diagonal Ramsey numbers,
α(G) = Ω(
√
n log n).
(10.23)
The inequality ϑ(G) ≤ 3 is a condition that is between the previous two, and the following
theorem of Karger, Motwani and Sudan [132] does give a lower bound on α(G) that is better
than (10.23). In addition, the proof provides a polynomial time randomized algorithm to
construct a stable set of the appropriate size.
Theorem 10.5.2 For every 3-colorable graph G, we can construct in randomized polynomial
√
time a stable set of size n3/4 /(3 ln n).
Proof. Let G = (V, E). We may assume that G contains a triangle (just add one hanging
from an edge). Then ω(G) = χ(G) = ϑ(G) = 3. If G contains a node i with degree n3/4 ,
then G[N (i)] is bipartite, and so it contains a stable set with n3/4 /2 nodes (and we can find
in it easily). So we may suppose that all degrees are less than n3/4 , and hence |E| < n7/4 /4.
By Theorem 10.2.5, there are unit vectors ui ∈ Rn such that uT
i uj = −1/2 whenever
◦
ij ∈ E. In other words, the angle between ui and uj is 120 .
For given 0 < s < 1 and v ∈ S n−1 , let Cv,s denote a cap on S n−1 cut off by a halfspace
v x ≥ s. This cup has center v. Let its surface area be voln−1 (Cv,s ) = a(s)voln−1 (S n−1 ),
and let S = {i ∈ V : ui ∈ Cv,s }. Choosing the center v uniformly at random on S n−1 , the
T
expected number of points ui in Cv is a(s)n.
We want to bound the expected number of edges spanned by S. Let ij ∈ E, then the
probability that Cv contains both ui and uj is
b(s) =
voln−1 (Cui ∩ Cuj )
voln−1 (S n−1 )
So the expected number of edges induced by S is b(s)|E| < b(s)n7/4 .
Deleting one endpoint of every edge induced by S, we get a stable set of nodes. Hence
α(G) ≥ |S| − |E(S)|, and taking expectation, we get
α(G) ≥ a(s)n − b(s)n7/4 .
(10.24)
10.6. ALGORITHMIC APPLICATIONS
147
So it suffices to estimate a(s) and b(s) as functions of s, and then choose the best s. This
is elementary computation in geometry, but it is spherical geometry in n-space, and the
computations are a bit tedious.
Any point x ∈ Cui ∩ Cuj satisfies uiT x ≥ s and also uT
j x ≥ s, hence it satisfies
(ui + uj )T x ≥ 2s.
(10.25)
Since ui + uj is a unit vector, this implies that Cui ,s ∩ Cuj ,s ⊆ Cui +uj ,2s , and so b(s) ≤ a(2s).
Thus
α(G) ≥ a(s)n − a(2s)n7/4 .
(10.26)
It is known from geometry that
1
1
√ (1 − s2 )(n−1)/2 < a(s) < √ (1 − s2 )(n−1)/2 .
10s n
s n
Thus
α(G) ≥ a(s)n − a(2s)n
7/4
√
n
n5/4
≥
(1 − s2 )(n−1)/2 −
(1 − 4s2 )(n−1)/2 .
10s
4s
We want to choose s so that it maximizes the right side. By elementary computation we get
√
that s = (ln n)/(2n) is an approximately optimal choice, which gives the estimate in the
theorem.
10.6
Algorithmic applications
Perhaps the most important consequence of the formulas proved in Section 10.2 is that the
value of ϑ(G) is polynomial time computable [98]. More precisely,
Theorem 10.6.1 There is a polynomial time algorithm that computes, for every graph G
and every ε > 0, a real number t such that
|ϑ(G) − t| < ε.
Algorithms proving this theorem can be based on almost any of our formulas for ϑ. The
simplest is to refer to Theorem 10.2.5 giving a formulation of ϑ(G) as the the optimum of a
semidefinite program (10.12), and the polynomial time solvability of semidefinite programs
(see Section 15.7.2 in the Appendix).
The significance of this fact is underlined if we combine it with Theorem 10.2.1: The two
important graph parameters α(G) and χ(G) are both NP-hard, but they have a polynomial
time computable quantity sandwiched between them. This fact in particular implies:
Corollary 10.6.2 The stability number and the chromatic number of a perfect graph are
polynomial time computable.
148 CHAPTER 10. ORTHOGONAL REPRESENTATIONS I: THE THETA FUNCTION
Theorem 10.6.1 extends to the weighted version of the theta function. Maximizing a
linear function over STAB(G) or QSTAB(G) is NP-hard; but, surprisingly, TSTAB behaves
much better: Every linear objective function can be maximized over TSTAB(G) (with an
∑
arbitrarily small error) in polynomial time. The maximum of i xi over TSTAB(G) is the
familiar function ϑ(G). See [101, 137] for more detail.
We conclude with another important application to a coloring problem. Suppose that
somebody gives a graph and guarantees that the graph is 3-colorable, without telling us its
3-coloring. Can we find this 3-coloring? (This may sound artificial, but this kind of situation
does arise in cryptography and other data security applications; one can think of the hidden
3-coloring as a “watermark” that can be verified if we know where to look.)
It is easy to argue that knowing that the graph is 3-colorable does not help: it is still
NP-hard to find the 3-coloration. But suppose that we would be satisfied with finding a
4-coloration, or 5-coloration, or (log n)-coloration; is this easier? It is known that to find a
4-coloration is still NP-hard, but little is known above this. Improving earlier results, Karger,
Motwani and Sudan [132] gave a polynomial time algorithm that, given a 3-colorable graph,
computes a coloring with O(n1/4 (ln n)3/2 ) colors. More recently, this was improved by Blum
and Karger [38] to O(n3/14 ).
The algorithm of Karger, Motwani and Sudan starts with computing ϑ(G), which is at
√
most 3 by Theorem 10.2.1. Using Theorem 10.5.2, they find a stable set of size Ω(n3/4 / ln n).
Deleting this set from G and iterating, they get a coloring of G with O(n1/4 (ln n)3/2 ) colors.
Exercise 10.6.3 If ϑ(G) = 2, then G is bipartite.
Exercise 10.6.4 (a) If H is an induced subgraph of G, then ϑ(H) ≤ ϑ(G). (b)
If H is a spanning subgraph of G, then ϑ(H) ≥ ϑ(G).
Exercise 10.6.5 Show that about the optimal orthogonal
√ representation (ui , c)
in Proposition 10.2.10, one can require that cT ui = 1/ ϑ(G) for all nodes i.
Exercise 10.6.6 Let G be a graph and v ∈ V . (a) ϑ(G − v) ≥ ϑ(G) − 1. (b) If
v is an isolated node, then equality holds. (c) If v is adjacent to all other nodes,
then ϑ(G − v) = ϑ(G).
Exercise 10.6.7 Let G = (V, E)∑
be a graph and let V = S1 ∪ · · · ∪ Sk be a
partition of V . (a) Then ϑ(G) ≤ i ϑ(G[Si ]). (b) If no edge connects nodes in
different sets Si , then equality holds. (c) Suppose that any two nodes in different
sets Si are adjacent. How can ϑ(G) be expressed in terms of the ϑ(G[Si ])?
Exercise 10.6.8 Every graph G = (V, E) has a node j such that (with the
notation of (10.22))
∑
(ϑ(G) − 1)2 ≤
ϑ(Gi )2 .
i∈V
Exercise 10.6.9 For a graph G, let t(G) denote that radius of the smallest sphere
(in any dimension) on which a given graph G can be drawn (so that the euclidean
)
distance between adjacent nodes is 1. Prove that t(G)2 = 21 1 − 1/ϑ(G) .
10.6. ALGORITHMIC APPLICATIONS
Exercise 10.6.10 (a) Show that any stable set S provides a feasible solution of
dual program in (10.12). (b) Show that any k-coloring of G provides a feasible
solution of the primal program in (10.12). (c) Give a new proof of the Sandwich
Theorem 10.2.1 based on (a) and (b).
Exercise 10.6.11 Prove that ϑ(G) is the maximum of the largest eigenvalues of
matrices (viT vj ), taken over all orthonormal representations (vi ) of G.
Exercise 10.6.12 (Alon and Kahale) Let G be a graph, v ∈ V , and let H be
obtained from G by removing v and all its neighbors. Prove that
√
ϑ(G) ≤ 1 + |V (H)|ϑ(H).
Exercise 10.6.13 The fractional chromatic number χ∗ (G) is defined as the least
t for which there exists a family (Aj : j = 1, .∑
. . , p) of stable∑sets in G, and
nonnegative weights (τj : j = 1, . . . , p) such that j τj = t and j τj 1Aj ≥ 1V .
(a) Prove that χ∗ (G) is equal ∑
to the largest s∑for which there exist nonnegative
weights (σi : i ∈ V ) such that i σi = s and i∈A σi ≤ 1 for every stable set A.
(b) Prove that ϑ(G) ≤ χ∗ (G) ≤ χ(G).
Exercise 10.6.14 Let G = (V, E) be a graph and let S ⊆ V . Then
ϑ(G) ≤ ϑ(G[S]) + ϑ(G[V \ S]).
149
150 CHAPTER 10. ORTHOGONAL REPRESENTATIONS I: THE THETA FUNCTION
Chapter 11
Orthogonal representations II:
minimum dimension
Perhaps the most natural way to be “economic” in constructing an orthogonal representation
is to minimize the dimension. We can say only a little about the minimum dimension of all
orthogonal representations, but we get interesting results if we impose some “non-degeneracy”
conditions. We will study three nondegeneracy conditions: general position, faithfulness, and
the strong Arnold property.
11.1
Minimum dimension with no restrictions
Let dmin (G) denote the minimum dimension in which G has an orthonormal representation.
The following facts are not hard to prove.
Lemma 11.1.1 For every graph G,
ϑ(G) ≤ dmin (G),
and
c log χ(G) ≤ dmin (G) ≤ χ(G).
Proof. To prove the first inequality, let u : V (G) → Rd be an orthonormal representation
of G in dimension d = dmin (G). Then i 7→ ui ◦ ui is another orthonormal representation; this
is in higher dimension, but ha the good “handle”
1
c = √ (e1 ◦ e1 + · · · + ed ◦ ed ),
d
for which we have
1 ∑ T 2
1
1
cT (ui ◦ ui ) = √
(ej ui ) = √ |ui |2 = √ ,
d j=1
d
d
d
151
152 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
which proves that ϑ(G) ≤ d.
The second inequality follows from Theorem 3.4.3.
The following inequality follows by the same tensor product construction as Lemma 10.2.2:
dmin (G H) ≤ dmin (G)dmin (H).
(11.1)
It follows by Lemma 11.1.1 that we can use dmin (G) as an upper bound on Θ(G); however,
it would not be better than ϑ(G). On the other hand, if we consider orthogonal representations over fields of finite characteristic, the dimension may be a better bound on the Shannon
capacity than ϑ [106, 10].
11.2
General position and connectivity
The first non-degeneracy condition we study is general position: we assume that any d of the
representing vectors in Rd are linearly independent.
A result of Lov´
asz, Saks and Schrijver [167] finds an exact condition for this type of
geometric representability.
Theorem 11.2.1 A graph with n nodes has a general position orthogonal representation in
Rd if and only if it is (n − d)-connected.
The condition that the given set of representing vectors is in general position is not
easy to check (it is NP-hard). A weaker, but very useful condition will be that the vectors
representing the nodes nonadjacent to any node v are linearly independent. We say that
such a representation is in locally general position. To exclude some trivial complications,
we assume in this case that all the representing vectors are nonzero (or equivalently, unit
vectors).
Theorem 11.2.1 is proved in the following slightly more general form:
Theorem 11.2.2 If G is a graph with n nodes, then the following are equivalent:
(i) G has a general position orthogonal representation in Rd ;
(ii) G has a locally general position orthogonal representation in Rd .
(iii) G is (n − d)-connected;
We describe the proof in a couple of installments. First, to illustrate the connection
between connectivity and orthogonal representations, we prove that (ii)⇒(iii). Let V0 be a
cutset of nodes of G, then V = V0 ∪ V1 ∪ V2 , where V1 , V2 ̸= ∅, and no edge connects V1 and
V2 . This implies that the vectors representing V1 are linearly independent, and similarly, the
vectors representing V2 are linearly independent. Since the vectors representing V1 and V2
11.2. GENERAL POSITION AND CONNECTIVITY
153
are mutually orthogonal, all vectors representing V1 ∪ V2 are linearly independent. Hence
d ≥ |V1 ∪ V2 | = n − |V0 |, and so |V0 | ≥ n − d.
Since the implication (i)⇒(ii) is trivial. the difficult part of the proof will be the construction of a general position orthogonal representation for (n − d)-connected graphs, and
we describe and analyze the algorithm constructing the representation first. As a matter of
fact, we describe a construction that is quite easy, the difficulty lies the proof of its validity.
11.2.1
G-orthogonalization
The procedure can be viewed as a certain extension of the Gram-Schmidt orthogonalization
algorithm1 . Let G = (V, E) be any simple graph, where V = [n], and let u : V → Rd be
any vector labeling. Let us choose vectors f1 , f2 , . . . consecutively as follows. Let f1 = u1 .
Suppose that fi (1 ≤ i ≤ j) are already chosen, then we define fj+1 as the orthogonal
projection of uj+1 onto the subspace Lj+1 = {fi : i ∈ [j] \ N (j + 1)}⊥ . We call the sequence
(f1 , . . . , fn ) the G-orthogonalization of the vector sequence (u1 , . . . , un ). If E(G) = ∅, then
(f1 , . . . , fn ) is just the Gram–Schmidt orthogonalization of (u1 , . . . , un ).
Lemma 11.2.3 For every j ∈ [n], lin{u1 , . . . , uj } = lin{f1 , . . . , fj }.
Proof. The proof is straightforward by induction on i
From now on, we assume that the vectors u1 , . . . , un are chosen independently and uniformly from S d .
Lemma 11.2.4 If every node of G has degree at least n − d, then f1 , . . . , fn are nonzero
vectors almost surely.
Proof. There are at most d − 1 vectors fi (i ∈ [j] \ N (j + 1)), hence they don’t span Rd ,
and hence almost surely fj+1 is linearly independent of them, and then its projection onto
Lj+1 is nonzero.
The main fact we want to prove is the following.
Lemma 11.2.5 If G is (n − d)-connected, then the vectors f1 , . . . , fn are in general position
almost surely.
The random sequential representation may depend on the initial ordering σ of the nodes.
We denote by f σ the random sequential representation obtained by processing the nodes in
the order given by σ, and by µσ , the distribution of f σ .
Let us consider a simple example.
1I
am grateful to Robert Weismantel for pointing this out.
154 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
Example 11.2.6 Let G have four nodes a, b, c and d and two edges ac and bd. Consider a
random sequential representation f in R3 , associated with the given ordering. Since every
node has one neighbor and two non-neighbors, we can always find a vector orthogonal to
the earlier non-neighbors. The vectors fb and fb are orthogonal, and fc is constrained to the
plane fb⊥ ; almost surely, fc will not be parallel to fa , so together they span the plane f ⊥ . This
means that fd , which must be orthogonal to fa and fc , must be parallel to fb .
Now suppose that we choose fc and fd in the opposite order: then fd will almost surely
not be parallel to fb , but fc will be forced to be parallel with fa . So not only are the
two distributions µ(a,b,c,d) and µ(a,b,d,c) different, but an event, namely fb ∥fd , occurs with
probability 0 in one and probability 1 in the other.
Let us modify this example by connecting a and c by an edge. Then the planes fa⊥ and
fb⊥ are not orthogonal any more (almost surely). Choosing fc ∈ fb⊥ first still determines the
direction of fd , but now it does not have to be parallel to fb ; in fact, depending on fc , it can
ba any unit vector in fa⊥ . The distributions µ(a,b,c,d) and µ(a,b,d,c) are still different, but any
event that occurs with probability 0 in one will also occur with probability 0 in the other
(this is not obvious; see Exercise 11.4.12 below).
This example motivates the following considerations. The distribution of a random sequential orthogonal representation may depend on the initial ordering of the nodes. The key
to the proof will be that this dependence is not too strong. We say that two probability
measures µ and ν on the same sigma-algebra S are mutually absolute continuous, if for any
measurable subset A of S, µ(A) = 0 if and only if ν(A) = 0. The crucial step in the prof is
the following lemma.
Lemma 11.2.7 If G is (n − d)-connected, then for any two orderings σ and τ of V , the
distributions µσ and µτ are mutually absolute continuous.
Before proving this lemma, we have to state and prove a simple technical fact. For a
subspace A ⊆ Rd , we denote by A⊥ its orthogonal complement. We need the elementary
relations (A⊥ )⊥ = A and (A + B)⊥ = A⊥ + B ⊥ .
Lemma 11.2.8 Let A, B and C be mutually orthogonal linear subspaces of Rd with
dim(C) ≥ 2. Select a unit vector a1 uniformly from A + C, and then select a unit vector b1 uniformly from (B + C) ∩ a⊥
1 . Also, select a unit vector b2 uniformly from B + C, and
then select a unit vector a2 uniformly from (A + C) ∩ b⊥
2 . Then the distributions of (a1 , b1 )
and (a2 , b2 ) are mutually absolute continuous.
Proof. Let r = dim(A), s = dim(B) and t = dim(C). The special case when r = 0 or s = 0
is trivial. Suppose that r, s ≥ 1 (by hypothesis, t ≥ 2).
Observe that a unit vector a in A + C can be written uniquely in the form (cos θ)x +
(sin θ)y, where x is a unit vector in A, y is a unit vector in C, and θ ∈ [0, π/2]. Uniform
11.2. GENERAL POSITION AND CONNECTIVITY
155
selection of a means independent uniform selection of x and y, and an independent selection
of θ from a distribution ζr,t that depends only on s and t. Using that s, t ≥ 1, it is easy to
see that ζs,t is mutually absolute continuous with respect to the uniform distribution on the
interval [0, π/2].
So the pair (a1 , b1 ) can be generated through five independent choices: a uniform unit
vector x1 ∈ A, a uniform unit vector z1 ∈ B, a pair of orthogonal unit vectors (y1 , y2 )
selected from C (uniformly over all such pairs: this is possible since dim(C) ≥ 2), and two
numbers θ1 selected according to ζs,t and θ2 is selected according to ζr,t−1 . The distribution
of (a2 , b2 ) is described similarly except that θ2 is selected according to ζr,t and θ1 is selected
according to ζs,t−1 .
Since t ≥ 2, the distributions ζr,t and ζr,t−1 are mutually absolute continuous and similarly, ζs,t and ζs,t−1 are mutually absolute continuous, from which we deduce that the distributions of (a1 , b1 ) and (a2 , b2 ) are mutually absolute continuous.
Next, we prove our main Lemma.
Proof of Lemma 11.2.7. It suffices to prove that if τ is the ordering obtained from σ by
swapping the nodes in positions j and j + 1 (1 ≤ j ≤ n − 1), then µσ an µτ are mutually
absolute continuous. Let us label the nodes so that σ = (1, 2, . . . , n).
Let f and g be random sequential representations from the distributions µσ and µτ . It
suffices to prove that the distributions of f1 , . . . , fj+1 and g1 , . . . , gj+1 are mutually absolute
continuous, since conditioned on any given assignment of vectors to [j + 1], the remaining
vectors fk and gk are generated in the same way, and hence the distributions µσ and µτ are
identical. Also, note that the distributions of f1 , . . . , fj−1 and g1 , . . . , gj−1 are identical.
We have several cases to consider.
Case 1. j and j + 1 are adjacent. When conditioned on f1 , . . . , fj−1 , the vectors fj and
fj+1 are independently chosen, so it does not mater in which order they are selected.
Case 2. j and j + 1 are not adjacent, but they are joined by a path that lies entirely in
[j + 1]. Let P be a shortest such path and t be its length (number of edges), so 2 ≤ t ≤ j.
We argue by induction on j and t. Let i be any internal node of P . We swap j and j + 1 by
the following steps (Figure 11.1):
(1) Interchange i and j, by successive adjacent swaps among the first j elements.
(2) Swap i and j + 1.
(3) Interchange j + 1 and j, by successive adjacent swaps among the first j elements.
(4) Swap j and i.
(5) Interchange j + 1 and i, by successive adjacent swaps among the first j elements.
In each step, the new and the previous distributions of random sequential representations
are mutually absolute continuous: in steps (1), (3) and (5) this is so because the swaps take
156 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
Figure 11.1: Interchanging j and j + 1.
place place among the first j nodes, and in steps (2) and (4), because the nodes swapped are
at a smaller distance than t in the graph distance.
Case 3. There is no path connecting j to j + 1 in [j + 1]. Let x1 , . . . , xj−1 any selection
of vectors for the first j − 1 nodes. It suffices to show that the distributions of (fj , fj+1 ) and
(gj , gj+1 ), conditioned on fi = gi = xi (i = 1, . . . , j − 1), are mutually absolute continuous.
Let J = [j − 1], U0 = N (j) ∩ J and U1 = N (j + 1) ∩ J. Then J has a partition W0 ∪ W1 so
that U0 ⊆ W0 , U1 ⊆ W1 , and there is no edge between W0 and W1 . Furthermore, it follows
that V \ [j + 1] is a cutset, whence n − j − 1 ≥ n − d and so j ≤ d − 1.
For S ⊆ J, let lin(S) denote the linear subspace of Rd generated by the vectors xi , i ∈ S.
Let
L = lin(J),
L0 = lin(J \ U0 ),
L1 = lin(J \ U1 ).
Then fj is selected uniformly from the unit sphere in L⊥
0 , and then fj+1 is selected uniformly
⊥
.
On
the
other
hand,
gj+1 is selected uniformly from the
from the unit sphere in L⊥
∩
f
1
j
⊥
⊥
unit sphere in L⊥
1 , and then gj is selected uniformly from the unit sphere in L0 ∩ gj+1 .
⊥
⊥
Let A = L ∩ L⊥
0 , B = L ∩ L1 and C = L . We claim that A, B and C are mutually
orthogonal. It is clear that A ⊆ C ⊥ = L, so A ⊥ C, and similarly, B ⊥ C. Furthermore,
⊥
⊥
⊥
L0 ⊇ lin(W1 ), and hence L⊥
0 ⊆ lin(W1 ) . So A = L ∩ L0 ⊆ L ∩ lin(W1 ) = lin(W0 ). Since,
similarly, B ⊆ lin(W1 ), it follows that A ⊥ B.
Furthermore, we have L0 ⊆ L, and hence L0 and L⊥ are orthogonal subspaces. This
⊥ ⊥
⊥
⊥
⊥
implies that (L0 +L⊥ )∩L = L0 , and hence L⊥
0 = (L0 +L ) +L = (L0 ∩L)+L = A+C.
⊥
It follows similarly that L1 = B + C.
Finally, notice that dim(C) = d − dim(L) ≥ d − |J| ≥ 2. So Lemma 11.2.8 applies, which
completes the proof.
Proof of Lemma 11.2.5. Observe that the probability that the first d vectors in a sequential
representation are linearly dependent is 0. This event has then probability 0 in any other
sequential representation. Since we can start the ordering with any d-tuple of nodes, it follows
that the probability that the representation is not in general position is 0.
11.3. FAITHFUL ORTHOGONAL REPRESENTATIONS
157
Proof of Theorem 11.2.2. We have seen the implications (i)⇒(ii) and (ii)⇒(iii), and
Lemma 11.2.5 implies that (iii)⇒(i).
11.3
Faithful orthogonal representations
An orthogonal representation is faithful if different nodes are represented by non-parallel
vectors and adjacent nodes are represented by non-orthogonal vectors.
We do not know how to determine the minimum dimension of a faithful orthogonal representation. It was proved by Maehara and R¨odl [176] that if the maximum degree of the
complementary graph G of a graph G is D, then G has a faithful orthogonal representation in
2D dimensions. They conjectured that the bound on the dimension can be improved to D+1.
We show how to obtain their result from the results in Section 11.2, and that the conjecture
is true if we strengthen its assumption by requiring that G is sufficiently connected.
Theorem 11.3.1 Every (n − d)-connected graph on n nodes has a faithful general position
orthogonal representation in Rd .
Proof.
It suffices to show that in a random sequential orthogonal representation, the
probability of the event that two nodes are represented by parallel vectors, or two adjacent
nodes are represented by orthogonal vectors, is 0. By the Lemma 11.2.7, it suffices to prove
this for the representation obtained from an ordering starting with these two nodes. But
then the assertion is obvious.
Using the elementary fact that a graph with minimum degree n − D − 1 is at least
(n − 2D)-connected, we get the result of Maehara and R¨odl:
Corollary 11.3.2 If the maximum degree of the complementary graph G of a graph G is D,
then G has a faithful orthogonal representation in 2D dimensions.
11.4
Strong Arnold Property and treewidth
Strong Arnold Property. Consider an orthogonal representation i 7→ vi ∈ Rd of a graph
G. We can view this as a point in the Rd|V | , satisfying the quadratic equations
viT vj = 0
(ij ∈ E).
(11.2)
Each of these equation defines a hypersurface in Rd|V | . We say that the orthogonal representation i 7→ vi has the Strong Arnold Property if the surfaces (11.2) intersect transversally
at this point. This means that their normal vectors are linearly independent.
This can be rephrased in more explicit terms as follows. For each nonadjacent pair
T
i, j ∈ V , form the d × V matrix V ij = eT
i vj + ej vi . Then the Strong Arnold Property says
that the matrices V ij are linearly independent.
158 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
Another way of saying this is that there is no symmetric V × V matrix X ̸= 0 such that
Xij = 0 if i = j or ij ∈ E, and
∑
Xij vj = 0
(11.3)
j
for every node i. Since (11.3) means a linear dependence between the non-neighbors of i,
every orthogonal representation of a graph G in locally general position has the Strong Arnold
Property. This shows that the Strong Arnold Property can be thought of as some sort of
symmetrized version of locally general position. But the two conditions are not equivalent,
as the following example shows.
Example 11.4.1 (Triangular grid) Consider the graph ∆3 obtained by attaching a triangle on each edge of a triangle (Figure 11.2). This graph has an orthogonal representation (vi : i = 1, . . . , 6) in R3 : v1 , v2 and v3 are mutually orthogonal unit vectors, and
v4 = v1 + v2 , v5 = v1 + v3 , and v6 = v2 + v3 .
This representation is not in locally general position, since the nodes non-adjacent to
(say) node 1 are represented by linearly dependent vectors. But this representation satisfies
the Strong Arnold Property. Suppose that a symmetric matrix X satisfies (11.3). Since node
4 is adjacent to all the other nodes except node 3, X4,j = 0 for j ̸= 3, and therefore case
i = 4 of (11.3) implies that X4,3 = 0. By symmetry, X3,4 = 0, and hence case i = 3 of (11.3)
simplifies to X3,1 v1 + X3,2 v2 = 0, which implies that X3,j = 0 for all J. Going on similarly,
we get that all entries of X must be zero.
Figure 11.2: The graph ∆3 with an orthogonal representation that has the strong
Arnold property but is not in locally general position.
Algebraic width. Based on this definition, Colin de Verdi`ere [50] introduced an interesting
graph invariant related to connectivity (this is different from the better known Colin de
Verdi`ere number in Chapter 12). Let d be the smallest dimension in which G has a faithful
orthogonal representation with the Strong Arnold Property, and define walg (G) = n − d. We
call walg (G) the algebraic width of the graph (the name refers to its connection with tree
width, see below).
This definition is meaningful, since it is easy to construct a faithful orthogonal representation in Rn (where n = |V (G)|), in which the representing vectors are almost orthogonal and
hence linearly independent, which implies that the Strong Arnold Property is also satisfied.
11.4. STRONG ARNOLD PROPERTY AND TREEWIDTH
159
This definition can be rephrased in terms of matrices: consider a matrix N ∈ RV ×V with
the following properties:
{
= 0 if ij ∈
/ E, i ̸= j;,
(N1) Nij
̸= 0 if ij ∈ E.
(N2) N is positive semidefinite;
(N3) [Strong Arnold Property] If X is a symmetric n × n matrix such that Xij = 0
whenever i = j or ij ∈ E, and N X = 0, then X = 0.
Lemma 11.4.2 The algebraic width of a graph G is the maximum corank of a matrix with
properties (N1)–(N3).
The proof is quite straightforward, and is left to the reader.
Example 11.4.3 (Complete graphs) The Strong Arnold Property is void for complete
graphs, and every representation is orthogonal, so we can use the same vector to represent
every node. This shows that walg (Kn ) = n − 1. For every noncomplete graph G, walg (G) ≤
n − 2, since a faithful orthogonal representation requires at least two dimensions.
Example 11.4.4 (Edgeless graphs) To have a faithful orthogonal representation, all representing vectors must be mutually orthogonal, hence walg (K n ) = 0. It is not hard to see
that every other graph G has walg (G) ≥ 1.
Example 11.4.5 (Paths) Every matrix N satisfying (N1) has an (n − 1) × (n − 1) nonsingular submatrix, and hence by Lemma 11.4.2, walg (Pn ) ≤ 1. Since Pn is connected, we know
that equality holds here.
Example 11.4.6 (Triangular grid II) To see a more interesting example, let us have a
new look at the graph ∆3 in Figure 11.2 (Example 11.4.1). Nodes 1, 2 and 3 must be
represented by mutually orthogonal vectors, hence every faithful orthogonal representation
of ∆3 must have dimension at least 3. On the other hand, we have seen that ∆3 has a
faithful orthogonal representation with the Strong Arnold Property in R3 . It follows that
walg (∆3 ) = 3.
We continue with some easy bounds on the algebraic width. The condition that the
representation must be faithful implies that the vectors representing a largest stable set of
nodes must be mutually orthogonal, and hence the dimension of the representation is at least
α(G). This implies that
walg (G) ≤ n − α(G) = τ (G).
(11.4)
By Theorem 11.2.2, every k-connected graph G has a faithful general position orthogonal
representation in Rn−k , and hence
walg (G) ≥ κ(G).
(11.5)
160 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
( )
We may also use the Strong Arnold Property. There are n2 − m orthogonality conditions,
and in an optimal representation they involve (n−walg (G))n variables. If their normal vectors
( )
are linearly independent, then n2 − m ≤ (n − walg (G))n, and hence
walg (G) ≤
n+1 m
+ .
2
n
(11.6)
The most important consequence of the Strong Arnold Property is the following.
Lemma 11.4.7 The graph parameter walg (G) is minor-monotone.
Proof. Let H be a minor of G; we show that walg (H) ≤ walg (G). It suffices to do so in
three cases: when H arises from G by deleting an edge, by deleting an isolated node, or by
contracting an edge. We consider a faithful orthogonal representation u of H in dimension
d = |V (H)| − walg (H), with the Strong Arnold property, and modify it to get a faithful
orthogonal representation of G.
The case when H arises by deleting an isolated node i is trivial: Representing i by the
(d + 1)-st unit vector, we get a representation of G in dimension d + 1. It is straightforward
to see that this faithful representation of G satisfies the Strong Arnold property. Hence
walg (G) ≥ |V (G)| − (d + 1) = walg (H).
Suppose that H = G − ab, where ab ∈ E(G). By the Strong Arnold property, for a
sufficiently small ε > 0, the system of equations
(ij ∈ E(H) \ {ab})
xT
i xj = 0
xT
a xb = ε
(11.7)
(11.8)
has a solution in xi ∈ Rd , i ∈ V (G), which also has the Strong Arnold property. This is a
faithful orthogonal representation of G, showing that walg (G) ≥ |V (G)| − d = walg (H).
The most difficult case is when H arises from G by contracting an edge ab to a new node
ab. [To be written.]
The parameter has other nice properties, of which the following will be relevant:
Lemma 11.4.8 Let G be a graph, and let B ⊆ V (G) induce a complete subgraph. Let
G1 , . . . , Gk be the connected components of G \ B, and let Hi be the subgraph induced by
V (Gi ) ∪ B. Then
walg (G) = max walg (Hi ).
i
Proof. [To be written.]
Algebraic width, tree-width and connectivity. The monotone connectivity κmon (G) of
a graph G is defined as the maximum connectivity of any minor of G.
11.4. STRONG ARNOLD PROPERTY AND TREEWIDTH
161
Tree-width is a parameter related to connectivity, introduced by Robertson and Seymour
[202] as an important element in their graph minor theory (see [164] for a survey). Colin de
Verdi`ere [50] defines a closely related parameter, which we call the product-width wprod (G)
of a graph G. This is the smallest positive integer r for which G is a minor of a Cartesian
sum Kr T , where T is a tree.
The difference between the tree-width wtree (G) and product-width wprod (G) of a graph
G is at most 1:
wtree (G) ≤ wprod (G) ≤ wtree (G) + 1.
(11.9)
The lower bound was proved by Colin de Verdi`ere, the upper, by van der Holst [117]. It
is easy to see that κmon (G) ≤ wtree (G) ≤ wprod (G). The parameter wprod (G) is clearly
minor-monotone.
The algebraic width is sandwiched between two of these parameters:
Theorem 11.4.9 For every graph G,
κmon (G) ≤ walg (G) ≤ wprod (G).
The upper bound was proved by Colin de Verdi`ere [50], while the lower bound will follow
easily from the results in Section 11.2.
Proof.
Since walg (G) is minor-monotone, (11.5) implies the first inequality. To prove the
second, we use again the minor-monotonicity of walg (G), which implies that it suffices to
prove that
walg (Kr ⊕ T ) ≤ r
(11.10)
for any r ≥ 1 and tree T . Note that if T has at least two nodes, then walg (Kr ⊕ T ) ≥
walg (Kr+1 ) = r follows by the minor monotonicity of r.
We use induction on |V (T )|. If T is a single node, then Kr ⊕ T is a complete r-graph and
the assertion is trivial.
Suppose that T has two nodes. We want to prove then that Kr ⊕ K2 has no faithful
orthogonal representation in Rr−1 (the Strong Arnold Property is not needed here). Suppose
that i 7→ vi is such a representation. Let {a1 , . . . , ar } and {b1 , . . . , br } be the two r-cliques
corresponding to the two nodes of T , where ai is adjacent to bi . Then vbi is orthogonal to
every vector vaj with j ̸= i, but not orthogonal to vai . This implies that vai is linearly
independent of the vectors {vaj : j ̸= i}. Since this holds for every j, we get that the set
of vectors {vai : i = 1, . . . , r} is linearly independent. But this is impossible, since these
vectors are all contained in Rr−1 .
If T has more than two nodes, then it has an internal node v, and the clique corresponding
to this node separates the graph. So (11.10) follows by Lemma 11.4.8 and induction.
162 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
For small values, equality holds in Theorem 11.4.9; this was proved by Van der Holst [117]
and Kotlov [140].
Proposition 11.4.10 If walg (G) ≤ 2, then κmon (G) = walg (G) = wprod (G).
Proof. It is clear that
walg (G) = 1 ⇒ κmon (G) = 1 ⇒ G is a forest ⇒ wprod (G) = 1 ⇒ walg (G) = 1.
Suppose that walg (G) = 2, then trivially κmon (G) = 2. Furthermore, neither K4 nor the
graph ∆3 in Example 11.4.1 is a minor of G, since walg (K4 ) = walg (∆3 ) = 3. In particular,
G is series-parallel.
A planar graph has κmon (G) ≤ 5 (since every simple minor of it has a node with degree
at most 5). Since planar graphs can have arbitrarily large treewidth, the lower bound in
Theorem 11.4.9 can be very far from equality. The following example of Kotlov [141] shows
that in general, equality does not hold in the upper bound in Theorem 11.4.9 either. (For
the proof, we refer to the paper.) It is not known whether wprod (G) can be bounded from
above by some function walg (G)2 .
Example 11.4.11 The k-cube Qk has walg (Qk ) = O(2k/2 ) but wprod (Qk ) = Θ(2k ). The
complete graph K2m+1 is a minor of Q2m+1 (see Exercise 11.4.13).
Exercise 11.4.12 Let Σ and ∆ be two planes in R3 that are neither parallel nor
weakly orthogonal. Select a unit vector a1 uniformly from Σ, and a unit vector
b1 ∈ ∆ ∩ a⊥
1 . Let the unit vectors a2 and b2 be defined similarly, but selecting
b2 ∈ ∆ first (uniformly over all unit vectors), and then a2 from Σ ∩ b⊥
2 . Prove
that the distributions of (a1 , b1 ) and (a2 , b2 ) are different, but mutually absolute
continuous.
Exercise 11.4.13 (a) For every bipartite graph G, the graph G K2 is a minor
of GC4 . (b) K2m+1 is a minor of Q2m+1 .
11.5
The variety of orthogonal representations
In this section we take a global view of all orthogonal representations of a graph. Our
considerations will be easier to follow if we describe everything in terms of the complementary
graph. Note that G is (n−d)-connected if and only if G does not contain a complete bipartite
graph on d + 1 nodes (i.e., a complete bipartite graph Ka,b with a, b ≥ 1, a + b ≥ d + 1).
Let G = (V, E) be a simple graph on n nodes, which are labeled 1, . . . , n. Every vector
labeling i 7→ vi ∈ Rd can be considered as a point in Rdn , and orthogonal representations of
G in Rd form an algebraic variety ORd (G) ⊆ Rdn , defined by the equations
uT
i uj = 0 (ij ∈ E).
(11.11)
11.5. THE VARIETY OF ORTHOGONAL REPRESENTATIONS
163
We could consider orthonormal representations, whose variety would be defined by the equations
uT
i ui = 1 (i ∈ V ),
uT
i uj = 0 (ij ∈ E).
(11.12)
We note that the set of orthogonal representations of G that are not in general position
also forms an algebraic variety: we only have to add to (11.11) the equation
∏
det(ui1 , . . . , uid ) = 0.
(11.13)
1≤i1 <···<id ≤n
Similarly, the set of orthogonal representations that are not in locally general position form
a variety. The set GORd (G) of general position orthogonal representations of G in Rd , as
well as the set LORd (G) of locally general position orthogonal representations of G, are
semialgebraic.
11.5.1
Examples
Let us a look at some examples showing the complications (or interesting features, depending
on your point of view) of the variety of orthogonal representations of a graph.
Example 11.5.1 Two nodes.
Example 11.5.2 Let Q denote the graph of the (ordinary) 3-dimensional cube. Note that
Q is bipartite; let U = {u1 , u2 , u3 , u4 } and W = {v1 , v2 , v3 , v4 } be its color classes. The
indices are chosen so that ui is opposite to vi .
Since Q contains neither K1,4 nor K2,3 , its complement Q has a general position orthogonal representation in R4 . This is in fact easy to construct. Choose any four linearly
independent vectors x1 , . . . , x4 to represent u1 , . . . , u4 , then the vector yi representing wi is
uniquely determined (up to sign) by the condition that it has to be orthogonal to the three
vectors xj , j ̸= i. It follows from Theorem 11.2.2 that for almost all choices of x1 , . . . , x4 ,
the orthogonal representation obtained this way is in general position.
We show that every general position orthogonal representation satisfies a certain algebraic
equation. Let x′i denote the orthogonal projection of xi onto the first two coordinates, and
let yi′ denote the orthogonal projection of yi onto the third and fourth coordinate. We claim
that
det(x′1 , x′3 ) det(x′2 , x′4 )
det(y1′ , y3′ ) det(y2′ , y4′ )
=
.
det(x′1 , x′4 ) det(x′2 , x′3 )
det(y1′ , y4′ ) det(y2′ , y3′ )
(11.14)
Before proving (11.14), we remark that in a geometric language, this equation says that
the cross ratio (x′1 : x′2 : x′3 : x′4 ) is equal to the cross ratio of (y1′ : y2′ : y3′ : y4′ ). (The
cross ratio (v1 : v2 : v3 : v4 ) of four vectors v1 , v2 , v3 , v4 in a 2-dimensional subspace
164 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
can be defined as follows: we write v3 = λ1 v1 + λ2 v2 and v4 = µ1 v1 + µ2 v2 , and define
(v1 : v2 : v3 : v4 ) = (λ1 µ2 )/(λ2 µ1 ). This number is invariant under linear transformations.)
To prove (11.14), we ignore (temporarily) the condition on the length of the vectors
T
xi , and rescale them so that xT
i yi = 1 for i = 1, . . . , 4. This can be done, since xi yi ̸=
0 (else, x1 , . . . , x4 would all be contained in the 3-space orthogonal to yi , contradicting
the assumption that the representation is in general position). This means that the 4 × 4
matrices X = (x1 , x2 , x3 , x4 ) and Y = (y1 , y2 , y3 , y4 ) satisfy X T Y = I. By the well-known
relationship between the subdeterminants of inverse matrices, we have
det(x′1 , x′3 ) = det(X) det(y2′ , y4′ ),
and similarly for the other determinants in (11.14). Substituting these values, we get (11.14)
after simplification.
Now if we scale back the vectors xi to get an orthogonal representation, the numerator
and denominator on the left hand side of (11.14) are scaled by the same factor, and so the
relationship remains. This completes the proof of (11.14).
There is another type of orthogonal representation, in which the elements of U are represented by vectors x1 , . . . , x4 in a linear subspace L of R4 and the elements of W are
represented by vectors y1 , . . . , y4 in the orthogonal complement L⊥ of L. Clearly, this is an
orthogonal representation for any choice of the representing vectors in these subspaces. As a
special case, we can choose 4 vectors xi in the plane spanned by the first two coordinates, and
4 vectors yi in the plane spanned by the third and fourth coordinates. Since these choices
are independent, equation 11.14 will not hold in general.
It follows from these arguments that the equation
(
det(x1 , x2 , x3 , x4 ) det(x′1 , x′3 ) det(x′2 , x′4 ) det(y1′ , y4′ ) det(y2′ , y3′ )
)
− det(x′1 , x′4 ) det(x′2 , x′3 ) det(y2′ , y4′ ) det(y1′ , y3′ ) = 0
is satisfied by every orthogonal representation, but the first factor is nonzero for the orthogonal representations in general position, while the second is nonzero for some of those of the
second type. So the variety ORd (Q) is reducible.
11.5.2
Creating orthogonal representations
To any vector labeling x : V → Rd , the G-orthogonalization procedure described in Section
11.2.1 assigns an orthogonal representation ΦG x : V → Rd . The mapping ΦG : Rdn → Rdn
is well-defined, and it is a retraction: if x itself is an orthogonal representation, then ΦG x = x.
In particular, the range of ΦG is just ORd (G). The map ΦG is not necessarily continuous,
as Example 11.5.1 shows.
Suppose that all degrees of G are at most d−1. Then we know that G has an orthonormal
representation in Rd ; in fact, every orthogonal representation of an induced subgraph of G
11.5. THE VARIETY OF ORTHOGONAL REPRESENTATIONS
165
can be extended to an orthogonal representation of G, just using the sequential construction
in the proof of Theorem 11.2.2.
Next, suppose that G does not contain a complete bipartite graph with d + 1 nodes; in
other words, G is (n − d)-connected. Let x : V → Rd be a vector labeling such that the
orthogonal representation y = ΦG x is in locally general position (we know that almost all
vector labelings are like this). Then y can be expressed as
∑
yj = xj −
αi yi ,
(11.15)
i∈N (j)∩[j]
where the coefficients αi are determined by the orthogonality conditions:
∑
αi yiT yk = xT
(k ∈ N (j) ∩ [j]).
j yk
(11.16)
i∈N (j)∩[j]
The determinant of this linear system is nonzero by the assumption that y is in (locally)
general position, and so we can imagine that we solve it by by Cramer’s rule. We see that
the numbers αi , and hence also the entries of yj , can be expressed as rational functions of
the coordinates of the vectors yi (i ∈ N (j) ∩ [j]) and xj ; and so, recursively, as rational
functions of the coordinates of the vectors xi (i ≤ j).
We can modify this procedure so that we get polynomials, by multiplying with the denominator in Cramer’s rule. To be more precise, we define the vectors zj ∈ Rd recursively
by setting N (J) ∩ [j] = {i1 , . . . , ir } and
zi1
...
zir
xj T
zi zi1 . . . zT
zT
i1 zir
i1 x j 1
zj = .
.
(11.17)
.
.
..
.. ..
zT zi . . . zT zi zT xj ir 1
ir r
ir
This way we get a vector labeling z = ΨG x, where each entry of each zj is a polynomial
in the coordinates of x. Furthermore, zj is a positive real multiple of yj (using that y is
in general position). The map ΨG : x 7→ z is defined for all vector labelings x (not only
almost all) in a natural way, and z is an orthogonal representation for every x ∈ RdV , and it
is in [locally] general position whenever y is in [locally] general position. (This holds almost
everywhere in x.)
Suppose that we scale one of the vectors xj by a scalar λj ̸= 0. From the orthogonalization
procedure we see that yi does not change for i ̸= j, and yj is scaled by λj . Trivially, the
vectors zi don’t change for i < j, and zj will be scaled by the λj . The vectors zi with i > j
change by appropriate factors that are powers of λj . It follows that if we want to scale each
zj by some given scalar λj ̸= 0, then we can scale x1 , . . . , xn one by one to achieve that.
As a consequence, we see that the range Rng(ΨG ) is closed under rescaling each vector
independently by a non-zero scalar. We have seen that every orthogonal representation is
a fixed point of ΦG , and so Rng(ΦG ) = OR(G). For every orthogonal representation x in
166 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
locally general position, ΨG x arises from x = ΦG x by scaling, and hence by the remarks
above, x is also in the range of ΨG . So
LOR(G) ⊆ Rng(ΨG ) ⊆ OR(G) = Rng(ΦG ).
11.5.3
Irreducibility
One may wonder whether various algebraic properties of these varieties have any combinatorial significance. In this section we address only one question, namely the irreducibility of
ORd (G), which does have some interesting combinatorial meaning.
A set A ⊆ RN is reducible if there exist two polynomials p, q ∈ R[x1 , . . . , xN ] such that
their product pq vanishes on A, but neither p nor q vanishes on A; equivalently, the polynomial
ideal {p : p vanishes on A} is not a prime ideal. (It is easy to see that instead of two
polynomials, we could use any number k ≥ 2 of them in this definition.) If an algebraic
variety A is reducible, then it is the union of two algebraic varieties that are both proper
subvarieties, and therefore they can be considered “simpler”.
Proposition 11.5.3 If G contains a complete bipartite graph with d+1 nodes, then ORd (G)
is reducible.
Proof. In this case G has no general position orthogonal representation by Theorem 11.2.1.
Hence equation (11.13) holds on ORd (G). On the other hand, none of the equations
det(vi1 , . . . , vid ) = 0
(1 ≤ i1 < · · · < id ≤ n)
holds on ORd (G), because we can construct a sequential orthogonal representation starting
with the nodes i1 , . . . , id , and in this, the vectors vi1 , . . . , vid will be linearly independent. From now on, we assume that G contains no complete bipartite graph on d + 1 nodes.
Since this condition does imply a lot about orthogonal representations of G in Rd by Theorem
11.2.1, we may expect that ORd (G) will be irreducible in this case. This is, however, false
in general.
Let us return to the variety of orthogonal representations. We know that G has a general
position orthogonal representation in Rd . One may suspect that more is true: perhaps every
orthogonal representation in Rd is the limit of orthogonal representations in general position.
Example 11.5.2 shows that this is not true in general.
One reason for asking the question whether GORd (G) is dense in ORd (G) is that in this
case we can answer the question of irreducibility of ORd (G). Let us begin with the question
of irreducibility of the set GORd (G) of general position orthogonal representations of G. This
can be settled quite easily.
Theorem 11.5.4 Let G be a graph containing no complete bipartite subgraph on d+1 nodes.
Then GORd (G) is irreducible.
11.5. THE VARIETY OF ORTHOGONAL REPRESENTATIONS
167
Proof. First we show that there exist vectors pv = pv (X) ∈ R[X]d (v ∈ V ) whose entries
are multivariate polynomials with real coefficients in variables X1 , X2 , . . . (the number of
these variables does not matter) such that whenever u and v are adjacent then pT
u pv is
identically 0, and such that every general position representation of G arises from p by
substitution for the variables Xi . We do this by induction on n.
Let v ∈ V . Suppose that the vectors of polynomials pu (X ′ ) of dimension d exist for all
u ∈ V −{v} satisfying the requirements above for the graph G−v. Let x′ = (xu : u ∈ V −v).
Let N (v) = {u1 , . . . , um }; clearly m ≤ d − 1. Let z1 , . . . , zd−1−m be vectors of length d
composed of new variables and let y be another new variable; X will consist of X ′ and these
new variables. Consider the d × (d − 1) matrix
F = (pu1 , . . . , pum , z1 , . . . , zd−1−m ),
and let pj be (−1)j times the determinant of the submatrix obtained by dropping the j-th
row. Then we define pv = (yp1 , . . . , ypd )T ∈ R[X].
It is obvious from the construction and elementary linear algebra that pv is orthogonal to
the vectors pui (i ≤ m). We show by induction on n that every general position orthogonal
representation of G can be obtained from p by substitution. In fact, let x be any general
position orthogonal representation of G. Then x′ = x|V −v is a general position orthogonal
representation of G−v in Rd , and hence by the induction hypothesis, x′ can be obtained from
p by substituting for the variables X ′ . The vectors xu1 , . . . , xum are linearly independent
and orthogonal to xv ; let a1 , . . . , ad−1−m be vectors completing the system {xu1 , . . . , xum }
T
to a basis of x⊥
v . Substitute zi = ai . Then the vector (p1 , . . . , pd ) will become a non-zero
vector parallel to xv and hence y can be chosen so that pv (X) will be equal to xv .
Note that from the fact that x(X) is in general position for some substitution it follows
that the set of substitutions for which it is in general position is everywhere dense.
From here the proof is quite easy. Let f and g be two polynomials such that f g vanishes
on GORd (G). Then f (p(X))g(p(X)) vanishes on an everywhere dense set of substitutions
for X, and hence it vanishes identically. So either f (p(X)) or g(p(X)) vanishes identically;
say the first occurs. Since every general position orthogonal representation of G arises from
p by substitution, it follows that p vanishes on GORd (G).
The proof above in fact gives more.
Corollary 11.5.5 Every locally general position orthogonal representation of G can be obtained from p by substitution. Hence LORd (G) is irreducible, and GORd (G) is dense in
LORd (G).
Now let us return to the (perhaps more natural) question of the irreducibility of ORd (G).
Our goal is to prove the following.
168 CHAPTER 11. ORTHOGONAL REPRESENTATIONS II: MINIMUM DIMENSION
Theorem 11.5.6 Let G be a graph on n nodes containing no complete bipartite subgraph on
d + 1 nodes. (a) If d ≤ 3, then GORd (G) is dense in ORd (G). (b) If d = 4, then GORd (G)
is dense in ORd (G) unless G has a connected component that is the 3-cube.
A description of graphs for which ORd (G) is irreducible is open for d ≥ 5.
To give the proof, we need some definitions and lemmas. Let us say that an orthogonal
representation of a graph G in Rd is special if it does not belong to the topological closure of
GORd (G). A graph G is special, if it does not contain a complete bipartite graph on d + 1
nodes, and G has a special orthogonal representation. A graph G is critical, if it is special,
but no proper induced subgraph of it is special.
If x is a special representation of G in Rd then, by definition, there exists an ε > 0 such
that if y is another orthogonal representation of G in Rd , and |x−y| < ε, then y is also special.
There must be linear dependencies among the vectors xv ; if ε is small enough then there will
be no new dependencies among the vectors yv , while some of the linear dependencies may
be broken. We say that an orthogonal representation x is locally freest if there exists an
ε > 0 such that for every orthogonal representation y with |x − y| < ε, and every subset
U ⊆ V , x|U is linearly dependent if and only if y|U is. Clearly every graph having a special
representation in Rd also has a locally freest special representation arbitrarily close to it.
Corollary 11.5.5 says in this language that a locally freest representation in LORd (G) is in
GORd (G).
If a graph has a special induced subgraph, then it is also special. (Indeed, we can extend
the special representation of the induced subgraph to an orthogonal representation of the
whole graph, which is clearly special.) A graph is special if and only if at least one component
of it is special. This implies that every critical graph is connected. Every special graph has
more than d nodes.
Let x be an orthogonal representation of G. For v ∈ V , define Av = lin{xu : u ∈ N (v)}.
Clearly xv ∈ A⊥
v.
Lemma 11.5.7 Let G be a critical graph, and let x be a locally freest special orthogonal
representation of G.
(a) The neighbors of any node are represented by linearly dependent vectors.
(b) For every node u, dim(Au ) ≤ d − 2 and dim(A⊥
u ) ≤ d − 2.
(c) If xu is linearly dependent on the set X = {xu : u ∈ S}, then A⊥
u ⊆ lin(X).
(d) Any two representing vectors are linearly independent.
(e) Every node of G has degree at least 3.
Proof. (a) Call a node good if its neighbors are represented by linearly independent vectors.
We want to prove that in a locally freest representation of a critical graph, either every node
11.5. THE VARIETY OF ORTHOGONAL REPRESENTATIONS
169
is “good” (then the representation is in locally general position and hence in general position)
or no node is “good” (then the representation is special).
Let x be a special orthogonal representation of G that has a “good” node i and a “bad”
node j. We may choose i and j so that they are non-adjacent; indeed, else G would contain
a complete bipartite graph on all nodes, which would imply that n ≤ d, which is impossible.
The representation x′ , obtained by restricting x to V (G − i), is an orthogonal representation of G − i, and hence by the criticality of G, there exists a general position orthogonal
representation y′ of G − i in Rd arbitrarily close to x′ . We can extend y′ to an orthogonal
representation y of G; since the neighbors of i are represented by linearly independent vectors
in x′ , we can choose yi so that it is arbitrarily close to xi if y′ is sufficiently close to x′ .
This extended y is locally freer than x: every ”good” node remains “good” (if x′ and
y′ are close enough), and (since any d nodes different from i are represented by linearly
independent vectors) j is “good” in y. This contradicts the assumption that x is locally
freest.
(b) is an immediate consequence of (a).
(c) Suppose not, then A⊥
u contains a vector b arbitrarily close to xu but not in X.
Replacing xu by b we obtain another orthogonal representation x′ of G. Moreover, b does
not depend linearly on the other vectors xv (v ∈ S), which contradicts the definition of locally
freest representations.
(d) By (b) and (c), every linearly dependent set of representing vectors has at least three
elements.
(e) By (d), any two neighbors of a node v are represented by linearly independent vectors,
while by (a), the vectors representing the neighbors are linearly dependent. This implies that
v has at least 3 neighbors.
Proof of Theorem 11.5.6. For d = 3, Lemma 11.5.7(e) implies that there are no critical
graphs, and hence no special graphs. So assume that d = 4. Then the same argument gives
that G is 3-regular.
Claim 1 G is bipartite.
Consider the subspaces Av defined above. Statement (b) of Lemma 11.5.7 says that
dim(Av ) ≤ 2 and (d) implies that dim(Av ) ≥ 2, so dim(Av ) = 2, and hence also dim(A⊥
v ) = 2.
By (d), any two vectors xu (u ∈ N (v)) generate Av , and thus the third one is dependent on
them. Thus (c) implies that for any two adjacent nodes u and v, Au = A⊥
v . We know that
G is connected, so fixing any node v, the rest of the nodes fall into two classes: those with
Au = Av and those with Au = A⊥
v . Moreover, any edge in G connects nodes in different
classes. Hence G is bipartite as claimed.
Let G have color classes U and W , where |U | = |W | = m. By the above argument,
{xi : i ∈ U } and {xi : i ∈ W } span orthogonal 2-dimensional subspaces L and L⊥ .
Let us try to replace xi by xi + yi, where

    yi ∈ L⊥ if i ∈ U,  and  yi ∈ L if i ∈ W    (11.18)

(so the modifying vector yi is orthogonal to xi). Let us assume that the vectors yi also satisfy the equations

    xTi yj + xTj yi = 0    for all ij ∈ E.    (11.19)
Then for every ij ∈ E, we have

    (xi + yi)T(xj + yj) = xTi xj + xTi yj + xTj yi + yTi yj = 0

(the last term vanishes because yi and yj lie in the orthogonal subspaces L⊥ and L),
so the modified vectors form an orthogonal representation of G.
One way to construct such vectors yi is the following: we take any linear map B : L → L⊥, and define

    yi = Bxi if i ∈ U,  and  yi = −BTxi if i ∈ W.    (11.20)
It is easy to check that these vectors satisfy (11.18) and (11.19). We claim that this is all
that can happen.
Claim 2 If the vectors yi ∈ R4 satisfy (11.18) and (11.19), then they can be obtained as in
(11.20).
The computations above yield that the vectors xi +εyi form an orthogonal representation
of G for every ε > 0, and for a sufficiently small ε, this representation is a locally freest
special representation. This implies that the sets {xi + εyi : i ∈ U} and {xi + εyi : i ∈ W}
span orthogonal 2-dimensional subspaces. Scaling the vectors yi by ε, we may assume that
ε = 1.
Let, say, 1, 2 ∈ U. The vectors x1 and x2 are linearly independent by Lemma 11.5.7(d),
and hence there is a linear map B : L → L⊥ such that y1 = Bx1 and y2 = Bx2 . The
vectors x1 + y1 and x2 + y2 are linearly independent as well, and hence for every i ∈ U we
have scalars α1 and α2 such that xi + yi = α1 (x1 + y1 ) + α2 (x2 + y2 ). Projecting to L, we
see that xi = α1 x1 + α2 x2 and similarly yi = α1 y1 + α2 y2 . It follows that
yi = α1 y1 + α2 y2 = α1 Bx1 + α2 Bx2 = B(α1 x1 + α2 x2 ) = Bxi .
By the same argument, we have a linear map C : L⊥ → L such that yj = Cxj for every
j ∈ W. Hence for i ∈ U and j ∈ W, we have

    0 = xTi yj + xTj yi = xTi Cxj + xTj Bxi = xTi (C + BT) xj.
Since the vectors xi span L and the vectors xj span L⊥ , this implies that C = −B T . This
proves the Claim.
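To make the construction (11.20) concrete, here is a minimal numerical sketch (my own illustration, assuming for instance L = span(e1, e2) and L⊥ = span(e3, e4) in R4, with randomly chosen vectors xi and map B), checking that the vectors it produces satisfy (11.18) and (11.19):

import numpy as np

rng = np.random.default_rng(0)

# Assume (for illustration) L = span(e1, e2), L-perp = span(e3, e4) in R^4.
PL = np.diag([1., 1., 0., 0.])     # orthogonal projection onto L
PLp = np.eye(4) - PL               # orthogonal projection onto L-perp

xU = [PL @ rng.normal(size=4) for _ in range(3)]    # x_i in L      (i in U)
xW = [PLp @ rng.normal(size=4) for _ in range(3)]   # x_j in L-perp (j in W)

# An arbitrary linear map B : L -> L-perp, encoded as a 4x4 matrix.
B = PLp @ rng.normal(size=(4, 4)) @ PL

yU = [B @ x for x in xU]           # y_i = B x_i       (11.20), i in U
yW = [-B.T @ x for x in xW]        # y_j = -B^T x_j    (11.20), j in W

# (11.18): y_i lies in L-perp for i in U, and in L for i in W.
assert all(np.allclose(PL @ y, 0) for y in yU)
assert all(np.allclose(PLp @ y, 0) for y in yW)

# (11.19): x_i^T y_j + x_j^T y_i = 0 for every pair i in U, j in W.
for xi, yi in zip(xU, yU):
    for xj, yj in zip(xW, yW):
        assert abs(xi @ yj + xj @ yi) < 1e-12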
Now it is easy to complete the proof. Equation (11.19), viewed as a system of linear
equations for the yi , has 4m unknowns (each yi has two free coordinates), and consists of
|E(G)| = 3m equations, so it has a solution set that is at least m-dimensional. By Claim
2, all these solutions can be represented by (11.20). This gives a 4-dimensional space of
solutions, so m ≤ 4. Thus G is a 3-regular bipartite graph on at most 8 nodes; it is easy
to see that this means that G = K3,3 or G = Q. The former is ruled out, since G must be
connected.
Remark 11.5.8 A common theme in connection with various representations is that imposing non-degeneracy conditions on the representation often makes it easier to analyze and
therefore more useful (basically, by eliminating the possibility of numerical coincidence).
There are at least 3 types of non-degeneracy conditions that are used in various geometric
representations; we illustrate the different possibilities by formulating them in the case of unit
distance representations in Rd . All three are easily extended to other kinds of representations.
The most natural non-degeneracy condition is faithfulness: we want to represent the
graph so that adjacent nodes and only those are at unit distance.
A representation is in general position if no d + 1 of the points are contained in a hyperplane. This condition (or its relaxations) pops up in several of our studies, for example,
in connection with rigidity of bar-and-joint structures and orthogonal representations.
Perhaps the deepest non-degeneracy notion is the Strong Arnold Property. Suppose that
we have a system of polynomial equations f1 (x) = 0, . . . , fm (x) = 0 in n variables, and
a solution x0 . Each of these equations defines a hypersurface in Rn , and x0 is a point of
the intersection of these surfaces. We say that x0 has the Strong Arnold Property, if the
hypersurfaces intersect transversally at x0 , which means that the gradients gradfi (x0 ) are
linearly independent.
This condition means that the intersection point is not just accidental, but is forced by
some more fundamental structure; for example, if the system has the Strong Arnold Property,
and we perturb each equation by a small amount, then we get another solvable system.
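As a toy illustration of transversality (my own example, not from the text): two unit circles in the plane, given as zero sets of two polynomials, intersect transversally when their centers are at distance strictly between 0 and 2, but only tangentially at distance exactly 2; the gradients behave accordingly.

import numpy as np

# gradient of f(x) = |x - c|^2 - 1 at p is 2(p - c)
def grads(c1, c2, p):
    return np.array([2 * (p - c1), 2 * (p - c2)])

# transversal: centers at distance 1, intersection point (1/2, sqrt(3)/2)
G1 = grads(np.zeros(2), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2]))
print(np.linalg.matrix_rank(G1))   # 2: gradients independent, Strong Arnold Property

# tangential: centers at distance 2, touching point (1, 0)
G2 = grads(np.zeros(2), np.array([2.0, 0.0]), np.array([1.0, 0.0]))
print(np.linalg.matrix_rank(G2))   # 1: gradients parallel, transversality fails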
We can write many of our conditions on the representation as a system of algebraic
equations (for example, unit distance representations or orthogonal representations). Colin de Verdière matrices with a fixed corank (discussed in Chapter 12) can also be thought of in this way. In all these cases, the Strong Arnold Property enables us to prove basic properties like minor-monotonicity, and thereby it is the key to making these quantities tractable.
11.6 And more...
11.6.1 Other inner product representations
So far, we have considered orthogonal representations, where adjacency or non-adjacency is
reflected by the vanishing of the inner product. We can impose other conditions on the inner
product, and indeed, several variations of this type have been studied.
The following representation was considered by Kotlov, Lovász and Vempala [143]. A
vector-labeling u : V → Rd of a graph G = (V, E) will be called huje, if
    uTi uj = 1 if ij ∈ E,    (11.21)
    uTi uj < 1 if ij ∉ E.    (11.22)
We say that a huje representation has the Strong Arnold Property if the hypersurfaces in Rdn defined by the equations (11.21) intersect transversally at u. This leads to a condition very similar to the one for orthogonal representations: the representation u has the Strong Arnold Property if and only if there is no symmetric V × V matrix X ≠ 0 such that Xij = 0 if i = j or ij ∈ E, and Σj Xij uj = 0 for every node i.
Example 11.6.1 (Regular polytopes) Let P be a polytope in Rd whose congruence
group is transitive on its edges. We may assume that 0 is its center of gravity. Then
there is a real number c such that uT v = c for the endpoints of every edge. We will assume
that c > 0 (this excludes only very simple cases, see Exercise 11.6.5). Then we may also
assume that c = 1. It is not hard to see that for any two nonadjacent vertices, we must have
√
uT v < c. Suppose not, then c ≤ uT v < |u| |v|, and so we may assume that |u| > c.
Let w be the vertex of P maximizing the linear function uT x. We claim that w = u.
Suppose not, then there is a path on the skeleton of P from u to w along which the linear
functional uT x is monotone increasing. Let z be the first vertex on this path after u, then
uTz > uTu > c, contradicting the fact that uz is an edge of P (which would force uTz = c).
Now there is a path on the skeleton of P from v to u along which the linear functional uTx is monotone increasing. Let y be the last vertex on this path before u; then uTy > uTv ≥ c, a contradiction again as before. Thus the vertices of the polytope provide a huje
representation of its skeleton.
For d = 3, this representation has the Strong Arnold Property: this is a direct consequence
of Cauchy’s Theorem 6.1.1.
Several other representations we have considered so far can be turned into huje representations (usually at some cost in the dimension).
Example 11.6.2 (Tangential polytopes) Let u and v be two vectors in Rd , and suppose
that the segment connecting them touches the unit sphere at a point w. Then u − w and
v − w are parallel vectors, orthogonal to w, and pointing in opposite directions. Their length
is easily computed by Pythagoras' Theorem. This gives

    uTv = 1 + (u − w)T(v − w) = 1 − √(|u|² − 1) · √(|v|² − 1).

We can write this as

    (u, √(|u|² − 1))T (v, √(|v|² − 1)) = 1,

where (u, t) denotes the vector in Rd+1 obtained by appending the coordinate t to u.
Let v : V → R3 be the cage representation in Theorem 7.2.1 of a planar graph G = (V, E).
Then the vectors

    ui = (vi, √(|vi|² − 1))

give a representation in R4 such that uTi uj = 1 for every edge ij ∈ E. It is not hard to check that uTi uj < 1 for every ij ∉ E.
This representation has the Strong Arnold Property by Cauchy’s Theorem 6.1.1.
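Here is a small numerical check of the identities in Example 11.6.2, on a hypothetical configuration: a segment from u to v touching the unit sphere at w.

import numpy as np

w = np.array([0.0, 0.0, 1.0])      # touching point, |w| = 1
t = np.array([1.0, 0.0, 0.0])      # direction of the segment, orthogonal to w
a, b = 1.7, 0.4
u, v = w + a * t, w - b * t        # tangent lengths: |u|^2 - 1 = a^2, |v|^2 - 1 = b^2

assert np.isclose(u @ v, 1 - np.sqrt(u @ u - 1) * np.sqrt(v @ v - 1))

# the lifted vectors (u, sqrt(|u|^2 - 1)) have inner product exactly 1:
U = np.append(u, np.sqrt(u @ u - 1))
W = np.append(v, np.sqrt(v @ v - 1))
assert np.isclose(U @ W, 1.0)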
Let us give a summary of some results about huje representations, without going into the
proofs.
Theorem 11.6.3 (a) If a graph has no twins and has a huje representation in R3 then it is planar. Every planar graph has a huje representation in R4.
(b) If a graph has no twins and has a huje representation in R2 then it is outerplanar.
Every outerplanar graph has a huje representation in R3 .
(c) Every graph has a subdivision that has a huje representation in R4 .
A related representation was studied by Kang, Lovász, Müller and Scheinerman [131]: they impose the conditions

    uTi uj ≥ 1 if ij ∈ E,    (11.23)
    uTi uj < 1 if ij ∉ E.    (11.24)
The results are similar to those described above: every outerplanar graph has a representation in R3 satisfying (11.23)–(11.24), and every planar graph has such a representation in R4. There are outerplanar graphs that don't have such a representation in R2, and there are planar graphs that don't have such a representation in R3.
11.6.2 Unit distance representation
Given a graph G, how can we represent it by unit distances? To be precise, a unit distance
representation of a graph G = (V, E) is a mapping u : V → Rd for some d ≥ 1 such that
|ui − uj | = 1 for every ij ∈ E (we allow that |ui − uj | = 1 for some nonadjacent nodes i, j).
Every finite graph has a unit distance representation in a sufficiently high dimension. (For
example, we can map its nodes onto the vertices of a regular simplex with edges of length
1.) The first problem that comes to mind, first raised by Erdős, Harary and Tutte [78], is to find the minimum dimension dim(G) in which a graph has a unit distance representation. Figure 11.3 shows a 2-dimensional unit distance representation of the Petersen graph [78]. It turns out that this minimum dimension is linked to the chromatic number of the graph:

    lg χ(G) ≤ dim(G) ≤ 2χ(G).
The lower bound follows from the results of Larman and Rogers [146] mentioned above; the
upper bound follows from a modification of Lenz’s construction.
Figure 11.3: A unit distance representation of the Petersen graph.
There are many other senses in which we may want to find the most economical unit
distance representation. We only discuss one: what is the smallest radius of a ball containing
a unit distance representation of G (in any dimension)? Considering the Gram matrix A =
(uTi uj), it is easy to obtain the following reduction to semidefinite programming (see Appendix 15.38):
Proposition 11.6.4 A graph G has a unit distance representation in a ball of radius R (in
some appropriately high dimension) if and only if there exists a positive semidefinite matrix
A such that

    Aii ≤ R²    (i ∈ V),
    Aii − 2Aij + Ajj = 1    (ij ∈ E).
In other words, the smallest radius R is the square root of the optimum value of the semidefinite program

    minimize   w
    subject to A ≽ 0,
               Aii ≤ w    (i ∈ V),
               Aii − 2Aij + Ajj = 1    (ij ∈ E).
The unit distance embedding of the Petersen graph in Figure 11.3 is not an optimal solution of this problem. Let us illustrate how semidefinite optimization can find the optimal
embedding by determining this for the Petersen graph. In the formulation above, we have
to find a 10 × 10 positive semidefinite matrix A satisfying the given linear constraints. For a
given w, the set of feasible solutions is convex, and it is invariant under the automorphisms of
the Petersen graph. Hence there is an optimum solution which is invariant under these automorphisms (in the sense that if we permute the rows and columns by the same automorphism
of the Petersen graph, we get back the same matrix).
Now we know that the Petersen graph has a very rich automorphism group: not only
can we transform every node into every other node, but also every edge into every other
edge, and every nonadjacent pair of nodes into every other non-adjacent pair of nodes. A
matrix invariant under these automorphisms has only 3 different entries: one number in
the diagonal, another number in positions corresponding to edges, and a third number in
positions corresponding to nonadjacent pairs of nodes. This means that this optimal matrix
A can be written as
A = xP + yJ + zI,
where P is the adjacency matrix of the Petersen graph, J is the all-1 matrix, and I is the
identity matrix. So we only have these 3 unknowns x, y and z to determine.
The linear conditions above are now easily translated into the variables x, y, z. But what
to do with the condition that A is positive semidefinite? Luckily, the eigenvalues of A can
also be expressed in terms of x, y, z. The eigenvalues of P are well known (and easy to
compute): they are 3, 1 (5 times) and -2 (4 times). Here 3 is the degree, and it corresponds
to the eigenvector 1 = (1, . . . , 1). This is also an eigenvector of J (with eigenvalue 10), and
so are the other eigenvectors of P , since they are orthogonal to 1, and so are in the nullspace
of J. Thus the eigenvalues of xP + yJ are 3x + 10y, x, and −2x. Adding zI just shifts the
spectrum by z, so the eigenvalues of A are 3x + 10y + z, x + z, and −2x + z. Thus the positive
semidefiniteness of A, together with the linear constraints above, gives the following linear
program for x, y, z, w:
    minimize   w
    subject to 3x + 10y + z ≥ 0,
               x + z ≥ 0,
               −2x + z ≥ 0,
               y + z ≤ w,
               2z − 2x = 1.
It is easy to solve this: x = −1/4, y = 1/20, z = 1/4, and w = 3/10. Thus the smallest radius of a ball in which the Petersen graph has a unit distance representation is √(3/10). The corresponding matrix A has rank 4, so this representation is in R4.
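These values are easy to verify numerically (a quick check, not part of the text):

import numpy as np
import networkx as nx

P = nx.to_numpy_array(nx.petersen_graph())     # adjacency matrix
x, y, z = -1/4, 1/20, 1/4
A = x * P + y * np.ones((10, 10)) + z * np.eye(10)

print(np.round(np.linalg.eigvalsh(A), 6))      # 0 (six times), 0.75 (four times)
print(np.linalg.matrix_rank(A))                # 4
assert np.allclose(np.diag(A), 3/10)           # w = 3/10 on the diagonal
i, j = next(iter(nx.petersen_graph().edges()))
assert np.isclose(A[i, i] - 2 * A[i, j] + A[j, j], 1.0)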
It would be difficult to draw a picture of this representation, but we can offer the following nice matrix, whose columns realize this representation (the center of the smallest ball containing it is not the origin!):

        ⎛ 1 1 1 1 0 0 0 0 0 0 ⎞
        ⎜ 1 0 0 0 1 1 1 0 0 0 ⎟
    1/2 ⎜ 0 1 0 0 1 0 0 1 1 0 ⎟    (11.25)
        ⎜ 0 0 1 0 0 1 0 1 0 1 ⎟
        ⎝ 0 0 0 1 0 0 1 0 1 1 ⎠
(This matrix reflects the fact that the Petersen graph is the complement of the line-graph of
K5 .)
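One can check this directly (a short verification sketch): indexing the columns by the 2-element subsets of {1, . . . , 5}, adjacent nodes of the Petersen graph correspond to disjoint pairs and are at distance exactly 1, and all columns lie on a sphere of radius √(3/10) around their centroid (1/5, . . . , 1/5).

import numpy as np
from itertools import combinations

pairs = list(combinations(range(5), 2))    # nodes of the Petersen graph
U = np.zeros((5, 10))
for k, (a, b) in enumerate(pairs):
    U[a, k] = U[b, k] = 0.5                # the matrix (11.25)

for k, l in combinations(range(10), 2):
    if set(pairs[k]).isdisjoint(pairs[l]):   # adjacency in the Petersen graph
        assert np.isclose(np.linalg.norm(U[:, k] - U[:, l]), 1.0)

c = U.mean(axis=1)                         # centroid = (1/5, ..., 1/5)
assert np.allclose(np.linalg.norm(U - c[:, None], axis=0), np.sqrt(3 / 10))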
Exercise 11.6.5 Determine all convex polytopes with an edge-transitive isometry group for which uT v ≤ 0 for every edge uv.
Chapter 12
The Colin de Verdière Number
In 1990, Colin de Verdière [49] introduced a spectral invariant µ(G) of a graph G (for a
survey, see [120]).
12.1 The definition
12.1.1 Motivation
Let G be a connected graph. We know that (by the Perron–Frobenius Theorem) the largest
eigenvalue of the adjacency matrix has multiplicity 1. What about the second largest?
The eigenvalues of most graphs are all different, so the multiplicity of any one of them carries information about the graph only in rare cases. Therefore, it makes sense to try to
maximize the multiplicity of the second largest eigenvalue by weighting the edges by positive
numbers. The diagonal entries of the adjacency matrix don't carry any information, so we allow arbitrary real numbers there. The off-diagonal matrix entries that correspond to
nonadjacent nodes remain 0.
There is a technical restriction, called the Strong Arnold Property, which excludes very degenerate choices of edge weights and diagonal entries. We'll discuss this condition later.
If we maximize the multiplicity of the largest eigenvalue this way, we get the number of
connected components of the graph. So for the second largest, we expect to get some parameter related to connectivity. However, the situation is more complex (and more interesting).
Since we don’t put any restriction on the diagonal entries, we may add a constant to
the diagonal entries to shift the spectrum so that the second largest eigenvalue becomes 0,
without changing its multiplicity. The multiplicity in question is then the corank of the
matrix (the dimension of its nullspace).
Finally, we multiply the matrix by −1, to follow convention.
This discussion hopefully makes most of the formal definition in the next section clear.
12.1.2 Formal definition
For a simple graph G = (V, E), we consider various sets of matrices generalizing its Laplacian. Let RV[2] denote the space of all symmetric V × V real matrices. Let MG denote the linear space of all matrices A ∈ RV[2] such that Aij = 0 for all distinct i and j with ij ∉ E. We also call these matrices G-matrices. Let M0G consist of those matrices A ∈ MG for which Aij < 0 for all ij ∈ E (these matrices form a convex cone). Let M1G consist of those matrices in M0G with exactly one negative eigenvalue (of multiplicity 1). Finally, let M2G consist of those matrices A ∈ M1G that have the Strong Arnold Property: whenever X is a symmetric V × V matrix such that Xij = 0 for i = j or ij ∈ E, and AX = 0, then X = 0.
The Colin de Verdière number µ(G) of the graph G is defined as the maximum corank of any matrix in M2G. We call every matrix in M2G a Colin de Verdière matrix of the graph G.
The Strong Arnold Property, also called transversality, needs some illumination. It means
some type of non-degeneracy. (This property depends seemingly both on the graph G and the
matrix A; but since A determines the graph G uniquely, we can speak of the Strong Arnold
Property of any symmetric matrix.) To give a (hopefully) more transparent formulation of
this property, we consider the manifold Rk of all matrices in RV[2] with rank k. We need the
following lemma describing the tangent space of Rk and its orthogonal complement.
Lemma 12.1.1 (a) The tangent space TA of Rk at A ∈ Rk consists of all matrices of the
form AU + U T A, where U is a V × V matrix. Note that U is not necessarily symmetric, but
AU + U T A is.
(b) The normal space NA of Rk at A consists of all symmetric V × V matrices X such
that AX = 0.
Proof. (a) Suppose that Y = AU + U T A. Consider the family
A(t) = (I + tU )T A(I + tU ).
Clearly rk(A(t)) ≤ rk(A) = k and equality holds if |t| is small enough. Furthermore, A′ (0) =
AU + U T A = Y . Hence Y ∈ TA .
Conversely, let Y ∈ TA . Then there is a one-parameter differentiable family A(t) of
symmetric matrices, defined in a neighborhood of t = 0, so that A(t) ∈ Rk , A(0) = A and
A′ (0) = Y . Let S(t) denote the matrix of the orthogonal projection onto the nullspace of
A(t), and set S = S(0). By definition, we have A(t)S(t) = 0, and hence by differentiation,
we get A′ (0)S(0) + A(0)S ′ (0) = 0, or Y S + AS ′ (0) = 0. Multiplying by S from the left, we
get SY S = 0. So we can write
    Y = (1/2)((I − S)Y(I + S) + (I + S)Y(I − S)).
Notice that I − S is the orthogonal projection onto the range of A, and hence we can write I − S = AW with some matrix W. By transposition, we have I − S = WTA. Then

    Y = (1/2) AWY(I + S) + (1/2)(I + S)YWTA = AU + UTA,

where U = (1/2) WY(I + S).
(b) We have X ∈ NA if and only if X · Y = 0 for every Y ∈ TA. By (a), this means that X · (AU + UTA) = 0 for every U. This can be written as

    0 = Tr(X(AU + UTA)) = Tr(XAU) + Tr(XUTA) = Tr(XAU) + Tr(AUX) = 2 Tr(XAU).

This holds for every U if and only if XA = 0.
Figure 12.1: The Strong Arnold Property
Using this description, we can easily prove the following lemma giving a geometric meaning to the Strong Arnold Property. We say that two manifolds M1 , M2 ⊆ Rn intersect
transversally at a point x ∈ M1 ∩ M2 if their normal subspaces at this point intersect in the
0 space. This is equivalent to saying that their tangent spaces span Rn .
Lemma 12.1.2 For a simple graph G, a matrix A ∈ M1G has the Strong Arnold Property if
and only if the manifolds Rk (k = rk(A)) and MG intersect transversally at A.
Proof. The condition Xij = 0 for all ij ∈ E ∪ ∆ (where ∆ denotes the diagonal pairs) means that X is orthogonal to MG. Moreover, using some elementary linear algebra and differential geometry one can easily show that AX = 0 is equivalent to saying that X is orthogonal to TA, the tangent space of Rk at A.
Hence the Strong Arnold Property is equivalent to requiring that TA and MG span the
space of all symmetric V × V matrices.
We note that every matrix A with corank at most 1 automatically satisfies the Strong
Arnold Property. Indeed, if AX = 0, then X has rank at most 1, but since X has 0’s in the
diagonal, this implies that X = 0.
12.1.3 Examples
Example 12.1.3 It is clear that µ(K1 ) = 0. We have µ(G) > 0 for every graph with at
least two nodes. Indeed, we can put “generic” numbers in the diagonal of −AG to make all
the eigenvalues different, and then we can subtract the second smallest eigenvalue from all
diagonal entries to get one negative and one 0 eigenvalue. The Strong Arnold Property, as
remarked at the end of the previous section, holds automatically.
Example 12.1.4 For a complete graph Kn with n > 1 nodes, it is easy to guess −J as a Colin de Verdière matrix. It is trivial that −J ∈ M2Kn. The corank of −J is n − 1, and one cannot beat this, since at least one eigenvalue must be negative. Thus µ(Kn) = n − 1.
Example 12.1.5 Next, let us consider the graph K̄n consisting of n ≥ 2 isolated nodes. All entries of a Colin de Verdière matrix M except for the entries in the diagonal must be 0. We must have exactly one negative entry in the diagonal. Trying to minimize the rank, we would like to put 0's in the rest of the diagonal, getting corank n − 1. But it is here where the Strong Arnold Property appears: we can put at most one 0 in the diagonal! In fact, assuming that M1,1 = M2,2 = 0, consider the matrix X with

    Xi,j = 1 if {i, j} = {1, 2},  and  Xi,j = 0 otherwise.

Then X violates the Strong Arnold Property. So we must put n − 2 positive numbers in the diagonal, and we are left with a single 0. It is easy to check that the resulting matrix has the Strong Arnold Property, and hence µ(K̄n) = 1.
Example 12.1.6 Consider the path Pn on n ≥ 2 nodes. We may assume that these are
labelled {1, 2, . . . , n} in their order on the path. Consider any Colin de Verdi`ere matrix M
for Pn , and delete its first column and last row. The remaining matrix has negative numbers
in the diagonal and 0’s above the diagonal, and hence it is nonsingular. Thus the corank of
M is at most 1. We have seen that a corank of 1 can always be achieved. Thus µ(Pn ) = 1.
Example 12.1.7 Consider a complete bipartite graph Kp,q, where p ≤ q and q ≥ 2. In analogy with Kn, one can try to guess a Colin de Verdière matrix with low rank. The natural guess is

    M = ( 0    −J )
        ( −JT   0 ),

where J is the p × q all-1 matrix. This clearly belongs to M1Kp,q and has rank 2. But it turns out that this matrix violates the Strong Arnold Property unless p, q ≤ 3. In fact, a matrix X as in the definition of the Strong Arnold Property has the form

    X = ( Y  0 )
        ( 0  Z ),
where Y is a p × p symmetric matrix with 0’s in its diagonal and Z is a q × q symmetric
matrix with 0’s in its diagonal. The condition M X = 0 says that Y and Z have 0 row-sums.
It is easy to see that this implies that Y = Z = 0 if p, q ≤ 3, but not if q ≥ 4.
This establishes that µ(Kp,q) ≥ p + q − 2 if p, q ≤ 3, and Exercise 12.5.8 implies that equality holds here; but if q ≥ 4, then our guess for the matrix M realizing µ(Kp,q) does not work. We will see that in this case µ will be smaller (equal to min{p, q} + 1, in fact).
Remark 12.1.8 There is a quite surprising fact in this last example, which also underscores some of the difficulties associated with the study of µ. The graph K4,4 (say) has a node-transitive and edge-transitive automorphism group, and so one would expect that at least some optimizing matrix in the definition of µ will share these symmetries: it will have the same diagonal entries and the same nonzero off-diagonal entries. But this is not the case: it is easy to see that this would force us to consider the matrix we discarded above. So the optimizing matrix must break the symmetry!
12.1.4 Basic properties
Theorem 12.1.9 The Colin de Verdi`ere number is minor-monotone.
Proof. [To be written.]
Theorem 12.1.10 If µ(G) > 2, then µ(G) is invariant under subdivision.
Proof. [To be written.]
The following result was proved by Bacher and Colin de Verdière [23].
Theorem 12.1.11 If µ(G) > 3, then µ(G) is invariant under ∆-Y transformation.
Proof.
Lemma 12.1.12 Let M ∈ M1G , where G is a k-connected graph and rk(M ) = n − k. Then
every (n − k + 1) × (n − k + 1) principal minor of M has one negative eigenvalue.
Proof. It follows by interlacing eigenvalues that this principal minor M′ has at least two non-positive eigenvalues, but only one of these can be negative. If M′ had no negative eigenvalue, then its two smallest eigenvalues would both be 0, contradicting the Perron–Frobenius Theorem (which applies, since deleting k − 1 nodes from the k-connected graph G leaves a connected graph, so the corresponding shifted and negated matrix is irreducible).
12.2 Special values
Graphs with Colin de Verdière number up to 4 are characterized. To state the results, we
need some definitions. Every (finite) graph can be embedded in 3-space, but there are ways
to distinguish “simpler” embeddings.
First, we have to define when two disjoint Jordan curves J1 and J2 in R3 are “linked”.
There are several nonequivalent ways to define this. Perhaps the most direct (and let this serve as our definition) is that two curves are linked if there is no topological 2-sphere in R3 separating them (so that one curve is in the interior of the sphere, while the other is in its exterior). Being unlinked is equivalent to saying that we can continuously shrink either one of the curves to a single point without ever intersecting the other.
An alternative definition uses the notion of linking number. We consider an embedding of
the 2-disc in R3 with boundary J1, which intersects J2 a finite number of times, and always
transversally. Walking along J2 , we count the number of times the 2-disc is intersected from
one side to the other, and subtract the number of times it is intersected in the other direction.
If this linking number is 0, we say that the two curves are homologically unlinked. Reasonably
elementary topological arguments show that the details of this definition are correct: such a 2-disc exists, the linking number as defined is independent of the choice of the 2-disc, and whether or not two curves are homologically linked does not depend on which of them we
choose as J1 .
In a third version, we count all intersection points of J2 with the disc, and we only care about the parity of their number. If this parity is even, we say that the two curves are unlinked modulo 2.
If two curves are unlinked, then they are homologically unlinked, which in turn implies that they are unlinked modulo 2. (Neither of these implications can be reversed.)
An embedding of a graph G into R3 is called linkless if any two disjoint cycles in G are
unlinked. A graph G is linklessly embeddable if it has a linkless embedding in R3.
There are a number of equivalent characterizations of linklessly embeddable graphs. Call
an embedding of G flat if for each cycle C in G there is a disc D bounded by C (a ‘panel’)
whose interior is disjoint from G. Clearly, each flat embedding is linkless, but the converse
does not hold. However, if G has an embedding that is linkless in the weakest sense (any two
disjoint cycles are unlinked modulo 2), then it also has an embedding that is linkless in the
strongest sense (it is flat). So the class of linklessly embeddable graphs is very robust with respect to variations in the definition.
These facts follow from the work of Robertson, Seymour, and Thomas [206, 204], as a byproduct of their deep forbidden minor characterization of linklessly embeddable graphs.
Now we are able to state the following theorem characterizing small values of the Colin de Verdière number.
Theorem 12.2.1 Let G be a connected graph.
(a) µ(G) ≤ 1 if and only if G is a path;
(b) µ(G) ≤ 2 if and only if G is outerplanar;
(c) µ(G) ≤ 3 if and only if G is planar;
(d) µ(G) ≤ 4 if and only if G is linklessly embeddable.
Statements (a)–(c) were proved by Colin de Verdière [48]; an elementary proof is due to Hein van der Holst [116]; (d) is due to Lovász and Schrijver [168].
Graphs with large Colin de Verdière number, n − 4 and up, have been studied by Kotlov, Lovász and Vempala [143], but a full characterization is not available. The results are in a
sense analogous to those for small numbers, but they are stated for the complementary graph
(this is natural), and they are less complete (this is unfortunate). We only state two cases.
Theorem 12.2.2 Let G be a graph without twins, and let Ḡ denote its complement.
(a) If Ḡ is outerplanar, then µ(G) ≥ n − 4; if Ḡ is not outerplanar, then µ(G) ≤ n − 4;
(b) If Ḡ is planar, then µ(G) ≥ n − 5; if Ḡ is not planar, then µ(G) ≤ n − 5.
We are not going to give the full proof of either of these theorems, but will prove some
of the statements, and through this, try to show the techniques that are applied here. The
main tools will be two vector-labelings discussed in the next two sections.
12.3 Nullspace representation
Every G-matrix M defines a vector-labeling of the graph G in corank(M) dimensions as follows. We take a basis v1, . . . , vd of the null space of M and write them down as column vectors. Each node i of the graph corresponds to a row ui of the matrix obtained, which is a vector in Rd, and so we get the nullspace representation of the graph. Saying this in a fancier way, each
coordinate function is a linear functional on the nullspace, and so the nullspace representation
is a representation in the dual space of the nullspace.
The nullspace representation u satisfies the equations
    Σj Mij uj = 0    for every i ∈ V;    (12.1)
in other words, M defines a braced stress on (G, u). The nullspace representation is uniquely
determined up to a linear transformation of Rd. Furthermore, if D is a diagonal matrix with positive diagonal, then (D−1ui : i ∈ V) is a nullspace representation derived from the scaled version DMD of M (which in almost all cases we consider can replace M). If all representing vectors ui are nonzero, then a natural choice is Dii = |ui|: this scales the vectors ui to unit vectors, i.e., we get a mapping of V into the unit sphere. We will see that other
choices of this scaling give other nice representations.
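In code, the construction is a few lines (a sketch using numpy; the Laplacian of the path P3 serves as a toy G-matrix):

import numpy as np

def nullspace_representation(M, tol=1e-9):
    # rows of a nullspace basis of the symmetric matrix M: one vector per node
    w, V = np.linalg.eigh(M)
    return V[:, np.abs(w) < tol]     # row i is u_i, in R^(corank of M)

M = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])      # Laplacian of the path P3
U = nullspace_representation(M)
assert np.allclose(M @ U, 0)         # each row u_i satisfies (12.1)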
As a special case of this construction, any Colin de Verdière matrix yields a representation of the graph in µ(G)-space.
In this case we have Mij < 0 for every edge ij, and so writing the definition as

    Σ_{j∈N(i)} (−Mij) uj = Mii ui    for all i ∈ V,

we see that each vector ui is in the cone spanned by its neighbors, or in the negative of this cone (depending on the sign of Mii).
12.3.1 Nodal theorems and planarity
In this section, let us fix a connected graph G with µ(G) = µ, let M be a Colin de Verdière
matrix of G, and let u be a nullspace representation belonging to M . If H is a hyperplane,
then we denote by VH , VH+ and VH− the set of nodes represented by vectors on H, on one side
of H, and on the other side of H, respectively. (It does not matter at this point which side
defines VH+ or VH− .)
We start with an easy observation.
Lemma 12.3.1 Let H be any hyperplane through the origin, then N (VH+ ) ∩ VH = N (VH− ) ∩
VH .
Proof.
Let a ∈ VH . We know that some scalar multiple of ua is contained in the relative
interior of the convex hull of the vectors {ui : i ∈ N (a)}. Hence either {ui : i ∈ N (a)} ⊆ H
or {ui : i ∈ N (a)} intersects both sides of H.
We will denote the set N (VH+ ) ∩ VH = N (VH− ) ∩ VH by VH′ .
Van der Holst [116] proved a key lemma about Colin de Verdière matrices, which was
subsequently generalized [119, 169]. We state a generalization in a geometric form.
Lemma 12.3.2 Let G be a connected graph with µ(G) = µ, let M be a Colin de Verdière
matrix of G, and let u be a nullspace representation belonging to M . Let Σ be an open
halfspace in Rµ whose closure contains the origin. Then the subgraph G′ of G induced by
nodes represented by vectors in Σ is nonempty, and either
(i) G′ is connected, or
(ii) There is a linear subspace S contained in the boundary of Σ, of dimension µ − 2, and
there are half-hyperplanes T1 , T2 , T3 with boundary S, so that each node of G lies on ∪i Ti ,
and each edge of G is contained in one of the Ti . Furthermore, each node in S is connected
to nodes in every Ti , or else only to nodes in S. The halfspace Σ contains at least two of the
half-hyperplanes Ti .
(See Figure 12.2.)
Figure 12.2: Two possibilities for how the nullspace representation of a connected graph can be partitioned by a hyperplane through the origin.
Proof.
Corollary 12.3.3 Let G be a connected graph with µ(G) = µ, let M be a Colin de Verdière matrix of G, let u be a nullspace representation belonging to M, and let H be a hyperplane through 0 in Rµ. Then VH+ and VH− induce non-empty connected subgraphs, provided one of the following conditions holds:
(a) (Van der Holst's Lemma) The subspace H is spanned by some of the vectors ui.
(b) H does not contain any of the vectors ui .
Sometimes it is more useful to put this in algebraic terms.
Corollary 12.3.4 Let G be a connected graph with µ(G) = µ, let M be a Colin de Verdière matrix of G, and let x ∈ ker M. Then both supp+(x) and supp−(x) are nonempty and induce connected subgraphs of G, provided one of the following conditions holds:
(a) The support of x is minimal among all vectors in ker M .
(b) supp(x) = V .
12.3.2 Steinitz representations from Colin de Verdière matrices
For 3-connected planar graphs, the nullspace representation gives a Steinitz representation [169, 162]; in fact, Steinitz representations naturally correspond to Colin de Verdière matrices, as we will see in the next section. (See Exercises 12.5.11 and 12.5.12 for interesting scaling results in other, simpler cases.)
For this section, let G be a 3-connected planar graph, let M be a Colin de Verdière matrix of G with corank 3, and let (ui : i ∈ V) be the nullspace representation derived from M.
Since G is 3-connected, its embedding in the sphere is essentially unique, and so we can talk
about the countries of G.
We say that the Colin de Verdière matrix M of G is polyhedral if the vectors ui are the
vertices of a convex polytope P , and the map i 7→ ui is an isomorphism between G and the
skeleton of P . Note that the vectors ui are determined by M up to a linear transformation
of R3 , and the skeleton graph of this polytope does not depend on this linear transformation.
We start by showing that van der Holst's Lemma 12.3.2 holds in a stronger form in this
case.
Lemma 12.3.5 For every plane H through the origin, the induced subgraphs G[VH+ ] and
G[VH− ] are connected.
Proof. We have to exclude alternative (ii) in Lemma 12.3.2. Suppose that it occurs; then VH′ is a cutset, and so by 3-connectivity, |VH′| ≥ 3. Contract the subgraph in the relative interior of each half-hyperplane Ti to a single node; the graph remains planar. But the three new nodes, together with any 3 nodes in VH′, form a K3,3, which is impossible.
Next, we want to show that no node is represented by the 0 vector, and any two adjacent nodes are represented by non-parallel vectors. We prove a stronger lemma, which will be useful
later.
Lemma 12.3.6 If a, b, c are distinct nodes of the same face, then ua , ub and uc are linearly
independent.
Proof. Suppose not. Then there exists a plane H through the origin that contains ua , ub
and uc . Lemma 12.3.5 implies that the sets of nodes VH+ and VH− induce nonempty connected
subgraphs of G. We know that |VH | ≥ 3; clearly either VH′ = VH or VH′ is a cutset, and hence
|VH′| ≥ 3. Thus G has three pairwise node-disjoint paths, connecting a, b and c to distinct points of VH′. We may assume that the internal nodes of these paths are not contained in VH′,
and so they stay in VH . Contracting every edge on these paths as well as the edges induced
by VH+ and VH− , we obtain a planar graph in which a, b and c are still on one face, and they
have two common neighbors, which is clearly impossible.
This Lemma implies that each ui is nonzero, and hence we may scale them to unit vectors.
It also implies that ui and uj are non-parallel if i and j are adjacent, and so for adjacent
nodes i and j there exists a unique shortest geodesic on S 2 connecting ui and uj . This gives
an extension of the mapping i → ui to a mapping ψ : G → S 2 . We will show that this is in
fact an embedding.
The main result in this section is the following [162, 168]:
Theorem 12.3.7 Every Colin de Verdière matrix of a 3-connected planar graph G can be
scaled to be polyhedral.
Proof. The proof will consist of two parts: first we show that the nullspace representation
with unit vectors gives an embedding of the graph in the sphere; then we show how to rescale
these vectors to get a Steinitz representation.
Claim 1 Let F1 and F2 be two countries of G sharing an edge ab. Let H be the plane through
0, ua and ub . Then all the other nodes of F1 are in VH+ , and all the other nodes of F2 are in
VH− (or the other way around).
Suppose not; then there is a node c of F1 and a node d of F2 such that c, d ∈ VH+ (say). (Note that by Lemma 12.3.6, no node of F1 or F2 belongs to VH.) By Lemma 12.3.5, there is
a path P connecting c and d with V (P ) ⊆ VH+ . By Lemma 12.3.1, a and b have neighbors a′
and b′ in VH− , and by Lemma 12.3.5 again, there is a path connecting a′ and b′ contained in
VH− . This gives us a path Q connecting a and b, with at least one internal node, contained
in VH− (up to its endpoints).
What is important from this is that P and Q are disjoint. But trying to draw this in the
planar embedding of G, we see that this is impossible (see Figure 12.3).
Figure 12.3: Why two countries don’t fold.
Claim 1 implies the following.
Claim 2 If 1, . . . , k are the nodes of a face, in this cyclic order, then u1 , . . . , uk are the
extremal rays of the convex cone they generate in R3 , in this cyclic order.
Next we turn to the cyclic order around nodes.
Claim 3 Let a ∈ V , and let (say) 1, . . . , k be the nodes adjacent to a, in this cyclic order in
the planar embedding. Let H denote a plane through 0 and ua , and suppose that u1 ∈ VH+
and uk ∈ VH− . Then there is an h, 1 ≤ h ≤ k − 1 such that u1 , . . . , uh−1 ∈ VH+ and
uh+1 , . . . , uk ∈ VH− .
If the Claim fails to hold, then there are two indices 1 < i < j < k such that ui ∈ VH−
and uj ∈ VH+ . By Lemma 12.3.5, there is a path P connecting 1 and j in VH+ , and a path P ′
connecting i and k in VH− . In particular, P and P ′ are disjoint. But this is clearly impossible
in the planar drawing.
Claim 4 With the notation of Claim 3, the geodesics on S2 connecting ua to u1, . . . , uk issue from ua in this cyclic order.
Indeed, let us consider the halfplanes Ti bounded by the line ℓ through 0 and ua, and containing ui (i = 1, . . . , k). Claim 1 implies that Ti+1 is obtained from Ti by rotation about ℓ in the same direction. Furthermore, it cannot happen that these halfplanes Ti rotate about ℓ more than once as i runs through 1, . . . , k, 1. Indeed, in such a case any plane through ℓ not containing any of the ui would violate Claim 3.
This leads to our first goal:
Claim 5 The unit nullspace mapping is an embedding of G in the sphere. Furthermore, each
country f of the embedding is the intersection of a convex polyhedral cone Cf with the sphere.
Claims 2 and 4 imply that we can extend the map ψ : G → S 2 to a mapping ψ : S 2 →
S 2 such that ψ is a local homeomorphism (i.e., every point x ∈ S 2 has a neighborhood
that is mapped homeomorphically to a neighborhood of ψ(x).) This implies that ψ is a
homeomorphism (see Exercise 12.5.13).
As the second part of the proof of Theorem 12.3.7, we construct an appropriate scaling of a nullspace representation. The construction will start with the polar polytope. Let G∗ = (V∗, E∗) denote the dual graph of G.
Claim 6 We can assign a vector wf to each f ∈ V∗ so that whenever ij ∈ E and fg is the corresponding edge of G∗, then

    wf − wg = Mij (ui × uj).    (12.2)
Let vf g = Mij (ui × uj ). It suffices to show that the vectors vf g sum to 0 over the edges
of any cycle in G∗ . Since G∗ is a planar graph, it suffices to verify this for the facets of G∗ .
Expressing this in terms of the edges of G, it suffices to show that
    Σ_{j∈N(i)} Mij (ui × uj) = 0.

But this follows from the basic equation (12.1) upon multiplying by ui, taking into account that ui × ui = 0 and Mij = 0 for j ∉ N(i) ∪ {i}. This proves the Claim.
An immediate property of the vectors wf is that if f and g are two adjacent nodes of G∗ such that the corresponding facets of G share the node i ∈ V, then uTi wf = uTi wg. This follows immediately from (12.2), upon scalar multiplication by ui. Hence the inner product uTi wf is the same number αi for every facet f incident with i. In other words, these vectors wf all lie on the plane uTi x = αi.
Let P∗ be the convex hull of the vectors wf.
Claim 7 Let f ∈ V ∗ , and let u ∈ R3 , u ̸= 0. The vector wf maximizes uT x over P ∗ if and
only if u ∈ Cf .
First suppose that wf maximizes uT x over P ∗ . Let i1 , . . . , is be the nodes of the facet
of G corresponding to f (in this counterclockwise order). By Claim 2, the vectors uit are
precisely the extreme rays of the cone Cf .
Let fgt be the edge of G∗ corresponding to it it+1. Then uT(wf − wgt) ≥ 0 for t = 1, . . . , s, and hence by (12.2),

    uT Mit it+1 (uit+1 × uit) ≥ 0,

or, dividing by Mit it+1 < 0,

    uT (uit × uit+1) ≥ 0.

This means that u is on the same side of the plane through uit and uit+1 as Cf. Since this holds for every t, it follows that u ∈ Cf.
Conversely, let u ∈ Cf . Arbitrarily close to u we may find a vector u′ in the interior of
Cf. Let wg be a vertex of P∗ maximizing (u′)Tx over x ∈ P∗. Then by the first part of our Claim, u′ ∈ Cg. Since the cones Ch, h ∈ V∗ are openly disjoint, it follows that f = g. Thus
(u′ )T x is optimized over x ∈ P ∗ by wf . Since u′ may be arbitrarily close to u, it follows that
uT x too is optimized by wf . This proves Claim 7.
This claim immediately implies that every vector wf (f ∈ V ∗ ) is a vertex of P ∗ .
Claim 8 The vertices wf and wg form an edge of P ∗ if and only if f g ∈ E ∗ .
Let fg ∈ E∗, and let ij be the edge of G corresponding to fg ∈ E∗. Let u be a vector in the relative interior of the cone Cf ∩ Cg. Then by Claim 7, uTx is maximized by both wf and wg. No other vertex wh of P∗ maximizes uTx, since then we would have u ∈ Ch by Claim 7,
but this would contradict the assumption that u is in the relative interior of Cf ∩ Cg . Thus
wf and wg span an edge of P ∗ .
Conversely, assume that wf and wg span an edge of P ∗ . Then there is a non-zero vector
u such that uT x is maximized by wf and wg , but by no other vertex of P ∗ . By Claim 7, u
belongs to Cf ∩ Cg , but to no other cone Ch , and thus Cf and Cg must share a face, proving
that f g ∈ E ∗ . This completes the proof of Claim 8. As an immediate corollary, we get the
following.
Claim 9 Every vector wf is a vertex of P ∗ , and the map f 7→ wf is an isomorphism between
G∗ and the skeleton of P ∗ .
To complete the proof, let us note that the vectors wf are not uniquely determined by
(12.2): we can add the same (but otherwise arbitrary) vector to each of them. Thus we
may assume that the origin is in the interior of P ∗ . Then the polar P of P ∗ is well-defined,
and its skeleton is isomorphic to G. Furthermore, the vertices of P are orthogonal to the
corresponding facets of P ∗ , i.e., they are positive multiples of the vectors ui . Thus the vertex
of P representing node i is u′i = αi ui, where αi = uTi wf > 0 for any facet f incident with i.
The scaling of M constructed in this proof is almost canonical. There were only two
free steps: the choice of the basis in the nullspace of M , and the translation of P ∗ so that
the origin be in its interior. The first corresponds to a linear transformation of P . It is
not difficult to show that if this linear transformation is A, then the scaling constants are
multiplied by | det(A)|.
The second transformation is a bit more complicated. Translation of P∗ corresponds to a projective transformation of P. Suppose that instead of P∗, we take the polyhedron P∗ + w. Then the new scaling constant is αi′ = uTi(wf + w) = αi + uTi w. Note that here (uTi w : i ∈ V) is just a generic vector in the nullspace of M. Thus we get that in the above construction, the scaling constants αi are uniquely determined by M up to multiplication by the same positive number and addition of a vector in the nullspace of M.
Of course, there might be many other scalings that yield polyhedra. For example, if P is simplicial (every facet is a triangle), then any scaling "close" to the one constructed above would also yield a Steinitz representation.
12.3.3 Colin de Verdière matrices from Steinitz representations
Let P be any polytope in R3 , containing the origin in its interior. Let G = (V, E) be the
skeleton of P . Let P ∗ be its polar, and G∗ = (V ∗ , E ∗ ), the skeleton of P ∗ .
Let ui and uj be two adjacent vertices of P , and let wf and wg be the endpoints of the
corresponding edge of P ∗ . Then by the definition of polar, we have
    uTi wf = uTi wg = uTj wf = uTj wg = 1.
This implies that the vector wf − wg is orthogonal to both vectors ui and uj , and hence it
is parallel to the vector ui × uj . Thus we can write
wf − wg = Mij (ui × uj ),
where the labeling of wf and wg is chosen so that Mij < 0; this means that ui , uj and wg
form a right-handed basis.
Let i ∈ V, and consider the vector

    u′i = Σ_{j∈N(i)} Mij uj.
Then

    ui × u′i = Σ_{j∈N(i)} Mij (ui × uj) = Σ (wf − wg),

where the last sum extends over all edges fg of the facet of P∗ corresponding to i, oriented counterclockwise. Hence this sum vanishes, and we get that ui × u′i = 0.
Hence ui and u′i are parallel, and we can write
    u′i = −Mii ui    (12.3)
with some real Mii . We complete the definition of a matrix M = M (P ) by setting Mij = 0
whenever i and j are distinct non-adjacent nodes of G.
Theorem 12.3.8 The matrix M(P) is a Colin de Verdière matrix for the graph G.
Proof. It is obvious by construction that M has the right pattern of 0's and negative elements. (12.3) can be written as

    Σj Mij uj = 0,

which means that each coordinate of the ui defines a vector in the null space of M. Thus M has corank at least 3.
For an appropriately large constant C > 0, the matrix CI − M is non-negative and
irreducible. Hence it follows from the Perron-Frobenius Theorem that the smallest eigenvalue
of M has multiplicity 1. In particular, it cannot be 0 (which we know has multiplicity at
least 3), and so it is negative. Thus M has at least one negative eigenvalue.
The most difficult part of the proof is to show that M has only one negative eigenvalue.
Observe that if we start with any true Colin de Verdière matrix M of the graph G, and we
construct the polytope P from its null space representation as in Section 12.3.2, and then we
construct a matrix M (P ) from P , then we get back (up to positive scaling) the matrix M .
Thus there is at least one polytope P with the given skeleton for which M (P ) has exactly
one negative eigenvalue.
Steinitz proved that any two 3-dimensional polytopes with isomorphic skeletons can be
transformed into each other continuously through polytopes with the same skeleton (see
[201]). Each vertex moves continuously, and hence so does their center of gravity. So we can
translate each intermediate polytope so that the center of gravity of the vertices stays 0. Then
the polar of each intermediate polytope is well-defined, and also transforms continuously.
Furthermore, each vertex and each edge remains at a positive distance from the origin, and
therefore the entries of M (P ) change continuously. It follows that the eigenvalues of M (P )
change continuously.
Assume that M (P0 ) has more than 1 negative eigenvalue. Let us transform P0 into a
polytope P1 for which M (P1 ) has exactly one negative eigenvalue. There is a last polytope
Pt with at least 5 non-positive eigenvalues. By construction, each M (P ) has at least 3
eigenvalues that are 0, and we know that it has at least one that is negative. Hence it follows
that M (Pt ) has exactly four 0 eigenvalues and one negative. But this contradicts Proposition
6.1.6.
Next we show the Strong Arnold Property. Let X ∈ RV ×V be a symmetric matrix such
that M X = 0 and Xij = 0 for i = j and for ij ∈ E. Every column of X is in the nullspace of
M . We know that this nullspace is 3-dimensional, and hence it is spanned by the coordinate
vectors of the ui. This means that there are vectors hi ∈ R3 (i ∈ V) such that Xij = uTi hj.
We know that Xij = 0 if j = i or j ∈ N (i), and so hi must be orthogonal to ui and all its
neighbors. Since ui and its neighbors obviously span R3 , it follows that X = 0.
Finally, it is trivial by definition that M is polyhedral.
12.3.4 Further geometric properties
Let M be a polyhedral Colin de Verdière matrix of G, and let P be the corresponding convex 3-polytope. Let P∗ be the polar polytope with 1-skeleton G∗ = (V∗, E∗). The key relations between these objects are:

    Σj Mij uj = 0    for every i ∈ V,    (12.4)

and

    wf − wg = Mij (ui × uj)    for every ij ∈ E and corresponding fg ∈ E∗.    (12.5)
There are some consequences of these relations that are worth mentioning.
First, taking the vectorial product of (12.5) with wg, we get (wf − wg) × wg = wf × wg on the left hand side and

    (Mij (ui × uj)) × wg = Mij ((uTi wg) uj − (uTj wg) ui) = Mij (uj − ui)

on the right. Thus

    wf × wg = Mij (uj − ui).    (12.6)
The vector hi = (1/|ui |2 )ui is orthogonal to the face i of P ∗ and points to a point on the
plane of this face. Taking the scalar product of both sides of (12.6) with hi, we get

    hTi (wf × wg) = Mij (hTi uj − 1).
The left hand side is 6 times the volume of the tetrahedron spanned by hi , wf and wg .
Summing this over all neighbors j of i (equivalently, over all edges fg of the facet i of P∗), we get 6 times the volume Vi of the cone spanned by the facet i of P∗ and the origin. On the right hand side, notice that the expression is trivially 0 if j is not a neighbor of i (including the case j = i), and hence we get by (12.4)

    Σj Mij (hTi uj − 1) = hTi (Σj Mij uj) − Σj Mij = −Σj Mij.

Thus we get

    Σj Mij = −6Vi.    (12.7)
Note that by (12.4),

    Σi Vi ui = −(1/6) Σi (Σj Mij) ui = −(1/6) Σj (Σi Mij ui) = 0.
This is a well known fact, since Vi ui is just the area vector of the facet i of P ∗ .
(12.6) gives the following relation between the Colin de Verdière matrices of a 3-connected planar graph and its dual. Let M be a polyhedral Colin de Verdière matrix for a planar graph G = (V, E). Let P be a polytope defined by M. Let Vf denote the volume of the cone spanned by the origin and the facet f. If ij ∈ E is incident with facets f and g, then let M∗fg = 1/Mij. Let M∗ff = −Vf − Σ_{g∈N(f)} M∗fg; finally, let M∗fg = 0 if f and g are distinct non-adjacent facets. These numbers form a symmetric V∗ × V∗ matrix M∗.
Corollary 12.3.9 M∗ is a polyhedral Colin de Verdière matrix of the dual graph G∗.
12.4 Inner product representations and sphere labelings
12.4.1 Gram representation
Every well-signed G-matrix M of a graph G = (V, E) with exactly one negative eigenvalue
gives rise to another vector-labeling of the graph, this time in the (rk(M ) − 1)-dimensional
space [143]. Let λ1 < 0 = λ2 = · · · = λr < λr+1 ≤ . . . be the eigenvalues of M . By scaling,
we may assume that λ1 = −1. Let v1 be the eigenvector of unit length belonging to λ1 . Since
M is well-signed, we may assume that all entries of v1 are positive. We form the diagonal
matrix D = diag(v1 ).
We start with getting rid of the negative eigenvalue, and consider the matrix
    M′ = D−1 (M + v1v1T) D−1 = D−1MD−1 + J.
This matrix is symmetric and positive semidefinite, and so we can write it as a Gram matrix M′ = (aTi aj), where ai ∈ Rn−r for all i ∈ V. We call the vector labeling i ↦ ai the Gram
representation of G (belonging to the matrix M ). This is not uniquely determined, but
since the inner products of the vectors ai are determined, so are their lengths and mutual
distances, and hence two Gram representations obtained from the same matrix differ only
in an isometric linear transformation of the space. From the fact that M is a well-signed
G-matrix, it follows that

    aTi aj = M′ij = 1 if ij ∉ E (i ≠ j),  and  aTi aj = M′ij < 1 if ij ∈ E.    (12.8)
(The somewhat artificial scaling above has been introduced to guarantee this.) The diagonal
entries of M ′ are positive, but they can be larger than, equal to, or smaller than 1.
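The construction above is easy to carry out numerically; here is a minimal sketch (assuming M is already scaled so that λ1 = −1; the matrix −A/√2 for the path P3 serves as a toy well-signed G-matrix):

import numpy as np

def gram_representation(M):
    w, V = np.linalg.eigh(M)                    # ascending eigenvalues
    v1 = V[:, 0] if V[0, 0] > 0 else -V[:, 0]   # unit eigenvector for lambda_1 = -1
    Dinv = np.diag(1.0 / v1)
    Mp = Dinv @ (M + np.outer(v1, v1)) @ Dinv   # M' = D^{-1} M D^{-1} + J
    w2, V2 = np.linalg.eigh(Mp)
    return V2 @ np.diag(np.sqrt(np.clip(w2, 0, None)))   # rows a_i, Gram = M'

A_P3 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
a = gram_representation(-A_P3 / np.sqrt(2))     # well-signed, lambda_1 = -1
print(np.round(a @ a.T, 3))   # entries < 1 on the edges 12, 23; = 1 on the non-edge 13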
Condition (12.8) can be reformulated in the following very geometric way, at least in
the case when |ai | > 1 for all nodes i. Let Ci denote the sphere with center ai and radius
ri = √(|ai|² − 1). Then (12.8) says that Ci and Cj are orthogonal if ij ∉ E, but they intersect at an angle less than 90° if ij ∈ E.
We need a non-degeneracy condition that will correspond to the Strong Arnold Property: the condition is that the vectors ai, as a vector labeling of G, carry no homogeneous stress. Recall that such a homogeneous stress would mean a symmetric V × V matrix X such that Xij = 0 unless i ≠ j and ij ∉ E, and

    Σj Xij aj = 0    (12.9)

for every node i. Let us call such a matrix X a homogeneous G-stress carried by a.
Lemma 12.4.1 A graph G = (V, E) with at least 3 nodes satisfies µ(G) ≥ k if and only if it has a vector labeling a : V → Rn−k−1 satisfying (12.8) such that a carries no homogeneous G-stress.
Proof. First, suppose that m = µ(G) ≥ k, and let M be a Colin de Verdière matrix of G of corank m. Then M is a well-signed G-matrix with exactly one negative eigenvalue, and hence the construction above gives a vector-labeling a in Rn−m−1 satisfying (12.8); since n − m − 1 ≤ n − k − 1, we may view a as a labeling in Rn−k−1. We claim that this vector labeling carries no homogeneous G-stress. Suppose that X is such a homogeneous stress. Then multiplying (12.9) by ai, we get

    Σj Xij aTj ai = 0
for every i ∈ V. Note that if Xij ≠ 0 then aTj ai = 1; hence this equation means that Σj Xij = 0. In other words, JX = 0. Let Y = D−1XD−1; then

    MY = D(M′ − J)DD−1XD−1 = DM′XD−1 = 0,

since (12.9) implies that M′X = 0.
Since Y ̸= 0 and has the same 0 entries as X, this contradicts the fact that M has the Strong
Arnold property.
The converse construction is straightforward. Let i ↦ ai ∈ Rd be a vector labeling satisfying (12.8) that carries no homogeneous G-stress, and let A be the matrix with row vectors aTi. Let us form the matrix M = AAT − J. It is clear that M is a well-signed G-matrix with at most one negative eigenvalue, and has corank at least n − d − 1.
The only nontrivial part of the proof is to show that M has the Strong Arnold Property.
Suppose that MX = 0, where X is a symmetric matrix for which Xij = 0 unless i ≠ j and ij ∉ E.
Then AAT X = JX, and hence rk(AT X) = rk(X T AAT X) = rk(X T JX) ≤ 1. (We have used
the fact that rk(B) = rk(B T B) for every real matrix B.) If rk(AT X) = 0, then AT X = 0,
and so X is a homogeneous G-stress, which contradicts the hypothesis of the lemma. So we
must have rk(ATX) = 1, and we can write ATX = uvT, where u ∈ Rd and v ∈ RV. Hence AuvT = AATX = JX = 1(X1)T, which implies that (rescaling u and v if necessary) v = X1 and Au = 1. Hence v = XAu = vuTu, and so |u| = 1. But then

    (A − 1uT)(A − 1uT)T = AAT − J = M,

and so M is positive semidefinite.
This means that

    Σj Xij (aTj ak − 1) = 0

for all i, k ∈ V, which we can write as

    (Σj Xij aj)T ak = Σj Xij.
12.4.2 Gram representations and planarity
Now we are ready to prove part (b) of Theorem 12.2.2. The proof of part (a) uses similar,
but much simpler arguments, and is not given here.
Proof. I. Suppose that G is planar; we want to prove that µ(Ḡ) ≥ n − 5. Using Lemma 12.4.1 (applied to the complement Ḡ, with k = n − 5), it suffices to construct a vector-labeling w in R4 that satisfies (12.8) for Ḡ (that is, wTi wj = 1 for ij ∈ E and wTi wj < 1 for distinct nonadjacent i and j) and carries no homogeneous Ḡ-stress.
We may assume that G is 3-connected, since we can delete edges from G so that G
remains planar but becomes 3-connected. We start with the Koebe–Andreev representation of G as the skeleton of a polytope P whose edges are tangent to the unit sphere S2. Let ui be the vertex of P representing i ∈ V, and let ri = √(|ui|² − 1) be the length of the tangent
from ui to S 2 . The spheres Ci with center ui and radius ri have the property that they are
orthogonal to S 2 , their interiors are disjoint, and Ci and Cj are tangent if and only if ij ∈ E.
Next, for every i ∈ V we consider the sphere Ci′ ⊆ R4 with center wi = (ui , ri ) and radius √2·ri . It is easy to check that Ci′ intersects the hyperplane of the first three coordinates
in Ci , and it is orthogonal to the unit sphere S 3 . Furthermore, Ci′ and Cj′ are orthogonal
whenever Ci and Cj are touching, and otherwise they intersect at an angle less than 90◦ . By
the remark after (12.8), this means that the vector-labeling w satisfies (12.8).
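As a small numerical illustration of this lifting (a sketch with concrete numbers of our choosing, not part of the argument): for two tangent circles orthogonal to the unit circle, the lifted centers wi = (ui , ri ) indeed satisfy wiT wj = 1.

import numpy as np

def lift(u):
    r = np.sqrt(u @ u - 1.0)    # tangent length from u to the unit circle
    return np.append(u, r)      # w = (u, r); the lifted sphere has radius sqrt(2)*r

# Tangent circles orthogonal to the unit circle: centers (2,0) and (-1, sqrt(3)),
# radii sqrt(3) each, so |u1 - u2| = 2*sqrt(3) = r1 + r2.
u1, u2 = np.array([2.0, 0.0]), np.array([-1.0, np.sqrt(3.0)])
w1, w2 = lift(u1), lift(u2)
print(w1 @ w2)    # 1.0 up to rounding: tangency becomes w_i^T w_j = 1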
It remains to argue that the vector-labeling w, as a labeling of G, carries no homogeneous
stress. Suppose it does, and let X be a nonzero homogeneous stress. This means that X is
a symmetric nonzero V × V matrix for which Xij = 0 unless ij ∈ E, and
∑_{j∈N (i)} Xij wj = 0    for every i ∈ V .    (12.10)
We claim that X is not only a homogeneous stress but a stress. Indeed, multiplying (12.10)
by wi and using (12.8), we get

∑_{j∈N (i)} Xij = 0,

and hence

∑_{j∈N (i)} Xij (wj − wi ) = ∑_{j∈N (i)} Xij wj = 0.
So X is a stress on w. It remains a stress if we project the vectors wi to their first three
coordinates, and so the vector labeling u has a stress. But this is a representation of G as
the skeleton of a 3-dimensional polytope, which cannot have a stress by Cauchy’s Theorem
6.1.1.
II. Assume that µ(G) ≥ n−4; we show that G is planar. By Lemma 12.4.1, G has a vector
labeling a : V → R3 satisfying (12.8) such that a carries no homogeneous stress (as a vector labeling
of G). Let V = [n], a0 = 0, and let P be the convex hull of the vectors ai (i = 0, 1, . . . , n).
Since (12.8) determines the neighbors of a node i once the vector ai is given, the assumption
that G is a connected twin-free graph implies that no two nodes are represented by the same
vector.
In the (perhaps) “typical” case when |ai | > 1 for every node i, condition (12.8) easily
implies that every vector ai is a vertex of the convex hull of all these vectors, and the segment
ai aj is an edge of this polytope for every ij ∈ E. This implies that G is a subgraph of
the skeleton of P , and hence it is planar. Unfortunately, this conclusion is not true in the
general case, and we need to modify the labeling a to get an embedding into the surface of
P.
Claim 1 If |ai | > 1, then ai is a vertex of P .
This follows from the observation that the plane defined by aTi x = 1 separates ai from every other aj and also from 0.
Claim 2 If ij ∈ E, then either ai or aj is a vertex of P . If both of them are vertices of P ,
then ai aj is an edge of P .
Indeed, we have aTi aj = 1, which implies that one of ai and aj is longer than 1, and hence it is a vertex of P . If ai and aj are vertices of P , but ai aj is not an edge of P , then some point αai + (1 − α)aj (0 < α < 1) can be written as a convex combination of other vertices:

αai + (1 − α)aj = ∑_{k≠i,j} λk ak ,

where λk ≥ 0 and ∑_k λk ≤ 1. Let, say, |aj | > 1; then we get

1 < αaTj ai + (1 − α)aTj aj = ∑_{k≠i,j} λk aTj ak ≤ ∑_{k≠i,j} λk ≤ 1,

which is a contradiction. This proves the Claim.
Claim 3 If ai is not a vertex of P , then Qi = {x ∈ P : aTi x = 1} is a face of P , whose vertices are exactly the points aj , j ∈ N (i).
Indeed, by Claim 1 we have |ai | ≤ 1, and so the hyperplane {x : aTi x = 1} supports P and touches it in a face Qi . Condition (12.8) implies that all G-neighbors of i, and only those nodes, are contained in Qi . It follows by Claim 2 that every neighbor of i is represented by a vertex of P , and hence by a vertex of Qi .
Claim 3 implies that Qj ̸= Qi for j ̸= i, since two nodes with Qi = Qj would be twins.
Now it is easy to complete the proof. By Claim 2, those nodes i for which ai is a vertex
of P induce a subgraph of the skeleton of P . Among the remaining nodes, those for which
Qi is a facet will be labeled by an interior point of Qi ; since these facets Qi are different for
different nodes i, this embeds these nodes and (by straight line segments) the edges incident
with them into the surface of P . If Qi is an edge, then we label i by a point of the surface
of P , near the midpoint of Qi , but not on Qi . Finally, we can label those nodes i for which
Qi is a vertex by any point on the surface of P , near Qi , but not on any of the previously
embedded edges.
12.5 And more...

12.5.1 Connectivity of halfspaces
We have repeatedly come across conditions on representations asserting that the subgraphs induced by nodes in certain halfspaces are connected. To be more precise, we say that a
representation v : V → Rd of a graph G = (V, E) has the connectivity property with respect
to the halfspace family H, if for every halfspace H ∈ H, the set v−1 (H) induces a connected
subgraph of G. We allow here that this subgraph is empty; if we insist that it is nonempty,
we say that the representation has the nonempty connectivity property.
A halfspace in the following discussion is the (open) set {x ∈ Rd : aT x > b}, where
a ∈ Rd is a nonzero vector. If b = 0, we call the halfspace linear. If we want to emphasize
that this is not assumed, we call it affine.
Let us recall some examples.
Example 12.5.1 The skeleton of a convex polytope in Rd has the connectivity property
with respect to all halfspaces (Proposition 3.1.2).
Example 12.5.2 The rubber band representation of a 3-connected planar graph, with the
nodes of a country nailed to the vertices of a convex polygon, has the connectivity property
with respect to all halfplanes (Claim 1 in the proof of Theorem 5.3.1).
Example 12.5.3 The nullspace representation defined by a matrix M ∈ RV ×V satisfying Mij < 0 for ij ∈ E, Mij = 0 for ij ∉ E (i ≠ j), and having exactly one negative eigenvalue, has
the nonempty connectivity property with respect to all linear halfspaces whose boundary is
a linear hyperplane spanned by representing vectors (Van der Holst’s Lemma 12.3.2), and
also with respect to all linear halfspaces whose boundary contains no representing vectors.
A general study of this property was undertaken by Van der Holst, Laurent and Schrijver
[118]. They studied two versions.
(a) Consider full-dimensional representations in Rd with the connectivity property with
respect to all halfspaces. It is not hard to see that not having such a representation is a
property closed under minors. Van der Holst, Laurent and Schrijver give the following (nice
and easy) characterization:
Theorem 12.5.4 A graph has a full-dimensional representation in Rd with the connectivity
property with respect to all halfspaces if and only if it is connected and has a Kd+1 minor.
We note that this result does not supply a “good characterization” for graphs not having
a Kd+1 minor (both equivalent properties put this class into co-NP). We know from the
Robertson-Seymour theory of graph minors that not having a Kd+1 minor is a property in
P, and explicit certificates (structure theorems) for this are known for d ≤ 5. To describe an
explicit certificate for this property for general d is an outstanding open problem.
(b) Consider full-dimensional representations in Rd with the nonempty connectivity property with respect to all linear halfspaces. No full characterization of graphs admitting such
a representation is known, but the following (nontrivial) special result is proved in [118]:
Theorem 12.5.5 A graph has a full-dimensional representation in R3 with the nonempty
connectivity property with respect to all linear halfspaces if and only if it can be obtained from
planar graphs by taking clique-sums and subgraphs.
Exercise 12.5.6 Let f (G) denote the maximum multiplicity of the largest eigenvalue of any matrix obtained from the adjacency matrix of a graph G, where 1’s
are replaced by arbitrary positive numbers and the diagonal entries are changed
arbitrarily. Prove that f (G) is the number of connected components of the graph
G.
Exercise 12.5.7 A matrix A ∈ M1G has the Strong Arnold Property if and
only if for every symmetric matrix B there exists a matrix B ′ ∈ MG such that
xT (B ′ − B)x = 0 for all vectors x in the nullspace of A.
Exercise 12.5.8 The complete graph Kn is the only graph on n ≥ 3 nodes with
µ = n − 1.
Exercise 12.5.9 Let G be a connected graph with µ(G) = µ, let M be a Colin de Verdière matrix of G, and let x ∈ RV be any vector with M x ≤ 0. Then supp+ (x) is nonempty, and either
— supp+ (x) spans a connected subgraph, or
— G has a cutset S with the following properties: Let G1 , . . . , Gr (r ≥ 2) be the connected components of G − S. There exist non-zero, non-negative vectors x1 , . . . , xr and y in RV such that supp(xi ) = V (Gi ) and supp(y) ⊆ S, M x1 = M x2 = · · · = M xr = −y, and x is a linear combination x = ∑_i αi xi , where ∑_i αi ≥ 0, and at least two αi are positive.
Exercise 12.5.10 Let M ∈ M(G) and let x ∈ RV satisfy M x < 0. Then
supp+ (x) is nonempty and induces a connected subgraph of G.
Exercise 12.5.11 Let M be a Colin de Verdière matrix of a path Pn , let π be an
eigenvector of M belonging to the negative eigenvalue, and let w : V (Pn ) → R be
a nullspace representation of M . Then the mapping i 7→ wi /πi gives an embedding
of G in the line [169].
Exercise 12.5.12 Let M be a Colin de Verdière matrix of a 2-connected outerplanar graph G, and let w : V (G) → R2 be a nullspace representation of M .
Then the mapping i 7→ vi = wi /|wi |, together with the segments connecting vi
and vj for ij ∈ E(G), gives an embedding of G in the plane as a convex polygon
with non-crossing diagonals.
Exercise 12.5.13 Let ψ : S 2 → S 2 be a local homeomorphism. (a) Prove that there is an integer k ≥ 1 such that |ψ −1 (x)| = k for each x ∈ S 2 . (b) Let G be any 3-connected planar graph embedded in S 2 ; then ψ −1 (G) is a graph embedded in S 2 , with k|V (G)| nodes, k|E(G)| edges, and k|F (G)| faces. (c) Use Euler’s
Formula to prove that k = 1 and ψ is bijective.
Exercise 12.5.14 Let G be a connected graph with µ(G) = µ, let M be a Colin de Verdière matrix of G, and let u be a nullspace representation belonging to
M , and let Σ be an open halfspace in Rµ containing the origin. Then the nodes
represented by vectors in Σ induce a non-empty connected subgraph.
Exercise 12.5.15 (CdV) Let ij be a cut-edge of the graph G. Let A ∈ MG , and
let F denote the nullspace of A. Then dim(F |{i,j} ) ≤ 1.
Exercise 12.5.16 Prove that a graph G has a full-dimensional representation in
Rd with the connectivity property with respect to all linear halfspaces, such that
the origin is not contained in the convex hull of representing vectors, if and only
if it has a Kd minor.
Exercise 12.5.17 Let u be a vector labeling of a graph G satisfying (12.8) and
the additional hypothesis that |ui | ̸= 1 for all i. Prove that every stress on u (as
a vector labeling of G) is also a homogeneous stress.
Exercise 12.5.18 Prove the following characterizations of graphs having full-dimensional representations in Rd with the nonempty connectivity property with
respect to all linear halfspaces, for small values of d: (a) for d = 1 these are the
forests; (b) for d = 2 these are the series-parallel graphs.
Exercise 12.5.19 A graph G is planar if and only if its 1-subdivision G′ satisfies
µ(G′ ) ≥ |V (G′ )| − 4.
Chapter 13
Linear structure of vector-labeled graphs
In this chapter we study vector-labelings from a purely algebraic perspective, where only linear dependence and independence of the underlying vectors play a role. Replacing “cardinality” by “rank” in an appropriate way, we get generalizations of ordinary graph-theoretic
parameters and properties that are often quite meaningful and relevant. Furthermore, these
results about vector-labeled graphs can be applied to purely graph-theoretic problems. The
vector-labeling is used to encode information about other parts, or additional structure, of
the graph.
We restrict our attention to the study of two basic notions in graph theory: matchings
and node-covers. In the case of node-covers, these geometric methods will be important in
the proof of a classification theorem of cover-critical graphs, where the statement does not
involve geometric embeddings. In the case of matchings, we formulate extensions of basic
results like Kőnig’s and Tutte’s theorems to vector-labeled graphs. These are well known
from matroid theory, and have many applications in graph theory and other combinatorial
problems.
Since only the linear structure is used, most of the considerations in this chapter could be
formulated over any field (or at least over any sufficiently large field). Indeed, many of the
most elementary arguments would work over any matroid. However, two of the main results
(Theorems 13.1.10 and 13.2.2) do make use of the additional structure of an underlying
(sufficiently large) field.
13.1 Covering and cover-critical graphs

13.1.1 Cover-critical ordinary graphs
We start by recalling some basic facts about ordinary graphs and their node-cover numbers. A graph G is called cover-critical if deleting any edge from it decreases τ (G). Since isolated nodes play no role here, we will assume that cover-critical graphs have no isolated nodes. These graphs have an interesting theory, initiated by Erdős and Gallai [77]. See also [32, 98,
154, 166].
Figure 13.1: Some cover-critical graphs
For a cover-critical graph G, the following quantity, called its defect, will play a crucial
role: δ(G) = 2τ (G) − |V (G)|. The next proposition summarizes some of the basic properties
of cover-critical graphs.
Theorem 13.1.1 Let G be a cover-critical graph on n nodes. Then
(a) (Erdős, Gallai [77]) δ(G) ≥ 0 (in other words, |V (G)| ≤ 2τ (G)).
(b) (Erdős, Hajnal, Moon [82]) |E(G)| ≤ (τ (G)+1 choose 2).
(c) (Hajnal [82]) Every node has degree at most δ(G) + 1.
Cover-critical graphs can be classified by their defect. For a simpler statement of the results, let us assume that the graph is connected. Then the only connected cover-critical graph with δ = 0 is K2 (Erdős and Gallai [77]), and the connected cover-critical graphs with δ = 1 are
odd cycles.
It is not hard to see that if G′ is obtained from G by subdividing an edge by two nodes,
then G′ is cover-critical if and only if G is, and δ(G) = δ(G′ ). By this fact, for δ ≥ 2 it
suffices to describe those cover-critical graphs that don’t have two adjacent nodes of degree
2.
There is a slightly less trivial generalization of this observation: Let us split a node of a
graph G into two, and connect both of them to a new node. Then the resulting graph G′ is
cover-critical if and only if G is, and δ(G) = δ(G′ ). This implies that it is enough to describe
13.1. COVERING AND COVER-CRITICAL GRAPHS
203
connected cover-critical graphs with minimum degree at least 3. Let us call these graphs
basic cover-critical.
There is only one basic cover-critical graph with δ = 2, namely K4 (Andrásfai [18]), and four basic cover-critical graphs with δ = 3 (the last four graphs in Figure 13.1; Surányi [226]). A generalization of this notion to vector-labeled graphs will be used in the proof of the following theorem (Lovász [156]):
Theorem 13.1.2 For every δ ≥ 2, there is only a finite number of basic cover-critical graphs.
The proof (postponed to the end of the next section) will show that every basic cover-critical graph has at most 2^(3δ²) nodes. This bound is probably very rough, so the main content of the result is that this number is finite.
13.1.2 Cover-critical vector-labeled graphs
We generalize the notions of covers and criticality to vector-labeled graphs, and (among
others) use this more general setting to prove Theorem 13.1.2.
Let (vi : i ∈ V ) be a multiset of vectors in Rd . We define the rank rk(S) of a set S ⊆ V
as the rank of the matrix whose columns are the vectors (vi : i ∈ S).
We extend the rank function as follows. Let w : V → R+ . There is a unique decomposition
w = ∑_{i=1}^t αi 1Si ,

where t ≥ 0, αi > 0 and S1 ⊂ S2 ⊂ · · · ⊂ St . We define

rk(w) = ∑_{i=1}^t αi rk(Si ).
If w is 0-1 valued, then this is just the rank of its supporting set. This way we have extended
the rank function from subsets of V (i.e., vectors in {0, 1}V ) to the nonnegative orthant RV+ .
It is easy to check that this extended rank function is homogeneous and convex [159].
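For experimentation, the extended rank function can be computed directly from this chain decomposition. The sketch below (Python with numpy; the names are our choices) takes the distinct positive weight levels c1 > · · · > ct, so that Si = {v : wv ≥ ci} gives the chain S1 ⊂ · · · ⊂ St with αi = ci − ci+1 (and ct+1 = 0).

import numpy as np

def extended_rank(vectors, w):
    """vectors: dict node -> numpy vector; w: dict node -> nonnegative weight."""
    def rk(S):
        S = list(S)
        return 0 if not S else np.linalg.matrix_rank(
            np.column_stack([vectors[i] for i in S]))
    cs = sorted({x for x in w.values() if x > 0}, reverse=True)  # c_1 > ... > c_t
    total = 0.0
    for i, c in enumerate(cs):
        S = [v for v in w if w[v] >= c]                  # S_i = {v : w_v >= c_i}
        alpha = c - (cs[i + 1] if i + 1 < len(cs) else 0.0)
        total += alpha * rk(S)
    return total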
A k-cover in a graph is a weighting w of nodes by nonnegative integers such that for every
edge, the sum of weights of the two endpoints is at least k. For a node-cover T , the weighting
1T is a 1-cover (and every minimal 1-cover arises this way). It is easy to describe 2-covers
as well. For every set T ⊆ V , define 2T = 1T + 1N (V \T ) . Then it is easy to check that 2T
is a 2-cover of G. Conversely, for every 2-cover w, let T be the set of nodes with positive
weight. Then T must be a node-cover, and all nodes in N (V \ T ) must have weight 2, so
w ≥ 2T . If the 2-cover w is minimal (i.e., no node-weight can be decreased and still have a
2-cover), then w = 2T . We note that in the representation of a minimal 2-cover in the form
2T , the set T is not unique. If T is a minimal node-cover (with respect to inclusion), then
N (V \ T ) = T , and so 2T = 21T .
204
CHAPTER 13. LINEAR STRUCTURE OF VECTOR-LABELED GRAPHS
The notion of k-covers is independent of the vector labeling, but the geometry comes in
when we consider “optimum” ones, by replacing “size” by “rank”. So the k-cover number
τk (G, v) for a vector-labeled graph (G, v) is defined as the minimum rank of any k-cover.
For the (most important) case of k = 1, we set τ (G, v) = τ1 (G, v). In other words, τ (G, v) is
the minimum dimension of a linear subspace L ⊆ Rd such that at least one endpoint of every
edge of G is represented by a vector in L. The rank of the 2-cover defined by a node-cover
T is
rk(2T ) = rk(T ) + rk(N (V \ T )).
Let us collect some elementary properties of k-covers. Clearly, the sum of a k-cover and an
m-cover is a (k +m)-cover. Hence τk+m (G, v) ≤ τk (G, v)+τm (G, v), in particular τ2 (G, v) ≤
2τ (G, v). The difference δ(G, v) = 2τ (G, v) − τ2 (G, v) will generalize the parameter δ(G) of
ordinary cover-critical graphs. (This quantity would be called an “integrality gap” in integer
programming. Note that this quantity is defined for all vector-labeled graphs, in particular for ordinary graphs, but one has to use some of the theory of cover-critical graphs to see that it is the same as the one defined in the previous section.)
A 2-cover is not necessarily the sum of two 1-covers (for example, the all-1 weighting of the
nodes of a triangle). But every 3-cover is the sum of a 2-cover and a 1-cover; more generally,
every k-cover (k ≥ 3) can be decomposed into the sum of a 2-cover and a (k − 2)-cover by
the following formulas:

w′i = 2 if wi ≥ k;  w′i = 1 if 1 ≤ wi ≤ k − 1;  w′i = 0 if wi = 0;  and  w′′i = wi − w′i .    (13.1)
It is easy to check that w′ is a 2-cover and w′′ is a (k − 2)-cover. Note that every level-set of
w′ or w′′ is a level-set of w, which implies that
rk(w′ ) + rk(w′′ ) = rk(w).    (13.2)
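Formula (13.1) is easy to transcribe and test. In the sketch below (a toy illustration; w is a dict from nodes to nonnegative integers, and is_cover is our hypothetical helper for checking the k-cover property on an edge list):

def split_cover(w, k):
    """Decompose a k-cover w into a 2-cover w1 and a (k-2)-cover w2, as in (13.1)."""
    w1 = {i: (2 if x >= k else (1 if x >= 1 else 0)) for i, x in w.items()}
    w2 = {i: x - w1[i] for i, x in w.items()}
    return w1, w2

def is_cover(w, edges, k):
    # hypothetical helper: every edge must get total weight at least k
    return all(w[u] + w[v] >= k for u, v in edges)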
Lemma 13.1.3 Let S be any node-cover in a vector-labeled graph (G, v), and let T be a
node-cover with minimum rank. Then rk(2S∪T ) ≤ rk(2S ).
Applying this repeatedly, we get that if S is a node-cover, and T1 , . . . , Tk are node-covers
with minimum rank, then
rk(2S∪T1 ∪···∪Tk ) ≤ rk(2S ).
We can state the conclusion of the lemma in a more explicit form:
rk(S ∪ T ) + rk(N (V \ (S ∪ T ))) ≤ rk(S) + rk(N (V \ S))    (13.3)

(and similarly for more than one cover T ).
Proof. If we decompose the 3-cover w = 1T + 2S by (13.1), we get w = w′ + w′′ , where
w′ = 2S∪T is a 2-cover and w′′ is a 1-cover. Using the convexity of the rank function and
(13.2), we get
rk(T ) + rk(2S ) ≥ rk(w) = rk(2S∪T ) + rk(w′′ ).
Since rk(T ) ≤ rk(w′′ ) by the minimality of rk(T ), it follows that rk(2S ) ≥ rk(2S∪T ).
The next step is to extend the notion of criticality to vector-labeled graphs. We say that (G, v) is cover-critical if deleting any edge or node from the graph decreases τ (G, v). This condition implies, in particular, that G has no isolated nodes.
If (G, v) is cover-critical, and we project the representing vectors onto a generic τ (G, v)-dimensional space, then the resulting vector-labeled graph is still cover-critical. Hence we could always assume that the representation is in τ (G, v)-dimensional space, and so it is cover-preserving. This reduction is, however, not always convenient.
Figure 13.2: Some cover-critical vector-labeled graphs. The figures are drawn in the projective plane, so that 1-dimensional linear subspaces correspond to points, 2-dimensional subspaces correspond to lines, etc. The reader is recommended to use this way of drawing vector-labeled graphs to follow the arguments in this chapter.
A labeling of a graph G by linearly independent vectors (equivalently, by vectors in general position in τ (G) dimensions) is cover-critical if and only if the graph is cover-critical. In this way the theory of cover-critical vector labelings generalizes the theory of cover-critical graphs. It seems that the structure of cover-critical vector-labeled graphs can be much more complicated than the structure of cover-critical ordinary graphs, and a classification theorem extending
Theorem 13.1.2 seems to be hopeless. But somewhat surprisingly, some of the more difficult
results for cover-critical graphs have been proved using vector-labelings.
Lemma 13.1.4 Let (vi : i ∈ V ) be a vector-labeling of a graph G, let j ∈ V , and let L
be the linear space spanned by the vectors {vi : i ∈ N (j)}. Construct a new labeling v′ by
replacing vj by a vector vj′ that is in general position in L. Then τ (G, v′ ) ≥ τ (G, v). If,
in addition, vj′ is in general position relative to v(V \ {j}), then τ (G, v′ ) = τ (G, v). If, in
addition, (G, v) is cover-critical, then (G, v′ ) is cover-critical as well.
Proof. The proof is a slightly tedious case distinction. Let rk′ denote the rank function of
(G, v′ ), and let T be any node-cover of G. If j ∉ T , then rk′ (T ) = rk(T ) ≥ τ (G, v). Suppose
that j ∈ T , and let T ′ = T \ {j}. If rk(T ′ ) = rk(T ), then we still have a similar conclusion:
rk′ (T ) ≥ rk′ (T ′ ) = rk(T ′ ) = rk(T ) ≥ τ (G, v). So suppose that rk(T ′ ) = rk(T ) − 1. The set
T ′ ∪ N (j) is a node-cover of G, and hence rk′ (T ′ ∪ N (j)) = rk(T ′ ∪ N (j)) ≥ τ (G, v). Hence
if rk(T ′ ∪ N (j)) = rk(T ′ ), we are done again. So suppose that rk(T ′ ∪ N (j)) > rk(T ′ ). Then
L ̸⊆ lin(v(T ′ )), and by the assumption that vj′ is in general position, we have vj′ ∉ lin(v(T ′ )).
Hence rk′ (T ) = rk′ (T ′ ) + 1 = rk(T ′ ) + 1 = rk(T ) ≥ τ (G, v).
Now suppose that vj′ is in general position relative to v(V \ {j}), and let T be a node-cover of G with minimum rank. If j ∉ T , then τ (G, v′ ) ≤ rk′ (T ) = rk(T ) = τ (G, v). Suppose that
j ∈ T , and let again T ′ = T \ {j}. By the general position assumption about vj , we have
rk(T ′ ) = rk(T ) − 1, and so τ (G, v′ ) ≤ rk′ (T ) ≤ rk′ (T ′ ) + 1 = rk(T ′ ) + 1 = rk(T ) = τ (G, v).
Finally, suppose that G is cover-critical, and let e ∈ E(G). If e is not incident with j,
then the same argument as above gives that τ (G − e, v′ ) = τ (G − e, v) < τ (G, v) = τ (G, v′ ).
If e is incident with j, then a node-cover T of G − e with rk(T ) < τ (G, v) cannot contain j,
and hence τ (G − e, v′ ) ≤ rk′ (T ) = rk(T ) < τ (G, v) = τ (G, v′ ).
Lemma 13.1.5 Let (G, v) be a cover-critical representation, and let j be a node such that
vj ∈ lin(v(N (j))). Let v′ be the projection of v in the direction of vj onto a hyperplane.
Then τ (G − j, v′ ) = τ (G, v) − 1, and (G − j, v′ ) is cover-critical.
Proof. The proof is similar to that of the previous lemma, and is left to the reader.
Lemma 13.1.6 In a cover-critical vector-labeled graph (G, v), the neighbors of any node are
represented by linearly independent vectors.
Proof. Suppose that node a has a neighbor b ∈ N (a) such that vb is in the linear span of {vj : j ∈ N (a) \ {b}}. There is a (τ (G, v) − 1)-dimensional subspace L that intersects every
edge except ab. Clearly, L does not contain va (since then the nodes represented in L would form a node-cover of G),
and hence it must contain vj for every j ∈ N (a), j ̸= b. Hence the vector vb also belongs to
L, and so L meets every edge of G, a contradiction.
Lemma 13.1.6 implies that if G has a cover-critical representation in dimension d, then every node has degree at most rk(V ) ≤ d. The next theorem gives a much sharper bound on
the degrees.
Theorem 13.1.7 Let (G, v) be a cover-critical vector-labeled graph, let T ⊆ V (G) be a
node-cover, and let a ∈ V \ T . Then dG (a) ≤ rk(2T ) − rk(V \ {a}).
Using Lemma 13.1.6, this inequality can be re-written as

rk(2V \{a} ) ≤ rk(2T ).    (13.4)
We can also write the inequality as
dG (a) ≤ rk(2T ) − rk(V ) + εa ,
where εa = 1 if va is linearly independent of the vectors {vj : j ̸= a}, and εa = 0 otherwise.
Proof.
Let S1 , . . . , Sk be all node-covers of minimum rank that do not contain a, and let
A = V \ T \ S1 \ · · · \ Sk . By Lemma 13.1.3, we have
rk(2V \A ) ≤ rk(2T ).
If A = {a}, then we are done, so suppose that B = A \ {a} is nonempty.
Claim 1 Let i ∈ N (a). Then i ∉ N (B), and vi is linearly independent of the set v(N (A) \ {i}).
Indeed, by the cover-critical property of (G, v), the vector-labeled graph (G \ {ai}, v) has a node-cover Z with rank τ (G, v) − 1. Clearly a, i ∉ Z. The set Z ∪ {i} is a node-cover of G
/ Z. The set Z ∪ {i} is a node-cover of G
with minimum rank, and hence it is one of the sets Si . By the definition of A, it follows that
Z ∩ A = ∅. This implies that there is no edge connecting i to B (since such an edge would
not be covered by Z), and N (A) \ {i} ⊆ Z. Since vi is linearly independent of v(Z) (else, i
could be added to Z to get a node-cover with the same rank), it is also linearly independent
of v(N (A) \ {i}).
The Claim implies that rk(N (A)) = rk(N (B)) + dG (a). Let b ∈ B. By induction on
|V \ T |, we may assume that
dG (b) ≤ rk(2V \B ) − rk(V \ {b}) ≤ rk(V \ B) + rk(N (B)) − rk(V ) + 1.
Since dG (b) ≥ 1, this implies that rk(N (B)) + rk(V \ B) ≥ rk(V ), and hence
dG (a) = rk(N (A)) − rk(N (B)) ≤ rk(N (A)) + rk(V \ B) − rk(V )
≤ rk(2T ) − rk(V \ A) + rk(V \ B) − rk(V ).
Using the submodularity inequality
rk(V \ {a}) + rk(V \ B) ≥ rk(V ) + rk(V \ A),
the Theorem follows.
Let us formulate some corollaries.
Corollary 13.1.8 If (G, v) is a cover-critical vector-labeled graph, then τ2 (G, v) = rk(V ).
Proof. The weighting 1V is a 2-cover, and hence τ2 (G, v) ≤ rk(V ). On the other hand, let T be a node-cover such that rk(2T ) = τ2 (G, v). If T = V , then we have nothing to prove. Else, by Theorem 13.1.7, every node a ∉ T satisfies
1 ≤ dG (a) ≤ rk(2T ) − rk(V \ {a}) ≤ τ2 (G, v) − rk(V ) + 1,
implying that τ2 (G, v) ≥ rk(V ).
We also get the following inequality generalizing Theorem 13.1.1(a).
Corollary 13.1.9 If (G, v) is a cover-critical vector-labeled graph, then rk(V ) ≤ 2τ (G, v). If
equality holds, then G consists of independent edges, and its nodes are represented by linearly
independent vectors.
Proof. Let a ∈ V . By cover-criticality, there is a node-cover T with minimum rank such that a ∉ T . Then by Theorem 13.1.7,

1 ≤ dG (a) ≤ rk(2T ) − rk(V ) + εa ≤ 2τ (G, v) − τ2 (G, v) + εa ≤ 2τ (G, v) − τ2 (G, v) + 1.
This implies the inequality. If equality holds, then we must have dG (a) = 1 and εa = 1 for
every node a, which implies that G is a matching and the representing vectors are linearly
independent.
The inequality in the previous proof can also be written as

dG (a) ≤ δ(G, v) + εa ≤ δ(G, v) + 1.    (13.5)
Corollary 13.1.9 characterizes cover-critical vector-labeled graphs with δ = 0. It seems difficult to extend this characterization to δ > 0, generalizing Theorem 13.1.2 and the results
mentioned before it.
The next theorem gives a bound on the number of edges. This is in fact the first result
in this section which uses not only the matroid properties of the rank function, but the full
linear structure of Rd .
Theorem 13.1.10 If G has a cover-critical representation in dimension d, then |E(G)| ≤ (d+1 choose 2).
Proof. Let (vi : i ∈ V (G)) be a cover-critical representation of a graph G in dimension d.
Consider the matrices Mij = vi vjT + vj viT (ij ∈ E).
We claim that these matrices are linearly independent. Suppose that there is a linear
dependence
∑_{ij∈E(G)} λij Mij = 0,    (13.6)
where λab ̸= 0 for some edge ab. The graph G \ ab has a node-cover T that does not span
the space Rd ; let u ∈ Rd (u ̸= 0) be orthogonal to the span of T . Clearly uT va ̸= 0 and
uT vb ≠ 0, else {j ∈ V (G) : uT vj = 0} would cover all edges of G, and the representation would not be cover-preserving. Then
uT Mij u = uT vi vjT u + uT vj viT u = 2(uT vi )(uT vj ) = 0
for every edge ij ̸= ab (since one of i and j must be in T ), but
uT Mab u = 2(uT va )(uT vb ) ̸= 0.
This contradicts (13.6).
Thus the matrices Mij are linearly independent. Hence their number cannot be larger than the dimension of the space of symmetric d × d matrices, which is (d+1 choose 2).
Representing the nodes of a complete graph on d + 1 nodes by vectors in Rd in general
position, we get a cover-critical representation, showing that the bound in Theorem 13.1.10
is tight.
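This tightness claim is easy to test numerically (a sketch; the dimension and random seed below are arbitrary choices of ours): for d + 1 random vectors in Rd, the (d+1 choose 2) matrices Mij, flattened to vectors, have full rank.

import numpy as np
from itertools import combinations

d = 4
rng = np.random.default_rng(0)
V = rng.standard_normal((d + 1, d))   # d+1 random vectors: general position a.s.
mats = [np.outer(V[i], V[j]) + np.outer(V[j], V[i])
        for i, j in combinations(range(d + 1), 2)]
rank = np.linalg.matrix_rank(np.array([M.ravel() for M in mats]))
print(len(mats), rank)                # both equal (d+1 choose 2) = 10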
Theorem 13.1.10 generalizes Theorem 13.1.1(b), giving a good upper bound on the number
of edges. What about the number of nodes? Since we don’t allow isolated nodes, a cover-critical vector-labeled graph in Rd can have at most d(d + 1) nodes by Theorem 13.1.10. One cannot say more in general, as the following construction shows. Let us split each node i of G into dG (i) nodes of degree 1, to get a graph G′ consisting of independent edges. Keep the vector representing i for each of the corresponding new nodes (or replace it by a parallel
vector). Then we get another cover-critical representation of G′ . (Conversely, if we identify
nodes that are represented by parallel vectors in a cover-critical representation, we get a
cover-critical representation of the resulting graph.)
Theorem 13.1.10 has another important corollary for ordinary graphs.
Corollary 13.1.11 Let G = (V, E) be a cover-critical graph, and let T ⊆ V be a node-cover.
Then
|E(G[T ])| ≤ (τ (G) + |T | − |V | + 1 choose 2).

In particular, if T is a minimum node-cover, then

|E(G[T ])| ≤ (δ(G) + 1 choose 2).
Proof. Let A = V (G) \ T , and apply the argument in the proof of Lemma 13.1.6 above, but
this time eliminating all nodes in A. We are left with a cover-critical vector-labeled graph
(G[T ], u) with τ (G[T ], u) = τ (G) − |A|. An application of Theorem 13.1.10 completes the
proof.
We need one further lemma, which is “pure” graph theory.
Lemma 13.1.12 Let G be a bipartite graph, and let S1 , . . . , Sk ⊆ V (G) be such that G − Si has a perfect matching for every i. Assume, furthermore, that after deleting any edge of G this property no longer holds. Let s = maxi |Si |. Then the number of nodes of G with degree at least 3 is at most 2s³ (k choose 3).
Proof. Suppose that G has more than 2s³ (k choose 3) nodes of degree at least 3. Let us label every
edge e by an index i for which G − e − Si has no perfect matching, and let Mi denote a
perfect matching in G − Si . Clearly all edges labeled i belong to Mi .
Let us label every node with degree at least 3 by a triple of the edge-labels incident with
it. There will be more than 2s3 nodes labeled by the same triple, say by {1, 2, 3}.
The set M1 ∪ M2 consists of the common edges, of cycles alternating with respect to
both, and of alternating paths ending at S1 ∪ S2 . A node labeled {1, 2, 3} cannot belong
to a common edge of M1 and M2 (trivially), and it cannot belong to an alternating cycle,
since then we could replace the edges of M1 on this cycle by the other edges of the cycle, to
get a perfect matching in G − Si that misses an edge labeled i, contrary to the definition of
the label. So it must lie on one of the alternating paths. The same argument holds for the
sets M1 ∪ M3 and M2 ∪ M3 . The number of alternating Mi -Mj paths is at most s, so there
are three nodes labeled {1, 2, 3} that lie on the same Mi -Mj paths for all three choices of
{i, j} ⊆ {1, 2, 3}. Two of these nodes, say u and v, belong to the same color class of G.
We need one more combinatorial preparation: the alternating Mi -Mj path through u and
v has a subpath Qij between u and v, which has even length, and therefore starts with an
edge labeled i and ends with an edge labeled j, or the other way around. It is easy to see
that (permuting the indices 1, 2, 3 if necessary) we may assume that Q12 starts at u with
label 1, and Q23 starts at u with label 2.
Now traverse Q23 starting at u until it first hits Q12 (this happens at v at the latest), and
return to u on Q12 . Since G is bipartite, this cycle is even, and hence alternates with respect
to M2 . But it contains an edge labeled 2, which leads to a contradiction just like above.

Now we are able to prove Theorem 13.1.2.
Proof. Let G be a basic cover-critical graph with δ(G) = δ, let T be a minimum node-cover
in G, let F be the set of edges induced by T , and let G′ be obtained by deleting the edges of
F from G. By Corollary 13.1.11, m = |F | ≤ (δ+1 choose 2).
Let R1 , . . . , Rk be the minimal sets of nodes that cover all edges of F . Trivially |Ri | ≤ m
and k ≤ 2m . Let τi = τ (G′ \ Ri ); clearly τi ≥ τ (G) − |Ri |. Since G′ \ Ri is bipartite, it has
a matching Mi of size τi . Let Si = V (G) \ V (Mi ). (We can think of the matching Mi as a
“certificate” that Ri cannot be extended to a node-cover of size less than τ (G).)
By this construction, the graphs G′ − Si have perfect matchings. We claim that this does not remain true after deleting any edge of G′ . Indeed, let e ∈ E(G′ ); then G − e has a node-cover
T with |T | < τ (G). Since T covers all edges of F , it contains one of the sets Ri . Then T − Ri
covers all edges of G′ − Ri except e, and so τ (G′ − Ri − e) ≤ |T | − |Ri | < τ (G) − |Ri | ≤ τi .
Thus G′ − Ri − e cannot contain a matching of size τi , and hence G′ − Si − e cannot contain
a perfect matching.
We can estimate the size of the sets Si as follows:
|Si | = n − 2τi ≤ n − 2(τ (G) − |Ri |) ≤ 2m.
We can invoke Lemma 13.1.12, and conclude that G′ has at most 2 (k choose 3) (2m)³ nodes of degree at least 3. Putting back the edges of F can increase this number by at most 2m, and hence

n ≤ 2 (k choose 3) (2m)³ + 2m < 2^(3δ²).
13.2 Matchings
A matching in a vector-labeled graph is a set M of disjoint edges whose endpoints are represented by a linearly independent set of vectors, i.e., rk(V (M )) = 2|M |. We denote by ν(G, v) the maximum cardinality of a matching.
The following inequality, familiar from graph theory, holds true for every vector-labeled graph:

ν(G, v) ≤ τ (G, v).    (13.7)

Indeed, let T be a node-cover and M a matching; then V (M ) is represented by linearly independent vectors, and hence

rk(T ) ≥ rk(V (M ) ∩ T ) = |V (M ) ∩ T | ≥ |M |.
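On small instances, both parameters can be computed by brute force, which is a convenient way to experiment with inequality (13.7). The sketch below (exponential-time, for toy examples only; all names are ours) takes the vectors as a list of numpy arrays and the edges as pairs of node indices.

import numpy as np
from itertools import combinations

def rk(vecs, S):
    S = list(S)
    return 0 if not S else np.linalg.matrix_rank(np.column_stack([vecs[i] for i in S]))

def tau(vecs, edges):
    """Minimum rank of a node-cover."""
    n = len(vecs)
    best = rk(vecs, range(n))
    for r in range(n):
        for T in combinations(range(n), r):
            if all(u in T or v in T for u, v in edges):
                best = min(best, rk(vecs, T))
    return best

def nu(vecs, edges):
    """Largest set of disjoint edges whose endpoints get independent vectors."""
    best = 0
    for r in range(1, len(vecs) // 2 + 1):
        for M in combinations(edges, r):
            nodes = [x for e in M for x in e]
            if len(set(nodes)) == 2 * r and rk(vecs, nodes) == 2 * r:
                best = max(best, r)
    return best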
Next we want to extend the Kőnig–Hall Theorem, which states that for bipartite graphs,
equality holds in (13.7). Let us define a bipartite vector-labeled graph as a vector-labeled
graph that has a bipartition V = V1 ∪ V2 such that every edge connects V1 to V2 , and
rk(V ) = rk(V1 ) + rk(V2 ). In other words, the space has a decomposition Rd = L1 ⊕ L2 such
that every edge has an endpoint represented in L1 and an endpoint represented in L2 .
Theorem 13.2.1 If (G, v) is a bipartite vector-labeled graph, then ν(G, v) = τ (G, v).
This theorem is equivalent to the Matroid Intersection Theorem (Edmonds [69]), at least
for representable matroids. We include a proof, which follows almost immediately from the
results of the previous section (this proof would extend to the non-representable case easily).
Proof. We may assume that (G, v) is cover-critical (just delete edges and/or nodes as long as τ (G, v) remains unchanged; it is enough to find a matching of size τ (G, v) in this reduced graph). Let V = V1 ∪ V2 be the bipartition of (G, v); then both V1 and V2 are node-covers, and hence

τ (G, v) ≤ min{rk(V1 ), rk(V2 )} ≤ (rk(V1 ) + rk(V2 ))/2 = rk(V )/2.

By Corollary 13.1.9, this implies that (G, v) is a matching, and clearly |E(G)| ≥ τ (G, v).
Tutte’s Theorem characterizing the existence of a perfect matching and Berge’s Formula describing the maximum size of matchings extend to vector-labeled graphs as well (Lovász [158]). Various forms of this theorem are called the “Matroid Parity Theorem” or “Matroid Matching Theorem” or “Polymatroid Matching Theorem”. We describe the result, but not the proof, which is quite lengthy, and has been reproduced in another book (Lovász and Plummer [166]).
To motivate the theorem, let us recall an elementary exercise from geometry: if a family of lines in projective d-space has the property that every pair intersects, then either all lines are coplanar or they all go through one and the same point. One way of saying the condition is that no two lines span a 3-dimensional subspace. What happens if instead we exclude three lines spanning a 5-dimensional subspace?
We can rephrase this as a question about 2-dimensional linear subspaces of Rd : if no two of them generate a 4-dimensional space, then either they are all contained in a 3-dimensional subspace, or they all contain a common 1-dimensional subspace. What can we say if we require that no k of them generate a 2k-dimensional subspace? To show that this is equivalent to the matching number of vector-labeled graphs, we can select a finite set V of vectors so that each of the given subspaces contains at least two of them, and for each of the given subspaces, join two of the points of V in it by an edge.
The following is a trivial bound on the matching number of a vector-labeled graph in Rd :

ν(G, v) ≤ ⌊d/2⌋.    (13.8)
Combining the bounds (13.7) and (13.8), we can formulate an upper bound that, for an
optimal choice, gives equality.
Theorem 13.2.2 Let (G, v) be a vector-labeled graph in Rd , and let L, L1 , . . . , Lk range through linear subspaces of Rd such that L ⊆ L1 , . . . , Lk , and for every edge of G, both vectors representing its endpoints belong to one of the subspaces Li . Then

ν(G, v) = min { dim(L) + ∑_{i=1}^k ⌊(dim(Li ) − dim(L))/2⌋ }.
The bound (13.8) corresponds to the choice k = 1, L = 0, L1 = Rd . To get the bound (13.7), let L be the linear span of a node-cover T , let V \ T = {i1 , . . . , ik }, and let Lj be the linear span of L and vij .
Exercise 13.2.3 Prove that for any vector-labeled graph, τ2k (G, v) = kτ2 (G, v)
and τ2k+1 (G, v) = kτ2 (G, v) + τ1 (G, v).
Exercise 13.2.4 (a) Verify that for every graph G = (V, E) and every X ⊆ V ,
2X = 1X + 1N (V \X) is a 2-cover. (b) For every representation v of G, the
set function rk(2X ) is submodular.
Exercise 13.2.5 Let i be a node of a vector-labeled graph (G, v). Let us delete
all edges incident with i, create a new node j, connect it to i, and represent it
by a vector vj that is in general position in the space lin(v(N (i))). Let G′ be the
graph obtained. Prove that τ (G, v) = τ (G′ , v), and if (G, v) is cover-critical, then
so is (G′ , v).
Exercise 13.2.6 For a vector-labeled graph (G, v), split every node i into dG (i)
nodes of degree 1, and represent each of these by the vector vi . Let (G′ , v′ ) be
the vector-labeled graph obtained. Prove that τ (G, v) = τ (G′ , v), and (G, v) is
cover-critical if and only if (G′ , v′ ) is cover-critical.
Exercise 13.2.7 Let (G, v) be a vector-labeled graph, let x ∈ RE(G) , and let A(x) = ∑_{ij∈E(G)} xij (vi ∧ vj ). Then 2ν(G, v) = maxx rk(A(x)). (Here u ∧ v = uvT − vuT is the exterior product of the vectors u, v ∈ Rd .)
Chapter 14
Metric representations
Given a graph, we would like to embed it in a euclidean space so that the distances between nodes in the graph should be the same as, or at least close to, the geometric distance of the representing vectors. Our goal is to illustrate how to embed graphs into euclidean spaces (or other standard normed spaces), and, mainly, how to use such embeddings to prove graph-theoretic theorems and to design algorithms.
It is not hard to see that every embedding will necessarily have some distortion in nontrivial cases. For example, the “claw” K1,3 cannot be embedded isometrically in any dimension. We get more general and useful results if we study embeddings where the distances may change, but in a controlled manner. To emphasize the difference, we will distinguish distance preserving (isometric) and distance respecting embeddings.
The complete graph on k nodes can be embedded isometrically in a euclidean space of dimension k − 1, but not in lower dimensions. There is often a trade-off between dimension and distortion. This motivates our concern with the dimension of the space in which we represent our graph.
Several of the results are best stated in the generality of finite metric spaces. Recall that
a metric space is a set V endowed with a distance function d : V × V → R+ such that
d(u, v) = 0 if and only if u = v, d(v, u) = d(u, v), and d(u, w) ≤ d(u, v) + d(v, w) for all
u, v, w ∈ V .
There is a large literature on embeddings of one metric space into another so that distances
are preserved (isometric embeddings) or at least not distorted too much (we call such embeddings, informally, “distance respecting”). These results are often very combinatorial, and
have important applications to graph theory and combinatorial algorithms (see [122, 181]).
Here we have to restrict our interest to embeddings of (finite) metric spaces into basic
normed spaces like euclidean or ℓ1 spaces. We discuss some examples of isometric embeddings,
then we construct important distance respecting embeddings, and then we introduce the
fascinating notion of volume-respecting embeddings. We show applications to important
graph-theoretic algorithms. Several facts from high dimensional geometry are used, which
are collected in the last section.
14.1 Isometric embeddings

14.1.1 Supremum norm
If in the target space we use the supremum norm, then the embedding problem is easy.
Theorem 14.1.1 (Fréchet) Any finite metric space can be embedded isometrically into
(Rk , ∥.∥∞ ) for some finite k.
The theorem is valid for infinite metric spaces as well (when infinite dimensional L∞
spaces must be allowed for the target space), with a slight modification of the proof. We
describe the simple proof for the finite case, since its idea will be used later on.
Proof. Let (V, d) be a finite metric space, and consider the embedding

uv = (d(v, i) : i ∈ V )    (v ∈ V ).

Then by the triangle inequality,

∥uu − uv ∥∞ = max_i |d(u, i) − d(v, i)| ≤ d(u, v).

On the other hand, the coordinate corresponding to i = v gives

∥uu − uv ∥∞ ≥ |d(u, v) − d(v, v)| = d(u, v).
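The proof is short enough to test directly (a sketch; we use the distance matrix of the “claw” K1,3 mentioned earlier as input):

import numpy as np

D = np.array([[0, 1, 1, 1],     # graph metric of K_{1,3}: center 0, leaves 1,2,3
              [1, 0, 2, 2],
              [1, 2, 0, 2],
              [1, 2, 2, 0]], dtype=float)
U = D                            # u_v = (d(v, i) : i in V) is just row v of D
err = max(abs(np.max(np.abs(U[u] - U[v])) - D[u, v])
          for u in range(4) for v in range(4))
print(err)    # 0.0: the embedding is isometric in the supremum norm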
14.1.2 Manhattan distance
Another representation with special combinatorial significance is a representation in Rn with
the ℓ1 -distance (often called Manhattan distance). It is convenient to allow semimetrics, i.e.,
symmetric functions d : V × V → R+ satisfying d(i, i) = 0 and d(i, k) ≤ d(i, j) + d(j, k) (the
triangle inequality), but for which d(u, v) = 0 is allowed even if u ̸= v. An ℓ1 -representation
of a finite semimetric space (V, d) is a mapping u : V → Rm such that d(i, j) = ∥ui − uj ∥1 .
We say that (V, d) is ℓ1 -representable, or shortly an ℓ1 -semimetric.
For a fixed underlying set V , all ℓ1 -semimetrics form a closed convex cone: if i ↦ ui ∈ Rm is an ℓ1 -representation of (V, d) and i ↦ u′i ∈ Rm′ is an ℓ1 -representation of (V, d′ ), then the concatenation

i ↦ (αui , α′ u′i ) ∈ Rm+m′

is an ℓ1 -representation of αd + α′ d′ for α, α′ ≥ 0.
An ℓ1 -semimetric of special interest is the semimetric defined by a subset S ⊆ V ,
∇S (i, j) = 1(|{i, j} ∩ S| = 1),
which I call a 2-partition semimetric. (These are often called cut-semimetrics, but I don’t want to use this term since “cut norm” and “cut distance” are used in this book in a different sense.)
Lemma 14.1.2 A finite semimetric space (V, d) is an ℓ1 -semimetric if and only if it can be
written as a nonnegative linear combination of 2-partition semimetrics.
Proof. Every 2-partition semimetric ∇S can be represented in the 1-dimensional space by
the map 1S . It follows that every nonnegative linear combination of 2-partition semimetrics
on the same underlying set is an ℓ1 -semimetric. Conversely, every ℓ1 -semimetric is a sum of
ℓ1 -semimetrics representable in R1 (just consider the coordinates of any ℓ1 -representation).
So it suffices to consider ℓ1 -semimetrics (V, d) for which there is a representation i ↦ ui ∈ R such that d(i, j) = |ui − uj |. We may assume that V = [n] and u1 ≤ · · · ≤ un ; then for i < j,

d(i, j) = ∑_{k=1}^{n−1} (uk+1 − uk ) ∇{1,...,k} (i, j)

expresses d as a nonnegative linear combination of 2-partition semimetrics.
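The telescoping identity in this proof can be checked mechanically (a sketch; cut_semimetric is our name for the matrix of ∇S, and the points on the line are arbitrary):

import numpy as np

def cut_semimetric(n, S):
    x = np.array([1.0 if i in S else 0.0 for i in range(n)])
    return np.abs(x[:, None] - x[None, :])       # the matrix of nabla_S

u = np.array([0.0, 0.3, 1.1, 2.0])               # u_1 <= ... <= u_n on the line
n = len(u)
D = np.abs(u[:, None] - u[None, :])              # the metric d(i,j) = |u_i - u_j|
R = sum((u[k + 1] - u[k]) * cut_semimetric(n, set(range(k + 1)))
        for k in range(n - 1))
print(np.allclose(D, R))                         # True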
14.2 Distance respecting representations
As we have seen, the possibility of a geometric representation that reflects the graph distance
exactly is limited, and therefore we allow distortion. Let F : V1 → V2 be a mapping of the
metric space (V1 , d1 ) into the metric space (V2 , d2 ). We define the distortion of F as
( max_{u,v∈V1} d2 (F (u), F (v))/d1 (u, v) ) / ( min_{u,v∈V1} d2 (F (u), F (v))/d1 (u, v) ).
Note that to have finite distortion, the map F must be injective. The distortion does not
change if all distances in one of the metric spaces are scaled by the same factor. So if we
are looking for embeddings in a Banach space, then we may consider embeddings that are
contractive, i.e., d2 (F (u), F (v)) ≤ d1 (u, v) for all u, v ∈ V1 .
14.2.1 Dimension reduction
In many cases, we can control the dimension of the ambient space based on general principles,
and we start with a discussion of these. The dimension problem is often easy to handle, due
to a fundamental lemma [125].
Lemma 14.2.1 (Johnson–Lindenstrauss) For every 0 < ε < 1, every n-point set S ⊂ Rn can be mapped into Rd with d < (80 ln n)/ε² with distortion at most 1 + ε.
More careful computation gives a constant of 8 instead of 80.
Proof. Orthogonal projection onto a random d-dimensional subspace L does the job. First, let us see what happens to the distance of a fixed pair of points x, y ∈ S. Instead of projecting the fixed segment of length |x − y| on a random subspace, we can project a random vector of the same length on a fixed subspace. Then (15.29) applies, and tells us that with probability at least 1 − 4e^{−ε²d/4} , the projections x′ and y′ of the fixed pair of points x and y satisfy

(1/(1 + ε)) √(d/n) ≤ |x′ − y′ |/|x − y| ≤ (1 + ε) √(d/n).

It follows that with probability at least 1 − (n choose 2) · 4e^{−ε²d/4} , this inequality holds simultaneously for all pairs x, y ∈ S, and then the distortion of the projection is at most (1 + ε)² < 1 + 3ε. Replacing ε by ε/3 and choosing d as in the lemma, this probability is positive, and the lemma follows.
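The random projection in this proof is easy to simulate (a sketch; the parameters below are arbitrary and the constants are not the ones of the lemma): distances shrink by a common factor √(d/n), which we undo by rescaling, and the remaining fluctuation is small.

import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 60
X = rng.standard_normal((n, n))                    # n points in R^n
Q, _ = np.linalg.qr(rng.standard_normal((n, d)))   # orthonormal basis of a random subspace
Y = (X @ Q) * np.sqrt(n / d)                       # project and rescale
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i in range(n) for j in range(i + 1, n)]
print(min(ratios), max(ratios))                    # concentrated around 1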
14.2.2 Embedding with small distortion
We describe an embedding of a general metric space into a euclidean space, due to Bourgain [34], which has low distortion (and other very interesting and useful properties, as we will see later).
Theorem 14.2.2 Every metric space with n points can be embedded into the euclidean space
Rd with distortion O(log n) and dimension d = O(log n).
To motivate the construction, let us recall Fréchet’s Theorem 14.1.1: we embed a general metric space isometrically into ℓ∞ by assigning a coordinate to each point w, and considering the representation

x : i ↦ (d(w, i) : w ∈ V ).
If we want to represent the metric space by a euclidean metric with reasonably small distortion, this construction will not work, since it may happen that all points except u and v
are at the same distance from u and v, and once we take a sum instead of the maximum,
the contribution from u and v will become negligible. The remedy will be to take distances
from sets rather than from points; it turns out that we need sets with sizes of all orders of
magnitude, and this is where the logarithmic factor is lost.
We start by describing a simpler version, which is only good “on the average”. For a
while, it will be more convenient to work with ℓ1 distances instead of euclidean distances.
Both of these deviations from our goal will be easy to fix.
Let (V, d) be a metric space on n elements. Let m = ⌈log n⌉, and for every 1 ≤ i ≤ m,
choose a random subset Ai ⊆ V , putting every element v ∈ V into Ai with probability 2^{−i} .
Let d(v, Ai ) denote the distance of node v from the set Ai (if Ai = ∅, we set d(v, Ai ) = K
for some very large K, the same for every v). Consider the mapping x : V → Rm defined by

xu = (d(u, A1 ), . . . , d(u, Am )).    (14.1)
We call this the random subset representation of (V, d). Note that this embedding depends
on the random choice of the sets Ai , and so we have to make probabilistic statements about
it.
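The construction is short in code. The following sketch (the treatment of empty Ai via a large placeholder is the one described above, with our choice of the constant) returns, for a given distance matrix, one random draw of the representation (14.1).

import numpy as np

def random_subset_representation(D, rng):
    """D: (n, n) metric; row u of the output is x_u = (d(u,A_1), ..., d(u,A_m))."""
    n = D.shape[0]
    m = int(np.ceil(np.log2(n)))
    big = D.max() * n                     # placeholder distance K for A_i = empty
    coords = []
    for i in range(1, m + 1):
        A = np.nonzero(rng.random(n) < 2.0 ** (-i))[0]   # sample A_i with prob 2^{-i}
        coords.append(D[:, A].min(axis=1) if len(A) else np.full(n, big))
    return np.column_stack(coords)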
Lemma 14.2.3 The random subset representation (14.1) satisfies

∥xu − xv ∥1 ≤ m d(u, v)    and    E(∥xu − xv ∥1 ) ≥ (1/16) d(u, v)

for every pair of points u, v ∈ V .
Proof. By the triangle inequality,

|d(u, Ai ) − d(v, Ai )| ≤ d(u, v),    (14.2)

and hence

∥xu − xv ∥1 = ∑_{i=1}^m |d(u, Ai ) − d(v, Ai )| ≤ m d(u, v).
To prove the lower bound, fix two points u and v. Let (u0 = u, u1 , . . . , un−1 ) be the points of V ordered by increasing distance from u, and let (v0 = v, v1 , . . . , vn−1 ) be defined analogously; we break ties arbitrarily. Define ρi = max(d(u, u_{2^i} ), d(v, v_{2^i} )) as long as this maximum is less than d(u, v)/2. Let k be the first i with max(d(u, u_{2^i} ), d(v, v_{2^i} )) ≥ d(u, v)/2, and define ρk = d(u, v)/2.
Focus on any i < k, and let (say) ρi = d(u, u_{2^i} ). If Ai does not select any point from (u0 , u1 , . . . , u_{2^i −1} ), but selects a point from (v0 , v1 , . . . , v_{2^{i−1} −1} ), then

|d(u, Ai ) − d(v, Ai )| ≥ d(u, u_{2^i} ) − d(v, v_{2^{i−1}} ) ≥ ρi − ρi−1 .
The sets (u0 , u1 , . . . , u_{2^i −1} ) and (v0 , v1 , . . . , v_{2^{i−1} −1} ) are disjoint, and so the probability that this happens is

(1 − (1 − 2^{−i})^{2^{i−1}} ) (1 − 2^{−i})^{2^i} ≥ 1/8.

So the contribution of Ai to E(|d(u, Ai ) − d(v, Ai )|) is at least (ρi − ρi−1 )/8. It is easy to check that this holds for i = k as well. Hence
E(∥xu − xv ∥1 ) = ∑_{i=1}^m E(|d(u, Ai ) − d(v, Ai )|) ≥ (1/8) ∑_{i=0}^k (ρi − ρi−1 ) = (1/8) ρk = (1/16) d(u, v),    (14.3)
which proves the Lemma.
Proof. [Alternate] To prove the lower bound, fix two points u and v. Define d̂(x, y) = min(d(x, y), d(u, v)/2). It is easy to check that

|d(u, Ai ) − d(v, Ai )| ≥ |d̂(u, Ai ) − d̂(v, Ai )|.

Furthermore, since their value depends only on how Ai behaves in the disjoint open neighborhoods of radius d(u, v)/2 of u and v, the quantities d̂(u, Ai ) and d̂(v, Ai ) are independent random variables. This implies that if we replace d̂(v, Ai ) by d̂(v, Bi ), where Bi is an independently generated copy of Ai , then the joint distribution of d̂(u, Ai ) and d̂(v, Ai ) does not change. Hence
E(|d(u, Ai ) − d(v, Ai )|) ≥ E(|d̂(u, Ai ) − d̂(v, Ai )|) = E(|d̂(u, Bi ) − d̂(v, Ai )|),

and so

E(|d̂(u, Ai ) − d̂(u, Bi )|) ≤ E(|d̂(u, Ai ) − d̂(v, Ai )|) + E(|d̂(v, Ai ) − d̂(u, Bi )|) = 2 E(|d̂(u, Ai ) − d̂(v, Ai )|).    (14.4)

This shows that it suffices to estimate E(|d̂(u, Ai ) − d̂(u, Bi )|) from below.
Let ai and bi be the nearest points of Ai and Bi to u, respectively. Then

E(|d̂(u, Ai ) − d̂(u, Bi )|) = ∫_0^{d(u,v)/2} [ P(ai ∈ B(u, t), bi ∉ B(u, t)) + P(ai ∉ B(u, t), bi ∈ B(u, t)) ] dt
= 2 ∫_0^{d(u,v)/2} P(ai ∈ B(u, t)) (1 − P(ai ∈ B(u, t))) dt.
Summing over all i, we get

∑_i E(|d̂(u, Ai ) − d̂(u, Bi )|) ≥ 2 ∫_0^{d(u,v)/2} ∑_i P(ai ∈ B(u, t)) (1 − P(ai ∈ B(u, t))) dt.

Elementary computation shows that for every t there is an i for which 1/4 ≤ P(ai ∈ B(u, t)) ≤ 3/4, and so the integrand is at least 3/16. This implies that

∑_i E(|d̂(u, Ai ) − d̂(u, Bi )|) ≥ 2 · (3/16) · (d(u, v)/2) = (3/16) d(u, v).
Using (14.4),

E(∥xu − xv ∥1 ) = ∑_{i=1}^m E(|d(u, Ai ) − d(v, Ai )|) ≥ (1/2) ∑_i E(|d̂(u, Ai ) − d̂(u, Bi )|) ≥ (3/32) d(u, v),

as claimed.
Proof of Theorem 14.2.2. The previous lemma only says that the random subset representation does not shrink distances too much “on the average”. Equation (14.3) suggests the
remedy: the distance ∥xu − xv ∥1 is the sum of m independent bounded random variables,
and hence it will be close to its expectation with high probability, if m is large enough.
Unfortunately, our m as chosen is not large enough to be able to claim this simultaneously
for all u and v; but we can multiply m basically for free. To be more precise, let us generate N independent copies x1 , . . . , xN of the representation (14.1), and consider the mapping
y : V → RN m defined by

yu = (x1u , . . . , xN u ).    (14.5)
Then trivially

∥yu − yv ∥1 ≤ N m d(u, v)    and    E(∥yu − yv ∥1 ) = N E(∥xu − xv ∥1 ) ≥ (N/16) d(u, v).

For every u and v, (1/N )∥yu − yv ∥1 → E(∥xu − xv ∥1 ) ≥ (1/16) d(u, v) by the Law of Large Numbers with probability 1, and hence if N is large enough, then with high probability

∥yu − yv ∥1 ≥ (N/20) d(u, v)    (14.6)

holds for all pairs u, v. So y is a representation in ℓ1 with distortion at most 20m.
To get results for the ℓp -distance instead of the ℓ1 -distance (in particular, for the euclidean distance), we first note that the upper bound

∥yu − yv ∥p ≤ ∥(d(u, v), . . . , d(u, v))∥p = (N m)^{1/p} d(u, v)

follows similarly from the triangle inequality as the analogous bound for the ℓ1 -distance. The lower bound is similarly easy, using (14.6):

∥yu − yv ∥p ≥ (N m)^{1/p−1} ∥yu − yv ∥1 ≥ ((N m)^{1/p} / (20m)) d(u, v)    (14.7)

with high probability for all u, v ∈ V . The ratio of the upper and lower bounds is 20m = O(log n).
We are not yet done, since choosing a large N will result in a representation in a very high
dimension. An easy way out is to invoke the Johnson–Lindenstrauss Lemma 14.2.1 (in other
words, to apply a random projection). Alternatively, we could estimate the concentration of (1/N )∥yu − yv ∥1 using the Chernoff–Hoeffding Inequality.
Linial, London and Rabinovich [150] showed how to construct an embedding satisfying
the conditions in Theorem 14.2.2 algorithmically, and gave the application to be described
in Section 14.2.3. Matoušek [180] showed that if the metric space is the graph distance in an
expander graph, then Bourgain’s embedding is essentially optimal, in the sense that every
embedding in a euclidean space has distortion Ω(log n).
14.2.3 Multicommodity flows and approximate Max-Flow-Min-Cut
We continue with an important algorithmic application of the random subset representation.
A fundamental result in the theory of multicommodity flows is the theorem of Leighton
and Rao [149]. Stated in a larger generality, as proved by Linial, London and Rabinovich
[150], it says the following. Suppose that we have a multicommodity flow problem on a graph
G. This means that we are given k pairs of nodes (si , ti ) (i = 1, . . . , k), and for each such
pair, we are given a demand di ≥ 0. Every edge e of the graph has a capacity ce ≥ 0. We
would like to design a flow f i from si to ti of value di for every 1 ≤ i ≤ k, so that for every
edge the capacity constraint is satisfied.
We state the problem more precisely. We need a reference orientation of G; let E⃗ denote the set of these oriented edges, and for each set S ⊆ V , let ∇+ (S) and ∇− (S) denote the set of edges leaving and entering S, respectively. An (s, t)-flow is a function f : E⃗(G) → R that conserves material:
∑_{e∈∇+ (v)} f (e) = ∑_{e∈∇− (v)} f (e)    for all v ∈ V (G) \ {s, t}.    (14.8)
Since the orientation of the edges serves only as a reference, we allow positive or negative
flow values; if an edge is reversed, the flow value changes sign. The number
val(f ) = ∑_{e∈∇+ (s)} f (e) − ∑_{e∈∇− (s)} f (e) = ∑_{e∈∇− (t)} f (e) − ∑_{e∈∇+ (t)} f (e)    (14.9)
is the value of the flow.
We want an (si , ti )-flow f i for every i ∈ [k], with prescribed values di , so that the total flow through each edge does not exceed the capacity of the edge:

∑_{i=1}^k |f i (e)| ≤ ce    for all e ∈ E⃗(G)    (14.10)
(this clearly does not depend on which orientation of the edge we consider).
Let us hasten to point out that the solvability of a multicommodity flow problem is just the feasibility of a linear program (we treat the values f i (e) (e ∈ E⃗(G)) as variables). So
the multicommodity flow problem is polynomial time solvable. We can also apply the Farkas
Lemma, and derive a necessary and sufficient condition for solvability. If you work out the
dual, very likely you will get a condition that is not transparent at all; however, a very nice
form was discovered by Iri [123] and by Shahrokhi and Matula [218], which fits particularly
well into the topic of this book.
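To make the linear-programming remark concrete, here is a feasibility LP for the problem, sketched with scipy.optimize.linprog (the function and variable names are ours; signed flows are split as f = fp − fm with fp, fm ≥ 0, so that the capacity constraint (14.10) is implied by ∑_i (fp + fm) ≤ ce).

import numpy as np
from scipy.optimize import linprog

def multicommodity_feasible(n, arcs, cap, demands):
    """arcs: list of (u, v) in the reference orientation; cap: list of c_e;
    demands: list of (s_i, t_i, d_i). Returns True iff the problem is feasible."""
    m, k = len(arcs), len(demands)
    nvar = 2 * m * k                       # fp and fm for each commodity and arc
    def col(i, e, minus):                  # column index of the variable
        return i * 2 * m + 2 * e + minus
    A_eq, b_eq = [], []
    for i, (s, t, d) in enumerate(demands):
        for v in range(n):
            if v == t:
                continue                   # conservation at t_i is redundant
            row = np.zeros(nvar)
            for e, (a, b) in enumerate(arcs):
                sgn = (a == v) - (b == v)  # +1 if the arc leaves v, -1 if it enters
                row[col(i, e, 0)] += sgn
                row[col(i, e, 1)] -= sgn
            A_eq.append(row)
            b_eq.append(d if v == s else 0.0)   # net outflow d_i at s_i, 0 elsewhere
    A_ub = np.zeros((m, nvar))
    for e in range(m):
        for i in range(k):
            A_ub[e, col(i, e, 0)] = A_ub[e, col(i, e, 1)] = 1.0
    res = linprog(np.zeros(nvar), A_ub=A_ub, b_ub=np.array(cap),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar)
    return res.success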
Consider a semimetric D on V (G). Let us describe an informal (physical) derivation of the conditions. Think of an edge e = uv as a pipe with cross section ce and length D(e) = D(u, v). Then the total volume of the system is ∑_e ce D(e). If the multicommodity problem is feasible, then every flow f i occupies a volume of at least val(f i )D(si , ti ), and hence

∑_i di D(si , ti ) ≤ ∑_e ce D(e).    (14.11)
These conditions are also sufficient:
Theorem 14.2.4 Let G = (V, E) be a graph, let si , ti ∈ V , di ∈ R+ (i = 1, . . . , k), and
ce ∈ R+ (e ∈ E). Then there exist (si, ti)-flows (f^i : i ∈ [k]) satisfying the demand conditions val(f^i) = di and the capacity constraints (14.10) if and only if (14.11) holds for every semimetric D on V.
We leave the exact derivation of the necessity of the condition, as well as the proof of the
converse, to the reader.
For the case k = 1, the Max-Flow-Min-Cut Theorem of Ford and Fulkerson gives a simpler
condition for the existence of a flow with given value, in terms of cuts. Obvious cut-conditions
provide a system of necessary conditions for the problem to be feasible in the general case as
well. If the multicommodity flows exist, then for every S ⊆ V (G), we must have
∑_{e∈∇+(S)} ce ≥ ∑_{i: |S∩{si,ti}|=1} di.    (14.12)
In the case of one or two commodities (k ≤ 2), these conditions are necessary and sufficient;
this is the content of the Max-Flow-Min-Cut Theorem (k = 1) and the Gomory–Hu Theorem
(k = 2). However, for k ≥ 3, the cut conditions are not sufficient any more for the existence
of multicommodity flows. The theorem of Leighton and Rao asserts that if the cut-conditions
are satisfied, then relaxing the capacities by a factor of O(log n), the problem becomes feasible.
The relaxation factor was improved by Linial, London and Rabinovich to O(log k); we state
the result in this tighter form, but for simplicity of presentation prove the original (weaker)
form.
Theorem 14.2.5 Suppose that for a multicommodity flow problem, the cut conditions
(14.12) are satisfied. Then replacing every edge capacity ce by 10ce log k, the problem becomes feasible.
Proof. Using Theorem 14.2.4, it suffices to prove that for every semimetric D on V(G),

∑_i di D(si, ti) ≤ 10(log n) ∑_e ce D(e).    (14.13)
The cut conditions imply the validity of semimetric conditions (14.11) for 2-partition semimetrics, which in turn imply their validity for semimetrics that are nonnegative linear combinations of 2-partition semimetrics. We have seen that these are exactly the ℓ1 -semimetrics.
By Bourgain’s Theorem 14.2.2, there is an ℓ1 -metric D′ such that
D′ (u, v) ≤ D(u, v) ≤ 20(log n)D′ (u, v).
We know that D′ satisfies the semimetric condition (14.11), and hence D satisfies the relaxed
semimetric conditions (14.13).
14.3 Respecting the volume
A very interesting extension of the notion of small distortion was formulated by Feige [84].
We want an embedding of a metric space in a euclidean space that is contractive and, at the same time, volume respecting up to size s, which means that every set of at most s nodes spans a simplex whose volume is almost as large as possible.
Obviously, the last condition needs an explanation, and for this, we will have to define
a certain “volume” of a metric space. Let (V, d) be a finite metric space with |V| = n. For a spanning tree T on V(T) = V, we define Π(T) = ∏_{uv∈E(T)} d(u, v), and we define the tree-volume of (V, d) by

tvol(V) = tvol(V, d) = (1/(n − 1)!) min_T Π(T).
Since we only consider one metric, denoted by d, in this section, we will not indicate it in
tvol(V, d) and similar quantities to be defined below. We will, however, change the underlying
set, so we keep V in the notation. Note that the tree T attaining the minimum here also minimizes the total length of the edges (this follows from the fact that it can be found by the Greedy Algorithm).
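Since the minimizing tree can be found greedily, the tree-volume is easy to compute; here is a minimal Python sketch (names are ours), growing the minimum spanning tree of the distance matrix by Prim's algorithm:

    from math import factorial

    def tree_volume(D):
        # D is the distance matrix of a finite metric space.  The minimum
        # spanning tree depends only on the order of the edge lengths, so the
        # same tree minimizes both the sum and the product of its edge
        # lengths; we grow it and multiply the chosen edge lengths as we go.
        n = len(D)
        best = {v: D[0][v] for v in range(1, n)}   # cheapest link into the tree
        prod = 1.0
        while best:
            v = min(best, key=best.get)
            prod *= best.pop(v)
            for u in best:
                best[u] = min(best[u], D[v][u])
        return prod / factorial(n - 1)             # normalization from the definition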
The following lemma relates the volume of a simplex in an embedding with the tree-volume.
Lemma 14.3.1 For every contractive map x : V → Rn of a metric space (V, d), the volume
of the simplex spanned by x(V ) is at most tvol(V ).
Proof. We prove by induction on n = |V| that for any tree T on V,

vol_{n−1}(conv(x(V))) ≤ (1/(n − 1)!) Π(T).
For n = 2 this is obvious. Let n > 2, and let i ∈ V be a node of degree 1 in T , let j be the
neighbor of i in T , and let h be the distance of i from the hyperplane containing x(V \ {i}).
Then by induction,

vol_{n−1}(conv(x(V))) = (h/(n − 1)) vol_{n−2}(conv(x(V \ {i})))
    ≤ (d(i, j)/(n − 1)) vol_{n−2}(conv(x(V \ {i})))
    ≤ (d(i, j)/(n − 1)) · (1/(n − 2)!) Π(T − i) = (1/(n − 1)!) Π(T).
This completes the proof.
The main result about tree-volume (Feige [84]) is that the upper bound given in Lemma
14.3.1 can be attained, up to a polylogarithmic factor, for all small subsets simultaneously.
To be more precise, let (V, d) be a finite metric space, let k ≥ 2, and let x : V → Rn be any
contractive vector labeling. We define the volume-distortion for k-sets of x to be

sup_{S⊆V, |S|=k} ( tvol(S) / vol_{k−1}(conv(x(S))) )^{1/(k−1)}.
(The exponent 1/(k − 1) is a natural normalization, since tvol(S) is the product of k − 1
distances.) Volume-distortion for 2-sets is just the distortion defined before.
Theorem 14.3.2 Every finite metric space (V, d) with n elements has a contractive map
x : V → Rd with d = O((log n)3 ), with volume-distortion at most (log n)2 for sets of size at
most log n.
Theorem 14.3.3 Every finite metric space (V, d) with n elements has a contractive map
x : V → Rd with d = O((log n)5 ), with volume-distortion O((log n)5 ) for sets of size at most
log n.
14.3.1 Bandwidth and density
Our goal is to use volume-respecting embeddings to design an approximation algorithm for the bandwidth of a graph, but first we have to discuss some basic properties of bandwidth.
The bandwidth bw(G) of a graph G = (V, E) is defined as the smallest integer b such that
the nodes of G can be labeled by 1, . . . , n so that |i − j| ≤ b for every edge ij. The number
refers to the fact that for this labeling of the nodes, all 1’s in the adjacency matrix will be
contained in a band of width 2b + 1 around the main diagonal.
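For a fixed labeling, the quantity being minimized is easy to evaluate; a one-line Python sketch (names are ours):

    def band_of_labeling(edges, label):
        # width of the band needed by a given labeling: max |label i - label j|
        # over all edges; the bandwidth of the graph is the minimum of this
        # over all bijective labelings by 1, ..., n
        return max(abs(label[i] - label[j]) for i, j in edges)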
The bandwidth of a graph is NP-hard to compute, or even to approximate within a constant factor [63]. As an application of volume-respecting embeddings, we describe an algorithm that finds a polylogarithmic approximation of the bandwidth of a graph in polynomial time. The ordering of the nodes which approximates the bandwidth will be obtained through a random projection of the representation to the line, in a fashion similar to the Goemans–Williamson algorithm.
We can generalize the notion of bandwidth to any finite metric space (V, d): it is the smallest real number b such that there is a bijection f : V → [n] such that |f(i) − f(j)| ≤ b d(i, j) for all i, j ∈ V. It takes a minute to realize that this notion does generalize graph bandwidth: the definition of bandwidth requires this condition only for the case when ij is an edge, but this already implies that it holds for every pair of nodes due to the definition of graph distance. (We do lose the motivation for the word “bandwidth”.)
We need some preliminary observations about the bandwidth. It is clear that if there is a node of degree D, then bw(G) ≥ D/2. More generally, if there are k nodes of the graph at distance at most t from a node v (not counting v), then bw(G) ≥ k/(2t). This observation
generalizes to metric spaces. We define the local density of a metric space (V, d) by

dloc(V) = max_{v,t} (|B(v, t)| − 1)/t.

Then

bw(V) ≥ (1/2) dloc(V).    (14.14)
It is not hard to see that equality does not hold here (see Exercise 14.3.11), and the ratio
bw(G)/dloc (G) can be as large as log n for appropriate graphs.
We need the following related quantity, which we call the harmonic density:

dhar(V) = max_{v∈V} ∑_{u∈V\{v}} 1/d(u, v).    (14.15)
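Both densities are straightforward to compute from a distance matrix; a minimal Python sketch (names are ours; for the local density it suffices to test radii that occur as actual distances):

    def local_density(D):
        # d_loc = max over centers v and radii t of (|B(v,t)| - 1)/t
        n, best = len(D), 0.0
        for v in range(n):
            radii = sorted(D[v][u] for u in range(n) if u != v)
            for k, t in enumerate(radii, start=1):  # k points within distance t
                if t > 0:
                    best = max(best, k / t)
        return best

    def harmonic_density(D):
        # d_har = max over v of the sum of reciprocal distances to v
        n = len(D)
        return max(sum(1.0 / D[v][u] for u in range(n) if u != v)
                   for v in range(n))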
Lemma 14.3.4 We have
dloc (V ) ≤ dhar (V ) ≤ (1 + ln n)dloc (V ).
Proof. For an appropriate t > 0 and v ∈ V,

dloc(V) = (|B(v, t)| − 1)/t ≤ ∑_{u∈B(v,t)\{v}} 1/d(u, v) ≤ ∑_{u∈V\{v}} 1/d(u, v) = dhar(V).
On the other hand, dhar(V) is the sum of n − 1 positive numbers, among which the k-th largest is at most dloc(V)/k by the definition of the local density. Hence

dhar(V) ≤ ∑_{k=1}^{n−1} dloc(V)/k ≤ (1 + ln n) dloc(V).
We conclude this preparation for the main result by relating the harmonic density to the tree-volume.

Lemma 14.3.5 For every k ≥ 2,

∑_{S⊆V, |S|=k} 1/tvol(S) ≤ n (k − 1)! (4 dhar(V))^{k−1}.
Proof. Let H be the complete graph on V with edge weights βij = 1/d(i, j). Then

∑_{S⊆V, |S|=k} 1/tvol(S) ≤ ∑_{S⊆V, |S|=k} ∑_{T tree, V(T)=S} (k − 1)!/Π(T) ≤ (k − 1)! ∑_T hom(T, H),    (14.16)
where the last summation runs over isomorphism types of trees on k nodes. To estimate homomorphism numbers, we use the fact that for every edge-weighting β of the complete graph Kn in which the edgeweights βij ≥ 0 satisfy ∑_j βij ≤ D for every node i, we have

hom(T, H) ≤ nD^{k−1}.    (14.17)
For integral edgeweights, we can view the left side as counting homomorphisms into a multigraph, and then the inequality follows by simple computation: fixing any node of T as its
root, it can be mapped in n ways, and then fixing any search order of the nodes, each of
the other nodes can be mapped in at most D ways. This implies the bound for rational
edgeweights by scaling, and for real edgeweights, by approximating them by rationals.
By (14.17), each term on the right side of (14.16) is at most n(k − 1)! dhar(V)^{k−1}, and it is well known that the number of isomorphism types of trees on k nodes is bounded by 4^{k−1} (see e.g. [154], Problem 4.18), which proves the Lemma.
14.3.2 Approximating the bandwidth
Now we come to the main theorem describing the algorithm to get an approximation of the
bandwidth.
Theorem 14.3.6 Let G = (V, E) be a graph on n nodes, let k = ⌈ln n⌉, and let x : V → R^d be a contractive representation such that for each set S with |S| = k, the volume of the simplex spanned by x(S) is at least tvol(S)/η^{k−1} for some η ≥ 1. Project the representing points onto a random line L through the origin, and label the nodes by 1, . . . , n according to the order of their projections on the line. Then with high probability, we have |i − j| ≤ 200kη dhar(G) for every edge ij.
Proof. Let yi be the projection of xi on L. Let ij be an edge; since x is contractive we have |xi − xj| ≤ 1, and hence, with high probability, |yi − yj| ≤ 2|xi − xj|/√d ≤ 2/√d for all edges ij, and we will assume below that this is indeed the case.
√
We call a set S ⊆ V bad, if diam(y(S)) ≤ 2/ d. So every edge is bad, but we are
interested in bad k-sets. Let T = {i, i + 1, . . . , j} and |T | = B. Then all the points yu , u ∈ T ,
√
are between yi and yj , and hence diam(y(T )) ≤ 2/ d, so T is bad.
For every k-tuple S ⊆ V, let KS = conv(x(S)). By (15.37) (with d in place of n and k − 1 in place of d),

P(S is bad) ≤ (2e/√(k−1))^{k−1} (π_{k−1}/vol_{k−1}(KS)) (1 + ln diam(x(S)))
    ≤ (2e/√(k−1))^{k−1} (π_{k−1}/vol_{k−1}(KS)) (1 + ln n)
    ≤ (2e/√(k−1))^{k−1} π_{k−1} n / vol_{k−1}(KS).
Using the condition that x is volume-respecting, we get

P(S is bad) ≤ (2eη/√(k−1))^{k−1} π_{k−1} n / tvol(S).
Using Lemma 14.3.5, the expected number of bad k-sets is

∑_{S⊆V, |S|=k} (2eη/√(k−1))^{k−1} π_{k−1} n / tvol(S) ≤ (2eη/√(k−1))^{k−1} π_{k−1} (k − 1)! n² (4dhar(V))^{k−1}.
On the other hand, we have seen that the number of bad sets is at least \binom{B+1}{k}, and hence

\binom{B+1}{k} ≤ (8eη/√(k−1))^{k−1} π_{k−1} (k − 1)! n² dhar(V)^{k−1}.
Using the trivial bound \binom{B+1}{k} ≥ (B − k)^{k−1}/(k − 1)!, we get

B ≤ k + (k − 1)!^{2/(k−1)} (8eη/√(k−1)) π_{k−1}^{1/(k−1)} n^{2/(k−1)} dhar(V) ≤ 200kη dhar(V).
This completes the proof.
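The projection-and-sorting step of the algorithm in Theorem 14.3.6 is a few lines of code; a minimal Python sketch (names are ours), where X holds the representing vectors x(v) as rows:

    import numpy as np

    def random_line_labeling(X, seed=0):
        # project the points onto a random line through the origin and
        # label the nodes 1, ..., n by the order of their projections
        rng = np.random.default_rng(seed)
        u = rng.standard_normal(X.shape[1])
        u /= np.linalg.norm(u)                 # uniform direction on the sphere
        labels = np.empty(len(X), dtype=int)
        labels[np.argsort(X @ u)] = np.arange(1, len(X) + 1)
        return labels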
Corollary 14.3.7 For every finite metric space (V, d) with n points, its bandwidth and local
density satisfy
dloc (V ) ≤ bw(V ) ≤ (log n)5 dloc (V ).
Proof. [To be written.]
Exercise 14.3.8 (a) Prove that every finite ℓ2 -space is isometric with an ℓ1 space. (b) Show by an example that the converse is not valid.
Exercise 14.3.9 Let (V, d) be a metric space. Prove that (V, √d) is a metric space.
Exercise 14.3.10 Prove that every finite metric space (V, d) has a vector-labeling
with volume distortion at most 2 for V .
Exercise 14.3.11 Show that there exists a graph G on n nodes for which
bw(G)/dloc (G) ≥ log n.
Chapter 15
Adjacency matrix and its square
Perhaps the easiest way of assigning a vector to each node is to use the corresponding column
of the adjacency matrix. Even this easy construction has some applications, and we will
discuss one of them.
However, somewhat surprisingly it turns out that a more interesting vector-labeling can
be obtained by using the columns of the square of the adjacency matrix. This construction
is related to the Regularity Lemma of Szemerédi [232], one of the most important tools of
graph theory, and it leads to reasonably efficient algorithms to construct (weak) regularity
partitions.
15.1 Neighborhood representation
We prove the following theorem [142].
Theorem 15.1.1 Let G = (V, E) be a twin-free graph on n vertices, and let r = rk(AG ).
Then n = O(2^{r/2}).
Before starting with the proof, we recall a result from discrete geometry. The kissing
number sd in Rd is the largest integer N such that there are N non-intersecting unit balls
touching a given unit ball. The exact value of sd is only known for special dimensions d, but
the following bound, which follows from tighter estimates by Kabatjanskiĭ and Levenšteĭn [129], will be enough for us:

sd = O(2^{d/2}).    (15.1)

Another way of stating this is that in every set of more than const · 2^{d/2} vectors in R^d, there are two vectors such that the angle between them is less than π/6.
Proof. By induction on r, we prove that n ≤ C · 2^{r/2} − 16, where C ≥ 10 is a constant for which

sd ≤ C · 2^{d/2} − 16    (15.2)
holds for every d. For r = 2 it is easy to see that n ≤ 2, and since s2 = 6, the theorem holds in this case. So we may suppose that r > 2 and, by way of contradiction, n > C · 2^{r/2} − 16.
We will repeatedly use the following observation:
Claim 1 If G is a twin-free graph and an induced subgraph G[Z] of G has twins, then rk(A_{G[Z]}) ≤ rk(AG) − 2.
Indeed, if G[Z] has twins i and j, G must have a node u that distinguishes i and j. This
means that row u of AG is not in the span of the rows in Z, and hence adding this row to
AG[Z] increases its rank. Adding the corresponding column further increases the rank. This
implies that rk(AG[Z] ) ≤ r − 2.
Let G[Z] be a largest induced subgraph of G with rk(AG[Z] ) < r, let t = |Z|, and let
S = V (G) \ Z. Adding any row and the corresponding column of AG to AG[Z] increases its
rank by at most 2, and by the maximality of Z we must get a submatrix of rank r, hence
r − 2 ≤ rk(AG[Z] ) ≤ r − 1.
The key to the proof is a geometric argument using the neighborhood representation of G to prove the inequality

t > 3n/4.    (15.3)
Let ai denote the column of AG corresponding to i ∈ V, and let ui = 1 − 2ai. Clearly the vectors ui are ±1-vectors, and they belong to an (at most) (r + 1)-dimensional subspace.
Applying the kissing number bound (15.2) to the vectors ui, we get that there are two representing vectors ui and uj forming an angle less than π/6. For two ±1 vectors, this means that ui and uj agree in more than 3n/4 positions, say positions 1, . . . , k, where k > 3n/4. Clearly the vectors ai and aj also agree in these positions.
The first k full rows of AG form a submatrix whose rank is strictly smaller than that of AG: indeed, the nonzero vector ai − aj lies in the column space of AG but vanishes in the first k coordinates. Hence the subgraph induced by the first k nodes also has rank smaller than r, and by the maximality of Z we get t ≥ k > 3n/4, which proves (15.3).
If G[Z] does not have twins, then by the induction hypothesis applied to G[Z],

t ≤ C · 2^{(r−1)/2} − 16 < (n + 16)/√2 − 16 < 3n/4,

which contradicts (15.3). So G[Z] has twins, and hence rk(A_{G[Z]}) = r − 2.
Let u ∈ S. By the maximality of Z, the rows of AG corresponding to Z ∪ {u} span its
row-space. This implies that u must distinguish every twin pair in Z, and so u is connected
to exactly one member of every such pair. This in particular implies that G[Z] does not have
three mutually twin nodes. Let T be the set of nodes that are twins in G[Z] and are adjacent
to u, and let T ′ be the set of twins of nodes of T . Let U = Z \ (T ∪ T ′ ), W = T ∪ U and
p = |T | (see Figure 15.1).
Figure 15.1: Node sets in the proof of Theorem 15.1.1. Dashed arrows indicate
twins in G[Z].
We want to apply the induction hypothesis to both G[Z \ T] and G[T]. The graph G[Z \ T] is obtained from G[Z] by deleting one member of each twin-pair, hence it is trivially twin-free, and rk(A_{G[Z\T]}) = rk(A_{G[Z]}) = r − 2. So we can apply the induction hypothesis to get

t − p = |Z \ T| ≤ C · 2^{(r−2)/2} − 16 < (n + 16)/2 − 16 = n/2 − 8.    (15.4)
It is not obvious, however, that G[T ] is twin-free, or what its rank is; to establish this
we need more facts about the structure of our graph. Every v ∈ S is either connected to all nodes of T and none of T′, or the other way around. This is clearly true for the node u
used to define T and T ′ . For every other v ∈ S, row v of AG is a linear combination of rows
corresponding to Z and u. Let, say, the coefficient of row u in this linear combination be
positive; then for every twin pair {i, j} (where i ∈ T and j ∈ T ′ ) we have au,i > au,j , but
(by the definition of twins) aw,i = aw,j for all w ∈ Z, and hence av,i > av,j . This means that
v is adjacent to i, but not to j, and so we get that v is adjacent to all nodes of T but not
to any node of T ′ . If the coefficient of row u in this linear combination is negative, then v
is connected to all nodes of T′ but to no node of T, by a similar argument. Thus we have a
decomposition S = Y ∪ Y ′ , where every node of Y is connected to all nodes of T but to no
node of T ′ , and for nodes in Y ′ , the other way around.
Using the obvious inequality p ≤ t/2 and (15.4), we get that |S| = n − t > 16, and hence
(say) |Y | > 8. Since the rows in Y are all different, this implies that they form a submatrix
of rank at least 4. Adding this submatrix to the rows in T , and using that in their columns
corresponding to T they have zeros, we get a matrix of rank at least rk(AG[T ] ) + 4. This
implies that rk(AG[T ] ) ≤ r − 4.
Next we show that G[T] is twin-free. Suppose not, and let i, j ∈ T be twins in G[T].
They remain twins in G − U : no node in T ′ can distinguish them (because its twin does not),
and no node in S can distinguish them, as we have seen. So G − U is not twin-free, and hence
rk(AG−U ) ≤ r − 2. Using (15.4), we get |V (G − U )| = n − t + 2p > t, which contradicts the
maximality of t.
So G[T] is twin-free, and we can apply induction to G[T], to get

p = |T| ≤ C · 2^{(r−4)/2} − 16 < n/4 − 8.    (15.5)

Adding (15.4) and (15.5), we get that t < 3n/4, which contradicts (15.3).

15.2 Second neighbors and regularity partitions

15.2.1 ε-packings, ε-covers and ε-nets
We formulate some notions and elementary arguments for a general metric space (V, d). A set of nodes S ⊆ V is an ε-cover if for every point v ∈ V we have d(v, S) ≤ ε. (Here, as usual, d(v, S) = min_{s∈S} d(s, v).) We say that S is an ε-packing if d(u, v) ≥ ε for every two distinct u, v ∈ S. An ε-net is a set that is both an ε-packing and an ε-cover. It is clear that a maximal ε-packing must be an ε-cover (and so, an ε-net). If S is an ε-cover, then no (2ε)-packing can have more than |S| elements. On the other hand, every ε-cover has a subset that is both an ε-packing and a (2ε)-cover.
Now suppose that we also have a probability measure on V (for our applications, V will be finite, with the uniform distribution on its points). An average ε-cover is a set S ⊆ V such that ∑_{v∈V} d(v, S) ≤ εn. An average ε-net is an average (2ε)-cover that is also an ε-packing. (It is useful to allow this relaxation by a factor of 2 here.) For every average ε-cover, a maximal subset that is an ε-packing is an average ε-net.
We will need the following simple lemma.
Lemma 15.2.1 Let (V, d) be a finite metric space and let T and S be average ε-covers. Then
there is a subset S ′ ⊆ S such that |S ′ | ≤ |T | and S ′ is an average (3ε)-cover.
Proof. For every point in T choose a nearest point of S, and let S ′ be the set of points
chosen this way. Clearly |S ′ | ≤ |T |. For every x ∈ V , let y ∈ S and z ∈ T be the points
nearest to x. Then d(z, S) ≤ d(z, y) ≤ d(x, z) + d(x, y) = d(x, T ) + d(x, S), and hence, by its
definition, S ′ contains a point y ′ with d(z, y ′ ) ≤ d(x, T ) + d(x, S). Hence
d(x, S ′ ) ≤ d(x, y ′ ) ≤ d(x, z) + d(z, y ′ ) ≤ 2d(x, T ) + d(x, S).
Averaging over x, the lemma follows.
We define the Voronoi cells of a (finite) set S ⊆ V as the partition which has a partition class (“cell”) Vs for each s ∈ S, and every point v ∈ V is put into the cell Vs for which s is the point of S closest to v. For our purposes, ties can be broken arbitrarily. If the metric space is
a euclidean space, then Voronoi cells have many nice geometric properties (for example, they
are polyhedra). In our case they will not be so nice, but it is clear that if S is an ε-cover,
then the Voronoi cells of S have diameter at most 2ε.
How do we construct ε-nets? Assuming that V is finite and we can compute d(u, v) efficiently, we can go through the nodes in any order, and build up the ε-net S by adding a new node v to S if d(v, S) ≥ ε. In other words, S is a greedily selected maximal ε-packing, and therefore an ε-net.
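A minimal Python sketch of this greedy construction (names are ours):

    def greedy_eps_net(points, dist, eps):
        # scan the points in any order; keep a point iff it is at distance
        # >= eps from everything kept so far (a maximal eps-packing, hence
        # an eps-net)
        net = []
        for p in points:
            if all(dist(p, q) >= eps for q in net):
                net.append(p)
        return net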
Average ε-nets can also be obtained by a simple algorithm. Suppose that the underlying set V(G) is too large to go through each node, but we have a mechanism to select a uniformly distributed random node (this is the setup in the theory of graph property testing). Again we build up S randomly, in each step generating a new random node and adding it to S if its distance from S is at least ε. For some pre-specified K ≥ 1, we stop if for K/ε consecutive steps no new point has been added to S. The set S formed this way is an ε-packing, but not necessarily a maximal one. However, it is likely to be an average (2ε)-cover. Indeed, let t be the number of points x with d(x, S) > ε. These points are at distance at most 1 from S, and hence

∑_{x∈V} d(x, S) ≤ (n − t)ε + t < nε + t.

So if S is not an average (2ε)-cover, then t > εn. The probability that we have not hit this set for K/ε consecutive steps is less than e^{−K}.
We can modify this procedure to find an almost optimal ε-cover. Consider the smallest
average ε-cover T (we don’t know which points belong to T , of course), and construct a
(possibly much larger) average ε-cover S by the procedure described above. By Lemma
15.2.1, S contains an average (3ε)-cover S′ of size |S′| ≤ |T|. We can try all |T|-subsets of S, and find
S′.
This clearly inefficient last step is nevertheless useful in the “property testing” or “constant time algorithm” model. Let us assume that (V, d) is a very large metric space (it could even be infinite), endowed with a probability measure, and that we can (a) generate independent random points of V and (b) compute the distance of two points. Also suppose that we know a bound K(ε) on the maximum size of an ε-packing, and let k(ε) be the minimum size of an average ε-cover. Then in O(K(ε)^{k(ε)}/ε²) time we can construct an average (4ε)-cover. (The point is that the time bound depends only on ε, not on the size of V.)
15.2.2 Voronoi cells and regularity partitions
We define the 2-neighborhood representation of a graph G = (V, E) as the map i ↦ ui, where ui = A²ei is the column of A² corresponding to node i (where A = AG is the adjacency matrix of G). Squaring the matrix seems unnatural, but it is crucial. We define a distance between the nodes, called the 2-neighborhood distance (or similarity distance), by

d(s, t) = (1/n²) ∥us − ut∥1.

This normalization ensures that the distance of any two nodes is at most 1.
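Computing all pairwise 2-neighborhood distances amounts to squaring the matrix and comparing its columns in ℓ1; a minimal numpy sketch (names are ours; the O(n³) memory of the broadcast is fine for a small illustration):

    import numpy as np

    def similarity_distances(A):
        # column s of A^2 counts walks of length 2 from s to every node;
        # d(s,t) = ||A^2 (e_s - e_t)||_1 / n^2
        n = A.shape[0]
        U = A @ A
        return np.abs(U[:, :, None] - U[:, None, :]).sum(axis=0) / n**2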
Example 15.2.2 To illustrate the substantial difference between the 1-neighborhood and
2-neighborhood metrics, let us consider a graph with a very simple structure: Let V (G) =
V1 ∪ V2 , where |V1 | = |V2 | = n/2, and let any two nodes in V1 be connected with probability
p, any two nodes in V2 , with probability q, and let any node in V1 be connected to any node
in V2 with probability r.
The following was proved (in a somewhat different form) in [170].
Theorem 15.2.3 Let G = (V, E) be a simple graph, and let d be its 2-neighborhood distance.

(a) If S ⊆ V is an average ε-cover, then the Voronoi cells of S define a partition P of V such that d_□(G, GP) ≤ 8√ε.

(b) If P = {V1, . . . , Vk} is a partition of V such that d_□(G, GP) ≤ ε, then we can select elements si ∈ Vi so that {s1, . . . , sk} is an average (4ε)-cover.

Combining with the Weak Regularity Lemma, it follows that every graph has an average ε-cover, with respect to the 2-neighborhood metric, satisfying

|S| ≤ exp(O(1/ε²)).

It also follows that it has a similarly bounded average ε-net.
Proof. (a) Let S = {s1, . . . , sk}, and let P = {V1, . . . , Vk} be the partition of V defined by the Voronoi cells of S (where si ∈ Vi). For v ∈ Vt, let ϕ(v) = st. Define two matrices P, Q ∈ R^{V×V} by

Pij = 1/|Vt| if i, j ∈ Vt (and 0 otherwise),    Qij = 1 if j = ϕ(i) (and 0 otherwise).
Clearly P² = P. If x ∈ R^V, then Px is the vector whose i-th coordinate is the average of xj over the class of P containing i. Similarly, PAP = AP. The vector Qx is obtained from x by replacing the entry in position i ∈ Vt by the entry in position st.
We want to show that ∥R∥_□ ≤ 8√ε n², where R = A − AP = A − PAP; by the definition of d_□ this proves (a). It suffices to show that for every vector x ∈ {0, 1}^V,

⟨x, Rx⟩ ≤ 2√ε n².    (15.6)

Write y = x − Px; then Py = Qy = 0 and Ay = Ry = Rx. Hence

⟨x, Rx⟩ = ⟨y, Rx⟩ + ⟨Px, Rx⟩ = ⟨x, Ry⟩ + ⟨Px, Ry⟩ = ⟨x + Px, Ay⟩ ≤ 2∥Ay∥1 ≤ 2√n ∥Ay∥2.    (15.7)
Since ∥y∥∞ ≤ 1, we have here

∥Ay∥2² = ⟨y, A²y⟩ = ⟨y, (A² − A²Q)y⟩ ≤ ∥A² − A²Q∥1 = ∑_k ∥uk − u_{ϕ(k)}∥1 = ∑_k d(k, ϕ(k)) n² = ∑_k d(k, S) n² ≤ εn³.
Combining with (15.7), this proves (15.6).
(b) Suppose that P is a weak regularity partition with error ε, i.e., d_□(G, GP) ≤ ε. Let R = A − PAP; then ∥R∥_□ ≤ εn². Let i, j be two nodes in the same partition class of P. Then Pei = Pej, and hence A(ei − ej) = R(ei − ej). Thus

∥ui − uj∥1 = ∥A²(ei − ej)∥1 = ∥AR(ei − ej)∥1 ≤ ∥ARei∥1 + ∥ARej∥1.    (15.8)
For every set T ∈ P, choose a point vT ∈ T for which ∥ARe_{vT}∥1 is minimal, and let S = {vT : T ∈ P}. Then using (15.8) and (15.18),

∑_i d(i, S) ≤ (1/n²) ∑_{T∈P} ∑_{i∈T} ∥ui − u_{vT}∥1 ≤ (1/n²) ∑_{T∈P} ∑_{i∈T} (∥ARei∥1 + ∥ARe_{vT}∥1)
    = (1/n²) ∑_i ∥ARei∥1 + (1/n²) ∑_{T∈P} |T| ∥ARe_{vT}∥1 ≤ (2/n²) ∑_i ∥ARei∥1 = (2/n²) ∥AR∥1
    = (2/n²) ∥RA∥1 = (2/n²) ∑_i ∥RAei∥1 ≤ (2/n²) ∑_i 2∥R∥_□ ∥Aei∥∞ ≤ 4εn.
This completes the proof.
What can we say about ε-nets with respect to the 2-neighborhood metric, not in the
average but in the exact sense? In a general metric space there would be no relationship
between the sizes of these ε-nets, since exceptional points, far from everything else,
must be included, and the number of these could be much larger than the size of the average
ε-net, even if it is negligible relative to the whole size of the space. It is somewhat surprising
that in the case of graphs, even exact ε-nets can be bounded by a function of ε (although
the bound is somewhat worse than for average ε-nets). The following result was proved by
Alon (unpublished).
Theorem 15.2.4 In any graph G, every set of k nodes contains two nodes s and t with

d(s, t) ≤ 20 √(log log k / log k).

In other words, every ε-net (or ε-packing) S in a graph, with respect to the 2-neighborhood metric, satisfies

|S| ≤ exp(O((1/ε²) log(1/ε))).
It follows that every graph has an ε-cover of this size.
Proof. Fix s, t ∈ V; then we have

d(s, t) = (1/n²) ∥A²(es − et)∥1.

Let us substitute for the second factor of A² its decomposition from Lemma 15.3.4:

A = ∑_{i=1}^{q} ai 1_{Si} 1_{Ti}ᵀ + R,    (15.9)

where q = ⌊(log k)/(log log k)⌋, ∑_i ai² ≤ 4, and ∥R∥_□ ≤ 4/√q. We get

d(s, t) ≤ (1/n²) ∑_{i=1}^{q} |ai| ∥1_{Si} 1_{Ti}ᵀ A(es − et)∥1 + (1/n²) ∥RA(es − et)∥1.
The remainder term is small by the bound on R and by (15.18):

(1/n²) ∥RA(es − et)∥1 ≤ 4∥R∥_□ ≤ 16/√q.
Considering a particular term of the first sum,

∥1_{Si}(1_{Ti}ᵀAes − 1_{Ti}ᵀAet)∥1 = ∥1_{Si}∥1 |1_{Ti}ᵀAes − 1_{Ti}ᵀAet| ≤ n |A(s, Ti) − A(t, Ti)|.
Hence

d(s, t) ≤ (1/n) ∑_{i=1}^{q} |ai| |A(s, Ti) − A(t, Ti)| + 16/√q.
Since k > q^q, by the pigeon-hole principle, any given set of k nodes contains two nodes s and t such that |A(s, Ti) − A(t, Ti)| ≤ n/q for all i, and for this choice of s and t we have

d(s, t) ≤ (1/q) ∑_{i=1}^{q} |ai| + 16/√q ≤ ((1/q) ∑_{i=1}^{q} ai²)^{1/2} + 16/√q ≤ 18/√q < 20 √(log log k / log k).
This proves the theorem.
15.2.3 Computing regularity partitions
The results above can be applied to the computation of (weak) regularity partitions in the
property testing model for dense graphs. This topic is treated in detail in [165], so we only
sketch the model and the application.
Exercise 15.2.5 Let (V, d) be a finite metric space and let T and S be (exact!)
ε-covers. Then there is a subset S ′ ⊆ S such that |S ′ | ≤ |T | and S ′ is a (4ε)-cover.
Appendix: Background material
15.3 Graph theory

15.3.1 Basics
For a simple graph G = (V, E), we denote by Ḡ = (V, Ē) the complementary graph. We
usually denote the number of nodes and edges of a graph G by n and m, respectively. For
S ⊆ V, we denote by G[S] = (S, E[S]) the subgraph induced by S.
For X ⊆ V, we denote by N(X) the set of neighbors of X, i.e., the set of nodes j ∈ V for which there is a node i ∈ X such that ij ∈ E. Warning: N(X) may intersect X (if X is not a stable set).
For X, Y ⊆ V , we denote by κ(X, Y ) the maximum number of vertex disjoint paths from
X to Y (if X and Y are not disjoint, some of these paths may be single nodes). By Menger’s
Theorem, κ(X, Y ) is the minimum number of nodes that cover all X − Y paths in G. The
graph G is k-connected if and only if |V | > k and κ(X, Y ) = k for any two k-subsets X and
Y . The largest k for which this holds is the vertex-connectivity of G, denoted κ(G). The
complete graph Kn is (n − 1)-connected but not n-connected.
BG (v, t), where v ∈ V , denotes the set of nodes at distance at most t from v.
In a directed graph, for S ⊆ V (G), let ∇+ (S) [∇− (S)] denote the set of oriented edges
emanating from S [entering S].
15.3.2 Graph products
There are many different ways of multiplying two graphs F and G, of which three play a role in this book. All three are defined on the node set V(F) × V(G). Most common is perhaps the cartesian product (sometimes also called cartesian sum) F□G, where (u1, v1) and (u2, v2) (u1, u2 ∈ V(F), v1, v2 ∈ V(G)) are adjacent if and only if u1 = u2 and v1v2 ∈ E(G), or u1u2 ∈ E(F) and v1 = v2. In the weak (or categorial) product F × G, (u1, v1) and (u2, v2) are adjacent if and only if u1u2 ∈ E(F) and v1v2 ∈ E(G). Finally, in the strong product F ⊠ G, (u1, v1) and (u2, v2) are adjacent if and only if either u1u2 ∈ E(F) and v1v2 ∈ E(G), or u1u2 ∈ E(F) and v1 = v2, or u1 = u2 and v1v2 ∈ E(G) (Figure 15.2).
It is easy to check that all three products are commutative and associative (up to isomorphism), and F ⊠ G = (F × G) ∪ (F □ G). Another way of looking at the strong product is to add a loop at each node, and then take the weak product. Note that in all three cases, the operation sign is a little pictogram of the corresponding product of two copies of K2. The k-times iterated cartesian product of a graph G with itself is denoted by G^{□k}, and we define G^{×k} and G^{⊠k} analogously. As an example, the graph K2^{□k} is the skeleton of the k-dimensional cube.
Figure 15.2: The strong, weak and cartesian products of two paths on four nodes.
15.3.3 Chromatic number and perfect graphs
Recall that for a graph G, we denote by ω(G) the size of its largest clique, and by χ(G),
its chromatic number. A graph G is called perfect, if for every induced subgraph G′ of G,
we have ω(G′ ) = χ(G′ ). To be perfect is a rather strong structural property; nevertheless,
many interesting classes of graphs are perfect (bipartite graphs, their complements and their
linegraphs; interval graphs; comparability and incomparability graphs of posets; chordal
graphs). We do not discuss perfect graphs, except to the extent needed to show their connection
with orthogonal representations.
The following deep characterization of perfect graphs was conjectured by Berge in 1961
and proved by Chudnovsky, Robertson, Seymour and Thomas [46].
Theorem 15.3.1 (The Strong Perfect Graph Theorem [46]) A graph is perfect if and
only if neither the graph nor its complement contains a chordless odd cycle longer than 3.
As a corollary we can formulate the “The Weak Perfect Graph Theorem” proved much
earlier [153]:
Theorem 15.3.2 The complement of a perfect graph is perfect.
From this theorem it follows that in the definition of perfect graphs we could replace the equation ω(G′) = χ(G′) by α(G′) = χ̄(G′), where χ̄ denotes the clique cover number.
15.3.4 Regularity partitions
Very briefly we collect some facts about regularity lemmas.
Let G = (V, E) be a graph on n nodes. For S, T ⊆ V, let eG(S, T) denote the number of edges in G connecting S and T. If G is edge-weighted, then the weights of the edges should be added up. If S = T, then we define eG(S, S) = 2|E(G[S])|.
Let P = {V1, . . . , Vk} be a partition of V. We define the weighted graph GP on V by taking the complete graph and weighting its edge uv by eG(Vi, Vj)/(|Vi| |Vj|) if u ∈ Vi and v ∈ Vj. We allow the case u = v (and then, of course, i = j), so GP has loops. Let A be the adjacency matrix of G; then we denote by AP the weighted adjacency matrix of GP. The matrix AP is obtained by averaging the entries of A over every block Vi × Vj. For S, T ⊆ V, let A(S, T) = ∑_{i∈S, j∈T} Aij; then

(AP)u,v = A(Vi, Vj)/(|Vi| |Vj|)    for u ∈ Vi, v ∈ Vj.
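The averaging that produces AP is two nested loops over the classes; a minimal numpy sketch (names are ours):

    import numpy as np

    def averaged_matrix(A, parts):
        # parts: list of index arrays, one per partition class; A_P replaces
        # every block V_i x V_j of A by the average of its entries
        AP = np.empty_like(A, dtype=float)
        for Vi in parts:
            for Vj in parts:
                AP[np.ix_(Vi, Vj)] = A[np.ix_(Vi, Vj)].mean()
        return AP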
The Regularity Lemma says, roughly speaking, that the node set of every graph has an
equitable partition P into a “small” number of classes such that GP is “close” to G. Various
(non-equivalent) forms of this lemma can be proved, depending on what we mean by “close”.
The version we need was proved by Frieze and Kannan [92]. It will be convenient to define,
for two graphs G and H on the same set of n nodes,
d_□(G, H) = (1/n²) ∥AG − AH∥_□.
Lemma 15.3.3 (Weak Regularity Lemma) For every k ≥ 1 and every graph G = (V, E), V has a partition P into k classes such that

d_□(G, GP) ≤ 2/√(log k).
We do not require here that P be an equitable partition; it is not hard to see that this version implies that there is also an equitable partition with a similar property, we just have to double the error bound.
There is another version of the weak regularity lemma, that is sometimes even more
useful.
Lemma 15.3.4 For every matrix A ∈ R^{n×n} and every k ≥ 1 there are k 0-1 matrices Q1, . . . , Qk of rank 1 and k real numbers ai such that ∑_i ai² ≤ 4 and

∥A − ∑_{i=1}^{k} ai Qi∥_□ ≤ 4/√k.
Note that a 0-1 matrix Q of rank 1 is determined by two sets S, T ⊆ [n] by the formula
Q = 1S(1T)ᵀ. The important fact about this lemma is that the number of terms (k) and the error bound (4/√k) are polynomially related.
15.4 Linear algebra

15.4.1 Basic facts about eigenvalues
Let A be an n × n real matrix. An eigenvector of A is a nonzero vector x such that Ax is parallel to x; in other words, Ax = λx for some real or complex number λ. This number λ is called the eigenvalue of A belonging to the eigenvector x. Clearly λ is an eigenvalue iff the matrix A − λI is singular, equivalently, iff det(A − λI) = 0. This is an algebraic equation of degree n for λ, and hence has n roots (with multiplicity).
The trace of the square matrix A = (Aij) is defined as

tr(A) = ∑_{i=1}^{n} Aii.
The trace of A is the sum of the eigenvalues of A, each taken with the same multiplicity as
it occurs among the roots of the equation det(A − λI) = 0.
If the matrix A is symmetric, then its eigenvalues and eigenvectors are particularly well
behaved. All the eigenvalues are real. Furthermore, there is an orthogonal basis v1 , . . . , vn
of the space consisting of eigenvectors of A, so that the corresponding eigenvalues λ1 , . . . , λn
are precisely the roots of det(A − λI) = 0. We may assume that |v1| = · · · = |vn| = 1; then A can be written as

A = ∑_{i=1}^{n} λi vi viᵀ.
Another way of saying this is that every symmetric matrix can be written as U T DU , where
U is an orthogonal matrix and D is a diagonal matrix. The eigenvalues of A are just the
diagonal entries of D.
To state a further important property of eigenvalues of symmetric matrices, we need the
following definition. A symmetric minor of A is a submatrix B obtained by deleting some
rows and the corresponding columns.
Theorem 15.4.1 (Interlacing eigenvalues) Let A be an n × n symmetric matrix with
eigenvalues λ1 ≥ · · · ≥ λn . Let B be an (n − k) × (n − k) symmetric minor of A with
eigenvalues µ1 ≥ · · · ≥ µn−k . Then
λi ≥ µi ≥ λi+k.
We conclude this little overview with a further basic fact about eigenvectors of nonnegative matrices. The following classical theorem concerns the largest eigenvalue and the eigenvectors corresponding to it.
Theorem 15.4.2 (Perron-Frobenius) If an n × n matrix has nonnegative entries then it
has a nonnegative real eigenvalue λ which has maximum absolute value among all eigenvalues.
15.4. LINEAR ALGEBRA
241
This eigenvalue λ has a nonnegative real eigenvector. If, in addition, the matrix has no block-triangular decomposition (i.e., it does not contain a k × (n − k) block of 0's disjoint from the
diagonal), then λ has multiplicity 1 and the corresponding eigenvector is positive.
For us, the case of symmetric matrices is important; in this case the fact that the eigenvalue λ is real is trivial, but the information about its multiplicity and the signature of the
corresponding eigenvector is used throughout the book.
15.4.2 Semidefinite matrices
A symmetric n × n matrix A is called positive semidefinite, if all of its eigenvalues are
nonnegative. This property is denoted by A ≽ 0. The matrix is positive definite, if all of its
eigenvalues are positive.
There are many equivalent ways of defining positive semidefinite matrices, some of which
are summarized in the Proposition below.
Proposition 15.4.3 For a real symmetric n × n matrix A, the following are equivalent:

(i) A is positive semidefinite;

(ii) the quadratic form xᵀAx is nonnegative for every x ∈ Rⁿ;

(iii) A can be written as the Gram matrix of n vectors u1, . . . , un ∈ R^m for some m; this means that Aij = uiᵀuj. Equivalently, A = UᵀU for some matrix U;

(iv) A is a nonnegative linear combination of matrices of the type xxᵀ;

(v) the determinant of every symmetric minor of A is nonnegative.
Let me add some comments. The least m for which a representation as in (iii) is possible
is equal to the rank of A. It follows e.g. from (ii) that the diagonal entries of any positive
semidefinite matrix are nonnegative, and it is not hard to work out the case of equality: all
entries in a row or column with a 0 diagonal entry are 0 as well. In particular, the trace of
a positive semidefinite matrix A is nonnegative, and tr(A) = 0 if and only if A = 0.
The sum of two positive semidefinite matrices is again positive semidefinite (this follows
e.g. from (ii) again). The simplest positive semidefinite matrices are of the form aaT for
some vector a (by (ii): we have xT (aaT )x = (aT x)2 ≥ 0 for every vector x). These matrices
are precisely the positive semidefinite matrices of rank 1. Property (iv) above shows that
every positive semidefinite matrix can be written as the sum of rank-1 positive semidefinite
matrices.
The product of two positive semidefinite matrices A and B is not even symmetric in
general (and so it is not positive semidefinite); but the following can still be claimed about
the product:
Proposition 15.4.4 If A and B are positive semidefinite matrices, then tr(AB) ≥ 0, and
equality holds iff AB = 0.
Property (v) provides a way to check whether a given matrix is positive semidefinite.
This works well for small matrices, but it becomes inefficient very soon, since there are many
symmetric minors to check. An efficient method to test if a symmetric matrix A is positive
semidefinite is the following algorithm. Carry out Gaussian elimination on A, pivoting always
on diagonal entries. If you ever find a negative diagonal entry, or a zero diagonal entry whose
row contains a non-zero, stop: the matrix is not positive semidefinite. If you obtain an
all-zero matrix (or eliminate the whole matrix), stop: the matrix is positive semidefinite.
If this simple algorithm finds that A is not positive semidefinite, it also provides a certificate in the form of a vector v with v T Av < 0 (Exercise 15.4.6).
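A minimal Python sketch of this test (names are ours; each step pivots on a diagonal entry and passes to the Schur complement):

    import numpy as np

    def is_psd(A, tol=1e-12):
        # Gaussian elimination pivoting only on diagonal entries
        A = np.array(A, dtype=float)
        for i in range(A.shape[0]):
            if A[i, i] < -tol:
                return False                        # negative diagonal entry
            if abs(A[i, i]) <= tol:
                if np.any(np.abs(A[i, i:]) > tol):
                    return False                    # zero pivot with nonzero row
                continue
            # eliminate row and column i (pass to the Schur complement)
            A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:]) / A[i, i]
            A[i, i+1:] = A[i+1:, i] = 0.0
        return True                                 # eliminated without failure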
We can think of n × n matrices as vectors with n2 coordinates. In this space, the usual
inner product is written as A · B. This should not be confused with the matrix product AB.
However, we can express the inner product of two n × n matrices A and B as follows:
A · B = ∑_{i=1}^{n} ∑_{j=1}^{n} Aij Bij = tr(AᵀB).
Positive semidefinite matrices have some important properties in terms of the geometry of this space. To state these, we need two definitions. A convex cone in Rⁿ is a set of vectors that is closed under sum and under multiplication by positive scalars. Note that according to this definition, the set Rⁿ is a convex cone. We call the cone pointed if the origin is a vertex of it; equivalently, if it does not contain a line. Any system of homogeneous linear inequalities

a1ᵀx ≥ 0, . . . , amᵀx ≥ 0

defines a convex cone; convex cones defined by such (finite) systems are called polyhedral.
For every convex cone C, we can form its polar cone C ∗ , defined by
C ∗ = {x ∈ Rn : xT y ≥ 0 ∀y ∈ C}.
This is again a convex cone. If C is closed (in the topological sense), then we have (C ∗ )∗ = C.
The fact that the sum of two positive semidefinite matrices is again positive semidefinite
(together with the trivial fact that every positive scalar multiple of a positive semidefinite
matrix is positive semidefinite), translates into the geometric statement that the set of all
positive semidefinite matrices forms a convex closed cone Pn in Rn×n with vertex 0. This
cone Pn is important, but its structure is quite non-trivial. In particular, it is non-polyhedral
for n ≥ 2; for n = 2 it is a nice rotational cone (Figure 15.3; the fourth coordinate x21 , which
is always equal to x12 by symmetry, is suppressed). For n ≥ 3 the situation becomes more
complicated, because Pn is neither polyhedral nor smooth: any matrix of rank less than n − 1
is on the boundary, but the boundary is not differentiable at that point.
The polar cone of Pn is itself; in other words,
Proposition 15.4.5 A matrix A is positive semidefinite iff A · B ≥ 0 for every positive
semidefinite matrix B.
Figure 15.3: The semidefinite cone for n = 2.
Exercise 15.4.6 Let A ∈ R^{n×n} be a symmetric matrix, and let us carry out Gaussian elimination on A, pivoting on a nonzero diagonal entry as long as this is possible. Suppose that we find a negative diagonal entry, or a zero diagonal entry whose row contains a nonzero entry (so that the algorithm concludes that A is not positive semidefinite). Construct a vector v ∈ Rⁿ such that vᵀAv < 0.
15.4.3 Cross product
This construction is probably familiar from physics. For a, b ∈ R3 , we define their cross
product as the vector
a × b = |a| · |b| · sin ϕ · u,
(15.10)
where ϕ is the angle between a and b (0 ≤ ϕ ≤ π), and u is a unit vector in R3 orthogonal
to the plane of a and b, so that the triple (a, b, u) is right-handed (positively oriented). The
definition of u is ambiguous if a and b are parallel, but then sin ϕ = 0, so the cross product
is 0 anyway. The length of the cross product gives the area of the parallelogram spanned by
a and b.
The cross product is distributive with respect to linear combinations of vectors; it is anticommutative: a × b = −b × a, and a × b = 0 if and only if a and b are parallel. The
cross product is not associative; instead, it satisfies the Expansion Identity
(a × b) × c = (a · c)b − (b · c)a,
(15.11)
which implies the Jacobi Identity
(a × b) × c + (b × c) × a + (c × a) × b = 0.
(15.12)
244
CHAPTER 15. ADJACENCY MATRIX AND ITS SQUARE
Another useful replacement for the associativity is the following.
(a × b) · c = a · (b × c) = det(a, b, c)
(15.13)
(here (a, b, c) is the 3 × 3 matrix with columns a, b and c).
We often use the cross product in the special case when the vectors lie in a fixed plane Π. Let k be a unit vector normal to Π; then a × b = Ak, where A is the signed area of the parallelogram spanned by a and b (this means that A is positive iff a positive rotation takes the direction of a to the direction of b, when viewed from the direction of k). Thus in this case all the information about a × b is contained in this scalar A, which in tensor algebra would be denoted by a ∧ b. But in order not to complicate notation, we'll use the cross product in this case as well.
15.4.4 Matrices associated with graphs
Let G be a (finite, undirected, simple) graph with node set V(G) = {1, . . . , n}. The adjacency matrix of G is defined as the n × n matrix AG = (Aij) in which

Aij = 1 if i and j are adjacent, and Aij = 0 otherwise.
We can extend this definition to the case when G has multiple edges: we just let Aij be the number of edges connecting i and j. We can also have weights on the edges, in which case we let Aij be the weight of the edge ij. We could also allow loops and include this information in the diagonal, but we don't need this in this book.
The Laplacian of the graph is defined as the n × n matrix LG = (Lij) in which

Lij = di if i = j, and Lij = −Aij if i ≠ j.

Here di denotes the degree of node i. In the case of weighted graphs, we define di = ∑_j Aij. So LG = DG − AG, where DG is the diagonal matrix of the degrees of G.
The (generally non-square) incidence matrix of G comes in two flavors. Let V(G) = {1, . . . , n} and E(G) = {e1, . . . , em}, and let BG denote the n × m matrix for which

(BG)ij = 1 if i is an endpoint of ej, and (BG)ij = 0 otherwise.

Often, however, the following matrix is more useful. Let us fix an orientation of each edge, to get an oriented graph G⃗. Then let B_{G⃗} denote the V × E matrix for which

(B_{G⃗})ij = 1 if i = h(j), (B_{G⃗})ij = −1 if i = t(j), and (B_{G⃗})ij = 0 otherwise

(where h(j) and t(j) denote the head and the tail of the oriented edge j).
Changing the orientation only means scaling some columns by −1, which often does not matter much. For example, it is easy to check that, independently of the orientation,

LG = B_{G⃗} B_{G⃗}ᵀ.    (15.14)
It is worthwhile to express this equation in terms of quadratic forms:

xᵀ LG x = ∑_{ij∈E(G)} (xi − xj)².    (15.15)
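A quick numerical check of (15.14) in a short Python sketch (the names and the arbitrary orientation are ours):

    import numpy as np

    def laplacian_two_ways(n, edges):
        # builds L_G both as D - A and as B B^T for an arbitrary
        # orientation of the edges, illustrating (15.14)
        A = np.zeros((n, n))
        B = np.zeros((n, len(edges)))
        for j, (u, v) in enumerate(edges):   # orient each edge as u -> v
            A[u, v] = A[v, u] = 1
            B[v, j], B[u, j] = 1, -1         # +1 at the head, -1 at the tail
        L = np.diag(A.sum(axis=1)) - A
        assert np.allclose(L, B @ B.T)       # equation (15.14)
        return L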
The matrices AG and LG are symmetric, so their eigenvalues are real. Clearly ∑_j Lij = 0, or in matrix form, L1 = 0, so L has a 0 eigenvalue. Equation (15.14) implies that L is positive semidefinite, so 0 is its smallest eigenvalue. The Perron–Frobenius Theorem implies that if G is connected, then the largest eigenvalue λmax of AG has multiplicity 1. Applying the Perron–Frobenius Theorem to cI − LG with a sufficiently large scalar c, we see that for a connected graph, the eigenvalue 0 of LG has multiplicity 1.
Let G = (V, E) be a graph. A G-matrix is any matrix M ∈ R^{V×V} such that Mij = 0 whenever i ≠ j and ij ∉ E. A G-matrix is well-signed if Mij < 0 for all ij ∈ E. (Note that we have not imposed any condition on the diagonal entries.)
15.4.5 Norms
For vectors, we need the standard norms. The euclidean norm of a vector x will be denoted
by |x|; the ℓp norm, by ∥x∥p . So ∥x∥2 = |x|.
For a matrix M ∈ R^{U×V} and for S ⊆ U, T ⊆ V, let M(S, T) = ∑_{i∈S, j∈T} Mij. The cut-norm of a matrix M ∈ R^{m×n} is defined as

∥M∥_□ = max_{S⊆[m], T⊆[n]} |M(S, T)|.
We also need the (entrywise) 1-norm and max-norm of a matrix:

∥M∥1 = ∑_{i=1}^{m} ∑_{j=1}^{n} |Mij|,    ∥M∥∞ = max_{i,j} |Mij|.
Clearly ∥M∥_□ ≤ ∥M∥1 ≤ mn∥M∥∞. The cut-norm satisfies

|xᵀMy| ≤ 4∥M∥_□ ∥x∥∞ ∥y∥∞    (15.16)

for any two vectors x ∈ R^m and y ∈ R^n. (If one of the vectors x or y is nonnegative, then we can replace the coefficient 4 by 2; if both are nonnegative, we can replace it by 1.) We can restate this fact as

∥M∥_□ = max{|xᵀMy| : x, y ∈ [0, 1]ⁿ}.    (15.17)
Another way of stating this inequality is that for a matrix M ∈ R^{n×n} and a vector x ∈ Rⁿ,

∥Mx∥1 ≤ 4∥M∥_□ ∥x∥∞.    (15.18)

(If x ≥ 0, then the coefficient 4 can be replaced by 2.)
For several other useful properties of the cut norm, we refer to [165], Section 8.1.
15.5 Convex bodies

15.5.1 Polytopes and polyhedra
The convex hull of a finite set of points in Rd is called a (convex) polytope. The intersection of
a finite number of halfspaces in Rd is called a (convex) polyhedron. (We’ll drop the adjective
“convex”, because we never need to talk about non-convex polyhedra.)
Proposition 15.5.1 Every polytope is a polyhedron. A polyhedron is a polytope if and only
if it is bounded.
For every polyhedron, there is a unique smallest affine subspace that contains it, called its
affine hull. The dimension of a polyhedron is the dimension of its affine hull. A polyhedron
[polytope] in Rd that has dimension d (equivalently, that has an interior point) is called a
d-polyhedron [d-polytope].
A hyperplane H is said to support the polytope if it has a point in common with the
polytope and the polytope is contained in one of the closed halfspaces with boundary H.
A face of a polytope is its intersection with a supporting hyperplane. A face of a polytope
that has dimension one less than the dimension of the polytope is called a facet. A face of
dimension 0 (i.e., a single point) is called a vertex.
Proposition 15.5.2 Every face of a polytope is a polytope. Every vertex of a face is a vertex
of the polytope. Every polytope has a finite number of faces.
Every polytope is the convex hull of its vertices. The set of vertices is the unique minimal
finite set of points whose convex hull is the polytope.
Every facet of a d-polyhedron P spans a (unique) hyperplane, and this hyperplane is the
boundary of a uniquely determined halfspace that contains the polyhedron. The polyhedron
is the intersection of the halfspaces determined by its facets this way.
15.5.2 Polyhedral combinatorics
Some of the applications of geometric representations in combinatorics make use of polyhedra
defined by combinatorial structures, and we need some of these. There is a rich literature of
such polyhedra.
Example 15.5.3 (Stable set polytopes) [To be written.]
Example 15.5.4 (Flow polytopes, edge version) [To be written.]
Example 15.5.5 (Flow polytopes, node version) [To be written.]
15.5.3 Polar, blocker and antiblocker
Let P be a d-polytope containing the origin as an interior point. Then the polar of P is
defined as
P∗ = {x ∈ Rd : xᵀy ≤ 1 ∀y ∈ P}.
Proposition 15.5.6 (a) The polar of a polytope is a polytope. For every polytope P we have
(P ∗ )∗ = P .
(b) Let v0, . . . , vm be the vertices of a k-dimensional face F of P. Then

F⊥ = {x ∈ P∗ : v0ᵀx = 1, . . . , vmᵀx = 1}

defines a (d − k − 1)-dimensional face of P∗. Furthermore, (F⊥)⊥ = F.
In particular, every vertex v of P corresponds to a facet v⊥ of P ∗ and vice versa. The
vector v is a normal vector of the facet v⊥ .
There are two constructions similar to polarity that concern polyhedra that do not contain
the origin in their interior; rather, they are contained in the nonnegative orthant.
A convex body K ⊆ Rd is called ascending, if K ⊆ Rd+ , and whenever x ∈ K, y ∈ Rd and
y ≥ x then y ∈ K. The blocker of an ascending convex set K is defined by
K^{bl} = {x ∈ R^d_+ : xᵀy ≥ 1 ∀y ∈ K}.
Proposition 15.5.7 The blocker of an ascending convex set is a closed ascending convex
set. The blocker of an ascending polyhedron is an ascending polyhedron. For every ascending
closed convex set K, we have (K bl )bl = K.
The correspondence between faces of a polyhedron P and P bl is a bit more complicated
than for polarity, and we describe the relationship between vertices and facets only. Every
vertex v of P gives rise to a facet v⊥, which determines the halfspace vT x ≥ 1. This construction gives all the facets of P bl , except possibly those corresponding to the nonnegativity
constraints xi ≥ 0, which may or may not define facets.
A closed convex set K is called a convex corner, if K ⊆ Rd+ and whenever x ∈ K, y ∈ Rd
and 0 ≤ y ≤ x then y ∈ K. The antiblocker of a convex corner K is defined by
K^{abl} = {x ∈ R^d_+ : xᵀy ≤ 1 ∀y ∈ K}.
A polytope which is a convex corner will be called a corner polytope.
248
CHAPTER 15. ADJACENCY MATRIX AND ITS SQUARE
Proposition 15.5.8 The antiblocker of a convex corner is a convex corner. The antiblocker
of a corner polytope is a corner polytope. For every convex corner K, we have (K abl )abl = K.
The correspondence between faces of a corner polytope P and P abl is again a bit more
complicated than for the polars. The nonnegativity constraints xi ≥ 0 always define facets,
and they don’t correspond to vertices in the antiblocker. All other facets of P correspond to
vertices of P abl . Not every vertex of P defines a facet in P abl . The origin is a trivial exceptional vertex, but there may be further exceptional vertices. We call a vertex v dominated,
if there is another vertex w such that v ≤ w. Now a vertex of P defines a facet of P^{abl} if and only if it is not dominated.
Let P ⊆ Rn be an ascending polyhedron. It is easy to see that P has a unique point
which is closest to the origin. (This point has combinatorial significance in some cases.) The
distance of the polyhedron to the origin is sometimes called the energy of the polyhedron.
Right now, we state the following simple theorem that relates this point to the analogous
point in the blocker.
Theorem 15.5.9 Let P ⊆ Rⁿ be an ascending polyhedron, and let x ∈ P minimize the objective function |x|². Let α = |x|² be the minimum value. Then y = (1/α)x is in the blocker P^{bl}, and it minimizes the objective function |y|² over P^{bl}. The product of the energies of blocking polyhedra is 1.
15.5.4 Spheres and caps
We denote by πd the volume of the d-dimensional unit ball B^d. It is known that

πd = π^{d/2} / Γ(1 + d/2) ∼ (2eπ)^{d/2} / (√π d^{(d+1)/2}).    (15.19)
The surface area of this ball (the (d − 1)-dimensional measure of the sphere S^{d−1}) is dπd.
A cap on the ball Bⁿ is the set C_{u,s} = {x ∈ S^{n−1} : uᵀx ≥ s}. We will need the surface area of this cap. In dimension 3, the area of this cap is 2π(1 − s). In higher dimension, the surface area is difficult to express. It is easy to derive the integral formula

vol_{n−1}(C_{u,s}) = (n − 1) π_{n−1} ∫_s^1 (1 − x²)^{(n−3)/2} dx,    (15.20)
but it is hard to bring this into any closed form. Instead of explicit formulas, the following estimates will be more useful for us:

(1 − 2/(s√(n−1))) (π_{n−1}/s) (1 − s²)^{(n−1)/2} ≤ vol_{n−1}(C_{u,s}) ≤ (π_{n−1}/s) (1 − s²)^{(n−1)/2}.    (15.21)

(The lower bound is valid for s ≥ 1/√(n − 1).) The upper bound follows from the estimate
∫_s^1 (1 − x²)^{(n−3)/2} dx < (1/s) ∫_s^1 x(1 − x²)^{(n−3)/2} dx = (1/((n − 1)s)) (1 − s²)^{(n−1)/2}.
The lower bound uses a similar argument with a twist. Let t = s + 1/√(n − 1); then

∫_s^1 (1 − x²)^{(n−3)/2} dx > ∫_s^t (1 − x²)^{(n−3)/2} dx > (1/t) ∫_s^t x(1 − x²)^{(n−3)/2} dx
    = (1/((n − 1)t)) ((1 − s²)^{(n−1)/2} − (1 − t²)^{(n−1)/2}).
From here, the bound follows by standard arguments.
The question about the area of caps can be generalized as follows: given a number 0 < c < 1, what is the surface area of the set of points of S^{n−1} whose projection onto the subspace formed by the first d coordinates has length at most c? If d = n − 1, then this set consists of two congruent caps, but if d < n − 1, then the set is more complicated. The cases c < √(d/n) and c > √(d/n) are quite different. We only need the first one in this book. Since no treatment giving the bound we need seems to be easily available, we sketch its derivation.
Let x′ and x″ denote the projections of a vector x onto the first d coordinates and onto the last n − d coordinates, respectively. Let us assume that d ≤ n − 2 and (for the time being) c < 1. We want to find good estimates for the (n − 1)-dimensional measure of the set

A = {x ∈ S^{n−1} : |x′| ≤ c}.
Define ϕ(x) = x′ + x″/|x″|; then ϕ maps A onto A′ = (cB^d) × S^{n−d−1} bijectively. We claim that

vol_{n−1}(A) ≤ vol_{n−1}(A′) = c^d (n − d) πd π_{n−d}.    (15.22)

Indeed, it is not hard to compute that the Jacobian of ϕ^{−1} (as a map A′ → A) is (1 − |x′|²)^{(n−d−2)/2}, which is at most 1.
Note that for n = d + 2 we get vol_{n−1}(A) = vol_{n−1}(A′), which implies, for n = 3, the well-known formula for the surface area of spherical segments.
Another way of formulating this inequality: choosing a random vector u uniformly from S^{n−1}, it bounds the probability that its orthogonal projection u′ onto a d-dimensional subspace is small:

$$P(|u'| \le c) \le \frac{c^d (n-d)\,\pi_d\,\pi_{n-d}}{n\,\pi_n}.$$
Using the asymptotics (15.19), this gives, for large enough d and n − d,

$$P(|u'| \le c) < 2\sqrt{d}\,\Bigl(c\sqrt{\frac{en}{d}}\Bigr)^{d}.$$
This estimate is interesting when c is smaller than the expectation of |u′|, say c = ε√(d/n) for some 0 < ε < 1. Then we get

$$P\Bigl(|u'| \le \varepsilon\sqrt{\frac{d}{n}}\Bigr) < (2\varepsilon)^d. \qquad (15.23)$$
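A quick Monte Carlo experiment illustrates (15.23); this is only an illustration with arbitrary test parameters, assuming numpy is available:

import numpy as np

rng = np.random.default_rng(0)
n, d, eps, trials = 60, 6, 0.4, 100_000

# Random unit vectors via normalized Gaussians; u' is the projection onto
# the first d coordinates.
X = rng.standard_normal((trials, n))
U = X / np.linalg.norm(X, axis=1, keepdims=True)
proj_len = np.linalg.norm(U[:, :d], axis=1)

empirical = np.mean(proj_len <= eps * np.sqrt(d / n))
print(empirical, "<", (2 * eps) ** d)   # empirical tail should sit below (2*eps)^d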
15.5.5 Volumes of other convex bodies
Let K ⊆ R^n be a convex body that is 0-symmetric (centrally symmetric with respect to the origin), and let a vector v be chosen uniformly at random from S^{n−1}. Let e_v denote the line containing v. Rather simple integration yields the formula

$$\mathrm{vol}(K) = \pi_n 2^{-n}\,\mathsf{E}\bigl(\lambda(K \cap e_v)^n\bigr). \qquad (15.24)$$
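Formula (15.24) is easy to test by simulation. The sketch below, assuming numpy is available, uses the cube K = [−1,1]^n, for which the chord length λ(K ∩ e_v) = 2/max_i |v_i| can be computed by hand:

import numpy as np
from math import gamma, pi

rng = np.random.default_rng(1)
n, trials = 3, 400_000
pi_n = pi ** (n / 2) / gamma(1 + n / 2)

# K = [-1,1]^n (0-symmetric); the chord K ∩ e_v has length 2 / max_i |v_i|.
V = rng.standard_normal((trials, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)
chord = 2.0 / np.abs(V).max(axis=1)

estimate = pi_n * 2.0 ** (-n) * np.mean(chord ** n)   # right side of (15.24)
print(estimate, "vs exact", 2.0 ** n)                  # ~ 8 for the unit cube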
The volumes of a convex body and its polar are related. From the following bounds, we only need the upper bound (due to Blaschke and Santaló) in this book, but for completeness, we also state the lower bound, due to Bourgain and Milman: for every 0-symmetric convex body K ⊆ R^n,

$$\frac{4^n}{n!} \le \mathrm{vol}(K)\,\mathrm{vol}(K^*) \le \pi_n^2. \qquad (15.25)$$

The lower bound is attained when K is a cube and K^* is a cross-polytope (or the other way around); the upper bound is attained when both K and K^* are balls.
For a convex body K, the difference body K − K is defined as the set of differences x − y, where x, y ∈ K. It is trivial that vol(K − K) ≥ vol(K) (since K − K contains a translated copy of K). In fact, the Brunn–Minkowski Theorem implies that

$$\mathrm{vol}(K-K) \ge 2^n\,\mathrm{vol}(K). \qquad (15.26)$$
We start with describing how to generate a random unit vector. We generate n independent standard Gaussian variables X_1, ..., X_n, and normalize:

$$u = \frac{1}{\sqrt{X_1^2+\cdots+X_n^2}}\,(X_1,\dots,X_n).$$

Here X_1² + ··· + X_n² has a chi-squared distribution with parameter n, and many estimates are known for its distribution.
Results in probability theory (see e.g. Massart [179]) give that

$$P\bigl(|X_1^2+\cdots+X_n^2 - n| > \varepsilon n\bigr) \le 2e^{-\varepsilon^2 n/8}. \qquad (15.27)$$
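The following minimal numpy sketch generates unit vectors this way and checks the concentration bound (15.27) empirically; the parameters are arbitrary test data:

import numpy as np

rng = np.random.default_rng(2)
n, trials, eps = 400, 20_000, 0.2

X = rng.standard_normal((trials, n))
sq = (X ** 2).sum(axis=1)           # chi-squared with parameter n
U = X / np.sqrt(sq)[:, None]        # uniform points on S^{n-1}

print("max |1 - |u||:", np.abs(1 - np.linalg.norm(U, axis=1)).max())  # ~ 0
empirical = np.mean(np.abs(sq - n) > eps * n)
print(empirical, "<=", 2 * np.exp(-eps ** 2 * n / 8))   # empirical tail vs (15.27)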
Now we turn to describing what projection does to the length of a vector. Let L be a d-dimensional subspace, let u be a vector randomly and uniformly chosen from the unit sphere S^{n−1} in R^n, and let u′ be the orthogonal projection of u onto L. We may assume that L is the subspace of R^n in which the last n − d coordinates are 0. If we generate a random unit vector as described above, then

$$|u'|^2 = \frac{X_1^2+\cdots+X_d^2}{X_1^2+\cdots+X_n^2}.$$
By (15.27), the numerator is concentrated around d, and the denominator is concentrated around n. Hence the length of u′ is concentrated around √(d/n). To be more precise,

$$\mathsf{E}(|u'|^2) = \frac{d}{n}, \qquad (15.28)$$

and for every ε > 0,

$$\frac{1}{1+\varepsilon}\sqrt{\frac{d}{n}} \;\le\; |u'| \;\le\; (1+\varepsilon)\sqrt{\frac{d}{n}} \qquad (15.29)$$

with probability at least 1 − 4e^{−ε²d/4} (this bound will be useful both for small and large ε in different situations). So we have

$$P(|u'| < s) < \Bigl(\beta s\sqrt{\frac{n}{d}}\Bigr)^{d} \qquad (15.30)$$

with an absolute constant β.
Next, we turn to projections of convex bodies. Let K ⊂ R^n be a convex body, let v ∈ S^{n−1} be chosen randomly from the uniform distribution, and let I_v be the projection of K onto the line e_v containing v. So λ(I_v) is the width of K in direction v. It is easy to check that

$$\lambda(I_v) = \frac{2}{\lambda\bigl((K-K)^* \cap e_v\bigr)}. \qquad (15.31)$$
Combining with (15.24), we get

$$\mathrm{vol}\bigl((K-K)^*\bigr) = \pi_n\,\mathsf{E}\bigl(\lambda(I_v)^{-n}\bigr). \qquad (15.32)$$

By the Blaschke–Santaló Inequality (15.25) and (15.26), this implies

$$\mathsf{E}\bigl(\lambda(I_v)^{-n}\bigr) = \frac{\mathrm{vol}((K-K)^*)}{\pi_n} \le \frac{\pi_n}{\mathrm{vol}(K-K)} \le \frac{\pi_n}{2^n\,\mathrm{vol}(K)}. \qquad (15.33)$$
Markov's Inequality implies that for every s > 0,

$$P\bigl(\lambda(I_v) \le s\bigr) = P\bigl(\lambda(I_v)^{-n} s^n \ge 1\bigr) \le \frac{\pi_n s^n}{2^n\,\mathrm{vol}(K)}. \qquad (15.34)$$
We need an extension of this bound to the case when K is not full-dimensional, and so
we have to consider a lower dimensional measure of its volume. We will need the following
probabilistic lemma.
Lemma 15.5.10 Let X and Y be random variables such that a ≤ X ≤ a′ and b ≤ Y ≤ b′ (where −∞ and ∞ are permitted values). Let s ≤ a′ + b′, and define α = max(a, s − b′) and β = min(a′, s − b). Then

$$P(X+Y \le s) \le \int_\alpha^{\beta+1} P(X \le t,\; Y \le s+1-t)\,dt. \qquad (15.35)$$
Proof. Starting with the right side,

$$\int_\alpha^{\beta+1} P(X \le t,\; Y \le s+1-t)\,dt = \int_\alpha^{\beta+1} \mathsf{E}_{X,Y}\,1(X \le t \le s+1-Y)\,dt$$
$$= \mathsf{E}_{X,Y} \int_\alpha^{\beta+1} 1(X \le t \le s+1-Y)\,dt = \mathsf{E}_{X,Y}\bigl(\min(s+1-Y,\,\beta+1) - \max(X,\alpha)\bigr)$$
$$\ge \mathsf{E}_{X,Y}\,1(X+Y \le s) = P(X+Y \le s)$$

(one needs to check that if X + Y ≤ s, then min(s + 1 − Y, β + 1) − max(X, α) ≥ 1).
Let K ⊂ R^d be a convex body, and consider R^d as a subset of R^n. Let v ∈ S^{n−1} be chosen randomly from the uniform distribution. Let u be the projection of v onto R^d. Then λ(I_v) = |u|λ(I_u). We apply Lemma 15.5.10 to the random variables X = ln |u| and Y = ln λ(I_u) (with probability 1, both |u| and λ(I_u) are positive, so the logarithms are well defined). For the bounds of integration we use the bounds |u| ≤ 1 and λ(I_u) ≤ diam(K). We get

$$P\bigl(\lambda(I_v) \le s\bigr) \le \int_{s/\mathrm{diam}(K)}^{es} \frac{1}{t}\, P\Bigl(|u| \le t,\ \lambda(I_u) \le \frac{es}{t}\Bigr)\,dt. \qquad (15.36)$$
(The factor 1/t comes in because of the change of the variable.) Note that the length |u| and the direction of u are independent as random variables, and hence so are |u| and λ(I_u). By (15.23) and (15.34),

$$P(|u| \le t)\, P\Bigl(\lambda(I_u) \le \frac{es}{t}\Bigr) \le \Bigl(2t\sqrt{\frac{n}{d}}\Bigr)^{d} \frac{\pi_d}{2^d\,\mathrm{vol}_d(K)}\Bigl(\frac{es}{t}\Bigr)^{d} = \Bigl(es\sqrt{\frac{n}{d}}\Bigr)^{d} \frac{\pi_d}{\mathrm{vol}_d(K)}.$$
Substituting this bound in (15.36),

$$P\bigl(\lambda(I_v) \le s\bigr) \le \Bigl(es\sqrt{\frac{n}{d}}\Bigr)^{d} \frac{\pi_d}{\mathrm{vol}_d(K)} \int_{s/\mathrm{diam}(K)}^{es} \frac{1}{t}\,dt = \Bigl(es\sqrt{\frac{n}{d}}\Bigr)^{d} \frac{\pi_d}{\mathrm{vol}_d(K)}\bigl(1 + \ln \mathrm{diam}(K)\bigr).$$
Exercise 15.5.11 Let G = (V, E) be an arbitrary graph and s, t ∈ V. We define the unit flow polytope F_G ⊆ R^{E(G)} of G as the convex hull of indicator vectors of edge-sets of s-t paths. The flow polyhedron F̂_G consists of all vectors dominating vectors in F_G.
(a) What is the blocker of the flow polyhedron?
(b) What is the energy of the flow polyhedron?
15.6 Matroids
Theorem 15.6.1 Let f : 2^V → Z_+ be a submodular set function. Then

$$M_f = \{A \subseteq V : |A \cap X| \le f(X) \ \text{for all } X \subseteq V\} \qquad (15.37)$$

is a matroid. The rank function of this matroid is given by

$$r(X) = \min_{Y \subseteq X}\ f(Y) + |X \setminus Y|.$$
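For small ground sets, this rank formula can be evaluated by brute force. A minimal sketch, where the submodular function f (describing a partition matroid) and all names are hypothetical illustration data, not the book's notation:

from itertools import chain, combinations

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def rank(X, f):
    # r(X) = min over Y subset of X of f(Y) + |X \ Y|
    X = frozenset(X)
    return min(f(frozenset(Y)) + len(X - set(Y)) for Y in subsets(X))

# Hypothetical submodular f: partition matroid on V = {0,...,5} with blocks
# {0,1,2} and {3,4,5}, at most 2 elements allowed from each block.
blocks = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
def f(X):
    return sum(min(len(X & B), 2) for B in blocks)

print(rank({0, 1, 2, 3}, f))   # 2 from the first block + 1 from the second = 3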
Theorem 15.6.2 Let f : 2^V → Z_+ be a set function that is submodular on intersecting pairs of sets. Then

$$M'_f = \{A \subseteq V : |A \cap X| \le f(X) \ \text{for all } X \subseteq V,\ X \ne \emptyset\}$$

is a matroid. The rank function of this matroid is given by

$$r(X) = \min_{Y_0, Y_1, \dots, Y_k}\ |Y_0| + f(Y_1) + \cdots + f(Y_k),$$

where {Y_0, ..., Y_k} ranges over all partitions of X.
15.7 Semidefinite optimization
Linear programming has been one of the most fundamental and successful tools in optimization and discrete mathematics. Linear programs are special cases of convex programs;
semidefinite programs are more general but still convex programs, to which many of the useful
properties of linear programs extend.
For more comprehensive studies of issues concerning semidefinite optimization, see [248,
163].
15.7.1 Semidefinite duality
A semidefinite program is an optimization problem of the following form:

$$\begin{aligned} \text{minimize}\quad & c^T x\\ \text{subject to}\quad & x_1 A_1 + \cdots + x_n A_n - B \succeq 0. \end{aligned} \qquad (15.38)$$

Here A_1, ..., A_n, B are given symmetric m × m matrices, and c ∈ R^n is a given vector. We can think of ∑_i x_i A_i − B as a matrix whose entries are linear functions of the variables. As usual, any choice of the values x_i that satisfies the given constraint is called a feasible solution. A solution is strictly feasible if the matrix ∑_i x_i A_i − B is positive definite. We denote by v_primal the infimum of the objective function.
The special case when A1 , . . . , An , B are diagonal matrices is just a “generic” linear program, and it is very fruitful to think of semidefinite programs as generalizations of linear
programs. But there are important technical differences. Unlike in the case of linear programs, the infimum may be finite but not a minimum, i.e., not attained by any feasible
solution.
As in the theory of linear programs, there are a large number of equivalent formulations of a semidefinite program. Of course, we could consider maximization instead of minimization. We could allow additional linear constraints on the variables x_i (inequalities and/or equations). These could be incorporated into the form above by extending the A_i and B with new diagonal entries.
We could introduce the entries of the matrix X = ∑_i x_i A_i − B as variables, in which case the fact that they are linear functions of the original variables translates into linear relations between them. Straightforward linear algebra transforms (15.38) into an optimization problem of the form

$$\begin{aligned} \text{maximize}\quad & C \cdot X\\ \text{subject to}\quad & X \succeq 0\\ & D_i \cdot X = d_i \quad (i = 1, \dots, k), \end{aligned} \qquad (15.39)$$

where C, D_1, ..., D_k are symmetric m × m matrices and d_1, ..., d_k ∈ R. Note that C · X is the general form of a linear combination of entries of X, and so D_i · X = d_i is the general form of a linear equation in the entries of X.
It is easy to see that we would not get any substantially more general problem if we
allowed linear inequalities in the entries of X in addition to the equations.
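A tiny instance of the form (15.39) can be stated almost verbatim in a modeling language. The sketch below assumes the cvxpy package is available; the data C, D_1, d_1 are made up for illustration, not taken from the text:

import cvxpy as cp
import numpy as np

# A small instance of (15.39): maximize C·X subject to X PSD and D1·X = d1.
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D1, d1 = np.eye(2), 1.0                  # the constraint trace(X) = 1

X = cp.Variable((2, 2), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)),
                  [X >> 0, cp.trace(D1 @ X) == d1])
prob.solve()
print(prob.value)   # max 2*X01 over PSD X with trace 1 is 1
print(X.value)      # attained at the all-1/2 matrix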
The Farkas Lemma has a semidefinite version:

Lemma 15.7.1 (Homogeneous version) Let A_1, ..., A_n be symmetric m × m matrices. The system

$$x_1 A_1 + \cdots + x_n A_n \succ 0$$

has no solution in x_1, ..., x_n if and only if there exists a symmetric matrix Y ≠ 0 such that

$$A_i \cdot Y = 0 \quad (i = 1, \dots, n), \qquad Y \succeq 0.$$
There is also an inhomogeneous version of this lemma.

Lemma 15.7.2 (Inhomogeneous version) Let A_1, ..., A_n, B be symmetric m × m matrices. The system

$$x_1 A_1 + \cdots + x_n A_n - B \succ 0$$

has no solution in x_1, ..., x_n if and only if there exists a symmetric matrix Y ≠ 0 such that

$$A_i \cdot Y = 0 \quad (i = 1, \dots, n), \qquad B \cdot Y \ge 0, \qquad Y \succeq 0.$$
Given a semidefinite program (15.38), one can formulate the dual program:

$$\begin{aligned} \text{maximize}\quad & B \cdot Y\\ \text{subject to}\quad & A_i \cdot Y = c_i \quad (i = 1, \dots, n)\\ & Y \succeq 0. \end{aligned} \qquad (15.40)$$

Note that this too is a semidefinite program in the general sense. We denote by v_dual the supremum of the objective function.
With this notion of duality, the Duality Theorem holds under somewhat awkward conditions (which cannot be omitted; see e.g. [246, 241, 242, 197]):

Theorem 15.7.3 Assume that both the primal program (15.38) and the dual program (15.40) have feasible solutions. Then v_dual ≤ v_primal. If, in addition, the primal program (say) has a strictly feasible solution, then the dual optimum is attained and v_primal = v_dual.

In particular, if both programs have strictly feasible solutions, then the infimum resp. supremum of the objective functions are attained and are equal. The following complementary slackness conditions also follow.
Proposition 15.7.4 Let x be a feasible solution of the primal program (15.38) and Y a feasible solution of the dual program (15.40). Then v_primal = v_dual and both x and Y are optimal solutions if and only if Y(∑_i x_i A_i − B) = 0.
15.7.2 Algorithms for semidefinite programs
There are two essentially different algorithms known that solve semidefinite programs in
polynomial time: the ellipsoid method and interior point/barrier methods. Both of these
have many variants, and the exact technical descriptions are quite complicated; we refer to
[196, 197] for discussions of these.
The first polynomial time algorithm for semidefinite optimization problems was the ellipsoid method. This is based on the general fact that if we can test membership in a convex body K ⊆ R^d (i.e., we have a subroutine that, for a given point x ∈ R^d, tells us whether or not x ∈ K), then we can use the ellipsoid method to optimize any linear objective function over K [101]. The precise statement of this fact needs nontrivial side-conditions.
For any semidefinite program (15.38), the set K of feasible solutions is convex. With rounding and other tricks, we can make it a bounded, full-dimensional set in R^n. To test membership, we have to decide whether a given point x belongs to K; ignoring numerical problems, we can use Gaussian elimination to check whether the matrix Y = ∑_i x_i A_i − B is positive semidefinite. Thus using the ellipsoid method we can compute, in polynomial time, a feasible solution x that is approximately optimal.
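The membership test can be sketched as follows, assuming numpy is available. A Cholesky factorization is used here as one concrete way to run the positive semidefiniteness check, in place of the Gaussian elimination mentioned above; the matrices are made-up example data:

import numpy as np

def is_feasible(x, A_list, B):
    # Membership test for (15.38): is Y = sum_i x_i A_i - B positive semidefinite?
    Y = sum(xi * Ai for xi, Ai in zip(x, A_list)) - B
    # Cholesky of Y + tiny*I succeeds iff Y is PSD up to numerical tolerance.
    try:
        np.linalg.cholesky(Y + 1e-9 * np.eye(len(Y)))
        return True
    except np.linalg.LinAlgError:
        return False

# Made-up data: one constraint x1*A1 - B >= 0
A1 = np.eye(2)
B = np.array([[1.0, 0.0], [0.0, 2.0]])
print(is_feasible([3.0], [A1], B))   # True:  3I - B is PSD
print(is_feasible([1.0], [A1], B))   # False: I - B has a negative eigenvalue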
Unfortunately, the above argument gives an algorithm which is polynomial, but hopelessly slow, and practically useless. Semidefinite programs can be solved in polynomial time and also practically reasonably efficiently by interior point methods [192, 6, 7]. The algorithm can be described very informally as follows. We consider the form (15.39), denote by K the set of its feasible solutions (these are symmetric matrices), and want to minimize a linear function C · X over X ∈ K. The set K is convex, but the minimum will be attained on the boundary of K, and this boundary is neither smooth nor polyhedral in general. Therefore, neither gradient-type methods nor simplex-type methods of linear programming can be used.
The main idea of barrier methods is that instead of minimizing a linear objective function C · X, we minimize the convex function F_λ(X) = −log det(X) + λ C · X for some parameter λ > 0. Since F_λ tends to infinity on the boundary of K, the minimum will be attained in the interior. Since F_λ is convex and analytic in the interior, the minimum can be very efficiently computed by a variety of numerical methods (conjugate gradient etc.)
Of course, the point we obtain this way is not what we want, but if λ is large it will be
close. If we don’t like it, we can increase λ and use the minimizing point for the old Fλ as the
starting point for a new gradient type algorithm. One can show that (under some technical
assumptions about the feasible domain) this algorithm gives a good approximation of the
optimum in polynomial time (see [7, 241, 242] and the book [191]).
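The following sketch illustrates the barrier idea in the simplest possible setting: with no linear constraints at all, minimizing F_λ(X) = −log det X + λ C · X over positive definite X has the closed-form minimizer (λC)^{−1}, which plain gradient descent recovers. The data are made up, and this is only an illustration of the barrier function, not a practical interior point method:

import numpy as np

# Minimize F_lam(X) = -log det X + lam * C·X over positive definite X.
# Without linear constraints the exact minimizer is (lam*C)^{-1}.
C = np.array([[2.0, 0.3], [0.3, 1.0]])   # made-up positive definite data
lam, step = 1.0, 0.05
X = np.eye(2)
for _ in range(500):
    grad = lam * C - np.linalg.inv(X)    # gradient of -log det X is -X^{-1}
    X = X - step * grad
print(np.allclose(X, np.linalg.inv(lam * C), atol=1e-6))   # True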
Bibliography
[1] B.M. Ábrego, S. Fernández-Merchant: A lower bound for the rectilinear crossing number, Manuscript, 2003.
[2] O. Aichholzer, F. Aurenhammer and H. Krasser: On the crossing number of complete graphs, in: Proc. 18th Ann. ACM Symp. Comp. Geom. (2002), pp. 19–24.
[3] M. Ajtai, V. Chvátal, M. Newborn, E. Szemerédi: Crossing-free subgraphs, Theory and Practice of Combinatorics, North-Holland Mathematics Studies 60 (1982), 9–12.
[4] M. Ajtai, J. Komlós and E. Szemerédi: A note on Ramsey numbers, J. Combin. Theory A 29 (1980), 354–360.
[5] D.J. Aldous and J. Fill: Reversible Markov Chains and Random Walks on Graphs (book in preparation); URL for draft at http://www.stat.Berkeley.edu/users/aldous
[6] F. Alizadeh: Combinatorial optimization with semi-definite matrices, in: Integer Programming and Combinatorial Optimization (Proceedings of IPCO '92) (eds. E. Balas, G. Cornuéjols and R. Kannan), Carnegie Mellon University Printing (1992), 385–405.
[7] F. Alizadeh: Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM J. Optim. 5 (1995), 13–51.
[8] F. Alizadeh, J.-P. Haeberly, and M. Overton: Complementarity and nondegeneracy in semidefinite programming, in: Semidefinite Programming, Math. Programming Ser. B 77 (1997), 111–128.
[9] N. Alon: Eigenvalues and expanders, Combinatorica 6 (1986), 83–96.
[10] N. Alon: The Shannon capacity of a union, Combinatorica 18 (1998), 301–310.
[11] N. Alon and E. Győri: The number of small semispaces of a finite set of points in the plane, J. Combin. Theory Ser. A 41 (1986), 154–157.
[12] N. Alon and N. Kahale: Approximating the independence number via the ϑ-function, Math. Programming 80 (1998), Ser. A, 253–264.
[13] N. Alon and E. Lubetzky: The Shannon capacity of a graph and the independence numbers of its powers, IEEE Trans. Inform. Theory 52 (2006), 2172–2176.
[14] N. Alon and V.D. Milman: λ1, isoperimetric inequalities for graphs and superconcentrators, J. Combinatorial Theory B 38 (1985), 73–88.
[15] N. Alon, L. Lovász: Unextendible product bases, J. Combin. Theory Ser. A 95 (2001), 169–179.
[16] N. Alon, P.D. Seymour and R. Thomas: Planar separators, SIAM J. Disc. Math. 7 (1994), 184–193.
[17] N. Alon and J.H. Spencer: The Probabilistic Method, Wiley, New York, 1992.
[18] B. Andrásfai: On critical graphs, Theory of Graphs (International Symposium, Rome, 1966), ed. P. Rosenstiehl, Gordon and Breach, New York (1967), 9–19.
[19] E. Andre'ev: On convex polyhedra in Lobachevsky spaces, Mat. Sbornik, Nov. Ser. 81 (1970), 445–478.
[20] E. Andre'ev: On convex polyhedra of finite volume in Lobachevsky spaces, Mat. Sbornik, Nov. Ser. 83 (1970), 413–440.
[21] S. Arora, C. Lund, R. Motwani, M. Sudan, M. Szegedy: Proof verification and hardness of approximation problems, Proc. 33rd FOCS (1992), 14–23.
[22] L. Asimov and B. Roth: The rigidity of graphs, Trans. Amer. Math. Soc. 245 (1978), 279–289.
[23] R. Bacher and Y. Colin de Verdière: Multiplicités des valeurs propres et transformations étoile-triangle des graphes, Bull. Soc. Math. France 123 (1995), 101–117.
[24] E.G. Bajmóczy, I. Bárány: On a common generalization of Borsuk's and Radon's theorem, Acta Mathematica Academiae Scientiarum Hungaricae 34 (1979), 347–350.
[25] A.I. Barvinok: Problems of distance geometry and convex properties of quadratic maps, Discrete and Comp. Geometry 13 (1995), 189–202.
[26] A.I. Barvinok: A remark on the rank of positive semidefinite matrices subject to affine constraints, Discrete and Comp. Geometry (to appear).
[27] M. Bellare, O. Goldreich, and M. Sudan: Free bits, PCP and non-approximability – towards tight results, SIAM J. on Computing 27 (1998), 804–915.
[28] I. Benjamini and L. Lovász: Global Information from Local Observation, Proc. 43rd Ann. Symp. on Found. of Comp. Sci. (2002), 701–710.
[29] I. Benjamini and L. Lovász: Harmonic and analytic functions on graphs, Journal of Geometry 76 (2003), 3–15.
[30] I. Benjamini and O. Schramm: Harmonic functions on planar and almost planar graphs and manifolds, via circle packings, Invent. Math. 126 (1996), 565–587.
[31] I. Benjamini and O. Schramm: Random walks and harmonic functions on infinite planar graphs using square tilings, Ann. Probab. 24 (1996), 1219–1238.
[32] C. Berge: Graphs and Hypergraphs, North-Holland, Amsterdam (1973).
[33] T. Böhme: On spatial representations of graphs, in: Contemporary Methods in Graph Theory (R. Bodendieck, ed.), BI-Wiss.-Verl. Mannheim, Wien/Zürich (1990), 151–167.
[34] J. Bourgain: On Lipschitz embedding of finite metric spaces in Hilbert space, Israel J. Math. 52 (1985), 46–52.
[35] P. Brass: On the maximum number of unit distances among n points in dimension four, Intuitive geometry, Bolyai Soc. Math. Stud. 6, J. Bolyai Math. Soc., Budapest (1997), 277–290.
[36] P. Brass: Erdős distance problems in normed spaces, Comput. Geom. 6 (1996), 195–214.
[37] P. Brass: The maximum number of second smallest distances in finite planar sets, Discrete Comput. Geom. 7 (1992), 371–379.
[38] P. Brass, G. Károlyi: On an extremal theory of convex geometric graphs (manuscript).
[39] G.R. Brightwell, E. Scheinerman: Representations of planar graphs, SIAM J. Discrete Math. 6 (1993), 214–229.
[40] R.L. Brooks, C.A.B. Smith, A.H. Stone and W.T. Tutte: The dissection of rectangles into squares, Duke Math. J. 7 (1940), 312–340.
[41] A.E. Brouwer and W.H. Haemers: Association Schemes, in: Handbook of Combinatorics (ed. R. Graham, M. Grötschel, L. Lovász), Elsevier Science B.V. (1995), 747–771.
[42] P. Cameron, J.-M. Goethals, J.J. Seidel and E.E. Shult: Line graphs, root systems and elliptic geometry, J. Algebra 43 (1976), 305–327.
[43] A.K. Chandra, P. Raghavan, W.L. Ruzzo, R. Smolensky and P. Tiwari: The electrical resistance of a graph captures its commute and cover times, Proc. 21st ACM STOC (1989), 574–586.
[44] S.Y. Cheng: Eigenfunctions and nodal sets, Commentarii Mathematici Helvetici 51 (1976), 43–55.
[45] J. Cheriyan, J.H. Reif: Directed s-t numberings, rubber bands, and testing digraph k-vertex connectivity, Proceedings of the Third Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York (1992), 335–344.
[46] M. Chudnovsky, N. Robertson, P. Seymour and R. Thomas: The strong perfect graph theorem, Annals of Math. 164 (2006), 51–229.
[47] Y. Colin de Verdière: Sur la multiplicité de la première valeur propre non nulle du laplacien, Comment. Math. Helv. 61 (1986), 254–270.
[48] Y. Colin de Verdière: Sur un nouvel invariant des graphes et un critère de planarité, Journal of Combinatorial Theory, Series B 50 (1990), 11–21. English translation: On a new graph invariant and a criterion for planarity, in: Graph Structure Theory (N. Robertson and P.D. Seymour, eds.), Contemporary Mathematics, Amer. Math. Soc., Providence, RI (1993), 137–147.
[49] Y. Colin de Verdière: Un principe variationnel pour les empilements de cercles, Inventiones Math. 104 (1991), 655–669.
[50] Y. Colin de Verdière: Multiplicities of eigenvalues and tree-width of graphs, J. Combin. Theory Ser. B 74 (1998), 121–146.
[51] Y. Colin de Verdière: Spectres de graphes, Cours Spécialisés 4, Société Mathématique de France, Paris, 1998.
[52] R. Connelly: A counterexample to the rigidity conjecture for polyhedra, Publications Mathématiques de l'IHÉS 47 (1977), 333–338.
[53] G. Csizmadia: The multiplicity of the two smallest distances among points, Discrete Math. 194 (1999), 67–86.
[54] A. Coja-Oghlan: The Lovász number of random graphs, in: Approximation, randomization, and combinatorial optimization, Lecture Notes in Comput. Sci. 2764, Springer, Berlin (2003), 228–239.
[55] A. Coja-Oghlan and A. Taraz: Exact and approximative algorithms for coloring G(n,p), Random Structures and Algorithms 24 (2004), 259–278.
[56] R. Courant and D. Hilbert: Methods of Mathematical Physics, Wiley–Interscience, 1953.
[57] S.D.A. Dasgupta and A.K. Gupta: An Elementary Proof of a Theorem of Johnson and Lindenstrauss, Random Structures and Algorithms 22 (2002), 60–65.
[58] C. Delorme and S. Poljak: Laplacian eigenvalues and the maximum cut problem, Math. Programming 62 (1993), 557–574.
[59] C. Delorme and S. Poljak: Combinatorial properties and the complexity of max-cut approximations, Europ. J. Combin. 14 (1993), 313–333.
[60] T.L. Dey: Improved bounds for planar k-sets and related problems, Discrete and Computational Geometry 19 (1998), 373–382.
[61] M. Deza and M. Laurent: Geometry of Cuts and Metrics, Springer Verlag, 1997.
[62] P.D. Doyle and J.L. Snell: Random walks and electric networks, Math. Assoc. of Amer., Washington, D.C. 1984.
[63] C. Dubey, U. Feige, W. Unger: Hardness results for approximating the bandwidth, Journal of Computer and System Sciences 77 (2010), 62–90.
[64] R.J. Duffin: Basic properties of discrete analytic functions, Duke Math. J. 23 (1956), 335–363.
[65] R.J. Duffin: Potential theory on the rhombic lattice, J. Comb. Theory 5 (1968), 258–272.
[66] R.J. Duffin and E.L. Peterson: The discrete analogue of a class of entire functions, J. Math. Anal. Appl. 21 (1968), 619–642.
[67] I.A. Dynnikov and S.P. Novikov: Geometry of the Triangle Equation on Two-Manifolds, Mosc. Math. J. 3 (2003), 419–438.
[68] H. Edelsbrunner, P. Hajnal: A lower bound on the number of unit distances between the vertices of a convex polygon, J. Combin. Theory Ser. A 56 (1991), 312–316.
[69] J. Edmonds: Submodular functions, matroids, and certain polyhedra, in: Combinatorial structures and their applications (eds. R.K. Guy, H. Hanani, N. Sauer and J. Schonheim), Gordon and Breach, New York (1970), pp. 69–87.
[70] H.G. Eggleston: Covering a three-dimensional set with sets of smaller diameter, J. London Math. Soc. 30 (1955), 11–24.
[71] P. Erdős, L. Lovász, G. Simmons, and E. Strauss: Dissection graphs of planar point sets, in: A Survey of Combinatorial Theory, North Holland, Amsterdam (1973), pp. 139–149.
[72] P. Erdős: On sets of distances of n points, Amer. Math. Monthly 53 (1946), 248–250.
[73] P. Erdős: On sets of distances of n points in Euclidean space, Publ. Math. Inst. Hung. Acad. Sci. 5 (1960), 165–169.
[74] P. Erdős: Gráfok páros körüljárású részgráfjairól (On bipartite subgraphs of graphs, in Hungarian), Mat. Lapok 18 (1967), 283–288.
[75] P. Erdős: On some applications of graph theory to geometry, Canad. J. Math. 19 (1967), 968–971.
[76] P. Erdős and N.G. de Bruijn: A colour problem for infinite graphs and a problem in the theory of relations, Nederl. Akad. Wetensch. Proc. – Indagationes Math. 13 (1951), 369–373.
[77] P. Erdős and T. Gallai: On the minimal number of vertices representing the edges of a graph, Magyar Tud. Akad. Mat. Kut. Int. Közl. 6 (1961), 181–203.
[78] P. Erdős, F. Harary and W.T. Tutte: On the dimension of a graph, Mathematika 12 (1965), 118–122.
[79] P. Erdős, L. Lovász, K. Vesztergombi: On the graph of large distances, Discrete and Computational Geometry 4 (1989), 541–549.
[80] P. Erdős, L. Lovász, K. Vesztergombi: The chromatic number of the graph of large distances, in: Combinatorics, Proc. Coll. Eger 1987, Coll. Math. Soc. J. Bolyai 52, North-Holland, 547–551.
[81] P. Erdős and M. Simonovits: On the chromatic number of geometric graphs, Ars Combin. 9 (1980), 229–246.
[82] P. Erdős, A. Hajnal and J. Moon: A problem in graph theory, Amer. Math. Monthly 71 (1964), 1107–1110.
[83]
[84] U. Feige: Approximating the bandwidth via volume respecting embeddings, Proc. 30th Ann. ACM Symp. on Theory of Computing, ACM New York, NY (1998), 90–99.
[85] J. Ferrand: Fonctions préharmoniques et fonctions préholomorphes (French), Bull. Sci. Math. 68 (1944), 152–180.
[86] C.M. Fiduccia, E.R. Scheinerman, A. Trenk, J.S. Zito: Dot product representations of graphs, Discrete Math. 181 (1998), 113–138.
[87] H. de Fraysseix, J. Pach, R. Pollack: How to draw a planar graph on a grid, Combinatorica 10 (1990), 41–51.
[88] P. Frankl, H. Maehara: The Johnson-Lindenstrauss lemma and the sphericity of some graphs, J. Combin. Theory Ser. B 44 (1988), 355–362.
[89] P. Frankl, H. Maehara: On the contact dimension of graphs, Disc. Comput. Geom. 3 (1988), 89–96.
[90] P. Frankl, R.M. Wilson: Intersection theorems with geometric consequences, Combinatorica 1 (1981), 357–368.
[91] S. Friedland and R. Loewy: Subspaces of symmetric matrices containing matrices with multiple first eigenvalue, Pacific J. Math. 62 (1976), 389–399.
[92] A. Frieze and R. Kannan: Quick approximation to matrices and applications, Combinatorica 19 (1999), 175–220.
[93] Z. Füredi: The maximum number of unit distances in a convex n-gon, J. Combin. Theory Ser. A 55 (1990), 316–320.
[94] Z. Füredi and J. Komlós: The eigenvalues of random symmetric matrices, Combinatorica 1 (1981), 233–241.
[95] H. Gluck: Almost all closed surfaces are rigid, in: Geometric Topology, Lecture Notes in Mathematics 438, Springer (1974), 225–239.
[96] M.X. Goemans and D.P. Williamson: Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. Assoc. Comput. Mach. 42 (1995), 1115–1145.
[97] J. Graver, B. Servatius and H. Servatius: Combinatorial rigidity, Grad. Studies in Math. 2, American Mathematical Society, Providence, RI, 1993.
[98] M. Grötschel, L. Lovász and A. Schrijver: The ellipsoid method and its consequences in combinatorial optimization, Combinatorica 1 (1981), 169–197.
[99] M. Grötschel, L. Lovász and A. Schrijver: Polynomial algorithms for perfect graphs, Annals of Discrete Math. 21 (1984), 325–356.
[100] M. Grötschel, L. Lovász, A. Schrijver: Relaxations of vertex packing, J. Combin. Theory B 40 (1986), 330–343.
[101] M. Grötschel, L. Lovász, A. Schrijver: Geometric Algorithms and Combinatorial Optimization, Springer (1988).
[102] B. Grünbaum: A proof of Vázsonyi's conjecture, Bull. Res. Council Israel, section A 6 (1956), 77–78.
[103] B. Grünbaum, T.S. Motzkin: On polyhedral graphs, Proc. Sympos. Pure Math. 7 (1963), Amer. Math. Soc., Providence, R.I., 285–290.
[104] R.K. Guy: A combinatorial problem, Bull. Malayan Math. Soc. 7 (1967), 68–72.
[105] H. Hadwiger: Ein Überdeckungssatz für den Euklidischen Raum, Portugaliae Math. 4 (1944), 140–144.
[106] W. Haemers: On some problems of Lovász concerning the Shannon capacity of a graph, IEEE Trans. Inform. Theory 25 (1979), 231–232.
[107] J. Håstad: Some optimal in-approximability results, Proc. 29th ACM Symp. on Theory of Comp. (1997), 1–10.
[108] J. Håstad: Clique is hard to approximate within a factor of n^{1−ε}, Acta Math. 182 (1999), 105–142.
[109] Z.-X. He, O. Schramm: Hyperbolic and parabolic packings, Discrete Comput. Geom. 14 (1995), 123–149.
[110] Z.-X. He, O. Schramm: On the convergence of circle packings to the Riemann map, Invent. Math. 125 (1996), 285–305.
[111] A. Heppes: Beweis einer Vermutung von A. Vázsonyi, Acta Math. Acad. Sci. Hung. 7 (1956), 463–466.
[112] A. Heppes, P. Révész: A splitting problem of Borsuk (Hungarian), Mat. Lapok 7 (1956), 108–111.
[113] A.J. Hoffman: Some recent results on spectral properties of graphs, in: Beiträge zur Graphentheorie, Teubner, Leipzig (1968), 75–80.
[114] A.J. Hoffman: On graphs whose least eigenvalue exceeds −1 − √2, Linear Algebra and Appl. 16 (1977), 153–165.
[115] A.J. Hoffman: On eigenvalues and colorings of graphs, in: Graph Theory and its Applications, Academic Press, New York (1970), 79–91.
[116] H. van der Holst: A short proof of the planarity characterization of Colin de Verdière, Journal of Combinatorial Theory, Series B 65 (1995), 269–272.
[117] H. van der Holst: Topological and Spectral Graph Characterizations, Ph.D. Thesis, University of Amsterdam, Amsterdam, 1996.
[118] H. van der Holst, M. Laurent, A. Schrijver: On a minor-monotone graph invariant, Journal of Combinatorial Theory, Series B 65 (1995), 291–304.
[119] H. van der Holst, L. Lovász and A. Schrijver: On the invariance of Colin de Verdière's graph parameter under clique sums, Linear Algebra and its Applications 226–228 (1995), 509–518.
[120] H. van der Holst, L. Lovász, A. Schrijver: The Colin de Verdière graph parameter, in: Graph Theory and Combinatorial Biology, Bolyai Soc. Math. Stud. 7, János Bolyai Math. Soc., Budapest (1999), 29–85.
[121] H. Hopf, E. Pannwitz: Jahresbericht der Deutsch. Math. Vereinigung 43 (1934), 2 Abt. 114, Problem 167.
[122] P. Indyk and J. Matoušek: Low-distortion embeddings of finite metric spaces, in: Handbook of Discrete and Computational Geometry (ed. J.E. Goodman, J. O'Rourke), Second Edition, Chapman and Hall (2004), 177–196.
[123] M. Iri: On an extension of the maximum-flow minimum cut theorem to multicommodity flows, J. Oper. Res. Soc. Japan 5 (1967), 697–703.
[124] R. Isaacs: Monodiffric functions, Natl. Bureau Standards App. Math. Series 18 (1952), 257–266.
[125] W.B. Johnson, J. Lindenstrauss: Extensions of Lipschitz mappings into a Hilbert space, Conference in modern analysis and probability (New Haven, Conn., 1982), 189–206, Contemp. Math. 26, Amer. Math. Soc., Providence, R.I., 1984.
[126] M.R. Jerrum and A. Sinclair: Approximating the permanent, SIAM J. Comput. 18 (1989), 1149–1178.
[127] J. Jonasson and O. Schramm: On the cover time of planar graphs, Electron. Comm. Probab. 5 (2000), paper no. 10, 85–90.
[128] F. Juhász: The asymptotic behaviour of Lovász' ϑ function for random graphs, Combinatorica 2 (1982), 153–155.
[129] G.A. Kabatjanskiĭ and V.I. Levenštein: Bounds for packings on a sphere and in space (in Russian), Problemy Peredachi Informatsii 14 (1978), 3–25.
[130] J. Kahn, G. Kalai: A counterexample to Borsuk's conjecture, Bull. Amer. Math. Soc. 29 (1993), 60–62.
[131] R. Kang, L. Lovász, T. Müller and E. Scheinerman: Dot product representations of planar graphs, Electr. J. Combin. 18 (2011), P216.
[132] D. Karger, R. Motwani, M. Sudan: Approximate graph coloring by semidefinite programming, Proc. 35th FOCS (1994), 2–13.
[133] B.S. Kashin and S.V. Konyagin: On systems of vectors in Hilbert spaces, Trudy Mat. Inst. V.A. Steklova 157 (1981), 64–67; English translation: Proc. of the Steklov Inst. of Math. (AMS 1983), 67–70.
[134] R. Kenyon: The Laplacian and Dirac operators on critical planar graphs, Inventiones Math. 150 (2002), 409–439.
[135] R. Kenyon and J.-M. Schlenker: Rhombic embeddings of planar graphs with faces of degree 4 (math-ph/0305057).
[136] S. Khot, G. Kindler, E. Mossel and R. O'Donnell: Optimal inapproximability results for MAX-CUT and other two-variable CSP's, SIAM Journal on Computing 37, 319–357.
[137] D.E. Knuth: The sandwich theorem, The Electronic Journal of Combinatorics 1 (1994), 48 pp.
[138] P. Koebe: Kontaktprobleme der konformen Abbildung, Berichte über die Verhandlungen d. Sächs. Akad. d. Wiss., Math.–Phys. Klasse 88 (1936), 141–164.
[139] V.S. Konyagin: Systems of vectors in Euclidean space and an extremal problem for polynomials, Mat. Zametky 29 (1981), 63–74. English translation: Math. Notes of the Academy USSR 29 (1981), 33–39.
[140] A. Kotlov: A spectral characterization of tree-width-two graphs, Combinatorica 20 (2000).
[141] A. Kotlov: Minors and Strong Products, Europ. J. Comb. 22 (2001), 511–512.
[142] A. Kotlov, L. Lovász: The rank and size of graphs, J. Graph Theory 23 (1996), 185–189.
[143] A. Kotlov, L. Lovász, S. Vempala: The Colin de Verdière number and sphere representations of a graph, Combinatorica 17 (1997), 483–521.
[144] G. Laman: On graphs and rigidity of plane skeletal structures, J. Engrg. Math. 4 (1970), 331–340.
[145] P. Lancaster, M. Tismenetsky: The Theory of Matrices, Second Edition, with Applications, Academic Press, Orlando, 1985.
[146] D.G. Larman, C.A. Rogers: The realization of distances within sets in Euclidean space, Mathematika 19 (1972), 1–24.
[147] M. Laurent and S. Poljak: On the facial structure of the set of correlation matrices, SIAM J. on Matrix Analysis and Applications 17 (1996), 530–547.
[148] F.T. Leighton: Complexity Issues in VLSI, Foundations of Computing Series, MIT Press, Cambridge, MA (1983).
[149] F.T. Leighton and S. Rao: An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms, Proc. 29th Annual Symp. on Found. of Computer Science, IEEE Computer Soc. (1988), 422–431.
[150] N. Linial, E. London, Y. Rabinovich: The geometry of graphs and some of its algorithmic applications, Combinatorica 15 (1995), 215–245.
[151] N. Linial, L. Lovász, A. Wigderson: Rubber bands, convex embeddings, and graph connectivity, Combinatorica 8 (1988), 91–102.
[152] R.J. Lipton, R.E. Tarjan: A Separator Theorem for Planar Graphs, SIAM J. on Applied Math. 36 (1979), 177–189.
[153] L. Lovász: Normal hypergraphs and the perfect graph conjecture, Discrete Math. 2 (1972), 253–267.
[154] L. Lovász: Combinatorial Problems and Exercises, Second Edition, Akadémiai Kiadó – North Holland, Budapest (1991).
[155] L. Lovász: Flats in matroids and geometric graphs, in: Combinatorial Surveys, Proc. 6th British Comb. Conf., Academic Press (1977), 45–86.
[156] L. Lovász: Some finite basis theorems in graph theory, in: Combinatorics, Coll. Math. Soc. J. Bolyai 18 (1978), 717–729.
[157] L. Lovász: On the Shannon capacity of graphs, IEEE Trans. Inform. Theory 25 (1979), 1–7.
[158] L. Lovász: Selecting independent lines from a family of lines in a space, Acta Sci. Math. Szeged 42 (1980), 121–131.
[159] L. Lovász: Submodular functions and convexity, in: Mathematical Programming: the State of the Art (ed. A. Bachem, M. Grötschel, B. Korte), Springer (1983), 235–257.
[160] L. Lovász: Singular spaces of matrices and their applications in combinatorics, Bol. Soc. Braz. Mat. 20 (1989), 87–99.
[161] L. Lovász: Random walks on graphs: a survey, in: Combinatorics, Paul Erdős is Eighty, Vol. 2 (ed. D. Miklós, V.T. Sós, T. Szőnyi), János Bolyai Mathematical Society, Budapest, 1996, 353–398.
[162] L. Lovász: Steinitz representations of polyhedra and the Colin de Verdière number, J. Comb. Theory B 82 (2001), 223–236.
[163] L. Lovász: Semidefinite programs and combinatorial optimization, in: Recent Advances in Algorithms and Combinatorics, CMS Books Math./Ouvrages Math. SMC 11, Springer, New York (2003), 137–194.
[164] L. Lovász: Graph minor theory, Bull. Amer. Math. Soc. 43 (2006), 75–86.
[165] L. Lovász: Large graphs, graph homomorphisms and graph limits, Amer. Math. Soc., Providence, RI (2012).
[166] L. Lovász, M.D. Plummer: Matching Theory, Akadémiai Kiadó–North Holland, Budapest, 1986 (reprinted by AMS Chelsea Publishing, 2009).
[167] L. Lovász, M. Saks, A. Schrijver: Orthogonal representations and connectivity of graphs, Linear Alg. Appl. 114/115 (1989), 439–454; A Correction: Orthogonal representations and connectivity of graphs, Linear Alg. Appl. 313 (2000), 101–105.
[168] L. Lovász, A. Schrijver: A Borsuk theorem for antipodal links and a spectral characterization of linklessly embedable graphs, Proceedings of the American Mathematical Society 126 (1998), 1275–1285.
[169] L. Lovász, A. Schrijver: On the null space of a Colin de Verdière matrix, Annales de l'Institut Fourier 49, 1017–1026.
[170] L. Lovász and B. Szegedy: Szemerédi's Lemma for the analyst, Geom. Func. Anal. 17 (2007), 252–270.
[171] L. Lovász, K. Vesztergombi: Geometric representations of graphs, to appear in: Paul Erdős and his Mathematics (ed. G. Halász, L. Lovász, M. Simonovits, V.T. Sós), Bolyai Society–Springer Verlag.
[172] L. Lovász, K. Vesztergombi, U. Wagner, E. Welzl: Convex quadrilaterals and k-sets, in: Towards a Theory of Geometric Graphs (J. Pach, ed.), AMS Contemporary Mathematics 342 (2004), 139–148.
[173] L. Lovász and P. Winkler: Mixing times, in: Microsurveys in Discrete Probability (ed. D. Aldous and J. Propp), DIMACS Series in Discr. Math. and Theor. Comp. Sci. 41 (1998), 85–133.
[174] L. Lovász, Y. Yemini: On generic rigidity in the plane, SIAM J. Alg. Discr. Methods 1 (1982), 91–98.
[175] Lubotzky, Phillips, Sarnak
[176] H. Maehara, V. Rödl: On the dimension to represent a graph by a unit distance graph, Graphs and Combinatorics 6 (1990), 365–367.
[177] P. Mani: Automorphismen von polyedrischen Graphen, Math. Annalen 192 (1971), 279–303.
[178] Margulis [expander]
[179] P. Massart: Concentration Inequalities and Model Selection, École d'Été de Probabilités de Saint-Flour XXXIII, Springer 2003.
[180] J. Matoušek: On embedding expanders into l_p spaces, Israel J. Math. 102 (1997), 189–197.
[181] J. Matoušek: Lectures on discrete geometry, Graduate Texts in Mathematics 212, Springer–Verlag, New York (2002), Chapter 15.
[182] C. Mercat: Discrete Riemann surfaces and the Ising model, Comm. Math. Phys. 218 (2001), no. 1, 177–216.
[183] C. Mercat: Discrete polynomials and discrete holomorphic approximation (2002)
[184] C. Mercat: Exponentials form a basis for discrete holomorphic functions (2002)
[185] G.L. Miller, W. Thurston: Separators in two and three dimensions, Proc. 22nd ACM STOC (1990), 300–309.
[186] G.L. Miller, S.-H. Teng, W. Thurston, S.A. Vavasis: Separators for sphere-packings and nearest neighbor graphs, J. ACM 44 (1997), 1–29.
[187] B. Mohar and S. Poljak: Eigenvalues and the max-cut problem, Czechoslovak Mathematical Journal 40 (1990), 343–352.
[188] B. Mohar and C. Thomassen: Graphs on surfaces, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 2001.
[189] A. Mowshowitz: Graphs, groups and matrices, in: Proc. 25th Summer Meeting Canad. Math. Congress, Congr. Numer. 4 (1971), Utilitas Mathematica, Winnipeg, 509–522.
[190] C. St.J.A. Nash-Williams: Random walks and electric currents in networks, Proc. Cambridge Phil. Soc. 55 (1959), 181–194.
[191] Yu.E. Nesterov and A. Nemirovsky: Interior-point polynomial methods in convex programming, Studies in Appl. Math. 13, SIAM, Philadelphia, 1994.
[192] M.L. Overton: On minimizing the maximum eigenvalue of a symmetric matrix, SIAM J. on Matrix Analysis and Appl. 9 (1988), 256–268.
[193] M.L. Overton and R. Womersley: On the sum of the largest eigenvalues of a symmetric matrix, SIAM J. on Matrix Analysis and Appl. 13 (1992), 41–45.
[194] G. Pataki: On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues, Math. of Oper. Res. 23 (1998), 339–358.
[195] S. Poljak and F. Rendl: Nonpolyhedral relaxations of graph-bisection problems, DIMACS Tech. Report 92-55 (1992).
[196] L. Porkoláb and L. Khachiyan: On the complexity of semidefinite programs, J. Global Optim. 10 (1997), 351–365.
[197] M. Ramana: An exact duality theory for semidefinite programming and its complexity implications, in: Semidefinite programming, Math. Programming Ser. B 77 (1997), 129–162.
[198] A. Recski: Matroid theory and its applications in electric network theory and in statics, Algorithms and Combinatorics 6, Springer-Verlag, Berlin; Akadémiai Kiadó, Budapest, 1989.
[199] J. Reiterman, V. Rödl, E. Šinajová: Embeddings of graphs in Euclidean spaces, Discr. Comput. Geom. 4 (1989), 349–364.
[200] J. Reiterman, V. Rödl and E. Šinajová: On embedding of graphs into Euclidean spaces of small dimension, J. Combin. Theory Ser. B 56 (1992), 1–8.
[201] J. Richter-Gebert: Realization spaces of polytopes, Lecture Notes in Mathematics 1643, Springer-Verlag, Berlin, 1996.
[202] N. Robertson, P.D. Seymour: Graph minors III: Planar tree-width, J. Combin. Theory B 36 (1984), 49–64.
[203] N. Robertson, P.D. Seymour: Graph minors XX: Wagner's conjecture (preprint, 1988).
[204] N. Robertson, P. Seymour, R. Thomas: A survey of linkless embeddings, in: Graph Structure Theory (N. Robertson, P. Seymour, eds.), Contemporary Mathematics, American Mathematical Society, Providence, Rhode Island, 1993, pp. 125–136.
[205] N. Robertson, P. Seymour, R. Thomas: Hadwiger's conjecture for K6-free graphs, Combinatorica 13 (1993), 279–361.
[206] N. Robertson, P. Seymour, R. Thomas: Sachs' linkless embedding conjecture, Journal of Combinatorial Theory, Series B 64 (1995), 185–227.
[207] B. Rodin and D. Sullivan: The convergence of circle packings to the Riemann mapping, J. Diff. Geom. 26 (1987), 349–360.
[208] H. Sachs,
[209] H. Sachs: Coin graphs, polyhedra, and conformal mapping, Discrete Math. 134 (1994), 133–138.
[210] F. Saliola and W. Whiteley: Constraining plane configurations in CAD: Circles, lines and angles in the plane, SIAM J. Discrete Math. 18 (2004), 246–271.
[211] H. Saran: Constructive Results in Graph Minors: Linkless Embeddings, Ph.D. Thesis, University of California at Berkeley, 1989.
[212] W. Schnyder: Embedding planar graphs on the grid, Proc. First Annual ACM-SIAM Symp. on Discrete Algorithms, SIAM, Philadelphia (1990), 138–148.
[213] O. Schramm: Existence and uniqueness of packings with specified combinatorics, Israel Jour. Math. 73 (1991), 321–341.
[214] O. Schramm: How to cage an egg, Invent. Math. 107 (1992), 543–560.
[215] O. Schramm: Square tilings with prescribed combinatorics, Israel Jour. Math. 84 (1993), 97–118.
[216] A. Schrijver: Minor-monotone graph invariants, in: Combinatorics 1997 (P. Cameron, ed.), Cambridge University Press, Cambridge, 1997, to appear.
[217] J. Schwartz: Fast probabilistic algorithms for verification of polynomial identities, Journal of the ACM 27 (1980), 701–717.
[218] F. Shahroki and D.W. Matula: The maximum concurrent flow problem, J. ACM 37 (1990), 318–334.
[219] D. Singer: Rectilinear crossing numbers, Manuscript (1971), available at http://www.cwru.edu/artsci/math/singer/home.html.
[220] P.M. Soardi: Potential Theory on Infinite Networks, Lecture Notes in Math. 1590, Springer-Verlag, Berlin–Heidelberg, 1994.
[221] J. Spencer, E. Szemerédi and W.T. Trotter: Unit distances in the Euclidean plane, Graph Theory and Combinatorics, Academic Press, London–New York (1984), 293–303.
[222] D.A. Spielman and S.-H. Teng: Disk Packings and Planar Separators, 12th Annual ACM Symposium on Computational Geometry (1996), 349–358.
[223] D.A. Spielman and S.-H. Teng: Spectral Partitioning Works: Planar graphs and finite element meshes, Proceedings of the 37th Annual IEEE Conference on Foundations of Comp. Sci. (1996), 96–105.
[224] E. Steinitz: Polyeder und Raumabteilungen, in: Encyclopädie der Math. Wissenschaften 3 (Geometrie) 3AB12, 1–139, 1922.
[225] K. Stephenson: Introduction to Circle Packing, The Theory of Discrete Analytic Functions, Cambridge University Press (2005).
[226] L. Surányi: On line-critical graphs, Infinite and Finite Sets III (Colloq. Keszthely, Hungary, 1973), eds. A. Hajnal, R. Rado and V.T. Sós, Colloq. Math. Soc. János Bolyai 10, North-Holland, Amsterdam (1975), 1411–1444.
[227] J.W. Sutherland: Jahresbericht der Deutsch. Math. Vereinigung 45 (1935), 2 Abt. 33.
[228] S. Straszewicz: Sur un problème géométrique de Erdős, Bull. Acad. Polon. Sci. Cl. III 5 (1957), 39–40.
[229] M. Szegedy: A note on the θ number of Lovász and the generalized Delsarte bound, Proc. 35th FOCS (1994), 36–39.
[230] L.A. Székely: Measurable chromatic number of geometric graphs and sets without some distances in Euclidean space, Combinatorica 4 (1984), 213–218.
[231] L.A. Székely: Crossing numbers and hard Erdős problems in discrete geometry, Combin. Probab. Comput. 6 (1997), 353–358.
[232] E. Szemerédi: Regular partitions of graphs, Colloque Inter. CNRS (J.-C. Bermond, J.-C. Fournier, M. Las Vergnas and D. Sotteau, eds.) (1978), 399–401.
[233] W.P. Thurston: Three-dimensional Geometry and Topology, Princeton Mathematical Series 35, Princeton University Press, Princeton, NJ, 1997.
[234] W.P. Thurston: The finite Riemann Mapping Theorem, lecture at the International Symposium at Purdue University on the occasion of the proof of the Bieberbach Conjecture (1985).
[235] G. Tóth: Point sets with many k-sets, Discr. Comput. Geom. 26 (2001), 187–194.
[236] W.T. Tutte: How to draw a graph, Proc. London Math. Soc. 13 (1963), 743–768.
[237] V. Vassilevska Williams: Multiplying matrices faster than Coppersmith-Winograd,
[238] K. Vesztergombi: On the distribution of distances in finite sets in the plane, Discrete Math. 57 (1985), 129–145.
[239] K. Vesztergombi: Bounds on the number of small distances in a finite planar set, Stu. Sci. Acad. Hung. 22 (1987), 95–101.
[240] K. Vesztergombi: On the two largest distances in finite planar sets, Discrete Math. 150 (1996), 379–386.
[241] L. Vandenberghe and S. Boyd: Semidefinite programming, in: Math. Programming: State of the Art (ed. J.R. Birge and K.G. Murty), Univ. of Michigan, 1994.
[242] L. Vandenberghe and S. Boyd: Semidefinite programming, SIAM Rev. 38 (1996), no. 1, 49–95.
[243] R.J. Vanderbei and B. Yang: The simplest semidefinite programs are trivial, Math. of Oper. Res. 20 (1995), no. 3, 590–596.
[244] E. Welzl: Entering and leaving j-facets, Discr. Comput. Geom. 25 (2001), 351–364.
[245] W. Whiteley: Infinitesimally rigid polyhedra, Trans. Amer. Math. Soc. 285 (1984), 431–465.
[246] H. Wolkowicz: Some applications of optimization in matrix theory, Linear Algebra and its Applications 40 (1981), 101–118.
[247] H. Wolkowicz: Explicit solutions for interval semidefinite linear programs, Linear Algebra Appl. 236 (1996), 95–104.
[248] H. Wolkowicz, R. Saigal and L. Vandenberghe: Handbook of semidefinite programming. Theory, algorithms, and applications, Int. Ser. Oper. Res. & Man. Sci. 27 (2000), Kluwer Academic Publishers, Boston, MA.
[249] D. Zeilberger: A new basis for discrete analytic polynomials, J. Austral. Math. Soc. Ser. A 23 (1977), 95–104.
[250] D. Zeilberger: A new approach to the theory of discrete analytic functions, J. Math. Anal. Appl. 57 (1977), 350–367.
[251] D. Zeilberger and H. Dym: Further properties of discrete analytic functions, J. Math. Anal. Appl. 58 (1977), 405–418.
[252] R. Zippel: Probabilistic algorithms for sparse polynomials, in: Symbolic and Algebraic Computation, Lecture Notes in Computer Science 72 (1979), 216–226.