
Working Paper
How to design and analyze
priority rules: Example of simple
assembly line balancing
Alena Otto, Christian Otto and Armin Scholl
03/2012
Working Papers
in Supply Chain Management
Friedrich-Schiller-University of Jena
Prof. Dr. Nils Boysen
Prof. Dr. Armin Scholl
Chair of Operations Management
School of Business and Economics
Friedrich-Schiller-University Jena
Carl-Zeiß-Str. 3, D-07743 Jena
Phone +49 (0)3641 943100
e-Mail: [email protected]
Chair of Management Science
School of Business and Economics
Friedrich-Schiller-University Jena
Carl-Zeiß-Str. 3, D-07743 Jena
Phone +49 (0)3641 943170
e-Mail: [email protected]
http://pubdb.wiwi.uni-jena.de
How to design and analyze priority rules:
Example of simple assembly line balancing
Alena Otto^a, Christian Otto^a and Armin Scholl^a,*
^a Friedrich-Schiller-University of Jena, Chair of Management Science, Carl-Zeiß-Straße, D-07743 Jena, Germany
*Corresponding author: phone: +49 3641 943171, e-mail: [email protected]
Abstract
Priority rule-based methods (PRBMs) rely on problem-specific knowledge to construct good solutions in a very short time. They can be used as stand-alone procedures or can be integrated into (partial) enumeration procedures, like branch & bound or dynamic programming, and into heuristic solution methods. PRBMs are especially important for solving NP-hard optimization problems.
In this study, we provide guidance on how to design PRBMs, based on a thorough computational investigation. We conduct our analysis on the example of the NP-hard Simple Assembly Line Balancing Problem (SALBP), on which, with small modifications, most planning situations for assembly lines are based.
We show that for large instances, unless all task times are low in comparison to the cycle time, PRBMs are competitive even as a stand-alone solution method. Further, we provide evidence that composite priority rules perform better and more robustly than elementary ones. We also give advice on how to incorporate knowledge of the problem instance's structure to make PRBMs even more effective, and on how to find several good-quality solutions. Overall, the PRBMs developed in this paper significantly outperform those available in the literature.
Keywords: operations management; manufacturing; priority rules; assembly line balancing
1 Introduction
In most complex decision situations in practice, effective but simple and intuitive rules of thumb are often used to obtain sufficiently good and practicable solutions. Their counterparts in the theory of combinatorial optimization are priority rule-based methods (PRBMs), which rely on problem-specific knowledge to construct good solutions in a very short time. PRBMs are especially important for solving large instances of NP-hard optimization problems. Firstly, they enhance the effectiveness of (partial) enumeration procedures, like branch & bound or dynamic programming, and of (meta-)heuristic solution methods: PRBMs provide initial solutions and bounds (e.g. Vance et al. 1994), define the sequence of enumeration (e.g. local renumbering in SALOME, see Scholl and Klein 1997) and/or are an integral part of local search procedures (see Storer et al. 1992). Secondly, PRBMs are often used as stand-alone solution methods in dynamic problems (see, e.g., Ouelhadj and Petrovic 2009), for solving large instances (especially in real-world problems) and whenever the speed of obtaining a good solution is important (e.g. in simulation-based optimization, see Jin et al. 2002). Thirdly, PRBMs are preferred in practical applications because they are very intuitive for planners.
Several studies of priority rules have been conducted in the literature so far (see, e.g., Panwalkar and Iskander 1977, for scheduling problems; Haupt 1989, for job shop scheduling; Talbot et al. 1986, and Scholl and Voß 1996, for the simple assembly line balancing problem (SALBP)). These studies list and classify the priority rules proposed in the literature and compare their performance. Three important facts have been established. First of all, priority rules differ widely in performance. For example, in the study of Talbot et al. (1986), the best single rule was about 30% better in average deviation from the optimal solution than the worst one, although no rule dominated any other. However, no tests of priority rules have been performed on problem instances of sizes and structures relevant to practice. Secondly, there is a consensus that individual priority rules tend to perform worse than their combinations (see, e.g., Panwalkar and Iskander 1977). However, to the best of our knowledge, there is a lack of practicable advice on how to combine rules and on which rules to combine. Thirdly, several studies have shown that the results of PRBMs for a given problem instance can be significantly improved by adjusting parameters in a time-consuming learning procedure for precisely this instance (e.g. Hershauer and Ebert 1975). However, it has not been clearly shown so far whether information on the instance's structure can be utilized in advance (without instance-specific learning) in order to improve the solution quality of a PRBM.
The aim of the present work is to provide guidance on the application of PRBMs. For this purpose, in contrast to previous studies in this field, we provide a thorough computational investigation of priority rules. We selected the simple assembly line balancing problem of type 1 (SALBP-1) for our analysis, since it is a well-known NP-hard problem on which all assembly line balancing problems are based (cf., e.g., Boysen et al. 2007 and 2008; Becker and Scholl 2006). A (stand-alone) application of PRBMs is of utmost importance for assembly line balancing in manufacturing: a typical problem instance is very large, containing about 1,000 tasks or more, while (nearly) optimal solutions have to be generated for a planner within just a few seconds.
In our work, we first show convincingly that PRBMs are competitive solution methods, both in terms of average solution quality and robustness, especially for instances containing large task times. This insight is new: in earlier investigations, much smaller instances were used, for which a difference of one station might mean a large deterioration in performance (e.g. the Scholl data set, Scholl 1999). In contrast to previous studies, we rely on a sufficiently large, systematically generated data set, described in Otto et al. (2011), that closely mimics real assembly lines. We start our analysis with elementary priority rules described in the literature, the best of which attains only about 1-2% average relative deviation from the optimum – already a very good result. By the end of our study, we are able to reduce this deviation by about two thirds. In absolute terms, this improvement amounts on average to about three stations, or between €1,500,000 and €3,000,000 of savings in annual costs for a typical automobile plant. This good performance of priority rules is achieved even though the tested instances are much harder than those previously examined in the literature. Overall, PRBMs achieve impressive results at low computational cost.
Secondly, the purpose of this work is not just to compare the existing priority rules with each other, but to give practicable advice on how to design (new) effective PRBMs, including how to combine priority rules and how to generate several different solutions of good quality. Moreover, we show how to utilize knowledge of the structure of the problem instances in order to improve the performance of priority rules. Because our tests rely on large and hard instances, we obtain sharpened insights into the differences in performance of priority rules.
We proceed as follows. In Section 2, we give an introduction to assembly line balancing and to some theoretical aspects of PRBMs. The performance of effective multiple-pass PRBMs is thoroughly analyzed in Section 3. Section 4 describes our comprehensive analyses of different priority rule concepts, which led to selecting the rules of Section 3, and gives general advice on the design of effective PRBMs. We conclude with a discussion in Section 5.
2 Preliminary terms and concepts
Solution methods for the extended assembly line balancing problems that are relevant for practical applications are mostly based on insights from the basic problem – the simple assembly line balancing problem (SALBP). In this work, we consider SALBP with the objective of minimizing the number of stations (SALBP-1), since this objective is the most important in practice. We proceed with a description of the problem, possible construction schemes for priority rules and a discussion of the limits of the method. Further, we list popular priority rules for SALBP-1.
2.1 Simple Assembly Line Balancing Problem-1 (SALBP-1)
At assembly lines, the set of tasks V = {1, …, n} with deterministic operation times t_j (for j ∈ V), which have to be performed on a workpiece, is divided into station loads S_k ⊆ V with station time t(S_k) = Σ_{j∈S_k} t_j (see Becker and Scholl 2006). The workpieces are transported along the set of (work)stations W = {1, …, K} in a given sequential order at a constant speed. The amount of time during which each station has access to a workpiece is called the cycle time c. The assignment of tasks has to obey two different types of constraints. First of all, the workers operating at each station have to be able to perform all the tasks contained in the station load during the cycle time, i.e., t(S_k) ≤ c (cycle time constraints). Secondly, certain tasks have to be executed in a given order due to organizational or technological conditions (precedence constraints). A precedence relation (i, j) describes the requirement that task i ∈ V must be performed before another task j ∈ V. An acyclic digraph G = (V, E, t) with node set V, called the precedence graph, summarizes all the precedence relations. Here E = {(i, j) | i ∈ V, j ∈ F_i} is the set of arcs representing direct precedence relations (see Figure 1). The set of all (direct and transitive) precedence relations is denoted by E*. The sets of direct (all) followers and direct (all) predecessors of task j are denoted by F_j (F*_j) and P_j (P*_j), respectively. The node weights represent the operation times t_j (see Baybars 1986; Scholl 1999, Ch. 2; Scholl and Becker 2006, for details).

Fig. 1 Example of a precedence graph
Given the precedence relations and the task times, the earliest station E_j = ⌈(t_j + Σ_{h∈P*_j} t_h) / c⌉ and the latest station L_j = GUB + 1 − ⌈(t_j + Σ_{h∈F*_j} t_h) / c⌉ of each task can be determined, where GUB refers to the theoretical upper bound as described in Scholl and Voß (1996).

The most widespread characterization of the strength of precedence relations in the precedence graph is the order strength (OS), which is the ratio of the number of direct and indirect precedence relations to the maximal possible number of precedence relations between each pair of tasks: OS = Σ_{j=1}^{n} |F*_j| / (0.5 · n · (n − 1)). A connection between OS and the performance of solution methods has been suggested for SALBP and related problems in combinatorial optimization (cf. Mastor 1970; Elmaghraby and Herroelen 1980; Kolisch et al. 1995).
The idle time at station k is defined as it_k = c − t(S_k). In general, idle time is a waste of resources, and the sum of idle times is to be minimized, as in the minimization of the number of stations in SALBP-1.
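The quantities defined above can be computed directly from the precedence graph. The sketch below derives E_j, L_j and OS from the formulas of this section; the five-task instance, the cycle time and the bound GUB = 4 are hypothetical illustrations, not data from this paper:

```python
import math

# Hypothetical instance: task times and direct precedence arcs (i, j).
c = 10                                   # cycle time
t = {1: 6, 2: 4, 3: 5, 4: 3, 5: 7}      # task times t_j
arcs = [(1, 3), (2, 3), (3, 4), (3, 5)]  # direct precedence relations E

tasks = sorted(t)
F = {j: {h for (i, h) in arcs if i == j} for j in tasks}  # direct followers F_j

def transitive(succ):
    """All (direct and transitive) followers of every task."""
    res = {}
    def visit(j):
        if j not in res:
            res[j] = set()
            for h in succ[j]:
                visit(h)
                res[j] |= {h} | res[h]
    for j in succ:
        visit(j)
    return res

F_star = transitive(F)                                              # F*_j
P_star = {j: {i for i in tasks if j in F_star[i]} for j in tasks}   # P*_j

# Earliest station E_j = ceil((t_j + sum of all predecessor times) / c)
E = {j: math.ceil((t[j] + sum(t[h] for h in P_star[j])) / c) for j in tasks}
# Latest station L_j = GUB + 1 - ceil((t_j + sum of all follower times) / c)
GUB = 4   # assumed theoretical upper bound for this toy instance
L = {j: GUB + 1 - math.ceil((t[j] + sum(t[h] for h in F_star[j])) / c)
     for j in tasks}

# Order strength OS = sum_j |F*_j| / (0.5 * n * (n - 1))
n = len(tasks)
OS = sum(len(F_star[j]) for j in tasks) / (0.5 * n * (n - 1))
```

For this toy graph, the twelve transitive relations out of 0.5·5·4 = 10 possible pairs give OS = 0.8, and the station windows [E_j, L_j] shrink as tasks accumulate predecessors and followers.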
2.2 Construction methods
During the construction of a feasible solution, we can differentiate between available and assignable tasks. A task is called available if all its predecessors are already assigned. An available task j is called assignable to the open station k if the idle time at this station, it_k, is at least as large as the task time t_j (Scholl 1999, Ch. 4).
Different construction schemes for priority rules have been proposed in the SALBP literature: task-oriented (TOS) and station-oriented (SOS) schemes, as well as forward, backward and bidirectional approaches. In SOS, the assignable task with the highest priority value is repeatedly allocated to the currently open station k until no available task fits into this station without violating the cycle time. After that, station k is closed and the next station k + 1 is opened and filled with tasks. The procedure is repeated until all tasks are assigned. In TOS, we take the available task with the highest priority value and assign it to the first station where it is assignable. For a more detailed description of the TOS and SOS approaches we refer to Scholl and Voß (1996) and Scholl (1999, Ch. 4).
Example of SOS. Let us apply the maximum time rule (max T) to the example instance of Figure 1 with cycle time c = 30. In the station-oriented scheme, we open station 1 in the first step. The only available task is task 1. We assign it to the first station, so that the idle time of station 1 becomes 30 − 18 = 12. Now, tasks 4, 3, 2 and 5 are available (listed according to their priority values). However, the assignable task with the highest priority for station 1 is task 2. We assign task 2; the station load of station 1 is now {1, 2} and the idle time is it_1 = 6. The updated list of available tasks is 4, 3, 6 and 5, with task 6 being the assignable task with the highest priority. Since after the assignment of task 6 we have it_1 = 0 and no further tasks are assignable to station 1, we open the next station. Proceeding in the same way, we obtain the solution {1, 2, 6}, {4, 5}, {8}, {3, 10}, {7, 9}, {11} with six stations.
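The station-oriented scheme is easy to state in code. The sketch below is a simplified implementation with the max T rule and without tie-breaking; the small instance at the bottom is hypothetical (it is not the graph of Figure 1), and we assume t_j ≤ c for all tasks:

```python
def station_oriented(tasks, t, pred, c, priority):
    """Station-oriented construction scheme: fill the currently open
    station with the assignable task of highest priority value until
    no further task fits, then open the next station."""
    unassigned, stations = set(tasks), []
    while unassigned:
        load, idle = [], c
        while True:
            assigned = set(tasks) - unassigned
            # assignable = available (all predecessors assigned) and fitting
            cand = [j for j in unassigned
                    if pred[j] <= assigned and t[j] <= idle]
            if not cand:
                break
            j = max(cand, key=priority)   # highest priority value first
            load.append(j)
            idle -= t[j]
            unassigned.remove(j)
        stations.append(load)
    return stations

# Hypothetical example with cycle time c = 10 and the max T rule.
t = {1: 6, 2: 4, 3: 5, 4: 3, 5: 7}
pred = {1: set(), 2: set(), 3: {1, 2}, 4: {3}, 5: {3}}
sol = station_oriented(list(t), t, pred, 10, priority=lambda j: t[j])
# sol: [[1, 2], [3, 4], [5]] - three stations
```

The task-oriented scheme differs only in the outer logic: it picks the available task of highest priority first and then searches for the first station where it is assignable.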
Besides the forward construction scheme, tasks can also be scheduled backwards. In the backward construction, priority rules are applied to the reversed precedence graph, in which the directions of all arcs are inverted. In the bidirectional approach, a construction scheme applied only together with SOS, we temporarily build two station loads at each step – one in the forward and one in the backward direction. After that, the station load with the lower idle time is selected and assigned permanently. In the remainder of this section, we state our definitions for the forward direction only; the transformation to the backward and bidirectional cases is straightforward.
In the example above, we had several candidate tasks with the same priority value (tasks 3 and 10) at one of the construction steps. In such a situation, tie-breaking rules are used. Using another good priority rule as a tie-breaker may significantly improve the performance of a priority rule (Talbot et al. 1986). Other improving elements that may raise its performance are the Jackson dominance rule (JackDom, see Jackson 1956; Scholl and Klein 1997 for more details) as well as the SingleFit and DualFit criteria (cf. Boctor 1995). JackDom is computed in preprocessing; it requires that a large task whose followers contain all the followers of task j is preferred to task j. SingleFit and DualFit are inspired by insights from the bin packing problem: if one candidate task (SingleFit) or a combination of two candidate tasks (DualFit) assignable to the open station k completely exhausts the remaining idle time of the station, then it is assigned to this station k irrespective of the priority values.
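SingleFit and DualFit can be checked before the priority rule is consulted at all. A sketch (the function name and example data are ours, for illustration only):

```python
from itertools import combinations

def fit_override(cand, t, idle):
    """Return tasks that must be assigned irrespective of priority values:
    a single candidate (SingleFit) or a pair of candidates (DualFit) whose
    task times exhaust the remaining idle time of the open station exactly."""
    for j in cand:                       # SingleFit
        if t[j] == idle:
            return [j]
    for i, j in combinations(cand, 2):   # DualFit
        if t[i] + t[j] == idle:
            return [i, j]
    return []                            # no override: consult the rule

# Example: idle time 7, candidate task times 3, 4 and 6.
print(fit_override([1, 2, 3], {1: 3, 2: 4, 3: 6}, 7))   # -> [1, 2]
```

Only if the function returns an empty list does the construction scheme fall back to the priority values.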
2.3 Priority rules
Priority rules utilize the data of the problem instance to compute priority values of the tasks. Afterwards, a feasible solution is constructed by scheduling tasks in a sequence according to their priority values. We can differentiate between elementary and composite priority rules (cf. Haupt 1989). Elementary priority rules construct priority values based on a single attribute of the task, such as the task time or the number of followers. In composite priority rules, priority values of tasks are computed by a combination of elementary priority rules. Conventionally, non-linear combinations of elementary rules are still referred to as elementary rules; for example, the task time divided by the number of followers would be an elementary priority rule. Composite rules therefore comprise additive (weighted sum) and hierarchical combinations of elementary rules (Haupt 1989).

Elementary and composite priority rules can be applied as single-pass or multiple-pass PRBMs. In a single-pass PRBM, only one feasible solution is constructed. A multiple-pass method constructs several feasible solutions, often by applying several priority rules one after another.
Table 1 Elementary priority rules for SALBP-1 (rule | priority value pv_j | source)

Time-oriented rules:
- Task time (T) | t_j | Moodie and Young (1965)
- Task time divided by latest station (TdL) | t_j / L_j | Scholl and Voß (1996)
- Task time divided by slack (TdS) | t_j / (L_j − E_j + 1) | Scholl and Voß (1996)

Mixed rules:
- Positional weight (PW*) | t_j + Σ_{h∈F*_j} t_h | Helgeson and Birnie (1961)
- Positional weight, based on direct followers (PW) | t_j + Σ_{h∈F_j} t_h | Talbot et al. (1986)
- Average ranked positional weight (APW) | PW*_j / (|F*_j| + 1) | Killinci (2011)
- Sum of t_j and the task times of all its available followers (PW^v) | Tonge (1965)
- Recursive cumulated positional weight (CPW) | t_j + Σ_{h∈F_j} CPW_h | new
- Critical path (CP): sum of t_j and the task times of all tasks on the longest path (in terms of the sum of task times) from j to a sink | Bennett and Byrd (1976)
- Latest station (L) | L_j | Talbot et al. (1986)
- Slack (S) | L_j − E_j | Talbot et al. (1986)
- Latest station divided by number of followers (LdF) | L_j / (|F*_j| + 1) | Hackman et al. (1989)
- Total number of followers divided by slack (FdS) | |F*_j| / (L_j − E_j + 1) | Scholl and Voß (1996)

Precedence-oriented rules:
- Total number of followers (|F*|) | |F*_j| | Talbot et al. (1986)
- Number of immediate followers (|F|) | |F_j| | Talbot et al. (1986)
- Number of available followers (|F^v|) | Boctor (1995)
- Number of assignable followers (|F^s|) | Boctor (1995)
- Number of bottleneck nodes within all followers (BNN) | new
- Cumulated number of followers (CF) | |F_j| + Σ_{h∈F*_j} |F_h| | Talbot et al. (1986)
- Recursive cumulated edges (RE) | |F_j| + Σ_{h∈F_j} RE_h | Hackman et al. (1989)
- Longest path (LP): number of followers on the longest path having j as its root | Talbot et al. (1986)
In Table 1, we summarize 21 elementary priority rules proposed in the literature (cf. the surveys of Talbot et al. 1986; Scholl and Voß 1996; Scholl 1999, Ch. 5; and Scholl and Becker 2006). Note that we have omitted the random rule and its modifications (e.g. the minimal task number rule) from our study, because they do not utilize information on the problem instance. Each rule can be applied as a maximization rule (the task with the higher priority value receives the higher rank) or as a minimization rule (the lower rank for a task with the higher priority value). Whenever possible, we formulated dynamic variant(s) of a rule besides the static one. A rule is called static if the priority values are calculated once, prior to the assignment procedure. In the case of dynamic rules, the priority value of a task is updated at each assignment step. For example, the rule "number of followers" has one static variant, |F*|, and two dynamic variants, |F^v| and |F^s|. Note that it often does not make sense to apply dynamic priority rules to small instances, since these may frequently require tie-breakers.
Elementary priority rules can also be classified based on the kind of information they utilize about the problem instance. Rules T, TdL and TdS rely heavily on information about the task time (time-oriented rules). Precedence-oriented rules, such as |F*|, RE, CF and LP, rely heavily on information about the precedence relations. Some rules, like PW*, use information on task times and precedence relations to about the same extent and are called mixed rules.

All priority rules summarized in Table 1 have low computational complexity (see Hackman et al. 1989).
2.4 Limits of priority rule-based methods
The literature provides plenty of examples showing that priority rules may deliver suboptimal results (e.g. Talbot et al. 1986). But even in these cases, some specific combination of certain elementary rules may exist that delivers the optimal solution. Several approaches have been proposed in the literature for finding such optimal combinations of rules (see, e.g., Tonge 1965; Bennett and Byrd 1976; Storer et al. 1992).

We have already seen that all the widespread and easy-to-compute elementary priority rules proposed in the literature utilize information on the task time, the precedence relations and/or the task times of predecessors or followers. Still, there exist parts of precedence graphs for which no (conventional deterministic) priority rule, and no combination of such rules under any construction scheme, is able to find an optimal solution.

Fig. 2 Example of a precedence graph
Figure 2 shows the part of a precedence graph where all tasks in the dotted area can only be discriminated by priority rules related to the task times, because they have the same successors (or predecessors, if we plan backwards) – we will call such tasks parallel to each other. For the sake of simplicity, let us assume that each of the tasks outside the dotted area needs exactly the cycle time to be performed. Then, we have to solve a bin packing problem for this part of the precedence graph. Let us employ the station-oriented construction scheme, and let the dotted area contain twelve tasks: three tasks with time (1/2)c each, three tasks with time (1/3)c each and six tasks with time (1/4)c each. An optimal solution of the corresponding bin packing problem is (1/2, 1/4, 1/4 | 1/2, 1/4, 1/4 | 1/2, 1/4, 1/4 | 1/3, 1/3, 1/3) and needs four bins (stations). Using max T, we get the balance (1/2, 1/2 | 1/2, 1/3 | 1/3, 1/3, 1/4 | 1/4, 1/4, 1/4, 1/4 | 1/4), which requires five stations. Likewise, the application of min T leads to the non-optimal solution (1/4, 1/4, 1/4, 1/4 | 1/4, 1/4, 1/3 | 1/3, 1/3 | 1/2, 1/2 | 1/2) with five stations.
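The failure of pure time-oriented rules on this parallel-task structure is easy to reproduce. The sketch below packs the twelve parallel tasks with the station-oriented scheme under max T and min T orderings; we set c = 12 so that all task times are integers:

```python
def pack(times, c):
    """Station-oriented packing of mutually parallel tasks: repeatedly
    add the first task in the given order that still fits, then open
    the next station."""
    stations, todo = [], list(times)
    while todo:
        load, idle = [], c
        for x in list(todo):
            if x <= idle:
                load.append(x)
                idle -= x
                todo.remove(x)
        stations.append(load)
    return stations

c = 12
times = [c // 2] * 3 + [c // 3] * 3 + [c // 4] * 6   # 6,6,6, 4,4,4, 3,...,3

max_t = pack(sorted(times, reverse=True), c)   # max T: five stations
min_t = pack(sorted(times), c)                 # min T: five stations
opt = [[6, 3, 3], [6, 3, 3], [6, 3, 3], [4, 4, 4]]   # optimum: four stations
```

Both time-oriented orderings reproduce the five-station balances shown above, while the optimal packing needs only four stations.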
Unfortunately, such structures are rather widespread in the precedence graphs of real-world assembly lines (see Otto et al. 2011). This feature is especially important for PRBM-based learning procedures. It may therefore be beneficial to incorporate some kind of local search, or a random distortion of the tasks' priority values, when applying these methods.
3 Effective multiple-pass PRBMs (comparative analysis)
In the following, we describe the multiple-pass PRBMs found to be especially effective in our experiments, provide details on the benchmark data sets used, and report on the performance of the recommended PRBMs.
In fact, this section presents the final comparative analysis, checking the performance of the designed PRBMs after the design and parameter-tuning described in detail in Section 4. We start our exposition "from the end", reporting the results of the comparative analysis first, in order to facilitate the reading of the paper. In this manner, the reader can appreciate the findings before consulting the next section on the nuances of the design.
3.1 Recommended multiple-pass PRBMs
We propose three fast multiple-pass PRBMs with five passes, each consisting of five rules: elementary rules (5Elem), composite rules (5Comp) and composite rules adjusted to the characteristics (structure) of the problem instance (5Comp-S). Note that one pass here means taking the best result from the forward and the backward direction.
Further, for cases where time for more passes is available, as well as cases where several different good-quality solutions are needed, we recommend four PRBMs with random influence: a random tie-breaker with (RTie-S) and without (RTie) adjustment to instance-specific information, as well as random choice (RChoice-S and RChoice, respectively). The PRBMs with random influence can generate a relatively large number of different solutions. In this section, we conduct 130 passes: five passes of 5Comp or 5Comp-S and 125 passes according to the corresponding randomization concept.

Overall, five concepts (5Elem, 5Comp, 5Comp-S, RTie and RChoice) proved to be effective in our pre-tests in Section 4. The RTie-S and RChoice-S methods are tested here for the sake of completeness.
Following the results of Section 4.1.4, we always apply the station-oriented construction scheme and JackDom in this section. According to the recommendations in the literature (cf. Scholl and Voß 1996) as well as our pre-tests, we use T as the tie-breaker for all rules, except for those that depend only on the task time (T, T_1%c and T_3.3%c); in the latter case, |F*| is used. Note that we introduce new elementary rules that are aggregations of the task time rule: T_1%c and T_3.3%c. We construct them by rounding task times down to the next multiple of α·c (priority value t_j^α = ⌊t_j / (α·c)⌋ · α · c). For example, for c = 1000, t_1 = 335 and t_2 = 816, the task times aggregated at the 1%c level are t_1^1% = 330 and t_2^1% = 810.
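The aggregation is a one-liner; the numbers below reproduce the example just given (we pass the step α·c directly as an integer to stay in exact arithmetic):

```python
def aggregated_time(t_j, step):
    """Round a task time down to the next multiple of step = alpha * c,
    as in the aggregated task time rules T_1%c and T_3.3%c."""
    return (t_j // step) * step

# c = 1000, aggregation at the 1%c level, i.e. step = 10.
print(aggregated_time(335, 10))   # -> 330
print(aggregated_time(816, 10))   # -> 810
```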
5Elem is a multiple-pass PRBM consisting of the elementary rules TdL, TdS, T, PW^v and CP (see Section 4.1.2 for details). 5Comp, analogously, consists of the five composite rules {T; LP; 1/2}, {T; CP; 2}, {TdL; LP; 1/2}, {TdL; CP; 1/2} and {T_1%c; LP; 1/10}. We notate a composite rule as a pairing of two scaled elementary rules together with the weight of the first rule within the pair (the weight of the second rule is always fixed to 1). For example, {T; LP; 1/5} denotes rules T and LP with weights 1/5 to 1 (see Section 4.2.4 for details). In 5Comp-S, we select five composite rules specific to the instance's characteristics, as described in Section 4.3. In RChoice, a composite rule is randomly selected in each pass from a set of well-performing composite rules, and the weight ratio within the composite rule is "distorted" by a random element (see Section 4.4). Finally, RTie requires that, in the applied composite rules, ties between tasks with equal priority values are broken randomly each time (see Section 4.4).
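The weighted-sum pairing {R1; R2; w} can be sketched as follows. The min-max scaling of the elementary priority values used here is our simplifying assumption of one plausible normalization (the scaling actually used is described in Section 4.2.4), and the task data is hypothetical:

```python
def composite(rule1, rule2, w, tasks):
    """Additive composite rule {rule1; rule2; w}: scale both elementary
    priority values and add them, weighting the first rule by w and the
    second by 1. Min-max scaling to [0, 1] is an assumed normalization."""
    def scaled(rule):
        vals = {j: rule(j) for j in tasks}
        lo, hi = min(vals.values()), max(vals.values())
        span = (hi - lo) or 1            # avoid division by zero
        return {j: (v - lo) / span for j, v in vals.items()}
    s1, s2 = scaled(rule1), scaled(rule2)
    return {j: w * s1[j] + s2[j] for j in tasks}

# Hypothetical data: task times t_j and longest-path lengths LP_j.
t  = {1: 8, 2: 5, 3: 2}
lp = {1: 1, 2: 3, 3: 2}

pv = composite(lambda j: t[j], lambda j: lp[j], 0.5, [1, 2, 3])
best = max(pv, key=pv.get)   # task ranked first under {T; LP; 1/2}
```

Under pure max T, task 1 would be ranked first; the admixture of LP with the larger weight shifts the choice to task 2, which has both a sizeable task time and many tasks on its longest outgoing path.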
3.2 Data sets used in our experiments
We conduct our comparative analysis on two data sets: the very large control data set (VL-Control) and the Scholl data set. To avoid the influence of possible overfitting after the selection of rules and parameter-tuning, we use a different data set, the very large benchmark data set (VL), in our pre-tests in Section 4.
The VL data set was proposed by Otto et al. (2011). It most closely resembles assembly line characteristics observed in practice and is therefore especially well suited for testing (cf. Rardin and Uzsoy 2001). It consists of 525 problem instances, each with n = 1,000 tasks, with systematically varied structural parameters, such as order strength (OS = 0.2, 0.6 and 0.9), three different types of task time distributions (PB, BM and PM, see below) and three types of precedence relations within the graph (mixed, as well as with many bottlenecks and with many chains – structures that are often found in practice; see Otto et al. 2011 for definitions). The PB (normal task time distribution with median task time at 0.1c) and BM (bimodal, based on two normal distributions with medians at 0.1c and 0.5c) task time distributions are often encountered in practice (cf. Kilbridge and Wester 1961). Instances with PM (normal with median task time at 0.5c) are observed to be particularly difficult. Since not all optima are known for this benchmark data set, and our main purpose in Section 4 is to compare priority rules with each other, we use the number of stations in the best-known solution (UB) of each instance to build performance indicators. UB is calculated as the minimum of the upper bound available on the website www.assembly-line-balancing.de (see Otto et al. 2011) and the best results over all the experiments conducted for this paper on this data set. In this way, the reported performance indicators from different experiments are directly comparable with each other. Overall, we were able to improve the known upper bound for 279 instances; the optimum is currently known for 46% (244) of the instances. The optimal number of stations lies at around 210-240 stations for instances of BM, 130-140 for PB and 490-550 for PM.
VL-Control was generated with the generator SALBPGen (see Otto et al. 2011) using a different starting point for the random number generator but the same settings as for VL. The Scholl benchmark data set was proposed by Scholl (1993). It contains 269 instances with 7 up to 297 tasks and is based on 25 different graphs with varying cycle times. For the Scholl data set, we performed comparisons with the optimal solutions. For VL-Control, the analysis was done both with respect to UB, which is the objective value of the best known solution found in all the performed experiments, and with respect to the lower bound LB, computed as the maximum of LB1 to LB7 as described in Becker and Scholl (2006). Overall, for VL-Control, the optimum is known so far in 31% of the cases (161 instances); the average relative deviation of UB from LB equals 1.44%, and the maximum deviation is 11% (2% for instances with PB and BM). These relatively small deviations indicate that the UB and LB used in the current study estimate the optimal solution values of the instances quite well.
In the following, we denote the average (maximal) relative deviation from UB as the average (worst) performance, and the average (maximal) relative deviation from LB as the average (worst) performance from LB.
3.3 Performance of PRBMs on the VL-Control data set
Figure 3 and Tables 2-4 summarize the results. In order to obtain a better picture of the performance of PRBMs, we compare them both with the best-known upper bound (UB) and with the best-known lower bound (LB). In Tables 2-4, we also report the results of solution methods proposed in the literature: Boctor (Boctor 1995) and Talbot156 (Talbot et al. 1986), as well as SALOME with a 10-second time limit (Scholl and Klein 1997), which is one of the most effective exact solution methods for SALBP-1.

On average, one pass of a PRBM required about 0.06 seconds; in other words, about 15 passes can be computed in one second for an instance with 1,000 tasks. These computational times were obtained on an Intel Core i7 920, 2.67 GHz processor with 3 GB RAM. The code was written in VBA for Excel without parallelization.
Fig. 3 Overview of PRBMs – avg. relative deviation, share of instances with UB(LB) found, max. relative
deviation from UB(LB)
From Figure 3, we see that by applying just five rules (left-hand side of the figure) we reach an average relative deviation from UB of 0.44% and find UB exactly for 25% of the instances, whereas for LB these numbers equal 1.88% and 5%, respectively. In other words, with only five rules we achieve an average relative deviation from the optimum somewhere between 0.44% and 1.88%, with a 5-25% chance of finding the optimal solution. With 125 passes of RChoice (right-hand side of the figure), we have up to a 5-40% probability of finding the optimum, whereas the average relative deviation falls to 0.31% from UB and 1.75% from LB. In both cases, for each of the 525 instances the solutions found are better than those of the pure random construction procedure with 125 passes (R125). Overall, R125 never found UB, and its average deviations were about 3-18 times higher than those of the proposed PRBMs.
Table 2 Comparison of different PRBMs for the VL-Control data set: average performance in % (in parentheses: share of instances in % where UB was found). * (**) corresponds to about 5 (165) passes of the other PRBMs in terms of computational time. Columns give the task time distribution (TTD) and order strength (OS).

Method (# passes)  | PB 0.2     PB 0.6     PB 0.9    | BM 0.2     BM 0.6     BM 0.9    | PM 0.2     PM 0.6     PM 0.9    | Total

PRBMs suggested in this article:
5Elem (5)          | 0.60 (17)  0.77 (5)   0.79 (16) | 0.32 (36)  0.33 (27)  0.77 (0)  | 1.04 (0)   1.15 (0)   2.08 (0)  | 0.78 (13)
5Comp (5)          | 0.59 (19)  0.74 (4)   0.55 (32) | 0.29 (40)  0.18 (59)  0.37 (20) | 0.50 (7)   0.52 (4)   0.81 (0)  | 0.49 (21)
5Comp-S (5)        | 0.54 (27)  0.71 (5)   0.53 (40) | 0.25 (48)  0.18 (59)  0.30 (32) | 0.52 (3)   0.48 (5)   0.37 (8)  | 0.44 (25)
RTie (125+5)       | 0.53 (28)  0.66 (9)   0.50 (40) | 0.21 (56)  0.10 (79)  0.19 (56) | 0.27 (25)  0.36 (12)  0.36 (16) | 0.35 (35)
RTie-S (125+5)     | 0.53 (28)  0.64 (12)  0.53 (40) | 0.11 (77)  0.13 (72)  0.26 (40) | 0.34 (13)  0.46 (7)   0.28 (20) | 0.37 (35)
RChoice (125+5)    | 0.53 (28)  0.67 (8)   0.44 (48) | 0.19 (60)  0.10 (79)  0.09 (80) | 0.22 (23)  0.22 (27)  0.18 (24) | 0.31 (39)
RChoice-S (125+5)  | 0.43 (41)  0.64 (12)  0.50 (40) | 0.18 (63)  0.10 (77)  0.26 (40) | 0.24 (29)  0.24 (20)  0.27 (16) | 0.31 (38)

PRBMs and comparable solution methods suggested in the literature:
R125 (125)         | 1.51 (0)   1.92 (0)   2.38 (0)  | 2.42 (0)   2.80 (0)   4.17 (0)  | 12.23 (0)  12.71 (0)  10.55 (0) | 5.61 (0)
Boctor (1*)        | 0.82 (27)  1.97 (0)   3.10 (0)  | 1.98 (0)   3.43 (0)   5.63 (0)  | 10.41 (0)  11.57 (0)  8.91 (0)  | 5.15 (4)
Talbot156 (156)    | 0.52 (29)  0.75 (3)   0.81 (16) | 0.19 (60)  0.30 (33)  0.84 (0)  | 0.86 (0)   1.00 (0)   2.38 (0)  | 0.71 (19)
SALOME (10 sec**)  | 0.00 (100) 0.00 (100) 0.03 (96) | 0.37 (33)  1.11 (3)   2.47 (0)  | 5.51 (0)   6.01 (0)   5.91 (0)  | 2.26 (38)
Among the PRBMs based on five complementing rules, 5Comp-S, which incorporates knowledge of the instance's structure, showed the best performance. The improvement over 5Comp was largest for instances with medium (0.6) and high (0.9) levels of order strength (OS). By contrast, at low levels of OS, as well as for instances with BM and medium OS, the results were similar.
Among the PRBMs with random influence, RChoice shows comparable or even much better performance than RTie, especially at high levels of OS. Interestingly, applying these concepts of random influence to composite rules relying on structural parameters (5Comp-S) in several cases does not improve, but even moderately worsens the results. Nevertheless, in total, RTie-S and RChoice-S show a performance similar to RTie and RChoice.
We compared the performance of the suggested PRBMs with similar approaches recommended in the literature. Boctor is a multi-stage lexicographic PRBM with individual tie-breakers at each stage. At the first stage, Boctor applies SingleFit, then looks for a large task with the largest number of immediate followers; afterwards DualFit and, at the last stage, the |F_j| rule are applied. We report the results for the best solutions found by Boctor in forward and backward direction. Since the application of DualFit is time consuming (see Section 4.1.4), it is appropriate to compare Boctor to the performance of the PRBMs based on five passes. From Tables 2 and 3 we see that Boctor achieves its best performance for instances with time distribution PB. But even in these categories, Boctor performed considerably worse than the recommended 5-rule PRBMs. So it can be stated that Boctor is not competitive.
[Table 3 – the numeric entries were flattened during text extraction and cannot be reliably reassembled. Same layout as Table 2, with deviations measured from the lower bound LB instead of UB.]
Table 3 Comparison of different PRBMs for the VL-Control data set: average relative deviation from the lower bound LB in % (in parentheses: share of instances (%) where LB was found). * (**) corresponds to about 5 (165) passes of the other PRBMs in terms of computational time.
Another PRBM, Talbot156, applies 13 elementary rules in both directions (forward and backward), each time taking one of the remaining 12 rules as a tie-breaker. In total, Talbot156 employs 13 · 12 = 156 passes. On average, one pass of Talbot156 takes approximately the same time as one pass of the PRBMs recommended in this paper. From Tables 2 and 3 we see that the relative strength of Talbot156 lies at low and medium levels of order strength for PB and BM. For all but one group of instances, Talbot156 performs worse than a simple combination of five rules (5Comp and 5Comp-S). For instances with PB and OS = 0.2, Talbot156 showed an even slightly better result than RChoice with 125 passes and was dominated solely by RChoice-S. To summarize, Talbot156 is only competitive for instances with low task times and at low levels of order strength.
It is interesting to compare PRBMs to an effective enumeration procedure like SALOME. In Tables 2 and 3, the results of SALOME with a 10-second time limit (which roughly corresponds to 165 passes of priority rules) are reported. Within this time limit, SALOME was able to solve 161 instances (31%) to optimality. The average performance of SALOME was rather bad and equaled 2.26% from UB and 3.77% from LB, whereas the worst performance reached as much as 9% from UB and 17% from LB in case of PM, as well as 3% from UB and 5% from LB in case of PB and BM. SALOME is especially successful for instances with PB at low and medium OS, where it solved all instances to optimality. It also performed well for instances with PB at OS = 0.9 and BM at OS = 0.2. Overall, SALOME performed much better than the examined PRBMs for instances with PB. However, for all other groups of instances SALOME showed a worse performance than simple five passes of the recommended priority rules. Also, our pre-tests showed that an enumeration procedure like SALOME does not always generate a feasible solution for instances with 1,000 tasks at very low computational times (less than a second).
Method      # passes   All instances   Instances with n ≥ 50
5Elem       5          1.71 (67)       1.93 (57)
5Comp       5          1.96 (68)       1.74 (60)
5Comp-S     5          1.77 (69)       1.69 (62)
RTie        125+5      1.83 (70)       1.56 (63)
RTie-S      125+5      1.63 (70)       1.63 (62)
RChoice     125+5      1.29 (75)       1.27 (69)
RChoice-S   125+5      1.28 (75)       1.33 (68)
R125        125        2.84 (52)       3.22 (37)
Boctor      1*         4.17 (46)       3.75 (36)
Talbot156   156        1.39 (73)       1.58 (64)
SALOME      10 sec**   0.13 (96)       0.19 (94)
Table 4 Comparison of PRBMs for the Scholl data set: average relative deviation from UB in % (in parentheses: share of instances (%) where UB was found). * (**) corresponds to about 5 (165) passes of the other PRBMs in terms of computational time.
3.4 Performance of PRBMs on the Scholl data set
The results for the Scholl data set (Table 4) overall confirm the conclusions drawn for the VL-Control data set. Only 5Comp and 5Comp-S seemed to perform worse than the collection of elementary rules 5Elem in terms of average relative deviation. The reason is that 5Elem found better solutions for several very small instances, where a difference of one station may mean up to a 33.3% better performance. If we restrict the analysis to the 191 instances with more than 50 tasks, this effect disappears. Further, in line with previous research, SALOME with a ten-second time limit was able to solve almost all instances of the Scholl data set (optimum found for 95% of the instances and proven for 84%). The reasons for this good performance of SALOME are the moderate size (up to n = 297 tasks) and the lower hardness of the instances in the Scholl data set (cf. Otto et al. 2011). This result again shows that the old data set of Scholl is outdated and should be replaced by the new ones of Otto et al. (2011).
4 A thorough examination of different design concepts for PRBMs
To arrive at the PRBM concepts recommended in Section 3, we conducted a thorough analysis of priority rules and design concepts for PRBMs. Overall, we performed about 3,000 passes per instance (i.e. over 3 million runs) for the tests reported in this section, and about ten times as many passes for additional tests and checks not reported here.
However, this section is of value not only as pre-tests and parameter tuning for the PRBMs suggested in Section 3. It also provides guidance on how to construct effective and efficient priority rule-based methods.
In order to sharpen the differences in performance of priority rules, we drop the improving elements (JackDom, SingleFit and DualFit) and use "minimal task number" as a tie-breaker in all experiments of this section except for Section 4.1.4. We also apply the station-oriented construction scheme, which proved to deliver better results than the task-oriented one for the well-performing rules in our preliminary experiments and in Scholl and Voß (1996).
We proceed with the analysis of elementary (Section 4.1), composite (Section 4.2) and structure-dependent composite rules (Section 4.3). Finally, we provide insights into PRBMs with random influence in Section 4.4. To facilitate reading, we give a short summary of the findings, called recommendation, at the beginning of each section.
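The station-oriented construction scheme mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation (which was written in VBA for Excel); the toy instance and the use of the max-T ("longest task time") priority rule are our assumptions.

```python
def station_oriented_pass(tasks, times, preds, c, priority):
    """One pass of a station-oriented priority-rule-based method for SALBP-1.

    Stations are filled one after another: among all tasks whose predecessors
    are already assigned and whose time fits into the current station's idle
    time, the task with the highest priority value is chosen. Assumes every
    task time is at most the cycle time c.
    """
    unassigned = set(tasks)
    stations = []
    while unassigned:
        station, idle = [], c
        while True:
            assigned = set(tasks) - unassigned
            # precedence-feasible tasks that fit into the remaining idle time
            candidates = [j for j in unassigned
                          if preds[j] <= assigned and times[j] <= idle]
            if not candidates:
                break  # open the next station
            j = max(candidates, key=priority)  # apply the priority rule
            station.append(j)
            idle -= times[j]
            unassigned.remove(j)
        stations.append(station)
    return stations

# Toy instance (assumed data): 5 tasks, cycle time 10, small precedence graph.
times = {1: 6, 2: 4, 3: 5, 4: 3, 5: 7}
preds = {1: set(), 2: set(), 3: {1}, 4: {2}, 5: {3, 4}}
solution = station_oriented_pass(list(times), times, preds, c=10,
                                 priority=lambda j: times[j])  # "max T" rule
```

Swapping the `priority` callable is all that is needed to run further passes with other rules, which is how a multi-pass PRBM in the spirit of 5Elem or 5Comp would reuse this routine.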
4.1 Analysis of elementary rules
Recommendation. We found that a multi-pass PRBM based on five elementary rules (5Elem: TdL, TdS, T, CP and PW^v) already produces good solutions. If several different good-quality solutions are needed, we recommend aggregating the rules at moderate levels.
In Section 4.1.1, we compare the performance of the 21 elementary rules (each as a minimizing and as a maximizing rule) introduced in Section 2.3, then propose an effective multi-pass PRBM based on elementary rules (Section 4.1.2), show an easy method to produce well-performing elementary rules by aggregation (Section 4.1.3), and give a short overview of the performance of some improving elements (Section 4.1.4). Also in Section 4.1.3, we identify the ten best-performing elementary rules for the further analysis.
4.1.1 Comparison of elementary rules
Overall, performance varies greatly among the elementary rules (see Tables 5 and 6). For example, max PW*, proposed by Helgeson and Birnie (1961) and often used in the literature, found solutions 8.5 times worse in terms of average performance than the best rule max TdL (see Table 5). Overall, our study clearly confirms the direction of application of the rules recommended by their authors (as maximization vs. minimization rule). Therefore, in the following, when referring to a rule we always mean this rule applied in its "meaningful" direction.
The best priority rules revealed a very good performance. The best rule TdL missed UB by only 1% on average. Even if we compare the found solutions with the best-known lower bounds (LBs), the average deviation from LB for TdL still equals just 2.34% and reaches 13.94% in the observed worst case. For instances with distributions of task times that are widespread in practice (PB and BM), the deviation from LB never exceeded 3.52% and averaged just 1.07%.
[Table 5 – flattened during text extraction; the full column assignment is not recoverable. The recoverable best-of-forward/backward averages for the top rules are: max TdL 0.97, max TdS 1.29, max T 1.71, max CP 2.33, max LP 2.71, min L 3.01, max CPW 3.05, max PW^v 3.12, max |F*| 3.12; the worst rules (e.g. min TdS, min TdL, min T) deviate by about 20% on average.]
Table 5 Average performance (%): results from the best of forward and backward construction and from the bidirectional construction scheme, sorted according to performance
This good performance of priority rules is observed despite the high hardness of the instances. The hardness of the instances can be measured by the trickiness coefficient, which gives the share of suboptimal solutions in the solution space (cf. Otto et al. 2011). According to this categorization, all instances in our data set are extremely tricky.
From Table 5, we see that the best solutions from the forward and backward directions are often better than those from the bidirectional construction scheme. Among the ten best-performing rules, only for max TdS is the opposite the case. Since both approaches are comparable in terms of computational time, we will always use the former setting for each rule in the further experiments.
Our analysis of the elementary rules confirms and extends results reported in the literature so far. All five best rules found in the study of Talbot et al. (1986) also exhibited very good results, in roughly the same order of performance, in our study. We confirm the result of Scholl and Voß (1996) that TdL, TdS, T, CPW and |F*| are among the best-performing rules. However, in line with Talbot et al. and contrary to Scholl and Voß, we observe a weak performance of the PW* rule. The reason for such differences may lie in the large share of small instances in the data set used by Scholl and Voß.
4.1.2 Effective multiple-pass PRBM based on five elementary rules (5Elem)
An interesting and practically relevant question is which set of, say, five elementary rules is especially good if applied as a multiple-pass heuristic (5Elem). These are not necessarily the best-performing rules, but those which complement each other in the best manner. To find such rules, we set up an optimization model: we looked for a set of five elementary rules that together achieve the maximal number of best solutions found in this elementary-rules experiment. We solved this linear optimization model with FICO Xpress-MP. The five rules defining the set 5Elem are: TdL, TdS, T, PW^v and CP. 5Elem reaches an average performance of 0.88%. Remarkably, the results of 5Elem cannot be improved for any instance by any of the 42 tested rules. It is even more impressive that for almost 30% of the instances TdL found a better solution than any other tested rule (see Table 6).
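Selecting complementing rules can be read as a maximum-coverage problem: pick the subset of rules that together find the best-known solution on as many instances as possible. The paper solves this as a linear optimization model with FICO Xpress-MP; a brute-force sketch over a small hypothetical "rule r found the best solution on instance i" table conveys the same idea (rule names and coverage sets below are illustrative, not the paper's data).

```python
from itertools import combinations

def best_rule_subset(found_best, k):
    """Return the k-subset of rules that covers the most instances.

    found_best maps each rule name to the set of instances on which it
    found the best-known solution (a maximum-coverage objective).
    """
    return max(combinations(found_best, k),
               key=lambda subset: len(set().union(*(found_best[r] for r in subset))))

# Hypothetical coverage data for four rules on six instances.
found_best = {
    "TdL": {0, 1, 2, 3},
    "TdS": {0, 1},
    "T":   {4},
    "CP":  {3, 4, 5},
}
subset = best_rule_subset(found_best, k=2)
```

Note that the best pair here is not the two individually best rules but the two that complement each other, which is exactly why 5Elem differs from the top five rules of Table 5. For realistic sizes (5 out of 42 rules) an ILP formulation, as used in the paper, is the more scalable choice.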
Rule (in its meaningful direction):
Rule                              TdL   TdS  T    CP   CPW  LP   RE   L    PW^v  CF   LdF  |F*|
Best result found (%)             84    60   46   36   24   24   16   15   15    15   14   14
Only rule with best result (%)    29.7  6.3  1.1  4.0  0.4  0.0  0.0  0.0  0.0   0.0  0.0  0.0
Table 6 Performance of rules that found the best result (among all tested rules) in at least 10% of the cases (best of forward and backward construction schemes, sorted according to performance)
4.1.3 Merits of aggregation
Some applications require several good-quality initial solutions. For example, the more diverse the initial solutions are and the better their quality is, the better, as a rule, is the performance of metaheuristics (see, e.g., Burke et al. 1998 for timetabling problems). A simple way to obtain diversified good-quality solutions is to apply aggregation to a priority rule.
Indeed, the well-performing longest-path rule LP can be seen as aggregated (clustered) information about the number of followers |F*| (if, e.g., the number of direct followers is identically and independently distributed for each task), whereas L and CP are aggregations of PW*. The surprising fact about the performance of the elementary rules is that aggregations often show a comparable or even a better performance. For example, LP was better than |F*|, and L performed much better than PW*. The reason for such results could be that rules, especially those with a high discrimination power, may contain some portion of noise, so that the aggregated rule preserves or even "purifies" the important information. E.g., at a cycle time of 1,000 it may not be important at all whether a task has a time of 996 or 995, but rather whether it is small, medium or large.
We examined different degrees of aggregation from T_1%c to T_50%c for the widely used elementary rule T. From Table 7, we see that moderate aggregations of T (up to about 10% of c) still produce solutions of a very good quality, similar to or not much worse than those of the non-aggregated T itself (with 1.72% average performance).
                      T_1%c   T_3.3%c   T_10%c   T_33%c   T_50%c
Avg performance, %    1.72    1.82      2.22     3.72     3.34
Avg SSI with T, %     55      51        48       45       45
Table 7 Performance of aggregated task-time rules and similarity of their solutions with those of T
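Aggregation of the task-time rule can be read as clustering priority values into buckets whose width is x% of the cycle time, so that T_1%c, T_10%c etc. differ only in the bucket width. A small sketch under that assumption (whether the paper buckets by rounding up, down, or otherwise is not specified; ceiling is one plausible realization):

```python
import math

def aggregated_task_time(t, c, pct):
    """Aggregate task time t into buckets of width pct% of cycle time c.

    Tasks whose times fall into the same bucket receive the same priority
    value, so small differences (e.g. 996 vs. 995 at c = 1000) are ignored
    while the small/medium/large distinction is preserved.
    """
    width = pct / 100.0 * c
    return math.ceil(t / width)
```

Using the aggregated value as the priority in place of the raw task time turns the T rule into T_{pct%c}; larger `pct` produces coarser rules and more diverse solutions, at some cost in quality, as Table 7 indicates.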
An important question remains whether the good performance of aggregated task-time rules is solely explained by the equivalence of their solutions to those of the T rule, or whether they indeed produce different solutions and achieve high performance by preserving an important piece of information about each task. To compare how similar solutions are, we propose the Solution Similarity Index (SSI), which describes the similarity of task-to-station assignments in solutions. We rejected the idea of comparing the assignment sequences of tasks, because even different sequences may produce the same task-to-station assignments and hence the same solution of SALBP-1. SSI measures the share of task pairs that are assigned together to one station in the solutions under comparison. Formally, let V̄ = V ∪ {0} be an extended set of tasks and let V̄ × V̄ be the set of all possible pairs of tasks, whereby the pair (j, 0) means that task j is assigned to some station alone. We define G_s ⊆ V̄ × V̄ as the set of task pairs that are assigned to the same station in solution s. The similarity of solutions 1 and 2 is then
SSI = 2 · |G1 ∩ G2| / (|G1| + |G2|).
For example, let the first solution be {1, 2, 3}, {4} and the second solution be {1, 3, 4}, {2}. The pairs of G1 assigned together are {1, 2}, {1, 3}, {2, 3} and {4, 0}; those of G2 are {1, 3}, {1, 4}, {3, 4} and {2, 0}. Of these eight pairs, two coincide ({1, 3} in G1 and {1, 3} in G2). Hence, SSI = 2/8 = 25%. SSI equals one only if the two solutions are identical and equals 0 only if no pair of tasks is assigned to the same station in both solutions.
From Table 7, we see that the aggregated task-time rules show a moderate similarity of 55% or less to T for our data set. Even for the least aggregated rule T_1%c, SSI never exceeds 92% and is higher than 80% for only 7% of the instances. Hence, the moderately low values of SSI reveal that aggregated rules produce solutions different from those of T.
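The SSI definition and the worked example above translate directly into code (a sketch; stations are represented as sets of tasks, with 0 marking a task that is alone at its station):

```python
from itertools import combinations

def station_pairs(solution):
    """Set of unordered task pairs sharing a station; {j, 0} if j is alone."""
    g = set()
    for station in solution:
        if len(station) == 1:
            g.add(frozenset({next(iter(station)), 0}))
        else:
            g.update(frozenset(p) for p in combinations(station, 2))
    return g

def ssi(sol1, sol2):
    """Solution Similarity Index: 2*|G1 ∩ G2| / (|G1| + |G2|)."""
    g1, g2 = station_pairs(sol1), station_pairs(sol2)
    return 2 * len(g1 & g2) / (len(g1) + len(g2))

# The example from the text: {1,2,3},{4} vs. {1,3,4},{2} gives SSI = 2/8 = 25%.
value = ssi([{1, 2, 3}, {4}], [{1, 3, 4}, {2}])
```

Because SSI compares task-to-station assignments rather than assignment sequences, two passes that schedule tasks in different orders but build the same stations correctly receive SSI = 1.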
Given the results of this subsection, we select a set of the ten best elementary rules (10Best) for our further analysis: TdL, TdS, T, CP, LP, PW^v, |F*|, T_1%c, T_3.3%c and L. We dropped CPW because it leads to exorbitantly high priority values and may cause calculation overflow errors; it is thus less suitable for applications in practice, as explained in Section 2.3.
4.1.4 Performance of improving elements
As discussed above, there exist several opportunities to improve the solution quality of PRBMs. In Table 8, we show the potential of JackDom, SingleFit and DualFit, as well as of some of their combinations. We used the best result over 10Best for our analysis. As we see, each of the so-called improving elements (IEs) may even produce worse solutions than a PRBM without IEs. In total, IEs lead to moderate improvements in the obtained solutions. Among the IEs, DualFit and JackDom together with DualFit bring the highest improvement on average; however, the observed worst-case performance for these IEs is higher as well. Furthermore, DualFit increases the computational effort by about five times on average for our data set. JackDom and SingleFit do not cause any significant increase in computational times. Overall, based on the average and worst-case performance, as well as on the computational effort, we only recommend JackDom and apply it in our control experiment (Section 3).
                       without IEs  SingleFit  DualFit  JackDom  JackDom & SingleFit  JackDom & DualFit
Avg performance, %     0.88         0.89       0.76     0.87     0.88                 0.76
Worst performance, %   3.26         3.26       3.49     3.13     3.13                 3.49
Table 8 Advantages of improving elements (comparison of the best result of 10Best)
4.2 Analysis of composite rules
Recommendation. We recommend using composite rules instead of or in addition to elementary rules, because the former exhibit better and more robust results (combination principle). It is advantageous to combine time-oriented and precedence-oriented rules; in particular, the pairings of TdL, T, T_1%c and T_3.3%c on one side with CP, LP, L and |F*| on the other side. We recommend taking the weights for these rules in the vicinity of those provided in Table 12. Moreover, it is worth applying a pairing of rules several times, changing the weights of the composite rules in an interval around the recommended weights. A well-performing multi-pass PRBM, 5Comp, consists of {T; LP; 1/2}, {T; CP; 2}, {TdL; LP; 1/2}, {TdL; CP; 1/2} and {T_1%c; LP; 1/10}.
As already mentioned in Section 1, there is an overall consensus that a combination of elementary rules performs better than the elementary rules themselves (combination principle, cf. Panwalkar and Iskander 1977). In this section, we examine whether this is (always) the case and which rules have to be combined to achieve improved results.
4.2.1 Performance of elementary rules' pairings
For our investigation, we applied a weighted sum and a lexicographic combination of elementary rules, since these are the most widespread combination methods in the literature (e.g. Talbot et al. 1986). In a lexicographic combination of rules, a secondary rule serves as a tie-breaker for a primary rule. We examine all possible pairings of 10Best as defined in Section 4.1.3. For each pairing of rules, we construct the two possible lexicographic composite rules (exchanging the roles of primary and secondary rule) and 9 weighted-sum composite rules with the following weights for the first rule (w.l.o.g., the weight of the second rule is always fixed to 1): 100, 10, 5, 2, 1, 1/2, 1/5, 1/10 and 1/100. Overall, 45 different pairings of rules, or C(10, 2) · 11 = 495 composite rules, were examined.
Rule                   Scaling coefficient
TdL, TdS               c for the numerator and GUB* for the denominator
T, T_1%c, T_3.3%c      c
CP, LP                 ∑_{i=1}^{n} t_i
L, |F*|                n
PW^v                   GUB* · c · ³√n
Table 9 Applied scaling of the rules in the composite-rules experiment. *) Computed as in Scholl (1999, Ch. 5)
We apply a scaling of the elementary rules when combining them into a composite rule. Firstly, neglecting the scaling would make the weights, or even the very logic of the rule, incomparable across problem instances. For example, take the rule of Bhattacharjee and Sahu (1988) with priority values pv_j = t_j + |F_j*|. Changing both the task times and the cycle time by a common factor (e.g. 10), we obtain a completely analogous problem; however, the relative importance of the task times in the composite rule rises, and the application of the composite rule results in a different solution. The second reason for scaling is that it maps the priority values of different rules to similar ranges (in our case to the interval [0, 1]), so that we can apply a similar and intuitive set of weights for all rules. The applied scaling is intuitive, except, perhaps, for TdL, TdS and PW^v (Table 9). Theoretically, the priority values of the rules TdL and TdS take values from 1 to c, and those of PW^v from 1 to ∑_{i=1}^{n} t_i. Hence, c and ∑_{i=1}^{n} t_i, respectively, would be the intuitive scaling factors. But in these cases, the priority values produced by the scaled rules are extremely low. Therefore, somewhat arbitrarily, we chose the scaling coefficients shown in Table 9. This scaling produces good results.
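With this scaling, a weighted-sum composite rule such as {T; LP; 1/5} reduces to a one-line computation per task. The sketch below assumes the two constituent priority values and their scaling coefficients are given; the concrete numbers are illustrative only.

```python
def composite_priority(pv1, scale1, pv2, scale2, w1, w2=1.0):
    """Weighted sum of two scaled priority values.

    Scaling maps both constituent rules roughly into [0, 1], so the same
    weight grid (100, 10, 5, 2, 1, 1/2, 1/5, 1/10, 1/100) is meaningful
    across rules and across problem instances.
    """
    return w1 * pv1 / scale1 + w2 * pv2 / scale2

# {T; LP; 1/5}: the task time is scaled by the cycle time c and the
# longest-path value by the total task time (illustrative numbers).
c, total_time = 1000, 12500
pv = composite_priority(pv1=640, scale1=c, pv2=3100, scale2=total_time, w1=1/5)
```

Because the scaled values are dimensionless, multiplying all task times and the cycle time by a common factor leaves the composite priorities, and hence the constructed solution, unchanged, which is exactly the invariance the scaling is meant to provide.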
In Table 10, we report the results of the best weight combination for each pairing of rules (for a further analysis of the weights see Section 4.2.3). For the sake of comparison, the results of the elementary rules are given on the main diagonal in bold. From this table we see that in many cases paired combinations of elementary rules exhibit better results than the most successful elementary rule TdL. The best-performing pairings are those of TdL, T, T_1%c and T_3.3%c on one side with CP, LP, L and |F*| on the other side. The best-performing composite rule {T; LP; 1/5} achieved an average performance of 0.77%, which is better than that of the best single rule TdL by 0.20 percentage points (pp). {T; LP; 1/5} found solutions with, on average, one station less than TdL. Composite rules also tend to be more robust. For example, TdL missed the UB by 3.62% in the observed worst case, whereas the worst performance of the best composite rule {T; LP; 1/5} equaled 1.9%.
          TdL    TdS    T      T_1%c  T_3.3%c  CP     LP     L      PW^v   |F*|
TdL       0.97
TdS       0.97   1.29
T         0.97   1.18   1.71
T_1%c     0.97   1.19   1.71   1.72
T_3.3%c   0.97   1.20   1.71   1.72   1.82
CP        0.77   1.15   0.80   0.80   0.85     2.33
LP        0.77   1.11   0.77   0.77   0.82     2.30   2.71
L         0.86   1.23   0.88   0.90   0.92     2.34   2.72   3.01
PW^v      0.97   1.29   1.71   1.71   1.66     2.48   2.43   2.93   3.12
|F*|      0.86   1.24   0.88   0.88   0.91     2.33   2.75   3.11   2.88   3.12
Table 10 Average performance (%) of the best weight combinations for the given pairings of rules (main diagonal: elementary rules)
4.2.2 Combination principle
To check the validity of the combination principle, we compare the results of the paired combinations of rules from Table 10 with the better result of the two respective constituent elementary rules (see Table 11). If the paired rule performs better than the better of its two constituent rules (these cases are marked bold in Table 11), then the combination of rules brings additional value (within half of the computational time). Indeed, for more than half of the examined pairs of elementary rules, the combined rule performs better. The highest improvement of almost 1 pp is observed for T, T_1%c and T_3.3%c on one side and CP, LP, L and |F*| on the other side. On the other hand, a combination of rules that employ a similar type of information (e.g. TdS and TdL, or L and |F*|) may even lead to a worse result than the application of the two elementary rules independently of each other.
Herewith we confirm the combination principle. It seems most beneficial to combine time-oriented with precedence-oriented elementary rules.
          TdL    TdS    T      T_1%c  T_3.3%c  CP     LP     L      PW^v
TdS       -0.05
T         -0.01  0.08
T_1%c     -0.01  0.07   -0.10
T_3.3%c   0.00   0.07   -0.09  -0.07
CP        0.19   0.12   0.84   0.86   0.87
LP        0.20   0.17   0.92   0.94   0.97     0.02
L         0.11   0.05   0.82   0.82   0.88     -0.01  -0.07
PW^v      -0.03  -0.04  -0.07  -0.06  0.06     -0.25  0.06   -0.17
|F*|      0.11   0.05   0.82   0.84   0.89     0.00   -0.07  -0.15  -0.05
Table 11 Gain in information: difference between the average performance (%) of the best result of the two corresponding elementary rules (each from two directions) and that of the best composite rule
4.2.3 Best-performing composite rules
Table 12 reports the best weight combinations for the most successful pairings of elementary rules, as well as their performance (see also Table 10). These weights were ideal for more than half of the instances, i.e. for more than half of the instances the solution could not be improved by changing the weights. Moreover, each of these composite rules performs better than the best elementary rule TdL, both in terms of the average and the worst results.
First rule  Second rule  Weight of the first rule  Avg, %  Worst, %
TdL         CP           1/2                       0.77    2.23
TdL         LP           1/5                       0.77    2.13
TdL         L            1                         0.86    2.47
TdL         |F*|         2                         0.87    2.57
T           CP           1/5                       0.80    1.99
T           LP           1/5                       0.77    1.88
T           L            1                         0.88    2.53
T           |F*|         2                         0.88    2.90
T_1%c       CP           1/5                       0.80    2.02
T_1%c       LP           1/5                       0.77    2.21
T_1%c       L            1                         0.90    3.02
T_1%c       |F*|         2                         0.88    2.66
T_3.3%c     CP           1/5                       0.85    2.21
T_3.3%c     LP           1/5                       0.82    2.00
T_3.3%c     L            1                         0.92    2.49
T_3.3%c     |F*|         2                         0.91    2.55
Table 12 Best combination of weights for the best-performing pairings of elementary rules
We examined the neighborhoods of the best weight combinations in Table 12 to find out how smooth they are. Figure 4 reports this analysis for the pairing of T and LP; its behavior is typical for all examined pairings of elementary rules. An important observation from these data is that the pairing performed very well over a rather wide range of weight combinations. We see that, both in terms of the worst observed performance and the average performance, the pairing of T and LP achieves similar results for weights of T ranging from about 1/3.5 down to 1/16. Namely, for 95% of the instances (95% percentile), the relative deviation never exceeds 1.53%, with the average relative deviation resting between 0.80% and 0.94%. Therefore, it seems to be a good idea to "try around" a good combination of weights. For example, the best result from 26 composite rules formed of T and LP within the indicated range of weights from 1/3.5 to 1/16 could not be improved by any other combination of weights for this pairing for 79% of the instances. We will use this fact in Section 4.4 to construct multiple-pass PRBMs with random influence. In Section 4.3, we also show how to select even better-performing weights for pairings of rules by considering the structural characteristics of the instance.
Note that Figure 4 offers insight into why the multiple-pass PRBMs with random influence RTie-S and RChoice-S tested in Section 3.3 delivered moderately worse results than the simple RTie and RChoice in several cases. The reason is that RTie and RTie-S, as well as RChoice and RChoice-S, perform a neighborhood search (in the weight space) due to the random distortion of the underlying composite rule (see Section 4.4). For each pairing of rules there is, commonly, an interval of possible weight ratios in which this pairing exhibits a good performance (see Figure 4). In case of RTie-S and RChoice-S, for several groups of structural characteristics the recommended weight ratios for the composite rules lie on either border of such an interval. Therefore, RTie-S and RChoice-S in the vicinity of such border weights also examine weight combinations outside of this well-performing interval and thus may deliver worse results.
Fig. 4 Performance of the pairing of T and LP at different weights; the bright dotted area marks the range between the 5% and 95% percentiles
4.2.4 Effective multiple-pass PRBM based on five composite rules (5Comp)
We also looked for a set of five complementing composite rules. For this purpose, we set up an optimization model following the same logic as described in Section 4.1.2: we looked for a set of five composite rules that maximizes the number of best solutions found among all 495 composite rules under examination. The best-performing multiple-pass PRBM 5Comp consists of {T; LP; 1/2}, {T; CP; 2}, {TdL; LP; 1/2}, {TdL; CP; 1/2} and {T_1%c; LP; 1/10}. The solutions found by 5Comp for 64% of the instances could not be improved by any of the 495 composite rules. Overall, 5Comp was 0.30 pp better than 5Elem (see Section 4.1.2; 0.58% vs. 0.88%) in terms of average performance; it found better solutions than the latter in about 43% of the cases and a worse solution for only about 5% of the instances. The improvement is even more pronounced for the worst performance: for 5Comp the relative deviation never exceeded 1.80%, whereas for 5Elem it equaled 3.43% in the observed worst case.
4.3 Consideration of structural characteristics of instances
Recommendation. Incorporating knowledge of the instance's structure improves the results. In case of composite rules, it is useful to increase the weight of the precedence-oriented rule for instances with higher order strength or with a higher share of large tasks.
Factorial analysis is an important part of testing that generates new and practicable knowledge about the solution algorithm (cf. Barr et al. 1995; Hooker 1995).
TTD             PB              BM              PM
OS         0.2  0.6  0.9   0.2  0.6  0.9   0.2  0.6  0.9
TdL         95   89   76    92   71   28    92  100   36
TdS         88   93   76    83   75   36    16    0   72
T           93   80   60    69   45   12     5    0    0
T1%c        89   75   64    53   39   28     4    0    0
T3.3%c      67   67   64    28   19   12     0    0    0
CP          52   53   92     8    9   44     0    0    0
LP          43   51   56     0    1    0     0    0    0
L           40   44   52     0    1    0     0    0    0
PWav        60   73   96     8   43   92     0    0    0
|F*|        41   40   52     0    0    0     0    0    0

Table 13 Share of instances (%) for which the best result among the ten elementary rules was found
Elementary rules use different pieces of information about the problem instance. Therefore, it is natural to expect them to show different performance on instances with different structure. In this section, we perform our analysis with respect to the two most important structural measures: order strength (OS) and task time distribution (TTD). In Tables 13 and 14, we combined the results for instances with the same OS and TTD, since no large differences were detected for instances with different graph structures such as chains and bottlenecks.
In Table 13, for each rule of 10Best, the share of instances with the best result among the ten rules is reported. We see that rules which weakly rely on or ignore information about the task times, such as |F*|, L and LP, perform well only on instances with low task times (distribution PB). CP and PWav show a better performance at high order strength. T, including its aggregations, proved to be rather robust: although its relative performance deteriorates with growing order strength, it still outperforms most other rules at low (0.2) and average (0.6) levels of order strength. In total, TdL proved to be the best rule. For PM at an OS of 0.6, TdL found better solutions than any other of the examined rules, but still showed a rather high average deviation of 2.6% and a worst-case deviation of 3.6%. However, for instances with high order strength, TdL was outstripped by CP, PWav or TdS. Strikingly, TdL showed the best result among the examined rules in less than one third of the instances with BM and an order strength of 0.9, whereas for the whole benchmark data set this number equaled 84% (cf. Table 6). Overall, the results support the initial intuition that at lower levels of order strength elementary rules with more intensive incorporation of task time information perform better. Moreover, if instances contain a lot of large tasks (PM), elementary rules should take into account information on both the task times and the precedence relations (as TdL and TdS do).
                                 TTD        PB              BM              PM
First Rule           Second Rule OS    0.2  0.6  0.9   0.2  0.6  0.9   0.2  0.6  0.9
T, T1%c or T3.3%c    |F*|             100   10    1    10   10    2     5    2  1/2
                     L                100    5  1/2    10    5  1/2     2    1  1/2
                     LP                 2    1 1/10     2    1 1/10   1/2  1/5 1/10
                     CP                10  1/2  1/5     2    1 1/10   1/2  1/5 1/10
TdL                  |F*|             100   10    1   100   10  1/2    10    5  1/2
                     L                  ∞    5    1     ∞    2  1/2     5    2  1/5
                     LP                 5    1  1/5     1    2 1/10   1/2  1/2 1/10
                     CP                10    1  1/2     2    1 1/10   1/2  1/2 1/10

Table 14 Best weights for the pairings depending on structural parameters of the problem instance
Similar conclusions can be drawn for the composite rules. Table 14 reports the best weights for the first rule of each pairing at the given TTD-OS combinations. We see that for instances with a lot of large tasks (PM), a higher weight of the precedence-oriented rule is required, whereas for instances with other TTDs, especially at lower levels of order strength, high weights for the task-time-based rules are recommended. Applying the pairings with structure-specific weights (see Table 14) brings about 0.10 pp. improvement in average performance for each pairing compared to the constant weights provided in Table 12.
Table 15 reports 5Comp with different degrees of incorporation of information about the instance's structure. In 5Comp-½S, we use the same pairings of rules as in 5Comp, but combine them with the structure-dependent weights from Table 14. In 5Comp-S, we go a step further and select five complementing composite rules specific to each TTD and each level of OS with the optimization procedure described in Sections 4.1.2 and 4.2.4. All rules in 5Comp-S turned out to come from the recommended 16 pairings, with weights in the vicinity of those recommended in Table 14. From Table 15 we see that a moderate improvement in the performance of the rules can be achieved by using knowledge of the instance's structure.
            Performance, %        Share of instances with best result
            Avg      Worst        among the tested 495 composite rules, %
5Comp       0.58     1.80         64
5Comp-½S*   0.54     1.50         70
5Comp-S     0.50     1.50         78

Table 15 Performance of composite rules adjusted to the problem structure. *) Five pairings of rules as in 5Comp, but with the structure-dependent weights from Table 14
Remark. We may construct a 5Comp-S for a problem instance with an arbitrary level of OS and an arbitrary task time distribution. First, we assume that the task time distribution is fully characterized by the mean task time (in relation to the cycle time). Afterwards, we construct the weights for the composite rules by a log-linear approximation based on the weights reported in Table 14.
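The log-linear approximation mentioned in the remark can be made concrete as follows (a sketch under our own reading of the remark: the function name and the anchor format `{OS level: weight}` are assumptions, and finite positive anchor weights are required, so entries like ∞ in Table 14 would have to be capped first):

```python
import math

def loglinear_weight(os_value, anchors):
    """Interpolate the logarithm of the recommended weight linearly in
    the order strength and exponentiate back; `anchors` maps OS levels
    (e.g. 0.2, 0.6, 0.9 from Table 14) to recommended weights."""
    xs = sorted(anchors)
    # clamp outside the anchored OS range
    if os_value <= xs[0]:
        return anchors[xs[0]]
    if os_value >= xs[-1]:
        return anchors[xs[-1]]
    # find the bracketing anchor pair and interpolate log-weights
    for lo, hi in zip(xs, xs[1:]):
        if lo <= os_value <= hi:
            t = (os_value - lo) / (hi - lo)
            log_w = (1 - t) * math.log(anchors[lo]) + t * math.log(anchors[hi])
            return math.exp(log_w)
```

Interpolating in log-space rather than directly respects the multiplicative nature of the weight ratios (1/10, 1/2, 2, 10, ...).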
4.4 Multi-pass PRBMs with random influence
Recommendation. Apply RTie to the best complementing composite rules (5Comp or 5Comp-S), or apply the RChoice concept to the pairings of TdL, T, T1%c and T3.3%c on one side and CP, LP, L and |F*| on the other side around the weights recommended in Section 4.2.3. Apply several iterations of RTie only to rules which often need a tie-breaker; in contrast, for pairings containing TdL or CP and for the pairing T and L, one pass suffices. Before starting RChoice or RTie, find solutions by applying 5Comp or 5Comp-S alone.
Priority rules are just reasonable heuristics. In computational experiments on small data sets (see e.g. Talbot et al. 1986), each tested priority rule therefore found a worse solution than a simple random assignment of tasks for at least some of the problem instances (in our experiments with large and thus much harder instances, this was not the case). Therefore, researchers often incorporate a purely random assignment rule or some kind of random influence into their PRBM concept and perform several passes of priority rules (e.g. Tonge 1965). In this section, we compare different multi-pass PRBM concepts with random influence.
Several concepts of multi-pass PRBMs with random influence have been proposed in the literature. One of the earliest suggestions is COMSOAL of Arcus (1966), a construction method in which the priority values provide the probabilities that a task will be assigned in the next step. In the study of Talbot et al. (1986), 1,000 passes of COMSOAL were found to perform comparably to a multi-pass PRBM based on 13 elementary rules, each applied in forward and backward direction with 12 different tie-breakers. Further, Storer et al. (1992) proposed to add a random influence to the problem data (in our case, to the priority values of the tasks); we call this method RAdd. We also examine the performance of a random tie-breaker (RTie). Besides, we propose RChoice, a method in which, for each pass, a composite rule is randomly selected from a set of pairings of rules with the weight being "distorted" by a random element. This method is directly based on our insights from Section 4.2.3 and applies a logic similar to the approach proposed by Hershauer and Ebert (1975), which showed a good performance.
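The COMSOAL idea of the previous paragraph can be illustrated with a minimal construction pass in Python (our sketch only, not Arcus' original procedure; the data structures are assumptions, priorities must be positive, and every task time is assumed to fit into the cycle time):

```python
import random

def comsoal_pass(tasks, times, preds, cycle_time, priority, rng=random):
    """One COMSOAL-style pass: among the tasks that are available (all
    predecessors assigned) and still fit into the current station, pick
    the next task with probability proportional to its priority value.
    Assumes times[j] <= cycle_time for all j and an acyclic graph."""
    stations, assigned, load = [[]], set(), 0.0
    while len(assigned) < len(tasks):
        fitting = [j for j in tasks if j not in assigned
                   and preds[j] <= assigned
                   and times[j] + load <= cycle_time]
        if not fitting:                       # nothing fits: open a new station
            stations.append([])
            load = 0.0
            continue
        # random draw weighted by the priority values
        j = rng.choices(fitting, weights=[priority[j] for j in fitting])[0]
        stations[-1].append(j)
        assigned.add(j)
        load += times[j]
    return stations
```

Repeating such passes and keeping the best balance found is exactly the multi-pass use of COMSOAL evaluated in Figure 5.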
In our experiments, we construct seven multi-pass PRBMs: RChoice for composite rules, as well as RTie, RAdd and COMSOAL, each for both the elementary rules from 5Elem and the composite rules from 5Comp. We call the construction of two solutions (in forward and backward direction) by the same rule one pass of a PRBM. The total number of passes is set to 125. We apply RChoice by selecting rules from the pairings of TdL, T, T1%c and T3.3%c on one hand and CP, LP, L and |F*| on the other hand, and by distorting the recommended weight (see Table 12) of the first rule by a random factor from the interval [1/5, 5]. For example, if the pairing T and LP with recommended weight 1/5 is selected, the weight of the corresponding composite rule is randomly set between 1/25 and 1.
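A sketch of the RChoice draw described above might look as follows (the function name and the pairing format are our assumptions; sampling the distortion factor log-uniformly on [1/5, 5] is also an assumption, chosen so that multiplying by x and by 1/x are equally likely, since the paper only states the interval):

```python
import random

def rchoice_rule(pairings, rng=random):
    """Draw one pairing (first rule, second rule, recommended weight)
    uniformly at random and distort the weight of the first rule by a
    random factor from [1/5, 5]."""
    first, second, weight = rng.choice(pairings)
    factor = 5.0 ** rng.uniform(-1.0, 1.0)    # log-uniform on [1/5, 5]
    return first, second, weight * factor

# e.g. the pairing T and LP with recommended weight 1/5 yields a
# distorted weight somewhere in [1/25, 1]
rule = rchoice_rule([("T", "LP", 1/5), ("TdL", "CP", 1/2)])
```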
In RAdd, the random element ε_j is independently uniformly distributed on the interval [0, 1). We tested four different weights for the random element: 0.025, 0.05, 0.1 and 0.2. Note that these weights are given in relation to the sum of the weights of the other rules within the composite rule (this procedure is equivalent to scaling the composite rule prior to the application of the random component). For example, the scaled (see Section 4.2.1) priority value t_j would be modified to t_j + 0.025 · ε_j for the purposes of RAdd, whereas the priority value t_j + 5 · LP_j is changed to t_j + 5 · LP_j + 6 · 0.025 · ε_j. The best results were obtained at the moderate weight of 0.025 for the random element. For RAdd, as well as for COMSOAL, the 125 passes are distributed uniformly among the five rules (5Elem or 5Comp, respectively), 25 per rule.
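The RAdd perturbation can be written down in a few lines (a sketch; the function name and the per-task input format, one tuple of scaled constituent priority values per task, are our assumptions):

```python
import random

def radd_priority(base_values, weights, alpha=0.025, rng=random):
    """Perturb each task's composite priority sum(w_k * v_kj) by
    alpha * sum(w_k) * eps_j with eps_j ~ U[0, 1), matching the worked
    example: t_j + 5*LP_j becomes t_j + 5*LP_j + 6*0.025*eps_j,
    because the constituent weights 1 and 5 sum to 6."""
    total_w = sum(weights)
    return [sum(w * v for w, v in zip(weights, vals))
            + alpha * total_w * rng.random()
            for vals in base_values]
```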
For RTie, it does not make sense to apply several passes to rules that do not need a tie-breaker, or, in other words, to rules that have a high discrimination power. Therefore, we performed only one pass of RTie for such rules and distributed the rest of the 125 passes uniformly among the less discriminating rules, so that we arrive at 125 passes in total. To characterize the discrimination power of a priority rule, we computed the average number of times per instance that a tie-breaker was needed. Computational experiments on our benchmark data set showed that only rules for which a tie-breaker was needed more than 50 times per instance reached an improvement (≥ 0.05% in terms of average relative deviation) after several passes of RTie compared to the first pass. Among the rules of 5Elem described in Section 4.1.2, the priority rules TdL, TdS, and CP have a high discrimination power; for these rules, only one pass of RTie was performed. Among the 16 best pairings of rules suggested in Section 4.2, high discrimination power characterizes the pairings including TdL or CP and the pairing T and L.
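Both ingredients of this analysis, the random tie-breaker and the tie count used to measure discrimination power, can be sketched together (our illustration; the function name and the flat dictionary of priority values are assumptions):

```python
import random

def rtie_order(tasks, priority, rng=random):
    """Rank tasks by priority, breaking every tie uniformly at random,
    and also count the tasks involved in ties -- a proxy for how often
    the tie-breaker fires (rules needing it often profit from several
    RTie passes, highly discriminating rules do not)."""
    ties = sum(1 for j in tasks
               if sum(priority[k] == priority[j] for k in tasks) > 1)
    # sort by descending priority; the random key component breaks ties
    order = sorted(tasks, key=lambda j: (-priority[j], rng.random()))
    return order, ties
```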
The performance of the examined PRBM concepts with random influence is reported in Figure 5. From this figure, we see that COMSOAL is clearly dominated by all the other tested PRBM concepts with random influence. Moreover, even after 25 passes per rule, COMSOAL still exceeded 4% in terms of average relative deviation, which is more than four times higher than the result of the best elementary rule TdL.
Fig. 5 Performance of different PRBMs with random influence
With 125 passes in total, the RTie (RAdd) concept resulted in an average relative deviation of 0.82% (0.67%) for elementary and 0.46% (0.64%) for composite rules, respectively. The application of RAdd to 5Elem was better than 5Elem alone by 0.15 pp., whereas RAdd applied to 5Comp was even 0.06 pp. worse than 5Comp alone. Moreover, no single composite rule within RAdd showed a better result after 25 passes than this composite rule itself without the addition of a random component (in terms of average relative deviation).
The best performing PRBMs with random influence proved to be RTie applied to 5Comp and RChoice (see Figure 5). After 125 passes, RChoice deviated from the UB by 0.43% on average and never by more than 1.50%; for RTie applied to 5Comp, these numbers equaled 0.46% and 1.50%, respectively. RChoice seems to be a bit less prone to the flattening-out effect than RTie, i.e., to the absence of or only little improvement to the found solutions from a certain number of passes onwards. Overall, after 625 passes, the average relative deviation was lowered to 0.37% for RChoice and 0.42% for RTie applied to 5Comp.
To determine how many passes RChoice needs to catch up with the performance of 5Comp and 5Comp-S, we conducted random sampling with replacement from the obtained results for different numbers of passes (bootstrapping). We found that, in more than 95% of the cases, 16 passes are enough to reach a 0.58% deviation (the result of 5Comp), and with 42 passes the result of 5Comp-S (0.5%) is reached. The good performance of RChoice for the recommended pairings is in line with our findings of Section 4.2.3.
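The bootstrap just described can be sketched as follows (a simplified illustration under our own assumptions: single-pass deviations are treated as exchangeable draws, and the target must be reachable, i.e. min(deviations) <= target, or the loop would not terminate):

```python
import random

def passes_needed(deviations, target, trials=1000, quantile=0.95, rng=random):
    """Resample n single-pass deviations with replacement, keep the
    best (minimum) of each resample, and return the smallest n for
    which at least `quantile` of the trials reach `target`."""
    n = 0
    while True:
        n += 1
        hits = sum(
            min(rng.choice(deviations) for _ in range(n)) <= target
            for _ in range(trials))
        if hits >= quantile * trials:
            return n
```

With the actual per-pass results of RChoice as input, this procedure yields the 16 and 42 passes quoted above.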
5 Discussion
As in many other areas where managerial decisions have to be taken, very good results in assembly line balancing can be achieved with very simple priority rules. This was convincingly shown in our study on instances of real-world size and structure. The collection of only five elementary rules (5Elem), namely largest task time over latest station (TdL), largest task time over slack (TdS), largest task time (T), largest critical path (CP) and largest positional weight of available tasks (PWav), dominated a pure random construction procedure with 125 passes (R125), i.e., showed a significantly better performance. In our computational experiments, 5Elem showed from 3% (from the best known UB) up to 14% (from the best known LB) deviation from the optimal solution in the worst case (see Figure 3).
However, in practice even these 3-14% mean an increase in variable and fixed costs, which, as we have shown, can easily be reduced by applying the five composite rules with structure-dependent parameters (5Comp-S) instead. In the worst case observed in our experiments, 5Comp-S showed just a 1.6-11% deviation from the optimum and reduced the average relative deviation by about a half.
If some computational time for more passes of priority rules is available, then priority rule-based methods with random influence can be applied. We especially recommend the random choice procedure RChoice described in Section 4.4. After 125+5 passes, this method was able to reduce the average relative deviation achieved by 5Comp-S by about a quarter.
However, this article is more than just a recommendation of several promising priority rule-based methods that can be applied stand-alone or incorporated into (partial) enumeration procedures, metaheuristics or simulations. We are aware that it is impossible to find the best recipe of priority rules for all possible instances of assembly line balancing problems. Therefore, this study is rather a comparison and a thorough analysis of ideas, innovative measures and approaches that, we hope, will facilitate the creation of new rules and priority rule-based methods for the actual needs of real balancing situations.
For example, a successful application of metaheuristics (e.g. tabu search) often requires several good-quality initial solutions. As we showed in this article, a moderate aggregation of a good rule (see Section 4.1.3) or a combination of time-oriented with precedence-oriented rules (see Section 4.2.3) will provide different and near-to-optimum solutions. Astonishingly, the combination of such rules produces synergy effects that often lead to a better solution than the best solution after two passes, each with one constituent rule.
We also highly recommend that planners and researchers use information about the structure of the instance. This is especially useful for real-world assembly lines, where the line has to be rebalanced often while the characteristics of the precedence graph remain (almost) the same. As we have shown in our analysis in Section 4.3 and in our control experiments in Section 3, the consideration of just two structural parameters, order strength and task time distribution, already leads to better results. Overall, this study is one of the very few (along with Kurtulus and Davis 1982) that show the importance of the structural parameters of the instance and give detailed recommendations on promising adjustments of priority rules.
Still, some questions remain for future research. The present study indicates the most effective priority rules; the impact of incorporating these priority rules into (partial) enumeration methods is still to be investigated. Further, Storer et al. (1992) initiated a promising discussion on possible ways to apply the knowledge embedded in problem-specific priority rules within metaheuristic search schemes (which are generalist in nature). A comparative investigation of these methods is still to be performed for assembly line balancing problems.
Acknowledgments
This article was supported by the Federal Program “ProExzellenz” of the Free State of Thuringia.
References
Arcus, A.L.: COMSOAL: A computer method of sequencing operations for assembly lines.
Int. J. Prod. Res. 4, 259–277 (1966)
Barr, R. S., Golden, B. L., Kelly, J. P., Resende, M. G. C., Stewart, W. R.: Designing and
reporting on computational experiments with heuristic methods. J. Heuristics 1, 9–32
(1995)
Baybars, I.: A survey of exact algorithms for the assembly line balancing problem. Manag.
Sci. 32, 909–932 (1986)
Bhattacharjee, T.K., Sahu, S.: A heuristic approach to general assembly line balancing. Int. J.
Oper. Prod. Manag. 8, 67–77 (1988)
Becker, C., Scholl, A.: A survey on problems and methods in generalized assembly line balancing. Eur. J. Oper. Res. 168, 694–715 (2006)
Bennett, G.B., Byrd, J.: A trainable heuristic procedure for the assembly line balancing problem. AIIE Trans. 8, 195–201 (1976)
Boctor, F.F.: A multiple-rule heuristic for assembly line balancing. J. Oper. Res. Soc. 46, 62–
69 (1995)
Boysen, N., Fliedner, M., Scholl, A.: A classification of assembly line balancing problems.
Eur. J. Oper. Res. 183, 674–693 (2007)
Boysen, N., Fliedner, M., Scholl, A.: Assembly line balancing: Which model to use when?
Int. J. Prod. Econ. 111, 509–528 (2008)
Burke, E.K., Newall, J.P., Weare, R.F.: Initialisation strategies and diversity in evolutionary
timetabling. Evol. Comput. 6, 81–103 (1998)
Elmaghraby, S.E., Herroelen, W.S.: On the measurement of complexity in activity networks.
Eur. J. Oper. Res. 5, 223–234 (1980)
Hackman, S. T., Magazine, M. J., Wee, T. S.: Fast, effective algorithms for simple assembly
line balancing problems. Oper. Res. 37, 916–924 (1989)
Haupt, R.: A survey of priority rule-based scheduling. OR Spectr. 11, 3–16 (1989)
Helgeson, W.B., Birnie, D.P.: Assembly line balancing using the ranked positional weight
technique. J. Ind. Eng. 12, 394–398 (1961)
Hershauer, J.C., Ebert, R.J.: Search and simulation selection of a job-shop sequencing rule.
Manag. Sci. 21, 833–843 (1975)
Hooker, J. N.: Testing heuristics: We have it all wrong. J. Heuristics 1, 33–42 (1995)
Jackson, J.R.: A computing procedure for a line balancing problem. Manag. Sci. 2, 261–271
(1956)
Jin, Z. H., Ohno, K., Ito, T., Elmaghraby, S. E.: Scheduling hybrid flowshops in printed circuit board assembly lines. POMS Ser. Technol. and Oper. Manag. 11, 216–230 (2002)
Kilbridge, M., Wester, L.: The balance delay problem. Manag. Sci. 8, 69–84 (1961)
Kilincci, O.: Firing sequences backward algorithm for simple assembly line balancing problem of type 1. Comput. Ind. Eng. 60, 830–839 (2011)
Kolisch, R., Sprecher, A., Drexl, A.: Characterization and generation of a general class of
resource-constrained project scheduling problems. Manag. Sci. 41, 1693–1703 (1995)
Kurtulus, I., Davis, E.W.: Multi-project scheduling: Categorization of heuristic rules performance. Manag. Sci. 28, 161–172 (1982)
Mastor, A. A.: An experimental investigation and comparative evaluation of production line
balancing techniques. Manag. Sci. 16, 728–746 (1970)
Moodie, C. L., Young, H. H.: A heuristic method of assembly line balancing for assumptions
of constant or variable work element times. J. Ind. Eng. 16, 23–29 (1965)
Ouelhadj, D., Petrovic, S.: A survey of dynamic scheduling in manufacturing systems. J.
Sched. 12, 417–431 (2009)
Otto, A., Otto, C., Scholl, A.: SALBPGen – A systematic data generator for (simple) assembly line balancing. Jena Res. Pap. Bus. Econ. 05/2011, School of Economics and Business Administration, Friedrich-Schiller-University Jena (2011)
Panwalkar, S.S., Iskander, W.: A survey of scheduling rules. Oper. Res. 25, 45–61 (1977)
Rardin, R. L., Uzsoy, R.: Experimental evaluation of heuristic optimization algorithms: A
tutorial. J. Heuristics 7, 261–304 (2001)
Scholl, A.: Data of assembly line balancing problems. Schriften zur Quantitativen Betriebswirtschaftslehre 16, TH Darmstadt (1993)
Scholl, A.: Balancing and sequencing of assembly lines. Physica, Heidelberg (1999)
Scholl, A., Becker, C.: State-of-the-art exact and heuristic solution procedures for simple assembly line balancing. Eur. J. Oper. Res. 168, 666–693 (2006)
Scholl, A., Klein, R.: SALOME: A bidirectional branch and bound procedure for assembly
line balancing. INFORMS J. Comput. 9, 319–334 (1997)
Scholl, A., Voß, S.: Simple assembly line balancing – Heuristic approaches. J. Heuristics 2,
217–244 (1996)
Storer, R. H., Wu, D. S., Vaccari R.: New search spaces for sequencing problems with application to job shop scheduling. Manag. Sci. 38, 1495–1509 (1992)
Talbot, F. B., Patterson, J. H., Gehrlein, W. V.: A comparative evaluation of heuristic line
balancing techniques. Manag. Sci. 32, 430–454 (1986)
Tonge, F.M.: Assembly line balancing using probabilistic combinations of heuristics. Manag.
Sci. 11, 727–735 (1965)
Vance, P. H., Barnhart, C., Johnson, E. L., Nemhauser, G. L.: Solving binary cutting stock
problems by column generation and branch-and-bound. Comput. Optim. Appl. 3, 111–
130 (1994)