Sankhyā : The Indian Journal of Statistics
1994, Volume 56, Series B, Pt. 3, pp. 323-333

ON ESTIMATION OF A LOGNORMAL MEAN USING A RANKED SET SAMPLE

By WEI-HSIUNG SHEN
Tunghai University
SUMMARY. When the experimental or sampling units in a study can be more easily ranked
than quantified, McIntyre (1952) observed that to estimate the population mean, the mean of n
units based on a ranked set sample (RSS) provides an unbiased estimator with a smaller variance
compared to a simple random sample of the same size n. In this paper we further explore the
concept of RSS for the problem of estimation of a lognormal mean with a known coefficient of
variation, and show that the use of RSS and its suitable modifications results in much improved
estimators compared to the use of a simple random sample.
1. Introduction
In sampling situations where the variable of interest to be observed from the experimental units can be more easily ranked than quantified, McIntyre (1952) introduced the concept of 'Ranked Set Sampling' (RSS) and indicated that, for estimation of the population mean, it is highly beneficial and much superior to conventional simple random sampling (SRS). Fortunately, it is indeed possible to rank the experimental or sampling units without actually measuring them: in agricultural studies, for example, when estimating herbage mass and clover content, or when ranking sections along a pipeline in order to find contaminated ones (Sinha et al. (1992)). For other
applications, we refer to Halls and Dell (1966) and Martin et al. (1980).
The basic concept behind RSS, patterned after Sinha et al. (1992), is described below. Suppose X_1, X_2, \ldots, X_n is a simple random sample (SRS) from F(x) with a mean μ and a finite variance σ². Then a standard estimator of μ is \bar{X} = \sum_{1}^{n} X_i/n with var(\bar{X}) = σ²/n. In contrast to SRS, RSS uses only one observation, namely, X_{1:n} \equiv X_{(11)}, the lowest observation, from this set, then X_{2:n} \equiv X_{(22)}, the second lowest from another independent set of n observations, and finally X_{n:n} \equiv X_{(nn)}, the largest observation from a last set of n observations. This process can be described in a table as follows.
Paper received. May 1994; revised September 1994.
AMS (1991) subject classifications. Primary 62G05; secondary 62G30.
Key words and phrases. Lognormal distribution, order statistics, ranked set sample, sample median, simple random sample, uniformly minimum variance unbiased estimator.
TABLE 1. DISPLAY OF n2 OBSERVATIONS IN n SETS OF n EACH
X_{(11)}    X_{(12)}    · · ·    X_{(1(n−1))}    X_{(1n)}
X_{(21)}    X_{(22)}    · · ·    X_{(2(n−1))}    X_{(2n)}
   ⋮           ⋮                      ⋮              ⋮
X_{(n1)}    X_{(n2)}    · · ·    X_{(n(n−1))}    X_{(nn)}
It should be noted that although RSS requires identification of as many as n2
experimental or sampling units, only n of them, namely, {X(11) , . . . , X(nn) }, are
actually measured, thus making a comparison of this sampling strategy with SRS
of the same size n meaningful. Clearly the observations in the new sample X_{(11)}, X_{(22)}, \ldots, X_{(nn)}, termed by McIntyre (1952) a Ranked Set Sample (RSS), are independent but not identically distributed, and marginally, X_{(ii)} is distributed as X_{i:n}, the ith order statistic in a sample of size n from F(x).
McIntyre (1952) proposed the obviously unbiased estimator

\hat{\mu}_{rss} = \sum_{i=1}^{n} X_{(ii)}/n    . . . (1.1)

as a rival estimator of μ as opposed to \bar{X}, and a somewhat surprising result which makes RSS a serious contender is that

var(\hat{\mu}_{rss}) < var(\bar{X}) !    . . . (1.2)

In fact, Dell (1969) and Dell and Clutter (1972) observed that

var(\hat{\mu}_{rss}) = \sigma^2/n - \sum_{i=1}^{n} (\mu_{(i)} - \mu)^2/n^2,    . . . (1.3)

where μ_{(i)} = E(X_{i:n}) is the mean of the ith order statistic.
Many aspects of RSS have been studied in the literature. Takahasi and Wakimoto (1968) have shown that the relative precision (RP) of \hat{\mu}_{rss} relative to \bar{X}, defined as RP = var(\bar{X})/var(\hat{\mu}_{rss}), satisfies 1 ≤ RP ≤ (n + 1)/2, with RP = (n + 1)/2 in case the population is uniform. Patil et al. (1992) computed the expression for RP for many discrete and continuous distributions. David and Levine (1972) and Ridout and Cobby (1987) discussed the consequences of errors in ranking. For some other aspects of RSS, we refer to Muttlak and McDonald (1990a,b), Stokes (1977), Stokes and Sager (1988), Takahasi (1969, 1970), Yanagawa and Shirahata (1976), and Yanagawa and Chen (1980).
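The variance reduction in (1.2)-(1.3) is easy to see numerically. The following sketch (Python with NumPy; not part of the original paper, and the function names are illustrative only) draws repeated SRS and RSS samples of size n from a standard normal population and compares the empirical variances of \bar{X} and \hat{\mu}_{rss}.

```python
import numpy as np

rng = np.random.default_rng(0)

def srs_mean(n):
    # mean of a simple random sample of size n
    return rng.standard_normal(n).mean()

def rss_mean(n):
    # McIntyre's RSS: from the i-th independent set of n units,
    # measure only the i-th smallest, then average, as in (1.1)
    sets = np.sort(rng.standard_normal((n, n)), axis=1)
    return sets.diagonal().mean()

n, reps = 5, 20000
srs = np.array([srs_mean(n) for _ in range(reps)])
rss = np.array([rss_mean(n) for _ in range(reps)])
print("var of SRS mean:", srs.var())   # close to sigma^2/n = 0.2
print("var of RSS mean:", rss.var())   # smaller, in line with (1.2)-(1.3)
```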
It should be noted that the concept of RSS is purely nonparametric in nature
because no functional form is assumed about F (x). The object of this paper is
to investigate further improvements of \hat{\mu}_{rss} and suitable modifications of RSS for
estimation of the mean of a lognormal distribution with a known coefficient of
variation. This is exactly in the same spirit as in Sinha et al. (1992) where two
other forms of F (x), namely, normal and exponential, are assumed. Because of
a close connection between the normal and lognormal distributions, in this paper we have freely used materials from Sinha et al. (1992). The lognormal distribution is an important competitor to the exponential, gamma, or Weibull distributions as a model for nonnegative quantitative random phenomena. For example, the lognormal distribution is often applied to production data in economics, concentrations of chemical elements in geological materials, lifetimes of mechanical and electrical systems and other survival data, and the incubation periods of infectious diseases.
Before concluding this section, we note that the pdf of a lognormal distribution can be written as

f(y|\theta, \sigma) = \frac{1}{y\sigma\sqrt{2\pi}}\, e^{-\frac{(\ln y - \theta)^2}{2\sigma^2}}, \qquad y > 0,\ -\infty < \theta < \infty,\ \sigma > 0.    . . . (1.4)

Using the fact that X = ln Y is normally distributed with mean θ and variance σ², we easily get

E(Y) = e^{\theta + \frac{\sigma^2}{2}}, \qquad var(Y) = e^{2\theta} e^{\sigma^2}(e^{\sigma^2} - 1)    . . . (1.5)

so that the coefficient of variation (CV) of Y is given by

CV(Y) = \sqrt{e^{\sigma^2} - 1}.    . . . (1.6)

Thus, when CV(Y) (equivalently, σ) is known, the problem of estimation of E(Y) essentially boils down to estimation of φ(θ) = e^θ based on X_1, \ldots, X_n, where X_i = ln Y_i is normal with E(X) = θ, var(X) = σ². Throughout this paper, we have taken σ = 1 without any loss of generality, and addressed this reformulated problem.

It may be noted that, based on a SRS, the uniformly minimum variance unbiased estimator of φ(θ) is given by

\hat{\phi}_{srs}(\theta) = e^{\bar{X} - \frac{1}{2n}}    . . . (1.7)

with

var(\hat{\phi}_{srs}(\theta)) = e^{2\theta}(e^{\frac{1}{n}} - 1).    . . . (1.8)
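A quick numerical check of (1.7)-(1.8) can be done by simulation. The sketch below is not from the paper; it takes σ = 1 as in the text and uses an arbitrary θ for illustration, verifying the unbiasedness of \hat{\phi}_{srs}(\theta) and its variance formula.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 10, 50000

est = np.empty(reps)
for k in range(reps):
    x = rng.normal(theta, 1.0, size=n)           # X_i = ln Y_i ~ N(theta, 1)
    est[k] = np.exp(x.mean() - 1.0 / (2 * n))    # UMVUE of phi(theta) = e^theta, eq. (1.7)

print("mean of estimates :", est.mean())         # approximately e^theta
print("empirical variance:", est.var())
print("formula (1.8)     :", np.exp(2 * theta) * (np.exp(1.0 / n) - 1.0))
```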
2. Estimation of φ(θ) based on RSS
In this section we describe different estimators of φ(θ) based on McIntyre's original RSS, namely, (X_{(11)}, \ldots, X_{(nn)}). We begin with a general form of an unbiased estimator of θ given by

\tilde{\theta}_{rss} = \sum_{r=1}^{n} c_{r:n}\, X_{(rr)}    . . . (2.1)
where the coefficients c_{r:n}'s satisfy

\sum_{r=1}^{n} c_{r:n} = 1, \qquad \sum_{r=1}^{n} c_{r:n}\, \nu_{rr:n} = 0,    . . . (2.2)

and ν_{rr:n} is the mean of the rth order statistic of a sample of size n from a standard normal distribution. Since, by independence of the X_{(rr)}'s,

E[e^{\sum_{1}^{n} c_{r:n} X_{(rr)}}] = e^{\theta}\Big[\prod_{r=1}^{n} E\{e^{c_{r:n}(X_{(rr)} - \theta)}\}\Big] = e^{\theta}\Big[\prod_{r=1}^{n} K_{r:n}\Big],    . . . (2.3)

and

K_{r:n} = E[e^{c_{r:n}(X_{(rr)} - \theta)}] = E[e^{c_{r:n}\Phi^{-1}(U_r)} \mid U_r \sim Beta(r, n-r+1)],    . . . (2.4)

an unbiased estimator of φ(θ) is given by

\tilde{\phi}_{rss}(\theta) = \frac{e^{\sum_{r=1}^{n} c_{r:n} X_{(rr)}}}{K_n},    . . . (2.5)

where K_n = \prod_{1}^{n} K_{r:n}. In the above, Φ(·) is the standard normal CDF, and Φ^{-1}(·) is its inverse. Using the fact that

E[e^{2\tilde{\theta}_{rss}}] = e^{2\theta}\Big[\prod_{r=1}^{n} E\{e^{2c_{r:n}(X_{(rr)} - \theta)}\}\Big] = e^{2\theta}\Big[\prod_{r=1}^{n} K^{*}_{r:n}\Big],    . . . (2.6)

where

K^{*}_{r:n} = E[e^{2c_{r:n}(X_{(rr)} - \theta)}] = E[e^{2c_{r:n}\Phi^{-1}(U_r)} \mid U_r \sim Beta(r, n-r+1)],    . . . (2.7)

it follows that

var(\tilde{\phi}_{rss}(\theta)) = e^{2\theta}\Big[\frac{K^{*}_{n}}{K_{n}^{2}} - 1\Big],    . . . (2.8)

where K^{*}_{n} = \prod_{1}^{n} K^{*}_{r:n}.
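The constants K_{r:n} and K^{*}_{r:n} in (2.4) and (2.7) are one-dimensional expectations over a Beta distribution and are easy to evaluate numerically. The sketch below is not from the paper; it uses SciPy quadrature rather than the Simpson's rule described in Section 5, and the function name is illustrative. It computes K_n, K^{*}_n and the variance factor K^{*}_n/K_n² − 1 of (2.8) for an arbitrary coefficient vector c; taking c_r = 1/n gives the ordinary RSS of Section 2.1 below.

```python
import numpy as np
from scipy import integrate, stats

def K_constants(c):
    """Return (K_n, K*_n) of (2.4)-(2.7) for coefficients c = (c_1, ..., c_n)."""
    n = len(c)
    K, Kstar = 1.0, 1.0
    for r, cr in enumerate(c, start=1):
        beta = stats.beta(r, n - r + 1)          # U_r = Phi(X_{r:n}) ~ Beta(r, n-r+1)
        # E[exp(a * Phi^{-1}(U_r))] written as an integral in z = Phi^{-1}(u)
        g = lambda z, a: np.exp(a * z) * beta.pdf(stats.norm.cdf(z)) * stats.norm.pdf(z)
        K     *= integrate.quad(g, -10, 10, args=(cr,))[0]      # K_{r:n}
        Kstar *= integrate.quad(g, -10, 10, args=(2 * cr,))[0]  # K*_{r:n}
    return K, Kstar

n = 5
Kn, Kn_star = K_constants(np.full(n, 1.0 / n))   # ordinary RSS: c_r = 1/n
print(Kn, Kn_star, Kn_star / Kn**2 - 1)          # compare with the n = 5 row of Table 2
```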
We now specialize to a few specific forms of φ˜rss (θ).
2.1. Ordinary RSS. Here c_{r:n} = 1/n, and the corresponding K_{r:n} and K^{*}_{r:n} are given by

K_{r:n} = E[e^{\frac{1}{n}(X_{(rr)} - \theta)}] = E[e^{\frac{1}{n}\Phi^{-1}(U_r)} \mid U_r \sim Beta(r, n-r+1)],    . . . (2.9)

and

K^{*}_{r:n} = E[e^{\frac{2}{n}(X_{(rr)} - \theta)}] = E[e^{\frac{2}{n}\Phi^{-1}(U_r)} \mid U_r \sim Beta(r, n-r+1)].    . . . (2.10)
2.2. BLUE. Here c_{r:n} = \frac{1/v_{rr:n}}{\sum_{1}^{n} 1/v_{rr:n}} (see Sinha et al. (1992)), where v_{rr:n} is the variance of the rth order statistic of a sample of size n from a standard normal distribution, and the corresponding expressions for K_{r:n} and K^{*}_{r:n} are given by

K_{r:n} = E[e^{\frac{1/v_{rr:n}}{\sum_{1}^{n} 1/v_{rr:n}}(X_{(rr)} - \theta)}] = E[e^{\frac{1/v_{rr:n}}{\sum_{1}^{n} 1/v_{rr:n}}\Phi^{-1}(U_r)} \mid U_r \sim Beta(r, n-r+1)],    . . . (2.11)

and

K^{*}_{r:n} = E[e^{\frac{2/v_{rr:n}}{\sum_{1}^{n} 1/v_{rr:n}}(X_{(rr)} - \theta)}] = E[e^{\frac{2/v_{rr:n}}{\sum_{1}^{n} 1/v_{rr:n}}\Phi^{-1}(U_r)} \mid U_r \sim Beta(r, n-r+1)].    . . . (2.12)
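The order statistic means ν_{rr:n} and variances v_{rr:n} needed above are tabulated in Tietjen et al. (1977); when such tables are not at hand they can be computed from the same Beta representation. A sketch with illustrative names, not from the paper:

```python
import numpy as np
from scipy import integrate, stats

def order_stat_moments(n):
    """Means nu_{rr:n} and variances v_{rr:n} of standard normal order statistics."""
    nu, v = np.empty(n), np.empty(n)
    for r in range(1, n + 1):
        beta = stats.beta(r, n - r + 1)                  # U_r = Phi(X_{r:n}) ~ Beta(r, n-r+1)
        g = lambda z, k: z**k * beta.pdf(stats.norm.cdf(z)) * stats.norm.pdf(z)
        m1 = integrate.quad(g, -10, 10, args=(1,))[0]
        m2 = integrate.quad(g, -10, 10, args=(2,))[0]
        nu[r - 1], v[r - 1] = m1, m2 - m1**2
    return nu, v

nu, v = order_stat_moments(5)
c_blue = (1.0 / v) / np.sum(1.0 / v)     # BLUE weights of Section 2.2
print(c_blue)
print(c_blue @ nu)                       # approximately 0: the second condition in (2.2)
```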
2.3. BLUE based on partial RSS (PRSS). Based on a partial RSS of size m < n, namely, (X_{(11)}, \ldots, X_{(mm)}), we get (see Sinha et al. (1992) for details)

c_{r:n} = \begin{cases} \dfrac{\Big(\sum_{1}^{m} \frac{\nu_{rr:n}^2}{v_{rr:n}}\Big)\frac{1}{v_{rr:n}} - \Big(\sum_{1}^{m} \frac{\nu_{rr:n}}{v_{rr:n}}\Big)\frac{\nu_{rr:n}}{v_{rr:n}}}{\Big(\sum_{1}^{m} \frac{1}{v_{rr:n}}\Big)\Big(\sum_{1}^{m} \frac{\nu_{rr:n}^2}{v_{rr:n}}\Big) - \Big(\sum_{1}^{m} \frac{\nu_{rr:n}}{v_{rr:n}}\Big)^2} & \text{for } r \le m, \\[1ex] 0 & \text{for } r > m. \end{cases}    . . . (2.13)

The corresponding K_{r:n} and K^{*}_{r:n} are defined similarly.
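A minimal sketch of (2.13), assuming the vectors of means and variances ν_{rr:n}, v_{rr:n} are available (e.g. from the previous sketch or from Tietjen et al. (1977)); the function name is hypothetical.

```python
import numpy as np

def prss_weights(nu, v, m):
    """Coefficients c_{r:n} of (2.13) for a partial RSS of size m (zero for r > m)."""
    nu_m, v_m = nu[:m], v[:m]
    s0 = np.sum(1.0 / v_m)            # sum of 1/v over r <= m
    s1 = np.sum(nu_m / v_m)           # sum of nu/v over r <= m
    s2 = np.sum(nu_m**2 / v_m)        # sum of nu^2/v over r <= m
    c = np.zeros(len(nu))
    c[:m] = (s2 / v_m - s1 * nu_m / v_m) / (s0 * s2 - s1**2)
    return c
```

The resulting weights satisfy both conditions in (2.2); for m = n the middle sum vanishes by symmetry of the standard normal and the weights reduce to the inverse-variance weights of Section 2.2.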
2.4. BLUE based on modified partial RSS (MPRSS). Here we proceed as in Sinha et al. (1994). Our modified PRSS always begins in the middle of McIntyre's diagonal sample in Table 1 and proceeds both below and above in equal amounts. This is discussed below separately for n odd and even.

If n is odd (= 2m + 1, say), we start with the unique one in the middle, namely, X_{(m+1\,m+1)}, and keep on including (X_{(m\,m)}, X_{(m+2\,m+2)}), (X_{(m-1\,m-1)}, X_{(m+3\,m+3)}), \ldots, in pairs. For example, if n = 3, we use only X_{(22)}, and c_{2:n} = 1; if n = 5, we use either X_{(33)} and c_{3:n} = 1, or \{X_{(22)}, X_{(33)}, X_{(44)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{2}^{4} 1/v_{rr:n}} for r = 2, 3, 4; if n = 7, we use either X_{(44)} and c_{4:n} = 1, or \{X_{(33)}, X_{(44)}, X_{(55)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{3}^{5} 1/v_{rr:n}} for r = 3, 4, 5, or \{X_{(22)}, X_{(33)}, X_{(44)}, X_{(55)}, X_{(66)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{2}^{6} 1/v_{rr:n}} for r = 2, \ldots, 6; and so on.

On the other hand, if n is even (= 2m, say), we start with the unique two in the middle, namely, (X_{(m\,m)}, X_{(m+1\,m+1)}), and keep on including (X_{(m-1\,m-1)}, X_{(m+2\,m+2)}), (X_{(m-2\,m-2)}, X_{(m+3\,m+3)}), \ldots, again in pairs. Thus, if n = 4, we use \{X_{(22)}, X_{(33)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{2}^{3} 1/v_{rr:n}} for r = 2, 3; if n = 6, we use either \{X_{(33)}, X_{(44)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{3}^{4} 1/v_{rr:n}} for r = 3, 4, or \{X_{(22)}, X_{(33)}, X_{(44)}, X_{(55)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{2}^{5} 1/v_{rr:n}} for r = 2, \ldots, 5; if n = 8, we use either \{X_{(44)}, X_{(55)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{4}^{5} 1/v_{rr:n}} for r = 4, 5, or \{X_{(33)}, X_{(44)}, X_{(55)}, X_{(66)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{3}^{6} 1/v_{rr:n}} for r = 3, \ldots, 6, or \{X_{(22)}, X_{(33)}, X_{(44)}, X_{(55)}, X_{(66)}, X_{(77)}\} and c_{r:n} = \frac{1/v_{rr:n}}{\sum_{2}^{7} 1/v_{rr:n}} for r = 2, \ldots, 7; and so on.

The corresponding K_{r:n} and K^{*}_{r:n} are defined similarly.
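The index sets above follow a simple symmetric pattern around the middle of the diagonal. A small sketch (illustrative names, not from the paper; it assumes v_{rr:n} is available as before) that generates the ranks used by MPRSS and the corresponding inverse-variance weights:

```python
import numpy as np

def mprss_ranks(n, pairs):
    """Ranks used by the modified partial RSS of Section 2.4: the middle of
    McIntyre's diagonal plus `pairs` symmetric pairs around it."""
    if n % 2 == 1:                                   # odd n = 2m + 1: unique middle rank
        mid = (n + 1) // 2
        ranks = [mid] + [x for k in range(1, pairs + 1) for x in (mid - k, mid + k)]
    else:                                            # even n = 2m: the two middle ranks
        m = n // 2
        ranks = [m, m + 1] + [x for k in range(1, pairs + 1) for x in (m - k, m + 1 + k)]
    return sorted(ranks)

def mprss_weights(v, ranks):
    """Inverse-variance weights c_{r:n} over the chosen ranks (zero elsewhere)."""
    c = np.zeros(len(v))
    idx = np.array(ranks) - 1
    c[idx] = (1.0 / v[idx]) / np.sum(1.0 / v[idx])
    return c

print(mprss_ranks(7, 1))    # [3, 4, 5], as in the n = 7 example of the text
print(mprss_ranks(8, 1))    # [3, 4, 5, 6], as in the n = 8 example
```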
3. Estimation of φ(θ) based on smallest order statistics
In this section we propose an estimator of φ(θ) based on the m smallest order statistics from Table 1, namely, (X_{(11)}, \ldots, X_{(m1)}), as in Sinha et al. (1992). We begin with

\tilde{\theta}_{smallest} = \frac{1}{m}\sum_{i=1}^{m} X_{(i1)}    . . . (3.1)

and note that

E[e^{\tilde{\theta}_{smallest}}] = e^{\theta}\, E[e^{\frac{1}{m}\sum_{i=1}^{m}(X_{(i1)} - \theta)}] = e^{\theta}(L_{1:m})^m    . . . (3.2)

where

L_{1:m} = E[e^{\frac{1}{m}\Phi^{-1}(U_1)} \mid U_1 \sim Beta(1, n)].    . . . (3.3)

We, therefore, propose the unbiased estimator of φ(θ) given by

\tilde{\phi}_{smallest}(\theta) = \frac{e^{\frac{1}{m}\sum_{1}^{m} X_{(i1)}}}{(L_{1:m})^m}.    . . . (3.4)

It is easy to verify that

var(\tilde{\phi}_{smallest}(\theta)) = e^{2\theta}\Big[\frac{(L_{2:m})^m}{(L_{1:m})^{2m}} - 1\Big]    . . . (3.5)

where

L_{2:m} = E[e^{\frac{2}{m}\Phi^{-1}(U_1)} \mid U_1 \sim Beta(1, n)].    . . . (3.6)
4. Estimation of φ(θ) based on medians
In this section we propose estimators of φ(θ) based on the medians from Table
1. We separately discuss the two cases of n being odd and even.
Case 1: n = 2k + 1. Consider

\tilde{\theta}_{median} = \frac{1}{m}\sum_{i=1}^{m} X^{(i)}_{k+1:n}    . . . (4.1)

where X^{(i)}_{k+1:n} is the median from the i-th row of Table 1, and note that

E[e^{\tilde{\theta}_{median}}] = e^{\theta}\,[E(e^{\frac{1}{m}(X^{(i)}_{k+1:n} - \theta)})]^m = e^{\theta}(M_{median:m})^m    . . . (4.2)

where

M_{median:m} = E[e^{\frac{1}{m}\Phi^{-1}(U_{k+1})} \mid U_{k+1} \sim Beta(k+1, k+1)].    . . . (4.3)

We, therefore, propose the unbiased estimator of φ(θ) given by

\tilde{\phi}_{median}(\theta) = \frac{e^{\frac{1}{m}\sum_{1}^{m} X^{(i)}_{k+1:n}}}{(M_{median:m})^m}.    . . . (4.4)

It is easy to verify that

var(\tilde{\phi}_{median}(\theta)) = e^{2\theta}\Big[\frac{(M^{*}_{median:m})^m}{(M_{median:m})^{2m}} - 1\Big]    . . . (4.5)

where

M^{*}_{median:m} = E[e^{\frac{2}{m}\Phi^{-1}(U_{k+1})} \mid U_{k+1} \sim Beta(k+1, k+1)].    . . . (4.6)
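The same quadrature approach applies here. A sketch (hypothetical function name, not from the paper) of the variance factor in (4.5); the computed value should fall below the SRS benchmark e^{1/n} − 1, illustrating the dominance reported in Table 5.

```python
import numpy as np
from scipy import integrate, stats

def median_variance_factor(n, m):
    """(M*_{median:m})^m / (M_{median:m})^{2m} - 1 of (4.5) for odd n = 2k + 1."""
    k = (n - 1) // 2
    beta = stats.beta(k + 1, k + 1)          # row median: Phi^{-1}(U_{k+1}), U_{k+1} ~ Beta(k+1, k+1)
    def M(a):
        g = lambda z: np.exp(a * z) * beta.pdf(stats.norm.cdf(z)) * stats.norm.pdf(z)
        return integrate.quad(g, -10, 10)[0]
    M1, M2 = M(1.0 / m), M(2.0 / m)
    return M2**m / M1**(2 * m) - 1.0

n, m = 5, 2
print(median_variance_factor(n, m))          # variance factor for two medians when n = 5
print(np.exp(1.0 / n) - 1.0)                 # SRS benchmark, 0.22140 for n = 5
```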
Case 2: n = 2k. In this case, following Sinha et al. (1992), we consider averaging only an even number (m) of measurements from Table 1 in a very special way. This is described below for m = 2, 4.

(i) For m = 2, we consider

\tilde{\theta}_{median,2} = \frac{1}{2}[X^{(1)}_{k:n} + X^{(2)}_{k+1:n}]    . . . (4.7)

where X^{(1)}_{k:n} is from row 1 and X^{(2)}_{k+1:n} is from row 2 of Table 1. Note that

E[e^{\tilde{\theta}_{median,2}}] = E[e^{\frac{1}{2}X^{(1)}_{k:n}}]\, E[e^{\frac{1}{2}X^{(2)}_{k+1:n}}] = e^{\theta}(M_{1:2})(M_{2:2})    . . . (4.8)

where

M_{1:2} = E[e^{\frac{1}{2}\Phi^{-1}(U_k)} \mid U_k \sim Beta(k, k+1)],    . . . (4.9)

and

M_{2:2} = E[e^{\frac{1}{2}\Phi^{-1}(U_{k+1})} \mid U_{k+1} \sim Beta(k+1, k)].    . . . (4.10)

We, therefore, propose the unbiased estimator of φ(θ) given by

\tilde{\phi}_{median,2}(\theta) = \frac{e^{\tilde{\theta}_{median,2}}}{(M_{1:2})(M_{2:2})}.    . . . (4.11)
Also, it is easy to show that

var(\tilde{\phi}_{median,2}(\theta)) = e^{2\theta}\Big[\frac{(M^{*}_{1:2})(M^{*}_{2:2})}{(M_{1:2})^2(M_{2:2})^2} - 1\Big]    . . . (4.12)

where

M^{*}_{1:2} = E[e^{\Phi^{-1}(U_k)} \mid U_k \sim Beta(k, k+1)],    . . . (4.13)

and

M^{*}_{2:2} = E[e^{\Phi^{-1}(U_{k+1})} \mid U_{k+1} \sim Beta(k+1, k)].    . . . (4.14)
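For even n the constants M_{1:2}, M_{2:2}, M^{*}_{1:2}, M^{*}_{2:2} involve the Beta(k, k+1) and Beta(k+1, k) distributions of the two middle order statistics. A sketch (hypothetical function name, not from the paper) of the variance factor in (4.12):

```python
import numpy as np
from scipy import integrate, stats

def median2_variance_factor(n):
    """(M*_{1:2} M*_{2:2}) / (M_{1:2}^2 M_{2:2}^2) - 1 of (4.12) for even n = 2k."""
    k = n // 2
    def M(a, p, q):                          # E[exp(a * Phi^{-1}(U))], U ~ Beta(p, q)
        beta = stats.beta(p, q)
        g = lambda z: np.exp(a * z) * beta.pdf(stats.norm.cdf(z)) * stats.norm.pdf(z)
        return integrate.quad(g, -10, 10)[0]
    M12, M22 = M(0.5, k, k + 1), M(0.5, k + 1, k)
    M12s, M22s = M(1.0, k, k + 1), M(1.0, k + 1, k)
    return (M12s * M22s) / (M12**2 * M22**2) - 1.0

print(median2_variance_factor(10))           # compare with 0.07849 in the n = 10 part of Table 5
print(np.exp(1.0 / 10) - 1.0)                # SRS benchmark, 0.10517
```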
(ii) For m = 4, we consider

\tilde{\theta}_{median,4} = \frac{1}{4}[X^{(1)}_{k:n} + X^{(2)}_{k+1:n} + X^{(3)}_{k:n} + X^{(4)}_{k+1:n}]    . . . (4.15)

where X^{(1)}_{k:n} is from row 1, X^{(2)}_{k+1:n} from row 2, X^{(3)}_{k:n} is from row 3, and X^{(4)}_{k+1:n} is from row 4 of Table 1. We propose

\tilde{\phi}_{median,4}(\theta) = \frac{e^{\tilde{\theta}_{median,4}}}{(M_{1:4})^2(M_{2:4})^2}    . . . (4.16)

with

var(\tilde{\phi}_{median,4}(\theta)) = e^{2\theta}\Big[\frac{(M^{*}_{1:4})^2(M^{*}_{2:4})^2}{(M_{1:4})^4(M_{2:4})^4} - 1\Big]    . . . (4.17)

where

M_{1:4} = E[e^{\frac{1}{4}\Phi^{-1}(U_k)} \mid U_k \sim Beta(k, k+1)];    . . . (4.18)

M_{2:4} = E[e^{\frac{1}{4}\Phi^{-1}(U_{k+1})} \mid U_{k+1} \sim Beta(k+1, k)],    . . . (4.19)

and

M^{*}_{1:4} = E[e^{\frac{1}{2}\Phi^{-1}(U_k)} \mid U_k \sim Beta(k, k+1)] \equiv M_{1:2};    . . . (4.20)

M^{*}_{2:4} = E[e^{\frac{1}{2}\Phi^{-1}(U_{k+1})} \mid U_{k+1} \sim Beta(k+1, k)] \equiv M_{2:2}.    . . . (4.21)
Extensions to other even values of m are similar.
5. Comparison of different estimators of φ(θ)
In this section we provide a comparison of various estimators of φ(θ) for n = 5,
10, 15, 20. Since e^{2θ} is a common factor in all the variances, it is clear that whenever dominance holds, it is uniform in θ. In the following tables, we have taken e^{2θ} = 1 without any loss of generality.
TABLE 2. COMPARISON OF φ̃_srs(θ), φ̃_rss(θ), AND φ̃_blue(θ) FOR n = 5, 10, 15, 20

         φ̃_srs(θ)      φ̃_rss(θ)                             φ̃_blue(θ)
  n      e^{1/n} − 1    K_n       K*_n      K*_n/K_n² − 1    K_n       K*_n      K*_n/K_n² − 1
  5      0.22140        1.03635   1.15475   0.07516          1.03501   1.14876   0.07236
 10      0.10517        1.00979   1.04180   0.02170          1.00890   1.03814   0.01991
 15      0.06894        1.00395   1.01893   0.01093          1.00337   1.01660   0.00978
 20      0.05127        1.00156   1.01029   0.00715          1.00116   1.00868   0.00634
In Table 2, the variances of φ̃_srs(θ), φ̃_rss(θ) and φ̃_blue(θ) are given. The overwhelming uniform dominance of φ̃_rss(θ) and φ̃_blue(θ) over φ̃_srs(θ) is clear. Surprisingly enough, the performances of φ̃_rss(θ) and φ̃_blue(θ) are nearly the same.
TABLE 3. MINIMUM VALUES OF m FOR WHICH φ̃_prss(θ) AND φ̃_mprss(θ) DOMINATE φ̃_srs

         φ̃_prss(θ)                                  φ̃_mprss(θ)
  n      m    K_n       K*_n      K*_n/K_n² − 1      m    K_n       K*_n      K*_n/K_n² − 1
  5      4    1.04800   1.20749   0.09942            3    1.05177   1.22385   0.10633
 10      6    1.02872   1.12943   0.06726            2    1.03849   1.16312   0.07851
 15      7    1.03139   1.13395   0.06598            3    1.01721   1.07063   0.03470
 20      9    1.01965   1.08365   0.04228            2    1.01942   1.07999   0.03923
Table 3 presents the variances of φ˜prss (θ) and φ˜mprss (θ) for the minimum values
of m for which the desired dominance over φ˜srs (θ) holds.
TABLE 4. MINIMUM VALUES OF m FOR WHICH φ̃_smallest(θ) DOMINATES φ̃_srs

                      φ̃_smallest(θ)
  n      m    L_{1:m}   L_{2:m}   (L_{2:m})^m/(L_{1:m})^{2m} − 1
  5      3    0.69532   0.50660   0.15053
 10      4    0.68777   0.48285   0.08573
 15      5    0.71066   0.51108   0.06120
 20      6    0.73498   0.54450   0.04874
In Table 4, the variance of φ̃_smallest(θ) is given for the minimum values of m for which φ̃_smallest(θ) is uniformly better than φ̃_srs(θ). Finally, in Table 5 we provide the variances of φ̃_median(θ) for the minimum values of m which guarantee uniform dominance of φ̃_median(θ) over φ̃_srs(θ).
It is clear from the above tables that the use of appropriate variations of RSS
coupled with optimum weights, if applicable, results in much better estimators of
φ(θ) compared to the use of SRS. It is also interesting to observe that, as in the
case of estimation of a normal mean, here also the use of two medians is enough to
achieve uniform dominance over φ˜srs (θ).
TABLE 5. MINIMUM VALUES OF m FOR WHICH φ̃_median(θ) DOMINATES φ̃_srs

  (n odd)                                      φ̃_median(θ)
  n      m    M_{median:m}   M*_{median:m}    (M*_{median:m})^m/(M_{median:m})^{2m} − 1
  5      2    1.03651        1.15434          0.10039
 15      2    1.01279        1.05217          0.05217

  (n even)                                                     φ̃_median,2(θ)
  n      m    M_{1:2}    M_{2:2}    M*_{1:2}   M*_{2:2}    (M*_{1:2})(M*_{2:2})/(M_{1:2})²(M_{2:2})² − 1
 10      2    0.95841    1.08355    0.95379    1.21947     0.07849
 20      2    0.97884    1.04146    0.97673    1.10572     0.03923
Throughout this paper, the computations of the absolute constants K_n, K*_n, L_{1:m}, L_{2:m}, M_{median:m}, M*_{median:m}, M_{1:2}, M_{2:2}, M*_{1:2} and M*_{2:2} have been carried out using a standard numerical integration technique, namely, Simpson's formula (see Burden and Faires (1989)), upon dividing the interval (0, 1) into 10,000 equal subintervals. Also, we have used Tietjen et al. (1977) for values of ν_{rr:n} and v_{rr:n}.
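As an illustration of this recipe (a sketch, not the author's code; the helper names are mine), composite Simpson's rule on (0, 1) with 10,000 subintervals applied to one of the constants of Section 4:

```python
import numpy as np
from scipy import stats

def simpson_01(y):
    """Composite Simpson's rule on (0, 1) for values y on a uniform grid with an even number of subintervals."""
    h = 1.0 / (len(y) - 1)
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

def median_constant(a, n, subintervals=10_000):
    """M = E[exp(a * Phi^{-1}(U_{k+1}))], U_{k+1} ~ Beta(k+1, k+1), for odd n = 2k + 1; cf. (4.3), (4.6)."""
    k = (n - 1) // 2
    u = np.linspace(0.0, 1.0, subintervals + 1)
    y = np.zeros_like(u)
    # the Beta(k+1, k+1) density vanishes at u = 0 and u = 1, so the integrand is zero at the endpoints
    y[1:-1] = np.exp(a * stats.norm.ppf(u[1:-1])) * stats.beta(k + 1, k + 1).pdf(u[1:-1])
    return simpson_01(y)

print(median_constant(0.5, 5))    # M_{median:2} for n = 5; compare with 1.03651 in Table 5
```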
Acknowledgment. The author sincerely thanks a Co-Editor and a referee for many helpful comments.
References
Burden, R. L. and Faires, J. D. (1989). Numerical Analysis, 4th ed., PWS-KENT, Boston.
Cobby, J. M., Ridout, M. S., Bassett, P. J., and Large, R.V. (1985).
An investigation into the use of ranked set sampling on grass and grass-clover swards. Grass
and Forage Science, 40, 257-263.
David, H. A. and Levine, D. N. (1972). Ranked set sampling in the presence of judgement
error. Biometrics, 28, 553-555.
Dell, T. R. (1969). The theory of some applications of ranked set sampling. Ph.D. thesis,
University of Georgia, Athens, GA.
Dell, T. R. and Clutter, J. L. (1972). Ranked set sampling theory with order statistics
background. Biometrics, 28, 545-553.
Halls, L. S. and Dell, T. R. (1966). Trial of ranked set sampling for forage yields. Forest
Science, 12, (No. 1), 22-26.
Martin, W. L., Sharik, T. L., Oderwald, R. G., and Smith, D. W. (1980). Evaluation of ranked set
sampling for estimating shrub Oak forest. Publication No. FWS-4-80, School of Forestry
and Wildlife Resources, Virginia Polytechnic Institute and State University, Blacksburg.
McIntyre, G. A. (1952). A method for unbiased selective sampling, using ranked sets. Australian J. Agricultural Research, 3, 385-390.
Muttlak, H. A. and McDonald, L. L. (1990a). Ranked set sampling with respect to concomitant variables and with size biased probability of selection. Commun. Statist.-Theory
Meth., 19(1), 205-219.
− − − − −− (1990b). Ranked set sampling with size biased probability of selection. Biometrics,
46, 435-445.
Patil, G. P., Sinha, A. K., and Taillie, C. (1992). Ranked set sampling and ecological data
analysis. Technical Reports, Department of Statistics, Penn State University.
Ridout, M. S. and Cobby, J. M. (1987). Ranked set sampling with non-random selection of sets
and errors in ranking. Appl. Statist., 36, No.2, 145-152.
Sinha, Bimal K., Sinha, Bikas K. and Purkayastha, S. (1992). On some aspects of ranked set
sampling for estimation of normal and exponential parameters. Technical Report, University
of Maryland Baltimore County.
Sinha, B. K., Chuiv, N. N. and Wu, Z. (1994). Estimation of a Cauchy location using a ranked
set sample. Technical Report, University of Maryland Baltimore County.
Stokes, S. L. (1977). Ranked set sampling with concomitant variables. Communications in
Statistics, Theory and Methods, A6(12), 1207-1211.
Stokes, L. S. and Sager, T. (1988). Characterization of a ranked set sample with application
to estimating distribution functions. J. Amer. Statist. Assoc., 83, No. 402, 374-381.
Takahasi, K., and Wakimoto, K. (1968). On unbiased estimates of the population mean based
on the sample stratified by means of ordering. Ann. Inst. Statist. Math., 20, 1-31.
Takahasi, K. (1969). On the estimation of the population mean based on ordered samples from
an equicorrelated multivariate distribution. Ann. Inst. Statist. Math., 21, 249-255.
− − − − − (1970). Practical note on estimation of population means based on samples stratified
by means of ordering. Ann. Inst. Statist. Math., 22, 421-428.
Tietjen, G. L., Kahaner, D. K., and Beckman, R. J. (1977). Variances and covariances of the
normal order statistics for sample size 2 to 50. Selected Tables in Mathematical Statistics,
5, 1-73.
Yanagawa, T. and Shirahata, S. (1976). Ranked set sampling theory with selective probability
matrix. Australian Journal of Statistics, 18(1,2), 45-52.
Yanagawa, T. and Chen, S-H. (1980). The MG-procedure in ranked set sampling. Journal of
Statistical Planning and Inference, 4, 33-44.
Department of Statistics
Tunghai University
Taichung
Taiwan