
JOURNAL OF NETWORKS, VOL. 10, NO. 4, APRIL 2015
209
A Hybrid Approach to Indoor Sensor Area
Localization and Coverage
Mengmeng Gai, Azad Azadmanesh, Ali Rezaeian
College of Information Science & Technology
University of Nebraska-Omaha, U.S.
E-mail: {mgai, azad, arezaeian}@unomaha.edu
Abstract — This study presents a hybrid approach to
indoor object detection and tracking. A test area is
partitioned into regions that are covered by sensors placed
on the ceilings. The region location of an object is
determined by these sensors using RSSI. The region
determination is then followed by the exact localization of
the object within the region using Building Information
Modeling (BIM) and the 3D stereo image measurements. To
determine the coverage ratio of a region for better
placement of sensors, the 3D space is partitioned into 2D
planes whose coverage points are determined by a newly
developed software product called LOCOPac. Other than
indoor localization, some simulation results of sensor
distribution algorithms that are more suitable for outdoor
applications will also be presented. Based on the
experimental and simulation results, the proposed approach
to indoor localization, together with LOCOPac, has the
potential to be used in real-world applications.
Index Terms-- Wireless Sensor Networks; Building
Information Modeling (BIM); Sensor Area Localization
(SAL); Stereo Vision; Sensor Area Coverage; Simulation;
Network Lifetime.
I. INTRODUCTION
Wireless sensor networks (WSNs) have been used in
numerous military and civilian applications such as
battlefield surveillance, biological detection, and
environmental monitoring [14,15,31]. What distinguishes
these networks from ordinary ad-hoc networks is that the
nodes are small and lightweight, with limited storage and
processing power. Two of the fundamental challenges
that often arise in sensor network design and deployment
are Sensor Area Localization (SAL) [2,4,24] and Sensor
Area Coverage (SAC) [9,18,19]. SAL refers to the ability
of estimating and computing the physical positions of all
or a subset of sensor nodes in a wireless sensor network
(WSN), whereas SAC is concerned with the techniques
and mechanisms for placement of sensors in a field to
quantify how well the sensors sense (monitor) the field.
One can easily visualize that the service quality of SAL
relies heavily on SAC, i.e., without decent coverage,
localization cannot be performed effectively. Therefore,
to provide a unified and manageable approach to the SAL
and SAC problems, and at the same time to provide a
simulation environment for testing and collecting data,
this research has initiated the development of a software
package called LOcalization and COverage Package
© 2015 ACADEMY PUBLISHER
doi:10.4304/jnw.10.4.209-221
(LOCOPac). The package enables a user to visually
observe the movement of an object from one hallway to
another on a computer monitor in real time. The package
has also the capability to determine the coverage quality
of the sensors on a ceiling. To test the approach, a case
study is conducted in the Peter Kiewit Institute (PKI) of
the University of Nebraska.
Design of sensor networks is not complete without the
consideration of important secondary factors such as
energy efficiency and fault tolerance [16,19,35,37,39,40].
For this reason, LOCOPac includes the following energy
efficiency algorithms: Coverage Configuration Protocol
(CCP), Random Independent Sleeping (RIS) and Probing
Environment and Adaptive Sleeping (PEAS). To evaluate
the performance of these algorithms more realistically,
the fault tolerance aspect of sensor node redundancy and
random failures of sensors have also been simulated.
Even though the typical applications of sensor
localization in real scenarios of WSNs are in 3D terrain,
most of the solutions proposed for SAL in the past
decades have primarily emphasized two-dimensional (2D)
localization [29]. This research proposes a hybrid
approach to the 3D SAL challenge by taking advantage
of Building Information Modeling (BIM) [25,28] and 3D
image measurements. The proposed methodology
consists of two phases: region determination (RD) and
sensor node localization (SNL). RD localizes the
physical area (region) in which the object resides by
deploying a group of sensor nodes with 3D coverage.
Once the physical area is determined, SNL obtains the
accurate location of the object in the region.
The rest of the paper is organized as follows. Section
II provides some general and technical information about
the implementation aspects of coverage and localization
in 2D and 3D spaces.
Section III discusses the
methodologies employed in developing LOCOPac.
Section IV is devoted to the SAL implementation.
Section V presents some simulation examples of SAC.
Section VI concludes the paper with some comments for
future enhancements.
II. BACKGROUND
This section is partitioned into indoor sensor
localization followed by some background information
about sensor area coverage.
A. Indoor sensor area localization
The SAL solutions can be classified into two broad
categories of centralized and distributed localizations. In
the centralized localization approaches [8], sensor nodes
receive their positions from a central unit. In particular,
the central unit computes the physical location of the
sensors based on the localization data collected by the
sensors. The physical positions are then sent back to their
corresponding sensor nodes. In the distributed localization
solutions [26], sensor nodes are capable of handling the
relevant computation by themselves, without the support
of a central unit. There are already several technologies
available for SAL [1,20,21,29]. Some of the contemporary
technologies are briefly described below.
Satellite Navigation System (SNS) -- A Satellite
Navigation System (SNS), e.g., the Global Positioning
System (GPS), is a space-based navigation system that
provides location information to civil, commercial, and
military users.
Because of the line-of-sight requirement, these navigation
systems suffer from implementation issues in
environments like mountains and indoor scenarios where
obstacles may block the satellite signals [21]. In addition,
particular hardware requirements, such as antenna
receivers, impose critical limitations on the cost and
power consumption of SNS-based solutions.
Received Signal Strength Indicator (RSSI) -- RSSI-based
solutions measure the power of the received radio
signal. Based on the received signal strength and a model
of radio signal propagation, relative distance values
can be computed. However, measurements
based on RSSI are confronted with noises caused by
surrounding obstacles and link reliability issues [3,20].
Time of Arrival (TOA)/Time Difference of Arrival
(TDOA) -- TOA/TDOA-based measurements compute
the range value according to the travel time propagation
of a radio signal between a signal transmitter and a
receiver. TOA utilizes the absolute travel time between
signal units while TDOA takes advantage of the arrival
time difference, which can be directly translated into a
ranging value. This category of solutions can be
employed with different signal types such as ultrasonic,
acoustic and radio frequency. But interference can cause
inaccuracy in TOA/TDOA in some environments [26].
Angle of Arrival (AOA) -- In contrast to the previous
approaches that require some form of cooperation and
tight timing synchronization between the sender and
receiver, AOA has the advantage of locating objects in a
stealthy and passive manner, which are desirable features
for military applications. AOA is based on the capability
of sensing the direction of the signal received. AOA
requires an antenna array (base stations), which are able
to determine the compass direction of the sensor’s signal.
The collected information from the base stations is then
analyzed to determine the sensor’s location [3]. With the
help of antenna arrays, AOA-based methods can provide
more reliable measurement accuracy than the
techniques discussed above. But the drawbacks of this
solution are in the additional requirements of hardware
equipment and their deployment difficulties.
B. Sensor area coverage
Among a number of metrics used to measure the
quality of area coverage, the coverage ratio (CR) is the
ratio of covered area to the whole area. Complete area
coverage (CAC) occurs if this ratio is 1, i.e. the field is
100% covered. Otherwise, the field is said to be partially
covered, which is referred to as partial area coverage
(PAC). Another parameter of interest is the rate of
sensors deployed per square unit area called sensor
density (SD). Specifically, SD is the total number of
sensors distributed divided by the total field area. The
critical sensor density (CSD) provides an insight into the
minimal number of nodes required to reach a desired
level of coverage [37].
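As a minimal sketch of these two metrics, the coverage ratio can be estimated on a sampling grid (the pixel-counting idea used later in the paper), and the sensor density follows directly from its definition. The field size, sensor position, and grid step below are illustrative values of my own choosing, not parameters from the study:

```python
import math

def coverage_ratio(sensors, radius, side, step):
    """Grid (pixel) estimate of CR: the fraction of sample points lying within
    the sensing radius of at least one sensor."""
    covered = total = 0
    n = int(round(side / step))
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * step, (j + 0.5) * step
            total += 1
            if any(math.hypot(x - sx, y - sy) <= radius for sx, sy in sensors):
                covered += 1
    return covered / total

# One sensor of radius 5 centered in a 10 x 10 field covers pi*5^2/100 of it.
sensors = [(5.0, 5.0)]
cr = coverage_ratio(sensors, radius=5.0, side=10.0, step=0.1)
sd = len(sensors) / (10.0 * 10.0)    # sensor density SD: nodes per unit area
print(round(cr, 3), sd)              # cr is close to pi/4 ~ 0.785
```

A finer grid step tightens the estimate at the cost of more samples, mirroring the resolution trade-off discussed for LOCOPac below.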
Another parameter of interest is the failure rate λ,
which is the expected number of sensor failures per unit
of time. As sensors are often deployed in large
quantities, are cheap, and have limited power, their
failure rate becomes a significant concern. In LOCOPac,
the time to failure of a sensor is assumed to be
exponentially distributed, with the cumulative
distribution function:

F(t) = 1 − e^(−λt)    (1)
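The exponential failure model in (1) can be sketched by inverse-transform sampling; the value of λ below is an assumed one for illustration, not a parameter from the study:

```python
import math
import random

def failure_cdf(t, lam):
    """F(t) = 1 - e^(-lam*t): probability a sensor has failed by time t, per Eq. (1)."""
    return 1.0 - math.exp(-lam * t)

def sample_failure_times(n, lam, rng):
    """Draw n exponential failure times by inverse-transform sampling of F."""
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

rng = random.Random(42)
lam = 0.05                      # assumed failure rate (failures per unit time)
times = sample_failure_times(10_000, lam, rng)

# The empirical fraction failed by t should track F(t); at t = 1/lam, F = 1 - 1/e.
t = 1.0 / lam
empirical = sum(1 for x in times if x <= t) / len(times)
print(round(failure_cdf(t, lam), 3), round(empirical, 3))
```

Sampling failure times this way is how random sensor failures can be injected into a coverage simulation.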
Deployment Strategies -- The deployment strategies
[41] of sensor nodes are concerned with how the
placement of sensors in a sensor network is constructed.
Two general categories of sensor deployments are static
and dynamic deployments. Static deployment means that
the placement of sensor nodes before network operation
is independent of the network state during the lifetime of
the network. The static deployment can further be
classified into deterministic and random placements. In
deterministic deployment, the sensor locations are
preplanned for maximum performance. Random
deployment of sensors is another strategy more suitable
for remote or hostile fields. In dynamic deployment,
placement decision of nodes is based on network
changes. For instance, as nodes fail, some nodes might be
moved to other locations for better coverage.
Some of the simulation experiments presented in this
paper are based on static, random distribution of sensors,
using uniform and Poisson functions, which are more
appropriate for outdoor applications. Under uniform
distribution, N sensor nodes are distributed uniformly
over a square field of size A with sensor density of
𝜆𝑆𝐷 = 𝑁/𝐴 . In the Poisson model of deployment, the
number of nodes deployed follows Poisson distribution
with parameter 𝜆𝑆𝐷 . Similar to uniform distribution, each
sensor has equal likelihood of being at any location in the
region, but the number of sensors to be distributed is
Poisson and changes from one deployment to the other.
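The two random deployment models above can be sketched as follows; the field side, node count, and seed are illustrative values only (here λ_SD = 100 / 50² by the definition above):

```python
import math
import random

def deploy_uniform(n, side, rng):
    """Uniform model: exactly n sensors, each equally likely anywhere in the field."""
    return [(rng.uniform(0.0, side), rng.uniform(0.0, side)) for _ in range(n)]

def deploy_poisson(mean_n, side, rng):
    """Poisson model: the node count itself is Poisson with the given mean;
    positions are then uniform, as in the text."""
    # Knuth's method for sampling a Poisson count (adequate for moderate means).
    limit, k, p = math.exp(-mean_n), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return deploy_uniform(k - 1, side, rng)

rng = random.Random(1)
a = deploy_uniform(100, 50.0, rng)    # always exactly 100 nodes
b = deploy_poisson(100, 50.0, rng)    # node count varies from deployment to deployment
print(len(a), len(b))
```

Running either deployment repeatedly and measuring coverage each time is the basic loop of the outdoor simulations described later.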
For indoor applications, deterministic sensor
placements are more appropriate. In LOCOPac, the 3D
coverage is converted into 2D coverage by simply
partitioning the spheres formed by the signal radius of
sensors into circles, i.e., slicing the spheres into 2D
planes (layers). The 2D coverage simulation can then be
used to find the overall 3D coverage by counting the
number of pixels that fall into the sliced circles projected
onto the 2D planes. To make the explanation more
manageable, the following describes the volume coverage
approximation of a sensor that is placed on the center of a
room ceiling. Fig. 1 shows a simplistic drawing of the
room partitioned into three layers (𝐿0 , 𝐿1 , 𝐿2 ), where the
sensor is in layer 0. The sensor has half of its coverage
above the ceiling and the other half below the ceiling. As
the sensor radius could be larger than the room height, the
end cap of the sphere could be above or below the floor.
A = the spherical volume below the ceiling
Ai = the volume of the approximate cylinder formed between the circles of two adjacent layers
d = the distance between two adjacent layers
S = the sensor signal radius
ri = the circle radius in layer Li
n = the number of layers (0 … n−1)
Figure 1. A simplistic formation of circles projected onto layers of a
region by a sensor placed on the ceiling.
The area of a circle at layer Li is πri². Therefore, the
approximate volume formed between the circles of two
adjacent layers Li−1 and Li, which are a distance d apart,
is Ai = dπri². Also, S² = (id)² + ri². Consequently, the
approximate total volume between layers L0 and Ln−1 is:
A = Σᵢ₌₁ⁿ⁻¹ dπri² = dπ Σᵢ₌₁ⁿ⁻¹ (S² − (id)²)
  = dπ [(n − 1)S² − d² Σᵢ₌₁ⁿ⁻¹ i²]
  = dπ [(n − 1)S² − d² (n − 1)n(2n − 1)/6]
  = (n − 1)dπ [S² − d² n(2n − 1)/6]
When S = (n − 1)d, A becomes:

A = Sπ [S² − (S + d)(2S + d)/6]
  = Sπ [(4S² − d(3S + d))/6]
  = Sπ [2S²/3 − d(3S + d)/6]    (2)
To find the exact volume, d must approach zero, i.e.:

lim(d→0) Sπ [2S²/3 − d(3S + d)/6] = (2/3)πS³    (3)
This is indeed half the volume of a sphere. Since A in
(2) is the sum of the approximate cylinders formed
between the layers, the approximate coverage area of the
2D planes can be obtained by dividing (2) by d.
Additionally, since the sensor radius could be larger than
the room height, the volume of the spherical cap must be
subtracted from (2), producing:
A = Sπ [2S²/3 − d(3S + d)/6] − (πhc/6)(3rc² + hc²)    (4)
where rc and hc are the radius of the cap base and the
height of the cap from the base, respectively.
To convert the 3D coverage into 2D coverage with
good approximation, the area of the circles projected on
each layer depends on the number of layers in the room,
which is hr/d + 1, where hr is the room (region) height.
Since hr is fixed, the accuracy of the volume in (4)
depends on how small d is. The smaller d is, the more
accurate (4) would be. In general, d defines the resolution
of the matrix formed to represent a region. Since the grid
approach is used to determine coverage, i.e., whether a
pixel is covered by a circle, d needs to be small enough to
provide a good approximation of the region coverage.
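The convergence of the layered approximation to the exact half-sphere volume in (3) can be checked numerically. The sketch below assumes S is an integer multiple of d, as in the derivation:

```python
import math

def layered_volume(S, d):
    """Sum of cylindrical slabs A = sum d*pi*ri^2 with ri^2 = S^2 - (i*d)^2,
    approximating the half-sphere of radius S (assumes S is a multiple of d)."""
    n = int(round(S / d)) + 1          # layers 0 .. n-1, with S = (n-1)*d
    return sum(d * math.pi * (S * S - (i * d) ** 2) for i in range(1, n))

S = 3.0
exact = (2.0 / 3.0) * math.pi * S ** 3     # Eq. (3): half the sphere volume
for d in (1.0, 0.1, 0.01):
    print(d, round(exact - layered_volume(S, d), 4))   # error shrinks with d
```

Per (2), the approximation undershoots the exact volume by Sπ·d(3S + d)/6, so the error decreases roughly linearly in d, matching the resolution argument above.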
C. Energy Efficiency Algorithms
Since sensors often have limited energy storage,
many scheduling algorithms with various operation
modes for CAC and PAC have been proposed to control
the sleeping patterns of the sensors [37].
Coverage Configuration Protocol (CCP) [39] decides
on the sleep pattern of redundant nodes while keeping the
coverage at the desired level. CCP is a distributed
protocol used for either CAC or PAC. Each node is
initially active and runs an eligibility algorithm
independently to determine its sleep eligibility. A node S
is eligible to go to sleep if all the points inside its sensing
range are already covered by other active nodes in its
local neighborhood. Node S is covered by its neighbors if,
for each intersection point (crossing point) p between two
sensor nodes A and B that lies inside the sensing range of
S, p is within the sensing range of some other sensor node
C. If so, S is redundant and thus eligible to go to sleep.
CCP can be extended to k-coverage redundancy easily. A
crossing point is k-covered if it is covered by at least k
distinct sensors. A sensor is eligible to go to sleep if
all the crossing points within its sensing radius are
covered by at least k distinct sensors, excluding itself.
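The crossing-point eligibility test can be sketched as below. This is a simplified geometric version of my own: the full CCP protocol also exchanges HELLO beacons, handles nodes whose range contains no crossings (which should not sleep), and treats field-boundary cases; none of that is shown here.

```python
import math

def crossings(a, b, r):
    """Intersection (crossing) points of two sensing circles of equal radius r."""
    (ax, ay), (bx, by) = a, b
    d = math.hypot(bx - ax, by - ay)
    if d == 0 or d > 2 * r:
        return []
    h = math.sqrt(r * r - (d / 2) ** 2)
    mx, my = (ax + bx) / 2, (ay + by) / 2
    ux, uy = (bx - ax) / d, (by - ay) / d
    return [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]

def eligible_to_sleep(s, neighbors, r, k=1):
    """Node s may sleep if every crossing point of two neighbors that lies
    inside s's sensing range is also strictly inside the range of at least k
    other sensors (the crossing pair itself is excluded)."""
    eps = 1e-9
    for i in range(len(neighbors)):
        for j in range(i + 1, len(neighbors)):
            for px, py in crossings(neighbors[i], neighbors[j], r):
                if math.hypot(px - s[0], py - s[1]) > r + eps:
                    continue            # crossing outside s's range: irrelevant
                others = [q for t, q in enumerate(neighbors)
                          if t not in (i, j)
                          and math.hypot(px - q[0], py - q[1]) < r - eps]
                if len(others) < k:
                    return False        # an uncovered crossing: s must stay awake
    return True

# Two neighbors alone cannot cover s's disk; adding two more makes s redundant.
print(eligible_to_sleep((0.0, 0.0), [(-0.5, 0.0), (0.5, 0.0)], 1.0))            # False
print(eligible_to_sleep((0.0, 0.0),
                        [(-0.5, 0.0), (0.5, 0.0), (0.0, 0.5), (0.0, -0.5)], 1.0))  # True
```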
Random Independent Sleeping (RIS) [16,37] is a
simple algorithm where each node decides independently
for its sleep schedule and does not need to know any
information from its neighbors such as their locations or
distances. RIS can be implemented with various flavors.
A simple implementation is to divide the total running
time into time slots of equal length 𝑇𝑠𝑙𝑜𝑡 partitioned into
ACTIVE and SLEEP periods. The durations of the active and
sleep periods are pTslot and (1 − p)Tslot, respectively,
where 0 ≤ p ≤ 1 is the active probability of a sensor,
globally known by all sensors.
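The slot scheme above, plus one common randomized flavor, can be sketched as follows; the per-node seeding is purely an illustration of "independent, no neighbor knowledge needed", not part of the original algorithm:

```python
import random

def duty_cycle_active(t, t_slot, p):
    """Slot flavor from the text: within each slot a node is ACTIVE for the
    first p*t_slot time units and sleeps for the remaining (1 - p)*t_slot."""
    return (t % t_slot) < p * t_slot

def ris_is_active(node_id, slot, p):
    """Randomized flavor: each node independently flips a biased coin per slot.
    Seeding by (node_id, slot) only makes the run reproducible; no messages
    or neighbor locations are needed."""
    return random.Random(node_id * 1_000_003 + slot).random() < p

# The long-run fraction of slots a node is active approaches p.
frac = sum(ris_is_active(7, s, 0.3) for s in range(10_000)) / 10_000
print(round(frac, 3))
```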
Probing Environment and Adaptive Sleeping (PEAS)
[37,40] extends the network lifetime by inactivating the
redundant nodes. A node is always in one of the
following states: SLEEP, PROBE, or ACTIVE. Initially
all sensors are in the SLEEP state. A node awakes and
transits to the PROBE state according to its probing rate.
The probe rate provides the frequency with which a node
wakes up to decide on becoming active or going back to
sleep. Once in the PROBE state, the node broadcasts a
beacon signal and waits for a reply from its neighbors. If
a reply is received, the node will go back to sleep for
another time duration. Otherwise, it will stay active.
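The PEAS state machine above can be sketched per decision step; the wake probability standing in for the probing rate, and the assumption about when probe replies are heard, are mine for illustration:

```python
import random

SLEEP, PROBE, ACTIVE = "SLEEP", "PROBE", "ACTIVE"

def peas_step(state, heard_reply, rng, wake_prob=0.1):
    """One decision step of a PEAS-like node.
    SLEEP:  wake into PROBE with a probability standing in for the probing rate.
    PROBE:  an answered beacon means an active neighbor exists -> back to SLEEP;
            silence means this node is needed -> become ACTIVE.
    ACTIVE: stays active."""
    if state == SLEEP:
        return PROBE if rng.random() < wake_prob else SLEEP
    if state == PROBE:
        return SLEEP if heard_reply else ACTIVE
    return ACTIVE

rng = random.Random(0)
state = SLEEP
for step in range(200):
    # assume probe replies are heard only while a neighbor is alive (first 50 steps)
    state = peas_step(state, heard_reply=(step < 50), rng=rng)
print(state)
```

Once the assumed neighbor dies, the first probe that goes unanswered promotes the node to ACTIVE, which is how PEAS replaces failed nodes and extends network lifetime.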
III. LOCOPac
LOCOPac, developed in Java, consists of two
main modules: SAC simulation and SAL implementation.
The SAC simulation module contains two components:
deployment and simulator. The deployment component
contains the deployment algorithms controlling the
manner in which the sensors are distributed or the
management of sensor activities. The simulator
component, based on a chosen deployment algorithm,
simulates SAC, either in 2D or 3D space under various
scenarios. On the other hand, the SAL implementation
module, which is integrated with the SAC simulation
module, provides the user interface for the simulation of
object localization and more importantly the visualization
of object tracking in real time.
The interface is flexible enough to allow manual
intervention in distributing sensors. For instance, a user
can move the mouse pointer to insert, move or delete
sensors in the field and immediately observe their effect
on the area coverage. As an example, Fig. 2 shows the
interface for the 3D environment with some sensors
distributed by clicking on the 3D field on the left side. The
right side shows some information about the sensors and
their coverage for a specific layer, in this example layer
0 (the ceiling). A user can change the layer by adjusting the
slider under “Sensor coverage layers” from left to right.
The user can also change the total number of layers.
The selections shown on the right side are in the form of
drop-down menus with more details about each selection.
The top of
the figure shows some general options such as zooming
in or out of the field, or moving the field to the right or
left for better observation. The bottom panel is used to
show the console where the user inputs are shown along
with some of the simulation results such as the points
covered in a field or the k-coverage of the points.
Figure 3. A combination of 3D and 2D for a specific layer.
Fig. 4 shows the five choices (in blue) implemented so
far for 2D, from which the user can choose. For
example, if the user is concerned with energy efficiency,
the simulation results can be affected by the distribution
algorithm of the sensors and by whether the
field coverage is partial or complete.
Figure 2. An example of a 2D representation of 3D environment
with sensors projected on a specific layer.
Fig. 3 is another example of 3D environment. In this
figure, the field is shown in two different ways, where the
left side shows the 3D view of a region in a cube form
with sensors on the ceiling. The right half shows the
vertical view of the same sensors on a specific layer
(layer 0 in this example). The middle bar can be moved
to make each partition in the center window larger or
smaller. The bottom panel shows some of the inputs,
based upon the selections on the right side, and the output
results, such as the accuracy reached and the coverage
attained for the total field.
From the left side of the figure, the user can also
choose the 2D option. Choosing this option will treat the
field as a 2D plane without any reference to 3D
characteristics. The appropriate selection choices appear
on the right side, which the user can use to set the field
size, select the deployment algorithms, and specify
whether the coverage is to be partial or complete.
Figure 4. The 2D choices for SAC and the associated options.
Fig. 5 shows an instance of SAL user interface, where
the center window shows the test space that is partitioned
into regions according to the physical structure of the test
space. Clicking on the start button under the SAL option
will cause the particular menus for SAL on the right side
of the figure to appear. These menus include field
settings, such as the width and height of the field. Below
the field settings is the interface for SAC simulation, where one can
set up the parameters such as sensor size, sensor radius,
and the simulation algorithm. There is also a particular
interface panel for sensor setting below the simulation
panel, which provides the functions to add, remove, or
edit sensor nodes in the simulation space.
The real time interface for SAL implementation is
provided under the test panel, which includes parameter
setting and initializations for the Kinect stereo camera
[36,38] and ZigBee sensors. More details about the
sensors used in SAL will be discussed in a later section.
The PKI second floor is considered the primary test
environment due to its spatial arrangement and sufficient
3D building information. The 3D spatial information (SI)
for object localization is then obtained by converting the
BIM image to 3D coordinates. The SI model is generated
by sampling every surface of the BIM image via the
method proposed by Mark et al. [25] and integrated in the
open-source Point Cloud Library (PCL) [30]. In
particular, the SI model provides a set of 3D spatial
points with specific coordinate values (x, y, z).
Figure 5. LOCOPac interfaces for SAL.
IV. SAL IMPLEMENTATION
Stereo vision technology [10,22] has attracted ample
attention and has shown great potential for ubiquitous
sensor localization, as it can accurately obtain 3D
distance parameters. A stereo vision system is equipped
with at least two cameras displaced horizontally from one
another in a manner of human binocular vision. Two
images from the cameras are compared in order to obtain
the depth information of their common points. Generally,
several pre-requisite processes are necessary such as
camera calibration to remove image distortions and
image rectification in order to project two images back to
a common plane for computation.
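After calibration and rectification, depth recovery reduces to the classic relation Z = f·B/d between focal length, baseline, and disparity. A minimal sketch, with purely illustrative numbers (not the Kinect's actual calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo relation Z = f*B/d: depth is inversely proportional to
    the horizontal pixel offset of the same point in the two images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad correspondence")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 575 px focal length, 7.5 cm baseline, 20 px disparity.
z = depth_from_disparity(575.0, 0.075, 20.0)
print(round(z, 3))   # ~2.156 m
```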
Based on the principles of the stereo vision technique, a
Distance Measurement Sensor (DMS) was built with a
stereo camera called Kinect, manufactured by
Microsoft [36,38], and a ZigBee gateway. The DMS
behaves as the object to be localized. The purpose of the
Kinect is to provide the 3D depth information of the
surrounding areas so that the DMS can be localized. The
ZigBee gateway is used to communicate with the pre-installed ZigBee sensor nodes in the surrounding areas in
order to determine the region location of the DMS.
On the other hand, BIM provides a new perspective on
viewing the environment (e.g. a building) in 3D images.
It has a wide range of applications in the generation and
management of virtual representations of physical
facilities. In this study, BIM is used to generate 3D
spatial information (x, y, and z coordinates) for sensor
localization. As shown in Fig. 6, the working process of
the proposed methodology primarily consists of three
phases: 3D database generation, local distance
measurement, and global sensor localization.
In Fig. 6, before the 3D coordinates of PKI¹ (Fig. 7)
can be generated and stored in a database, its image as a
3D BIM model is needed. Generally, the BIM model is
chiefly utilized to visualize the design and functional
procedures in a 3D view.
¹ The realistic 3D BIM model of PKI (Fig. 7) is provided by Kiewit
Corporation.
Figure 6. System architecture for SAL.
Figure 7. 3D bird's-eye view of the PKI building.
During the phase of local distance measurement, DMS
will measure its 3D distance values from its surrounding
working space. Here “local” means the process of
distance measurement is implemented based on the
coordinate system of DMS. Using the SI model as a
reference, the DMS distance measurements will be
converted to global coordinates. The relationship between
the local and global coordinate system will be discussed
in the next section.
The 3D spatial information of the test space is
partitioned into sections called regions. For instance, a
region could be a room in the PKI building. By using the
ZigBee-based wireless sensor nodes that are pre-installed
in the regions, the responsibility of the global region
localization phase is to determine the region where the
mobile target node, i.e. DMS, resides. The traditional
RSSI-based ranging measurement is used for this
purpose. This information is then combined with the
results of the local distance measurement phase to determine the actual
location of DMS. Recall that RSSI measurements may
not be accurate because of the noises created by the
surrounding obstacles. Therefore, once the approximate
location of DMS is identified, local distance information
will be used to determine the exact location of DMS.
A. The Coordinate Systems
Four levels of coordinate systems are defined, as
shown in Fig. 8a: Geographic Coordinate System (GC) as
the level I coordinate system,
the local spatial
information coordinate (LC-SI) system as level II, local
DMS coordinates (LC-DMS) as level III, and the local
coordinates in the Kinect cameras (LC-Kinect) as the
level IV coordinate system. Each coordinate system has
its own origin O(0, 0, 0) in 3D space. Accordingly, a 3D
point can be described in four different coordinate values,
which can be converted from one coordinate system a to
a corresponding point in a different coordinate system b
using a transformation matrix Ta→b. For example, Fig. 8b
shows a point in the third coordinate system that is being
converted to its corresponding point in the second
coordinate system using the transformation matrix T3→2.
Transformation matrices are described later in the
section.
Figure 8. (a) The coordinate systems and their relationships; (b) an
example showing the transformation of a coordinate point from one
system to another.
Level I: Geographic Coordinate System (GC) – The
GC system refers to the coordinate system that the whole
test space belongs to. Since the global geographic
coordinate values of the PKI building (Fig. 7) are
41.2477208 degree (latitude), -96.0154971 degree
(longitude), and 316 m (altitude), the origin O1(0, 0, 0) of
the GC system can be mapped to this geographic point.
As a result, each point in the PKI building can be
endowed with real geographic values on the earth.
Level II: 3D Local Spatial Information Coordinate
System (LC-SI) -- The local coordinate system in 3D
spatial information model aims at providing the
coordinates for the test space (PKI second floor). LC-SI
system places its origin O2(0, 0, 0) at the
bottom-left corner of the PKI second floor. This origin,
i.e. O2(0, 0, 0) of the LC-SI system, is the origin
O1 of the GC system translated by 15.5 meters along the
x-axis, 27.9 meters along the y-axis, and 39.1 meters
along the z-axis.
Level III: Local Coordinate System in DMS (LC-DMS)
-- The DMS local coordinate system serves as the base
coordinate system for the functional implementations of
DMS. The LC-DMS system differs from the GC and
LC-SI systems in that it is mobile while the other
coordinate systems are fixed. Correspondingly, the
process of calculating the transformation matrix for
this system is also different from the other coordinate
system transformations because of the DMS mobility. In
this scenario, T3→x, where x is either 2 or 4, involves
at least two transformations: translation by a distance d
and rotation by an angle θ about the x, y, or z axis, which are
explained below. These two transformation matrices
along with the spatial relationship between the DMS and
the SI model will be used to determine the DMS’s
position in the GC system.
Level IV: Local Coordinate System in Kinect (LC-Kinect)
-- The local coordinate system in the Kinect
provides the pixel positions in 2D images obtained by the
image cameras and 3D depth information corresponding
to the 2D images. The Kinect [36] is an RGB camera
equipped with a horizontal sensor bar connected to a
small motorized base. The base can be positioned
lengthwise above or below the display. The middle point
of the horizontal sensor is the origin coordinate point for
the LC-Kinect system.
The notion of multiple coordinate systems is not new.
For example, multiple coordinate systems are normally
used in robotics to solve different problems. For the
purpose of this study, a separate spatial correlation exists
between two adjacent coordinate systems with the
transformation matrix Ta→b that allows the points in
coordinate system a to be converted to their
corresponding neighbor points in a coordinate system b.
For this conversion to take place, one needs to obtain two
matrices: translation and rotation. A translation matrix
simply captures the distance between the two sets of
coordinate point values between the two systems, so that
the coordinate values in one system can be adjusted
correctly as they are moved to the other system. Since the
axes of the two coordinate systems may not be in parallel,
the rotation matrix captures the rotation degree along
each axis of system a so that it becomes aligned with the
corresponding axis in system b. As these rotations take
place, the coordinate values are adjusted accordingly to
keep pace with the rotations.
Fig. 8b shows a 3D point in coordinate system O3
(level III) whose coordinate values are A: (Oax, Oay, Oaz).
Fig. 8b also shows the corresponding 3D point B:
(Obx, Oby, Obz) in the coordinate system O2 (level II).
Assume the angles of rotation about the axes x, y, and z
of the coordinate system O2 are θx, θy, and θz,
respectively. Further assume that the translations along
the x, y, and z axes are dx, dy, and dz, respectively. Then
the translation matrix B1 and the rotation matrices B2,
B3, and B4 about the x, y, and z axes, respectively, are as
follows [18]:
1
𝐵1 = [0
0
0
0
1
0
0
0 𝑑𝑥
𝑐𝑜𝑠𝜃𝑥
0 𝑑𝑦
] , 𝐵2 = [ 𝑠𝑖𝑛𝜃𝑥
0
1 𝑑𝑧
0
0 1
−𝑠𝑖𝑛𝜃𝑥
𝑐𝑜𝑠𝜃𝑥
0
0
0
0
1
0
0
0],
0
1
\[
B_3=\begin{bmatrix}\cos\theta_y&0&\sin\theta_y&0\\0&1&0&0\\-\sin\theta_y&0&\cos\theta_y&0\\0&0&0&1\end{bmatrix},\qquad
B_4=\begin{bmatrix}\cos\theta_z&-\sin\theta_z&0&0\\\sin\theta_z&\cos\theta_z&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}
\]
The transformation matrix is then \(T_{a\to b} = B_2 B_3 B_4 B_1\).
Finally, the point A in Fig. 8b is converted to the
corresponding point B by:
\[
\begin{bmatrix}O_{bx}\\O_{by}\\O_{bz}\\1\end{bmatrix}
= T_{a\to b}
\begin{bmatrix}O_{ax}\\O_{ay}\\O_{az}\\1\end{bmatrix}
\]
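The chain of transformations above can be sketched in a few lines of code. This is an illustrative sketch only; the function and variable names are ours, and the angles and offsets are placeholders rather than values from the experiments:

```python
import numpy as np

def translation(dx, dy, dz):
    # B1: homogeneous translation matrix
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t

def rotation_x(a):
    # B2: rotation about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s, c, 0],
                     [0, 0, 0, 1]])

def rotation_y(a):
    # B3: rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0],
                     [0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [0, 0, 0, 1]])

def rotation_z(a):
    # B4: rotation about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0],
                     [s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

def transform(point_a, angles, offsets):
    # T_{a->b} = B2 B3 B4 B1, applied to the point in homogeneous form
    tx, ty, tz = angles
    t = rotation_x(tx) @ rotation_y(ty) @ rotation_z(tz) @ translation(*offsets)
    return (t @ np.append(point_a, 1.0))[:3]
```

With all angles zero the transform reduces to a pure translation, which is a quick sanity check on the matrix order.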
B. Region Determination
Fig. 9 shows the test space on the second floor of the PKI
building, which consists of a hallway and several rooms.
Separate regions in this test space are pre-defined based
on the spatial range of the PKI coordinate values. For
example, a coordinate point in Region#1 is defined as
𝑥 ∈ {𝑥1 , … , 𝑥2 } , 𝑦 ∈ {𝑦1 , … , 𝑦2 } , and 𝑧 ∈ {𝑧1 , … , 𝑧2 } ,
where the pairs [𝑥1 , 𝑥2 ], [𝑦1 , 𝑦2 ], and [𝑧1 , 𝑧2 ] are constant
threshold values which determine the range values
spanned along each axis in Region#1.
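A region lookup of this kind can be sketched as a simple threshold test. The region table below is hypothetical; the actual thresholds would come from the spatial ranges of the PKI coordinate values:

```python
# Hypothetical region table: region id -> ((x1, x2), (y1, y2), (z1, z2)).
REGIONS = {
    1: ((0.0, 5.0), (0.0, 4.0), (0.0, 3.0)),
    2: ((5.0, 10.0), (0.0, 4.0), (0.0, 3.0)),
}

def region_of(x, y, z):
    """Return the id of the region whose thresholds contain the point, else None."""
    for rid, ((x1, x2), (y1, y2), (z1, z2)) in REGIONS.items():
        if x1 <= x <= x2 and y1 <= y <= y2 and z1 <= z <= z2:
            return rid
    return None
```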
As shown in Fig. 6, the overall object global
localization receives inputs from Region Determination
(RD) and Sensor Node Localization (SNL) to localize
DMS. The RD process refers to the estimation of the
physical area (region) in which the DMS is located. The
implementation of the RD process relies on the ZigBee
sensors installed in the working space, whose coordinates
are known in advance.
The MEMSIC professional kit for WSN development
was used to implement the experiments in the region
determination phase and to validate the sensor
localization system introduced in the previous sections.
The MEMSIC kit is mainly equipped with ZigBee sensor
devices, a server gateway, and user interfaces. In
particular, the sensor devices can perform low-power
wireless sensor network measurements including
temperature, humidity, barometric pressure, acceleration,
and ambient light. In addition to communicating with the
server (PC) through the USB interface, the gateway
collects data from the ZigBee devices. The gateway also
gathers RSSI values from the sensor devices in order to
estimate their corresponding distances from the DMS, so
that its region can be determined. Compared to other
alternatives such as AOA and TDOA, the cost of an
RSSI-based system is generally lower, and the
measurements and calculations involved in RSSI are
simpler [3,11,29]. Fig. 10 illustrates the high-level
configuration of the ZigBee sensor network, which is
composed of a gateway base station attached to the DMS
and a number of ZigBee devices.
Figure 9. (a) The 3D SI model of the PKI second floor and the test
area; (b) The pre-defined regions of the test area.
The following introduces the process of RSSI-based
distance estimation for the sensor devices. Because the
gateway attached to the DMS might receive signals from
many sensors in the test space, and since the RSSI
signals used for distance measurement may be inaccurate
due to the building construction materials and obstacles,
the sensors used for region determination are selected
based on whether their measured signal strengths fall
within a specified range.
Specifically, the signal strengths of sensors at various
distances up to about 4 meters were obtained and
physically validated for accuracy. MoteView, which
came with the development kit, was used to record the
measured signal strengths. In particular, a message can be sent by
the gateway asking the sensors to send a reply message to
determine their signal strengths. Fig. 11 reflects the
relationship between the measured distances and their
corresponding RSSI proportional values. For each signal
strength received, MoteView provides a corresponding
proportional value. Fig. 11 further shows the linear best
fit of the RSSI proportional values versus the measured
distances, which is dis = -7.3 sig + 460.0, where dis
shows the distance and sig is the RSSI proportional value
received by the gateway. The approximate range of (≈0
… ≈60) is used as the proportional values to decide
which sensors to select. The corresponding distances are
then used to determine the DMS region using the
trilateration method. The process of obtaining Fig. 11
follows the concept of location fingerprinting [1,7,17],
where the received signal strength is compared against a
radio map of signal strengths for various locations. The
advantage of the radio map, which is constructed off-line,
is that the signal strengths measured are customized
toward a particular environment. Therefore, more trust
can be placed on the accuracy of the measured distances
dis. The other advantage of dis is that it reduces the false
positives caused by sensors whose signals reach into
regions other than their own.
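The distance estimation and sensor selection described above can be sketched as follows; the best-fit coefficients come from Fig. 11, while the sensor IDs, readings, and the selection-range defaults are made up for illustration:

```python
def rssi_to_distance(sig):
    # Linear best fit from the radio map (Fig. 11): dis = -7.3 * sig + 460.0
    return -7.3 * sig + 460.0

def select_sensors(readings, lo=0.0, hi=60.0):
    """Keep only sensors whose RSSI proportional value falls in the trusted
    range, and return their estimated distances for the trilateration step."""
    return {sid: rssi_to_distance(sig)
            for sid, sig in readings.items() if lo <= sig <= hi}
```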
C. Sensor Node Localization (SNL)
Once the region is determined, SNL is used to localize
the DMS accurately. Theoretically, a cloud of x-y-z point
values for a 360-degree view is expected to match entirely
the corresponding point values provided by
the 3D SI model (Fig. 6). The conversion from
the DMS’s local coordinate system to the digital image
pixels that are matched against the SI model is the key to
object localization.
The 3D spatial data, i.e. the coordinates, are measured
based on the common coordinates of the images with
depth information captured by the Kinect camera and the
DMS’s coordinate system. The set of points in each
image is then compared against the set of coordinates in
the 3D SI model. The Iterative Closest Point
(ICP) algorithm [5] provided by the Point Cloud Library is
employed to implement the point matching process. Also,
the distances are determined using the stereo vision
technique [10,22]. Consequently, the coordinates of
references in multiple images along with their
corresponding distances are combined using trilateration
to determine the location of DMS.
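The trilateration step can be sketched as a linearized least-squares solve. This is a generic formulation rather than the system's exact implementation; subtracting the first range equation from the others turns the sphere-intersection problem into a linear system:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares trilateration: anchors is an (n, 3) array of known
    reference coordinates, distances the measured ranges to each anchor."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linearize by subtracting the first sphere equation from the rest:
    # 2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    a = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(a, b, rcond=None)
    return pos
```

With four or more non-coplanar references the system is fully determined, and extra references simply over-determine the least-squares fit.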
Fig. 12 is an example of the 3D point values obtained
by DMS. As the DMS moves (Fig. 13), it is able to obtain
its accurate position by matching the detected coordinate
values against the corresponding point values of
the SI model for the region determined by the pre-installed sensors.
Figure 10. Sensor network and DMS in test space.
Figure 11. The linear best fit of distance as a function of RSSI.
Figure 12. 3D point cloud model with x-y-z values obtained by DMS.
As an example, Fig. 14 shows the overall results of the
real-time SAL implementation, which include the
determined region and the localization of the target
sensor node (i.e. DMS), shown as a yellow dot. Recall
that accurate localization of DMS requires the assistance
of pre-installed ZigBee sensors to determine the region.
The gateway installed in DMS keeps detecting the
ZigBee sensors whose assigned identification values are
mapped against specific regions. Once the region is
identified, its borderline (shown in red in Fig. 14) starts
flashing. The Kinect attached to DMS is then utilized to
measure the 3D depth information between DMS and the
surrounding area of the region, which will lead to the
precise location of DMS.
Figure 13. Measured distances by DMS in the determined region.
Figure 15. Relationship between number of sensors and coverage
ratio.
TABLE I. RELATIONSHIP BETWEEN FIELD SIZES AND THEIR
CORRESPONDING COVERAGE RATIO
Figure 14. Determined region and localized target sensor.
V. SAC SIMULATION RESULTS
To show the benefits of LOCOPac, this section
provides some SAC simulation results for uniform
distribution and energy efficiency in 2D, followed by
some 3D coverage examples showing what the
coverage would be when placing the ZigBee sensors on
the ceiling of a room. The graphs shown in the following are
obtained by using the export option of LOCOPac to
export the results to Excel, or by using the internal graph
capability option of LOCOPac. Also, although some
aspects of fault tolerance are covered in the following
examples, the reader is referred to [32] for more details
on approaches and simulation results for fault tolerance
and security options.
A. Uniform deployment
Under the uniform deployment, Fig. 15 shows the
relationship between the number of deployed sensors and
CR. The sensing radius is set at 20 m. The number of
deployed sensors is varied from 100 to 1800 in steps of
100, and the field sizes attempted are 200, 400,
600, and 800 𝑚2.
Each coverage ratio in the graph is the average
coverage ratio (ACR) of 10 repeated simulation runs for
each combination of sensors and field size. According to
Fig. 15, Table I shows the SD for at least 80% and 100%
coverage. The table shows that SD would need to be
doubled to transit from 80% to 100% coverage, which
corresponds to doubling the sensors. This transition can
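The coverage ratios in Fig. 15 can be approximated with a small Monte Carlo sketch of a uniform random deployment. The sampling scheme and parameters here are ours, so the numbers will only roughly track the LOCOPac results:

```python
import random

def coverage_ratio(n_sensors, field, radius, samples=20000, seed=1):
    """Estimate CR for a uniform random deployment over a field x field
    square: the fraction of random sample points that lie within the
    sensing radius of at least one sensor."""
    rng = random.Random(seed)
    sensors = [(rng.uniform(0, field), rng.uniform(0, field))
               for _ in range(n_sensors)]
    covered = 0
    for _ in range(samples):
        px, py = rng.uniform(0, field), rng.uniform(0, field)
        if any((px - sx) ** 2 + (py - sy) ** 2 <= radius ** 2
               for sx, sy in sensors):
            covered += 1
    return covered / samples
```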
In the next experiment, different numbers of sensors
from 100 to 600 with sensing radii of 10 𝑚 to
60 𝑚 are deployed in a fixed field of 400 𝑚2. The
simulation is repeated 10 times to get the ACR for the
entire field. Fig. 16 shows that when the sensing radius is
low, e.g. 10 𝑚 , the impact of increasing sensors to
achieve high coverage is low. For example, when the
radius is 10 𝑚, increasing sensors by 100, i.e. from 300
to 400, from 400 to 500, and from 500 to 600, provides
about an 8% increase in coverage, and none of these
changes results in high coverage. Viewed differently, by
doubling the sensors from 300 to 600, the increase in
coverage is about 24%. Also, the impact of adding
sensors to push coverage beyond 95% is small.
Furthermore, as expected, the coverage ratio increases as
the sensing radius is increased to 20 𝑚, but the effect of
increasing sensors diminishes beyond a 30 m radius.
Figure 16. Sensor radius vs. average coverage ratio.
B. Energy efficiency algorithms
In random sensor distribution, the number of sensors
deployed might be more than what is required to achieve
a certain CR. This has the advantage of having areas that
are covered by extra (redundant) sensors. In addition, the
extra sensors provide for longer network lifetime in case
of failures due to the depletion of energy or other forms
of faults. On the other hand, coverage of the same areas
by multiple sensors wastes energy due to a number of
factors such as sensing of the environment or exchanging
data with the neighboring sensors. Additionally, the
amount of energy consumption is proportional to the
coverage ratio required. For example, forest fire
monitoring applications may require complete coverage
in dry seasons but only require 80% of the area to be
covered in rainy seasons. Therefore, to extend the
network lifetime, CR should be adjusted accordingly.
Consequently, algorithms have been designed with the
primary objective of ensuring the required CR and the
secondary objective of energy efficiency by deciding
when and how often certain sensors are placed in
sleep mode. It is important to have a good balance
between the CR and the number of active sensors in order
to maximize network lifetime. The following shows some
simulation results for CCP, RIS, and 3D coverage. For
further results on these and other algorithms such as
PEAS, the reader is referred to [32].
Coverage Configuration Protocol (CCP) - CCP is
basically a sensor redundancy algorithm that is executed
by every sensor to determine its sleep eligibility. Since
each sensor determines its fate independently and is only
concerned with its own sensing coverage, the sensors
cannot tell whether the entire field is covered. In
other words, the CCP algorithm is a local and
decentralized algorithm with the ability to cope with node
failures under PAC or CAC.
The following is an experiment using CCP with
complete coverage and with energy efficiency in mind. In
the experiment, a static field size of 100 𝑚2 is used and
100 sensors with sensing radius of 40 𝑚 are uniformly
distributed. The coverage redundancy k is set at 1 and the
failure rates considered are 0.005, 0.01, 0.05, 0.1, 0.3,
and 0.5. For each failure rate, the simulation is repeated
with a new sensor distribution until five distributions are
found that provide 100% coverage. As the sensors fail,
the average sensor density of the five simulations at
different points of time are then calculated. Obviously if
no failures occur, sensors stay active or sleep for the
entire time duration. Therefore, their state changes as
failures occur. Fig. 17 plots the results for different
failure rates.
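A simplified version of this experiment can be sketched by drawing exponential sensor lifetimes and tracking the surviving density over time. This sketch ignores CCP's sleep scheduling and the 100%-coverage filtering of distributions, so it only illustrates the density decay itself:

```python
import random

def sensor_density_over_time(n, field_area, rate, horizon=200, step=20, seed=3):
    """Draw an exponential lifetime for each sensor at the given failure
    rate and report the surviving sensor density at regular time points."""
    rng = random.Random(seed)
    lifetimes = [rng.expovariate(rate) for _ in range(n)]
    return {t: sum(lt > t for lt in lifetimes) / field_area
            for t in range(0, horizon + 1, step)}
```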
These graphs show the extent to which sensor density
can decrease and still have 100% coverage. A failure rate
of 0.005 indicates an average of one failure in 200 units
of time. A failure rate of 0.01 is on average a failure in
100 units of time, which is half the running time of 200.
This indicates that such failure rates tend to keep the
sensor density constant and thus CAC is preserved.
However, a failure rate of 0.1, i.e. one failure in 10 units
(5% of running time) and higher, tends to be risky as the
sensor density can change quickly. Other simulations can
be conducted in LOCOPac to see the effect of various
parameters on CAC while trying to conserve energy. As
an example, simulations have been run to test the effect
of failure rates on the total number of sleeping sensors
while ensuring 100% coverage, for various values of k in
k-coverage. Because of limited space, the results are not
shown, but the simulation results illustrate that the rate of
change in the number of sleeping sensors is similar
regardless of the value chosen for k. For further detail,
the reader is referred to [32].
Figure 17. Sensor density for various failure rates while keeping CAC.
Random Independent Sleeping (RIS) – Recall that in
RIS, a sensor stays active for 𝑝𝑇𝑠𝑙𝑜𝑡 duration at the
beginning of the time slot 𝑇𝑠𝑙𝑜𝑡 and goes to sleep for the
rest of the time, i.e. 𝑇𝑠𝑙𝑜𝑡 − (𝑝𝑇𝑠𝑙𝑜𝑡 ), where 0 ≤ 𝑝 ≤ 1 is
the active probability of a sensor. In LOCOPac, to
increase randomness among sensors with respect to their
sleeping patterns, the order of the active and sleep durations
is determined probabilistically. The new algorithm is
called Probabilistic RIS (PRIS). Specifically, a sensor
decides whether to stay active with probability of a or be
in the sleep mode with the probability of (1- a) at the start
of each time period. In the experiments conducted, a =
0.5 is used.
The relation between the number of deployed sensors
N and CR can be derived by using asymptotic analysis.
According to [37], the ACR is:
\[
\text{ACR} = 1 - e^{-\lambda_{SD}\,\pi R_s^2} \tag{5}
\]
where 𝜆𝑆𝐷 and 𝑅𝑠 are the sensor density of the active
sensors and the sensing range of each sensor,
respectively. This formula indicates that ACR is
independent of field size.
The next experiment is focused on partial coverage
using the PRIS algorithm. A field size of 200 𝑚2 is
considered and 100 sensors with sensing radius of 15 𝑚
are uniformly distributed in the field. The simulation is
run for 200 units of time divided into 5 time slots of 40
units. The simulation is repeated five times with a new
distribution for each time. The average CR of the five
runs is calculated and compared against the ACR in (5).
Fig. 18 conveys that the simulated and mathematical
coverage closely overlap for the 200 time units of
the simulation. This shows that equation (5) is indeed
a good estimate of CR and that LOCOPac
produces correct results. It further shows that the
ACR in (5) can be used to decide on the active
probability p to achieve a certain level of coverage.
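The comparison between the PRIS simulation and equation (5) can be sketched as follows. The snapshot model below, in which each sensor is independently active with probability p, is a simplification of the slot-based scheduling, and boundary effects make the simulated coverage slightly lower than the formula predicts:

```python
import math
import random

def expected_acr(active_density, rs):
    # Equation (5): ACR = 1 - exp(-lambda_SD * pi * Rs^2)
    return 1.0 - math.exp(-active_density * math.pi * rs ** 2)

def pris_coverage(n, field, rs, p_active=0.5, samples=4000, seed=7):
    """One PRIS snapshot over a field x field square: each sensor is
    independently active with probability p_active, and the coverage of
    the active set is estimated by Monte Carlo sampling."""
    rng = random.Random(seed)
    sensors = [(rng.uniform(0, field), rng.uniform(0, field))
               for _ in range(n)]
    active = [s for s in sensors if rng.random() < p_active]
    covered = 0
    for _ in range(samples):
        px, py = rng.uniform(0, field), rng.uniform(0, field)
        if any((px - sx) ** 2 + (py - sy) ** 2 <= rs ** 2
               for sx, sy in active):
            covered += 1
    return covered / samples
```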
Figure 18. Simulation versus mathematical CR in PRIS algorithm.
Indoor 3D Coverage - Recall that the region coverage
highly depends on the distance d between the layers. Also
recall the measured and actual spherical volumes in (2)
and (3), respectively. Fig. 19 exhibits the simulation results
from LOCOPac that show the accuracy of a single
sensor's area coverage as the number of layers increases.
In the simulation it is assumed that the region height is 12
units, e.g. 12 meters, and the sensor range is 5 units. A
unit is user defined. The accuracy is simply the division
of (2) by (3), which produces a value between 0 and 1.
Fig. 19a simply shows that as the number of layers
increases, implying that d decreases, the accuracy
approaches 1. In the figure, when n = 10, the accuracy
reaches 0.74. With n = 15, the accuracy is 0.85. An
accuracy of 0.95, as requested by the user, is achieved
when n = 40. Similar to Fig. 19a, Fig. 19b shows the
volume accuracy but with respect to d. An
accuracy of 0.90 requires d = 0.50, and an accuracy of 0.95,
as requested by the user, needs d = 0.31.
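The layer-based volume accuracy can be sketched with inscribed circular slabs. The layer placement below (symmetric about the sphere center, each slab using its face farther from the center) is our assumption, since equations (2) and (3) appear earlier in the paper, so the exact accuracy values of Fig. 19 will differ:

```python
import math

def layered_volume(r, d):
    """Approximate a sphere of radius r as a stack of circular slabs of
    thickness d, using for each slab the circle inscribed at its face
    farther from the center (an under-estimate that improves as d shrinks)."""
    vol, z = 0.0, -r
    while z < r:
        edge = max(abs(z), abs(min(z + d, r)))  # farther slab face
        vol += math.pi * max(r * r - edge * edge, 0.0) * d
        z += d
    return vol

def accuracy(r, d):
    # Ratio of the layered estimate to the true volume (4/3) * pi * r^3
    return layered_volume(r, d) / ((4.0 / 3.0) * math.pi * r ** 3)
```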
Once the distance d between the layers is determined, the
total 3D coverage of sensors can be calculated. Fig. 20a
shows the deployment of some sensors in 3D space on
the ceiling of a room. The red dots are the sensor nodes,
whose signal spheres project the sensor circles onto the
layers. One specific plane is marked in
yellow (Fig. 20a), and the corresponding sensor circles
for that layer are shown in Fig. 20b. The user is able to
select different planes. The corresponding output results,
including the coverage for the selected layer and the total
coverage for the field, are shown in Fig. 20c.
Figure 19. Volume accuracy of a sensor sphere: (a) Accuracy as the
function of number of layers n; (b) Accuracy based on distance value d.
Figure 20. Coverage of multiple overlapping circles: (a) Sensor
deployment in a 3D cube space; (b) Projected circle formations of a
layer; (c) Coverage results produced by LOCOPac.
VI. CONCLUSIONS AND FUTURE WORK
The ability to accurately localize indoor objects
equipped with sensors and cameras in WSNs has been
demonstrated by taking advantage of BIM, 3D stereo
techniques, and the PCL open source library. The
BIM model of the target space has been converted to
3D spatial information that is partitioned into specific
regions. The region in which a mobile target is localized
is determined using a group of pre-installed sensors. The
accurate location of the target is then determined based
on the image matching algorithm, i.e. ICP algorithm, and
the distance measurement technique using stereo image
computations. The successful experiments conducted by
allowing the object to roam in different connected
regions and recording its position at various points of
time show that the proposed methodology is potentially
useful for real-world applications such as in smart
buildings, security, and surveillance.
The detection of moving objects highly depends on the
appropriate location of sensors. Since sensors located
inside a building are often installed in places other
than floors (to provide a better line of sight and to minimize
multipath reflection effects), it was necessary to consider
the 3D coverage of sensors and to investigate the effect of
their locations on coverage. The 3D coverage
of a region was obtained by converting the spheres
formed by the ZigBee signals into 2D planes (layers).
The coverage summation of the planes provided a good
estimate of the total 3D coverage. The approach for
achieving 3D coverage led to the development and
simulation of 2D sensor deployment algorithms, which
have the additional advantage of applying to outdoor sensor
coverage. In this regard, some simulation examples have
been presented.
Currently, we are enhancing the indoor coverage and
localization in multiple ways:
 Investigating mobile devices – The possibility of using
smaller mobile devices, such as phones and tablets
equipped with ZigBee capability, is being
investigated.
 Using other network protocol technologies – Fingerprinting based on WiFi access points instead of
using ZigBee sensors is under consideration. WiFi
based-fingerprinting has been proposed in a number of
studies [12,13,23,33]. Many of these techniques,
including this study, use the RSSI absolute values for
building the fingerprints. A common drawback among
these approaches is the inaccuracy of localization
because of the sensitivity of signal strength to
multipath fading, types of surrounding obstacles, and
device orientation. Additionally, different devices may
register different measurements of RSSI, further
reducing the localization accuracy. A better approach,
as proposed in [6], is to refrain from using the absolute
values of RSSI, and instead provide localization based
on relative signal strength measurements. In other
words, although different devices may provide
different RSSIs, their order relation among the
measured RSSIs tends to be the same. Consequently, a
viable approach to enhance the accuracy of region
localization is to use an approach similar to the one
proposed in [6].
 Enhancing LOCOPac – A number of enhancements
can be made to LOCOPac. One is the inclusion of
other deployment algorithms, such as some of the
predefined deterministic algorithms. Other
enhancements include rearranging options to
provide a better user interface, or making the software
accessible via a web interface.
 Faster processing – Better mathematical schemes may
be needed for faster image processing. Specifically,
since the 3D spatial model is generated directly from
the BIM model, region determination and the accuracy
of sensor localization depend on the resolution of the
generated SI model. Better accuracy requires higher
resolution, which in turn requires more storage and
longer processing time for image processing.
 Including fault tolerance - Further investigation is
needed for better placement of sensors to ensure
complete area coverage while minimizing false
positives. We conjecture that this can be achieved
using fault tolerance techniques. More specifically,
while DMS is receiving signals from the region it is in,
it may receive signals from faulty sensors or from
sensors in other regions as well, causing the DMS to
mistakenly localize its region. To minimize such a
problem, we are analyzing and combining multiple
signal strengths and removing the outliers to reach a
decision with a high probability of success.
ACKNOWLEDGEMENT
This research has been partially supported by the
NASA Nebraska Space Grant and the Graduate
Research and Creative Activity (GRACA) program at
the University of Nebraska Omaha (UNO). We also
thank the Kiewit Corporation for providing us with the
experimental model of PKI.
REFERENCES
[1] K. Al Nuaimi, H. Kamel, “A survey of indoor positioning
systems and algorithms”, Int’l Conference on Innovations
in Information Technology, pp. 185-190, 2011.
[2] J. Aspnes, et al., “A theory of network localization,” IEEE
Transactions on Mobile Computing, pp. 1663-1678, 2006.
[3] B. Azzedine, A.B.F.O. Horacio, F.N. Eduardo, A.F.L.
Antonio, “Localization Systems for Wireless Sensor
Networks,” IEEE Wireless Communications, pp. 15351284, Dec 2007.
[4] J. Bachrach, T. Christopher, “Localization in Sensor
Networks,” in Handbook of Sensor Networks: Algorithms
and Architectures, I. Stojmenovic (Ed.), 2005.
[5] P.J. Besl, N.D. McKay, “A method for registration of 3-D
shapes,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
[6] I. Bisio, F. Lavagetto, M. Marchese, “GPS/HPS-and Wi-Fi
fingerprint-based location recognition for check-in
applications over smartphones in cloud-based LBSs”,
IEEE Transactions on Multimedia, vol. 15, no. 4, pp. 858-869, 2013.
[7] I. Bisio, F. Lavagetto, M. Marchese, A. Sciarrone,
“Performance comparison of a probabilistic fingerprint-based indoor positioning system over different
smartphones”, Performance Evaluation of Compute and
Telecommunication systems, pp. 161-166, 2013.
[8] N. Bulusu, J. Heidemann, D. Estrin. “GPS-less low-cost
outdoor localization for very small devices,” IEEE
Personal Communications, vol. 7, pp. 28-34, 2000.
[9] M. Cardei, J. Wu, “Coverage in wireless sensor networks,”
in Handbook of Sensor Networks, M. Iiyas, I. Mahgoub
(Eds), CRC Press, 2004.
[10] C.C. Chen, M.C. Lu, C.T. Chuang, C.P. Tsai, “Vision-based distance and area measurement system,” WSEAS
Transactions on Signal Processing, vol. 4, no. 2, pp. 36-43, Feb 2008.
[11] W. Chen, X. Li, “Sensor localization under Limited
Measurement Capabilities,” IEEE Network, vol. 3, pp. 16-23, May 2007.
[12] Y. Chon, E. Talipov, H. Cha, “Autonomous management
of everyday places for a personalized location provider”,
IEEE Transactions on Systems, Man, and Cybernetics, Part
C, vol. 42, no 4, pp. 518-531, 2012.
[13] Y. Chon, H. Cha, “Lifemap: A smartphone-based context
provider for location-based services”, IEEE Pervasive
Computing, vol. 10, no. 2, pp. 58-67, 2011.
[14] D. Culler, D. Estrin, M.B. Srivastava, “Overview of sensor
networks,” IEEE Computer, vol. 37, no. 8, pp. 41-49,
2004.
[15] I. Dietrich, F. Dressler, “On the lifetime of wireless sensor
networks,” ACM Transactions on Sensor Networks, vol. 5,
no. 1, Feb 2009.
[16] C. Gui, P. Mohapatra, “Power conservation and quality of
surveillance in target tracking sensor networks,” ACM Int’l
Conf on Mobile Computing and Networking, pp. 129-143,
2004.
[17] V. Honkavirta, T. Perala, S. Ali-Loytty, R. Piche, “A
comparative survey of WLAN location fingerprinting
methods,” Workshop on Positioning, Navigation and
Communication, pp. 243-251, Mar 2009.
[18] H. Choset, K.M. Lynch, S. Hutchinson, G.A. Kantor,
W. Burgard, L.E. Kavraki, S. Thrun, “Principles of robot motion:
Theory, algorithms, and implementations,” MIT Press,
May 2005.
[19] C.F. Huang, Y.C. Tseng, H.L. Wu, “Distributed protocols
for ensuring both coverage and connectivity of a wireless
sensor network,” ACM Transactions on Sensor Networks,
vol. 3, no. 1, pp. 1–24, 2007.
[20] W. Jing, R. K. Ghosh, S.K. Das, “A survey on sensor
localization,” Control Theory, vol. 8, pp. 2–11, Apr 2010.
[21] W. Kang, S. Nam, Y. Han, S. Lee, “Improved heading
estimation for smartphone-based indoor positioning
systems”, IEEE Int’l Symposium on Personal, Indoor and
Mobile Radio Communications, pp. 2449-2453, 2012.
[22] D. J. Kriegman, E. Triendl, T. O. Binford, “Stereo vision
and navigation in buildings for mobile robots,” IEEE
Transactions Robotics and Automation, vol. 5, no. 6, pp.
792-803, Dec 1989.
[23] M. Lee, H. Yan, D. Han, C. Yu, “Crowdsourced radiomap
for room-level place recognition in urban environment”,
IEEE Int’l Conference on Pervasive Computing and
Communications Workshops, pp. 648-653, 2010.
[24] H. Liu, “Survey of wireless indoor positioning techniques
and systems,” IEEE Transactions on Systems, Man, and
Cybernetics-Part C: Applications and Reviews, vol. 37, no.
6, pp. 1067-1080, Nov 2007.
[25] M. Pauly, M. Gross, L.P. Kobbelt, “Efficient simplification of
point-sampled surfaces,” IEEE Visualization, pp. 163-170,
Oct 2002.
[26] D. Moore, J. Leonard, D. Rus, et al., “Robust distributed
network localization with noisy range measurements,” Int’l
Conference on Embedded Networked Sensor Systems, pp.
50-61, 2004.
[27] R. Mulligan, H.M. Ammari, “Coverage in wireless sensor
networks: A survey,” Network Protocols and Algorithms,
vol. 2, no. 2, pp. 27-53, 2010.
[28] National Building Information Model (BIM) Standard
Committee, http://www.nationalbimstandard.org/faq.php#faq1,
Accessed Nov 2014.
[29] A. Pal, “Localization algorithms in wireless sensor
networks: Current approaches and future challenges,”
Network Protocols and Algorithms, vol. 2, pp. 45-74, Mar
2010.
[30] Point Cloud Library: http://pointclouds.org, Accessed Nov
2014.
[31] D. Puccinelli, M. Haenggi, “Wireless sensor networks:
Applications and challenges of ubiquitous sensing,”
IEEE Circuits and Systems Magazine, 3rd Quarter, pp. 19-29, 2005.
[32] A. Rezaeian, “Performance & simulation analysis of
sensor area coverage,” MS Thesis, Computer Science
Dept., University of Nebraska, Omaha, 2013.
[33] B.J. Shin, K.W. Lee, S.H. Choi, J.Y. Kim, W.J. Lee,
H.S. Kim, “Indoor WiFi positioning system for Android-based
smartphone,” Int’l Conference on Information and
Communication Technology Convergence, pp. 319-320,
2010.
[34] P.K. Singh, T. Bharat, P.S. Narendra, “Node localization
in wireless sensor networks,” Int’l Journal of Computer
Science and Information Technologies, vol. 2, pp. 2568-2572, 2011.
[35] D. Tian, N.D. Georganas, “A node scheduling scheme for
energy conservation in large wireless sensor networks,”
Journal of Wireless Communications and Mobile
Computing, vol. 3, no. 2, pp. 271-290, 2003.
[36] S. Totilo, "Natal recognizes 31 body parts, uses tenth of
Xbox 360 computing resources," Kotaku, Gawker
Media, Nov 25, 2010.
[37] B. Wang, Coverage Control in Sensor Networks,
Springer, 2010.
[38] Q. Wu, X.R. Li, G.S. Wu, “Interface design for
somatosensory interaction,” LNCS 8518, pp. 794-801,
2014.
[39] G. Xing, X. Wang, Y. Zhang, C. Lu, R. Pless, C. Gill,
“Integrated coverage and connectivity configuration
for energy conservation in sensor networks,” ACM
Transactions on Sensor Networks, vol. 1, no. 1, pp. 36-72,
2005.
[40] F. Ye, H. Zhang, S. Lu, L. Zhang, J. Hou, “A randomized
energy-conservation protocol for resilient sensor
networks,” Wireless Networks, vol. 12, no. 5, pp. 637-652,
2006.
[41] M. Younis, K. Akkaya, “Strategies and techniques for
node placement in wireless networks: A survey,” Ad
Hoc Networks, vol. 6, no. 4, pp. 621-655, 2007.