
EPHEMERA
http://ephemerajournal.com/
ISSN: 1298-0595
Vol.27; No. 4 (2015)
A Modern Method for Tracking the Distance Traversed by a Human Using Image Processing
Shadi Shanesazzadeh
Sharif University of Technology, Kish Island, Iran
ABSTRACT
In the present study, a new method is proposed for tracking individuals and measuring the distance traversed by a person, using a camera that is attached to the person and pointed down toward the ground. This is the first time that the object being tracked is not in the camera's field of view and tracking is carried out with an inside-out (moving camera) approach. Tracking is performed on a video of the ground texture, which makes it possible to measure the traversed distance using tracking markers placed on the ground and image processing algorithms. The markers are detected and tracked with the RGB method. The distance traversed in pixels is then converted to real-world units by spatial calibration. At the end of the paper, a sample video is processed; the relative error of the distance obtained by the algorithm, compared with the real distance, is about 0.003, which indicates the high accuracy of the method in calculating the distance. The simulation is carried out in MATLAB.
KEY WORDS: object tracking, spatial calibration, RGB method, image
processing
Introduction
Video tracking is one of the fundamental topics in image processing. Object tracking has many applications, including military systems [1], traffic monitoring [2], identification systems and many medical applications [3]. The aim of tracking an object is to determine its location in every frame of a video sequence. Over the years, various methods have been proposed for object tracking, each with advantages and disadvantages depending on the type of application, the required accuracy, the computational complexity, the speed and so on. Broadly speaking, tracking methods can be classified into three groups: point tracking, model (kernel) tracking and silhouette tracking. In point tracking, detected objects are represented in consecutive frames as points [4, 5].
In model tracking, the object is followed by estimating the motion of a kernel across consecutive frames; the model can be a rectangular template or an elliptical region with an associated histogram [6, 7, 8]. Some researchers have modeled the head and body of a person as a two-dimensional model [9, 10]. In silhouette tracking, the region occupied by the object is estimated in each frame [11, 12].
In this study, a new method is proposed for locating individuals and measuring the distance traversed by a person, using a camera attached to the person and pointed down toward the ground. The traversed distance is estimated by recording a video of markers placed on the ground and applying image processing algorithms. In the following sections, the spatial calibration process is introduced, the algorithms of the proposed system are described, and finally the results of tracking and simulation in MATLAB are presented, together with directions for future work.
Proposed method
In the present study, a new method is presented for calculating the distance traversed by a person using a camera and image processing. The camera is attached to the person at a certain distance from the ground, as shown in figure 1. The texture of the ground traversed by the person over a given period is recorded. To make the ground trackable, labels are placed on it; the labels can have any geometric shape, and in this project rectangular labels are used. The idea of the labels is derived from street lane markings. The RGB method is used to track the markers. The center of gravity of each marker and its displacement with respect to the previous frame are then obtained in pixels by the tracking algorithm. Finally, spatial calibration is applied to express the distance traversed by the person in centimeters; spatial calibration makes it possible to convert measurements from pixels to units such as centimeters or inches.
Figure 1: profile of the person and the camera
The steps of the project that follow the spatial calibration are depicted in the algorithm of figure 2.
Figure 2: proposed algorithm
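As an orientation for the steps described below, the following MATLAB sketch mirrors the overall pipeline of figure 2. It is an illustrative outline only: the file name, the color thresholds, the minimum-area threshold and the calibration factor are assumptions, not values taken from the paper.

    % Minimal sketch of the overall pipeline (illustrative values only)
    v = VideoReader('ground_video.avi');       % hypothetical input video
    cmPerPixel = 0.14;                         % obtained from spatial calibration
    prevC = [];                                % center of gravity in the previous frame
    totalCm = 0;                               % accumulated distance in centimeters
    while hasFrame(v)
        frame = im2double(readFrame(v));                                       % RGB frame
        frame = imfilter(frame, fspecial('average', [3 3]), 'replicate');      % denoise
        mask  = frame(:,:,1) > 0.5 & frame(:,:,2) < 0.3 & frame(:,:,3) < 0.3;  % red label
        mask  = bwareaopen(imfill(mask, 'holes'), 50);                         % clean-up
        [r, c] = find(mask);
        if ~isempty(r)
            cent = [mean(c), mean(r)];         % center of gravity [x, y], in pixels
            if ~isempty(prevC)
                totalCm = totalCm + norm(cent - prevC) * cmPerPixel;
            end
            prevC = cent;
        end
    end
    fprintf('Traversed distance: %.1f cm\n', totalCm);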
Taking images
The aim of this step is to capture images of the ground with a digital video camera. Each frame is 640*480 pixels in RGB format, captured at a rate of at least 20 frames per second. An image of size m*n is in fact a matrix of pixels with m rows and n columns, and the value of each element of the matrix indicates the brightness level of the corresponding pixel.
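As a minimal sketch (the video file name is an assumption), the frames can be read and inspected as follows:

    % Read one frame and inspect its size and frame rate
    v = VideoReader('ground_video.avi');      % hypothetical input video
    frame = readFrame(v);                     % one 480x640x3 uint8 RGB frame
    [m, n, ch] = size(frame);                 % m rows, n columns, 3 color channels
    fprintf('%d x %d pixels, %d channels, %.0f frames/s\n', m, n, ch, v.FrameRate);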
Pre-processing
This step consists of processing that improves the image. The image is enhanced by an algorithm written in MATLAB in the following three steps:
Noise removal:
In this step, an averaging filter is applied to remove the noise added to the image.
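A minimal sketch of this step, assuming a 3*3 averaging kernel (the kernel size is not stated in the paper) and a hypothetical frame file:

    % Averaging filter for noise removal
    frame    = imread('frame001.png');        % hypothetical captured frame
    denoised = imfilter(frame, fspecial('average', [3 3]), 'replicate');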
Morphological operations:
In the second step, in order to improve the subsequent processing and increase its accuracy, the regions and holes present in the image are filled so that the detected regions are consistent with the main regions of the image.
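Continuing the previous sketch, the hole filling and region clean-up could be realized as follows (the binarization step and the 50-pixel area threshold are assumptions):

    % Fill holes and discard small spurious regions
    bw = imbinarize(rgb2gray(denoised));      % example binary image of the label regions
    bw = imfill(bw, 'holes');                 % fill holes inside the regions
    bw = bwareaopen(bw, 50);                  % remove regions smaller than 50 pixels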
Contrast:
The brightness of the image is increased for reasons such as improving the image quality. The main operation performed in this step is to add a constant value to the brightness of each pixel. To avoid introducing noise into the image, the amount of contrast enhancement must be limited. To increase the contrast of a color image, the brightness must be manipulated without destroying the original colors of the image.
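One way to add a constant to the brightness while leaving the colors untouched, as described above, is to work in HSV space; the sketch below (continuing from the denoised frame) illustrates that idea, and the 0.1 offset is an assumption:

    % Hue-preserving brightness increase via the HSV value channel
    hsv = rgb2hsv(denoised);
    hsv(:,:,3) = min(hsv(:,:,3) + 0.1, 1);    % add a constant, clipped to the valid range
    enhanced = hsv2rgb(hsv);                  % back to RGB, hue and saturation unchanged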
Detection of labels
In this step, the labels placed on the ground are detected using the RGB method. For this purpose, the labels are chosen in colors such as red, green and blue. In general, a color image is formed from a combination of several two-dimensional images; in the RGB color system, each image consists of three component images for the red, green and blue channels. The model is based on a Cartesian coordinate system.
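A minimal sketch of detecting, for example, a red label by thresholding the three channels (the threshold values are illustrative, not from the paper):

    % Detect a red label by RGB thresholding
    img = im2double(enhanced);                % scale the image to [0, 1]
    redMask = img(:,:,1) > 0.5 & img(:,:,2) < 0.3 & img(:,:,3) < 0.3;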
Center of gravity
In this step, the aim is to obtain the center of gravity of the labels detected in the previous step. The algorithm is based on simple mathematical calculations and is able to obtain the center of gravity of most geometric shapes.
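For a single detected region, the center of gravity can be computed from first-order moments; the sketch below is equivalent, in that case, to MATLAB's regionprops(redMask, 'Centroid'):

    % Center of gravity of the detected label
    [rows, cols] = find(redMask);             % pixel coordinates of the label
    centroid = [mean(cols), mean(rows)];      % [x, y] center of gravity, in pixels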
Pixel displacement
In this step, the center of gravity obtained in the current frame is subtracted from that of the previous frame, which gives the displacement of the center of gravity of the detected label in pixels.
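A minimal sketch of this subtraction, assuming prevCentroid holds the center of gravity found in the previous frame:

    % Frame-to-frame displacement of the center of gravity, in pixels
    dx = centroid(1) - prevCentroid(1);
    dy = centroid(2) - prevCentroid(2);
    displacementPix = hypot(dx, dy);          % Euclidean displacement in pixels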
Spatial calibration process
Spatial calibration is the process of converting pixel measurements into real values; it is also the stage at which errors in the setup are compensated. The information of an image is contained in its pixels, and the calibration process makes it possible to convert measurements from pixels to other units such as inches or centimeters. The calculation is straightforward once the real size of a pixel is known: for example, if one pixel corresponds to 1 inch, a length of 10 pixels corresponds to a real length of 10 inches. To obtain the real size of each pixel, a grid of identical points is used, as in figure 3, in which the real spacings dx and dy are known. The camera takes an image of the calibration grid from its working distance above the ground. The points are detected with the written algorithm, and the spacing between them is measured in pixels. By comparing this spacing with the known real spacing, the size of a pixel in centimeters is obtained. The distance from the camera to the ground should remain fixed while filming; however, this distance can be changed, provided the calibration procedure presented in this paper is repeated, because the real size of each pixel depends on the distance between the camera and the ground.
Figure 3: spatial calibration
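A minimal sketch of the scaling described above, with illustrative numbers (the grid spacing and the detected point coordinates are assumptions, not values from the paper):

    % Calibration factor from two neighboring grid points with known spacing
    dxRealCm = 5;                             % known real spacing of the grid, in cm
    p1 = [120, 240];  p2 = [156, 240];        % detected grid points, in pixels
    dxPix = norm(p2 - p1);                    % spacing measured in pixels (here 36)
    cmPerPixel = dxRealCm / dxPix;            % 5/36, i.e. about 0.14 cm per pixel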
Obtaining the displacement in real units
Given the distance traversed in pixels and the spatial calibration, the traversed distance is obtained in centimeters. In step 7, two algorithms are used. The first algorithm gives the number of pixels by which the center of gravity has been displaced, which yields the traversed distance in pixels. The second algorithm then gives the real size of a pixel in centimeters. The number of pixels obtained from the first algorithm is multiplied by the value obtained from the second
algorithm, and the result is the distance traversed by the person in centimeters.
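As a worked sketch of this multiplication, using the numbers reported in the results section:

    % Convert the accumulated pixel displacement to centimeters
    displacementPix = 612;                    % total displacement from the first algorithm
    cmPerPixel = 0.14;                        % real size of a pixel, from the calibration
    distanceCm = displacementPix * cmPerPixel;    % 612 * 0.14 = 85.68 cm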
Results of simulation
In this study, a new method has been presented for locating and tracking a person and measuring the distance traversed, using a camera attached to the person and pointed down toward the ground. The tracking was performed on a video of the ground texture, which made it possible to estimate the traversed distance using tracking markers placed on the ground and image processing algorithms. The markers were detected and tracked with the RGB method. The traversed distance in pixels was then estimated by obtaining the center of gravity of the markers and following its displacement with the algorithm, and the distance was converted to real measurement units using spatial calibration. Figure 4 shows the detection of the labels with the RGB method and the result of applying the algorithm to several videos; the x and y values indicate the displacement of the center of gravity of the labels with respect to the previous frame. Using the written algorithms, the x and y displacements are obtained as functions of time t. Figure 5 shows the result of applying the algorithm to a sample video.
Figure 4: detecting the markers and obtaining the displacement of the coordinates of the center of gravity
Figure 5-a shows the changes of the x pixel coordinate over time. The first row corresponds to the pixel displacement of the center of gravity of the red label, the second row to that of the blue label and the third row to that of the green label. The fourth row shows the sum of the three rows, in centimeters. The changes in x correspond to the displacement of the camera
along the horizontal axis. Figure 5-b shows the changes of the y pixel coordinate over time; the changes in y correspond to the distance traversed by the person. After this step, the displacement in pixels is obtained, which is equal to 612 pixels for this video.
Figure 5-a: changes of the x displacement over time
Figure 5-b: changes of the y displacement over time
In the next step, the real size of each pixel in centimeters is obtained. In this step, by applying the algorithm to the calibration grid, the pixel size is obtained as shown in the figure. It should be noted that the camera must image the calibration grid from the same distance used when imaging the ground texture. As illustrated in figure 3, to carry out the calibration, the circles are first detected and then the distance between two detected circles is measured in pixels. Given the corresponding real distance in centimeters, a simple scaling yields the real size of each pixel. For the distance between the camera and the ground used here, the calibration gives 0.14 cm per pixel. In the last step, the x curve is plotted against y, as shown in figure 6.
Figure 6: the distance traversed in the x-y plane (in cm)
By multiplying the number of pixels obtained in step 5 by the real size of a pixel in centimeters obtained in step 6, the distance traversed by the person is obtained. This distance equals 612*0.14, which is 85.68 cm. The units in the figure are centimeters. As is clear from the figure, the y displacement is about 85 cm, which is the distance traversed by the person, while x reaches about 2.4 cm, which reflects vibration of the camera along the x axis. The real value of y is 86 cm; the relative error of the distance obtained by the algorithm, compared with the real distance, is about 0.003, which indicates the high accuracy of the method in calculating the distance. The simulation has been carried out in MATLAB.
In this paper, a new method has been presented for locating individuals and measuring the distance traversed by a person using a camera
attached to the person and pointed down toward the ground. Tracking has been performed on a video of the ground texture, which makes it possible to calculate the traversed distance using markers placed on the ground and image processing algorithms. The markers are detected and tracked with the RGB method. The traversed distance in pixels is then obtained from the center of gravity of the markers and its displacement, as computed by the algorithm, and spatial calibration is applied to convert the distance to real measurement units. It is hoped that future work will generalize the project to tracking and measuring curved routes, routes with obstacles, stairs and so on, and that future studies will extend the tracking to handle geometric perspective distortions. The proposed method can also be used for motion guidance of a robot, in which case the robot's motion error can be corrected with a feedback system.
References
1- Ahmadi, H. (2004), Presenting a method for fast detection of objects in videos using color and motion information, Power and Telecommunication, Isfahan University of Technology.
2- Yun, X., Bachmann, E. R., Moore IV, H., Calusdian, J. (2007), "Self-contained position tracking of human movement using small inertial/magnetic sensor modules", IEEE International Conference on Robotics and Automation.
3- Kass, M., Witkin, A., Terzopoulos, D. (1987), "Snakes: active contour models", in Proc. First International Conference on Computer Vision, London, pp. 259-269.
4- Wang, L., Wang, X., Xu, J. (2010), "Lip detection and tracking using variance-based Haar-like features and Kalman filter", International Conference on Frontier of Computer Science and Technology.
5- Santhosh Kumar, G. (2008), "Object tracking from video sequence", Department of Computer Science, Cochin University, June.
6- D'Apuzzo, N. (2003), "Human body motion capture from multi-image video sequences", Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland.
7- Wren, C., Azarbayejani, A., Pentland, A. (1997), "Pfinder: real-time tracking of the human body", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 780-785.
8- Leibe, B., Seemann, E., Schiele, B. (2009), "Pedestrian detection in crowded scenes", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 878-885.
9- Cai, Q., Aggarwal, J. K., Tracking human motion using multiple cameras, Computer and Vision Research Center, Department of Electrical and Computer Engineering.
10- Wu, Y., Huang, T. S. (2004), "Robust visual tracking by integrating multiple cues based on co-inference learning", International Journal of Computer Vision, 58(1), 55-71, June.
11- Schaub, H., Smith, C. (2005), "Color snakes for dynamic lighting conditions on mobile manipulation platforms", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
12- Kim, W., Lee, J. J. (2005), "Object tracking based on the modular active shape model", Mechatronics, vol. 15, issue 3, pp. 371-402.
13- Gupta, K., Kulkarni, A. V. (2009), "Implementation of an automated single camera object tracking system using frame differencing and dynamic template matching", IEEE.
14- Zhao, T., Nevatia, R. (2003), "Bayesian human segmentation in crowded situations", in IEEE Computer Vision and Pattern Recognition, pages II:456-459.
15- Ali, I. (2009), "Detection and tracking of multiple humans in high-density crowds", Asian Institute of Technology, School of Engineering and Technology, Thailand, May.
16- Poon, E., Fleet, D. J. (2002), "Hybrid Monte Carlo filtering: edge-based people tracking", in IEEE Workshop on Motion and Video Computing, pages 151-158.
17- Yang, C., Duraiswami, R., Davis, L. S. (2005), "Efficient mean-shift tracking via a new similarity measure", in IEEE Computer Vision and Pattern Recognition, pages I:176-183.
18- Rittscher, J., Kato, J., Joga, S., Blake, A. (2000), "A probabilistic background model for tracking", European Conference on Computer Vision (ECCV), vol. 2, pp. 336-350.
19- Pontil, M., Verri, A. (1998), "Support vector machines for 3D object recognition", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 6.
20- Wu, M., Sun, J. Y. (2010), "Object detection and tracking with mobile robot using extended Kalman filter in unknown environment", International Conference on Machine Vision and Human Machine Interface.
21- Santos, T. T., Morimoto, C. H. (2010), "Multiple camera people detection and tracking using support integration", Elsevier.
22- Vigus, S. A., Bull, D. R., Canagarajah, C. N. (2001), "Video object tracking using region split and merge and a Kalman filter tracking algorithm", IEEE.
23- Juza, M., Marik, K., Rojicek, J. (2006), "3D template based single camera multiple object tracking", Computer Vision Winter Workshop.
24- Karaulova, I. A., Hall, P. M., Marshall, A. D. (2000), "A hierarchical model of dynamics for tracking people with a single video camera", in Proc. British Machine Vision Conference, pp. 262-352.
25- Stauffer, C., Grimson, W. (2000), "Learning patterns of activity using real-time tracking", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747-767.
26- Elgammal, A., Harwood, D., Davis, L. (2000), "Non-parametric model for background subtraction", European Conference on Computer Vision (ECCV), pp. 751-757.