KALMAN FILTERING METHODS FOR MOVING VEHICLE TRACKING

By NICOLAS OBOLENSKY

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA 2002

Copyright 2002 by Nicolas Obolensky

To my parents

ACKNOWLEDGMENTS

I would like to express my sincere gratitude towards my advisor, Dr. José Príncipe, for giving me an opportunity to work on this challenging topic and for providing continuous feedback during the course of this work. Thanks are due to Dr. Erik Larsson and Dr. Yuguang Fang for agreeing to be on my committee. I wish to thank my parents and my sister for their constant emotional support and encouragement throughout my academic career. This work was partially supported by a Honeywell Corporation grant.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT
CHAPTER
1 INTRODUCTION
  Related Work
  Approach
2 THE KALMAN FILTER APPLIED TO MOVING VEHICLE TRACKING
  A Description of the Kalman Filter
  A Two-State Filter: Friedland's Model
    Dynamic Model
    Measurement Model
  A Three-State Filter: Ramachandra's Model
    Dynamic Model
    Measurement Equation
    Drawbacks of the Classical Methods
3 A TIME-VARYING KALMAN FILTER
  Time-Varying Kalman Filter
  A Time-Varying State Model
  Model Update
  Results
4 THE UNSCENTED TRANSFORMATION ON NONLINEAR MEASUREMENTS
  The Extended Kalman Filter Applied to Moving Vehicle Tracking
    A Description of the Extended Kalman Filter
    The Extended Kalman Filter and Its Flaws
  The Unscented Kalman Filter
    The Unscented Transformation
    The Application to the Kalman Filter
  Results
    Robustness to State Noise
    Robustness to Measurement Noise
  Conclusion
5 CONCLUSION AND FURTHER RESEARCH
  Conclusion
  Further Research
REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

Table
2.1 Discrete Kalman Filter time update equations
2.2 Discrete Kalman Filter measurement update equations
4.1 Extended Kalman Filter time update equations
4.2 Extended Kalman Filter measurement update equations
4.3 Computation of the sigma points
4.4 Unscented Kalman Filter time update equations
4.5 Unscented Kalman Filter measurement update equations

LIST OF FIGURES

Figure
2.1 The ongoing discrete Kalman filter cycle
2.2 A complete picture of the operation of the Kalman filter
3.1 Example of video images.
First video, the vehicle is appearing.
3.2 Example of video images. First video, the vehicle is slowing down.
3.3 Example of video images. First video, the vehicle is stopping.
3.4 Example of video images. First video, the vehicle is speeding up.
3.5 Example of video images. First video, the vehicle is disappearing.
3.6 Example of videos. Second video, the vehicle is entering.
3.7 Example of videos. Second video, the vehicle is stopping.
3.8 Example of videos. Second video, the vehicle is stopped and the camera is shaking.
3.9 Example of videos. Second video, the vehicle is moving slowly.
3.10 Example of videos. Second video, the vehicle goes back.
3.11 Example of videos. Third video, the moving vehicle is on the road.
3.12 Example of videos. Third video, the vehicle is approaching a crossroad.
3.13 Example of videos. Third video, the vehicle is crossing the crossroad.
3.14 Example of videos. Third video, the vehicle is speeding up.
3.15 Example of videos. Third video, the vehicle is disappearing.
3.16 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=5, for the first video.
3.17 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=5, for the second video.
3.18 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=5, for the third video.
3.19 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=10, for the first video.
3.20 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=10, for the second video.
3.21 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=10, for the third video.
3.22 Actual trajectory, noisy measurements and estimated trajectory on the first video with a noise standard deviation of 5.
3.23 Actual trajectory, noisy measurements and estimated trajectory on the first video with a noise standard deviation of 10.
3.24 Actual trajectory, noisy measurements and estimated trajectory on the second video with a noise standard deviation of 5.
3.25 LMS algorithm step size evolution with respect to time for the second video.
3.26 LMS algorithm step size evolution with respect to time for the third video.
4.1 A complete picture of the operation of the extended Kalman filter.
4.2 The Unscented Transformation estimates for the mean and the covariance compared to the actual mean and covariance, and to the "first-order" estimates.
4.3 Experimental set for the Unscented Kalman Filter performance test.
4.3 Unscented Kalman Filter subsystem.
4.4 Actual and Estimated Trajectories for the Extended Kalman Filter for a state noise of 10^-3.
4.5 Actual and Estimated Trajectories for the Unscented Kalman Filter for a state noise of 10^-3.
4.6 Range Estimation Residual for the Extended Kalman Filter for a state noise of 10^-3.
4.7 Range Estimation Residual for the Unscented Kalman Filter for a state noise of 10^-3.
4.8 Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the Extended Kalman Filter for a state noise of 10^-3.
4.9 Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the Unscented Kalman Filter for a state noise of 10^-3.
4.10 Actual and Estimated Trajectories for the Extended Kalman Filter for a state noise of 10^-1.
4.11 Actual and Estimated Trajectories for the Unscented Kalman Filter for a state noise of 10^-1.
4.12 Range Estimation Residual for the Extended Kalman Filter for a state noise of 10^-1.
4.12 Range Estimation Residual for the Unscented Kalman Filter for a state noise of 10^-1.
4.13 Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the Extended Kalman Filter for a state noise of 10^-1.
4.14 Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the Unscented Kalman Filter for a state noise of 10^-1.
4.15 Actual and Estimated Trajectories for the Extended Kalman Filter for a measurement noise of 100.
4.16 Actual and Estimated Trajectories for the Unscented Kalman Filter for a measurement noise of 100.
4.17 Range Estimation Residual for the Extended Kalman Filter for a measurement noise of 100.
4.18 Range Estimation Residual for the Unscented Kalman Filter for a measurement noise of 100.
4.19 Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the Extended Kalman Filter for a measurement noise of 100.
4.20 Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the Unscented Kalman Filter for a measurement noise of 100.

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

KALMAN FILTERING METHODS FOR MOVING VEHICLE TRACKING

By Nicolas Obolensky
August 2002
Chair: Dr. José Príncipe
Major Department: Electrical and Computer Engineering

Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of target tracking. This is likely due in large part to advances in digital computing that made the use of the filter practical, but also to the relative simplicity and robust nature of the filter itself. This thesis presents improvements in Kalman filtering applied to target tracking. First, we describe a time-varying extension of the Kalman filter for tracking moving objects in images. Classical methods for tracking assume either a constant-acceleration model or a random acceleration. Those assumptions often turn out to be inaccurate. The proposed Kalman filter adapts its model (i.e., the state transition matrix) at each step to better estimate the movement of the maneuvering target.
The performance of this time-varying Kalman filter is tested on a set of airborne surveillance videos. Then, we introduce an improvement to the classical extended Kalman filter, called the unscented Kalman filter. The state distribution is represented, as in the extended Kalman filter, by a Gaussian random variable, but it is now described using a minimal set of carefully chosen points that, when propagated through the true nonlinear system, capture the posterior mean and covariance accurately and help improve the tracking performance. The combination of the two methods leads to an algorithm that efficiently tracks moving targets from noisy radar measurements.

CHAPTER 1
INTRODUCTION

Each time an outfielder tracks and catches a fly ball, he intuitively solves a problem of target tracking. That problem has stymied engineers, mathematicians, and computer scientists for years. Two great mathematicians, the American Norbert Wiener and the Russian Andrey Kolmogorov, first approached the problem during World War II. Rather than catching fly balls, Kolmogorov and Wiener were trying, in the days before computers, to develop mathematical algorithms that would help to track enemy aircraft by radar and automatically guide an anti-aircraft gun to shoot them down. The research started by Kolmogorov and Wiener has developed into a thriving area of applied mathematics known as Optimum Filtering Theory. For the past two decades, because of the development of computer technology and computer vision, target tracking has evolved in many ways and has spread to a large number of areas, from traffic control to radar missile targeting. We will focus here on moving vehicle tracking with radar or ladar observations. It consists of predicting the future position of a vehicle moving on the ground, given range and angle (azimuth and elevation) data.
Related Work

As a fruitful application of state estimation, the problem of tracking a moving object from radar or ladar measurements has received a great deal of attention in the literature. Various issues, such as decoupled tracking, tracking in cluttered environments, tracking a maneuvering target, and tracking multiple targets, have been studied in detail [1,2]. However, a number of problems are still open, and reducing computational requirements while achieving a desired tracking performance remains an important issue in many practical applications. Usually a Cartesian coordinate system is well suited to describing the target dynamics, since the dynamic equations are often linear and uncoupled. Within the significant toolbox of mathematical tools that can be used for linear state estimation from noisy sensor observations, one of the most often-used tools in the target tracking field is the Kalman filter. But, because measurements of the target location are expressed as nonlinear equations in these coordinates, the tracking problem is connected with nonlinear estimation. The natural way to generalize the methods applied to linear estimation was to extend the Kalman filter to nonlinear recursive equations: this classical extension is called the extended Kalman filter. These two algorithms are the foundation of most moving vehicle tracking algorithms, but they have important flaws that need to be addressed.

Approach

The Kalman filter is the foundation of our moving vehicle tracking method, because of its suitability for on-line estimation on a digital computer and the usefulness of the state-space approach. The state-space equations are decomposed into two parts: the state equation and the measurement equation. The state equation describes the target dynamics and is chosen to be linear. The measurement equation, which connects the state vector to the radar observations, is consequently nonlinear. To solve this system of linear and nonlinear recursive equations, we introduce a combination of two filtering methods, each one improving on the classical methods in its domain. This work first presents a description of the Kalman filter. It then describes a new version of the Kalman filter that improves target tracking in the linear case. This improved version of the Kalman filter, the time-varying Kalman filter, is tested on a set of vehicle tracking surveillance videos. Finally, this work introduces a new extension of the Kalman filter to nonlinear equations, which, combined with the time-varying Kalman filter, constitutes an efficient moving target tracking method using radar measurements.

CHAPTER 2
THE KALMAN FILTER APPLIED TO MOVING VEHICLE TRACKING

Within the significant toolbox of mathematical tools that can be used for stochastic estimation from noisy sensor measurements, one of the most well-known and often-used tools is the Kalman filter [3]. The Kalman filter is essentially a set of mathematical equations that implement a predictor-corrector type estimator that is optimal in the sense that it minimizes the estimated error covariance when some presumed conditions are met. Rarely do the conditions necessary for optimality actually exist, and yet the filter apparently works well for many applications in spite of this situation.

A Description of the Kalman Filter

The Kalman filter has made a dramatic impact on linear estimation because of its suitability for on-line estimation on a digital computer and the usefulness of the state-space approach. Today, the Kalman filter is an established technique widely applied in the fields of navigation [4], guidance, aircraft [5] and missile tracking [6], re-entry of space vehicles, etc. This section describes the filter in its original formulation [3], where the measurements occur and the state is estimated at discrete points in time.
The Kalman filter addresses the general problem of trying to estimate the state x_k ∈ R^n of a discrete-time controlled process that is governed by the linear stochastic difference equation

x_{k+1} = A x_k + B u_k + w_k

with a measurement z_k ∈ R^m that is

z_k = H x_k + v_k

The random variables w_k and v_k represent the process and measurement noise, respectively. They are assumed to be independent of each other, white, and normally distributed:

p(w) ~ N(0, Q)
p(v) ~ N(0, R)

In practice, the process noise covariance Q and the measurement noise covariance R might change with each time step or measurement; here we assume they are constant. The n × n matrix A in the difference equation relates the state at the previous time step k−1 to the state at the current step k, in the absence of either a driving function or process noise. Note that in practice A might change with each time step, but here we assume it is constant. The n × l matrix B relates the optional control input u_k ∈ R^l to the state x_{k+1}. The m × n matrix H in the measurement equation relates the state to the measurement z_k. In practice, H might change with each time step or measurement, but here we assume it is constant.

The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements. As such, the equations for the Kalman filter fall into two groups: time update equations and measurement update equations. The time update equations are responsible for projecting forward (in time) the current state and error covariance estimates to obtain the a priori estimates for the next time step. The measurement update equations are responsible for the feedback, i.e., for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.
The time update equations can also be thought of as predictor equations, while the measurement update equations can be thought of as corrector equations. Indeed, the final estimation algorithm resembles a predictor-corrector algorithm for solving numerical problems. The time update projects the current state estimate ahead in time. The measurement update adjusts the projected estimate by an actual measurement at that time. The process can be described by the following figure:

Figure 2.1: The ongoing discrete Kalman filter cycle

The first task during the measurement update is to compute the Kalman gain K_k. The next step is to actually measure the process to obtain z_k, and then to generate an a posteriori state estimate by incorporating the measurement. The final step is to obtain an a posteriori error covariance estimate P_k. After each time and measurement update pair, the process is repeated with the previous a posteriori estimates used to project or predict the new a priori estimates.

Table 2.1 Discrete Kalman Filter time update equations

\hat{x}_k^- = A \hat{x}_{k-1} + B u_k
P_k^- = A P_{k-1} A^T + Q

Table 2.2 Discrete Kalman Filter measurement update equations

K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)
P_k = (I - K_k H) P_k^-

The symbol "^" highlights that the calculated value is an estimate. The superscript "−" in \hat{x}_k^- or P_k^- means that an a priori estimate is computed. H^T is the transpose of the matrix H. In the actual implementation of the filter, the measurement noise covariance R is usually measured prior to operation of the filter. Measuring the measurement error covariance R is generally practical because we need to be able to measure the process anyway (while operating the filter), so we should generally be able to take some off-line sample measurements in order to determine the variance of the measurement noise.
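The predict/correct cycle of Tables 2.1 and 2.2 can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the thesis; the function name and the assumption that A, B, H, Q, and R are constant are ours.

```python
import numpy as np

def kalman_step(x_prev, P_prev, z, A, B, u, H, Q, R):
    """One predict/correct cycle of the discrete Kalman filter."""
    # Time update (predict): project the state and covariance forward.
    x_pred = A @ x_prev + B @ u
    P_pred = A @ P_prev @ A.T + Q
    # Measurement update (correct): compute the gain and fold in z.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(x_prev.size) - K @ H) @ P_pred
    return x_new, P_new
```

Applied recursively to position measurements of a constant-velocity target (A = [[1, T], [0, 1]], H = [1 0]), the estimate converges toward the true position and velocity.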
The determination of the process noise covariance Q is generally more difficult, as we typically do not have the ability to directly observe the process we are estimating. Sometimes a relatively simple (poor) process model can produce acceptable results if one "injects" enough uncertainty into the process via the selection of Q. Certainly in this case one would hope that the process measurements are reliable. In either case, whether or not we have a rational basis for choosing the parameters, superior filter performance (statistically speaking) can often be obtained by tuning the filter parameters Q and R. The tuning is usually performed off-line, frequently with the help of another (distinct) Kalman filter, in a process generally referred to as system identification.

Figure 2.2: A complete picture of the operation of the Kalman filter

A Two-State Filter: Friedland's Model

Consider an aircraft or similar space vehicle with constant velocity perturbed by a zero-mean random acceleration [7,8]. The position of the vehicle is assumed to be measured by a track-while-scan radar sensor at uniform sampling intervals of T seconds, and all measurements are noisy. The problem is to obtain the optimum estimates of the position and velocity of the vehicle. This model assumes that each component of the vehicle position is independently measured by a radar sensor in the Cartesian coordinate system with constant accuracy, and that the observation errors have zero mean and are uncorrelated.

Dynamic Model

As the model assumes that each position coordinate is measured independently, each coordinate is uncoupled from the other two and hence can be treated separately. For each coordinate, the vehicle dynamics are assumed to be described by

x_{k+1} = x_k + \dot{x}_k T + (1/2) a_k T^2
\dot{x}_{k+1} = \dot{x}_k + a_k T

where x_k is the position at scan k, \dot{x}_k the velocity at scan k, a_k the acceleration acting on the vehicle at scan k, and T the sampling interval.
In this model, the acceleration is assumed to be a random constant between successive observations, with zero mean and uncorrelated with its values at other intervals, i.e.,

E[a_k] = 0
E[a_k^2] = σ_a^2 = const
E[a_n a_k] = 0 if n ≠ k

In vector-matrix form, the vehicle dynamics may be written as

X_{k+1} = A X_k + B a_k

with

X_k = [ x_k ; \dot{x}_k ],   A = [ 1  T ; 0  1 ],   B = [ T^2/2 ; T ].

X_k is the vehicle state vector consisting of position and velocity components, A is the transition matrix, and B is the input distribution matrix.

Measurement Model

The position of the vehicle is assumed to be measured by a radar at uniform intervals of T seconds, and each observation is noisy. The measurement equation is given by

x_m(k) = x_k + ν_k

where x_k is the true position at scan k, and ν_k is random noise corrupting the measurement at scan k. The statistical properties of the noise are assumed to be

E[ν_k] = 0
E[ν_k^2] = σ_ν^2 = const
E[ν_n ν_k] = 0 if n ≠ k

In terms of the state vector X_k, the measurement equation can be written as

x_m(k) = H X_k + ν_k   with   H = [1  0].

A Three-State Filter: Ramachandra's Model

Ramachandra's model [9] is a one-dimensional model for estimating the optimum steady-state position, velocity and acceleration of an aircraft moving with a constant acceleration perturbed by a zero-mean plant noise, which accounts for maneuvers and/or other random factors. The position coordinate of the moving vehicle is assumed to be measured by radar at uniform intervals of T seconds through random noise.

Dynamic Model

Each position coordinate of the moving vehicle is assumed to be described by the following equations of motion:

x_{k+1} = x_k + \dot{x}_k T + \ddot{x}_k T^2/2
\dot{x}_{k+1} = \dot{x}_k + \ddot{x}_k T
\ddot{x}_{k+1} = \ddot{x}_k + a_k

where x_k is the vehicle position at scan k, \dot{x}_k the vehicle velocity at scan k, \ddot{x}_k the vehicle acceleration at scan k, T the sampling interval, and a_k the plant noise that perturbs the acceleration and accounts for both maneuvers and other modeling errors.
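As a quick illustration of Friedland's two-state model above (ours, not the thesis's), the dynamics can be simulated directly from the matrices A and B; the values chosen for T and σ_a are arbitrary.

```python
import numpy as np

T = 0.5                            # sampling interval (arbitrary choice)
A = np.array([[1.0, T],
              [0.0, 1.0]])         # transition matrix
B = np.array([T**2 / 2.0, T])      # input distribution matrix

def step(X, a):
    """Propagate [position, velocity] one scan under acceleration a."""
    return A @ X + B * a

# Simulate a trajectory driven by zero-mean random accelerations.
rng = np.random.default_rng(0)
sigma_a = 2.0                      # std of the random acceleration (arbitrary)
X = np.array([0.0, 1.0])           # initial position and velocity
track = [X.copy()]
for _ in range(100):
    X = step(X, rng.normal(0.0, sigma_a))
    track.append(X.copy())
```

Because E[a_k] = 0, the simulated velocity only wanders around its initial value, which previews the limitation of this model discussed under the drawbacks of the classical methods.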
a_k is assumed to be zero mean, of constant variance σ_a^2, and uncorrelated with its values at other time intervals. In vector-matrix form, the dynamic equation can be written as

X_{k+1} = A X_k + B a_k

with

X_k = [ x_k ; \dot{x}_k ; \ddot{x}_k ],   A = [ 1  T  T^2/2 ; 0  1  T ; 0  0  1 ],   B = [ 0 ; 0 ; 1 ].

Measurement Equation

The measurement equation can be written as

x_m(k) = H X_k + ν_k

where x_m(k) is the measured position at scan k, ν_k is the random noise corrupting the measurement at scan k, and H = [1  0  0]. The statistical properties of the measurement noise are assumed to be the same as in the previous models.

Drawbacks of the Classical Methods

The problem with these last two models is that they make assumptions about the form of the acceleration: random acceleration with zero mean in Friedland's model and constant acceleration in Ramachandra's model. In some cases, these models are inappropriate. With Friedland's model, the Kalman filter will minimize the random acceleration, considered as noise, and will therefore try to make the velocity constant. With Ramachandra's model, as the acceleration is considered to be constant perturbed by random noise, the Kalman filter will make the velocity of the vehicle increase toward infinity. These important drawbacks lead us to seek a better model in order to track moving vehicles more accurately.

CHAPTER 3
A TIME-VARYING KALMAN FILTER

For target tracking in video images, two main assumptions are made in the Kalman filter formulation: a constant state model with the acceleration as a state, often using position and rate measurements, as in Ramachandra et al. [10]; or a constant state model with the acceleration considered as a zero-mean random input, as Friedland [7] described. In the first model, the acceleration is considered to be constant. In that case, the vehicle velocity remains constant for a zero acceleration or grows toward infinity in the direction of the acceleration. The alternative to this unrealistic model is a random acceleration.
But the Kalman filter, when the acceleration is an input noise, will eventually cancel out a zero-mean acceleration as an input to the state equations. With that method also, the vehicle velocity will tend to be constant. In any practical model to track moving objects in images, the velocity, as well as the acceleration, should be able to vary. Still, the acceleration variations of maneuvering targets may be considered smooth. It is therefore conceivable to predict the current acceleration given the previous accelerations. In that case, the accelerations are no longer an input to the system but become states, as suggested by Singer [11]. At the same time, however, it is possible to estimate the acceleration at each step given the three previous (noisy) position measurements. This acceleration estimate will allow us to adapt the state transition matrix during operation and therefore obtain a time-varying model. Consequently, the Kalman filter model becomes time varying and will be adapted to the maneuvering target's movement.

Time-Varying Kalman Filter

The maneuvering target's movement can be described by the dynamic equations

x_k+1 = A_k x_k + B u_k
y_k = H x_k + D u_k

where x_k is the state vector, u_k is the input vector, and y_k is the output. x_k is defined as

x_k = [ p_k ; v_k ; a_k ]

where p_k is the position of the moving vehicle, v_k is its velocity, and

a_k = [ a_k ; a_{k-1} ; ⋯ ; a_{k-N} ],  N ≥ 2,

with a_i being the acceleration at step i. In our formulation there is no input to the system (u_k = 0) because the accelerations are considered as states.

A Time-Varying State Model

We have decided to work on the accelerations because they are the only possible innovation in the moving vehicle model. We also assume that the acceleration varies with time (sample time T) because, as we already said, a constant acceleration would make the velocity either constant or eventually infinite. The second major assumption is that the current acceleration depends smoothly on the previous accelerations. If the number of previous accelerations is not too large (the recent past), we furthermore assume that the acceleration can be modeled as a weighted sum of previous accelerations. At each step, we perform a linear fitting of the acceleration data. Since there is noise in our estimation of the acceleration, we consider that the acceleration vector is corrupted by additive Gaussian noise n_k, with zero mean and a covariance matrix Q_k to be estimated in the design.

p_{k+1} = p_k + T v_k
v_{k+1} = v_k + T a_k
a_{k+1} = w_{k+1}[1,1] a_k + w_{k+1}[1,2] a_{k-1} + ⋯ + w_{k+1}[1,N] a_{k-N} + n_k[1]
a_k = w_{k+1}[2,1] a_{k-1} + w_{k+1}[2,2] a_{k-2} + ⋯ + w_{k+1}[2,N-1] a_{k-N} + n_k[2]
⋮
a_{k-N+1} = w_{k+1}[N+1,1] a_{k-N} + n_k[N+1]

where w_{k+1}[i,j] is the jth element of the weight vector corresponding to the acceleration a_{k+1-i}. In vector-matrix form, these dynamic equations can be written as

x_{k+1} = A_k x_k + n_k

where

A_k = [ 1  T  0             0             ⋯  0
        0  1  T             0             ⋯  0
        0  0  w_{k+1}[1,1]  w_{k+1}[1,2]  ⋯  w_{k+1}[1,N]
        0  0  0             w_{k+1}[2,1]  ⋯  w_{k+1}[2,N-1]
        ⋮  ⋮                ⋱             ⋮
        0  0  0             ⋯          0  w_{k+1}[N+1,1] ]

and n_k = [ 0 ; 0 ; n_k[1] ; n_k[2] ; ⋯ ; n_k[N+1] ].

The measurement equation can be written as

y_k = H x_k + υ_k

where H is the 2 × (N+3) matrix that selects p_k and a_{k-2} from the state vector,

H = [ 1  0  0  0  0  0  ⋯  0
      0  0  0  0  1  0  ⋯  0 ]

and υ_k is a zero-mean noise with a constant covariance matrix R. The measurements are p_k, the vehicle position at time step k, and a_{k-2}, the acceleration at time step k−2, given by

a_{k-2} = ( p_k − 2 p_{k-1} + p_{k-2} ) / T^2

Notice that if there is a measurement error, the error on a_i is correlated with the error on a_{i-1}, and therefore the noise is not exactly white. But this only changes the shape of the noise covariance matrices from diagonal matrices to symmetric Toeplitz matrices whose diagonals are all zero except the main diagonal and the first off-diagonals. Only the correlation between the a_{k-N} and a_{k-N-1} errors cannot be represented in the model.
Model Update

In practice the acceleration variations are smooth, so the assumed weighted linear dependence will not change drastically from step to step. Therefore, we use an on-line method to update the weights: each weight vector w_k[1], w_k[2], ..., w_k[N+1] is updated using the LMS algorithm,

w_k[i] = w_{k-1}[i] + η (â_{k-i+1} - w_{k-1}[i] a_k[i]) a_k[i]        (6)

where a_k[i] = [a_{k-i}, ..., a_{k-N-1}]^T is the input vector to the Kalman filter at step k-1 and the desired response â_{k-i+1} is the acceleration at the output of the Kalman filter at the same step. So the error

e_{k-1}[i] = â_{k-i+1} - w_{k-1}[i] a_k[i]

is the difference between the optimal acceleration computed by the Kalman filter and the acceleration estimated by the LMS algorithm. The model update can also be written as

A_k = A_{k-1} + η E_{k-1}

where E_{k-1} is zero everywhere except in the acceleration rows, the i-th of which holds e_{k-1}[i] [a_{k-i}, ..., a_{k-N-1}] in the corresponding columns.

The step size η has to be chosen very carefully because it can make the whole algorithm fail: too large a step size would make the model diverge, whereas too small a step size would prevent the model from adapting. The approach for choosing η properly is therefore to make it dependent on the system dynamics. The faster the Kalman filter predicts and corrects the states (and consequently the accelerations), the faster the LMS algorithm has to adapt. The dynamics of the Kalman filter are governed by the matrix (A_{k-1} - K_{k-1} H), where K_{k-1} is the Kalman gain at time step k-1, and the norm of this matrix determines the speed of the Kalman filter. It therefore seems natural to make the LMS step size proportional to it:

η_k = α_k ||A_{k-1} - K_{k-1} H||

But the proportionality coefficient now has to be chosen wisely.
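The LMS update of eq. (6) and the dynamics-dependent step size can be sketched as follows (an illustration, with hypothetical variable names and the matrix 2-norm assumed for ||·||):

```python
import numpy as np

def lms_update(w, a_desired, a_in, eta):
    """One LMS step mirroring eq. (6): a_desired is the acceleration at the
    output of the Kalman filter, a_in the stored acceleration vector. A
    sketch; the thesis gives no code."""
    e = a_desired - w @ a_in               # error e_{k-1}[i]
    return w + eta * e * a_in, e

def adaptive_step(alpha, A_prev, K_prev, H):
    """eta_k = alpha_k * ||A_{k-1} - K_{k-1} H||, with the 2-norm assumed."""
    return alpha * np.linalg.norm(A_prev - K_prev @ H, 2)
```

When the weight vector already predicts the Kalman-filter acceleration perfectly, the error is zero and the weights are left unchanged.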
To ensure the stability of the LMS algorithm, the proportionality coefficient α_k is chosen so that the step size η_k is always less than 2 / ||a_k||^2, where ||a_k||^2 is the instantaneous power of the acceleration vector a_k at step k. For example, the condition is satisfied if α_k is chosen equal to

α_k = 2 / ( ||a_{k-1}||^2 ||A_{k-1} - K_{k-1} H||^3 + p )

where ||a_{k-1}||^2 is the instantaneous power of the acceleration vector a_{k-1} at step k-1 and p is the minimum reachable power. Indeed, we have ||a_k||^2 <= ||a_{k-1}||^2 ||A_{k-1} - K_{k-1} H||^2 and η_k = α_k ||A_{k-1} - K_{k-1} H||.

Results

The performance of this adaptive Kalman filter has been tested on a set of surveillance videos. These videos are very useful because the detection part of tracking can be done manually. The advantage of manual detection is that it has theoretically no error, so the output of this manual detection is considered as the ground truth. The detection noise statistics can also be controlled, allowing a better analysis of the results. The algorithm tracked only one vehicle on the ground, i.e. there are only two tracked coordinates, x and y, the altitude not being considered. Three representative videos, each 320 pixels by 240 pixels, have been chosen to test the performance of the algorithm. The first video shows a vehicle crossing the image relatively slowly, with little camera vibration. The second shows a moving vehicle followed by a shaky camera. The third shows a small vehicle crossing the image followed by a very shaky camera. Figures 3.1 to 3.5 show 10 different frames from the first video, at five different stages.

Figure 3.1 Example of video images. First video, the vehicle is appearing.

Figure 3.2 Example of video images. First video, the vehicle is slowing down.

Figure 3.3 Example of video images. First video, the vehicle is stopping.

Figure 3.4 Example of video images. First video, the vehicle is speeding up.

Figure 3.5 Example of video images.
First video, the vehicle is disappearing.

Figures 3.6 to 3.10 show 10 different frames, at five different stages, from the second video.

Figure 3.6 Example of videos. Second video, the vehicle is entering.

Figure 3.7 Example of videos. Second video, the vehicle is stopping.

Figure 3.8 Example of videos. Second video, the vehicle is stopped and the camera is shaking.

Figure 3.9 Example of videos. Second video, the vehicle is moving slowly.

Figure 3.10 Example of videos. Second video, the vehicle goes back.

Figures 3.11 to 3.15 show the movement of the vehicle in the third video.

Figure 3.11 Example of videos. Third video, the moving vehicle is on the road.

Figure 3.12 Example of videos. Third video, the vehicle is approaching a crossroad.

Figure 3.13 Example of videos. Third video, the vehicle is crossing the crossroad.

Figure 3.14 Example of videos. Third video, the vehicle is speeding up.

Figure 3.15 Example of videos. Third video, the vehicle is disappearing.

To evaluate the performance of the adaptive Kalman filter, a noise with a constant variance has been added to the ground truth. The performance criteria are the errors on the position p_k and acceleration a_{k-2} measurements (we have taken N = 2 in the a_k vector). The proposed algorithm is also compared to a classical tracking method: the constant acceleration model (Ramachandra's model [5]). The errors have been computed for different variances on the whole set of videos. Because in both cases the error mean quickly decays to zero, the criterion used to compare the two methods is the standard deviation of the position and acceleration errors. In all the following figures, four plots are shown: two for the x coordinate on the left-hand side and two for the y coordinate on the right-hand side.
For each coordinate, the first row shows the position error standard deviation in pixels versus the adaptation steps: the solid line is the constant acceleration model error standard deviation, the dash-dotted line the noise standard deviation, and the dashed line the proposed algorithm's position error standard deviation. The second row shows the standard deviation of the pixel error between the real acceleration a_{k-2} and the acceleration computed by the proposed algorithm (dashed line) and by the constant acceleration model (solid line). The next three figures show the performance of the algorithms on the three videos when an additive Gaussian noise with a standard deviation of 5 is added. We first observe that the position errors for the proposed algorithm decrease for both the x and y coordinates. After 250 steps, the standard deviation of the filter output error goes beneath that of the measured position error. As expected, the measured position error standard deviation is around 5 pixels, whereas after 1500 steps the filter output error standard deviation goes below 3 pixels. The classical method's position error, however, does not decrease and remains high the whole time. A high initial error also appears in the classical algorithm and is hardly compensated later, whereas for the time-varying Kalman filter the standard deviation quickly decays.

Figure 3.16 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=5, for the first video.

The acceleration error standard deviation for x and y decreases quickly to around 3 pixels per second^2, despite large errors at the beginning, whereas the classical model's acceleration error standard deviation decreases only to a higher value of 5 pixels per second^2. For the second video, the position error standard deviation goes below the noise standard deviation only after 150 steps.
But the error standard deviation for the classical algorithm remains around 15 pixels, three times as high as the noise standard deviation. For the acceleration error, the standard deviation also decreases quickly, to around 2 pixels per second^2 for the time-varying Kalman filter and to a higher value of 6 pixels per second^2 for the classical method. The third plot (Figure 3.18) also highlights the performance of the proposed algorithm compared to the classical method.

Figure 3.17 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=5, for the second video.

The time-varying Kalman filter error standard deviations are always lower than those of the constant acceleration model and even much lower than the measurement error standard deviations. The three following plots (Figures 3.19, 3.20 and 3.21) show the performance of the algorithm with a higher noise standard deviation of σ=10 for the three representative videos. The behavior of the proposed algorithm remains the same even with this higher standard deviation, although a higher error can be noticed during the first hundred adaptation steps. The classical method's acceleration error standard deviation is at least three times as high as that of the proposed algorithm. The change of the noise power does not alter the performance of the time-varying Kalman filter.

Figure 3.18 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=5, for the third video.

The standard deviations still decay quickly, despite large errors in the beginning, and go below the noise standard deviation. The acceleration error standard deviation is still negligible for the time-varying model (ten times as low as the theoretical noisy measurement error standard deviation), whereas for the classical model it remains high (between 8 and 14 pixels per second^2).
We can observe in all the previous plots that the constant acceleration model gives large errors. Indeed, the large errors of the first adaptation steps are not compensated, and the bad acceleration estimates propagate through the filter. On the contrary, the time-varying model compensates for those large errors and adapts to the vehicle movement. Moreover, the acceleration error quickly becomes negligible, meaning that the algorithm adapts quickly to the real acceleration and therefore that the choice of the step size, which controls the adaptation speed, is well suited.

Figure 3.19 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=10, for the first video.

Figure 3.20 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=10, for the second video.

Figure 3.21 Position and Acceleration Error Standard Deviation for an added Gaussian noise with a standard deviation of σ=10, for the third video.

The performance of the algorithm can also be seen on the actual videos. Let us compare the actual trajectory, the noisy measurements and the trajectory estimated with the proposed algorithm. The trajectory estimated by the constant acceleration model is not represented because its error variance is too large and would add a salt-and-pepper noise to the figures. The actual trajectory is drawn in pink, the noisy measurements in green and the estimated trajectory in black. The actual trajectory does not exactly follow the road: despite the movement of the camera, the video frames are not re-centered. The next images show the different trajectories for noise standard deviations of 5 and 10 on the first video. We can see in both images that the algorithm improves the tracking performance: there is apparently no bias in the estimated trajectory and the error variance is rather small.
Figure 3.22 Actual trajectory, noisy measurements and estimated trajectory on the first video with a noise standard deviation of 5.

The same observations can be made on the second video. Figure 3.24 represents the actual, noisy and estimated trajectories in pink, green and black respectively. Finally, let us evaluate the influence of the chosen step size η. As in the previous tests, the state vector contains three accelerations a_k, a_{k-1} and a_{k-2} for each coordinate.

Figure 3.23 Actual trajectory, noisy measurements and estimated trajectory on the first video with a noise standard deviation of 10.

Figure 3.24 Actual trajectory, noisy measurements and estimated trajectory on the second video with a noise standard deviation of 5.

Therefore, six LMS algorithms run in parallel. The next plot shows the evolution of the six step sizes with respect to the adaptation time for the second video, where a noise with a standard deviation of 5 has been added.

Figure 3.25 LMS algorithm step size evolution with respect to time for the second video.

All the step sizes exhibit two states: a transient state and a steady state. The transient state is rather short in all cases and consists of a quick exponential increase. In the steady state, the step sizes are almost constant, which means that appropriate step sizes have been found. For the third video, with the same noise level, the step sizes behave as in Figure 3.26: they again reach a constant value after a short transient, which proves the usefulness of an adapted value of the step size. Had constant step sizes been chosen at the outset, the algorithm would have proven suboptimal.

Figure 3.26 LMS algorithm step size evolution with respect to time for the third video.

Indeed, in order for the algorithm not to diverge, the best choice for a constant step size would be the minimum of all the step sizes.
But it is obvious that, for the second video, the minimum value is not the optimal value in the steady state, and that, for the third video, it is not the optimal value in the transient state either. It is therefore important to use an adaptive step size that fits the dynamics of the Kalman filter. Consequently, the time-varying Kalman filter gives convincing results and copes with the problems of the classically used methods, such as the constant acceleration model. Indeed, the constant acceleration model is too sensitive to the initial acceleration, while the time-varying Kalman filter, even if the LMS tends to make the acceleration constant, is far more robust to acceleration changes, because it quickly fits the new constant acceleration. Besides, the algorithm is computationally efficient, is designed to be implemented on-line, and is the combination of two classical algorithms, the LMS algorithm and the Kalman filter, which makes it very easy to implement in hardware. However, the choice of the measurement model is not really suited to real data: moving vehicles are often tracked by radar or ladar, which collect range and angle data rather than positions on the ground. Therefore, the measurement model needs to take into account the conversion from position data to range and angle data, which is highly nonlinear.

CHAPTER 4
THE UNSCENTED TRANSFORMATION ON NONLINEAR MEASUREMENTS

In the previous chapter, we described an efficient method to track a vehicle in a surveillance video. The measurements, i.e. the outputs of the state equation model, are a position and, in certain cases, the acceleration. In practice, however, the useful data are often given by radar or ladar: the idea is to scan large areas and look for special shapes that are characteristic of certain types of targets.
Besides, real 3D radar or ladar measurements capture much more than a 2D camera, and no portable system has succeeded in hiding its own volume from them. Radar and laser radar can also see through a lot of vegetation, so foliage is no guarantee of concealment for a target. The 3D data generally consist of the range r and two angles, the azimuth angle θ and the elevation angle φ. Therefore, if we want to keep the state equation model, with positions, velocities and accelerations, we need to introduce a nonlinear equation relating them to the range and the angles:

r = sqrt(x^2 + y^2 + z^2)
θ = arctan(y / x)
φ = arctan(z / sqrt(x^2 + y^2))

where r, θ, φ are respectively the range, the azimuth and the elevation, and x, y, z are the target positions in the Cartesian reference frame.

The problem with the introduction of the nonlinearity is that the classical Kalman filter no longer works. A new method trying to cope with that issue therefore appeared at the end of the 1980s: the so-called extended Kalman filter (EKF).

The Extended Kalman Filter Applied to Moving Vehicle Tracking

A Kalman filter that linearizes about the current mean and covariance is referred to as an extended Kalman filter. In something akin to a Taylor series expansion, we can linearize the estimation around the current estimate, using the partial derivatives of the process and measurement functions, to compute estimates even in the face of nonlinear relationships. Let us assume that our process again has a state vector x_k, but that the process is now governed by the nonlinear stochastic difference equation

x_k = f(x_{k-1}, u_k, w_{k-1})

with a measurement equation

z_k = h(x_k, v_k)

where the random variables w_{k-1} and v_k again represent the process and measurement noise. In this case the nonlinear function f in the difference equation relates the state at the previous time step k-1 to the state at the current time step k.
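The range-angle conversion above can be sketched as follows (a minimal illustration; atan2 is used instead of a plain arctan so that the azimuth lands in the correct quadrant):

```python
import math

def cartesian_to_radar(x, y, z):
    """Range, azimuth and elevation from a Cartesian position, following the
    equations above (a sketch, not the thesis code)."""
    r = math.sqrt(x * x + y * y + z * z)            # range
    azimuth = math.atan2(y, x)                      # theta
    elevation = math.atan2(z, math.hypot(x, y))     # phi
    return r, azimuth, elevation
```

For a target at (3, 4, 0) this gives a range of 5 and zero elevation.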
It includes as parameters any driving function u_k and the zero-mean process noise w_{k-1}. The nonlinear function h in the measurement equation relates the state x_k to the measurement z_k. In practice, of course, one does not know the individual values of the noise w_k and v_k at each time step. However, one can approximate the state and measurement vectors without them as

x̃_k = f(x̂_{k-1}, u_k, 0)

and

z̃_k = h(x̃_k, 0)

where x̂_{k-1} is some a posteriori estimate of the state (from the previous time step). It is important to note that a fundamental flaw of the EKF is that the distributions (or densities in the continuous case) of the various random variables are no longer normal after undergoing their respective nonlinear transformations. The EKF is simply an ad hoc state estimator that only approximates the optimality of Bayes' rule by linearization.

A Description of the Extended Kalman Filter

To estimate a process with nonlinear difference and measurement relationships, we begin by writing new governing equations that linearize about the two previous estimates:

x_k ≈ x̃_k + A (x_{k-1} - x̂_{k-1}) + W w_{k-1}
z_k ≈ z̃_k + H (x_k - x̃_k) + V v_k

where x_k and z_k are the actual state and measurement vectors, x̃_k and z̃_k are the approximate state and measurement vectors, x̂_k is an a posteriori estimate of the state at step k, and the random variables w_k and v_k represent the process and measurement noise. Here A is the Jacobian matrix of partial derivatives of f with respect to x,

A_{i,j} = ∂f_i/∂x_j (x̂_{k-1}, u_k, 0)

W is the Jacobian matrix of partial derivatives of f with respect to w,

W_{i,j} = ∂f_i/∂w_j (x̂_{k-1}, u_k, 0)

H is the Jacobian matrix of partial derivatives of h with respect to x,

H_{i,j} = ∂h_i/∂x_j (x̃_k, 0)

and V is the Jacobian matrix of partial derivatives of h with respect to v,

V_{i,j} = ∂h_i/∂v_j (x̃_k, 0)

Note that for simplicity of notation we do not use the time step subscript k with the Jacobians A, W, H and V, even though they are in
fact different at each time step. The complete set of EKF equations is shown below in Tables 4.1 and 4.2.

Table 4.1: Extended Kalman filter time update equations

x̂_k^- = f(x̂_{k-1}, u_k, 0)
P_k^- = A_k P_{k-1} A_k^T + W_k Q_{k-1} W_k^T

Table 4.2: Extended Kalman filter measurement update equations

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + V_k R_k V_k^T)^{-1}
x̂_k = x̂_k^- + K_k (z_k - H_k x̂_k^-)
P_k = (I - K_k H_k) P_k^-

The basic operation of the EKF is the same as that of the linear discrete Kalman filter shown in Figure 2.1. Figure 4.1 below offers a complete picture of the operation of the EKF, combining the high-level diagram of Figure 2.1 with the equations from Tables 4.1 and 4.2.

The Extended Kalman Filter and its Flaws

Given the noisy observation z_k, a recursive estimate x̂_k can be expressed in the form

x̂_k = (optimal prediction of x_k) + K_k [z_k - (optimal prediction of z_k)]

Figure 4.1: A complete picture of the operation of the extended Kalman filter.

This recursion provides the optimal MMSE estimate for x_k provided the prior estimate x̂_k and current observation z_k are Gaussian; we need not assume linearity of the model. The optimal terms in this recursion are given by

x̂_k^- = E[f(x_{k-1}, u_k, w_{k-1})]
K_k = P_{x_k y_k} P_{z̃_k z̃_k}^{-1}
ẑ_k^- = E[h(x_k^-, v_k)]

where the optimal prediction x̂_k^- corresponds to the expectation of a nonlinear function of the random variables x_{k-1} and w_{k-1}; the same interpretation holds for the optimal prediction ẑ_k^-. The optimal gain term K_k is expressed as a function of posterior covariance matrices (with z̃_k = z_k - ẑ_k^-). Note that these terms also require taking expectations of a nonlinear function of the prior state estimates. The Kalman filter calculates these quantities exactly in the linear case, and can be viewed as an efficient method for analytically propagating a Gaussian random variable through linear system dynamics.
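One EKF cycle following Tables 4.1 and 4.2 can be sketched generically as follows (the Jacobians A, W, H, V are passed in already evaluated at the current estimate; names are illustrative, not from the thesis):

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, A, W, H, V, Q, R):
    """One extended Kalman filter cycle (a sketch of Tables 4.1 and 4.2)."""
    # Time update (Table 4.1)
    x_pred = f(x_hat, u)                           # x_k^- = f(x_{k-1}, u_k, 0)
    P_pred = A @ P @ A.T + W @ Q @ W.T             # P_k^-
    # Measurement update (Table 4.2)
    S = H @ P_pred @ H.T + V @ R @ V.T             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain K_k
    x_new = x_pred + K @ (z - h(x_pred))           # correct with the innovation
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred  # P_k
    return x_new, P_new
```

With identity process and measurement functions this reduces to the linear Kalman filter of Chapter 2, which is a useful sanity check.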
For nonlinear models, however, the extended Kalman filter approximates the optimal terms as

x̂_k^- ≈ f(x̂_{k-1}, u_k, 0)
K_k ≈ P̂_{x_k y_k} P̂_{z̃_k z̃_k}^{-1}
ẑ_k^- ≈ h(x̂_k^-, 0)

where the predictions are approximated simply as functions of the prior mean values of the estimates (no expectations are taken). The covariances are determined by linearizing the dynamic equations and then computing the posterior covariance matrices analytically for the linear system. In other words, in the extended Kalman filter the state distribution is approximated by a Gaussian random variable, which is then propagated analytically through a first-order linearization of the nonlinear system. As such, the extended Kalman filter can be viewed as providing "first-order" approximations to the optimal terms. These approximations, however, can introduce large errors in the true posterior mean and covariance of the transformed (Gaussian) random variable, which may lead to suboptimal performance and sometimes to divergence of the filter. Therefore, to improve the performance of the tracking filter, we need to address the problem of propagating distribution estimates through nonlinear equations.

The Unscented Kalman Filter

The unscented Kalman filter addresses the approximation issues of the extended Kalman filter [12]. Here the state distribution is again represented by a Gaussian random variable, but it is now described using a minimal set of carefully chosen points. These points capture the true mean and covariance of the Gaussian random variable and, when propagated through the true nonlinear system, capture the posterior mean and covariance accurately to the second order for any nonlinearity. The unscented Kalman filter is based on a mathematical transformation: the unscented transformation.

The Unscented Transformation

This transformation is used to calculate the statistics of a random variable that undergoes a nonlinear transformation.
Let us consider a random variable x of dimension L propagating through a nonlinear function y = f(x), and assume that x has mean x̄ and covariance P_x. To compute the statistics of y, a matrix χ of 2L+1 sigma points χ_i is formed according to

χ_0 = x̄
χ_i = x̄ + (sqrt((L+λ) P_x))_i,   i = 1, ..., L
χ_i = x̄ - (sqrt((L+λ) P_x))_{i-L},   i = L+1, ..., 2L

where λ = α^2 (L + κ) - L is a scaling parameter. The constant α determines the spread of the sigma points around x̄ and is usually set to a small positive value (typically 1e-4 ≤ α ≤ 1), and the constant κ is usually set to 0 or 3-L. (sqrt((L+λ) P_x))_i is the i-th column of the matrix square root (e.g. the lower-triangular Cholesky factor). These sigma points are propagated through the nonlinear function,

y_i = f(χ_i),   i = 0, ..., 2L

and the mean and covariance of y are estimated using a weighted sample mean and covariance of the posterior sigma points:

ȳ ≈ Σ_{i=0}^{2L} w_i^(m) y_i
P_y ≈ Σ_{i=0}^{2L} w_i^(c) (y_i - ȳ)(y_i - ȳ)^T

where the weights w_i are defined as

w_0^(m) = λ / (L + λ)
w_0^(c) = λ / (L + λ) + (1 - α^2 + β)
w_i^(m) = w_i^(c) = 1 / (2(L + λ)),   i = 1, ..., 2L

and β is used to incorporate prior knowledge of the distribution of x (for Gaussian distributions, β = 2). The superscripts (m) and (c) indicate that the weights correspond to the computation of the mean and the covariance respectively. A simple example is shown in Figure 4.2 for a 2-dimensional system: the left plot shows the true mean and covariance propagation using Monte-Carlo sampling, the center plot shows the results of the unscented transformation, and the right plot shows the results of a first-order linearization approach, as used in the extended Kalman filter.
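The sigma-point construction and weighted recombination above can be sketched compactly (the parameter defaults are illustrative, not taken from the thesis):

```python
import numpy as np

def unscented_transform(f, x_mean, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (x_mean, P) through y = f(x) with the 2L+1 sigma
    points and weights defined above (a sketch, not the thesis code)."""
    L = x_mean.size
    lam = alpha ** 2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)       # lower-triangular square root
    sigmas = np.vstack([x_mean,
                        x_mean + S.T,           # rows of S.T are columns of S
                        x_mean - S.T])
    wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    wc = wm.copy()
    wm[0] = lam / (L + lam)
    wc[0] = lam / (L + lam) + (1.0 - alpha ** 2 + beta)
    ys = np.array([f(s) for s in sigmas])       # push each point through f
    y_mean = wm @ ys
    d = ys - y_mean
    P_y = (wc[:, None] * d).T @ d
    return y_mean, P_y
```

For a linear f the transform is exact: an identity map returns the original mean and covariance, which illustrates why the estimates in the center plot of Figure 4.2 track the Monte-Carlo result so closely.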
The Application to the Kalman Filter

The unscented Kalman filter is a straightforward application of the unscented transformation to the recursive equation [13]

x̂_k = (optimal prediction of x_k) + K_k [z_k - (optimal prediction of z_k)]

The sigma-point selection scheme of the unscented transformation is applied to the state vector to compute the corresponding sigma matrix χ_k.

Figure 4.2: The unscented transformation estimates of the mean and covariance compared to the actual mean and covariance, and to the "first-order" estimates. Source: [13].

The unscented Kalman filter equations are given in Tables 4.3, 4.4 and 4.5. One can notice that no explicit computation of Jacobians or Hessians is necessary to implement this algorithm. Furthermore, the overall number of computations is of the same order, O(L^3), as for the extended Kalman filter. The most costly operation is forming the sample prior covariance matrix P_k^-. A number of variations are possible for numerical reasons. For example, the matrix square root can be implemented directly using the Cholesky factorization, of order L^3/6. However, the covariance matrices are expressed recursively, and thus the covariance matrix can be computed in only O(M L^2) operations (M being the size of z_k) by performing a recursive update of the Cholesky factor.
Table 4.3: Computation of the sigma points

χ_{k-1} = [ x̂_{k-1}   x̂_{k-1} + γ sqrt(P_{k-1})   x̂_{k-1} - γ sqrt(P_{k-1}) ],   γ = sqrt(L + λ)

Table 4.4: Unscented Kalman filter time update equations

χ*_{k|k-1} = f(χ_{k-1}, u_k)
x̂_k^- = Σ_{i=0}^{2L} w_i^(m) χ*_{i,k|k-1}
P_k^- = Σ_{i=0}^{2L} w_i^(c) (χ*_{i,k|k-1} - x̂_k^-)(χ*_{i,k|k-1} - x̂_k^-)^T
χ_{k|k-1} = [ x̂_k^-   x̂_k^- + γ sqrt(P_k^-)   x̂_k^- - γ sqrt(P_k^-) ]
Z_{k|k-1} = h(χ_{k|k-1})
ẑ_k^- = Σ_{i=0}^{2L} w_i^(m) Z_{i,k|k-1}

Table 4.5: Unscented Kalman filter measurement update equations

P_{z̃_k z̃_k} = Σ_{i=0}^{2L} w_i^(c) (Z_{i,k|k-1} - ẑ_k^-)(Z_{i,k|k-1} - ẑ_k^-)^T
P_{x_k z_k} = Σ_{i=0}^{2L} w_i^(c) (χ_{i,k|k-1} - x̂_k^-)(Z_{i,k|k-1} - ẑ_k^-)^T
K_k = P_{x_k z_k} P_{z̃_k z̃_k}^{-1}
x̂_k = x̂_k^- + K_k (z_k - ẑ_k^-)
P_k = P_k^- - K_k P_{z̃_k z̃_k} K_k^T

The weights w_i^(m) and w_i^(c) are defined as in the unscented transformation, the χ*_{k|k-1} points are the a priori estimates of the sigma points, and z̃_k = z_k - ẑ_k^-.

Results

The performance of the unscented Kalman filter applied to moving vehicle tracking has been tested on a set of simulated radar data. We have chosen to work with 2-dimensional radar data only (range and bearing), but the experiment can easily be generalized. The measurement equation becomes

[ r_k ]   [ sqrt(x_k^2 + y_k^2) ]
[ θ_k ] = [ arctan(y_k / x_k)   ] + v_k

where v ~ N(0, R) is an additive white Gaussian noise, r_k is the range at time step k, θ_k is the bearing at time step k, x_k is referred to as the cross-axis position and y_k as the thrust-axis position. The state equation is the same as for the time-varying Kalman filter. The criteria used to analyze the performance of the unscented Kalman filter are the estimated trajectory of the moving vehicle, the range estimation residual with respect to the number of measurements, and the East-West and North-South positions with respect to time. The performance of the unscented Kalman filter is compared with that of the extended Kalman filter.
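The measurement update of Table 4.5 can be sketched as follows, given precomputed sigma points and weights (a simplification: adding R assumes purely additive measurement noise, whereas the thesis formulation carries the noise through the sigma points; names are illustrative):

```python
import numpy as np

def ukf_measurement_update(x_pred, P_pred, chi, Z, z, wm, wc, R):
    """Table 4.5 sketch: chi holds the predicted sigma points (one per row),
    Z = h(chi) their measurement images, wm/wc the unscented weights."""
    z_pred = wm @ Z                               # predicted measurement
    dZ = Z - z_pred
    dX = chi - x_pred
    P_zz = (wc[:, None] * dZ).T @ dZ + R          # innovation covariance
    P_xz = (wc[:, None] * dX).T @ dZ              # cross covariance
    K = P_xz @ np.linalg.inv(P_zz)                # gain K_k
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new
```

Note that, exactly as in Table 4.5, no Jacobian of h appears anywhere: the nonlinearity enters only through the sigma-point images Z.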
The experimental set is taken from the MATLAB demonstration program radmod.m and is described in Figures 4.3 and 4.4. The extended Kalman filter algorithm is a modified version of the extkalman.m MATLAB program. It has been changed in order to include the acceleration in the state model and the time-varying Kalman filter in the linear part of the algorithm. 45 Figure 4.3: Experimental set for the unscented Kalman filter performance test Figure 4.3: unscented Kalman filter subsystem The aircraft motion adds a slowly varying acceleration with noise, that affects this initial velocity. The initial longitudinal and the cross-axis positions are set to 0 and the initial velocity is set to 400 ft/s. To compare the performance of the unscented Kalman filter and that of the extended Kalman filter, the state noise power, and the measurement noise power are changed. 46 Robustness to State Noise The cross-axis acceleration noise and the thrust-axis acceleration noise variances are set to 10-3. The noise variances for the measurements are set to 1. The following figures (Figure 4.5 and Figure 4.6) show the actual and the estimated trajectories in polar coordinates for the extended and the unscented Kalman filters. The actual trajectory is drawn in red and the estimated one in green. One can notice that the trajectory estimated by the unscented Kalman filter is much smoother than the trajectory estimated by the extended Kalman filter. Figure 4.4: Actual and Estimated Trajectories for the extended Kalman filter for a state noise of 10-3. 47 Figure 4.5: Actual and Estimated Trajectories for the unscented Kalman filter for a state noise of 10-3. This is due to the fact that the noise propagates through the extended Kalman filter and introduces large errors in the true posterior mean and covariance of the state estimate. Besides, the unscented Kalman filter seems to give a much closer estimated trajectory. 
The following plots (Figure 4.6 and Figure 4.7) show the range estimation residual with respect to the number of measurements for both filters. We can observe that the extended Kalman filter range estimate diverges, while the unscented Kalman filter estimate remains bounded, even if the target moves away. The figures 4.8 and 4.9 show the cross-axis position x (N-S position) and the thrust-axis position y (E-W position). The actual positions are drawn in red, the noisy measurements in green and the estimated positions in blue. 48 Figure 4.6: Range Estimation Residual for the extended Kalman filter for a state noise of 10-3. Figure 4.7: Range Estimation Residual for the unscented Kalman filter for a state noise of 10-3. 49 Figure 4.8: Cross-Axis Position (N-S position) and the Thrust-Axis Position (E-W position) estimates for the extended Kalman filter for a state noise of 10-3. Figure 4.9: Cross-Axis Position (N-S position) and the Thrust-Axis Position (E-W position) estimates for the unscented Kalman filter for a state noise of 10-3. The extended Kalman filter positions estimates diverges from the actual position. In deed, the acceleration noise is integrated and is propagated through the filter that 50 increases the error, because of the first-order approximation. Nevertheless, the unscented Kalman filter keeps tracking efficiently the cross-axis and thrust-axis positions. Let us increase the state noise power to 10-1 on both axes. The next set of figures shows the same plots as in figures 4.4 to 4.9, but with this new noise power. Figure 4.10: Actual and Estimated Trajectories for the extended Kalman filter for a state noise of 10-1. We observe on all the figures that the extended Kalman filter could not withstand the noise power increase. 
The estimated trajectory completely diverges from the actual trajectory and is still very noisy, the range estimation residual increases linearly with time and the cross-axis and thrust-axis positions do not correspond at all to the actual positions. 51 Figure 4.11: Actual and Estimated Trajectories for the unscented Kalman filter for a state noise of 10-1. Figure 4.12: Range Estimation Residual for the extended Kalman filter for a state noise of 10-1 52 Figure 4.12: Range Estimation Residual for the unscented Kalman filter for a state noise of 10-1 Figure 4.13: Cross-Axis Position (N-S position) and the Thrust-Axis Position (E-W position) estimates for the extended Kalman filter for a state noise of 10-1. 53 Figure 4.14: Cross-Axis Position (N-S position) and the Thrust-Axis Position (E-W position) estimates for the unscented Kalman filter for a state noise of 10-1. Robustness to Measurement Noise The cross-axis acceleration noise and the thrust-axis acceleration noise variances are set to 10-3 as in figures 4.4 to 4.9. But the noise variances for the measurements are now set to 100. The following plots show the effects of this change of measurement noise. The unscented Kalman filter tracks the object for about 70 seconds. After 70 seconds, the range estimation residual grows linearly, because the state estimates do not fit the actual positions anymore. However, the unscented Kalman filter seems to cope better with the noise power increase. This dramatic change of measurement noise makes the extended Kalman filter completely fail. On the contrary, the unscented Kalman filter withstood this dramatic increase of measurement noise power. 54 Figure 4.15: Actual and Estimated Trajectories for the extended Kalman filter for a measurement noise of 100. Figure 4.16: Actual and Estimated Trajectories for the unscented Kalman filter for a measurement noise of 100. 55 Figure 4.17: Range Estimation Residual for the extended Kalman filter for a measurement noise of 100. 
Figure 4.18: Range Estimation Residual for the unscented Kalman filter for a measurement noise of 100.

Figure 4.19: Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the extended Kalman filter for a measurement noise of 100.

Figure 4.20: Cross-Axis Position (N-S position) and Thrust-Axis Position (E-W position) estimates for the unscented Kalman filter for a measurement noise of 100.

The moving object is still tracked, though the estimates are less smooth. The range estimation residual remains bounded, and the position estimates do not appear to be biased.

Conclusion

The introduction of radar measurements leads to nonlinear equations that cannot be solved by the classical Kalman filter. The extended Kalman filter is the method commonly used to deal with the nonlinearity in moving vehicle tracking. It applies first-order approximations to the nonlinear equations to obtain a new set of linear state and measurement equations that can be solved recursively by the Kalman filter. Unfortunately, in radar tracking, the high measurement noise propagates through the approximated filter and makes the algorithm fail. We have introduced in this chapter a new Kalman filter, the unscented Kalman filter, which addresses the flaws of the extended Kalman filter. Coupled with the time-varying Kalman filter described in the previous chapter, this new filter gives convincing results when applied to simulated radar data.

CHAPTER 5
CONCLUSION AND FURTHER RESEARCH

Conclusion

In the first part of this work, we introduced a time-varying Kalman filter applied to moving vehicle tracking. The key idea is to treat the accelerations not as inputs to the system, but as states in the model. Moreover, in this model, each acceleration is modeled as a weighted sum of the previous accelerations. A linear fit is performed recursively using an LMS algorithm.
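As a rough illustration of this recursive fitting idea, the sketch below learns a one-step linear predictor for a scalar signal with a normalized LMS update. The model order, step size, and test signal here are hypothetical stand-ins, not the acceleration data or parameters used in this work; the sinusoid is chosen because it is perfectly predictable from two past samples, so the weights have a known target.

```python
import numpy as np

def lms_fit(signal, order=2, mu=1.0, eps=1e-8):
    # Recursively fit a(k) ≈ w · [a(k-1), ..., a(k-order)] with a
    # normalized LMS update: predict, measure the error, nudge the
    # weights along the (normalized) input vector.
    w = np.zeros(order)
    for k in range(order, len(signal)):
        past = signal[k - order:k][::-1]           # most recent sample first
        err = signal[k] - w @ past                 # one-step prediction error
        w += mu * err * past / (past @ past + eps) # NLMS weight update
    return w

# Test signal: sin(0.3k) exactly satisfies the recursion
# a(k) = 2*cos(0.3)*a(k-1) - a(k-2), so the fitted weights
# should converge to [2*cos(0.3), -1].
a = np.sin(0.3 * np.arange(2000))
w = lms_fit(a, order=2)
print(w)  # converges toward [2*cos(0.3), -1] ≈ [1.911, -1.0]
```

On noisy acceleration data the weights would hover around the best linear predictor rather than converge exactly, and the step size trades tracking speed against estimation noise.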
The computed model (state transition matrix, state error covariance matrix) is then used in a classical Kalman filter that outputs the vehicle position and several accelerations. We demonstrated the effectiveness of this filter on a set of airborne vehicle surveillance videos: after a short period of time, the algorithm tracks efficiently, independently of the detection noise.

In the second part of this work, we coupled the time-varying Kalman filter with a new extension of the Kalman filter to nonlinear equations, the unscented Kalman filter. This new algorithm outperforms the classical extended Kalman filter, especially in very noisy environments. We tested the combination of the two improved algorithms on simulated radar data. The tests show the robustness of this algorithm to state noise and measurement noise, and reveal that it is particularly appropriate for target tracking.

Further Research

For the Kalman filters to be applicable, we assume that the random variables have Gaussian probability distributions, and we then consider that they remain Gaussian after propagating through the nonlinear functions. Even though the unscented transformation is accurate to at least second order, this assumption is still only an approximation. To avoid these assumptions in the target tracking problem, the particle filter can be used. The particle filter is a sequential Monte Carlo method that allows a complete representation of the state distribution, using sequential importance sampling and resampling. It makes no assumption on the form of the probability densities in question, i.e., it performs fully nonlinear, non-Gaussian estimation. Nevertheless, the implementation of the particle filter is not yet very efficient. Research is needed for this algorithm to be implemented in hardware and eventually used for target tracking purposes.

LIST OF REFERENCES

[1] Daum F.E., and Fitzgerald R.J. (1983).
Decoupled Kalman filters for phased array radar tracking. IEEE Transactions on Automatic Control, vol. AC-28, no. 3, pp. 269-283.

[2] Chang C.B., and Tabaczynski J.A. (1984). Application of state estimation to target tracking. IEEE Transactions on Automatic Control, vol. AC-29, no. 2, pp. 98-109.

[3] Kalman R.E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME-Journal of Basic Engineering, vol. 82, Series D, pp. 35-45.

[4] Hoshino M., Gunji Y., Oho S., and Takano K. (1996). A Kalman filter to estimate direction for automotive navigation. Proc. of the IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 145-150.

[5] Gavrilets V., Shterenberg A., Dahleh M.A., and Feron E. (2000). Avionics system for a small unmanned helicopter performing aggressive maneuvers. Proc. of the 19th Digital Avionics Systems Conference, DASC, vol. 1, pp. 1E2/1-1E2/7.

[6] Zarchan P. (2000). Tracking and intercepting spiraling ballistic missiles. Position Location and Navigation Symposium 2000, pp. 277-284.

[7] Friedland B. (1980). Steady state behaviour of Kalman filter with discrete- and continuous-time observations. IEEE Transactions on Automatic Control, AC-25, pp. 988-992.

[8] Williams D., Friedland B., and Richman J. (1983). Design of an integrated navigation, guidance, and control system for a tactical missile. Proc. of AIAA Guidance and Control Conference, pp. 57-66.

[9] Ramachandra K.V. (1988). State estimation of maneuvering targets from noisy radar measurements. IEE Proceedings, vol. 135, Pt. F, no. 1, pp. 82-84.

[10] Ramachandra K.V., Mohan B.R., and Geetha B.R. (1993). A three-state Kalman tracker using position and rate measurements. IEEE Transactions on Aerospace and Electronic Systems, vol. 29, no. 1, pp. 215-222.

[11] Singer R.A. (1970). Estimating tracking filter performance for manned maneuvering targets. IEEE Transactions on Aerospace and Electronic Systems, AES-6, pp. 473-483.
[12] Julier S.J., and Uhlmann J.K. (1997). A new extension of the Kalman filter to nonlinear systems. Proc. of AeroSense: The 11th Int. Symp. on Aerospace/Defence Sensing, Simulation and Controls, pp. 52-56.

[13] Wan E.A., and Van der Merwe R. (2000). The unscented Kalman filter for nonlinear estimation. Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000, AS-SPCC, pp. 153-158.

BIOGRAPHICAL SKETCH

Nicolas Obolensky was born on June 28, 1977, in Paris, France. He received his Engineering Diploma in electrical engineering with an emphasis on signal processing and control from the National School of Electrical Engineers of Grenoble (ENSIEG) in 2001. Since August 2000, he has been pursuing a Master of Science in electrical and computer engineering at the University of Florida in the area of signal processing.