4TH EUROPEAN CONFERENCE FOR AEROSPACE SCIENCES
Effective Perception Capability for a Planetary Exploration
Rover via Stereo Vision and Data Fusion
Enrica Zereik*, Andrea Biggio**, Andrea Merlo** and Giuseppe Casalino*
* Department of Communication, Computer and System Science, University of Genoa
Via Opera Pia 13, 16145 Genoa, Italy
** Thales Alenia Space - Italy
Strada Antica di Collegno 253, 10146 Turin, Italy
Abstract
Current space mission plans foresee a strong robotic presence, with the aim of relying on automatic systems for all tasks that are difficult or dangerous for humans. This paper gives an overview of a project for the development of a robotic platform intended to demonstrate robotic autonomy. The platform is based on a flexible and modular software architecture in which each functional module implements a key functionality of the GNC (Guidance, Navigation and Control). Details about the stereo vision algorithm for three-dimensional world point reconstruction are provided, together with hints about a possible fusion with data from a laser scanner. Finally, some preliminary experimental results are shown.
1. Introduction
Since plans for future space missions involving a massive robotic presence are currently being made, many activities
in the field are being carried out; robotic autonomy represents a crucial point, given the highly desirable possibility of assigning to robots important operations to be performed without any human supervision. To this aim, a prototype able to demonstrate autonomous robotic skills plays a key role, as it allows scientists and engineers to develop and test effective control algorithms and evaluate their performance. Within this scenario, the present work describes an internal project by Thales Alenia Space - Italy, developed in close collaboration with the GRAAL Lab of the University of Genoa. This project has led to the realization of the TBRA (Test Bench for Robotics and Autonomy), a terrestrial prototype of an autonomous mobile robot, aimed at demonstrating the capabilities required of an automatic system in space. The robot is based on a software architecture (Framework Engine) able to ease the development of the single functionalities needed by the rover (e.g. locomotion, perception, path planning...). Each functionality is implemented by a module that is able to communicate with all the others thanks to standardized interfaces, properly designed for
the exchange of the necessary information. This approach lends flexibility to the system, since each subsystem can be
replaced with a new one without affecting the overall functionalities of the GNC itself.
Moreover, among all the necessary robotic skills, it is clear that perception (and vision in turn) plays a key role
in all autonomously performed complex operations. In the present work, particular focus is given to the environment
perception: in order to correctly detect potential obstacles located on its path, avoiding them and safely reaching its
goal, the rover must perceive its working area and build a corresponding DEM (Digital Elevation Model). The overall
control algorithm responsible for the DEM computation is strongly based on stereo vision and showed good results;
it is here described and experiments showing the system performances are detailed. During the system development,
the stereo vision technique proved to be badly affected by noise, which can be rejected by adopting various filtering techniques in order to further improve the entire procedure. Such an enhancement can strongly increase the robustness of the perception module. Moreover, again in order to obtain a more reliable and robust 3D world reconstruction, multi-sensor data fusion, e.g. based on extra information from a laser scanner, can be exploited to compensate for the effect of noise on the camera images. The interested reader can refer to [1], [2] and [3] for
details about multi-sensory schemes and data fusion.
Copyright © 2011 by E.Zereik, A.Biggio, A.Merlo and G.Casalino. Published by the EUCASS association with permission.
2. TBRA Architecture
As already suggested, within the described project, a flexible architecture has been developed, strongly relying on a
previous work realized at GRAAL Lab (see [4]). Usually, the autonomous navigation system of a rover is composed
of a set of functional interconnected modules, scheduled according to the selected navigation cycle (e.g. stop-and-go,
perception during motion). The overall developed architecture leads to many advantages as it allows:
• Validation and trade-off of different algorithms for the single functional module, based on:
– open-loop testing of the single module
– closed-loop testing of the whole navigation cycle, only replacing the single module
• Validation and trade-off of the navigation cycle based on closed loop testing
• Parallel implementation of the functional modules as “black boxes”
• Possibility to change each subsystem without affecting the overall functionalities of the GNC
Figure 1: TBRA architecture
Starting from the above assumptions, the flexible TBRA, whose architecture is shown in Fig. 1, has been realized
to develop and test innovative GNC systems, with the focus on locomotion, localization, perception and navigation
subsystems. Flexibility is achieved by the definition of a modular software architecture (Framework Engine) in which
each module implements a key functionality of the GNC. The GNC system is also characterized by an adequate level
of autonomy, which will lead, in further studies, to the development of complete goal-oriented mission planning. This deliberative layer, replacing the executive and planning ones of the classical three-layer architecture, is implemented as a single module. Subsystems communicate via standardized interfaces, properly designed for data exchange, and each can be replaced by a new module with a different behavior, thus allowing an easy reconfiguration of the system. Each module can be configured to be local (running on the same processing unit as the framework) or remote (connected to the framework through a TCP/IP connection) by changing just one line of code; this is very useful for distributing the computation among whatever CPUs are available.
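As an illustration of this one-line switch, the following Python sketch shows how a module proxy could dispatch calls either in-process or over a TCP/IP connection; the names used here (PerceptionModule, ModuleProxy, the "local"/"remote" flag) are hypothetical and do not reproduce the actual Framework Engine API.

# Illustrative sketch only: hypothetical names, not the actual Framework Engine API.
import socket

class PerceptionModule:
    """Placeholder functional module exposing a single request/response call."""
    def handle(self, request: bytes) -> bytes:
        return b"DEM-data-for:" + request

class ModuleProxy:
    """Dispatches calls either in-process or over a TCP/IP connection."""
    def __init__(self, module, mode="local", host="127.0.0.1", port=5000):
        self.module, self.mode, self.addr = module, mode, (host, port)

    def call(self, request: bytes) -> bytes:
        if self.mode == "local":                            # same processing unit
            return self.module.handle(request)
        with socket.create_connection(self.addr) as s:      # module running on another CPU
            s.sendall(request)
            return s.recv(4096)

# Switching the deployment only requires changing this single line:
perception = ModuleProxy(PerceptionModule(), mode="local")  # or mode="remote"
print(perception.call(b"generate DEM"))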
3. Robotic Platform
The robotic platform is made up of a commercial PowerBot platform by MobileRobots (see [5]), with additional integrated GNC sensors; Fig. 2 shows the complete setup. The PowerBot itself is equipped with 28 sonars, 2 wheel encoders (internally integrated to perform wheel odometry localization), 12 bumpers and 2 commercial desktop PCs (Pentium Dual Core at 1.66 GHz, 2 GB DDR2, 80 GB HDD) with Ethernet connection. This basic hardware has been
integrated with the following GNC sensors:
• Localization Sensors:
– Systron Donner MotionPak II Inertial Measurement unit (IMU)
– 2 SPI-TRONIC Pro3600 Inclinometers
– PointGrey Research BumbleBee2 Stereo Camera (2-eyes) for Visual Odometry (VO)
• Perception Sensors:
– PointGrey Research BumbleBee XB3 Stereo Camera (3-eyes)
– Fraunhofer Institute rotating 3D Laser Scanner (based on sick LMS200)
The 3-eyed stereo camera is mounted on a Directed Perception D46-70 Pan-Tilt Unit. Moreover, to enhance networking capabilities, a router has been installed, with its antennas positioned so as to remain visible in any pose assumed by the robot; the platform is powered by two lead batteries (24 V, 88 Ah).
Figure 2: TBRA robotic platform
4. Functional Modules
In the current TBRA architecture the following modules have been implemented:
• Sensors Modules:
– Inertial Measurement Unit (IMU)
– Inclinometers
• Actuator Modules:
– Locomotion Actuator
– Pan-Tilt Unit (PTU)
• Perception Modules:
– Stereo Vision
– 3D Laser Scanner (3DLS)
– Perception Data Fusion
• Locomotion Module
• Localization Modules:
– Wheels Odometry (WO)
– Inertial Odometry (IO)
– Visual Odometry (VO)
– Localization Data Fusion (LDF)
• Navigation Modules:
– Navigator
– Traversability
– Path Planner
• Deliberative Modules:
– Autonomy
4.1 Sensors
The sensor modules are responsible for the acquisition and processing of the raw measurements coming from the
GNC sensors embedded on the robotic platform. These modules exploit the Hardware Abstraction Layer (HAL) of the
framework to transparently interact with the underlying hardware.
4.2 Actuators
These modules are in charge of sending the control references to the underlying actuators. The platform locomotion
actuator module, for example, receives the speed and jog references generated by the higher level control and computes
the corresponding wheel velocities. The communication is performed exploiting the abstraction provided by the HAL.
4.3 Perception
The rover perception skill actually relies on both a stereo vision algorithm and a laser scanner, as well as on a data
fusion technique trying to merge information from both sensors. The Stereo Vision Perception module must compute
the Digital Elevation Map (DEM) of the surrounding environment via stereo vision-based techniques. This module
receives a “generate DEM” command from the Perception Data Fusion, together with the indication of a specific
spatial range to cover, typically 180 deg or 360 deg. Then the module computes the needed camera poses (in terms
of pan and tilt values for the scan) and commands the PTU (from DirectedPerception, see [6]) to move the 3-eyed
stereo camera. Each perception is cropped to keep only its highest-confidence region (1–5 m depth, 60 deg field of view) and is added to a vectorial DEM with a 0.05 m x–y discretization. To reject the acquisition noise, a median filter is applied to the obtained DEM. The result is a 400 × 400 matrix where each pixel represents a 0.05 × 0.05 m region of the environment. The discretization parameter along x–y has been chosen as a trade-off between the
sensor accuracy, the required region coverage and the amount of data to be exchanged between the modules (and so
the memory allocation size).
The Laser Scanner Perception module must generate a DEM of the surrounding environment, too. Again, the
module receives a “generate DEM” command from the Perception Data Fusion module and starts collecting data from
the rotating 3DLS. The output of such an acquisition is very detailed for points in the first 2 − 3 m but becomes sparse
at greater distances. This is due to the angular resolution of the 3DLS rotating device. In order to keep the total scan time below 40 s, the 3DLS angular resolution is limited to 0.25 deg and the rotation speed is 10 deg/s. Vertical scanning
is performed at 0.5 deg resolution over 200 points from +40 deg to −60 deg in 26.5 ms. Fig. 3 shows an example of
DEM generated by the 3DLS Perception module.
Figure 3: 3DLS Perception Module DEM
The Perception Data Fusion (PDF) module mainly performs two tasks: the merging of DEMs generated from
each Perception Module and the noise reduction of the resulting DEM. The PDF module can be configured to rely on
one sensor only (no merging needed) or to merge the stereo vision and laser scanner DEMs giving higher priority to
one of them, according to the user preference. The merging process takes the higher priority DEM as the starting one
and then fills the unknown regions with the lower priority DEM in order to reduce, as much as possible, the unknown
points. Despite the former filtering performed by each perception module, the experimental results showed that this merged DEM is still affected by salt-and-pepper-like noise. Thus, it needs a specific filtering which has been designed considering the environment the rover has to work in: the basic assumption is that the main obstacles in a planetary environment (e.g. Mars) are rocks and boulders. Therefore, the filter is designed to delete thin, tall objects (e.g. thin columns) from the map. Fig. 4 shows an example of output DEM after the filtering
process. The global output DEM is a 800 × 800 matrix where each pixel represents a 0.05 × 0.05 m region of the
surrounding environment.
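The following Python sketch illustrates the priority-based merge and the subsequent removal of thin, tall objects; the NaN convention for unknown cells, the grid values and the spike test used below are illustrative assumptions, not the actual Perception Data Fusion implementation.

# Sketch of the priority-based DEM merge and thin/tall-object removal (assumptions:
# DEMs are 2-D float arrays, NaN marks an unknown cell, spike threshold is arbitrary).
import numpy as np
from scipy.ndimage import median_filter

def merge_dems(high_priority: np.ndarray, low_priority: np.ndarray) -> np.ndarray:
    """Start from the higher-priority DEM and fill its unknown cells from the other one."""
    merged = high_priority.copy()
    unknown = np.isnan(merged)
    merged[unknown] = low_priority[unknown]
    return merged

def remove_thin_tall_objects(dem: np.ndarray, max_height=0.3, window=5) -> np.ndarray:
    """Suppress isolated thin/tall spikes, unlikely to be rocks or boulders."""
    local_median = median_filter(np.nan_to_num(dem), size=window)
    spikes = (dem - local_median) > max_height      # tall with respect to the neighbourhood
    cleaned = dem.copy()
    cleaned[spikes] = local_median[spikes]          # replace the spike with the local median
    return cleaned

if __name__ == "__main__":
    laser = np.zeros((800, 800)); laser[100:110, 100:110] = np.nan
    laser[400, 400] = 2.0                           # a thin, tall spurious spike
    stereo = np.full((800, 800), 0.02)
    dem = remove_thin_tall_objects(merge_dems(laser, stereo))
    print(dem.shape, float(dem[400, 400]))          # spike removed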
This fundamental perception skill is detailed further in the subsequent sections of the present work.
4.4 Locomotion
The Locomotion Module is in charge of generating the speed and jog reference values to follow the path provided by
the Path Planner. A closed loop control is performed reading the feedback from the Localization Data Fusion (LDF)
module that computes the rover full pose estimation (x, y, z, roll, pitch, yaw). While it is moving, this subsystem reads
the rover pose and tries to correct the trajectory in order to keep it within a boundary of 0.20 m around the nominal
path. If the rover position exceeds the maximum allowed error (e.g. due to slipping), the Locomotion module stops the rover movement and reports an error to the Navigator, which performs the proper recovery action. The Locomotion module also performs Hazard Detection in terms of slopes and unexpected obstacles; in both cases an error is reported
to the Navigator module.
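A minimal sketch of this path-following check is given below; the 0.20 m corridor comes from the description above, while the point-to-segment distance computation and the returned messages are illustrative assumptions.

# Sketch of the cross-track error check around the nominal path (0.20 m corridor).
import math

MAX_CROSS_TRACK_ERROR = 0.20  # metres around the nominal path

def cross_track_error(p, a, b):
    """Distance of rover position p from the nominal path segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / max(abx**2 + aby**2, 1e-9)
    t = min(1.0, max(0.0, t))                      # clamp to the segment
    cx, cy = ax + t * abx, ay + t * aby            # closest point on the segment
    return math.hypot(px - cx, py - cy)

def check_path(pose_xy, segment_start, segment_end):
    err = cross_track_error(pose_xy, segment_start, segment_end)
    if err > MAX_CROSS_TRACK_ERROR:
        return "STOP: rover out of path, report error to Navigator"
    return f"OK: cross-track error {err:.2f} m"

print(check_path((0.3, 0.25), (0.0, 0.0), (5.0, 0.0)))   # exceeds 0.20 m -> STOP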
4.5 Localization
This subsystem is made up of the three different odometry algorithms (WO, IO and VO) and the corresponding data
fusion process. The WO module evaluates the differential rover displacement and heading (x, y, yaw) between the rover
current and last estimated position. The IO module provides the differential rover attitude (roll, pitch, yaw) between its
current and last attitudes. Finally, the VO module estimates the complete rover differential pose (x, y, z, roll, pitch, yaw).
Each module output is provided to the LDF module, in order to estimate the global rover pose.
The LDF module computes the overall rover pose (x, y, z, roll, pitch, yaw) ∈ ℝ⁶ by merging all the data coming
from the three odometry sensors discussed above. This data fusion is performed using an Extended Kalman Filter
which takes as input the differential estimations of each localization module together with their confidence parameter
(related to each sensor error covariance).
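As a simplified stand-in for this fusion step, the Python sketch below combines the differential estimates of the three odometry modules by inverse-variance weighting before accumulating them into the global pose; the real module uses an Extended Kalman Filter, which additionally propagates the full error covariance and composes attitudes properly, and all numbers below are made up.

# Simplified inverse-variance fusion of differential pose estimates (not the actual EKF).
import numpy as np

def fuse_increments(increments, variances):
    """Inverse-variance weighted fusion of differential (x, y, z, roll, pitch, yaw) estimates."""
    increments = np.asarray(increments, dtype=float)    # shape (n_sensors, 6)
    weights = 1.0 / np.asarray(variances, dtype=float)  # higher confidence -> larger weight
    fused = (weights * increments).sum(axis=0) / weights.sum(axis=0)
    fused_var = 1.0 / weights.sum(axis=0)               # variance of the fused increment
    return fused, fused_var

# Toy differential estimates from WO, IO and VO (all values invented).
wo = [0.10, 0.01, 0.0, 0.0, 0.0, 0.02]
io = [0.00, 0.00, 0.0, 0.001, 0.002, 0.018]
vo = [0.11, 0.00, 0.01, 0.001, 0.001, 0.021]
var = [[0.01]*6, [1e4, 1e4, 1e4, 1e-4, 1e-4, 1e-4], [0.005]*6]  # IO trusted for attitude only

pose = np.zeros(6)
delta, _ = fuse_increments([wo, io, vo], var)
pose += delta                                            # accumulate into the global pose
print(np.round(pose, 4))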
4.6 Navigation
The Navigation module mainly includes three different functionalities, each responsible for a specific task. First of
all, the Traversability Module is in charge of generating the Navigation Map (NavMap), updating the stored NavMap
with the current perceived local DEM. The resulting NavMap is a 1024 × 1024 matrix, with each pixel representing a
0.05 × 0.05 m region of the environment. This module executes two independent processes: discontinuity detection and slope detection.

Figure 4: 3DLS Perception Module filtered DEM

In the NavMap generated by the first procedure, the costs are related to the soil discontinuities: whenever a discontinuity is greater than the maximum assumed negotiable by the rover, the corresponding
map region is marked as an obstacle and the dimension of all obstacles is increased by half the rover size (0.45 m),
for safety reasons. The second duty of this module is slope detection, performed by evaluating the terrain steepness every 30 deg around each DEM point, hypothetically placing the rover at that point and orienting it in the various directions. All costs are computed on the basis of the highest relative steepness of opposite areas:
every time the steepness is greater than the rover maximum negotiable slope, the point is marked as an obstacle. The two partial NavMaps are then merged to obtain the local NavMap related to the current perception. Moreover, the older NavMap, containing the rover exploration “history”, is aligned to the current rover position and merged with the currently computed one; regarding the last perception as the most reliable, all overlapping points are replaced with the newest values, while values of the old NavMap are exploited to complete the currently unknown regions. Finally, a cost function is applied around all obstacles, so as to keep the rover far from hazardous areas. The whole traversability process takes at most 350 ms to complete; Fig. 5 shows a sequence of merged NavMaps during an outdoor test.
Figure 5: Merged NavMap Sequence with Overlaid Rover Path
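The sketch below illustrates the obstacle-dilation step described above, growing every obstacle by half the rover size (0.45 m, i.e. 9 cells at 0.05 m resolution); the boolean-mask encoding of the NavMap and the use of a binary dilation are illustrative assumptions rather than the actual Traversability implementation.

# Sketch: grow obstacles by half the rover size so planned paths keep a safety margin.
import numpy as np
from scipy.ndimage import binary_dilation

CELL = 0.05                                # m per NavMap cell
HALF_ROVER = 0.45                          # m, half the rover size
RADIUS = int(round(HALF_ROVER / CELL))     # 9 cells

def inflate_obstacles(obstacle_mask: np.ndarray) -> np.ndarray:
    """Dilate every obstacle cell by a circular structuring element of radius RADIUS."""
    y, x = np.ogrid[-RADIUS:RADIUS + 1, -RADIUS:RADIUS + 1]
    disk = (x * x + y * y) <= RADIUS * RADIUS
    return binary_dilation(obstacle_mask, structure=disk)

if __name__ == "__main__":
    navmap = np.zeros((1024, 1024), dtype=bool)
    navmap[500, 500] = True                            # a single obstacle cell
    inflated = inflate_obstacles(navmap)
    print(inflated.sum(), "cells marked as obstacle after inflation")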
The second functionality of the Navigator is represented by the Path Planner Module, responsible for the generation of a path (sequence of way-points) toward the required goal. This process is divided into two main steps:
“coarse path finding” and “fine path finding”. In the first step the NavMap is subdivided into quadrants and stored
in a QuadTree data structure, used to minimize memory consumption and maximize performance. After this, the NavMap is composed of big tiles (from 0.40 m to 0.80 m by default), each one containing the mean cost if the region is completely navigable. Conversely, if unknown or obstacle points fall inside the region, it is entirely regarded as an unknown or obstacle region. The coarse path is then calculated by running an ‘ad-hoc’ algorithm derived from A*: if such a path exists, a subgoal is identified among its way-point list. Subgoal selection depends on the maximum distance the rover can cover between two subsequent perceptions and DEM generations; the latter follows from a trade-off among perception, localization and locomotion performance. This maximum distance is currently 3 m. The NavMap
is then windowed around the rover current position and a complete QuadTree (maximum depth, 0.05 m granularity)
is grown on this sub-NavMap; the path is finally computed using the previously mentioned ‘ad-hoc’ algorithm. The
whole path planning process can be completed within 500 ms.
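The following sketch shows the coarse tiling idea in isolation: big tiles keep the mean cost when fully navigable and otherwise become obstacle or unknown tiles. The tile size, the cost encoding (NaN for unknown, infinity for obstacles) and the omission of the actual QuadTree and A* machinery are assumptions made for brevity.

# Sketch of the "coarse path finding" tiling of the NavMap (illustrative encoding only).
import numpy as np

UNKNOWN, OBSTACLE = np.nan, np.inf

def coarsen(navmap: np.ndarray, tile: int = 16) -> np.ndarray:
    """Aggregate an (N*tile x N*tile) cost map into N x N big tiles."""
    n = navmap.shape[0] // tile
    coarse = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            block = navmap[i*tile:(i+1)*tile, j*tile:(j+1)*tile]
            if np.isnan(block).any():
                coarse[i, j] = UNKNOWN          # any unknown cell taints the whole tile
            elif np.isinf(block).any():
                coarse[i, j] = OBSTACLE         # any obstacle cell taints the whole tile
            else:
                coarse[i, j] = block.mean()     # fully navigable: keep the mean cost
    return coarse

navmap = np.random.rand(1024, 1024)
navmap[512:520, 512:520] = OBSTACLE
print(coarsen(navmap, tile=16).shape)           # (64, 64) coarse map, 0.80 m tiles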
Finally, the Navigator module is able to trigger and monitor the execution of all the lower level modules (Perception, Traversability, Path Planning, Locomotion and Localization). It receives and executes all medium-level commands from the Autonomy module and performs a low-level Failure Detection, Isolation and Recovery (FDIR) for
Path Planning and Locomotion. Path Planner errors result in a request to the PDF, aiming to build a new DEM; if
three consecutive errors are generated, the failure is reported to the Autonomy Module. Locomotion can cause both a
“Rover out of path” and a “Hazard Detection” error. The former is recovered by stopping the rover and generating a new perception while biasing the localization sensors, whereas the latter is handled by moving the rover away from the
unexpected obstacle and requiring a new DEM generation. All other situations are reported directly to the Autonomy
Module for higher level error management.
4.7 Deliberative Module
This subsystem, basically consisting of the Autonomy module, must transform high-level requests (e.g. Explore a specified area) into a sequence of medium-level commands forwarded to the Navigator. This module is also in charge of performing FDIR for all errors not managed by the Navigator. In the current version, the Autonomy module generates a spiral path inside the area of interest, in order to fully cover the region defined by the explore command parameters (central point and radius of the circular area to be explored). If one point on the spiral path is not reachable (e.g. it is inside a non-navigable area) it is discarded and the next point is considered. Fig. 6 shows the interaction of all modules while executing the “Explore Area” command.
Figure 6: Explore Task Execution
5. Focus on Perception
As already suggested, perception is considered one of the fundamental skills of such a robotic platform, since it is essential for autonomously and safely interacting with the working environment; within the present project,
both stereo vision techniques and laser scanner-based approaches have been exploited. In the following sections these
two different systems will be described and some results will be shown.
5.1 Perception via Stereo Vision
Exploiting the Bumblebee functionalities (refer to [7]), the first step of the rover perception algorithm is represented by
the surrounding world three-dimensional reconstruction (refer to [8], [9] and [10]); this procedure is highly influenced
by errors of different nature: bad lighting conditions, wrongly computed stereo correspondences, camera model parameter uncertainties and input image noise all badly affect it. Some of these issues can be strongly reduced by using the precise Bumblebee stereo system; for example, the camera parameters and stereo geometry of the system are so accurately known that the precision and robustness of the stereoscopic reconstruction are highly increased. Even if this possibility has not yet been exploited, according to the specific environment to be perceived one can choose to use different pairs of cameras, each time achieving a different stereo baseline and, consequently, a varying field of view; furthermore, a great improvement in 3D reconstruction accuracy can be achieved by employing all three Bumblebee cameras
(up to now only two of them are used). For details about multi-view reconstruction refer to [11] and [12].
When the rover is required to make a new perception of its surrounding area, the vision-based algorithm searches for the correct stereo camera (i.e. the Bumblebee XB3) among the system devices and acquires pairs of stereo images at specifically computed camera locations: the Pan/Tilt Unit is commanded to assume these predefined positions.
The latter are computed as follows: first of all, knowing the whole desired spatial range to cover, the number $n_P$ of
necessary perception steps is retrieved simply as

\[ n_P = \frac{R}{C_{fov}} \tag{1} \]
where $R$ is the spatial range and $C_{fov}$ stands for the camera field of view (in this case about 60 deg). As an example, if the rover has to explore the world over a range of 360 deg, 6 perceptions are necessary: 6 pairs of stereo images are acquired by the system, one for each camera pose. The Pan/Tilt Unit provides 2 degrees of freedom that must be commanded properly; in this case the tilt angle is always kept at a predefined value (about 30 deg toward the ground), while the pan assumes a different value at each step, determined as
\[ pan_i = pan_0 + i \frac{R}{n_P} \tag{2} \]

where $i$ represents the current perception step, whereas $pan_0$ is the starting Pan/Tilt position, held while acquiring the first image pair, and can be computed as

\[ pan_0 = -\frac{R}{2} + \frac{R}{2 n_P} \tag{3} \]
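As a direct transcription of Eqs. (1)-(3), the Python sketch below computes the number of perception steps and the corresponding pan values; rounding $n_P$ up with a ceiling, for ranges that are not exact multiples of the field of view, is an assumption not stated in the text.

# Pan positions for a scan of range R with camera field of view C_fov (Eqs. (1)-(3)).
import math

def perception_pans(R_deg: float, cfov_deg: float = 60.0):
    nP = math.ceil(R_deg / cfov_deg)                   # Eq. (1), rounded up (assumption)
    pan0 = -R_deg / 2 + R_deg / (2 * nP)               # Eq. (3)
    return [pan0 + i * R_deg / nP for i in range(nP)]  # Eq. (2)

print(perception_pans(360.0))   # [-150, -90, -30, 30, 90, 150]: 6 poses, 60 deg apart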
A simple example of 360 deg world coverage is depicted in Fig. 7.

Figure 7: Camera positions while performing a complete range world perception

Each of these acquired pairs is used to retrieve the 3D structure of the world and perform range measurements via stereo image processing; basically, there are three essential steps:
1. Establish correspondences between image features in different views.
2. Compute disparity of matched features.
3. Determine the 3D location of the feature relative to the camera frame.
Several different methods exist to find correspondences between two images; here the Sum of Absolute Differences (SAD) correlation method is employed, i.e. neighborhoods are compared via the following formula
\[ \min_{d = d_{min}}^{d_{max}} \; \sum_{i=-\frac{m}{2}}^{\frac{m}{2}} \; \sum_{j=-\frac{m}{2}}^{\frac{m}{2}} \left| I_{right}[x+i][y+j] - I_{left}[x+i+d][y+j] \right| \tag{4} \]
where $d_{min}$ and $d_{max}$ are the minimum and maximum considered disparities, $m$ is the mask size and $I_{right}$, $I_{left}$ are the right and left images. At this point, once the disparity map (the disparity values of all matched features) has been computed, the three-dimensional reconstruction of the world points is retrieved. Fig. 8 shows the acquired image pair for a single perception (Fig. 8(a) and 8(b)) and the corresponding 3D point reconstruction (Fig. 8(c) and 8(d), showing two different views of the same scene).
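A minimal, unoptimised sketch of the SAD matching of Eq. (4) is given below: for a pixel of the right image, the disparity minimising the sum of absolute differences over an m × m neighbourhood is selected. The (row, column) indexing, the lack of border handling and the toy images are assumptions, not the Bumblebee/Triclops implementation.

# Brute-force SAD disparity search for a single pixel, following Eq. (4).
import numpy as np

def sad_disparity(right: np.ndarray, left: np.ndarray, x: int, y: int,
                  d_min: int = 0, d_max: int = 16, m: int = 7) -> int:
    """Return the disparity d in [d_min, d_max] minimising the SAD at pixel (x, y)."""
    h = m // 2
    best_d, best_sad = d_min, np.inf
    for d in range(d_min, d_max + 1):
        sad = 0.0
        for i in range(-h, h + 1):
            for j in range(-h, h + 1):
                sad += abs(float(right[y + j, x + i]) - float(left[y + j, x + i + d]))
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d

# Toy example: the left image is the right image shifted by 5 pixels.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, size=(60, 120)).astype(float)
left = np.roll(right, 5, axis=1)
print(sad_disparity(right, left, x=40, y=30))   # -> 5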
Figure 8: Stereo 3D reconstruction for a single perception. (a) Left image; (b) right image; (c) 3D reconstruction, frontal view; (d) 3D reconstruction, side view.
When the 3D world points have been reconstructed for each stereo pair, they all have to be combined to
obtain the resulting perception of the environment around the rover. To this aim, the reconstructed points are projected
onto a common fixed frame < w >; in Fig. 9 the rear part of the PTU-mounted stereo camera and the relative frame
positioning are shown. The already mentioned frame < w > is fixed and independent of any rover or PTU motion (e.g.
coincident with the rover starting position at the beginning of the mission).

Figure 9: PTU and camera frames

Note that all the reference frames < base >, < pan >, < tilt > and < ptu > are assumed to coincide at the same point A, indicated in Fig. 9 and representing the intersection of the PTU rotation axes (one for pan and one for tilt): they are drawn separately only for the sake of clarity.
Transformation matrices between < ptu > and < base > and between < camera > and < tilt > are obviously known and constant, while the matrices $^{base}_{pan}T$ and $^{pan}_{tilt}T$ depend, respectively, on the pan and tilt angles and so they have to be updated accordingly. Note that these pan/tilt rotations induce changes not only in the orientation part of $^{base}_{pan}T$ and $^{pan}_{tilt}T$ but also in their translation components, since the camera does not rotate about its own frame origin but about another point.
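The sketch below shows how a point expressed in the camera frame can be brought into the fixed frame < w > by composing homogeneous transforms along the simplified chain < w > → < base > → < pan > → < tilt > → < camera > (the < ptu > frame, coincident with the others at point A, is omitted for brevity); all numeric offsets are placeholders, since the real values come from the rover pose and from the PTU/camera mounting.

# Homogeneous-transform chain from the camera frame to the fixed frame <w> (placeholder offsets).
import numpy as np

def rot_z(a):  # pan rotation
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(a):  # tilt rotation
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def trans(x, y, z):
    T = np.eye(4); T[:3, 3] = [x, y, z]; return T

def camera_to_world(pan, tilt, point_cam):
    T_w_base   = trans(0.0, 0.0, 0.8)        # PTU base pose in <w> (placeholder)
    T_base_pan = rot_z(pan)                  # rebuilt at every acquisition from the commanded pan
    T_pan_tilt = rot_y(tilt)                 # rebuilt at every acquisition from the commanded tilt
    T_tilt_cam = trans(0.0, 0.10, 0.05)      # camera offset from axis point A (placeholder)
    T = T_w_base @ T_base_pan @ T_pan_tilt @ T_tilt_cam
    return (T @ np.append(point_cam, 1.0))[:3]

print(camera_to_world(np.deg2rad(60), np.deg2rad(-30), np.array([0.0, 0.0, 2.0])))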
Figure 10: Input left images and stereo 3D reconstruction for a complete 360° perception. (a)-(f) Left images 1-6; (g) 3D reconstruction, frontal view; (h) 3D reconstruction, rear view.
An OpenGL viewer has been realized to display the 3D reconstruction and thus evaluate its performance; Fig. 10 shows a complete range perception around the rover, acquired during an outdoor test. Only points with depth (distance from the camera) falling within a predefined range are considered valid: points further away from the camera are more sensitive to noise. However, as can be seen from Fig. 10, noise is not entirely rejected; furthermore, overlapping areas of the reconstructed world are not yet suitably managed (up to now, old values are simply overwritten by the new ones). This is still an open issue and a proper policy for correctly managing such point overlaps should be implemented. At this point, the corresponding DEM can be computed: in order to retrieve the planar
representation of the perceived world, the two planar coordinates of each 3D point are exploited to find the location of
this point onto the DEM map and to obtain its (i, j) image coordinates. The third coordinate of the 3D point is used to
assign a color code to each pixel: in Fig. 11, blue points represent navigable areas, while red pixels stand for obstacles;
black areas represent unknown and still unexplored points. Consider Fig. 11: black pixels behind the box are due to
the occlusion produced by the box itself (terrain points behind it are not visible and so the corresponding DEM portion
is marked as unknown).
In conclusion, since the 3D reconstruction is prone to errors due to noise, mismatched correspondences, bad
illumination and so on, a suitable solution (e.g. a smart filtering) is to be identified and implemented for the system.
Figure 11: DEM computed in relation to the previously described rover outdoor perception
At present, a simple median calculation is performed: for each pixel (i, j), all the 3D point values belonging to that pixel are stored within a preliminary structure; when all the computed points have been considered and correctly sorted within such a map, only one value (i.e. the median) is assigned to each DEM pixel (i, j).
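The following Python sketch mirrors this per-pixel median assignment: reconstructed points are binned into 0.05 m cells and each DEM pixel takes the median of the heights that fell into it. The grid extent, the NaN marker for unknown pixels and the dictionary accumulator are illustrative assumptions.

# Bin reconstructed 3D points into 0.05 m cells and take the per-pixel median height.
import numpy as np
from collections import defaultdict

CELL = 0.05          # m per DEM pixel
SIZE = 400           # 400 x 400 DEM covering a 20 m x 20 m area (assumed extent)

def build_dem(points_w: np.ndarray) -> np.ndarray:
    """points_w: (N, 3) array of reconstructed points in the fixed frame <w>."""
    bins = defaultdict(list)
    for x, y, z in points_w:
        i, j = int(x / CELL) + SIZE // 2, int(y / CELL) + SIZE // 2
        if 0 <= i < SIZE and 0 <= j < SIZE:
            bins[(i, j)].append(z)                  # collect all heights falling in this pixel
    dem = np.full((SIZE, SIZE), np.nan)             # NaN marks an unknown pixel
    for (i, j), zs in bins.items():
        dem[i, j] = float(np.median(zs))            # one value per pixel: the median
    return dem

pts = np.column_stack([np.random.uniform(-5, 5, 10000),
                       np.random.uniform(-5, 5, 10000),
                       np.random.normal(0.0, 0.02, 10000)])
print(np.nanmax(build_dem(pts)))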
5.2 Perception via Laser Scanner and Data Fusion
As already suggested, the robot has been endowed with a 3D laser scanner so as to have an alternative tool for building DEMs of the surrounding environment. After a complete revolution, the 3D laser scanner retrieves a cloud of points, each expressed as [x, y, z]^T with respect to its reference frame, located in the middle of the Sick scanner; these points are then transformed with respect to the already mentioned common reference frame, so that they are directly comparable with those evaluated by the vision system. Finally, the DEM is created with the desired
granularity (usually 0.05 m).
Thus the rover can rely on two different DEMs coming from two different sensors and describing the same
environment: many policies are possible. For example, if only one of the sensors (vision or laser scanner) is reliable, data from the other sensor can simply be disregarded and only a filtering of the exploited data needs to be performed. However, a more reliable policy can be adopted, considering all available data: a “priority” can be assigned to each sensor and DEM points from the higher-priority sensor are filtered first. Then, for each point that is invalid after the filtering process, data from the second sensor are exploited (if the point still remains invalid after this procedure, then the final output point will be invalid too). Obviously, a third and more effective method can be implemented, that is, actual data fusion: a confidence value can be computed for all points of each sensor, according to some criteria (for example, points of the vision DEM located far from the camera or at the very edges of its field of view can be trusted less). It is then sufficient to perform a simple weighted fusion; however, this last technique is still under consideration and is being analyzed and tested.
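A possible form of this confidence-weighted fusion, still to be validated against the real system, is sketched below: each sensor provides a DEM plus a per-cell confidence map and every output cell is the confidence-weighted average of the valid measurements. The confidence model (decreasing with distance from the camera) is purely illustrative.

# Confidence-weighted fusion of DEMs from different sensors (NaN marks invalid cells).
import numpy as np

def weighted_fusion(dems, confidences):
    """dems, confidences: lists of (H, W) arrays; returns the weighted-average DEM."""
    dems = np.stack(dems); conf = np.stack(confidences)
    conf = np.where(np.isnan(dems), 0.0, conf)          # invalid cells get zero weight
    vals = np.nan_to_num(dems)
    total = conf.sum(axis=0)
    safe_total = np.where(total > 0, total, 1.0)        # avoid division by zero
    return np.where(total > 0, (conf * vals).sum(axis=0) / safe_total, np.nan)

h, w = 800, 800
stereo, laser = np.full((h, w), 0.10), np.full((h, w), 0.06)
stereo[0:100, :] = np.nan                               # stereo blind spot filled by the laser
dist = np.abs(np.arange(h) - h // 2)[:, None] * np.ones((1, w))
conf_stereo = 1.0 / (1.0 + dist / 100.0)                # trust stereo less far from the camera
conf_laser = np.full((h, w), 0.5)
print(np.nanmean(weighted_fusion([stereo, laser], [conf_stereo, conf_laser])))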
6. Experimental Results
Thanks to the modularity of the underlying framework, each module was first tested as a separate entity (standalone tests). Then each standalone module was turned into the corresponding TBRA module and the closed-loop tests were performed. These tests led to further optimization of the algorithms, improving the time and accuracy performance of each subsystem. The early performance tests were executed on paths shorter than 20 m. Further tests are ongoing and will result in a more precise characterization of the system performance. However, preliminary tests have shown very good results, as the rover was able to correctly navigate outdoors, avoiding many obstacles placed along its path toward the goal.
7. Conclusions and Future Work
The current output of the TBRA R&D project is a rover capable of both indoor and outdoor autonomous navigation, without using any a-priori knowledge but only relying on its on-board GNC sensors. This study provided an
implementation of the test bench including the complete framework and a set of new functional modules; all these subsystems will be updated with new, higher-performing features within further activities foreseen for the coming years. The importance of such a project is clear, as it leads to the development of the following branches of research/products:
• Functional Evaluator of GNC algorithms concerning performance, optimization and open/closed-loop testing.
• Test Bench for the development of new GNC algorithms based on new sensors suitable for space applications.
• Test Bench composed of two or more cooperating robots for studies on swarming and cooperation.
In particular, very interesting future activities are related to these multi-robot coordination and control issues, as the “swarm” approach is well suited to space exploration: a swarm of smaller, simpler devices can cover far more ground (and perhaps in a more effective manner) than even the most complex single unit.
References
[1] Kweon, I.S. and T. Kanade. 1992. High-resolution terrain map from multiple sensor data. IEEE Transactions on
Pattern Analysis and Machine Intelligence. 14(2).
[2] Kumar, M. and D.P. Garg. 2009. Multi-sensor data fusion in presence of uncertainty and inconsistency in data.
Sensor and Data Fusion.
[3] Vershinin, Y.A. 2002. A data fusion algorithm for multisensor systems. Proceedings of the Fifth International
Conference on Information Fusion. Volume 1.
[4] Simetti, E., E. Zereik, A. Sperindé, S. Torelli, D. Ducco, F. Frassinelli, A. Turetta and G. Casalino. 2011. A new
software architecture for developing and testing algorithms for space exploration missions. Intelligent Service
Robotics. Volume 4, Number 2, 135-146.
[5] MobileRobots PowerBot. 2011. URL: http://www.mobilerobots.com.
[6] DirectedPerception FLIR Motion Control Systems - Pan/tilt unit systems. 2007. URL: http://www.dperception.com.
[7] Point Grey Research Product catalog - stereo. 2009. URL: http://www.ptgrey.com.
[8] Hartley, R. and A. Zisserman. 2004. Multiple View Geometry in Computer Vision. Cambridge University Press.
Second edition.
[9] Jung, H.G., D.S. Kim, P.J. Yoon and J.H. Kim. 2005. Stereo vision based localization of free parking site. Computer Analysis of Images and Patterns - Lecture Notes in Computer Science.
[10] Botterill, T., R. Green and S. Mills. 2008. Stereo reconstruction for visual navigation. Proceedings of the New
Zealand Computer Science Research Student Conference.
[11] Hernandez, Esteban C. and F. Schmitt. 2002. Multi-stereo 3d object reconstruction. Proceedings of the First
International Symposium on 3D Data Processing Visualization and Transmission.
[12] Jin, H., S. Soatto and A.J. Yezzi. 2005. Multi-view stereo reconstruction of dense shape and complex appearance.
International Journal of Computer Vision. 63(3).