Social robots are expected to operate in everyday environments, and the navigation algorithms that allow them to move safely through these environments require reliable sensor data.
We present a novel approach to improve the obstacle-avoidance abilities of robots by mounting several sensors and fusing their data into a single representation. In particular, we fuse data from multiple RGBD cameras into a single emulated two-dimensional laser reading covering up to 360 degrees. Although the output of this virtual laser is two-dimensional, it integrates the obstacles detected at any height, so it can safely be used as input for standard two-dimensional navigation algorithms (both VFH* and R-ORM have been tested). Experiments conducted in real scenarios demonstrate the usefulness and efficiency of the proposed solution, which allows the robot to reach its goals while avoiding static and dynamic obstacles.
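To illustrate the fusion step, the sketch below shows one plausible way to collapse multi-camera 3D data into a virtual 2D scan: points already expressed in the robot's base frame are filtered by height, projected onto the ground plane, and binned by bearing, keeping the nearest range per beam. This is a minimal sketch under our own assumptions, not the authors' implementation; the function name `emulate_laser`, the beam count, and the height limits are hypothetical.

```python
import numpy as np

def emulate_laser(point_clouds, num_beams=360, z_min=0.05, z_max=1.8,
                  max_range=10.0):
    """Fuse 3D points from several RGBD cameras (each an (N, 3) array in
    the robot's base frame, metres) into one 2D virtual laser scan.

    Returns `num_beams` ranges covering 360 degrees; each beam holds the
    distance to the closest obstacle detected at ANY height in
    [z_min, z_max], which is what makes the 2D output safe for 2D planners.
    """
    ranges = np.full(num_beams, max_range)
    for cloud in point_clouds:
        x, y, z = cloud[:, 0], cloud[:, 1], cloud[:, 2]
        mask = (z >= z_min) & (z <= z_max)      # drop floor/ceiling points
        r = np.hypot(x[mask], y[mask])          # planar distance to each point
        theta = np.arctan2(y[mask], x[mask])    # bearing in [-pi, pi]
        beam = ((theta + np.pi) / (2 * np.pi) * num_beams).astype(int) % num_beams
        np.minimum.at(ranges, beam, r)          # keep nearest obstacle per beam
    return ranges
```

Keeping the minimum range per angular bin is the conservative choice: a table edge at 0.7 m and a chair leg at 0.1 m along the same bearing both shorten the beam to the closer of the two, so the downstream 2D planner never plans through an obstacle that is only visible at some heights.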