Motivation

A key problem in active object detection and Next-Best-View planning is accurately predicting the potential information gain when the sensor is moved to a new pose. In this work we approach the problem from the model side: we focus on constructing an information-rich object model that yields more accurate predictions and better differentiation among similar objects.

Method

The object model consists of two point clouds: 1) a dense point cloud that describes the geometric shape of the object, and 2) a sparse feature cloud that contains the feature points (SIFT in our case), as shown below.

model

The former is used for scene reconstruction and occlusion prediction; the latter is designed for object detection and pose estimation. In [1], we add two more attributes to each feature in the sparse feature cloud: 1) maximum observable distance and 2) maximum observable angle. These two additional properties help predict whether the feature will be matched correctly, rather than merely whether it is observable.
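
To make this concrete, here is a minimal Python sketch of such an augmented feature and the matchability test it enables; the field names, units, and thresholding logic are illustrative assumptions rather than our actual implementation.

```python
import numpy as np

# Hypothetical record for one entry of the sparse feature cloud; the field
# names and units are assumptions for illustration only.
example_feature = {
    'position':   np.array([0.10, 0.00, 0.30]),  # 3-D location on the object
    'normal':     np.array([0.00, 0.00, 1.00]),  # local surface normal
    'descriptor': np.zeros(128),                 # SIFT descriptor
    'max_dist':   1.5,    # maximum observable distance (assumed metres)
    'max_angle':  60.0,   # maximum observable angle (assumed degrees)
    'weight':     1.0,    # importance weight (introduced below)
}

def expected_matchable(feature, cam_pos):
    """Predict whether this feature would be *matched* (not merely visible)
    from a camera at cam_pos, using the two learned attributes."""
    v = feature['position'] - cam_pos
    dist = np.linalg.norm(v)
    if dist > feature['max_dist']:
        return False
    # Angle between the surface normal and the ray back toward the camera.
    cos_a = np.clip(np.dot(feature['normal'], -v / dist), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a)) <= feature['max_angle']
```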

In a recent submission [2], in order to differentiate similar objects during Next-Best-View planning, we attach one further attribute to each feature: an importance weight, which encodes how similar the feature is to features on other objects. With these attributes, even under a naive brute-force Next-Best-View planning algorithm, our approach generates reasonable trajectories for detecting the target objects in the environment. The following video visualises the feature-weighted model in 3D; a sketch of the weighting and planning loop follows the video.

MODEL_VIDEO
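
As a rough illustration of how these pieces could fit together, the sketch below combines a possible importance-weight computation with the brute-force view scoring; the Gaussian weighting formula, the candidate-pose representation, and all function names are assumptions for illustration, not the exact method of [2].

```python
import numpy as np

def importance_weight(desc, other_object_descs, sigma=0.2):
    """Illustrative weight in [0, 1]: a feature whose descriptor has a close
    match on some *other* object scores near 0, a distinctive one near 1."""
    if len(other_object_descs) == 0:
        return 1.0
    d_min = np.linalg.norm(np.asarray(other_object_descs) - desc, axis=1).min()
    return 1.0 - np.exp(-d_min ** 2 / (2 * sigma ** 2))

def view_score(features, cam_pos):
    """Sum the weights of all features predicted to be matched from cam_pos
    (same distance/angle test as the earlier sketch); this serves as a simple
    stand-in for the expected information gain of the view."""
    score = 0.0
    for f in features:
        v = f['position'] - cam_pos
        dist = np.linalg.norm(v)
        if dist > f['max_dist']:
            continue
        cos_a = np.clip(np.dot(f['normal'], -v / dist), -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)) <= f['max_angle']:
            score += f['weight']
    return score

def next_best_view(features, candidate_positions):
    """Naive brute force: evaluate every candidate viewpoint, keep the best."""
    return max(candidate_positions, key=lambda p: view_score(features, p))
```

In this reading, down-weighting features that are shared across objects means a view covering many distinctive features scores higher than one dominated by ambiguous ones, which is what drives the planner toward poses that disambiguate similar objects.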

Results

The reconstructed scene at different steps and the generated trajectory are shown below.

scene

trajectory

Data and Code

Coming soon


References

  • [1] Kanzhi Wu, Ravindra Ranasinghe, and Gamini Dissanayake, “Active recognition and pose estimation of household objects in clutter,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2015.
  • [2] Kanzhi Wu, Ravindra Ranasinghe, and Gamini Dissanayake, “Active recognition and pose estimation of household objects in clutter,” Autonomous Robots, Special Issue on Active Perception (submitted).