CAD-BASED VIEWPOINT ESTIMATION OF TEXTURE-LESS OBJECT FOR PURPOSIVE PERCEPTION USING DOMAIN ADAPTATION

Changjian Gu, Chaochen Gu, Kaijie Wu, Liangjun Zhang and Xinping Guan

Keywords

CAD (computer-aided design) models, neural networks, viewpoint estimation, domain adaptation

Abstract

In vision-based robot manipulation tasks, precise pose estimation is an important problem. In practical applications, however, it is difficult to directly acquire the precise 6-DOF (degree-of-freedom) pose relation between the camera and the object in a real environment. For a vision-based robot system, it is often necessary to move the camera to certain viewpoints for better observation or manipulation. Viewpoint estimation can therefore be regarded as a fundamental step towards precise pose estimation. In this paper, viewpoint estimation is treated as a two-axis orientation measurement and converted into a viewpoint classification problem. We define an object-centred viewpoint sphere and propose an efficient pipeline that uses a computer-aided design (CAD) environment and convolutional neural networks (CNNs) to estimate the two-dimensional viewpoint of an object relative to the camera. We first use CAD models and rendering techniques to automatically build a large-scale synthetic dataset for training. Because these rendered images are produced under ideal conditions, the data distribution of the synthetic images differs from that of images captured in a real environment. To bridge this gap, we propose a two-stream network trained with an unsupervised domain adaptation method, yielding a classifier that can be applied in a real environment. Experimental results on annotated real images demonstrate that the proposed pipeline successfully addresses viewpoint estimation for texture-less objects in a real environment and produces promising results.
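The conversion of a two-axis viewpoint into a classification target can be illustrated with a small sketch. The bin counts and angle conventions below are illustrative assumptions, not values taken from the paper: each viewpoint on the object-centred sphere is described by an azimuth and an elevation angle, and the sphere is discretized into a fixed grid of classes.

```python
def viewpoint_class(azimuth_deg, elevation_deg, az_bins=24, el_bins=8):
    """Map a two-axis viewpoint on an object-centred sphere to a class index.

    azimuth_deg:   angle in [0, 360) around the object's vertical axis
    elevation_deg: angle in [0, 180) measured from the top pole
    az_bins, el_bins: hypothetical bin counts chosen for illustration
    """
    az_idx = int((azimuth_deg % 360) // (360 / az_bins))
    el_idx = int((elevation_deg % 180) // (180 / el_bins))
    # Row-major indexing over the elevation/azimuth grid
    return el_idx * az_bins + az_idx
```

With these bin counts the sphere yields 24 × 8 = 192 classes, and a CNN classifier over the rendered images would predict one of these indices; the actual discretization granularity is a design choice of the pipeline.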
