Image Co-segmentation by Incorporating Color Reward Strategy and Active Contours Model

The design of robust and efficient co-segmentation algorithms is challenging because of the variety and complexity of objects and images. In this paper, we propose a new co-segmentation model that incorporates a color reward strategy and an active contours model. A new energy function associated with the curve is first formulated with two considerations: the foreground similarity between the images of a pair and the background consistency within each image of the pair. Furthermore, a new foreground similarity measurement based on the color reward strategy is proposed. We then minimize the energy function via a mutual procedure in which dynamic priors are used to evolve the curves jointly. The proposed method is evaluated on many images from commonly used databases. The experimental results demonstrate that the proposed model can efficiently segment the common objects from image pairs with a generally lower error rate than many existing and conventional co-segmentation methods.
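To illustrate the kind of foreground similarity the abstract refers to, the sketch below compares the color distributions of two candidate foreground regions using histogram intersection. This is a minimal, hypothetical stand-in: the paper's actual color reward strategy is defined in the full text, and the bin count and intersection measure here are assumptions for illustration only.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels (N x 3 array, values in [0, 255]) into a
    joint color histogram, normalized to sum to 1."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def foreground_similarity(fg_a, fg_b, bins=8):
    """Histogram intersection between two foreground regions:
    1.0 for identical color distributions, 0.0 for disjoint ones.
    A simple proxy for rewarding common-object colors across a pair."""
    ha = color_histogram(fg_a, bins)
    hb = color_histogram(fg_b, bins)
    return float(np.minimum(ha, hb).sum())
```

In a curve-evolution setting, a term like this would reward the curves in both images for enclosing regions with matching color statistics, while a separate term penalizes inconsistency with each image's own background.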


Fanman Meng, Hongliang Li, Guanghui Liu, King Ngi Ngan, "Image Cosegmentation by Incorporating Color Reward Strategy and Active Contour Model," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. PP, no. 99, pp. 1-13, 2012.  [PDF]

Original image pairs and the ground truth masks used in this paper.

The results of [28], [31], [36], [52], and the proposed method. The first row displays the original images; the results of [28], [31], [36], [52], and the proposed method are shown from the second row to the last row, respectively.

The results of [28], [31], [36], [52] and the proposed method in terms of error rate.
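The error rate used in such comparisons is commonly the fraction of pixels whose label disagrees with the ground-truth mask. A minimal sketch under that assumption (the paper's exact definition may differ):

```python
import numpy as np

def error_rate(segmentation, ground_truth):
    """Fraction of misclassified pixels: both inputs are boolean
    masks of the same shape (True = foreground)."""
    return float(np.mean(segmentation != ground_truth))
```

For example, a segmentation that flips one pixel of a 4x4 ground-truth mask yields an error rate of 1/16.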
1. Ground truth Used for Comparison
   The image pairs used in the paper (30 image pairs) and the ground-truth masks can be downloaded here (dataset).
2. Source Code (MATLAB)
   Source code can be downloaded here (source code).
This work was supported in part by the National Natural Science Foundation of China under Grants 60972109 and 61271289, by the Ph.D. Programs Foundation of the Ministry of Education of China under Grant 20110185110002, and by the Fundamental Research Funds for the Central Universities under Grant E022050205.