
Image Co-segmentation by Incorporating Color Reward Strategy and Active Contours Model

Abstract

The design of robust and efficient co-segmentation algorithms is challenging because of the variety and complexity of the objects and images. In this paper, we propose a new co-segmentation model that incorporates a color reward strategy and an active contours model. A new energy function associated with the evolving curves is first formulated with two considerations: the foreground similarity between the two images and the background consistency within each image of the pair. Furthermore, a new foreground similarity measurement based on the reward strategy is proposed. We then minimize the energy function via a mutual procedure that uses dynamic priors to evolve the two curves jointly. The proposed method is evaluated on many images from commonly used databases. The experimental results demonstrate that the proposed model can efficiently segment the common objects from image pairs, with a generally lower error rate than many existing co-segmentation methods.
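
The full energy function and reward strategy are defined in the paper linked below; as a rough intuition only, the sketch below (plain NumPy, with hypothetical function names) shows one way a color-histogram "reward" between two candidate foreground regions could be scored: color mass shared by both foregrounds adds to the similarity, which a curve-evolution procedure would then try to increase while keeping each image's background consistent.

```python
import numpy as np

def color_histogram(pixels, bins=16):
    """Quantize RGB pixels (N x 3, values in 0..255) into a joint color
    histogram and normalize it to a probability distribution."""
    step = 256 // bins
    idx = (np.asarray(pixels) // step).astype(int)        # per-channel bin index
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / max(hist.sum(), 1.0)

def foreground_reward(fg_pixels_a, fg_pixels_b, bins=16):
    """Toy reward-style similarity: colors present in both foreground
    regions contribute positively (histogram intersection), so a curve
    that encloses colors also found in the other image's foreground is
    'rewarded'. This is an illustrative stand-in, not the paper's measure."""
    h_a = color_histogram(fg_pixels_a, bins)
    h_b = color_histogram(fg_pixels_b, bins)
    return float(np.minimum(h_a, h_b).sum())

# Example: two foreground regions drawn from the same color range
rng = np.random.default_rng(0)
fg_a = rng.integers(0, 128, size=(500, 3))   # dark-ish colors
fg_b = rng.integers(0, 128, size=(600, 3))
print(foreground_reward(fg_a, fg_b))         # shared color mass in [0, 1]; larger = more similar
```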


Paper

Fanman Meng, Hongliang Li, Guanghui Liu, King Ngi Ngan, "Image Cosegmentation by Incorporating Color Reward Strategy and Active Contour Model," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. PP, no. 99, pp. 1-13, 2012.

paper.pdf



Results

meng_1.jpg
Original image pairs and the ground truth masks used in this paper.

meng_2.jpg

Comparison with [28], [31], [36], and [52]. The first row shows the original images; the results of [28], [31], [36], [52], and the proposed method are shown from the second row to the last row, respectively.


meng_3.png
The results of [28], [31], [36], [52] and the proposed method in terms of error rate.
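
The exact evaluation protocol is given in the paper; as an assumption, the sketch below uses the common definition of error rate as the fraction of pixels whose binary label disagrees with the ground truth mask (the function name is hypothetical).

```python
import numpy as np

def error_rate(segmentation, ground_truth):
    """Fraction of pixels whose predicted binary label disagrees with
    the ground truth mask (both arrays must have the same shape)."""
    seg = np.asarray(segmentation, dtype=bool)
    gt = np.asarray(ground_truth, dtype=bool)
    return float(np.mean(seg != gt))

# Example with a hypothetical 4x4 mask pair: one mislabeled column -> 0.25
pred = np.array([[1, 1, 0, 0]] * 4)
gt   = np.array([[1, 0, 0, 0]] * 4)
print(error_rate(pred, gt))   # 0.25
```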

Downloads

1. Ground Truth Used for Comparison
 
   The image pairs used in the comparison (30 image pairs in total) and the corresponding ground truth masks can be downloaded from here

dataset.rar


2. Source Code (MATLAB)
 
   The MATLAB source code can be downloaded from here

code.rar


Acknowledgments
 

This work was supported in part by the National Natural Science Foundation of China under Grants 60972109 and 61271289, by the Ph.D. Programs Foundation of the Ministry of Education of China under Grant 20110185110002, and by the Fundamental Research Funds for the Central Universities under Grant E022050205.

