Jing Chen 1,2, Can-Hui Cai 2 and Cui-Hua Li 1
1 School of Information Science and Engineering, Xiamen University, P.R. China
2 Institute of Information Science and Engineering, Huaqiao University, P.R. China
Received: February 27, 2013 Accepted: June 28, 2013 Publication Date: September 1, 2013
A novel stereoscopic object segmentation method based on disparity and temporal-spatial cues is proposed in this paper. First, a foreground layer map is generated from a patched disparity image, and a motion object map is produced using an adaptive reference frame selection method. The intersection of these two maps is then used to extract the foreground moving objects. In the foreground layer map generation stage, the two intensity image sequences of the input stereo video are separately mapped to rank space by a rank transform, which eliminates parameter deviations between the binocular cameras and reduces interference from environmental fluctuations and noise. A fast multi-window matching strategy is proposed to speed up the stereo matching process, and a disparity patching process then bridges disparity discontinuities in smooth regions to improve the accuracy of the disparity map. In the motion object map generation stage, an adaptive reference frame selection method is introduced to capture salient motion and produce a motion mask for each view. Experimental results demonstrate the good performance and practicability of the proposed stereo segmentation algorithm.
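Two steps of the pipeline are standard enough to sketch: the rank transform applied to each view before matching, and the final intersection of the foreground layer map with the motion map. The following is a minimal NumPy sketch under those assumptions; the function names and the window size are illustrative choices, and the paper's fast multi-window matching, disparity patching, and adaptive reference frame selection are its own contributions and are not reproduced here.

```python
import numpy as np

def rank_transform(image, window=5):
    """Classic rank transform: each pixel becomes the count of neighbours
    in a (window x window) patch that are darker than the centre pixel.
    Matching in this rank space is robust to per-camera gain/offset
    differences and to illumination fluctuation."""
    r = window // 2
    h, w = image.shape
    padded = np.pad(image.astype(np.int32), r, mode="edge")
    center = image.astype(np.int32)
    ranks = np.zeros((h, w), dtype=np.uint16)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # the centre pixel is not compared with itself
            neighbour = padded[r + dy : r + dy + h, r + dx : r + dx + w]
            ranks += (neighbour < center).astype(np.uint16)
    return ranks

def foreground_moving_objects(foreground_layer, motion_mask):
    """Boolean intersection of the disparity-based foreground layer map
    and the temporal motion map: keep pixels that are both near the
    camera and moving."""
    return np.logical_and(foreground_layer, motion_mask)

# Usage: ranks are bounded by window*window - 1 (24 for a 5x5 window).
left = (np.random.rand(48, 64) * 255).astype(np.uint8)
assert rank_transform(left, window=5).max() <= 24
```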