Research Article

FRANSAC: Fast RANdom Sample Consensus for 3D Plane Segmentation

by Ramy Ashraf Zeineldin, Nawal Ahmed El-Fishawy
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 167 - Number 13
Year of Publication: 2017
Authors: Ramy Ashraf Zeineldin, Nawal Ahmed El-Fishawy
DOI: 10.5120/ijca2017914558

Ramy Ashraf Zeineldin, Nawal Ahmed El-Fishawy. FRANSAC: Fast RANdom Sample Consensus for 3D Plane Segmentation. International Journal of Computer Applications. 167, 13 (Jun 2017), 30-36. DOI=10.5120/ijca2017914558

@article{ 10.5120/ijca2017914558,
author = { Ramy Ashraf Zeineldin, Nawal Ahmed El-Fishawy },
title = { FRANSAC: Fast RANdom Sample Consensus for 3D Plane Segmentation },
journal = { International Journal of Computer Applications },
issue_date = { Jun 2017 },
volume = { 167 },
number = { 13 },
month = { Jun },
year = { 2017 },
issn = { 0975-8887 },
pages = { 30-36 },
numpages = { 7 },
url = { https://ijcaonline.org/archives/volume167/number13/27832-2017914558/ },
doi = { 10.5120/ijca2017914558 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Ramy Ashraf Zeineldin
%A Nawal Ahmed El-Fishawy
%T FRANSAC: Fast RANdom Sample Consensus for 3D Plane Segmentation
%J International Journal of Computer Applications
%@ 0975-8887
%V 167
%N 13
%P 30-36
%D 2017
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Scene analysis is a preliminary stage in many computer vision and robotics applications. Building on recent depth cameras, we propose a fast plane segmentation approach for obstacle detection in indoor environments. The proposed method, Fast RANdom Sample Consensus (FRANSAC), involves three steps: data input, data preprocessing, and 3D RANSAC. First, range data obtained from a 3D camera is converted into a 3D point cloud. Next, a preprocessing stage applies pass-through and voxel grid filters. Finally, planes are estimated using a modified 3D RANSAC. Experimental results demonstrate that our approach can segment planes and detect obstacles about 7 times faster than standard RANSAC without losing discriminative power.
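
The paper itself includes no code, but the three-stage pipeline described above maps naturally onto a short script. The sketch below, in Python with NumPy, is an illustrative approximation under stated assumptions: the pass-through limits, voxel size, and RANSAC parameters are placeholder values, the function names are invented for this sketch, and the plane estimator shown is standard 3-point RANSAC rather than the authors' modified FRANSAC variant.

# Illustrative three-stage pipeline: pass-through filter, voxel grid
# downsampling, and a standard 3-point RANSAC plane fit. Parameter values
# and function names are assumptions, not the authors' implementation.
import numpy as np

def pass_through(points, axis=2, limits=(0.5, 4.0)):
    # Keep only points whose coordinate along `axis` lies inside `limits`.
    mask = (points[:, axis] >= limits[0]) & (points[:, axis] <= limits[1])
    return points[mask]

def voxel_grid(points, voxel_size=0.05):
    # Downsample by replacing all points in each voxel with their centroid.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

def ransac_plane(points, n_iters=500, dist_thresh=0.02, rng=None):
    # Standard RANSAC: repeatedly fit a plane to 3 random points and keep
    # the model with the most inliers. Returns (a, b, c, d) and inlier mask.
    rng = np.random.default_rng() if rng is None else rng
    best_model, best_inliers = None, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (*normal, d), inliers
    return best_model, best_inliers

if __name__ == "__main__":
    # Synthetic cloud: a dense z = 1 plane plus uniform clutter.
    rng = np.random.default_rng(0)
    plane = np.c_[rng.uniform(-1, 1, (2000, 2)), np.full(2000, 1.0)]
    clutter = rng.uniform([-1.0, -1.0, 0.5], [1.0, 1.0, 4.0], (500, 3))
    cloud = np.vstack([plane + rng.normal(0, 0.005, plane.shape), clutter])

    cloud = pass_through(cloud, axis=2, limits=(0.5, 4.0))   # step 2: crop range
    cloud = voxel_grid(cloud, voxel_size=0.05)               # step 2: downsample
    model, inliers = ransac_plane(cloud)                     # step 3: plane fit
    print("plane (a, b, c, d):", np.round(model, 3), "inliers:", int(inliers.sum()))

In an actual deployment the synthetic cloud would be replaced by depth frames converted to point clouds (e.g., from a Kinect), and the ransac_plane step would be swapped for the paper's modified 3D RANSAC to obtain the reported speedup.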

Index Terms

Computer Science
Information Sciences

Keywords

RANSAC, point cloud, plane segmentation, Kinect, RGB-D, voxel