International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 1 - Number 28
Year of Publication: 2010
Authors: V.M.Deshmukh, S.Y.Amdani, G.R.Bamnote, S.A.Bhura, Sachin Agrawal
DOI: 10.5120/592-464
V.M.Deshmukh, S.Y.Amdani, G.R.Bamnote, S.A.Bhura, Sachin Agrawal. Camera Calibration Using Receptive Fields. International Journal of Computer Applications. 1, 28 (February 2010), 69-74. DOI=10.5120/592-464
Camera calibration is the process of identifying a model that infers 3-D space measurements from 2-D image observations. In this paper, the nonlinear mapping model of the camera is approximated by a series of linear input–output models defined on a set of local regions called receptive fields. Camera calibration thus becomes a learning procedure that evolves the size and shape of every receptive field as well as the parameters of the associated linear model. Because the learning procedure also provides a measurement of how well the linear model approximates the mapping on each receptive field, the calibration model is obtained from a fusion framework that integrates all linear models weighted by their corresponding approximation measurements. Since each camera model is composed of several receptive fields, multiple cameras can be calibrated jointly. The 3-D measurements of a multi-camera vision system are produced by a weighted regression fusion over all receptive fields of all cameras. Thanks to this fusion strategy, the resultant calibration model of a multi-camera system is expected to be more accurate than that of any individual camera. Moreover, the calibration model can be updated efficiently whenever one or more cameras in the multi-camera vision system need to be activated or deactivated to adapt to varying sensing requirements at different stages of task fulfillment. In this paper, we study the simulation proposed by Jianbo Su and attempt to implement his proposed model.
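The weighted regression fusion described above can be made concrete with a small sketch. The snippet below, in Python with NumPy, shows one plausible reading of the receptive-field model: each field carries a Gaussian kernel over image coordinates and a local linear map to 3-D, and predictions are fused by normalized kernel weights. The names (ReceptiveField, predict_3d) and the choice of isotropic Gaussian kernels are illustrative assumptions, not the authors' implementation; in the paper the fusion weights also incorporate each field's approximation-quality measurement, which is omitted here for brevity.

```python
import numpy as np

class ReceptiveField:
    """One local region of the camera model (illustrative sketch)."""

    def __init__(self, center, width, A, b):
        self.center = np.asarray(center)  # field center in 2-D image coordinates
        self.width = width                # isotropic kernel width (assumed fixed here)
        self.A = np.asarray(A)            # 3x2 matrix of the local linear model
        self.b = np.asarray(b)            # 3-vector offset of the local linear model

    def activation(self, u):
        """Gaussian weight of image point u under this receptive field."""
        d = u - self.center
        return np.exp(-np.dot(d, d) / (2.0 * self.width ** 2))

    def local_prediction(self, u):
        """Local linear estimate of the 3-D point for image point u."""
        return self.A @ (u - self.center) + self.b


def predict_3d(fields, u):
    """Fuse the local linear predictions, weighted by the field activations."""
    u = np.asarray(u)
    weights = np.array([f.activation(u) for f in fields])
    preds = np.array([f.local_prediction(u) for f in fields])
    return (weights[:, None] * preds).sum(axis=0) / weights.sum()
```

Under this reading, a multi-camera system is handled by running the same fusion over the union of all cameras' receptive fields, so activating or deactivating a camera only adds or removes its fields from the pool, which is consistent with the update efficiency claimed in the abstract.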