International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 145 - Number 8
Year of Publication: 2016
Authors: Preeti Mahadev, P. Nagabhushan
DOI: 10.5120/ijca2016910737
Preeti Mahadev, P. Nagabhushan. Incremental Feature Transformation for Temporal Space. International Journal of Computer Applications 145(8) (Jul 2016), 28-38. DOI=10.5120/ijca2016910737
A temporal feature space generates features sequentially over consecutive time frames, cumulatively producing a very large-dimensional feature space, in contrast to a space that generates samples over time. Pattern recognition applications for such a temporal feature space must therefore cope with the complexities of waiting for new features to arrive over time and of extracting the knowledge hidden in large dimensions. Although the problem of deriving this knowledge can be overcome by dimensionality reduction techniques such as feature subsetting or feature transformation, the complexity due to the large dimensionality still prevails. Even though the arrival of features is temporally incremental in nature, pattern analysis is generally not carried out over time frames, so knowledge is not produced in an incremental mode for more effective management over time. However, temporal data in real-time applications demand that decisions be taken in the interim, at every temporal point, even before all the features have arrived. This problem can be overcome by accumulating and building the knowledge for pattern analysis at the end of each temporal phase in an incremental mode. The temporal arrival of features provides an environment in which to accumulate knowledge in the transformed feature space at the end of every phase, thereby keeping the working dimensionality small. Since the cumulative knowledge is built upon and passed on from one phase of the temporal space to the next without looking back at the previous data, the feature space required for computation at any given instant remains fairly constant and comparatively small. As fewer transformed features remain in scope at each phase for further reduction and knowledge extraction, computation is minimized and memory is utilized efficiently.
In the proposed research, pattern analysis and recognition over the temporal space occur at every temporal point instead of at the end, when all features are available. At each temporal point, the proposed model not only withstands the mandatory wait time but also generates the most up-to-date and best available transformed feature space thus far by means of continuous knowledge extraction.
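The per-phase accumulation described above can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: it assumes PCA as the feature-transformation step, and at each temporal phase it appends the newly arrived raw features to the compact transformed space carried over from the previous phase, then re-reduces the combined space. The working dimensionality thus stays roughly constant while earlier raw data is never revisited.

```python
# Hedged sketch of incremental feature transformation over temporal phases.
# PCA is an assumed choice of transformation; any linear reduction would fit
# the same carry-forward pattern described in the abstract.
import numpy as np

def pca_transform(X, k):
    """Project samples X (n x d) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the d x d covariance; fine for the modest
    # per-phase dimensionality the scheme is designed to maintain.
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]
    return Xc @ vecs[:, order]

def incremental_feature_transform(phases, k=3):
    """phases: list of (n x d_t) arrays, the features arriving at each time t."""
    state = None  # compact transformed feature space carried forward
    for X_new in phases:
        # Combine the carried-over transformed space with the new features,
        # then reduce again; past raw features are never consulted.
        combined = X_new if state is None else np.hstack([state, X_new])
        state = pca_transform(combined, min(k, combined.shape[1]))
    return state

rng = np.random.default_rng(0)
phases = [rng.standard_normal((50, 6)) for _ in range(4)]  # 24 features arrive in total
Z = incremental_feature_transform(phases, k=3)
print(Z.shape)  # (50, 3): the working space stays small despite 24 cumulative features
```

At every phase the model holds at most k + d_t columns in memory, which is the "fairly constant and comparatively small" working space the abstract refers to, and an interim decision can be made from `state` at any temporal point.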