Research Article

Visual Object Tracking using Sparse Representation and Interest Points in a Double Step Approach

by Mohamad Hosein Davoodabadi Farahani, Mohsen Khan Mohamadi, Mojtaba Lotfizad
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 175 - Number 10
Year of Publication: 2020
Authors: Mohamad Hosein Davoodabadi Farahani, Mohsen Khan Mohamadi, Mojtaba Lotfizad
DOI: 10.5120/ijca2020920563

Mohamad Hosein Davoodabadi Farahani, Mohsen Khan Mohamadi, Mojtaba Lotfizad. Visual Object Tracking using Sparse Representation and Interest Points in a Double Step Approach. International Journal of Computer Applications. 175, 10 (Aug 2020), 1-9. DOI=10.5120/ijca2020920563

@article{ 10.5120/ijca2020920563,
author = { Mohamad Hosein Davoodabadi Farahani, Mohsen Khan Mohamadi, Mojtaba Lotfizad },
title = { Visual Object Tracking using Sparse Representation and Interest Points in a Double Step Approach },
journal = { International Journal of Computer Applications },
issue_date = { Aug 2020 },
volume = { 175 },
number = { 10 },
month = { Aug },
year = { 2020 },
issn = { 0975-8887 },
pages = { 1-9 },
numpages = {9},
url = { https://ijcaonline.org/archives/volume175/number10/31486-2020920563/ },
doi = { 10.5120/ijca2020920563 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
Abstract

Various approaches have been proposed for visual target tracking, among which sparse representation-based approaches have proven effective. In this paper, a two-stage approach for visual target tracking is proposed. In the first stage, the approximate target position is determined from corner points and their sparse representation. In the second stage, a memory of the target's appearance model is used to refine this estimate and localize the target accurately. Experimental results demonstrate that the proposed approach effectively handles challenges such as abrupt illumination variation, occlusion, and blur. Furthermore, qualitative and quantitative evaluations show that the proposed algorithm is comparable in performance to state-of-the-art algorithms.
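
To make the two-stage idea described above concrete, the following is a minimal sketch, not the authors' implementation. It assumes OpenCV's Shi-Tomasi corner detector for the interest points, scikit-learn's orthogonal matching pursuit for the sparse coding step, a dictionary whose columns are vectorized target templates, and an appearance memory initialized with the target patch from the first frame; the patch size, search radius, and all parameter values are illustrative choices.

import cv2
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit


def extract_patch(gray, center, size=16):
    # Flattened, L2-normalized square patch around an (x, y) point.
    x, y = int(center[0]), int(center[1])
    h = size // 2
    patch = gray[max(y - h, 0):y + h, max(x - h, 0):x + h]
    patch = cv2.resize(patch, (size, size)).astype(np.float64).ravel()
    return patch / (np.linalg.norm(patch) + 1e-8)


def sparse_error(y, template_dict, n_nonzero=5):
    # Reconstruction error of y coded sparsely over the template dictionary
    # (columns of template_dict are vectorized target templates).
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(template_dict, y)
    return np.linalg.norm(y - template_dict @ omp.coef_)


def track_frame(gray, prev_center, template_dict, memory, search_radius=40):
    # Stage 1: detect interest points in a search window around the previous
    # position and score each one by its sparse reconstruction error.
    x, y = prev_center
    x0, y0 = max(int(x - search_radius), 0), max(int(y - search_radius), 0)
    window = gray[y0:int(y + search_radius), x0:int(x + search_radius)]
    corners = cv2.goodFeaturesToTrack(window, maxCorners=30,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return prev_center
    candidates = [(cx + x0, cy + y0) for [[cx, cy]] in corners]
    coarse = min(candidates,
                 key=lambda c: sparse_error(extract_patch(gray, c), template_dict))

    # Stage 2: refine around the coarse estimate using a memory of previously
    # confirmed target appearances (a non-empty list of patches), then update it.
    best, best_err = coarse, np.inf
    for dx in range(-3, 4):
        for dy in range(-3, 4):
            cand = (coarse[0] + dx, coarse[1] + dy)
            err = min(np.linalg.norm(extract_patch(gray, cand) - m) for m in memory)
            if err < best_err:
                best, best_err = cand, err
    memory.append(extract_patch(gray, best))
    return best

In a practical tracker, the appearance memory would likely be updated only when the sparse reconstruction confidence is high, so that occluded or blurred frames do not contaminate the model.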

Index Terms

Computer Science
Information Sciences

Keywords

Visual Tracking, Sparse Representation, Interest Point, Target Template, Memory Model