Research Article

Computing Number of Bits to be processed using Shift and Log in Arithmetic Coding

by Jyotika Doshi, Savita Gandhi
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 62 - Number 15
Year of Publication: 2013
Authors: Jyotika Doshi, Savita Gandhi
DOI: 10.5120/10155-5016

Jyotika Doshi, Savita Gandhi. Computing Number of Bits to be processed using Shift and Log in Arithmetic Coding. International Journal of Computer Applications. 62, 15 (January 2013), 14-20. DOI=10.5120/10155-5016

@article{ 10.5120/10155-5016,
author = { Jyotika Doshi, Savita Gandhi },
title = { Computing Number of Bits to be processed using Shift and Log in Arithmetic Coding },
journal = { International Journal of Computer Applications },
issue_date = { January 2013 },
volume = { 62 },
number = { 15 },
month = { January },
year = { 2013 },
issn = { 0975-8887 },
pages = { 14-20 },
numpages = { 7 },
url = { https://ijcaonline.org/archives/volume62/number15/10155-5016/ },
doi = { 10.5120/10155-5016 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Jyotika Doshi
%A Savita Gandhi
%T Computing Number of Bits to be processed using Shift and Log in Arithmetic Coding
%J International Journal of Computer Applications
%@ 0975-8887
%V 62
%N 15
%P 14-20
%D 2013
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Arithmetic coding is a very efficient and popular entropy coding technique. It is based on the fact that the cumulative probability of a symbol sequence corresponds to a unique subinterval of the initial interval [0, 1). When encoding a symbol, the method first computes a new interval [low, high) from the cumulative probability segment of the symbol. It then iterates in a loop, outputting code bits until the interval becomes at least 2^(b-2) wide, where b is the number of bits used to store the range of an interval. In the conventional implementation of the arithmetic coding algorithm, only one bit is processed per loop iteration. When the most significant bit (msb) of low and high of a subinterval matches, the encoder writes this msb to the coded message and doubles the interval by extracting the msb. When underflow occurs, it extracts the second msb to expand the interval. Processing a single bit and expanding the interval in this way is called renormalization. In this paper, an improvement to the conventional arithmetic coding algorithm is proposed in which more than one bit is processed per iteration instead of a single bit. This multi-bit processing arithmetic coding algorithm is implemented here to reduce the number of iterations needed to renormalize an interval. It is observed that processing multiple output bits at a time leads to a large improvement in execution time. To determine the maximum number of matching most significant bits to output, two alternatives are used: (i) a shift operation in a loop, and (ii) the log function. The first technique is found to be far better than the second with respect to execution time. Compared to conventional implementations that process a single bit at a time, an overall saving of about 52% in execution time is observed when processing multiple bits using the shift operation in a loop, whereas an overall loss of about 31% in performance is observed with the log function. We have also tried these two alternatives to determine the number of consecutive occurrences of underflow and process them all in a single iteration, but this has not shown any significant gain in speed. As expected, none of the above methods compromises the compression ratio.
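As a concrete illustration of the two alternatives, the following minimal C sketch counts the number of matching most significant bits of low and high, which is the number of bits that can be output in a single renormalization step. It assumes 32-bit code registers (b = 32); the names CODE_BITS, matching_msbs_shift and matching_msbs_log are illustrative and not taken from the paper.

    #include <stdint.h>
    #include <math.h>

    #define CODE_BITS 32  /* b: number of bits used to store the range (assumed here) */

    /* Alternative (i): shift operation in a loop.
       A set bit in (low XOR high) marks the first position where low and high
       disagree, so every leading zero of the XOR is a matching msb. */
    static int matching_msbs_shift(uint32_t low, uint32_t high)
    {
        uint32_t diff = low ^ high;
        int n = 0;
        while (n < CODE_BITS && !(diff & 0x80000000u)) {
            diff <<= 1;   /* move the first mismatching bit toward the msb */
            n++;
        }
        return n;
    }

    /* Alternative (ii): log function.
       The highest set bit of (low XOR high) lies at position floor(log2(diff)),
       so the number of matching msbs is b - 1 - floor(log2(diff)). */
    static int matching_msbs_log(uint32_t low, uint32_t high)
    {
        uint32_t diff = low ^ high;
        if (diff == 0)
            return CODE_BITS;  /* all bits match */
        return CODE_BITS - 1 - (int)floor(log2((double)diff));
    }

In a multi-bit renormalization loop, the returned count n lets the encoder emit the top n bits of low at once and shift low and high left by n, instead of looping once per bit; the shift-based variant also avoids a floating-point log2 call, which fits the reported 52% saving for the shift loop versus the 31% loss for the log function.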

Index Terms

Computer Science
Information Sciences

Keywords

Arithmetic coding, computing number of output bits, computing number of consecutive occurrences of underflow, faster execution, lossless data compression, multi-bit processing, renormalizing interval