CR & Bit Per Pixel image compression

Hello, I urgently need help.
I am looking for help with bits per pixel (BPP) and compression ratio (CR) in image compression. I have built my project around Embedded Zerotree Wavelet (EZW) image compression and obtained the reconstructed image, but I want to know how to calculate both BPP and CR. What is the rule for calculating them for the Embedded Zerotree Wavelet transform (EZW), at which stage should I calculate them, and which parameters do I need for the calculation? EZW generates a dominant list and a subordinate list ...

Accepted Answer

Walter Roberson on 28 Jan 2012

1 vote

Bits per pixel = total number of bits in the final file, divided by the number of pixels in the final file.
Compression ratio = total number of bits in the final file, divided by the number of bits in the original file.
The rest of your question is about theoretical details of a particular transform, and so should be addressed to a forum that deals with transform theory. It is not a MATLAB question.
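The two formulas above can be sketched in MATLAB with a worked example; the image dimensions and compressed size here are assumed for illustration, not taken from a real EZW run:

```matlab
% Assumed numbers: a 512x512 grayscale image stored at 8 bits/pixel,
% and a compressed bitstream that came out at 65536 bytes.
numPixels      = 512 * 512;            % pixels in the image
bitsOriginal   = numPixels * 8;        % 2097152 bits uncompressed
bitsCompressed = 65536 * 8;            % 524288 bits in the final file

bpp = bitsCompressed / numPixels;      % bits per pixel = 2
cr  = bitsCompressed / bitsOriginal;   % compression ratio = 0.25
```

With a real compressed file you would obtain the byte count from `dir()` instead of assuming it.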

5 comments

Diyar Aldusky on 28 Jan 2012
Thank you very much for this fast answer; I am grateful to you.
vishnu on 28 Feb 2012
How do I calculate the total number of bits in the final file and the number of pixels? Thank you, Walter.
Walter Roberson on 28 Feb 2012
Total number of bits in the final file is the size in bytes of the final file, multiplied by 8.
Number of pixels in the final file is the same as the number of pixels in the input image (unless it got cropped or something like that.)
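A self-contained MATLAB sketch of this: the file and image here are synthetic stand-ins (a temporary file of zeros and a blank image) so the mechanics of `dir()` and pixel counting can run on their own:

```matlab
% Write a temporary file to stand in for the compressed output.
fname = [tempname '.bin'];
fid = fopen(fname, 'w');
fwrite(fid, zeros(1000, 1, 'uint8'));    % pretend: 1000 bytes of compressed data
fclose(fid);

info = dir(fname);
totalBits = info.bytes * 8;              % size in bytes, times 8 -> 8000 bits

img = zeros(100, 100, 'uint8');          % stand-in for the input image
numPixels = size(img,1) * size(img,2);   % 10000 pixels (same as the output)
bpp = totalBits / numPixels;             % 0.8 bits per pixel
delete(fname);
```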
Renjith V Ravi on 4 Mar 2017
@Walter Roberson: please provide a reference for the formula you wrote for the compression ratio.
Walter Roberson on 4 Mar 2017
For the formula I gave in the comments at https://www.mathworks.com/matlabcentral/answers/27322-cr-bit-per-pixel-image-compression#comment_410392 involving log2() of the states, you can use Bell, Cleary, Witten, "Text Compression", https://www.amazon.com/Text-Compression-Timothy-C-Bell/dp/0139119914, which is a very nice text that discusses the theory of data compression at length.


More Answers (1)

ENG Amina on 30 Nov 2016

0 votes

Hi, I want to calculate the compression ratio. I have written code for BTC compression of an image; could you help me?
Greetings

3 comments

Walter Roberson on 1 Dec 2016
Edited: Walter Roberson on 24 Jun 2017
After compression, you have one or more blocks of memory holding the compression results. Take log base 2 of the number of different states that memory can potentially be in. That tells you the number of bits that would be required to represent the output. Compare that to the log base 2 of the number of different states the input could be in, in order to figure out the compression ratio.
For example if the input was an integer from 0 to 255 then that would be 256 states and log base 2 of that is 8 - it is an 8 bit integer. Now suppose the compression method was
floor(double(Input) /2)
Then the possible outputs are the floating point integers 0 through 127, which is 128 possibilities, log base 2 is 7, so 7 bits of output would be required and the compression ratio would be 8:7. This would hold true even though each of the double precision outputs takes 64 bits to represent: 7 bits of output would be enough to exactly recreate which double precision number was being used.
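The state-counting argument for this example can be checked directly in MATLAB by enumerating every possible 8-bit input:

```matlab
x = uint8(0:255);               % every possible 8-bit input value
y = floor(double(x) / 2);       % the "compression" step from the example
numStates  = numel(unique(y));  % 128 distinct outputs
bitsNeeded = log2(numStates);   % exactly 7 bits
```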
If you start with a 4 by 4 block of uint8 there are only a limited number of different means and standard deviations that can be produced. You do not need to transmit a double precision mean and a double precision standard deviation: you could number all of the possible outcomes and then each time transmit the index numbers. The receiver can then look each one up in a table when doing the reconstruction.
Note by the way that if the compression method had been
round(double(Input)/2)
then the possible outputs would be the floating point integers from 0 to 128, which is 129 possibilities, log base 2 of which would exceed 7. In isolation, that would require 8 bits to represent -- no compression achieved. However, a clever output scheme such as arithmetic encoding could approach log2(129) bits for the representation.
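The same enumeration shows why round() loses the bit saved by floor():

```matlab
x = uint8(0:255);
y = round(double(x) / 2);       % outputs are the integers 0 .. 128
numStates  = numel(unique(y));  % 129 distinct outputs
bitsNeeded = log2(numStates);   % log2(129) is slightly above 7
wholeBits  = ceil(bitsNeeded);  % 8 whole bits in isolation
```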
Walter Roberson on 1 Dec 2016
I indicated earlier that for a 4 x 4 block of uint8 there are only a limited number of different means.
The sum of 16 integer values in the range 0 to 255 can take on any number in the range 0*16 to 255*16. That is 4081 different potential results, and since the mean of such a block is always that sum divided by 16, that gives 4081 different possible means. Those different means could be numbered 0 to 4080 -- indeed, the numbering could be the sum of the 16 values itself. That would require only 12 bits to represent, which is a lot less than the 64 bits required for a floating point mean.
16 bits for the intensity map plus 12 bits for the representation of the mean would give 28 bits; to that you would have to add however many bits turned out to be required to represent the standard deviation (I have not figured out yet how many possibilities there are.) Worst case would be that a full 64 bit double would be needed, for a total of 16 + 12 + 64 = 92 bits, compared to the 16 * 8 = 128 bits required to store the original data. My experiments suggest that in practice there are far fewer possible standard deviations, but I am having difficulty finding a formula for the situation.
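The bit accounting in the two comments above can be laid out as a small MATLAB sketch (the 64-bit figure for the standard deviation is the worst case assumed in the text, not a derived count):

```matlab
% Bit budget for one 4x4 block of uint8 under BTC, per the reasoning above.
numMeans  = 255*16 + 1;              % 4081 possible block sums, hence means
bitsMean  = ceil(log2(numMeans));    % 12 bits to index the mean
bitsMap   = 4 * 4;                   % 1-bit intensity map per pixel = 16 bits
bitsStd   = 64;                      % worst case: a full double for the std
bitsBlock = bitsMap + bitsMean + bitsStd;   % 92 bits compressed
bitsOrig  = 4 * 4 * 8;                       % 128 bits uncompressed
```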

