math - What is the "Law of the Eight"?


While studying a document on the evolution of JPEG, I came across "the Law of the Eight" in section 7.3 of that document.

Despite the introduction of other block sizes from 1 to 16 with the SmartScale extension, beyond the fixed size of 8 in the original JPEG standard, the fact remains that the block size of 8 is still the default value, and all other-size DCTs are scaled in reference to the standard 8x8 DCT.

The "Law of the Eight" is supposed to explain why size 8 is the right default and reference value for the DCT size.

My question is:

what "law of eight" ?

  • Historically, was a study performed that evaluated a large sample of images to arrive at the conclusion that an 8x8 image block contains enough redundant data to support compression techniques using the DCT? With large image sizes like 8 MP (4K x 4K) fast becoming the norm in digital images/videos, is that assumption still valid?

  • Another historic reason to limit the macro-block to 8x8 may have been the computationally prohibitive cost of processing the image data of larger macro-blocks. On modern super-scalar architectures (e.g. CUDA), that restriction no longer applies.

Earlier similar questions exist - 1, 2, 3. However, none of them bothers with details/links/references for this mysterious, fundamental "Law of the Eight".


1. References/excerpts/details of the original study would be highly appreciated, so that it can be repeated with a modern data-set of large-sized images to test the validity of 8x8 macro-blocks being optimal.

2. In case a similar study has already been carried out, references to it are welcome too.

3. I do understand that SmartScale is controversial. Without clear potential benefits (1), it is at best comparable to other backward-compliant extensions of the JPEG standard (2). My goal is to understand whether the original reasons behind choosing the 8x8 DCT block-size (in the JPEG image compression standard) are still relevant, hence the need to know what the Law of the Eight is.

My understanding is that the Law of the Eight is a humorous reference to the fact that the baseline JPEG algorithm prescribed the 8x8 block size.

P.S. In other words, "the Law of the Eight" is a way to explain why "all other-size DCTs are scaled in reference to the 8x8 DCT" by bringing in a historical perspective: the lack of support for any other size in the original standard and its de facto implementations.

The next question to ask is: why eight? (Note that despite being a valid question, this is not the subject of the present discussion; it would still be just as relevant if some other value had been picked historically, e.g. a "Law of Ten" or a "Law of Thirty-Two".) One answer is: because the computational complexity of the problem grows as O(N^2) (unless FCT-class algorithms are employed, which grow more slowly, as O(N log N), but are harder to implement on the primitive hardware of embedded platforms, hence their limited applicability), so larger block sizes quickly become impractical. That is why 8x8 was chosen: small enough to be practical on a wide range of platforms, and large enough to allow not-too-coarse control of the quantization levels for the different frequencies.
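To make the complexity argument concrete, here is a minimal Python sketch (my own, not from any referenced study): a naive, separable 2D DCT-II plus a back-of-the-envelope operation count. The names dct_1d and dct_2d are illustrative assumptions; the point is only that each naive 1D transform costs about N^2 multiply-adds, so tiling a W x H image into N x N blocks costs roughly 2*N*W*H multiply-adds, i.e. the per-pixel cost of the naive transform grows linearly with the block size.

```python
import numpy as np

def dct_1d(x):
    """Naive (unnormalized) DCT-II of a length-N vector: ~N^2 multiply-adds."""
    N = len(x)
    n = np.arange(N)
    # X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])

def dct_2d(block):
    """Separable 2D DCT of an N x N block: N row transforms + N column transforms."""
    tmp = np.apply_along_axis(dct_1d, axis=1, arr=block)
    return np.apply_along_axis(dct_1d, axis=0, arr=tmp)

# Quick sanity check on one 8x8 block (level-shifted by 128, as baseline JPEG does).
block = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)
coeffs = dct_2d(block - 128.0)

# Back-of-the-envelope cost: an N x N block needs 2N one-dimensional transforms
# of ~N^2 multiply-adds each, and a W x H image holds (W*H)/N^2 such blocks,
# so the naive total is ~2*N*W*H multiply-adds.
for N in (8, 16, 32):
    print(f"N={N:2d}: ~{2 * N} naive multiply-adds per pixel")
```

That factor-of-two jump in per-pixel work for every doubling of N (plus the memory for a whole N x N block) is the kind of cost that mattered on early hardware; an FCT-class transform brings the per-pixel growth down to O(log N) but, as noted above, was harder to fit on primitive embedded platforms.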

Since the standard had scratched the itch, a whole ecosphere grew around it, including implementations optimized for 8x8 as their sole supported block size. Once that ecosphere was in place, it became impossible to change the block size without breaking existing implementations. As that was highly undesirable, any tweaks to the DCT/quantization parameters had to remain compatible with 8x8-only decoders. I believe this consideration must be what is referred to as the "Law of the Eight".

While not being an expert, I don't see how larger block sizes can help. First, the dynamic range of values in one block will increase on average, requiring more bits to represent them. Second, the relative quantization of frequencies ranging from "all" (represented by the whole block) to "pixel" has to stay the same (it is dictated by human perception bias, after all); the quantization just becomes a bit smoother, that's all, and at the same compression level the potential quality increase would be unnoticeable.
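As a rough illustration of that second point, the sketch below (mine, not from the answer above) takes the standard JPEG luminance quantization table from Annex K of the standard and builds a hypothetical 16x16 table by linear interpolation. The upsample_qtable helper and the resulting Q16 are illustrative assumptions only: the covered frequency range and its end-point step sizes stay the same; a larger block merely adds finer intermediate steps between them.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the standard):
# one step size per spatial frequency, from the DC term (top-left,
# "whole block") down to the highest AC term (bottom-right, "per pixel").
Q8 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

def upsample_qtable(q, factor=2):
    """Hypothetical larger table obtained by linearly interpolating the 8x8 one."""
    n = q.shape[0]
    src = np.linspace(0, n - 1, n)
    dst = np.linspace(0, n - 1, n * factor)
    rows = np.array([np.interp(dst, src, row) for row in q])           # stretch each row
    return np.array([np.interp(dst, src, col) for col in rows.T]).T    # then each column

Q16 = upsample_qtable(Q8)
print(Q16.shape)                  # (16, 16)
print(Q8[0, 0], Q16[0, 0])        # same coarseness for the DC ("whole block") term
print(Q8[-1, -1], Q16[-1, -1])    # same coarseness at the highest ("per pixel") frequency
```

The only real difference is the finer spacing of the intermediate steps, which is the "quantization becomes a bit smoother" part of the argument; the dynamic-range point (more bits per coefficient) is not shown here.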

