The DCT is a cool primitive. By extracting the low frequency coefficients, you can get a compact blurry representation of an image. This is used by preload thumbnail algorithms like blurhash and thumbhash. It's also used by some image watermarking techniques to target changes to a detail level that will be less affected by scaling or re-encoding.
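A sketch of that low-frequency trick (assuming NumPy and SciPy are available; real blurhash/thumbhash use their own bases and compact encodings, this is just the core idea):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_thumbnail(img: np.ndarray, keep: int) -> np.ndarray:
    """Keep only the top-left `keep` x `keep` (lowest-frequency) DCT
    coefficients and invert, giving a compact blurry approximation."""
    coeffs = dctn(img, norm="ortho")
    low = np.zeros_like(coeffs)
    low[:keep, :keep] = coeffs[:keep, :keep]
    return idctn(low, norm="ortho")

# A smooth gradient is almost entirely low-frequency, so even a 4x4
# corner of coefficients reconstructs it closely.
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
blurry = dct_thumbnail(img, keep=4)
```

Only the `keep * keep` retained coefficients need to be stored, which is where the compactness comes from.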
Whenever I saw those 'blocky' artifacts on a low-quality image, I used to just think of them as 'bad tech.' After reading this, it's cool to realize you're actually seeing the 8x8 DCT grid itself. You're literally seeing the math break down because there wasn't enough bit budget to describe those high-frequency cosine waves. It's like looking at the brushstrokes on a digital painting.
Having played a bit with the discrete FFT (with FFTW on 2D images, in a Shake plugin we made at work ages ago) makes the DCT coefficients make so much more sense! I really wonder whether the frequency decomposition could happen at multiple scale levels, though? Sounds slightly like wavelets, and maybe that's how JPEG 2000 works?.. Yeah, I looked it up: it uses the DWT, so it kinda sounds like it! Shame it hasn't taken off so far!? Or maybe there's an even better way?
The discrete wavelet transform (DWT) compresses an image by repeatedly downscaling it, and storing the information which was lost during downscaling. Here's an image which has been downscaled twice, with its difference images (residuals): https://commons.wikimedia.org/wiki/File:Jpeg2000_2-level_wav.... To decompress that image, you essentially just 2x-upscale it, and then use the residuals to restore its fine details.
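The downscale-and-keep-residuals step can be sketched in a few lines of NumPy. This is a toy box-filter pyramid, not the actual JPEG 2000 wavelets (which use e.g. the CDF 9/7 filter bank), but the decompose/reconstruct round trip is the same shape:

```python
import numpy as np

def downscale(img: np.ndarray) -> np.ndarray:
    """2x downscale by averaging 2x2 blocks."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale(img: np.ndarray) -> np.ndarray:
    """2x upscale by pixel repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def decompose(img: np.ndarray, levels: int = 2):
    """Repeatedly downscale, storing the information lost at each step."""
    residuals = []
    for _ in range(levels):
        small = downscale(img)
        residuals.append(img - upscale(small))  # detail lost by downscaling
        img = small
    return img, residuals

def reconstruct(base: np.ndarray, residuals) -> np.ndarray:
    """Upscale and add the residuals back, finest level last."""
    for r in reversed(residuals):
        base = upscale(base) + r
    return base
```

With no quantization the round trip is exact; compression comes from the residuals being mostly near zero and cheap to entropy-code.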
Wavelet compression is better than the block-based DCT for preserving sharp edges and gradients, but worse for preserving fine texture (noise). The DCT can emulate noise by storing just a couple of high-frequency coefficients for a 64-pixel block, but the DWT would need to store dozens of coefficients to achieve noise synthesis of similar quality.
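To see why the DCT is cheap at faking texture: a single high-frequency coefficient spreads an oscillation across the whole 8x8 block. A quick sketch (SciPy assumed):

```python
import numpy as np
from scipy.fft import idctn

# Set exactly one coefficient: the highest horizontal and vertical frequency.
coeffs = np.zeros((8, 8))
coeffs[7, 7] = 1.0
texture = idctn(coeffs, norm="ortho")
# All 64 pixels of the block now oscillate; a localized wavelet basis
# would need many coefficients to cover the same area with detail.
```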
The end result is that JPEG and JPEG 2000 achieve roughly the same lossy compression ratio before image artefacts show up. JPEG blurs edges, JPEG 2000 blurs texture. At very low bitrates, JPEG becomes blocky, and JPEG 2000 looks like a low-resolution image which has been upscaled (because it's hardly storing any residuals at all!)
FFmpeg has a `jpeg2000` codec; if you're interested in image compression, running a manual comparison between JPEG and JPEG 2000 is a worthwhile way to spend an hour or two.
I found a jpeg2000 reference PDF somewhere. It may as well have been written in Mandarin.
I got as far as extracting the width and height. It's much more advanced than JPEG. Forget about writing a decoder.
About: "I wonder if other species would look at our images or listen to our sounds and register with horror all the gaping holes everywhere.", yes.
In particular, dogs:
> While people have an image frame rate of around 15-20 images per second to make moving pictures appear seamless, canine vision means that dogs need a frame rate of about 70 images per second to perceive a moving image clearly.
> This means that for most of television’s existence – when they were powered by cathode ray tubes – dogs couldn’t recognize themselves reliably on a TV screen, meaning your pups mostly missed out on Wishbone, Eddie from Frasier and Full House’s Comet.
> With new HDTVs, however, it’s possible that they can recognize other dogs onscreen.
It's about being able to perceive it as a "living" moving creature and not something different.
You can understand something below the perception threshold is supposed to be a creature because you both have a far more advanced brain and you've been exposed to such things your entire life so there's a learned component; but your dog may simply not be capable of making the leap in comprehending that something it doesn't see as living/moving is supposed to be representative of a creature at all.
I've personally seen something adjacent to this in action, as I had a dog over the period of time where I transitioned from lower framerate displays to higher framerate displays. The dog was never all that interested in the lower framerate displays, but the higher framerate displays would clearly capture his attention to the point he'd start barking at it when there were dogs on screen.
This is also pretty evident in simple popular culture. The myth that "dogs can't see 2D" where 2D was a standin for movies and often television was pervasive decades ago. So much so that (as an example) in the movie Turner and Hooch from 1989, Tom Hanks offhandedly makes a remark about how the dog isn't enjoying a movie because "dogs can't see 2D" and no further elaboration on it is needed or given; whereas today it's far more common to see content where dogs react to something being shown on a screen, and if you're under, say, 30 or so, you may not have ever even heard of "dogs can't see 2D".
> While people have an image frame rate of around 15-20 images per second to make moving pictures appear seamless,
This is just...wrong? Human vision is much faster and more sensitive than we give it credit for. E.g. humans can discern PWM frequencies up to many thousands of Hz. https://www.youtube.com/watch?v=Sb_7uN7sfTw
The overwhelming majority of people were happy enough to spend, what, billions on screens and displays capable of displaying motion picture in those formats.
That there is evidence that most(?) people are able to sense high-frequency PWM signals doesn’t invalidate the claim that 15 to 20 frames per second is sufficient to make moving pictures appear seamless.
I’ve walked into rooms where the LED lighting looks fine to me, and the person I was with has stopped, said “nope”, and turned around and walked out, because to them the PWM-driven LED lighting makes the room look illuminated by nightclub strobe lighting.
My dog doesn't react to familiar voices over the phone at all. The compression and reproduction of audio, while fine for humans, definitely doesn't work for her animal ears.
What would happen if the Cr and Cb channels used different chroma subsampling patterns? E.g. Cr would use the 4:2:0 pattern, and Cb would use the 4:1:1 pattern.
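Nothing in principle seems to prevent it: both patterns keep one chroma sample per four luma samples, just tiled differently (4:2:0 averages 2x2 blocks, 4:1:1 averages 4 horizontally adjacent pixels). A toy NumPy sketch of mixing the two, purely hypothetical and not something standard JPEG encoders do:

```python
import numpy as np

def subsample(chan: np.ndarray, fy: int, fx: int) -> np.ndarray:
    """Average the channel over fy x fx blocks."""
    h, w = chan.shape
    return chan.reshape(h // fy, fy, w // fx, fx).mean(axis=(1, 3))

rng = np.random.default_rng(0)
cr = rng.random((8, 8))
cb = rng.random((8, 8))

cr_420 = subsample(cr, 2, 2)  # 4:2:0: halve both axes
cb_411 = subsample(cb, 1, 4)  # 4:1:1: quarter horizontally only
```

Both channels end up with the same sample count, so the bit savings would be identical; the difference would be in which direction each channel blurs.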
I usually have a script/alias to automatically convert images to webp. For me, the webp format has pretty much replaced jpg/jpeg (which lacks transparency/alpha support) and png (which has no lossy compression).
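For what it's worth, a minimal version of such a conversion script can be done with Pillow (assuming a Pillow build with WebP support; `quality=80` is an arbitrary choice here):

```python
from pathlib import Path

from PIL import Image  # assumes Pillow is installed with WebP support

def to_webp(src: Path, quality: int = 80) -> Path:
    """Convert one image file to a lossy WebP next to the original."""
    dst = src.with_suffix(".webp")
    with Image.open(src) as im:
        im.save(dst, "WEBP", quality=quality)
    return dst
```

Wrapping this in a loop over a downloads folder gives the alias the commenter describes; pass `lossless=True` instead of `quality=...` for PNG-like replacement.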
There is also the AVIF format, which is newer and better, but it still needs to mature a bit with better support/compatibility.
If you are hosting images it is nice to use avif and fallback to webp.
I know it’s more efficient, but it’s too bad webp is basically supported in browsers and nowhere else. I don’t think any OS even makes a thumbnail for the icon! Forget opening it in an image editor, etc. And any site that wants you to upload something (e.g. an avatar) won’t accept it. So, webp seems in practice to be like a lossy compression layer that makes images into ephemeral content that can’t be reused.
(Yes, I know, I should just make a folder action on Downloads that converts them with some CLI tool, but it makes me sad that this only further degrades their quality.)
The only OS that doesn't, as far as I'm aware, is Windows. And what image editors still have problems? Affinity has supported it for several years; GIMP, Lightroom/PS, and Photopea all work just fine. Everywhere I test, webp works.
Most social media sites take webp these days no issue; it's mostly older, often PHP-based sites that struggle, as far as I'm aware. And when it cuts bandwidth down by a sizeable amount, there are network effects that tend to push some level of adoption of more modern image formats.
To be clear, PNG only supports lossless compression, while WebP has separate lossy and lossless modes. AVIF can do lossless compression too, but you're usually better off using WebP or PNG (if you need >8 bpc) instead as it really isn't good at that.
It is not that trivial, because there are tons of existing JPEG files and lossy recompression costs quality. (PNG does get replaced primarily because lossless WebP is kinda a superset of what PNG internally does.)
On your last point: I find it super annoying when both lossy and lossless codecs have the same name, and, more importantly, file extension. I get it that internally they are "almost the same thing", just with one extra step of discarding low-impact values, but when I see a PNG/FLAC file I know, that if the file was handled properly and wasn't produced by Windows clipboard or something, it is supposed to represent exactly the original data. When I see JPEG/MP3, I know that whatever it went through, it is not the original data for sure. When I see WEBP, I assume it's lossy, because it's just how it's used, and I cannot just convert all my PNG files to a newer format, because after that I won't be able to tell (easily) which is the original and which is not.
Ah, in that case you would be more annoyed to learn that lossy WebP and lossless WebP are completely distinct. They only share the container format and their type codes are different.
Awesome. It would be interesting to learn why they even thought that was a good idea. Content-agnostic containers may make sense for video, but for the vast majority of use cases a "video" is in fact a complex bundle of (several) video, audio, and metadata tracks, so the "container" does much heavier lifting than specifying codec details and some metadata.
I made a notebook a few years back which lets you play with / filter the DCT coefficients of an image: https://observablehq.com/d/167d8f3368a6d602
This tool uses more clever math to replace what's missing: https://github.com/victorvde/jpeg2png
I wonder if other species would look at our images or listen to our sounds and register with horror all the gaping holes everywhere.
Source for the dog frame-rate figures: https://dogoday.com/2018/08/30/dog-vision-can-allow-recogniz...
If I watch a video at 10fps it looks shite, but I still recognise everything on screen.
> make moving pictures appear seamless
True enough.
NTSC is 30fps, while PAL is 25fps.
That doesn’t invalidate my experience.
The maximum frame rate we can perceive is much higher; for regular video it's probably somewhere around 400-800 fps.
You can experience something like that by using plugins which simulate CVD / color blindness.
Seems like the website doesn't work without WebGL enabled... why?