Also, the parent comment's point was that you really shouldn't let a random Russian guy run arbitrary JavaScript on every website you visit; that's stupid.
Also also, am I missing something, or are Firefox extensions broken? There seems to be no way to limit an extension to specific websites (allow or disallow), or even just to check the source code of an extension.
So what, you think they were just lying when they said that they'll ship JXL when it has a Rust implementation? You think Mozilla devs were just bluffing when they were working directly with the JXL devs over the last year to make sure everything would work right?
I would not install non-recommended Firefox addons for things that can be achieved in about:config.
Just set the image.jxl.enabled flag to true in about:config.
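If you want the pref applied automatically at every startup, the same setting can go in a user.js file in your Firefox profile directory (a standard Firefox mechanism; the pref name is the one from about:config above):

```javascript
// user.js in the Firefox profile directory; read at every startup
user_pref("image.jxl.enabled", true);
```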
Webpage Display
Media: JPEG XL
With this feature enabled, Nightly supports the JPEG XL (JXL) format. This is an enhanced image file format that supports lossless transition from traditional JPEG files. See bug 1539075 for more details.

Like this demo page: https://bevara.github.io/Showcase/libjxl/
Works great on PaleMoon, one of the earliest browsers to support JPEG XL and "Global Privacy Control" ( https://globalprivacycontrol.org/ ).
https://op111.net/posts/2025/10/png-and-modern-formats-lossl...
I compare PNG and the four modern formats, AVIF, HEIF, WebP, JPEG XL, on tasks/images that PNG was designed for. (Not on photographs or lossy compression.)
Is there a reason you used only synthetic images, i.e., nothing from group 1?
The motivation behind the benchmarks was to understand what the options are today for optimizing the types of images we use PNG for, so I used the same set of images I had used previously in a comparison of PNG optimizers.
The reason the set does not have photographs: PNG is not good at photographs. It was not designed for that type of image.
Even so, the set could do with a bit more variety, so I want to add a few more images.
Numbers for decompression speed are one of the two things I want to add.
The other is a few more images, for more variety.
For example, I used to work with depth data a lot, which is best expressed as monochrome 16-bit floating point images. Previously, TIFF was the only format that supported this. Many shops would instead save depth images as UINT16 .PNG files, where the raw pixel intensity maps to the camera distance in mm. The problem with this is that pixels more than 65.535 meters away aren't representable. (Hot take: I personally think this is one reason why nobody studies depth estimation for outdoor scenes.)
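The 65.535 m ceiling falls straight out of the encoding; here's a tiny sketch (plain Python, hypothetical helper name) of the uint16-millimetre convention described above:

```python
def store_depth_mm_u16(depth_m: float) -> int:
    """Store metric depth as uint16 millimetres, the common PNG workaround."""
    raw = round(depth_m * 1000)  # metres -> millimetres
    return raw & 0xFFFF          # uint16 silently wraps past 65535

# 65 m round-trips fine; anything past 65.535 m wraps and is lost:
store_depth_mm_u16(65.0)  # 65000
store_depth_mm_u16(70.0)  # 70000 wraps to 4464, i.e. 4.464 m
```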
JPEG-XL supports more weird combinations here, e.g. storing greyscale float32 images (with alpha even! you can store sparse depth maps without needing a separate mask!)
It's like, uniquely suited to these sorts of 3D scene understanding challenges and I really hope people adopt the format for more scientific applications.
And it is probably the reason why browser vendors disliked it. Lots of complexity means a big library, which is high-maintenance with a big attack surface. By comparison, WebP is "free" if you already ship WebM, as WebP is essentially a single-frame video.
As just one of innumerable examples, it's the basis for Adobe's DNG raw photo format and many proprietary raw formats used by camera manufacturers (Nikon NEF, Canon CRW and CR2, etc.).
Speaking as an outside observer, the ISO Base Media File Format seems to have more mindshare for newer applications, presumably on account of its broader scope and cleaner design.
Hopefully my photo processor will accept JPEG XL in the near future!
The chrome://flags/#enable-jxl-image-format flag is not even present in the build :(
Aren't print shops, machining shops, other small manufacturers etc. ones that always lag behind with emerging technologies?
If there’s a large amount of paper that’s been purchased for a job, I definitely wouldn’t want to be the one who’s responsible for using JPEG XL and – for whatever reason – something going wrong.
Pixels are cheaper than paper or other physical media :)
The company that owns whatever system can and should be able to convert formats.
Here's who I order from, you can see the particulars of what they request.
https://support.bayphoto.com/hc/en-us/articles/4026658357979...
Their job is getting an image file into reality, not to be the absent owner of a big machine.
> That would be inconsistent from what the user sent.
If the machine accepts some type of normal image file, then they can losslessly convert other file formats to that type. There is nothing inconsistent about that.
My first statement is an opinion/judgement, not an assumption.
I'm confident my second statement is true. Note that any argument that says niche formats are a problem because color space might be ambiguous also applies to the formats they do accept.
There are very few "lossless" conversions possible if you consider that the loss of data or metadata could affect the result. So if a printer did accept a file that needed to be converted, and during conversion they found it could lead to unexpected results, should they cancel the print run? There is just too much that can go wrong in printing already without these extra problems.
The print industry has a long and storied history, and for whatever set of reasons, printers only accept very specific profiles of specific formats.
Even with `image.jxl.enabled` I don't see it in Firefox.
Maybe the zen fork is a bit older and still using the C++ one?
I grabbed the nightly firefox, flipped the jxl switch, and it does indeed render fine, so I guess the rust implementation is functioning, just not enabled in stable.
... also, I see no evidence that it was ever enabled in the stable builds, even for the C++ version, so I'm guessing Zen just turned it on. Which... is fine, but maybe not very cautious.
2. Social trackers are selectively allowed, unsigned extensions are enabled by default, and Enhanced Tracking Protection isn't fully implemented.
There's just a recurring theme of incompetence, of trying to cover it up, and of general cluelessness about security.
Chromium Has Merged JpegXL
A proper test page should have HDR images, images testing if 10-bit gradients are posterised to 8-bit or displayed smoothly, etc...
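The 10-bit-to-8-bit posterisation mentioned above is easy to quantify; a back-of-the-envelope check in plain Python:

```python
# A smooth 10-bit ramp has 1024 distinct levels; truncating to 8 bits
# collapses every group of four into one, which shows up as visible bands.
ramp10 = range(1024)
ramp8 = [v >> 2 for v in ramp10]  # drop the two low-order bits
distinct = len(set(ramp8))        # 256 steps left out of 1024
```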
iOS for example can show a JPEG XL image, but can't forward it in iMessage to someone else.
https://jpegxl.info/resources/jpeg-xl-test-page https://caniuse.com/jpegxl
> "I was also surprised to see that, in Safari, JPEG XL takes 150% longer (as in 2.5x) to decode vs an equivalent AVIF. That's 17ms longer on my M4 Pro. Apple hardware tends to be high-end, but this could still be significant. This isn't related to progressive rendering; the decoder is just slow. There's some suggestion that the Apple implementation is running on a single core, so maybe there's room for improvement.
> JPEG XL support in Safari actually comes from the underlying OS rather than the browser. My guess is that Apple is considering using JPEG XL for iPhone photo storage rather than HEIC, and JPEG XL's inclusion in the browser is a bit of an afterthought. I'm just guessing though.
> The implementation that was in Chromium behind a flag did support progressive rendering to some degree, but it didn't render anything until ~60 kB (39% of the file). The rendering is similar to the initial JPEG rendering above, but takes much more image data to get there. This is a weakness in the decoder rather than the format itself. I'll dive into what JPEG XL is capable of shortly.
> I also tested the performance of the old behind-a-flag Chromium JPEG XL decoder, and it's over 500% slower (6x) to decode than AVIF. The old behind-a-flag Firefox JPEG XL decoder is about as slow as the Safari decoder. It's not fair to judge the performance of experimental unreleased things, but I was kinda hoping one of these would suggest that the Safari implementation was an outlier.
> I thought that "fast decoding" was one of the selling points of JPEG XL over AVIF, but now I'm not so sure.
> We have a Rust implementation of JPEG XL underway in Firefox, but performance needs to get a lot better before we can land it."
[0]: https://jakearchibald.com/2025/present-and-future-of-progres...
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
If decode speed is an issue, it's notable that avif varied a lot depending on encode settings in their test:
> Interestingly, the decode speed of AVIF depends on how the image was encoded: it is faster when using the faster-but-slightly-worse multi-tile encoding, slower when using the default single-tile encoding.
This would be great.
For video you can't avoid it, as people expect several hours of laptop battery life while playing video. But for static images - I'd avoid the pain.
Edit: I found a Lightning fork called Fulguris. It didn't work with the JPEG XL test image, but I really like the features and customizability. It's now my default browser on Android.
Works fine for me in Orion on both desktop and mobile ( https://orionbrowser.com ).
They're running out of good options, but I hope they stick with it long enough to release "JPEG XP" :-)
There's also a JPEG XE now (https://jpeg.org/jpegxe/index.html), by the way.
Either that or a photo that has been edited from a RAW and is a final version to be posted online.
What makes jpeg bad is that the compression artifacts multiply when a jpeg gets screen captured and then re-encoded as a jpeg, or automatically resized and recompressed by a social media platform. And that definitely isn’t a problem that has gone away since dialup, people do that more than ever.
Though maybe some people think the JPEG committee is now creating spreadsheet formats...
Of all the people who interact with image formats in some way, how many do even know what an image format is? How many even notice they’ve got different names? How many even give them any consideration? And out of those, how many are immediately going to think JPEG XL must be big, heavy and inefficient? And out of those, how many are going to stop there without considering that maybe the new image format could actually be pretty good? Sure, there might be some, but I really don’t think it’s a fraction of a significant size.
Moreover, how many people in said fraction are going to remember the name (and thus perhaps the format) far more easily by remembering it’s got such a stupid name?
* A new lossy image codec
* A lossless image codec (lossless modular mode)
* An alternative lossy image codec with different kinds of compression artifacts than those typically seen in JPEG (lossy modular mode)
* A JPEG packer (lossless recompression of existing JPEG files)
Because it includes a JPEG packer, you can use it as such.
(Kidding.)
ISO: "Challenge accepted." [1]
Actually, I remember when JPEG XL came out, and I just thought: cool, file that one away for when I have a really big image I need to display. Which turned out to be never.
Names have consequences.
WebP can only do 16,383px per side, and the AVIF spec can technically do 65,535, but encoders tap out far before then. Even TIFF uses 32-bit file offsets, so it can't go above 4GB without custom extensions.
Guess which format, true to its name, happens to support 1,073,741,823px per side? :-)
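For concreteness, the limits quoted above line up with power-of-two field widths (the AVIF number here is just the spec figure cited above, not a claim about its box format):

```python
webp_max = 2**14 - 1   # 16,383 px: WebP stores dimensions in 14-bit fields
avif_max = 2**16 - 1   # 65,535 px: AVIF's quoted spec limit per side
jxl_max  = 2**30 - 1   # 1,073,741,823 px: JPEG XL's per-side maximum
```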
Honestly, that's exactly what it sounds like to me too. I know it's not, but it's still what it sounds like. And it's just way too many letters total. When we have "giff" and "ping" as one-syllable names, "jay-peg-ex-ell" is unfortunate.
Really should have been an entirely new name, rather than extending what is already an ugly acronym.
Surprised to see it working on iOS 17.
Firefox version 146.0.1 on Windows 11
I have the flag enabled but it's still broken in FF, needs to be a nightly build to work
Who is going to take the bait, and say that Safari isn't like IE?
¹ https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
* add correct HTML image alt text
* compress your HTML and CSS with brotli (or gzip)
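Brotli needs an external tool or package, but the gzip fallback is in the Python standard library; a minimal sketch of precompressing a page:

```python
import gzip

# Repetitive markup compresses very well, even with plain gzip
html = b"<!doctype html><title>demo</title><p>hello</p>" * 100
gz = gzip.compress(html, compresslevel=9)
assert gzip.decompress(gz) == html  # lossless round trip
print(f"{len(html)} -> {len(gz)} bytes")
```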
thanks!
There is also an extension for this: https://chromewebstore.google.com/detail/jpeg-xl-viewer/bkhd...