People get funny about accuracy in maps. Being able to specify accuracy, even a limited accuracy, is worth a lot. In many use cases, saving bytes by reducing the specified accuracy beats saving bytes by fuzzing the data.
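As an illustration of "specified accuracy" (this example and its numbers are mine, not from the thread): rounding coordinates to a fixed number of decimals gives a worst-case error you can state up front, rather than fuzzing the data silently.

    # Hypothetical example: round coordinates to 5 decimal places and
    # state the worst-case error, instead of shipping fuzzed data.
    lat, lon = 40.7127753, -74.0059728      # illustrative coordinates
    precision = 5
    rounded = (round(lat, precision), round(lon, precision))
    max_err_deg = 0.5 * 10 ** -precision    # worst-case rounding error, degrees
    max_err_m = max_err_deg * 111_320       # ~metres per degree of latitude
    print(rounded, f"+/- {max_err_m:.2f} m")
    # -> (40.71278, -74.00597) +/- 0.56 m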


You can compute the maximum error when you encode, which tells you how much precision you can still claim to have.
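A minimal sketch of the idea, assuming the data is packed into an 8-bit grayscale image and round-tripped through Pillow's JPEG codec (the data layout and library choice are assumptions for illustration, not anything stated in the thread):

    import io
    import numpy as np
    from PIL import Image

    def max_roundtrip_error(values: np.ndarray, quality: int = 85) -> int:
        """Encode an 8-bit grayscale array as JPEG, decode it back,
        and return the maximum absolute per-sample error."""
        buf = io.BytesIO()
        Image.fromarray(values, mode="L").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf), dtype=np.int16)
        return int(np.abs(decoded - values.astype(np.int16)).max())

    data = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
    print("worst-case error:", max_roundtrip_error(data))
    # this bound is the precision you can still claim to have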


How? I thought it was up to the JPEG decoder how to actually decode the image into pixels. (Not that JPEG couldn't be workable in practice if some care was put into a solution.)


Decoding JPEG doesn't leave much room for interpretation, and images should essentially always decode the same. Encoding is a different story: there are steps that downsample the image data (chrominance is often, but not necessarily, sampled at a lower rate than luminance), and there can be different cutoff points for which DCT coefficients to discard (usually tied to the compressor's "compression" or "quality" setting). All JPEG decoders should reproduce the same image from a given JFIF file, but I'd be surprised if different encoders produced the exact same JFIF file from a given source image.
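A small sketch of those encoder-side choices, using Pillow's actual subsampling and quality options; two settings, like two encoders, will generally yield different bytes from the same source image:

    import io
    from PIL import Image

    img = Image.new("RGB", (64, 64), (200, 30, 30))

    def encode(subsampling: int, quality: int) -> bytes:
        # subsampling: 0 = 4:4:4, 1 = 4:2:2, 2 = 4:2:0
        buf = io.BytesIO()
        img.save(buf, format="JPEG", subsampling=subsampling, quality=quality)
        return buf.getvalue()

    a = encode(subsampling=0, quality=90)  # chroma at full resolution
    b = encode(subsampling=2, quality=90)  # chroma at a quarter of the samples
    print(len(a), len(b), a == b)          # different bytes from the same source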


For grayscale data there's very little ambiguity, since chroma and colorspace conversions aren't involved. Basically it's just rounding in the inverse DCT, for which you can make reasonable assumptions.

Moreover the JPEG XT spec (not to be confused with JPEG XL or the ton of other X-somethings JPEG made) has specified precisely how to decode the classic JPEG format, and has blessed libjpeg-turbo as the reference implementation.


I take your point, but I don’t think features on an in-browser map are ever small enough for compression artifacts to ruin the integrity of the map.



