jchw's comments | Hacker News

I don't think so: JPEG 2000, as far as I know, isn't generally supported for web use in web browsers, but it is supported in PDF.

So Firefox (or others) can't open a PDF with an embedded JPEG 2000/XL image? Or does pdf.js somehow support it?


Apparently I really flubbed my wording for this comment. I'm saying they do support it inside of PDF, just not elsewhere in the web platform.

JPEG-XL is recommended as the preferred format for HDR content for PDFs, so it’s more likely to be encountered:

https://www.theregister.com/2025/11/10/another_chance_for_jp...


I'm not convinced HDR PDFs will be a common thing anytime soon, even without this chicken-and-egg problem of support.

What I mean to say is, I believe browsers do support JPEG 2000 in PDF, just not on the web.

The last time I checked, I found that I needed to convert to JPEG to show the image in browsers.

A *PDF* with embedded JPEG 2000 data should, as far as I know, decode in modern browser PDF viewers. Both PDF.js and PDFium use OpenJPEG. But despite that, browsers don't currently support JPEG 2000 outside of PDF.

I'm saying this to explain how JPEG XL support in PDF isn't a silver bullet. Browsers already support image formats in PDF that are not supported outside of PDF.
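
If you want to sanity-check a test file yourself, here's a rough sketch (assuming the pikepdf library; the file name is a placeholder) that prints the compression filter of every image in a PDF, so you can confirm it really carries JPEG 2000 (/JPXDecode) data before trying it in a browser's PDF viewer:

    import pikepdf

    with pikepdf.open("test.pdf") as pdf:  # placeholder file name
        for page_number, page in enumerate(pdf.pages, start=1):
            for name, image in page.images.items():  # raw image XObject streams
                # "/JPXDecode" means embedded JPEG 2000 data; "/DCTDecode" is plain JPEG.
                print(f"page {page_number} {name}: {image.get('/Filter')}")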


I truly believe that Valve has two fundamental things working in their favor:

Firstly: Despite inventing or at least popularizing a lot of new microtransaction concepts, they've just never been the greediest company in the business when it comes to microtransactions. Mobile gacha games have cleaned up their business quite a lot lately, with most of them being significantly less predatory than they used to be, but even back when TF2 introduced lootboxes and hats, the important thing was that the game was not pay to win; you could get all of the items in relatively short order just by playing, and the only benefit to paying was cosmetics.

Contrast this to the earlier reign of Korean MMOs: pretty much all of them had egregious microtransactions. MapleStory, PangYa, Gunbound, etc., and even some current platforms like Roblox. Valve also came into this whole thing before lootboxes became the root of all evil, and while TF2's lootbox mechanism looks bad in retrospect, there was simply no stigma against a system like that, and it never felt like a big deal during the game's heyday. Just my opinion, but I strongly believe it to be true.

Secondly: The most egregious things going on are not things Valve is directly involved in; they are merely complicit, in that they don't do much to curtail them. It's not even necessarily cynical to say that Valve is turning a blind eye: they benefit so significantly from the egregious behavior that it is hard to believe they are not influenced by this fact. But it is consistent with Valve's behavior in other ways: Valve has taken a very hands-off stance in many places, and if it weren't for external factors it seems likely they would be even more hands-off than they are now. I think they genuinely take the position that it's not their job to enforce moral standards, and if you really do take this position seriously, it is going to wind up looking extremely bad when you benefit from it. It's not so dissimilar from the position that Cloudflare tries to take with its services: it's hard to tell apart people with power trying to uphold ideals even when it is optically poor from greedy companies intentionally turning a blind eye because it might enrich them. (And yes, I do understand that these sites violate Valve's own ToS, but so do a lot of things on Steam Workshop and elsewhere. In many cases, they really do seem consistently lax as long as there isn't significant external pressure.)

Despite these two things, every company gives me a nagging feeling that I should never take anything but a cynical view of them, because almost all companies are basically lawnmowers now. But I really do not feel like I only give Valve the benefit of the doubt just because they support Linux; I actually feel like Valve has done a substantial amount to prove that they are not just another lawnmower. After all, while they definitely are substantially enriched by tolerating misuse of their APIs, they've probably also gotten themselves into tons of trouble by continuing to have a very hands-off attitude. In fact, it seems like, owing to the relatively high standards people have for Valve, they get criticized and punished more for conduct than other companies. I mean seriously, Valve has gotten absolutely reamed for their attempt at adding an arbitration clause into their ToS, with consequences that lingered long after they removed and cancelled the arbitration clause. And I do hate that they even tried it -- but what's crazy to me is that it was already basically standard in big tech licensing agreements. Virtually everyone has an insane "you can't sue us" rule in their ToS. It numbs my mind to try to understand why Valve was one of the first and only companies to face punishment for this. It wouldn't numb my mind at all if it were happening to all of them, but plenty of these arbitration clauses persist today!

So when I consider all of this, I think Valve is an alright company. They're not saints, but even if the bar wasn't so terribly low, they'd probably still be above average overall. That can be true simultaneously with them still having bad practices that we don't all like.


In my opinion, three basic things are needed:

- Device emulation: uinput covers this; requiring root is reasonable for what it does. (A brief uinput sketch follows this list.)

- Input injection: Like XTEST, but ideally with permissions and more event types (e.g., tablet and touch events). libei is close, but I think it should be a Wayland protocol.

- UI automation: Right now I think the closest you can get is with AT-SPI2, for apps that support it. This should also be a Wayland protocol.
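
To make the first bullet concrete, here's a minimal sketch of the uinput route (assuming the python-evdev package and permission to open /dev/uinput, which usually means root or a udev rule): it creates a virtual keyboard and types a single key.

    from evdev import UInput, ecodes as e
    import time

    # Declare a virtual device capable of emitting EV_KEY events for KEY_A.
    ui = UInput({e.EV_KEY: [e.KEY_A]}, name="example-virtual-keyboard")
    time.sleep(1)                    # give the compositor a moment to pick up the new device
    ui.write(e.EV_KEY, e.KEY_A, 1)   # key down
    ui.write(e.EV_KEY, e.KEY_A, 0)   # key up
    ui.syn()                         # flush the event batch
    ui.close()

It works the same under X11 and Wayland precisely because it injects at the kernel level, which is also why it needs elevated privileges and can't target a specific window.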

None of these are actually easy if you want to make a good API. (XTEST is a convenient API, but not a particularly good one. Win32 has better input emulation and UI automation features IMO.)

Also the tangent about how crazy the compatibility layers are is weird. Yes, funny things are being done for the sake of compatibility. XWaylandVideoBridge is another example, but screen sharing is an area where Wayland is arguably better (despite what NVIDIA has to say) because you can get zero copy window and screen contents through PipeWire thanks to dmabufs.

Some of the lack of progress comes down to disagreements. libei mainly exists, by my best estimate, because the GNOME folks don't like putting things in Mutter, and don't want to figure out how to deal with moving things out of process while keeping them in protocol. (Never mind the fact that this still has to go through Mutter eventually, since it is the guy sending the events anyway...) However, as far as I know, lack of progress on UI automation and accessibility entirely comes down to funding. It's easy to say "why not just add SetCursorPos(x, y)" and laugh it off, but attacking these problems is really quite complex. There was Newton for the UI automation part, but unfortunately we haven't heard anything since 2024 AFAIK, and nobody else has stepped up.

https://blogs.gnome.org/a11y/2023/10/27/a-new-accessibility-...

Color management is the perfect example of how a simple ask can be complicated. How hard could it really be? Well, see for yourself.

https://gitlab.freedesktop.org/wayland/wayland-protocols/-/m...

If Wayland lasts as long as X11 did, it would be preposterous not to spend the time to try to get the "new" version of these things right, even if it is painful in the meantime.

After all, it isn't like UI automation on Linux was ever particularly good. Anyone who has ever used AutoHotkey could've told you that.


This is a good and informative comment.

While that is true, I feel like it is irrelevant here since it seems like Okta definitely wants (and perhaps needs) the fixes. God only knows why GitHub still forces it on, though. Early on it might've been some mechanism to encourage people to accept contributions to push the social coding aspect, but at this point I have no idea who this benefits; it mostly confuses people when a project doesn't accept PRs.

> Okta definitely wants (and perhaps needs) the fixes

They definitely don't want them if their process requires signed commits and their solution is 1) open another PR with the author's info and then sign it for them, and 2) add AI into the mix because git is too hard, I guess?

No matter how you slice it, it doesn't seem like there are Okta employees who want to be taking changes from third parties.


I think that they absolutely still want the free labor. All of those signals just suggest that they're not willing to reciprocate any effort that you put in when you contribute.

Social on today's Internet = bots and occasionally trolls

IANAL but unfortunately, I think the fix itself shown here might be too simple to actually clear the bar for copyright eligibility. (And in fairness to copyright law, it is basically the only sane way to fix this.) That means that there's probably not much you can really do, but I will say this looks fucking pathetic, Okta.

I'm more confused by the fact that the OP freely submits a PR into an open source repo but then wants to use "copyright" because the code he submitted ended up being used under the wrong name, which was then corrected.

Licensing your code under open source licenses does not nullify your rights under copyright law, and the license in this case does not waive any rights to attribution.

It would indeed be a copyright violation to improperly attribute code changes. In this case I would absolutely say a force push is warranted, especially since most projects are leaning (potentially improperly) on Git metadata in order to fulfill legal obligations. (This project is MIT-licensed, but this is particularly true of Apache-licensed projects, which have some obligations that are surprising to people today.) A force push is not the end of the world. You can still generally disallow it, but an egregious copyright mistake in recent history is a pretty good justification. That or, literally, revert and re-add the commit with correct attribution. If you really feel this is asking too much, can you please explain why you think it's such a big problem? If it's such a pain, a good rule of thumb would be to not fuck this up regularly enough that it becomes a major concern when you have to break the glass.


Why is it confusing to you to expect attribution?

That's not the confusing part; it's rather confusing to threaten to sue for copyright because of mistaken attribution.

Mistaken attribution, or taking something that doesn't belong to you and saying it belongs to someone else, is a core concern of copyright law and should not be confusing to anyone who has dealt with it before.

What is your understanding of what license and rights the author was providing them? Understanding this, I can figure out where you are confused.


He even asked them to force-push a new history because they got the name wrong!

Mistakes happen; I guess this hurts his 'commits in a public repo' CV score.


I didn't see any threat to sue. What's your source?

I generally like Thunderbird... but something is weird. Whatever happened to Sync? It was around the corner for the next release like two years ago. And I'm not complaining about Exchange support, but I am a bit sad that JMAP is nowhere to be found yet.

We implemented this in the Daily build of the desktop app last year, using a staging environment for Firefox Sync. But Firefox Sync is called Firefox Sync because it’s built for Firefox. Thunderbird profiles, in comparison, have a lot more data points. This meant we had to build something completely different. As we started to spin up Thunderbird Pro, we decided it made more sense to have a Thunderbird account that would manage everything, including Sync. Unfortunately, this meant a lot of delays. So Sync is still on our radar, and we hope to have it next year, barring further complications. Source: https://blog.thunderbird.net/2025/09/state-of-the-thunder-mo...

In other words, it was more work to adapt Firefox Sync than they thought at the beginning. It's still actively developed, so fingers crossed it's coming soon.


I think you're still in the edit window, FYI. (At least for a few more minutes.)

It's a non-sequitur.

One thing that makes Cloudflare worse for home usage is that it acts as a termination point for TLS, whereas Tailscale does not. If you use a Tailscale Funnel, you get the TLS certificate on your endpoint. With Cloudflare, they get a TLS certificate for you, and then strip and optionally re-add TLS as traffic passes through them.

I actually have no idea how private networks with WARP are here, but that's a pretty big privacy downgrade for tunneling from the Internet.

I also consider P2P with relay fallback to be highly preferable to always relaying traffic through a third party. Firstly, fewer middlemen. Secondly, it continues working even if the coordination service is unavailable.
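
If you want to see the difference for yourself, a quick standard-library sketch (the hostname is a placeholder) is to check who issued the certificate your client actually receives: a Cloudflare-proxied hostname presents a Cloudflare-issued certificate because TLS terminates at their edge, while a Tailscale Funnel hostname presents the certificate held by your own node.

    import socket, ssl

    host = "example.com"  # placeholder hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The issuer of the leaf certificate tells you who terminated TLS.
            issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
            print(issuer.get("organizationName"), "/", issuer.get("commonName"))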


I ended up building something in this space recently (TunnelBuddy – https://www.tunnelbuddy.net – I’m the author) that lets you use a friend’s machine as an exit node over WebRTC.

One of the design decisions I made was P2P or nothing: there’s a small signalling service, but no TURN/relay servers. If the peers can’t establish a direct connection, the tunnel just doesn’t come up.

The trade-off is fewer successful connections in weird NAT setups, but in return you know your traffic never transits a third-party relay – it goes straight from your client to your friend’s endpoint.
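
To make that concrete, the general shape of it in WebRTC terms is an ICE configuration with a STUN server for NAT discovery and no TURN relay at all. Here's a rough sketch of that idea using the aiortc library (the STUN URL is a placeholder, and this isn't TunnelBuddy's actual code):

    from aiortc import RTCPeerConnection, RTCConfiguration, RTCIceServer

    # STUN only: peers can discover their public addresses, but there is no
    # TURN server to relay traffic if a direct path can't be negotiated.
    config = RTCConfiguration(
        iceServers=[RTCIceServer(urls="stun:stun.example.org:3478")]
    )
    pc = RTCPeerConnection(configuration=config)
    # Offer/answer still flows through the signalling service as usual; if ICE
    # can't find a working candidate pair, the tunnel simply never comes up.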


My traffic will transit third parties all the time, since it's going over the Internet. What's the problem with relays, if the traffic is end-to-end encrypted?

Fair point!

- With a TURN/relay, you’re introducing a single, purpose-built box that:
  - sees all the tunnel metadata for many users (IP pairs, timing, volume),
  - is easy to log at or subpoena/compel,
  - and becomes a natural central chokepoint if someone wants to block the system.

- Without that relay, your traffic still crosses random ISPs/routers, but:
  - those hops are *generic Internet infrastructure*, not “the TunnelBuddy relay”,
  - there’s no extra entity whose whole job is to see everyone’s flows.


Zero Trust, except for the trust in Cloudflare.

I generally prefer Tailscale and trust them more than Cloudflare not to rug-pull me on pricing, but the two features that push me towards cloudflared are custom domains and client-less access. I could probably set it up with Caddy and some plugins, but then I still need to expose the service and port forward.

I'm definitely not trying to dissuade anyone from using Cloudflare, just making sure people realize the potential privacy implications of doing so. It isn't always obvious, even though some of the features pretty much require it (at least if they're handled entirely on Cloudflare's side; you could implement similar features split between the endpoint and the coordination server without requiring full TLS stripping. Maybe Tailscale will support some of those as features of the `serve` server?)

> client-less access

JFYI, Tailscale Funnels also work for this, though depending on your use case it may not be ideal. Ultimately, Cloudflare does handle this use case a bit better.


Tailscale Funnels do work, but they're public only. No auth.

Yeah, because the auth can't be done on Tailscale's end if they don't terminate the TLS connection. However, it is still possible to use an authentication proxy in this situation. Many homelab and small to medium size company setups use OAuth2 Proxy, often with Dex. If you wanted to get fancier, you could use Tailscale for identity when behind the firewall and OAuth2 Proxy when outside the firewall.

This may seem like a lot of effort and it is definitely not nothing, but Cloudflare Tunnels also has a decent number of moving parts and frankly their authentication gateway leaves a bit to be desired for home users.


Tailscale ‘serve’ works well at my startup. You still get SSL and DNS, but unlike Funnel it’s limited to your Tailscale network.

> I could probably set it up with caddy and some plugins, but then I still need to expose the service and port forward.

Not so! I have custom domains on my Tailnet with Caddy. You just need to set up Caddy to perform the ACME DNS challenge to get certs for your domain (ironically I use Cloudflare for DNS because they make it very easy to set this up with just an API key). No plugins or open ports needed.


That's a fair personal decision, but if I had to put money on it: a new company that raised 160 million in VC funding this year alone vs. an established, profitable company with a track record of offering free services for many years already? I'd put my money on the latter.

> Cloudflare […] acts as a termination point for TLS

This doesn’t sound zero-trust at all to me. In fact, it’s as far from zero trust as you can get.


The other option from this great list (https://github.com/anderspitman/awesome-tunneling) which seems to meet both sets of goals is NetFoundry.

1. End-to-end encryption.

2. Performance and reliability. 100+ PoPs in all major clouds running their data plane routers if they host (still E2EE), or run routers anywhere if you self-host. Dynamic routing to find best paths across the routers.


I don't see any indication that NetFoundry zrok supports end-to-end encryption from the client to the web server. The default configuration definitely terminates SSL on NetFoundry's server, and I don't see any documentation for how to avoid that. There's a TCP tunneling mode, but servers that use this mode can only be accessed by clients that are themselves also connected to the NetFoundry VPN service, not by clients on the public web. What's needed is a TLS tunneling mode that figures out the correct target via SNI, and zrok doesn't seem to provide that.

You are correct: zrok doesn't support mutual TLS. zrok is the free offering that NetFoundry supports, so it's easy to see why you looked there for information.

The productized version, NetFoundry Frontdoor (docs here: https://netfoundry.io/docs/frontdoor/how-to-guides/create-mt...), is what offers mutual TLS support.

It'll still terminate TLS at the servers, though. It's not mTLS all the way through to the endpoint.


> It'll still terminate TLS at the servers, though. It's not mTLS all the way through to the endpoint.

That was the entire point, though. If NetFoundry Frontdoor can see the traffic (because it gets terminated on their servers, mTLS or not), then it's not end-to-end encrypted as the parent commenter claimed.

I think the issue is zrok vs. NetFoundry/OpenZiti. Zrok is the easy button to project a public endpoint from inside a network. It is not encrypted all the way through, as it is a proxy solution. NetFoundry/OpenZiti provides methods to provide tunnels all the way through. NetFoundry is a company, OpenZiti is a FOSS project/technology sponsored by NetFoundry, and zrok is a product of NetFoundry built on OpenZiti tech, so it is easy to cross things up. I think the comment was in regard to NetFoundry/OpenZiti, while your response referenced zrok. The list above has both.

I should have been clearer - you have the option of:

+ E2EE via NetFoundry's zero trust products

+ non-E2EE via NetFoundry Frontdoor


Tunneling P2P with relay fallback is essentially what connet [1] aspires to be. There are a lot of privacy/security benefits to exposing endpoints only to participating peers. You can either run it yourself or use the hosted version [2].

[1] https://github.com/connet-dev/connet

[2] https://connet.dev


TLS termination is neither required nor enabled by default, right?

For tunnels, many of the features basically have to work this way, so I'd be surprised if you could avoid it. It's also impossible to avoid if you use normal Cloudflare "protected" DNS entries. You can use Cloudflare as just a DNS server, but that's not the default; by default it will proxy everything through Cloudflare, since that's kind of the point. You can't cache HTTP requests you can't see.
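
A quick way to see whether a hostname is being proxied like this is to look at the response headers; a small sketch using the requests library (the URL is a placeholder):

    import requests

    resp = requests.get("https://example.com", timeout=10)  # placeholder URL
    print("server:", resp.headers.get("Server"))                    # "cloudflare" when proxied
    print("cf-ray:", resp.headers.get("CF-RAY"))                    # only present behind Cloudflare's edge
    print("cf-cache-status:", resp.headers.get("CF-Cache-Status"))  # set when their cache handled the request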

Correct. We run it without it and just use the DNS filtering aspect.

How does it do DNS filtering without TLS interception or takeover of DNS resolution?

In what way are DNS resolution and TLS related except for the little-used DoT?

That's a big privacy issue if they strip TLS. Is there a technical reason, or do they just not want to offer privacy?

Is it technically possible to have something like Tailscale funnel but with something like Cloudflare Access authentication (at least for some options)?

That would be great!!


For that kind of end-to-end encryption I use pinggy.io TLS tunnels.

Anubis is definitely playing the cat-and-mouse game to some extent, but I like what it does because it forces bots to either identify themselves as such or face challenges.

That said, we can likely do better. Cloudflare does well in part because Cloudflare handles so much traffic, so they have a lot of data across the internet. Smaller operators just don't get enough traffic to really deal with banning abusive IPs without banning entire ranges indefinitely, which is not ideal. I hope to see a solution like Crowdsec where reputation data can be crowdsourced to block known bad bots (at least for a while, since they are likely borrowing IPs) while using low-complexity (potentially JS-free) challenges for IPs with no bad reputation. It's probably too much to ask of Anubis upstream, which is probably already busy enough dealing with the challenges of what it already does at the scale it is operating, but it does leave some room for further innovation for whoever wants to go for it.

In my opinion, there is no reason why a drop-in solution that mostly resolves these problems and makes it easier for hobbyists to run services again isn't at least plausible.
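
To make the "face challenges" part concrete, here's a toy sketch of a proof-of-work style check in the same spirit (it illustrates the general idea, not Anubis's actual protocol): the client must find a nonce whose hash meets a difficulty target, which is cheap for one page view but adds up quickly for a crawler fetching everything.

    import hashlib
    import itertools

    def solve(challenge: str, difficulty_nibbles: int) -> int:
        """Find a nonce so sha256(challenge + nonce) starts with N zero hex digits."""
        target = "0" * difficulty_nibbles
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    def verify(challenge: str, nonce: int, difficulty_nibbles: int) -> bool:
        """Verification is a single hash, so the server side stays cheap."""
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty_nibbles)

    # e.g. solve("per-visitor-challenge", 4) finishes almost instantly for one visitor,
    # but the cost multiplies across thousands of automated requests.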

