Supply Chain Attacks on Linux Distributions – Fedora Pagure (fenrisk.com)
216 points by akyuu 12 months ago | 85 comments


Because bash, for some goddamn reason, loads the bashrc for interactive shells AND when started by sshd, regardless of whether the shell is interactive or a tty is present. Bash (and only bash) literally has a special case for sshd that enables this kind of exploit.

As a result of this, git and rsync won't work at all if the bashrc on the remote machine writes any data to stdout. Like setting a window title.

To work around that, every bashrc on this earth needs a case statement that returns early, just to avoid this specific bug.
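A minimal version of that early-return guard, sketched here on the assumption that it sits at the very top of ~/.bashrc:

```shell
# Bail out before anything can write to stdout unless this shell is
# actually interactive ($- contains "i" only for interactive shells).
case $- in
  *i*) ;;        # interactive: run the rest of the bashrc
  *) return ;;   # sshd-spawned non-interactive shell: stop here
esac
printf '\033]0;%s\007' "$PWD"   # e.g. a window-title escape, now harmless to git/rsync
```

(`return` at the top level is legal in a sourced file, which is exactly how bash reads the bashrc.)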


Wait, ssh doesn't let you specify the remote command without running it through the user's shell? That seems like a deficiency too IMHO.


Command strings must be expanded into an argument vector by some kind of shell. SSH itself does not allow executing a program from an argument vector the way execv does.
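A local sketch of the difference (here `bash -c` stands in for what sshd does with the command string; no real ssh involved):

```shell
# execv-style: argument boundaries survive intact
bash -c 'printf "[%s]\n" "$@"' _ one "two words"
# prints [one] then [two words]

# command-string style, which is how ssh actually works: everything is
# flattened into one string and re-split by the remote shell
bash -c "printf '[%s]\n' one two words"
# prints [one], [two], [words]
```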


It is simpler to export paths locally, so the remote doesn't have to know your file/folder structure.


Sure, it's convenient to have PATH, but why not have an optional way to say "hey, ssh, run /usr/bin/rsync on the remote with the following arguments, directly and without a shell please"? Equivalent to a Dockerfile containing

  CMD foo
vs

  CMD ["/bin/foo"]
(IIRC, at least - been a minute since I needed to do this.)


See the OpenSSH bug above.


This is sort of a feature that allows for restricted shells such as menu systems.


It seems like ForceCommand should work regardless?
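It should: the sshd_config directive (spelled ForceCommand) overrides whatever command the client asked for. A hypothetical stanza, with a made-up wrapper path, might look like:

```
Match User git
    # the wrapper inspects $SSH_ORIGINAL_COMMAND and allows only
    # git-upload-pack / git-receive-pack
    ForceCommand /usr/local/bin/git-only-wrapper
```

Note that the forced command is still run through the account's login shell, so it restricts what runs but doesn't change how it is launched.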


It's a limitation in the ssh protocol. I wish they would fix it, but I'm not holding my breath. Trying to do anything about it would be a compatibility nightmare.

If you need to pass data through ssh you're better off doing it through stdin.
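For example, to get data to the remote side without ever putting it in the command string (a sketch; `bash -c` stands in for `ssh host` so it runs locally):

```shell
printf 'hello\n' | bash -c 'tr a-z A-Z'
# prints HELLO; with real ssh it would be:
#   printf 'hello\n' | ssh host 'tr a-z A-Z'
```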


The suggestion to add it is here; basically they were skeptical that there are use cases for it:

https://bugzilla.mindrot.org/show_bug.cgi?id=2283


> We can’t change git’s shell to /sbin/nologin or /bin/false, or users wouldn’t be able to connect over SSH.

Git actually has a solution for this! I don’t know if it would work with the custom python stuff going on, but you can set the login shell to `git-shell`
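A hypothetical setup for a dedicated `git` account (paths assumed; check `command -v git-shell` on your system first):

```shell
command -v git-shell                               # e.g. /usr/bin/git-shell
echo /usr/bin/git-shell | sudo tee -a /etc/shells  # chsh only accepts listed shells
sudo chsh -s /usr/bin/git-shell git                # ssh git@host now only runs git commands
```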


Yeah, I tried that, but it doesn't work well with git-lfs (large file storage). At least, it didn't last time I tried.


So, it works perfectly for most sane use cases of git.


Are you saying storing pointers to large files isn't sane to do in Git? What is your suggested solution for dealing with large assets you want versioned and easily accessible?

If you're really dogmatic about it, I guess you would have no dependency lock files either, but would commit all that code directly to git instead of having references. Some people do that, so it wouldn't be a huge surprise.


It means that the large majority of projects that don't use git-lfs can improve their security immediately and without any trouble.

It also may mean that git-shell could use a few PRs adding whatever is missing for git-lfs to work, given that git-lfs does not do anything extra fancy.


use something that can actually version your large files. git-lfs is silly and trying too hard. it's the literal faster horse of file versioning. it's so wrong by design i don't even know where to start.


Git isn't that opinionated.


Or just use git over https. And heck, if that's such a big problem, switch to a vcs that you can properly manage


You shouldn't use Git over HTTPS. With SSH, you can use a hardware authenticator that requires both proof of ownership (i.e., the unlock PIN) and proof of possession (i.e., physical touch) out of the box. That's technically possible over HTTPS, of course, but I have yet to see a Git server that works that way.
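For the record, the out-of-the-box setup being described is roughly this (a sketch; assumes OpenSSH 8.4+ and a FIDO2 key with a PIN set):

```shell
# Generates a hardware-backed key. Touch (proof of possession) is required
# by default for *-sk key types; -O verify-required additionally demands
# the PIN (proof of ownership) for every signature.
ssh-keygen -t ed25519-sk -O verify-required -f ~/.ssh/id_ed25519_sk
```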


Both HTTPS and SSH are perfectly adequate transports for git, both can be used securely, both have their footguns. No need to be tribal about it. Use what's appropriate in your situation.


FTR, the "proof of possession" relies on a (specific) random number being received from a device.

It won't be long before that hardware device that is supposedly being held by a person becomes a soft device, which will then be impersonated.


It already is (the macOS Passwords app).


Good point. Looking at this from the user's perspective, my concern is to prevent someone with access to my computer from using a connected hardware authenticator. Maybe that physical touch activation of the authenticator has a better term—proof of presence?


For my money, I'm incredibly sceptical of the technology. To me it looks like a false sense of security.

There are a number of problems with the idea:

- The device (e.g. phone or tablet) gets owned and gives up its value as a provider of confirmation

- Users end up nominating a password service like 1Password as the device

- Someone manages to convince the authentication systems that their faux device emitting the signal is the device in question

For the second point: when I was integrating a (FOSS) passkey system into a product, it drove me up the wall having to have my phone right next to me every time I was working. I couldn't leave it in the next room on charge, for example.

As a user I drew no comfort from the system, viewed it as a burden, and was concerned about replacing the device in the event of loss, theft, or destruction - which is usually the pathway malicious individuals take to insert their own device as the one to be used instead of the original.


Indeed, we used those RSA SecurID key fobs for VPN login in some places, and people would just "forget" them in the laptop bag.

At a certain point, user-behavior IDS constraints become more important than building a deeper moat. Anything 2FA based on a cellphone is $23 away from some adversaries' control... ultimately more security theater in my opinion, YMMV =3


> The device (eg. Phone or Tablet) gets owned and gives up its value as a provider of confirmation

Any device could get exploited. The requirement that it happen to both that device and my computer, simultaneously, significantly raises the difficulty bar.

> Users end up nominating a password service like 1Password as the device

I don't see the issue. Isn't that the user's choice to make? If you want better security then don't make that particular choice.

> Someone manages to convince the authentication systems that their faux device emitting the signal, is the device in question

By that logic why have passwords or keys or anything at all? Someone might exploit the server and get in regardless so why bother?

I've got at least a few objections to modern auth schemes but these aren't them.


> Any device could get exploited.

Yes.

> The requirement that it happen to both that device and my computer, simultaneously, significantly raises the difficulty bar.

Well, only one of the two devices needs to be exploited. That's twice the attack surface for you.

> I don't see the issue. Isn't that the user's choice to make? If you want better security then don't make that particular choice.

Sure, and allowing people to use "password" as their password is bad, but the design of this system encourages using a "cloud" device, because of the issues already pointed out.

> By that logic why have passwords or keys or anything at all? Someone might exploit the server and get in regardless so why bother?

The system has added nothing that makes passwords or keys more secure.

> I've got at least a few objections to modern auth schemes but these aren't them.

/me shrugs

I'm almost at the point of caring.


> only one of the two devices needs to be exploited

I think we must be thinking of very different things here. Maybe I misunderstand the scheme you're objecting to. Would you mind elaborating?


Why do you think that both devices need to be exploited for a problem in the scheme to occur?


For some reason the comment responding to this cannot be replied to (probably a good thing given the lack of understanding that author has demonstrated thus far)


What's the point of participating on a site like HN if instead of technical discussion you opt to dodge questions and make condescending but vague statements? How could I possibly demonstrate lack of understanding when you actively avoid clarifying what you're even talking about in the first place?

Perhaps what you had in mind are the PoP tokens from RFC 7800? (Sure would be nice not to play guessing games though.) Given that those are intended to replace OAuth bearer tokens they are quite a decent upgrade from a security perspective. Used with a purely software based password manager it is no longer enough to simply exfiltrate a token from the browser. Used with TPM backed keys it is approximately equivalent to the chip in a bank card.


> What's the point of participating on a site like HN

Because I get to talk to people that actually understand technical discussions.

> How could I possibly demonstrate lack of understanding when you actively avoid clarifying what you're even talking about in the first place?

You came into the thread with no understanding, then some time later decided to play word games.

All of the rest of your contributions from now are going straight to /dev/null where they belong.


> the scheme

As I already stated, I'm no longer certain what scheme we're actually talking about. Is there some reason you don't feel like elaborating?

For my part, I thought we were talking about a proof-of-possession scheme in addition to regular auth. The "regular auth" scheme was unspecified, so it could be a password, software-based TOTP, a hardware token, whatever.

Widespread proof of possession already exists in the form of chips in bank cards. It seems to be quite resilient to attack in practice.

Even if I use software on my phone for proof of possession you still have to compromise the computer I'm actually logging in with, no? So the attack surface is either the same (primary device) or increased (primary plus secondary, simultaneously) depending on various practical details regarding implementation and usage.

If the "regular auth" scheme involves a half decent hardware token then good luck compromising that remotely. I won't be betting in your favor.


> my concern is to limit the ability of someone with access to my computer from using a connected hardware authenticator.

You don't need anything special to do that. Just select a hardware token that refuses to sign anything until it receives physical input from you.


Why not? Your password manager or passkeys do the same thing.

Heck, just trust the public key as the authentication header.

Or use authentication certificates.

If ssh is such a big problem, use something else


The key difference is that HTTPS certificates often require signing-authority integrity and leak-free SSL libraries.

Traditionally, both facets of the third-party trust model have had CVEs over the years. SSL protocol misconfiguration is also very common, and connections can be downgraded by adversaries to a vulnerable version of the protocol.

It could be argued ssh is a weaker trust-on-first-use model, but in most cases the keys will rarely change over the short service life of the server instance... and the server may be set up from a local physical terminal, with keys communicated out-of-band to remote users.

At some point one has to admit if someone really wants in they will physically pull the drive in the data center. However, people using web vulnerability scanners on your systems are less of a nuisance.

Best regards =3


> It could be argued ssh is a weaker Trust on first use model

That is but an optional aspect of the configuration (albeit by far the most common).


> The key difference is https certificates often require signing authority integrity

Couldn't you use your "traditional" SSH keys to generate an ephemeral self-signed client certificate which shares the same keypair as your SSH key? That would at least solve the CA issue for the client-to-server authentication.


It is a complex issue, and while public-key encrypted exchanges could bootstrap a remote secure session (gpg/pgp etc.) to re-key the ssh server, there is still no guarantee someone isn't keeping a copy of that ssh session with the key-pair from before you logged in as root.

One should not completely trust TOFU over remote sessions, but it is the most practical solution. This is the inconvenient secret of cloud technology, and why hosting companies' web panels often include abstracted firewall/VPN settings or proprietary key management (or the same authoritative-trust problem manifests).

Good luck, =3


> If ssh is such a big problem, use something else

It's not a problem. You can use SSH today (and have been able to for years now) with a Yubikey and the like. I'm using Git over SSH with Yubikeys and it works.

Use Git over SSH, use a Yubikey (or whatever suits you), set the login shell to git-shell.


Yubikey does look interesting; I've thought of getting one. Sorry for the stupid question, but if you use it with SSH, does that mean I can somehow use my existing id_ed25519 file with the Yubikey, or does it need to generate a new one?


You need to generate a new one on the key (strictly speaking it isn't generated so much as already present on the key, but that's a technical detail). The idea is that the private key cannot leak because it never leaves the key.


Is there no way to use my existing one with either version / model? :(


No. The whole point of hardware keys is that the private key bytes are securely locked inside the key, with no way out (cannot steal) and no way in (cannot forge / tamper with).

Reuse of an existing key for any reason after enrollment is not a good idea. A reluctance to just enroll another key may mean trouble with key rotation and revocation, and thus problematic security procedures. Key rotation should be fast, painless, and regular.


I have two more questions.

1. Does it ever expire?

2. What would you do exactly if you were to lose the hardware key? Same thing as if you lost your id_ed25519 file?

> Key rotation should be fast, painless, and regular.

I agree. Right now I keep changing the expiry date of my GPG keys (once in a good while).


1. A hardware key usually has no clock. But software such as GPG can set and check the key's expiration date, and complain. For git, you usually have an SSH key for access and a GPG key to sign commits, even though signing with an SSH key is possible by now. I keep signing with GPG, so that if my SSH key is rotated or revoked, my commits remain signed.

2. If you lose a key, whatever its nature, you unenroll it where you have it enrolled. You better have some one-time codes set up, or another alternative method, like a password login to your VM via console.


Thank you for the answers! I use GPG to sign, too, and authenticate with SSH (but with my GPG key that has an A subkey through gpg-agent).


[flagged]



Sweet, a new thing vulnerable to supply chain attacks to fix things vulnerable to supply chain attacks.


Agreed. The ssh service on many git servers like gitea uses its own user-specific process instances to handle the connection.

Combined with port-knocking and fail2ban, the setup has proven rather reliable over the years. The Go language can make surprisingly resilient servers if you have the memory available.

The ssh key handling in gitea requires manual setup, and thus does not necessarily even have to use the same administrative user login key sets.

Best regards =3


It supports the macOS osxkeychain helper for storing credentials:

https://git-scm.com/docs/gitcredentials#_available_helpers


Nice writeup!

Thinking generally, it seems something like the xz/lzma vulnerability could be snuck in by 1-2 nefarious people colluding across packaging and upstream development, especially if we're talking about nation-state actors who can afford to look legit for years before being trusted to work without oversight - then, when no one is watching, sneak in a backdoor.

I feel we are in a very innocent age and will look wistfully back at the days we trusted our anonymous open source brethren.

On macOS I think about this every time I “brew install”, and every time oh-my-zsh auto-updates. Do Linux users think about this?


I think about this every time I install software. As a citizen of a small European nation, 100% of the software I use is under control of a foreign government, and I trust none of them. At least with open source software, there is a better chance of nefarious changes being detected by at least one of the parties building and packaging the software. With proprietary software, even that small level of assurance is not available.


There is a very big risk difference between an upstream package like xz and running a random application from brew or, god forbid, zsh auto-update.

>Do Linux users think about this?

Yes, that's why I avoid packages not in the official distro repositories and, where possible, further minimize risk with additional security layers such as sandboxing (flatpak and firejail), mandatory access control (AppArmor or SELinux), virtualization (KVM/QEMU), and application firewalls (OpenSnitch).
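For instance, a one-off confinement of an untrusted binary might look like this (a sketch; flags taken from firejail's documented options, the binary name is made up):

```shell
# no network, and a throwaway private home directory
firejail --net=none --private ./untrusted-binary
```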


I think about it all the time. But I find it overwhelming and shut down.

I'll just enumerate my random thoughts

* If you're worried about nations, remember that they can, quite literally, send ninjas-in-attack-helicopters at you.

* Most nations already have "laws" that "require" you to provide passwords/data access on demand[4ab].

* If you use any "cloud", there's a high likelihood that they're already backdoored, either through legal means (i.e. through National Security Letters) or "not so legal, but who's going to stop them?" means such as just plain old hacking[1]

* All consumer CPUs already have builtin backdoors that can't really (but kind of, but who really know if it's effective) be disabled[2abc]

* Most printers print secret codes on printed documents that link back to the printer[3]

* I have no control over device firmware and some important drivers. I really don't know what my network card firmware is doing when I'm not using it and it has DMA to my system RAM. I "need" nvidia proprietary drivers to have a decent experience, no idea what they actually do.

* Nearly every piece of software includes some form of "Analytics" or "Telemetry", which often doesn't actually turn off when you click the stupid opt-out button.

[1] https://www.npr.org/sections/thetwo-way/2013/10/30/241855353...

[2a] https://en.wikipedia.org/wiki/Intel_Management_Engine

[2b] https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...

[2c] https://en.wikipedia.org/wiki/ARM_architecture_family#TrustZ...

[3] https://en.wikipedia.org/wiki/Printer_tracking_dots

[4a] https://en.wikipedia.org/wiki/Key_disclosure_law

[4b] https://www.eff.org/wp/digital-privacy-us-border-2017


> I feel we are in a very innocent age and will look wistfully back at the days we trusted our anonymous open source brethren.

There seems to be some kind of inflection point of things:

* Enough users

* Enough time

* Enough cost

* Enough financial reward

Eventually, as those things culminate together, there is a loss of the shared value and...innocence(?) of any scene.

I think back to when RMS was aghast at the idea of MIT adding usernames and passwords to their machines in 1977. [1] It's a battle I still feel bad that he lost; a kind of presumption of universal access is a thing that I am intrinsically drawn to. It breaks my heart to think that any sort of restriction exists to technological access, in spite of the obvious need for computer security. Think about it - in the modern day, it's not only unthinkable, it's largely impossible to access most computers without some form of locked down authentication - even for publicly accessible computers at libraries! Most operating system setups are designed with a fundamental assumption that you will be using both, regardless of whether it's a computer shared by 50 people in an office, or a personal laptop in your home in a locked room that no other person will ever touch or see.

I sit here and I think about how there was a period where it was simply understood that, you treat computers and that network with a kind of prestige and respect. That you conduct yourself and the things you do in a manner that is good, because you have a desire for that thing to continue being used, accessible, and enjoyable by other people who have that same inner passion - that hunger! That, there did not need to be a threat of some kind of legal consequence - no laws even existed in regards to it. You did so...merely because you grasped that it was right, and you desired to do what was right. You did things right because you had a passion, a care, and a shared common interest among all of those other people to use those systems in a way that was good for all.

Sure, perhaps there were a few jokes and gags. A few fights or arguments over this or that. Plenty of "frivolous" things that could be done like play games and whatnot, but there was a kind of obvious social understanding that you did not do what is bad because it was bad. You wouldn't do bad things for the same reason why you wouldn't stop in the middle of a sidewalk and start using the bathroom in public - because its unacceptable to do so; it was inconsiderate to do so.

I feel like there is a flower of naivety that has wilted with time. It's not quite dead, but so many of its petals have dried up, fallen off, and crushed beneath the boot of financial incentive.

I imagine it to be similar to how the invention of the car must have felt. So much early optimism and obvious benefits. Only to be used to escape police. To transport booze and drugs. To steal from owners so it can be sold for parts. To kidnap children in. To hit pedestrians with. To use in wartime. It's a grim bleakness in life, that all things that can be cherished and enjoyed by the kindhearted, so horribly abused by the malcontent.

[1] https://en.wikipedia.org/wiki/Richard_Stallman#Harvard_Unive...


For those looking for alternatives to the status quo on Linux supply chain security, check out [Stageˣ].

It is 100% deterministic, hermetic, and reproducible. It is also full-source bootstrapped from 180 bytes of human-auditable machine code, all released artifacts are multi-party reproduced/reviewed/signed, and it is fully container-native all the way down "FROM scratch", making it easy to pin dependency hashes and reproduce your own projects with it.

I started it after years of unsuccessful pleading with existing distros to stop giving ultimate solitary trust to -any- maintainers or sysadmins involved in the project.

https://codeberg.org/stagex/stagex


This project looks amazing — I didn't think bootstrapping like this was possible. Kudos on the project :-).

This might displace chainguard as my goto docker images :-).


Pardon my naivete but I've heard Nix described in many of the same terms. What are the differences and similarities between Stageˣ and Nix/NixOS?


I am obviously pretty biased having authored a failed RFC to Nix to mandate signing, and having founded StageX, but here goes.

Unlike StageX, Nix has a wikipedia-style low friction approach to packaging allowing a large community to maintain a huge number of packages which are signed by a central trusted party, while being -mostly- reproducible. It relies on a custom language and toolchain for packaging. Nix is designed for hobbyists seeking to reproduce their own workstations.

Unlike Nix, StageX is 100% reproducible and all changes are independently signed by authors and reviewers, then artifacts are reproduced and signed by multiple distributed maintainers. It is also full source bootstrapped, and is built on the broadly compatible OCI standard for packaging. StageX is designed for production threat models where only compiler and infrastructure toolchains are needed and trusting any single person is unacceptable.

One maintainer even uses Nix as a workstation to contribute to StageX. They have fundamentally different goals and use cases.


I've got to say, Git resignifying "--" and requiring "--end-of-options" instead is bonkers


The pathway of untrusted/malicious input -> trusted command line argument seems to be a common problem, and one that could possibly be mitigated by better type/taint checking.

It looks like there is some prior work in this area, but it hasn't resulted in commonly available implementations (even something as basic as type/taint-checking versions of exec() etc. on one side and getopt() etc. on the other).
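The classic instance of that pathway is a crafted filename that begins with a dash; POSIX-style option parsing offers "--" as the mitigation:

```shell
cd "$(mktemp -d)"
touch -- '-rf'    # without "--", touch would try to parse -rf as options
ls                # shows a file literally named "-rf"
rm -- '-rf'       # "rm -rf" here would mean something very different
```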


I could've sworn I remember something about bash and glibc cooperating to indicate which arguments come from expanded variables but I cannot find anything on the internet or in the sources. Either I'm going insane or it was an unmerged proposal.



Yeah this is exactly what I was thinking of, thanks for finding it.


> "disabled later" because "it caused problems"

;-(


Seems like it would be quite painful to do that in C without heavy refactoring.

Maybe a Rust alternative?


Why would that be painful in C? The code is already handling all the pieces


Besides what has already been said in other comments, I think reality has already shown how it _is_ painful in C. It's painful to implement something that is both safe and ergonomic. The amount of subtly incorrect and/or divergent implementations of easier things out there is just incredible.


Painful to write vs painful to audit for correctness.

Writing code that you barely understand is a surefire recipe to have /no clue/ how to debug said code (because debugging it will be more complex).

(let alone audit it for correctness/security).


That just sounds like you don't believe it's ever possible to change existing C code, which... is a position you can argue, but I'm pretty sure that bash and glibc are actively developed to the point where I wouldn't personally commit to that position.


Insert obligatory reference to "On Trusting Trust": what is the compiler doing? Has the preprocessor been vetted?

This makes hardware solutions look more and more attractive.


1. I don't see why you need to solve Trusting Trust to make libc and the shell more robust.

2. If we are worried about Trusting Trust, then Rust is worse; at least C has the wide range of compilers needed for diverse double-compiling and as of https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-... we arguably have a working solution. Rust only has a single compiler, and that compiler is used to build itself, making it the poster child for Trusting Trust targets.


Rust has a second compiler, mrustc, written in C++ and that is able to bit-reproduce rustc. This has been the case for a few years now.


Oh, excellent; I didn't realize that was capable of building rustc. In that case, I'm wrong and Rust is just as good as C.


Sadly, due to legacy.

"--" disambiguates revisions and paths in some commands, so another option was needed.


Just write all commands with structured I/O instead, like Powershell.

Now I want an operating system where everything is a YANG model...


>In addition, this is a self-service application, in the sense that anyone can create a Fedora contributor account and gain authenticated access to various services.

I legitimately wanted to get a package into Fedora a few years ago, a service that did not exist already, and I couldn't get past the fact that they require new contributor accounts to be sponsored by someone already a contributor. I was unable to secure sponsorship by anyone and just gave up.


this could be a useful mechanism actually... shame it didn't work out


Then you have someone to blame for sponsoring an exploit vs some unsponsored person who identified the same exploit, patched it, and no one bothered to check either way. I guess I got lucky never trusting Mint. I did trust Slackware, because I knew someone who trusted Pat V. I guess I also need to rub shoulders with Linux maintainers vs people like Marc Andreessen. I did meet RMS, but he has no personality. Woz? Wolfram? Anna V? I met the SUSE people, and one of the RedHat maintainers, who gave me a tee-shirt, a poster, two bumper stickers, and the latest distribution of RedHat. It was easier a decade ago...

Despite the amount of brilliance here, again, we never have had a single meet and greet.


Agreed, I have been a RHEL ecosystem user for 11 years now. My experience in the Fedora community was actually comforting to me.


Now that is cool.


NoLimitSecu, French cybersecurity podcast, released an episode yesterday with the authors: https://www.nolimitsecu.fr/compromission-de-distributions-li...

It was amazing to hear that they chose the weakest path, argument injection, and were able to find a vector within two weeks, twice (Fedora + openSUSE).


Does anyone else have the title overlay taking up 2/7ths of the top of the screen?


"Let's use red hat's products but let's not get them from red hat to save some bucks"

This is the risk you take




