Hiding from SELinux is clever, but SELinux (for most users not running MLS) is a final level of defense. If you get to the point where SELinux is saving your butt, you've got problems higher up in the stack.
Audit is supposed to be able to track anything and everything that happens on a Linux box. Every login, application, socket, syscall, all of it. The fact that they can bypass it is HUGE. You're not supposed to be able to disable auditd without rebooting the system (when correctly configured). And rebooting the system should* trigger other alarms for the security team.
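The mechanism behind "correctly configured", for anyone curious, is the audit immutable flag. A sketch (the rules file name here is just an example):

```
# /etc/audit/rules.d/99-finalize.rules  (example file name)
# -e 2 makes the audit configuration immutable: no rule changes,
# no disabling auditd, until the machine is rebooted.
# (Equivalent at runtime: auditctl -e 2)
-e 2
```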
The rootkit runs in ring 0; at that point all kernel-enforced security controls are potentially compromised. Instead, you need to prevent the kernel module from being loaded in the first place. There are multiple ways to ensure no further kernel modules can be loaded without rebooting the computer, e.g. by having pid 1 drop CAP_SYS_MODULE from its bounding set before starting any child processes. Once the module has been loaded, it's too late to do anything about the integrity of your system.
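Besides the capability approach, there's a dedicated one-way sysctl for this. A sketch (the write must be done as root, after all legitimate modules are loaded):

```shell
# The one-way switch (it cannot be cleared without a reboot):
#   sysctl -w kernel.modules_disabled=1

# Current state; also a cheap detection check, since once set
# it should stay 1 forever
cat /proc/sys/kernel/modules_disabled 2>/dev/null \
    || echo "no knob: kernel built with CONFIG_MODULES=n"
```
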
That is a critical observation. The last time I had to root an Android device, it had pretty robust defenses like dm-verity and strict (correctly configured) SELinux policies, and then everything collapsed because the system loaded an exFAT kernel module from an unverified filesystem.
Permitting user-loaded kernel modules effectively invalidates all other security measures.
What would it be checking against? There's no central signing authority the way there is with Windows. (I mean I guess a distro could implement that but then how would I load my own custom modules?)
The kernel provides the option to embed a signing key for kernel modules at compile time. But (AFAIK) you'll need to compile your own kernel to go that route.
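For the curious, these are the relevant kconfig options plus the in-tree signing helper. A sketch, assuming you're in your kernel build directory; the `.ko` name is just an example:

```
# .config fragment: refuse to load modules without a valid signature
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA256=y
# A key pair is auto-generated at build time unless you point
# this at your own cert:
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"

# Out-of-tree modules can then be signed with the in-tree helper:
#   scripts/sign-file sha256 certs/signing_key.pem \
#       certs/signing_key.x509 my_module.ko
```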
I don't mean this to come off as rude, but how much did you know about SELinux?
Because in my experience, when people are "dealing with weird...issues" and "[finding] little value in it" they usually don't understand what it is and how to use it.
Don't misunderstand my original post. SELinux is AMAZING. But, if SELinux in the default "targeted" policy is the thing that's protecting you, that's good, but it means there are some major bugs or misconfiguration higher up (i.e., in your web server).
I assume you know what a network firewall is. Think of SELinux like a "System Call Firewall". SELinux will protect you from many so-called "zero-day" vulnerabilities. It watches every syscall an application makes, looks at its policy, and decides if that syscall should be allowed/denied. It is a good thing.
However, SELinux is really not user-friendly, though it is extremely well documented and learnable (run `man -k selinux` to see all the man pages). Red Hat also has thorough documentation (https://docs.redhat.com/en/documentation/red_hat_enterprise_...)
Specifically, to your "weird permission issues". That is a "problem" with SELinux; it doesn't surface errors well. The TL;DR is: if you get a "permission denied" error, and you rule out the obvious (i.e., filesystem permissions), then you need to know to blame SELinux and look at the `/var/log/audit/audit.log` file.
That file is technically human readable, but there are tools that make it much easier, such as `ausearch` and `sealert -a`.
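For example, to pull recent SELinux denials out of the log (assuming the standard audit userspace tools are installed):

```shell
# Human-friendly view of recent SELinux (AVC) denials
ausearch -m avc -ts recent 2>/dev/null || true

# Or grep the raw log directly
grep -E 'avc: *denied' /var/log/audit/audit.log 2>/dev/null || true

# sealert (from the setroubleshoot-server package) explains a
# denial and suggests fixes:
#   sealert -a /var/log/audit/audit.log
```
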
"Now this is a horrible exploit but as you can see SELinux would probably have protected a lot/most of your valuable data on your machine. It would buy you time for you to patch your system."
...as a last line of defense. MAC is also a stronger system than DAC to begin with, so a lot of places may opt to have it in place anyway to catch inexperienced/careless/lazy admin mistakes. Sorry you struggled with writing SELinux policies, but it's a very valuable tool when you run systems that are exposed to the internet or other hostile environments.
This does not seem to work with Fedora Atomic. Because the system is read-only, the kernel module cannot be loaded. You would have to create an RPM package for the rootkit that you can then layer. In addition, due to Secure Boot, the kernel module would have to be signed with the same key as the system itself.
The encrypted page memory manager hardware in some ancient Sun systems prevented a lot of these context isolation problems. However, the modern IT landscape chose consumer grade processor architecture and bodged GPUs as the cloud infrastructure foundation.
Thus, there currently is economic inertia entrenching vulnerable system design. I don't think there is a company large enough to change the situation anytime soon, as the market has spoken. =3
Rule #3: popularity is not an indication of utility.
If Kernel Lockdown is enabled, a zero-day exploit is required to bypass module restrictions without a reboot.
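You can check whether Lockdown is active from securityfs (the bracketed entry is the current mode); a quick sketch:

```shell
# "[none] integrity confidentiality" means lockdown is compiled in
# but not enforcing; "[integrity]" blocks unsigned module loading,
# kexec of unsigned images, raw /dev/mem access, etc.
if [ -r /sys/kernel/security/lockdown ]; then
    cat /sys/kernel/security/lockdown
else
    echo "lockdown LSM not available on this kernel"
fi
```
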
Unfortunately, threat actors tend to have a stash of them and the initial entry vector often involves one (container or browser sandbox escape), and once you have that, you are in ring 0 already and one flipped bit away from loading the module.
The Linux kernel is not really an effective privilege boundary.
A kvm hypervisor is not perfect, as sandbox escape was demonstrated even with https://qubes-os.org/ . On modern AMD/Intel/ARM64 consumer processors it is not possible to completely prevent bleeding keys across regions.
Only the old Sun systems with hardware encrypted mmu pages could actually enforce context isolation.
If performance is not important, and people are dealing with something particularly nasty... then running an emulator on another architecture is a better solution. For example, macOS on an M4 with a read-only Windows amd64 backing-image guest OS is a common configuration.
It was a POC from shortly after the Spectre CVE dropped, and I'm not sure the source code ever made it into the public. I heard about the exploit in a talk by Joanna Rutkowska, where she admitted the OS could no longer fully meet TCSEC standards on consumer Intel CPUs. YMMV
The modern slop-web makes it harder to find things now, and I can't recall specifically if it was anything more than a common hypervisor guest escape. =3
Red teams (internal or consultants) use this sort of tooling in the real world. Their job is to emulate a real, competent threat actor. APTs routinely use high-quality rootkits for EDR evasion.
Persistence is actually quite rare nowadays. Since it's the most easily detected part, red teams usually prefer not to persist and stay memory-only.
Many servers and systems are rarely rebooted, and many campaigns are not that long term. There may not be a reason to compromise the target again.
For example, a ransomware gang may compromise a company's network, steal data, deploy the cryptolocker, and then get out. There's no need to have persistent access; they got what they wanted.
I know that very well, considering I have servers with 5 years of uptime. But generally the environment isn't what it used to be; with cloud services living less than a few hours (or even seconds, for function endpoints), staying memory-only becomes a problem.
My first thought is that this is actually a vector against people rather than servers, which do reboot daily.
Assuming someone manages to first get root, can kernels only allowing signed modules to be loaded (Talos does that if I'm not mistaken, for example) prevent that stealth rootkit from being loaded? Or can root just bypass that check?
Or is the only line of defense a kernel compiled without the ability to load modules?
I know all bets are off once someone already gained root, but not allowing the installation of a stealth rootkit is never bad.
This looks impressive. I haven't had a chance to give it a go, but I would love a consumable "counters" tutorial for this type of intrusion... "Be a researcher, not a criminal." might be wishful thinking.
Does this work on normal Linux desktops?
My impression was that either:

1) the kernel is too big, and `make modules` fails with a link error, or

2) the system will not boot due to missing/misconfigured parts.
The sole blocker for CONFIG_MODULES=n is WiFi, and only just before WiFi reaches its network-UP state (during initial WiFi initialization).
Also, the kernel build will fail during 'make modules'/'make all'/'make', but will succeed with 'make bzImage'/'make install'.
Desktop Linux distros' WiFi requires signed module support for internationalization of radio band selection (regulatory domains).
So, to disable kernel modules on a desktop and still use WiFi, you need to rebuild the WiFi driver without module support, configured specifically to comply with your country's radio authority.
Many embedded systems or supercomputers disable modules for security or simplicity, but then all needed drivers must be built in. WiFi is a common casualty because it’s normally modular due to firmware blobs provided as-is from WiFi manufacturers.
Also, many supercomputing facilities and hardened servers prohibit direct networking over WiFi (because of unverifiable firmware blobs).
Your homelab should provide a direct Ethernet connection to your desktop.
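For anyone attempting the modules-off build, the shape of the .config is roughly this (a sketch; the driver and firmware file names are examples and depend entirely on your hardware):

```
# CONFIG_MODULES is not set           <- no module loading at all
CONFIG_CFG80211=y                     # wireless stack built in
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_IWLWIFI=y                      # your WiFi driver, built in (example)
# Firmware blobs must then be compiled into the kernel image too:
CONFIG_EXTRA_FIRMWARE="iwlwifi-ty-a0-gf-a0-59.ucode regulatory.db regulatory.db.p7s"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"
```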
Yes. Offline is how a lot of rootkits are analyzed after the admin notices peculiar behavior. There are a lot of other tells that could be run online to find this rootkit though, most notably, its behavior with ftrace. Disabling ftrace, and then running a program that uses ftrace would tell right away that something's wrong.
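One concrete version of that check (a sketch): an ftrace-hooking rootkit's target functions can show up in tracefs, and a tracefs that mysteriously doesn't work is itself a tell.

```shell
TRACEFS=/sys/kernel/tracing
[ -d "$TRACEFS" ] || TRACEFS=/sys/kernel/debug/tracing

# Functions with ftrace callbacks currently attached; empty is the
# clean state, and unexpected entries (tcp_seq_show, etc.) deserve
# a very hard look.
if [ -r "$TRACEFS/enabled_functions" ]; then
    cat "$TRACEFS/enabled_functions"
else
    echo "cannot read tracefs (need root, or something is hiding it)"
fi
```
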
Thanks. So for virtualized systems it would make sense to routinely clone the HDD and do such a comparison. Could easily be included in the backup software.
The rootkit now disables SELinux enforcing mode on-demand when the ICMP reverse shell is triggered, leaving zero audit logs.
How it works: SELinux maintains a global kernel structure called selinux_state that contains the enforcement flag. The rootkit resolves this non-exported symbol via kallsyms at module load time, then directly writes enforcing = 0 when triggered. This bypasses the normal setenforce() interface entirely.
The clever part is the dual-layer approach:
* Hooks netlink_unicast to drop audit messages for hidden PIDs
* Attempts to modify selinux_state->enforcing directly in kernel memory
On kernels built with CONFIG_SECURITY_SELINUX_DEVELOP=y, SELinux enforcement may stop at the kernel decision level, while userspace tools continue to report enforcing mode and /var/log/audit/audit.log shows nothing.
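A cheap defender-side cross-check for the above (a sketch): compare what userspace reports with the kernel's enforce node, and, more reliably, probe with an operation your policy is known to deny. If getenforce says Enforcing but the denied operation suddenly succeeds, something flipped the flag underneath you.

```shell
reported=$(getenforce 2>/dev/null || echo unknown)
kernel=$(cat /sys/fs/selinux/enforce 2>/dev/null || echo unknown)
echo "userspace says: $reported / kernel node says: $kernel"
# Functional probe idea: from a confined domain, attempt an action
# your policy denies and confirm it still fails. A flag flipped in
# kernel memory can't hide from a behavioral test.
```
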
- Advanced Network Hiding
Previous versions only hid TCP connections from /proc/net/tcp* by hooking tcp_seq_show, which was enough to fool netstat. But modern tools like ss and conntrack bypass /proc entirely - they query the kernel directly via netlink.
The new version filters at the netlink layer:
* SOCK_DIAG filtering: ss uses NETLINK_SOCK_DIAG protocol to get socket info directly from the kernel. Singularity hooks recvmsg to intercept and filter these netlink responses before userspace sees them. Commands like ss -tapen or lsof -i return empty for hidden connections.
* Conntrack filtering: Connection tracking (nf_conntrack) maintains state for all network flows. Reading /proc/net/nf_conntrack or running conntrack -L would expose hidden connections. The rootkit now filters both the proc interface and NETLINK_NETFILTER messages with conntrack types.
* UDP hiding: Added hooks for udp4_seq_show and udp6_seq_show - previous versions only hid TCP.
- Other improvements:
* Optimized log filtering (switched from multiple strstr() calls to switch-case with strncmp())
* Audit statistics tracking (get_blocked_audit_count(), get_total_audit_count())
* Automated setup script
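On the defender side, the classic answer to this kind of hiding is a cross-view comparison, though since this version filters both proc and netlink, the only trustworthy vantage point is an external packet capture. A host-side sketch (assumes `ss` from iproute2):

```shell
# View 1: the seq_file /proc interface
proc_count=$(cat /proc/net/tcp /proc/net/tcp6 2>/dev/null | grep -c ':.*:' || true)
# View 2: netlink (NETLINK_SOCK_DIAG), which is what ss actually uses
diag_count=$(ss -Htan 2>/dev/null | wc -l)
echo "proc=$proc_count netlink=$diag_count"
# A large unexplained gap between the views is suspicious; agreement
# proves nothing when both code paths are hooked, so capture traffic
# upstream (mirror port / router) for ground truth.
```
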
I understand that this is meant to drive research and help security researchers, but I personally think GitHub should take a harder stance against this kind of repo, educational purposes or not. Saying it is for educational purposes is definitely not going to stop someone (especially people who couldn't develop this level of rootkit on their own) from going and using it.
Also, specific details in the README like 'make sure you randomize this or you'll be detected!' make it feel even less like it is explicitly for educational purposes, since you are giving users easy instructions on how to work around countermeasures to this code.
There are many responses to this, but I'll start with:
Security through obscurity is not security [1]
When only l33t underworld h4x0rz know about software flaws, there is very little incentive or ability for regular software developers to find and fix what enables these vulnerabilities. Only through shared knowledge can the world become a better place.
The second argument doesn't really work out in practice. We have a quarter century of knowledge about SQL injection at this point, yet it keeps happening.
Instead of trying to educate everybody about how to safely use error-prone programming abstractions, we should instead de-normalize use of them and come up with more robust ones. You don't need to have in-depth exploit development skills to write secure Rust code.
Unfortunately, there's more money to be made selling security consulting if people stick to the error-prone ones.
Do you think malware creators find out by reading HN or github? I don't understand the vitriol, the request "Github should take a harder stance" could have a chilling effect on security researchers, pushing high impact exploits deeper underground.
Another point: one might argue that GitHub, being Microsoft, should take a harder stance in cases like this, or even that it already does.

But a takedown wouldn't accomplish much anyway, because buying a domain name and hosting this elsewhere is easy.

Some service providers will only comply with removals given a genuine, valid legal complaint (like a court order), and I doubt any court would order one here, since there is presumably legal cover for tools genuinely written "for educational/research purposes", as this one is. So I don't see how GitHub's stance would matter in the end: if it takes a court order to remove it, GitHub would comply with that too (even more readily than those providers).

I also don't understand what the OP wants. Should this sit on some obscure .onion forum for hackers, or on GitHub, where people can read about this vector and patch up servers they had assumed were safe because they didn't know the issue existed? A hacker will still find it in obscure places; a sysadmin, comparatively, might not.
There isn't vitriol, or at least I didn't mean it that way. The point I was trying to make is that I've seen malicious code like viruses, keyloggers, and rootkits distributed via GitHub, using 'this is for education' as a cop-out when the rest of the repo makes the real intention extremely obvious.
Malware is very easy to build. Competent threat actors don't need to rely on open source software, and incompetent ones can buy what they use from malware authors who sell their stuff in various forums. Concerns similar to yours about 'upgrading' the capabilities of threat actors were raised when NSA made Ghidra public, yet the NSA considers the move itself to have been good (https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...).
People will build malware. It is actually both fun and educational. Them sharing it makes the world aware of it, and when people are aware of it, they tend to adjust their security posture for the better if they feel threatened by it. Good cybersecurity research & development raises the bar for the industry and makes the world more secure.
Have you ever heard the phrase:
"To stop a hacker you have to think like a hacker."
That's cybersecurity 101. Without the hacker's knowledge or programs, you're just a victim or target. But with this knowledge made available, you are now aware of this program/possibility. It's like when companies deploy honeypot servers to capture the methods and use cases of hackers attacking the server, to build stronger security against their methods and techniques.
For me, the real scary part is the "Audit Evasion" (for those not in the know, here's a link: https://www.redhat.com/en/blog/configure-linux-auditing-audi...).