I really like WireGuard, but one thing that bugs me is that it's layer 3 only (an IP tunnel) and has no code to support layer 2 (an Ethernet/MAC tunnel). The downside for me is that you have to manage static IPs in the configuration (specifically, it's not compatible with IPv6 SLAAC and NDP). There is https://git.zx2c4.com/wg-dynamic but it's very experimental at the moment.
The layer-3-only tunnel is motivated in the whitepaper as "the cleanest approach for ensuring authenticity and attributability of the packets", but in fact every claim and routing algorithm described (needed since the tunnel is many-to-one) would work equally well substituting "IP address" with "MAC address". I may be missing something, but it's certainly not made explicit anywhere. And IMHO it would actually be less surprising to have an "allowed MAC address" option in the configuration than an "allowed IP address": whitelisting the MAC addresses of physical endpoints is already common practice in offices. I'm toying with the idea of forking the driver code to adapt it to Ethernet frames, as I don't think it would need any big rewrite, but I'm realizing my inexperience in writing kernel code.
In most scenarios, you want to avoid L2 tunnels to reduce complexity and/or performance issues.
The chain of thought typically goes like this:
* Remote networks are connected via an L2 tunnel.
* ARP requests are broadcast over the L2 tunnel to all connected networks, introducing scalability issues.
* Proxy ARP is introduced to cache ARP responses.
* Proxy ARP entries may become stale, or fail to scale as the L2 domain grows.
* BGP is introduced to keep track of and broadcast all topology changes.
* And even then: how do you mitigate the issues caused when Proxy ARP fails?
Most of these issues go away if you use IP tunnels instead of Ethernet because IP was designed to be routable.
For your point on security... Whitelisting MAC addresses doesn't provide security. These are trivial to spoof. Same with IP. Please start relying on cryptographic primitives to establish workload identity instead. I highly suggest looking at SPIFFE to get started here.
If you must send L2 over the VPN, please use an L2 EVPN, which is designed to handle the complexity and provide fault tolerance. There are numerous SDNs out there you can use to implement this, including Tungsten Fabric and OpenDaylight. There's no need to complicate WireGuard to support EVPN.
[edited to improve formatting of bullets and clarity of wording]
Sure, but it hurts a bit to run a tunnel on top of another tunnel, and since you have to run WireGuard as-is, you still have to do the static IP thing. It's a bit insane to have ethernet > udp (l2tp) > ip > udp (wireguard) > ip > ethernet. That's roughly 146 bytes of overhead per frame (udp/ipv6: 2*48, l2tp: 4, inner ethernet: 14, wireguard header plus auth tag: 32).
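As a sanity check on that arithmetic (assuming IPv6 outer headers for both encapsulations, a minimal 4-byte L2TPv3 session header, and WireGuard's data-message header of 16 bytes plus a 16-byte Poly1305 tag):

```python
# Per-frame overhead of ethernet > udp (l2tp) > ip > udp (wireguard) > ip,
# assuming IPv6 (40-byte header) + UDP (8 bytes) for both encapsulations.
IPV6_UDP = 40 + 8        # one IP/UDP encapsulation
L2TP = 4                 # minimal L2TPv3 session header
INNER_ETH = 14           # inner Ethernet header carried through the tunnel
WG = 4 + 4 + 8 + 16      # type/reserved, receiver index, counter, Poly1305 tag

total = 2 * IPV6_UDP + L2TP + INNER_ETH + WG
print(total)  # 146
```

With IPv4 on the outside (28 bytes per IP/UDP layer instead of 48), the total drops to 106 bytes, but either way it's a lot of overhead per frame.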
I've run VXLAN over top of wireguard connections. One advantage is that you can have multiple intermediate wireguard connections that are not visible at the VXLAN layer.
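For anyone wanting to try this, a minimal iproute2 sketch of the setup (interface names, the VNI, and the 10.0.0.x tunnel addresses are placeholders; it assumes wg0 is already up and the remote's tunnel IP is covered by AllowedIPs):

```shell
# VXLAN riding over an existing WireGuard interface (wg0).
ip link add vxlan0 type vxlan id 100 \
    dstport 4789 local 10.0.0.1 remote 10.0.0.2 dev wg0
ip link set vxlan0 up

# vxlan0 can then be attached to a bridge to extend an L2 segment:
ip link add br0 type bridge
ip link set vxlan0 master br0
ip link set br0 up
```

The VXLAN layer only ever sees the WireGuard tunnel IPs, so the underlying WireGuard topology can change without touching the L2 overlay.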
Yeah, as long as the shell script has been audited. It would probably be easy to accidentally send the GRE traffic in the clear instead of through WireGuard.
I don't have a problem with it being layer 3 rather than layer 2. But the lack of dynamic configuration is a bit of an issue. I don't care too much about needing static IP addresses, but I do want to be able to push dynamic routes and DNS servers down to clients.
Substituting MAC for IP address is exactly what ZeroTier does. MACs can't be spoofed, though nodes can be designated as bridges and that allows them to impersonate MACs.
There's still the issue of authenticating IPv4 IPs though, which are too small to embed anything useful into. ZeroTier has a certificate system for that but it requires the use of the rules engine to enable it.
MACs can be spoofed. There are entire companies which begin their sales pitch with "So, I can poison the ARP cache to take over your DNS in your Kubernetes Cluster" due to the NET_RAW capability required to respond to ICMP (ping). :)
You'll want to use a crypto-based identity if you want to ensure spoofing isn't occurring. Even then, you can still be DoSed by a malicious actor. Tools like eBPF may be able to help here by filtering out source MAC addresses that don't match the source interface's hwaddr.
edit: Sorry, I didn't read this comment properly. In ZeroTier I can believe that they cannot be spoofed across the VPN due to relying on a cryptographic hash. :)
You've made me look at ZeroTier. It's a bit of a shame you are being downvoted, because ZeroTier looks to be original, clever, and open source.
Your downvoting is probably caused by your using the word MAC without defining it, so naturally people think it's a "Media Access Control Address" or a "Message Authentication Code", but it's far more complex than either. It is an address, so it does perform the same function as a "Media Access Control Address", but [0] says it is "computed from the public portion of a public/private key pair. A node's address, public key, and private key together form its identity.", and it uses proof of work to prevent forgeries. Thus your statement that "MACs can't be spoofed" is correct, or at least is until someone breaks it. The "proof of work" bit did cause an eyebrow to rise, as it is vulnerable to exponential drops in the price of computing.
For those still reading: my (very brief) look at ZeroTier suggests it does far more than IPsec / WireGuard - it solves the internet-scale routing problem in its own way, handles address spoofing, and a number of other things as well. It's undoubtedly far simpler to use than WireGuard or OpenVPN, as routing with those protocols in large networks is a complete PITA. It treats IP rather like IP treats Ethernet - as a fabric it runs on top of, and one that, unlike Ethernet, connects most nodes on the planet. For nodes that aren't fully connected (like those behind a NAT), it creates paths (i.e., does routing), and if multiple paths are available it uses several concurrently to get the best throughput.
[0] is well worth a look if you are curious about such things. I am going to take a much closer look when I get time.
I think they're implying that MACs are authenticated the same way IP addresses are authenticated with Wireguard (you can say "only 52:54:00:7a:cc:dd can talk over this connection").
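For comparison, the existing IP-level version of that whitelist is WireGuard's cryptokey routing, configured per peer (the key and address below are placeholders):

```shell
# Cryptokey routing: packets arriving from this peer are dropped unless
# their inner source IP falls inside allowed-ips, and outbound packets
# are routed to the peer whose allowed-ips matches the destination.
wg set wg0 peer 'PLACEHOLDERPUBLICKEYBASE64PLACEHOLDERKEY902PKEY=' \
    allowed-ips 10.0.0.2/32
```

The hypothetical L2 variant being discussed would apply the same accept/route logic to the inner frame's MAC address instead of its IP address.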
20 years ago, in college, some folks in the dorm had fun fucking with others at the Ethernet level. Most of us only had experience with the IP level, so couldn't understand what was going on.
MACs are computed directly from cryptographic hashes. For normal ZeroTier P2P traffic the MAC and Ethernet header are elided entirely too, which saves about 14 bytes of per-frame overhead.
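To illustrate the general idea (this is not ZeroTier's actual derivation, just a sketch of deriving a stable, locally administered unicast MAC from key material):

```python
import hashlib

def mac_from_pubkey(pubkey: bytes) -> str:
    """Illustrative only (not ZeroTier's real scheme): derive a MAC
    address from a hash of a public key."""
    digest = hashlib.sha256(pubkey).digest()
    mac = bytearray(digest[:6])
    # Set the locally-administered bit, clear the multicast bit,
    # so the result is a valid unicast MAC that can't collide with
    # vendor-assigned (universally administered) addresses.
    mac[0] = (mac[0] | 0x02) & 0xFE
    return ':'.join(f'{b:02x}' for b in mac)

print(mac_from_pubkey(b'example public key bytes'))
```

Because the MAC is a function of the key, a node can't claim someone else's MAC without also holding the corresponding private key.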
Pretty much every time I do a data center or office migration, I set up an OpenVPN that bridges the network segments at the two locations. It makes the move so much easier.
Once set up, I can shut down a machine at one location, move it, bring it back up, and it's back in business. There are situations where we might want to migrate to new machines during the move, which this makes no harder. But for many things it makes them easier.
For example, the last move went something like this: Set up the VPN+bridge. Move half the application servers. Set up new firewall/load balancer since we were replacing the old ones. Test the new fw/lb. Physically move the primary database server during a maintenance window and switch over to the new fw/lb. If there were problems, just switch back to the old one via DNS record changes (TTL was lowered weeks earlier). Move the remaining app servers. During the bridging setup, the LBs preferred the local app servers.
It's been a while since I've worked with linux networking. I would have thought it would give you a VIF in some form or fashion that you could attach to a bridge. Is that not the case?
By "it" do you mean Wireguard? I haven't used it, but you need a special type of virtual interface for bridging, a tap device can do it, a tun cannot. From some searches, Wireguard doesn't support operating on a bridge. OpenVPN, which is what I've used in the past, supports both tun and tap interfaces.
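The tap-vs-tun distinction shows up directly in iproute2 (device and bridge names below are arbitrary):

```shell
# A tap device carries full Ethernet frames, so it can join a bridge:
ip tuntap add dev tap0 mode tap
ip link add br0 type bridge
ip link set tap0 master br0   # works: tap is an L2 device

# A tun device carries raw IP packets with no Ethernet header,
# so the equivalent "ip link set tun0 master br0" is refused -
# the bridge would have no MAC addresses to switch on.
```

That's why bridging setups (like the migration scenario above) need tap-capable software such as OpenVPN, while WireGuard only ever presents an L3 interface.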
Also ICMP: IPv6 brings a lot of new things to the table in that realm. I already said it above, but the Neighbor Discovery Protocol is quite useful for doing dynamic in-band configuration. ICMP runs over IP, but that's not useful if the IP link is already managed by the tunnel protocol.
Someone else said broadcast/multicast, so I'll also add communication with legacy systems that don't speak IP or have other wacky requirements. These do exist in industrial and embedded settings. It's a niche use case but it's very useful there.
Yes, and more. Check out what runs on factory floors sometime. There's stuff that speaks naked Ethernet, as in you type the MAC of the machine into the application. There's also stuff that speaks CANbus over Ethernet without IP in the middle.
I went through training on it back in 2012 - apparently it (at least at the time, not sure about now) was dominant in the australian mining industry, so the larger tertiary education providers were requested to at least familiarise students with it.
It was a strange beast, but there were a few odd spots where it was better than Active Directory - e.g. an "Organizational Role" could be created and have a user assigned to it, so you could more easily separate the user (John Smith) from their position and the permissions that go with it (finance director). So when John Smith retires, it is trivial to replace the occupant of the finance director organizational role.
I've always wondered why we don't use that as our subject for all sorts of business needs. I'm talking about normal employee-to-employee business in addition to more technical things like security groups and so forth. Don't email Karen, email [whatever her role is], at least for official requests pertaining explicitly to defined job responsibilities.
That way the sender doesn't get delayed by unknown turnover, and the new recipient has full history to look back upon instead of starting cold.
Many device discovery protocols work by sending out broadcast or multicast packets (either to announce themselves to devices who might be listening or to request devices to send them data). These packets are expected to go out to either everyone on the same layer-2 network (the broadcast case) or everyone who has subscribed to a particular multicast address (the multicast case).
In addition to device discovery, these are frequently used for heartbeat messages to indicate that you are still alive (for high-availability protocols like VRRP).
One common use case is multicast DNS (https://en.wikipedia.org/wiki/Multicast_DNS) which uses multicast for individual hosts to publish services available on them to other hosts on the network without needing a dedicated DNS server.
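As a concrete sketch of what such a query looks like on the wire, here is a minimal mDNS service-enumeration query per RFC 6762/6763 (this only builds the packet; actually sending it requires a UDP socket aimed at 224.0.0.251:5353, which won't cross an L3 tunnel unless multicast routing is set up):

```python
import struct

def mdns_ptr_query(name: str) -> bytes:
    """Build a minimal mDNS PTR query (RFC 6762) for the given name."""
    # DNS header: ID=0, flags=0, QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack('!HHHHHH', 0, 0, 1, 0, 0, 0)
    # Question name as length-prefixed labels, null-terminated
    question = b''.join(
        bytes([len(label)]) + label.encode('ascii')
        for label in name.split('.')
    ) + b'\x00'
    question += struct.pack('!HH', 12, 1)  # QTYPE=PTR(12), QCLASS=IN(1)
    return header + question

# The standard service-enumeration name from RFC 6763:
query = mdns_ptr_query('_services._dns-sd._udp.local')
print(len(query))  # 46
```

Every host subscribed to the mDNS multicast group answers with the services it offers, which is exactly the kind of traffic a pure L3 point-to-point tunnel silently drops.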
I disagree - I spent a considerable amount of time with ZeroTier as a possible replacement for a small IPsec mesh (4 sites) and it failed horribly. We had commercial support, tried different hardware, and even virtualized it. Latency was a major issue and the quality of the links was erratic, to say the least. Don't get me wrong, I think ZeroTier is great, but it's not ready for prime time.
I've had a similar experience. In particular links will just "drop out" for periods of time. The public forwarding nodes were overburdened for quite a while. I set up my own "moon", but one of the sites has a cranky NAT, which will let a connection through for a while, then fail. It seems to take at least 30 seconds for zerotier to "notice" this and switch back to forwarding via the moon. Maybe the new multipath will help?
Rather obviously it isn't. I'm not sure why you'd even ask.
I'm not the only one with external NAT that I can't do anything about; the question is what to do to mitigate this.
Switching to an explicit hub-and-spoke model would work around this, but at the expense of what I consider one of ZeroTier's biggest strengths: transparent meshing. If two machines in the network are on the same LAN, I'd like them to use that rather than the network.
Faster detection of the failure of the NAT-piercing peer-to-peer link, with fallback to the "moon" while the peer-to-peer link is being re-established, would substantially increase the usability for people, like me, who are stuck with the NAT they've got. As I alluded to, the new multi-path features that ZeroTier is getting might help with that.
If it’s replacing an ipsec mesh that’s pretty hard to believe. And if that was the issue and commercial support couldn’t even identify that as the cause, ZeroTier has bigger issues.
If all sites are behind symmetric NATs, there's not much ZeroTier could do to help aside from telling him to assign direct mappings on the NAT/Firewall to each ZT instance. Symmetric NATs are antithetical to peer to peer communication. Many I've run across in the wild have special rules to handle IPSec which won't exist for other lesser known protocols. It's also possible the user wasn't willing or able to make network configuration changes to make those p2p connections possible. Without seeing what the user tried & support recommended, it's not really fair to throw out such baseless accusations.
"lesser known" as in protocols such as IPSec, ZeroTier, WireGuard, etc. Of which IPSec has been around forever and many NATs/Firewalls have special handling rules built in, just as @api mentioned in another comment. Yes, ZeroTier uses UDP underneath, but that doesn't mean symmetric NATs don't/won't cause havoc to peer to peer protocols using UDP.
Wrong layers of the network. IPSec is comparable to TCP/UDP, not wireguard/zerotier. It’s L4 and NAT can’t have enough intelligence to setup IPSec meshes without explicit configuration.
Finally, how can ZeroTier’s support be so incompetent to not recognize connectivity issues between endpoints? That’s one of the few things that goes wrong with tunnel meshes.
It was probably behind finicky and heavily restrictive symmetric NAT (very p2p-hostile) but with IPSec ALG in the NAT, making it work fine with IPSec but horribly with anything else. This is common in "enterprise" settings and hard to diagnose without direct remote access to run NAT characterization tests.
Symmetric NAT basically breaks everything that doesn't use a simple client/server hub-and-spoke networking model.
Yeah, I hear about that regularly but didn't look into it. I must say I'm not really happy about the whole business thing. The Arch wiki says you need an account; I'm not sure if that is true, but if it is, it's a non-starter for me. If you have good technical refs to prove me wrong, I'd be happy to hear them.
Most IPv6 packets are encapsulated in Ethernet frames which use MAC addresses. What may change is the vendor specific MAC address could be replaced with a MAC address generated by a cryptographic hash to preserve privacy.
So instead of not using the MAC address in the IPv6 (which any reasonably modern OS does because this problem is old, well known and trivially solved) you get rid of MAC addresses altogether? Just so you can have some illusion of privacy while sending your traffic through a supposedly compromised network?
This is paranoia about all the wrong things, focusing on irrelevant details and ignoring what’s important.
I don't believe I suggested getting rid of the MAC address altogether. Ethernet isn't going away anytime soon.
The suggestion is simply this: Don't embed your MAC address into your IPv6 address because it's a unique identifier that can deanonymize you even if you shift networks.
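That's the modified EUI-64 scheme from RFC 4291, which classic SLAAC used to build the interface identifier directly from the MAC, so the same 64 bits follow you across every network you join:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier (RFC 4291)
    that classic SLAAC embeds in the low 64 bits of an IPv6 address."""
    b = bytearray(int(x, 16) for x in mac.split(':'))
    b[0] ^= 0x02          # flip the universal/local bit
    b[3:3] = b'\xff\xfe'  # splice ff:fe into the middle of the MAC
    return ':'.join(f'{b[i]:02x}{b[i+1]:02x}' for i in range(0, 8, 2))

# The identifier is the same no matter which prefix the network hands out:
print(eui64_interface_id('52:54:00:7a:cc:dd'))  # 5054:00ff:fe7a:ccdd
```

This is why modern OSes default to RFC 4941 temporary addresses or RFC 7217 stable-privacy addresses instead, both of which keep the MAC out of the IPv6 address entirely.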