This is almost true, but not quite. WireGuard is a protocol, but it's also the Linux kernel implementation of that protocol; there are design decisions in the protocol that specifically support software security goals of the kernel implementation. For instance, it's designed to be possible to implement WireGuard without dynamic memory allocation.
This is a clever reuse of WireGuard's cryptographic design, and may indeed make sense as a way to slap some low-overhead encryption on top of your app's existing UDP packets.
However, it's definitely not a replacement for TCP in the way the article implies. WireGuard-the-VPN works because the TCP inside of it handles retransmission and flow control. Going raw WireGuard means that's now entirely up to you.
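To make "that's now entirely up to you" concrete, here's a minimal sketch (all names invented, and loss is simulated with a seeded RNG rather than a real socket) of the stop-and-wait retransmission and ordering bookkeeping an application inherits once TCP is out of the picture:

```python
import random

# Minimal stop-and-wait sketch: the kind of bookkeeping TCP normally does
# for you, and that raw UDP/WireGuard datagrams leave entirely to the app.
# The "channel" here just drops packets at random to simulate loss.

def lossy_send(packet, loss_rate, rng):
    """Deliver packet, or return None to simulate a drop."""
    return None if rng.random() < loss_rate else packet

def transfer(messages, loss_rate=0.3, seed=42):
    rng = random.Random(seed)
    received = []
    expected_seq = 0  # receiver-side state: next in-order sequence number
    for seq, payload in enumerate(messages):
        while True:  # retransmit until acknowledged
            packet = lossy_send((seq, payload), loss_rate, rng)
            if packet is None:
                continue  # data packet lost; sender times out and resends
            if packet[0] == expected_seq:  # receiver accepts in-order data only
                received.append(packet[1])
                expected_seq += 1
            # receiver acks whatever sequence number it just saw
            ack = lossy_send(packet[0], loss_rate, rng)
            if ack == seq:
                break  # ACK arrived; move on to the next message
            # ACK lost: sender resends, receiver discards the duplicate
    return received

print(transfer(["a", "b", "c"]))  # ['a', 'b', 'c'] despite simulated loss
```

Real implementations also need timers, a sliding window, and congestion control, which is exactly the machinery QUIC hands you for free.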
So this might be a good choice if you're doing something realtime where a small number of dropped packets don't particularly matter (such as the sensor updates the article illustrates).
But if you still need all your packets in order, this is probably a bad idea. Instead, I'd consider using QUIC (HTTP/3's UDP-based transport), which brings many of the benefits here (including migration of connections across source IP addresses and no head-of-line blocking between streams multiplexed inside the connection) without sacrificing TCP's reliability guarantees. And as the protocol powering 75% of web browsing¹, it's a pretty safe choice of transport.
> However, it's definitely not a replacement for TCP in the way the article implies.
UDP isn’t TCP, and that’s kind of the point. For a large number of use cases, the pain TLS imposes isn’t worth it.
QUIC is flexible and fabulous, but heavyweight and not fit for light hardware. It also raises the question: if the browser supported raw UDP, what percentage of traffic would use it?
Sure, but this article spends paragraphs talking about the (real) problems with TCP, then suggests that the solution is a UDP-based transport with WireGuard-ish crypto.
…but there's a giant guaranteed-and-ordered-delivery-sized hole in that argument, which is my point. The article never addresses what you lose when going from TCP to UDP. You can't just swap out your app's TCP-based comms with this and call it a day; you're now entirely responsible for dealing with packet loss, ordering, and congestion if those matter to your application. Why DIY all that when you could just use QUIC?
Granted I haven't personally tried to run QUIC on embedded hardware, so I can't speak to its weight, but I do see someone did it¹ on an ESP32 (ngtcp2 + wolfSSL), so it can be done with < 300 kB of RAM.
I wonder how much RAM this WireGuard-based approach requires. The implementation here is in .NET, so not exactly appropriate for light hardware either.
Regarding browser support for UDP, you'll never get raw UDP for obvious reasons, but the WebTransport API² gives you lowish-level access to UDP-style (unreliable and unordered) datagrams with server connections, and I believe WebRTC can give you those semantics with peers.
Does it bother anyone else when an article is so clearly written by an LLM? Other than being 3x longer than it needs to be the content is fine as far as I can tell, but I find the voice it’s written in extremely irritating.
I think it’s specifically the resemblance to the clickbaity writing style that Twitter threads and LinkedIn and Facebook influencer posts are written in, presumably optimized for engagement/social media virality. I’m not totally sure what I want instead, I’m pretty sure I’ve seen the same tactics used in writing I admired, but probably much more sparingly?
What is it that makes tptacek’s writing or Cloudflare’s blog etc so much more readable by comparison? Is it just variety? Maybe these tactics should be reserved for intro paragraphs (of the article but also of individual sections/chapters might be fine too) to motivate you to read on, whereas the meat of the article (or section) should have more substance and less clickbaiting hooks?
Specifically there’s a lot of clickbaity constructions like: “setup: payoff” or “sentence fragment, similar fragment, maybe another similar fragment”.
This paragraph has both:
> The symptom is familiar: a stream that occasionally "locks up" briefly before catching up, jitter in audio or video, or a latency spike that appears to come from nowhere, a "hang" in the application when it gets blocked waiting for a packet. It comes from a single packet forcing the entire pipeline to pause. The underlying network recovered quickly; TCP's ordering guarantee is what made it visible.
So does this!
> WireGuard's protocol is a fundamentally different design point. It's stateless — there's no connection to establish upfront, no session to track, and no certificate authority in the picture. Two keys, a compact handshake, and you're encrypting. And unlike TLS, WireGuard's cryptographic choices are fixed: Noise_IKpsk2 for key exchange, ChaCha20-Poly1305 for authenticated encryption. There's nothing to misconfigure.
Are you pretending you didn’t even have an LLM help you reword it before publishing? Because that would be an obvious lie. If you were to propose a sufficiently trustworthy way to prove one way or another, I’d bet $1,000 on it.
The post mentions the deficiencies of TCP for mobile devices over unreliable links, but I've had nothing but trouble with WireGuard when connecting from phones via mobile data.
I suspect it's due to my mobile operator doing traffic shaping / QoS that deprioritizes UDP VPN traffic.
In contrast, connecting to OpenVPN over TCP was a huge improvement. Not at all what I expected.
Counter-anecdote: I've been using WireGuard on Android for years with no particular issues to speak of. 0.0.0.0/0 to my home network. I often forget to enable WiFi at home and don't notice (I often have it disabled when out).
I suspect you're right: nothing to do with WireGuard. I set it up so I could VPN into my home network from my phone. More than once, I have forgotten to turn it off. Everything worked, and I only noticed days later. Very robust, in my anecdotal experience.
The much more likely culprit is your VPN server's port. If it's running on some no-name port (such as the default 51820), that's likely to get throttled.
I'd bet that switching your VPN server port to 443 would solve the problem, since HTTP/3 runs on 443/udp.
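For reference, that's a one-line change on the server and a matching change to the client's endpoint (sketch below assumes a stock wg-quick style setup; the hostname and the bracketed keys are placeholders):

```ini
# Server /etc/wireguard/wg0.conf: listen on 443/udp instead of 51820
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 443

# Client config: point Endpoint at the new port
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:443
AllowedIPs = 0.0.0.0/0
```

One caveat: this only works if nothing else (say, an HTTP/3-capable web server) is already bound to 443/udp on that host.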
If this contained various grammer mystaeks, but interesting content, it wouldn't have been flagged. As usual with LLMs, it is based on other content. Show me the source, we used to say to binaries... What's going on?
I quit when I figured it was written by an LLM. I'm not interested in reading LLM 'content' without it providing a source.
I am willing to generate some of my own sauce with a prompt, and then request the sources. That way, I know at least some parameters of the input and output.
But with your article, I do not know which sources were used as reference, I do not know which prompt you used.
As for HN, they're busy tackling the LLM problem. They know it is a problem.
Again, this was novel content. If you find a source of anything similar let me know. I'm belaboring this point for one important reason: content matters. I want to see new thoughts, not repetitive mindless drivel in personal "voice".
One thing I've seen before is people being upfront about using LLMs (at the top of the content). That way, those who dislike it will feel less tricked.
The balance at least on this site is strongly in favour of humans writing things.
You’re belabouring the point because you don’t believe you’re doing anything wrong by filling the internet with slop, when actually it’s antisocial and wrecks the commons.
If you think content matters so much then just invest the time in writing it yourself rather than trying to convince others that it is ok that you didn’t.
Did you? That is the issue we have. We can't know for sure that you even read your own article, since it has all the hallmarks of LLM generated content. It's embarrassing.
Sigh. I did write it, then I used an LLM to clean it up. Seriously, if you can find anything else out there making a similar point or providing a similar library I'd love to hear about it.