Hacker News: rnhmjoj's comments

The breeding blanket is entirely contained inside a vacuum vessel, so there isn't any oxygen to react with. Also, there are many blanket designs, but the lithium is never present in its elemental form (precisely because it would be very reactive); it's in a stable chemical bond with some neutron multiplier (like lithium-lead alloys or beryllium ceramics). In some designs the lithium is even immersed in the coolant itself, which is high-pressure helium, so it's not going to ignite in any reasonable way.

> breeding blanket is entirely contained inside a vacuum vessel, so there isn't any oxygen to react with

When the vessel works. If the vessel breaches, that lithium could ignite. Not a showstopper, but a risk to be thought about by the engineers (probably not by policymakers).


Commonwealth Fusion Systems plans to use lithium in salt form as FLiBe, a molten salt made from a mixture of lithium fluoride (LiF) and beryllium fluoride (BeF2). It does not violently react with air or water.

https://en.wikipedia.org/wiki/FLiBe


> How is that done if not using CSMA/CD (or something very similar at least)?

AFAIK, WiFi has always been doing CSMA/CA and starting with the 802.11ax standard also OFDMA. See https://en.wikipedia.org/wiki/Hidden_node_problem#Background


Thanks. So the author's point in the linked article is wrong; it's the opposite of what they wrote. Contrary to what they say, it is indeed a bus, and it isn't the case that CSMA/CD is useless: rather, it isn't enough to deal with the situation, so additions have been made to it.

Thanks for your link that helped clarify this for me!


When you have switches that link two nodes together for only the duration of a one-way transmission, you don't need CSMA/CD. We literally have no use for it: two computers will never transmit onto the same Ethernet wire anymore.

WiFi is different, of course. However, as the author wrote, your WiFi devices always go through the access point, where they use 802.11 RTS/CTS messages to request and receive permission to send packets. All nodes can see the CTS being broadcast, so they know that somebody is sending something. So even CSMA/CA is getting less useful.


Yes, I'm only talking about WiFi networks. I get that CSMA/CD itself is getting less useful, but it's because something else is doing its job, not because what it did is useless (that's why I wrote "or something similar" when I asked). WiFi is still, necessarily, a common bus where everyone talks.

CSMA/CD is the Collision Detection variant; CSMA/CA is Collision Avoidance. FYI, the article is from 2017!

For non-WiFi, we don't use CD because everything is bidirectional (full duplex) and every communication has its own lane, down to the port level on the switches, so there will never be a collision. The algorithm might still be there, but there is no use for it.

For WiFi, CD can never work, because "detecting" a collision mid-transmission is pointless on radio; we need to "avoid" collisions instead, since the medium is shared, so CA is a necessity. Now, in 2026, we actually don't need or use it as much: WiFi (802.11) increasingly functions like a switch, with OFDM and RF signal steering at the PHY (the actual radio-frequency layer) cancelling out signals from other nearby devices, effectively creating bidirectional lanes that function much like switched Ethernet.
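To make the "avoidance" part concrete, here is a toy sketch (my own simplification, not the real 802.11 DCF state machine) of the binary exponential backoff at the heart of CSMA/CA: each station draws a random slot from a contention window, and on a collision the window doubles.

```python
import random

CW_MIN, CW_MAX = 15, 1023  # typical 802.11 DCF contention window bounds

def pick_backoffs(n_stations, cw):
    """Each station picks a random backoff slot in [0, cw]."""
    return [random.randint(0, cw) for _ in range(n_stations)]

def contend(n_stations, seed=0):
    """Return the number of rounds until exactly one station wins a slot."""
    random.seed(seed)
    cw, rounds = CW_MIN, 0
    while True:
        rounds += 1
        slots = pick_backoffs(n_stations, cw)
        winner = min(slots)
        if slots.count(winner) == 1:  # unique minimum: that station transmits
            return rounds
        cw = min(2 * (cw + 1) - 1, CW_MAX)  # collision: double the window
```

With one station there is never contention, and with more stations the doubling window makes repeated collisions increasingly unlikely.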

The article is good and represents an (opinionated) view of how the IETF operates and what happens inside. We actually need an IETF equivalent for AI. It's actually good and a meritocracy; even though of late the big companies try to corrupt it or get their way, academia is still the driver and steers it, and all votes count when working groups self-organize. (My last IETF was in 2018, so I'm not sure how it is now in the 2020s.)


Not really. Wifi does not do CSMA/CD. It does CSMA/CA, something quite different.

Wifi is in any case not considered a bus network, rather a star topology network.


How can wifi be a star topology when all clients connect to the base station using the same airwaves? If it really were a star topology, it would also not be possible to use aircrack-ng or other tools to gather data for WPA cracking by passive listening -- that can only happen on a shared medium network.

I think the most accurate classification is that wifi emulates a star topology at OSI layer 2 on top of a layer 1 bus topology.



Thanks! Macroexpanded...

The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=37116487 - Aug 2023 (306 comments)

The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=25568766 - Dec 2020 (131 comments)

The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=20167686 - June 2019 (238 comments)

The world in which IPv6 was a good design - https://news.ycombinator.com/item?id=14986324 - Aug 2017 (191 comments)


> My concern is that the word “elementary” in the title carries a much broader meaning in standard mathematical usage, and in this meaning, the paper’s title does not hold.

> Elementary functions typically include arbitrary polynomial roots, and EML terms cannot express them.

If you take a real analysis class, the elementary functions will be defined exactly as the author of the EML paper does.

I've actually just learnt that some consider roots of arbitrary polynomials to be part of the elementary functions, but I'm a physicist and only ever took some undergraduate mathematics classes. Nonetheless, calling these elementary feels a bit of a stretch considering that the word literally means basic stuff, something that a beginner will learn first.


All I know is that when a class starts with 'elementary' or 'fundamentals of' you had best buckle up.

Algebraic too.

There's also the opposite in physics though, "modern" means from the 60s with square roots drawn in manually.


Introduction to ...

That's code for 101.

No. It's code for the thickest, densest book on the subject that you're ever gonna not read, as it actually assumes you're experienced in the subject and goes into everything except intro level topics.

See e.g. Petzold, et al.


I'm getting flashbacks to Spivak, who wrote a 2000 page "introduction" to differential geometry.

To be fair to Spivak, he did say it was a comprehensive introduction. :)

> If you take a real analysis class, the elementary functions will be defined exactly as the author of the EML paper does.

I just looked through many of the best-known real analysis texts, and not a single one defines them this way. The list included the texts by Royden, Terence Tao, Rudin, Spivak, Bartle & Sherbert, Pugh, and a few others.

Can you cite a single text book that has this definition you claim is in every real analysis course? I find all evidence points to the opposite.


I guess you're right, I was probably misled this whole time. I went through my old analysis class book [1] and there doesn't seem to be an explicit definition of elementary functions. The best I can find is this paragraph (translated from Italian):

> The elementary functions of analysis, that is, powers, roots, exponentials, logarithms and their inverses, and functions obtained from the former by arithmetic operations or composition, admit the limit f(p) for x → p for any p in their set of definition. The study of such functions, which is not limited to real functions of a real variable alone, is carried out naturally in the setting of metric spaces.

That said, I'm relatively sure that a definition was given in class and it didn't include arbitrary roots: despite being notoriously difficult, the exam didn't require students to draw the graph of any elementary function involving implicitly-defined algebraic roots.

I picked up another one of the old recommended books [2] and it seems similarly vague, while the book currently taught at my university [3] gives this definition:

> The following functions (from ℂ to ℂ) are called the elementary functions of the Analysis:

> 1) Rational functions (integral or fractional)

> 2) Algebraic functions (explicit or implicit)

> 3) The exponential function

> 4) The logarithm function

> 5) All those functions that can be obtained by combining a finite number of times the functions of kind 1)...4).

So, implicitly-defined roots of arbitrary polynomials are indeed considered elementary. I never knew this.

[1]: https://search.worldcat.org/title/1261811544

[2]: https://search.worldcat.org/title/801297519

[3]: https://search.worldcat.org/title/935666878


So, I did a bit of research and I wasn't going crazy: there are apparently two competing definitions of "elementary" in use [1]:

> the class of functions [...] is what I would call exponential-logarithmic functions or EL functions; that is, they are the functions that can be expressed using some finite combination of constant functions, the identity function, exp, log, composition, and arithmetic operations (+−×÷). Some authors call this class of functions elementary functions, but that term is now more commonly used in a different sense, which includes algebraic functions.

Evidently my professor was in the exponential-logarithmic camp.

[1]: https://mathoverflow.net/a/442656


The definition of "elementary function" typically includes functions which solve polynomials, like the Bring radical. The definition was developed and is most fitting in algebraic contexts where algebraic structure is meaningful, like Liouvillian structure theorems, algorithmic integration, and computer algebra. See e.g.

- Page 2 and the following example of https://billcookmath.com/courses/math4010-spring2016/math401... (2016)

- Ritt's Integration in Finite Terms: Liouville's Theory of Elementary Methods (1948)

It's not frequent that analysis books define the class of elementary functions rigorously; they instead refer to examples of them informally.


> See e.g. Page 2 and the following example of https://billcookmath.com/courses/math4010-spring2016/math401... (2016)

There appears to be a typo in that example; I assume "Essentially elementary functions are the functions that can be built from ℂ and f(x) = x" should say something more like "the functions that can be built from ℂ and f(x) = y".


Not a typo! Think of f(x) = x as a seed function that can be used to build other functions. It's one way to avoid talking about "variables" as a "data type" and just keep everything about functions. We can make a function like x + x*exp(log(x)) by "formally" writing

    f + f*(exp∘log)
where + and * are understood to produce new functions. Sort of Haskell-y.
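The same "seed function" idea can be sketched with operator overloading (a minimal illustration of the comment above; the class and method names here are my own, not from any library):

```python
import math

# Functions as first-class objects: + , * and composition produce new
# functions, so an expression like f + f*(exp∘log) denotes a function
# without ever naming a "variable".

class Func:
    def __init__(self, call):
        self.call = call
    def __call__(self, x):
        return self.call(x)
    def __add__(self, other):
        return Func(lambda x: self(x) + other(x))
    def __mul__(self, other):
        return Func(lambda x: self(x) * other(x))
    def compose(self, other):  # self ∘ other
        return Func(lambda x: self(other(x)))

f   = Func(lambda x: x)   # the seed: the identity function
exp = Func(math.exp)
log = Func(math.log)

g = f + f * exp.compose(log)   # formally f + f*(exp∘log)
```

Since exp(log(x)) = x for positive x, g(2.0) evaluates to 2 + 2·2 = 6.0, built entirely by combining functions rather than manipulating expressions.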

> The definition of "elementary function" typically includes functions which solve polynomials, like the Bring radical.

What. Does that "typical definition" of elementary function include elliptic functions as well, by any chance?


Not that I've seen.

Jargon is words used with a specific meaning from the domain in question rather than their typical layman's definition.

If a written piece is intended for an audience who knows the jargon, then it's fine to use it; in fact it's appropriate and succinct. If it's intended for the layman, then jargon is inappropriate.

But it seems you're lamenting that this jargon is wrong and that it shouldn't be jargon!?


I don't know if I'm reading this right, but I thought it's proven that "elementary functions" can't solve polynomials of 5th degree or higher, so I'm confused about how that's interpreted if elementary functions also include arbitrary polynomial roots. Or are those different elementary functions?

That theorem is not formulated about "elementary functions".

It says that polynomial equations of the 5th degree or higher cannot, in general, be solved using "radicals".

While something like "polynomials" or "radicals" has a clear meaning, which are the "elementary functions" is a matter of convention.

The usual convention is to include all algebraic functions and a few selected transcendental functions.

In "all algebraic functions", are included the rational functions, the radicals and the functions that compute solutions of arbitrary polynomial equations.

Some conventions used for "elementary functions" describe the expressions that you can use to write such "elementary functions", in which case not all algebraic functions are included, but only those written by combining rational functions with radicals.

For an algebraic function that computes a solution of a general polynomial equation, which cannot be expressed with radicals, you cannot write an explicit formula, but you can write the function only implicitly, by writing the corresponding polynomial equation.

So the difference between the two kinds of conventions about which functions are "elementary" usually comes down to whether only explicitly-written functions are considered, or implicit functions as well.
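A standard example of such an implicitly-defined algebraic function (my example, not from the comment above) is the Bring radical: it counts as elementary under the broader convention, yet admits no radical expression.

```latex
% BR(a) is defined implicitly as the unique real solution x of
% x^5 + x + a = 0 (unique because the left side is strictly increasing).
% Under the broader convention BR is "elementary", yet by Abel-Ruffini
% no finite combination of +, -, *, / and n-th roots can express it.
\[
  \operatorname{BR}(a)^5 + \operatorname{BR}(a) + a = 0 .
\]
```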


So the argument of the post is basically "this definition of elementary functions includes functions without closed-form expressions, and thus we cannot express these elementary functions with EML", or something more (that there exist elementary functions with closed-form expressions that cannot be expressed by EML)?

FWIW, I never thought that functions without closed-form expressions were considered elementary functions, but I guess one could choose to allow this if they wanted.


The term 'elementary function' doesn't really have a single universally agreed on strict definition.

Definitions are either a bit fuzzy, or not universally agreed on.

Though interestingly, https://en.wikipedia.org/wiki/Elementary_function says "More generally, in modern mathematics, elementary functions comprise the set of [...]". So at least Wikipedia thinks that 'modern mathematics' has a consensus; of course, there's no guarantee that whoever you're talking to uses the 'modern mathematics' definition that Wikipedia brings up.


In math, "elementary" usually means fundamental or foundational, not elementary-school level. The root word is "element", and the relationship to "simple subject" is tangential: it's more about teaching the elemental topics underpinning a lifetime of education than about being simple by definition across disciplines.

> aren't exp and ln really primitives? Aren't they implemented in terms of +,-,/,* etc?

They're primitive in the sense that you can't compute exp(x) or log(x) using a finite combination of other elementary functions for arbitrary x. If you allow infinitely many operations, then you can easily find infinite sums or products of powers, or more complicated expressions, to represent exp, log and the other elementary functions.

> Or do we assume that we have an infinite lookup table for all possible inputs?

Essentially, yes: you don't necessarily need an "implementation" to talk about a function, or more generally you don't need to explicitly construct an object from simpler pieces: you can just prove it satisfies some properties and that it has to exist.

For exp(x), you could define the function as the solution to the differential equation df/dx = f(x) with initial condition f(0) = 1. Then you would establish that the solution exists and is unique (it follows from the properties of the differential equation), call exp = f, and there you have it. You don't necessarily know how to compute it for any x, but you can assume exp(x) exists and is a real number.
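To illustrate, one can recover values of exp numerically straight from that defining property, without ever calling a closed form (a sketch using a basic Runge-Kutta integrator; the step count is arbitrary):

```python
import math

def solve_exp(x, steps=1000):
    """Integrate f' = f from 0 to x with classical RK4, from f(0) = 1."""
    h = x / steps
    f = 1.0
    for _ in range(steps):
        k1 = f                  # f'(t) = f(t) for this equation
        k2 = f + h * k1 / 2
        k3 = f + h * k2 / 2
        k4 = f + h * k3
        f += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return f

# The defining property pins the function down; the integrator merely
# approximates it, with only a tiny discretization error vs math.e:
print(abs(solve_exp(1.0) - math.e))
```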


Well, for different reasons, but you have similar issues with IPv6 as well. If your client uses temporary addresses (most likely, since they're enabled by default on most OSes), OpenSSH will pick one of them over the stable address, and when they're rotated the connection breaks.

For some reason, OpenSSH devs refuse to fix this issue, so I have to patch it myself:

    --- a/sshconnect.c
    +++ b/sshconnect.c
    @@ -26,6 +26,7 @@
     #include <net/if.h>
     #include <netinet/in.h>
     #include <arpa/inet.h>
    +#include <linux/ipv6.h>
     
     #include <ctype.h>
     #include <errno.h>
    @@ -370,6 +371,11 @@ ssh_create_socket(struct addrinfo *ai)
      if (options.ip_qos_interactive != INT_MAX)
        set_sock_tos(sock, options.ip_qos_interactive);
     
    + if (ai->ai_family == AF_INET6 && options.bind_address == NULL) {
    +  int val = IPV6_PREFER_SRC_PUBLIC;
    +  setsockopt(sock, IPPROTO_IPV6, IPV6_ADDR_PREFERENCES, &val, sizeof(val));
    + }
    +
      /* Bind the socket to an alternative local IP address */
      if (options.bind_address == NULL && options.bind_interface == NULL)
        return sock;


The temporary address doesn't stay active while there's a connection on it? I think that would be the actual "fix".


I think it does, but that's not the issue: if the interface goes down, all the temporary addresses are gone for good, not just "expired".


If you're on a stable address, and the interface goes down, will it let your connection/socket continue to exist?

Because if the connection/socket gets lost either way, I don't really care if the IP changes too.


I'm not sure what happens to the socket, maybe it's closed and reopened, but with this patch I have SSH sessions lasting for days with no issues. Without it, even roaming between two access points can break the session.


Interesting! Is there anywhere a discussion around their refusal to include your fix?


See this, for example: https://groups.google.com/g/opensshunixdev/c/FVv_bK16ADM/m/R...

It boilds down to using a Linux-specific API, though it's really BSD that is lacking support for a standard (RFC 5014).


It would also seem to break address privacy (usually not much of a concern if you authenticate yourself via SSH anyway, but still, it leaks your Ethernet or Wi-Fi interface's MAC address in many older setups).


This is a good argument for not making it the default, but it would be nice to have it as a command line switch.


Well, yes, but SSH is hardly ever anonymous, and this could simply be a CLI option.


Not anonymous, but it's pretty unexpected for different servers with potentially different identities for each to learn your MAC address (if you're using the default EUI-64 method for SLAAC).


The magnetron itself has about 65% efficiency, but the paper conjectures that the longer duration of the pulses is due to defects in the cavity that result in some emission at a lower frequency (1.4 rather than the normal 2.4 GHz), so the energy radiated must be a tiny fraction of the nominal power.


What is the point of restricting a certificate to "server" or "client" use, anyway?


Trust chains. Some implementations would accept an LE certificate for foo.com as a valid login for foo.com, or something like that, because they treated all trusted certs the same, whether issued by the service being authenticated to or by some other CA.

It might be possible to relay communications between two servers and have one of them act as a client without knowing. Handshake verification prevents that in TLS, but there could be similar attacks.


I have been using GitHub since 2011, and it's undeniable that the performance of the website has been getting worse. The new features that are constantly being added are certainly a factor, but I think the main cause is the switch to client-side rendering, which obviously shifted the load from their servers to our browsers and also tends to produce ridiculously large and inefficient DOMs[1].

If you want a practical example, here you go. I'm a Nixpkgs committer, and every time I make a pull request that backports some change to the stable branch, GitHub unprompted starts comparing my PR against master. If I'm not fast enough to switch the target branch within a couple of seconds, it literally freezes the browser tab and I may have to force-quit it. Yes, the diff is large, but this is not acceptable, and more importantly, it didn't happen a few years ago.

[1]: https://github.com/orgs/community/discussions/111001


> MSS clamping is non-negotiable with tunnels. Every layer of encapsulation eats into the MTU.

Can this tunnel be avoided somehow? If I had to choose between owning my prefix and having a 1500-byte MTU, I'd probably take the latter: MTU issues are so annoying to deal with, and MSS clamping doesn't solve all of them.
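For concreteness, the overhead arithmetic behind "every layer of encapsulation eats into the MTU" can be sketched for one common case (WireGuard over IPv4; other tunnels and IPv6 transport have different overheads):

```python
# Each encapsulation layer subtracts from the 1500-byte Ethernet MTU
# available to the inner packet.
ETH_MTU   = 1500
OUTER_IP4 = 20   # outer IPv4 header
UDP       = 8    # WireGuard runs over UDP
WG        = 32   # 16-byte data-message header + 16-byte Poly1305 tag

tunnel_mtu = ETH_MTU - OUTER_IP4 - UDP - WG   # inner packet budget
print(tunnel_mtu)   # 1440

# The MSS the TCP payload must be clamped to inside the tunnel:
# also subtract the inner IPv4 (20) and TCP (20, no options) headers.
mss = tunnel_mtu - 20 - 20
print(mss)          # 1400
```

This is why WireGuard's default interface MTU is lower still (1420, sized for an IPv6 outer header), and why clamping only helps TCP: other protocols still hit the reduced MTU directly.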


Kind of but not really.

The whole point of BGP is to influence your routing tables. This fundamentally makes very little sense to do when you have a bunch of routers whose routing policy you don't control between you and whoever you're speaking BGP to. eBGP is just TCP and supports knobs to run over multiple hops (so up to 255), but at that point you can't really do anything with the routing information you exchange because the moment you hand the traffic off, the other party can do with it how it pleases. Also, very few people have enough public IP addresses for this, and on the Internet you obviously can't route RFC1918 space. Therefore, you need tunnels, so that you can be one hop away even if the tunneled traffic is traversing the Internet, and so that you can reach peers that let you announce whatever IP space you want.

The other thing you can do, of course, is to just do the same thing internal to your lab. You can absolutely stand up multiple ASN at home. I'd even argue that if you really want to learn BGP, this is a great way to do it, especially if you use two different platforms (say, FRR on FreeBSD peering with a cheap Mikrotik running RouterOS). That way you learn the underlying protocol and not a specific implementation, which is something that is very hard to undo in junior network engineers that have only ever been exposed to one way of doing things.

That's different from some of the goals outlined in the article, but if your goal is to learn this stuff rather than have provider-independent IP space (which even for home labs isn't very valuable to most people), doing it all yourself works fine.
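Since the comment suggests standing up multiple ASNs at home, here is a minimal sketch of what one side of such a lab eBGP session could look like in FRR's vtysh syntax (ASNs from the private range, all addresses made up):

```
! One side of a two-router home-lab eBGP session (FRR).
router bgp 64512
 bgp router-id 10.0.0.1
 neighbor 10.0.0.2 remote-as 64513
 !
 address-family ipv4 unicast
  network 172.16.0.0/24
 exit-address-family
```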


You can use whoever you're physically connected to. If you have a physical or point-to-point connection to iFog and Lagrange Cloud, you don't need tunnels to reach them. Both of these companies offer VPS services.

If your goal is to learn this stuff, join dn42, the global networking lab, instead of wasting money on real allocations.


Yes, this can be avoided. All the standard advice and examples are tailored toward avoiding IP packet fragmentation entirely even when the tunnel transport can encapsulate and transmit packets larger than the underlying path MTU. Mostly this is justified for performance reasons, but it also tends to avoid even more difficult to debug situations where there's an MTU or ICMP issue between tunnel endpoints.

I haven't used WireGuard before, but I believe if you force the wg interface MTU to 1500, things will just work. I use IPsec, where the solution would be to use something like link-layer tunneling that, ironically, adds another layer of encapsulation to the equation. Most tunnel solutions don't directly support fragmentation as part of their protocol, but you get it for free if they use, e.g., UDP or another disjoint IP protocol for transport and don't explicitly disable fragmentation (e.g. by setting the Don't Fragment (DF) flag).

If I were to do this (and I keep meaning to try), I might still lower the MSS on my server(s) just for performance reasons, but at least the tunnel would otherwise appear seamless externally.

