Hacker News | ogogmad's comments

> GDP per capita lower than Poland

> now poorer than every state in America

You've confused the mean with the median. GDP per capita is not a measure of how well-off the typical person in a country is.

American states have a lot more income inequality than the UK does, which (due to positive "non-parametric skewness", I think) pulls their GDP per capita upwards.


"Even more, this coproduct can be thought of as an instance of a dependent pair indexed by a finite set" - I know what the individual terms mean, but I don't get what this is saying.

A coproduct in the category Set is a disjoint union of sets, i.e. A + B + C where A, B, C are sets.

We can think of this coproduct as involving two choices:

1) a choice of which component of the coproduct we're interested in (first, second, or third)

2) a choice of an element of that component

That is, `A + B + C` is isomorphic to `Σ (i : Fin 3), D i`, where `Fin 3` is a set with three elements, `D : Fin 3 -> Type`, and `D 0 = A`, `D 1 = B`, `D 2 = C`.

Then, the idea is: why index by a finite set? If you replace `Fin 3` by some arbitrary set, you start to be able to model a very general notion of a dependent type.
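As a concrete sketch (in plain Python rather than type theory, with `A`, `B`, `C`, `D` as made-up stand-ins): the coproduct can be modelled as the set of (index, element) pairs, which is exactly the dependent-pair reading above.

```python
# A coproduct A + B + C modelled as a set of (index, element) pairs:
# the first component chooses the summand, the second an element of it.
A = {"a1", "a2"}
B = {10, 20, 30}
C = {True}

# The family D : Fin 3 -> Set, with D(0)=A, D(1)=B, D(2)=C.
D = {0: A, 1: B, 2: C}

# The dependent pair (sigma type): all pairs (i, x) with x in D(i).
coproduct = {(i, x) for i in D for x in D[i]}

# Its size is the sum of the summands' sizes, as expected of A + B + C.
print(len(coproduct))  # 2 + 3 + 1 = 6
```

Replacing the keys `0, 1, 2` by an arbitrary index set is exactly the generalisation described below.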


Use a VPN or Tor browser?


The recent news of multiple solutions to Erdős problem 1196, produced by LLMs without any human help, makes any suggestion that LLMs have hit a wall in reasoning seem less credible. To give you an idea, problem 1196 had been worked on by different experts for years. Now, suddenly, LLMs have come along and solved the problem in a multitude of ways. Perhaps LLMs will eventually stall, but this paradigm still has some juice left to squeeze.

But are we talking pure LLMs, or existing AI solvers augmented with LLMs? Because while the latter is impressive, it doesn't state much outside of this specific domain.

If anything, I see more vertical, specialized software with LLMs at its core, but with a lot of supporting tooling and technology around them to really make the most of them.


The announcement says:

> This was solved by GPT-5.4 Pro (prompted by Price)

See the discussion here: https://www.erdosproblems.com/forum/thread/1196


"are we talking pure LLMs, or existing AI solvers augmented with LLM"

Why do these distinctions matter?

Is it an LLM, or symbolic AI, or a combination, or a dozen technologies stitched together? Who cares? It's all automation. It's all artificial.


True, it's an achievement either way. But if an "out of the box LLM" can solve difficult math problems, it's an achievement by the LLM vendor. Otherwise it's an achievement by the people doing the vertical integration.

In the context of evaluating how LLMs are evolving, this is the crucial distinction.


It matters in terms of whether it's generalizable. We've had computers doing impressive domain-specific things for decades.

Sure.

But "purely LLM" isn't a definition of generalizable either.


The distinctions matter since computational proofs have been around for decades.

What do you mean? Stop beating around the bush.

On a tangent: I've tried to connect Euclid's Elements with quantifier elimination theorems. It looks like most of the geometry follows from QE of real-closed fields. Some of the number theory relates to Presburger arithmetic. Some other number theory, including the irrationality of sqrt(2), is down to Skolem. The Pythagorean triples relate to extending Skolem to the Gaussian integers. I suspect some of the "embryonic" integral calculus could be related to holonomic functions, which seem like they admit a form of QE.

Don't have anything for the perfect numbers though.
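To make the real-closed-field case concrete, here is the standard textbook instance of quantifier elimination (not from the original comment, just the usual example): eliminating the quantifier from the solvability of a monic quadratic leaves a polynomial inequality in the coefficients.

```latex
% Quantifier elimination over a real-closed field: the quantified
% statement on the left is equivalent to the quantifier-free one
% on the right.
\[
  \exists x.\; x^2 + bx + c = 0
  \quad\Longleftrightarrow\quad
  b^2 - 4c \ge 0
\]
% Tarski's theorem guarantees such an elimination for every first-order
% formula in the language of ordered fields, which is why so much of
% elementary geometry is decidable.
```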


Let's consider a hash table with an allocation of 1 MB, which is 2^20 bytes. Assume also that each entry occupies a byte, giving 2^20 slots. Assuming the hash function's values are distributed uniformly at random, the probability of a collision with only 1000 entries is approximately 38% = 1 - (2^20)! / ((2^20 - 1000)! · (2^20)^1000). See the "Birthday Problem".
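A quick sketch of that computation: the factorial expression above, evaluated as a running product to avoid enormous intermediate numbers.

```python
m = 2**20   # number of slots (1 MB, one byte per entry)
n = 1000    # number of entries inserted

# P(no collision) = m! / ((m - n)! * m^n), computed as a running product:
# the k-th entry avoids the k slots already taken with probability (m - k)/m.
p_no_collision = 1.0
for k in range(n):
    p_no_collision *= (m - k) / m

p_collision = 1 - p_no_collision
print(f"{p_collision:.2%}")  # roughly 38%, matching the birthday-problem estimate
```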

Just about a week ago, Trump was joking about Pearl Harbor on TV while the Japanese PM was sitting right next to him. What's more, she's a nationalist.


I only give Trump about 25% credit for that joke; the reporter teed it up too much. https://m.youtube.com/watch?v=fFmA0xlINGg&ra=m


Remember that video about ethnically cleansing Gaza, beach-resort casinos, and golden Trump statues? A real-estate crypto-bro version of Hitler!


If only Hollywood had let Trump be in more films than Home Alone 2, none of this would have happened!


That's revenge for not letting him play his character in Back to the Future.


Or in Super Mario Bros. (1993)


Some people appreciate that an LR/LALR parser generator can prove non-ambiguity and linear-time parseability of a grammar. A couple of examples are the creator of the Oil shell and one of the people responsible for Rust.

It does make me wonder though about why grammars have to be so complicated that such high-powered tools are needed. Isn't the gist of LR/LALR that the states of an automaton that can parse CFGs can be serialised to strings, and the set of those strings forms a regular language? Once you have that, many desirable "infinitary" properties of a parsing automaton can be automatically checked in finite time. LR and LALR fall out of this, in some way.


Production compilers must have robust error recovery and great error messages, and those are pretty straightforward in recursive descent, even if ad hoc.


I think even the theory of regular languages is somewhat overdone: you can get the essence of what NFAs are without really needing NFAs. You can get O(n) string matching without formally implementing NFAs, or using any other formal model like regex derivatives. In fact, thinking in terms of NFAs makes it harder to see how to implement negation (or "complement", if you prefer) efficiently. It's still only linear time!

NFA/DFA/derivative models are mostly unnecessary because ultimately, REG is just DSPACE(O(1)). That's it. Thinking any other way is confusing the map with the territory. Furthermore, REG is extremely robust, because we also have REG = DSPACE(o(log log n)) = NSPACE(o(log log n)) = 1-DSPACE(o(log n)). For help with the notation, see here: https://en.wikipedia.org/wiki/DSPACE

