Hacker News | kybernetikos's comments

Most of the hardware had ROM with its drivers in it, which meant just about everything was a plug-and-play experience.

And it is true that it was fast, but once you'd customised the font, replaced all the system icons, and set StrongED as your default editor in your !Boot, it could take quite a long time to start up.


I think Fireworkz Pro was the next evolution of the concept.

Author here. Fireworkz was next, then Fireworkz Pro, both of which are discussed in the article.

Current LLM architecture doesn't learn - and you're right this is a huge piece that normal folks fail to understand, since in many ways, it's the opposite of what years of AI research has been trying to create.

However, I think it's important to remember that LLMs are embedded in larger systems, and those larger systems do learn.


If I were a frontier lab and I had solved continual learning, as of today I would absolutely not release it: society isn't ready for this; society isn't even ready for widespread diffusion of the current publicly available frontier models.

If, however, I were a frontier lab that had solved continual learning and my competitor had also solved and released it, I would obviously release mine immediately.

The point is, continual learning might already be solved; we just don't know, and those who might know would rather keep their mouths shut. It isn't my base case (the financial situation of the frontier labs is such that they'd probably release immediately, as long as they have the inference compute to serve this revolutionary capability), but it isn't impossible.


You're not a frontier lab; the shareholders own those. And if shareholders got a private briefing about an unprecedented breakthrough in continual learning, they would announce it from the rooftops ASAP to take credit for the progress and reap the rewards in their stock value.

The only lab that I can exempt from this is DARPA.


Shareholders are not insiders. Public companies run secret projects all the time about which shareholders know absolutely nothing, and they may never learn the details if those projects get cancelled.

Private market dynamics are not the same, buddy.

Everyone owns them at this point and Google is outright public.

No, you're missing the poster's point: disclosure rules in private markets are different from those in public ones, especially in the context of large commitments, where the company has no choice.

The implications are very different if everyone owns them, even if they aren't public. They may have no choice about whether to share, but the owners who have the privilege of knowing (not everyone, because earlier owners aren't stupid) don't act the same, right?

And let's be honest: rules get bent all the time, especially when valuations run to nine figures. Stakeholders at this point won't risk killing the golden goose.


Exactly, like you said: the harness might learn.

We do also have training on synthetic data. It might compound.


I've still got a Firefox OS phone in a drawer somewhere. I was disappointed it got discontinued, like so many other Mozilla projects.


I was as well. It was very early, but not necessarily too early.

I think part of the problem is that they decided to have the flagship devices be low-end hardware, rather than high-end hardware. They were trying to ensure that development took low-end hardware into account, but they failed to consider that by the time the platform grew, high-end would become mid-range.


It's hard for me to know for sure, but it felt like a big strategic misstep that none of the Firefox OS phones were targeted as devices developers would themselves want to use as their main phone.

I used S2 to make https://wherewords.id/

It was very pleasant to work with - I spent by far the majority of the project time on the wordlist.
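The core idea behind a coordinates-to-words scheme can be sketched with a stdlib-only toy. The real site uses S2 cell IDs (a hierarchical subdivision of the sphere) and a curated wordlist; the flat grid, eight-word list, and function names below are all invented for illustration:

```python
# Toy "coordinates -> words" sketch, loosely in the spirit of wherewords.id.
# Real S2 cell IDs come from a hierarchical sphere subdivision; here we just
# quantise a flat lat/lng grid. The wordlist is a placeholder.

WORDS = ["apple", "brick", "cloud", "delta", "ember", "frost", "grove", "haze"]

def cell_index(lat: float, lng: float, cells_per_degree: int = 10) -> int:
    """Quantise (lat, lng) to a unique grid-cell index."""
    row = int((lat + 90) * cells_per_degree)
    col = int((lng + 180) * cells_per_degree)
    return row * (360 * cells_per_degree) + col

def encode(cell: int) -> list[str]:
    """Write the cell index in base len(WORDS), one word per digit."""
    words = []
    while True:
        cell, digit = divmod(cell, len(WORDS))
        words.append(WORDS[digit])
        if cell == 0:
            return words

def decode(words: list[str]) -> int:
    """Invert encode(): recover the cell index from its word digits."""
    cell = 0
    for w in reversed(words):
        cell = cell * len(WORDS) + WORDS.index(w)
    return cell

print(encode(cell_index(51.5074, -0.1278)))  # central London, toy words
```

With only eight words, addresses come out long; a real wordlist of a few thousand entries gets a usable location down to three or four words, which is where most of the wordlist effort goes.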


I've seen some similar geohash-words stuff, this is super cool, thanks for sharing!


I think maybe a better analogy would be "the ladder I'm standing at the top of has some faulty rungs near the bottom. I'll set it on fire."

There are lots of things where tearing the system down and starting from scratch is a bad idea, especially if you do it while depending on it and before you have a replacement.


All systems have problems. What's your evidence that the system in the US is overall net negative? Pretty much the entire rest of the world would have loved to have a scientific system as good as that in the USA. The research output from both government and business was much more extensive and productive of value than the equivalent systems in Europe, for example.


> What's your evidence that the system in the US is overall net negative?

Really? The reproducibility crisis, stagnation in various fundamental fields, and the bullshit I've seen with my own eyes working in the salt mines of academic science.

> Pretty much the entire rest of the world

No doubt, the irony being that by trying to copy the US's Vannevar Bush model, these other parts of the world will invariably fall into the same p-hacking/publication-count/tenure-chasing system that has rotted the US one. Except the US got to pick the low-hanging scientific fruit already.


If you can get something almost as capable for a fiftieth of the price, in most cases you'll do that. You might still send a few tokens to the more expensive option for the exceptional, difficult cases, but that's maybe 10% of the tokens at most. I don't see how it'll be possible to keep spending what Anthropic, OpenAI, Google, etc. are spending if they're only going to see the trickiest 10% of tokens.


Missed the point award


Maybe I need to spell out the step that connects them: how will those companies afford to keep "iterating forward at the frontier" when they probably have a huge crash in income coming, from competition with good-enough open models at 1/50th the price?

Iterating forward at the frontier doesn't seem like a sustainable approach if everyone else can catch up with you in 6 months.
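The arithmetic behind that worry can be made concrete (all prices and volumes below are purely hypothetical, not real figures from any lab):

```python
# Back-of-envelope: a frontier lab's token revenue if a near-equivalent
# competitor charges 1/50th the price and wins the easy 90% of traffic.
# All numbers are illustrative.

price_frontier = 15.0      # $ per million tokens (hypothetical)
total_tokens = 1_000_000   # millions of tokens of demand

revenue_before = total_tokens * price_frontier

hard_share = 0.10          # only the trickiest 10% still goes to the frontier model
revenue_after = total_tokens * hard_share * price_frontier

print(f"revenue retained: {revenue_after / revenue_before:.0%}")  # prints "revenue retained: 10%"
```

Even if the frontier lab could also charge a premium on those hard tokens, it would need to raise prices roughly tenfold on that slice just to stand still, which the cheap competitor's existence makes difficult.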


Neural networks are universal approximators. The function being approximated in an LLM is the mental process required to write like a human. Thinking of it as an averaging devoid of meaning is not really correct.


> The function being approximated in an LLM is the mental process required to write like a human.

Quibble: That can be read as "it's approximating the process humans use to make data", which I think is a bit reaching compared to "it's approximating the data humans emit... using its own process which might turn out to be extremely alien."


Good point.

Then again, whatever process we're using, evolution found it in the solution space, using even more constrained search than we did, in that every intermediary step had to be non-negative on the margin in terms of organism survival. Yet find it did, so one has to wonder: if it was so easy for a blind, greedy optimizer to random-walk into human intelligence, perhaps there are attractors in this solution space. If that's the case, then LLMs may be approximating more than merely outcomes - perhaps the process, too.


It's fuzzier than that. Something can be detrimental and survive as long as it's not too detrimental. Plus there is the evolving meta that moves the goalposts constantly. Then there are the billions of years of compute...


Negative mutations can survive for a long time if they're not too bad. For example the loss of vitamin C synthesis is clearly bad in situations where you have to survive without fresh food for a while, but that comes up so rarely that there was little selection pressure against it.


An easy counterargument is that - there are millions of species and an uncountable number of organisms on Earth, yet humans are the only known intelligent ones. (In fact high intelligence is the only trait humans have that no other organism has.) That could perhaps indicate that intelligence is a bit harder to "find" than you're claiming.


That humans are the only known intelligent ones is a very dubious statement. The most intelligent, sure, but several species of birds, great apes, and cetaceans all display significant intelligence.


> The most intelligent, sure, but several species of birds, great apes, and cetaceans all display significant intelligence.

Relative to all other non-humans. If someone is reducing intelligence to a boolean, the threshold can of course go anywhere.

I wouldn't be surprised if someone could get a dog to (technically) pass a GCSE (British high-school) exam (not the full subject, just one exam) in a language other than English, because one dog learned a thousand words, and that might just technically be enough for a British student to scrape a minimum pass in a French GCSE listening test.

But nobody sane ever hired a non human animal to solve a problem that humans consider intellectually challenging.

If intelligence is the ability to learn from few examples, all mammals (and possibly all animals; I'm not sure about insects) beat all machine learning, and by a large margin. If it is the ability to learn a lot and synthesise combinations of those things, LLMs beat any one of us by a large margin and are only weak when compared to humanity as a whole rather than to a specific human. If it is peak performance, narrow AI (non-LLM) beats us in a handful of cases, as do non-human animals in some cases, while we beat all animals and all ML in the majority of things we care about.

Driving is still an example of a case where humans hold the peak performance.


> If someone is reducing intelligence to a boolean, the threshold can of course go anywhere.

Indeed, it would be very surprising if multiple species had exactly the same intelligence. It's more likely that this variable follows some distribution. Of course the species at the top can set the threshold so that no other species meets it, if they feel like declaring themselves uniquely intelligent. But that's not very useful.

> Driving is still an example of a case where humans hold the peak performance.

Other great apes can drive too.

https://www.youtube.com/watch?v=RZ_0ImDYrPY

I think it's very hard to look at this video and not recognize that orangutans are intelligent


> Other great apes can drive too.

As can dogs. However, I said "peak".


I supposed you were comparing humans with machines


> if it was so easy

That’s one giant leap you got there.

That the probability that intelligent life exists in the universe is 1 says nothing about the ease, or otherwise, with which it came about.

By all scientific estimates it took a very long time and faced very many hurdles, and by all observational measures it exists nowhere else.

Or, what did you mean by easy?


> By all scientific estimates it took a very long time and faced very many hurdles, and by all observational measures it exists nowhere else.

We know how long it took. We have a good idea when life started, and for almost all of its history it was single-celled. Multicellular life is relatively fresh, and on evolutionary time scales the progression from the first eukaryotes to something resembling a basic nervous system, to basic brains, to humans was fairly quick. We have many examples of animals alive today from every part of the progression, and we know they actively use it. We know how natural selection works, that it makes small moves, and that each increment has to be net non-negative in terms of fitness (at least averaging out over populations) - otherwise it would die out instead of accumulating.

All that adds up to, yes, it's surprising evolution stumbled on our level of intelligence so easily.


> We know how natural selection works, that it makes small moves, and that each increment has to be net non-negative in terms of fitness (at least averaging out over populations) - otherwise it would die out instead of accumulating.

If you're going to go about claiming to know how evolution works, at least know how evolution works:

https://en.wikipedia.org/wiki/Punctuated_equilibrium


I don't think of it as "devoid of meaning". It's just curious to me that minimizing a loss function somehow results in sentences that look right but still... aren't. Like the one I quoted.


A human in school might try to minimise the difference between their grades and the best possible grades. If they're a poor student they might start using more advanced vocabulary, sometimes with an inadequate grasp of when it is appropriate.

Because the training process of LLMs is so thoroughly mathematicalised, it feels very different from the world of humans, but in many ways it's just a model of the same kinds of things we're used to.


> Thinking of it as an averaging devoid of meaning is not really correct.

To me, this sentence contradicts the sentence before it. What would you say neural networks are then? Conscious?


They are a mathematical function that has been found during a search that was designed to find functions that produce the same output as conscious beings writing meaningful works.


Agreed, and to that point, the way to produce such outputs is to absorb a large corpus of words and find the most likely prediction that mimics the written language. By virtue of the sheer amount of text it learns from, would you say that the output tends toward the average response for the text provided? After all, "overfitting" is a well-known concept that ML researchers avoid on principle. What else could be the case?


I think 'average' is creating a bad intuition here. In order to accurately predict the next word in a human generated text, you need a model of the big picture of what is being said. You need a model of what is real and what is not real. You need a model of what it's like to be a human. The number of possible texts is enormous which means that it's not like you can say "There are lots of texts that start with the same 50 tokens, I'll average the 51st token that appears in them to work out what I should generate". The subspace of human generated texts in the space of all possible texts is extremely sparse, and 'averaging' isn't the best way to think of the process.
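A toy illustration of why 'averaging' misleads (the tiny vocabulary, fake embeddings, and probabilities below are all invented): the average of two valid continuations need not be a valid token at all, whereas a model that predicts a distribution over real tokens always commits to one of them.

```python
# Toy: the average of valid next-token embeddings is not itself a valid token.
# A language model scores actual tokens and picks one, rather than blending.
# Vectors and vocabulary are made up for illustration.

vocab = {"cat": (1.0, 0.0), "dog": (-1.0, 0.0)}  # fake 2-d embeddings

# "Averaging" two plausible continuations of "I petted the ..."
avg = tuple((a + b) / 2 for a, b in zip(vocab["cat"], vocab["dog"]))
print(avg in vocab.values())  # -> False: (0.0, 0.0) corresponds to no word

# What an LM actually does: assign probabilities to real tokens, then commit.
probs = {"cat": 0.55, "dog": 0.40, "the": 0.05}
choice = max(probs, key=probs.get)  # greedy decoding picks the argmax token
print(choice)  # -> cat: the output is always an actual token
```

So the model's output is a choice among real continuations, weighted by a learned model of which one fits the whole context, not a blur of all of them.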


My previous phone was refurbished and was great in all ways except for battery life. I have now bought a new phone that I wouldn't have bought if batteries were replaceable.

Having said that, I do like having waterproof phones, and I expect this rule would make that harder.

