Hacker News | afavour's comments

Yeah, reading this my reaction is “so why didn’t they do it?”. A less prominent app would have been pulled first and notified later.

It has a massive user base. And political connections. And lawsuit money. Apple (and Google) will absolutely treat these publishers differently than a random app developer.

Apple doesn't provide any enforcement for apps that are in the top percentile.

https://techcrunch.com/2026/04/14/how-the-rewards-app-freeca...

You'd think Apple would go after the top-charting apps that are leveraging the scam companies (like Monopoly Go and Disney Solitaire) for actively engaging with scams like this to pump their own numbers up...

(https://old.reddit.com/r/FreeCash/comments/1i4132r/monopoly_... - like this. What the everloving hell? Straight up enticing users to shove themselves into a game, expose themselves to ads galore, and then keep goading them into blowing even more money in the partner app under the guise of 'real cash'.)


Because it makes Android a more attractive option than it otherwise would have been.

Maybe—I don't think anyone is choosing between the two based on access to grok of all things. I think it's simply treated as an extension of twitter, which will almost certainly never be forced out while it remains the premier app for diplomacy and AI porn.

That argument didn't stop them from pulling Fortnite in its heyday though.

Yeah, Apple doesn't care about losing money or pissing off a large user-base. They assume they have enough money and they'll always have the larger user-base.

They care about people pissing in their ocean.


Surely the ability to compile your WASM is a pretty big benefit over TypeScript, if it’s something you need.

AssemblyScript exists.

AssemblyScript seems to be seriously languishing these days, and the team has had falling-outs with a lot of the Wasm ecosystem.

Which I actually agree with, as the Wasm ecosystem is trying to be yet another UNCOL outside the browser, bringing CORBA back while pretending it is some great new idea.

Sure. It isn’t TypeScript though.

C subset + compiler extensions for some embedded systems isn't proper C, and people still call it C, given how close enough it is.

> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?

Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.


I think that the interior structure doesn't necessarily matter—the problem here is that we don't know what consciousness is, or how it interacts with the physical body. We understand decently well how the brain itself works, which suggests that consciousness is some other layer or abstraction beyond the mechanism.

That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"

I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.


I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction" and everyone recognized that it was so dangerous to use them that they've only ever been used once.

I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.


I remind you of why nuclear weapons exist.

They exist because human minds conceived them, and human hands made them.

One of the major dangers of advanced AI is being able to implement something not unlike Manhattan project with synthetic intelligence, in a single datacenter.


Yeah, the problem with AI is that they can become too good at performing general tasks, ranging from designing cancer treatments to designing bioweapons, and everything in between.

You can't create and enrich nuclear materials inside a datacenter.

Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time and even into the 50s it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regards to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there's scary new capabilities?

> Everyone recognized that it was so dangerous to use them after the first two mass casualty events

I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.


Truman changed after learning the real civilian death tolls that they caused. The military leaders absolutely knew the impact beforehand, and kept advocating for their use in later wars.

By any quantifiable measure, yes, and not by small numbers either.

Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst discursive distractions. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.

People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo Erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.


Nuclear weapons have rarely been used kinetically. Their real force multiplier is the fear.

A.I. is being used by so many people for so many diabolical things, hidden, unknown things, that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.

The expression "Drinking the Koolaid" is used to explain the Jonestown mass suicide. It is an information hazard: a cult created the end result of 900 people drinking poisoned Flavor Aid. That's just one example of a human-caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?


It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product in a bid to, I think, show the inevitability of it? Or to sell themselves as the one responsible power to constrain it?

It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.


There are so many reasons if you look at how it's being sold.

* We need to completely deregulate these US companies so China doesn't win and take us over

* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner

* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)

* If you don't use AI, you will not be able to function in a future job

* We need to line up an excuse to call our friends in government and turn off the open source spigot when the time is right

They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new and then flip the narrative once people are more familiar with it than to go the other direction. Companies are not just telling a story to hype their product, but why they alone are the ones that should be entrusted to build it.


It is very odd, indeed. It's a bit of both well known "hells of marketing": fomo on the one side (you better use us as heavily as possible), combined with mysticism of "we don't know what we created, but it's powerful and you better follow us to be on the right side"

Yeah, the messaging felt weirdly pyromanic, like telling everyone about the unimaginable dangers of fire and then saying that's why I have to burn everything, to protect us from the fire...

They're selling the product to the class of people who would love for it to take the jobs of our class of people.

> It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product

That is direct CEO to CEO marketing. They're working really hard to convince high up decision makers that these tools will lower their head count and reduce costs.


They are being honest and you don't want to deal with the implications, so you stretch for conspiracy theories.

The ones at the top are the true believers. Engage with them at that level.


That's a good view point. Perhaps they're not being alarmists or trying to scare people, but being honest about the capabilities.

Perhaps it can be better articulated and framed in a way that's well received. But, maybe that would be over-promising or not being honest about the future.


People are renowned for voting or buying against their own best interests. And it goes back to before Trump, and happens in many countries.

Yes but in order to get someone to vote against their interests you need to sell them on something else that's a benefit. They don't just automatically vote against themselves.

"This technology might escape our control, might devastate the economy but also serves as a serviceable chatbot for your entertainment" isn't a vote winner.


The way out of this for the seller is to lie and conflate the rich with everyone. "This technology will make your retirement account grow and other countries will be giving us money!"

Why is it highly relevant? It’s a bunch of people betting on the outcome.

I've spent years watching prediction markets and finding them to be, by a wide margin, the most accurate way for me to understand the world. It is not remotely close.

It sucks that they're going mainstream, providing incentives to bad actors to profit from their power, and it sucks that they've gone so heavily for the predatory gambling market to boot.

I really hate this duality.


> the most accurate way for me to understand the world

Are you sure it's not survivorship bias or similar? I've seen multiple trend lines that are very confident only to switch to the opposite outcome at the very end.


Are you sure you're not the one seeing the survivorship bias? Something that is 10% likely to happen ends up switching to the opposite outcome at the very end 1/10 times. There are thousands of prediction markets up at any given time, so there are going to be plenty of examples of unlikely events happening.

But there is plenty of research on how well-calibrated they are. For example, https://polymarket.com/accuracy
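The base-rate point above can be sketched with a toy simulation (my own illustration, not Polymarket data): even if a market is perfectly calibrated, an outcome priced at 90% still "switches at the very end" about one time in ten, so across thousands of markets you will inevitably see plenty of late upsets.

```python
import random

random.seed(0)

# Simulate 10,000 independent, perfectly calibrated markets where
# the favored outcome is priced at 90%. Count how often the
# favorite still loses ("switches to the opposite outcome").
n = 10_000
upsets = sum(random.random() < 0.10 for _ in range(n))

print(f"upset rate: {upsets / n:.3f}")  # close to 0.10
```

The upshot: observing some confident trend lines flip is exactly what good calibration predicts at scale, so anecdotes of late reversals can't distinguish a well-calibrated market from a poorly calibrated one.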


Prediction markets, like many other micro-financialization trends, are unhealthy for society. I'm not going to trust research from the very company selling the product. History provides ample examples of how that works without the need to gamble on it.

I would invite you to look into the statistics on foreclosures, bankruptcy, and gambling hotline traffic which compare jurisdictions that have allowed this stuff vs not. Those with demographic breakdowns help to show those most at risk.


I agree about Banksy. But in this case Satoshi controls a huge amount of bitcoin. If, whoever they are, they did something with it, it would absolutely move markets.

I can see that, but would that not also apply to other people who hold large amounts? And to play devil's advocate for a moment, isn't one of the points of a decentralized currency the inability to be tracked?

> would remain silent while it became a $2 trillion phenomenon

I can see how it might be preferable. Satoshi has an incredible amount of wealth in a form that’s very easy to transfer anonymously. Anyone that admits to being him will be a huge target.


I feel like the author is overly cynical here. You get so many updates because there's so much information available about your delivery, and I for one appreciate having it! I wish there were a standardized format so my e-mail client could just roll it all up into one status box, but it's hardly the end of the world.

Agreed. It’s cited so often on Reddit by people who want to establish their superiority over the masses. “It’s a documentary!!” is a meme unto itself.

It’s also got a kind of weird eugenics-y vibe to it (like establishing “stupid people breeding makes stupid people” as incontrovertible fact) when you step back and examine it as a movie that’s making Serious Statements. But it isn’t. It’s not a bad movie. But it’s a comedy, and the satirical elements are heavily exaggerated by fans.


It's kind of funny when you say the movie isn't making serious statements when the highest of our publicly elected officials isn't a serious person. We elect people that are actively harmful to our well being. These people say things so incredibly stupid it can be painful. And then you wonder why people look at the movie like it's a documentary?

> We elect people that are actively harmful to our well being.

People choose policies that will actively harm themselves and their family/friends:

* https://en.wikipedia.org/wiki/Dying_of_Whiteness


He might not present as a serious person but he is. The nativist impulses, the gutter racism, the “F you I’ve got mine” attitude, the party establishment that enabled him despite all that… these are all serious things worth serious analysis.

“Stupid people vote for stupid guy” is exactly the kind of analysis I’m critical of Idiocracy for.


I think you may misunderstand what the term "not a serious person" means. Just because someone is an ego driven performer doesn't mean their actions don't have consequences, it means you've fucked up if you follow them and take them for face value.

There has been a ton of analysis for why said stupid people vote for stupid people, but very little of it can prevent said behaviors.


> “Stupid people vote for stupid guy” is exactly the kind of analysis I’m critical of Idiocracy for.

Critical of what exactly?


Just to be clear, the smartest person is still a minister in Idiocracy, and the whole premise hinges on the idea that the elite still recognizes intelligence as something desirable.

Trump voters identify with the idiots.

It's not a eugenics-y vibe. The inciting incident is dysgenics, and the in-narrative apocalypse would have been prevented by eugenics.

It doesn't preclude the movie from being enjoyed or appreciated. The movie also came out at a time when test scores, literacy rates, and whatnot were all _increasing_, so that was the more salient lens to criticize it by.

That trend has reversed now, though. I don't agree with the dysgenic narrative, but I have often found myself thinking, "Gotta hand it to the movie Idiocracy, it's feeling familiar".

For all its flaws, I was a child at the time saturated in post-Y2K optimism that tomorrow would always be better than the day before. It was one of the first things that made me seriously consider, "What if humanity is not on a linear path of improvement"?


This thread is a sort of extension to that, eh? Hacker News knowing the truth of a matter while looking down its nose at Reddit.

It’s a “I’m not like the other ‘not like the other’” virtue game.

Given the number of people in this thread saying “it’s a documentary” I don’t think there’s a significant difference. And there’s also plenty of criticism of Idiocracy on Reddit too.

Right, because you see the situation so much better than them ;)

No no no, see I’m at the very top of the hill. There’s no hill above me. No definitely not…

Oh gee, here he comes. Another person prepared to admit that what he knows is that he knows nothing.


Was thinking the same thing

I never understood that eugenics criticism of the movie. They make zero references to genetics in that opening sequence, and the nurture side of that argument is readily trotted out as a truism even here on HN: "people from affluent parents have easier access to education".

The introduction describes it as a "turning point in human evolution", and that "natural selection ... began to favor different traits". These are some of the very first sentences of the movie.

The thesis is given: "Evolution does not necessarily reward intelligence. With no natural predators to thin the herd, it began to simply reward those who reproduced the most, and left the intelligent to become an endangered species". The characters dramatizing the inciting incident in the introduction are introduced with their IQs. It's very explicitly a dysgenic apocalypse narrative, which could have been avoided with earlier eugenicist intervention. (They attempt "genetic engineering" later on, but they fail, as the unintelligent are able to win by sheer numbers.)

It's okay to like the movie, and it is fiction. But it's certainly a dysgenic narrative which has eugenicist implications.


That's not a eugenics argument, that's merely an evolutionary argument (identifying a change in selection pressure). The eugenics argument would first have to make the case that the people are stupid/intelligent because of their genetic lineage rather than their upbringing.

To repeat, in narrative, they attempt genetic engineering to fix the declining intelligence.

On top of that, it is explicitly a dysgenics narrative, which comes with an implicit eugenics argument unless it's explicitly addressed.

I'm not trying to argue you can't like the movie (it is fiction, after all), but the eugenics argument is right there in the text.


This is one of those threads that's making me feel like I'm taking crazy pills. Like, I don't think enjoying Idiocracy makes someone a bad person or anything like that, but it's pretty clearly making a eugenics argument without any mitigating counter-hypothesis.

It's particularly amusing because there are people quoting Neal Stephenson in this thread, ignoring the fact that when Stephenson tackles similar subject matter, he's very careful to make it clear that he's talking more about the cultural axioms which have a long-term effect on how people value learning and intellectualism. It's not even subtext, I've been reading The Diamond Age recently and very early on there's a line where a character clearly states that there's no coherent genetic theory of human intelligence, and the entire thesis of the book runs counter to that notion that intelligence is primarily genetic.


"Stupid people raise stupid people" is probably a better way to put it.

But even then, that's entirely too simplistic as well.


> It’s not a bad movie.

I hadn't seen it since it came out, but had that kind of general movie recollection that it was as funny as it was prescient. Watched it again with my wife, who had not seen it before: it's not funny. Maybe I'm getting too old.

(I do still laugh at the "Ow! My balls!")


> like establishing “stupid people breeding makes stupid people” as incontrovertible fact

That’s based on environment and not on genes. You might not be born “stupid”, but if you’re surrounded by retards (like in the movie), chances are you won’t be splitting atoms.

