Hacker News | new | past | comments | ask | show | jobs | submit | zkmon's comments | login

People tried creating alternatives to MS Office over the decades. They didn't succeed, and not because of a technical moat. Some companies, like Google, reworked the internet search problem. They succeeded because of a technical moat (page ranking), and later they quickly moved on to the more common moats: user base, supply-chain hijacking (ads), and branding. Social media firms hardly have any technical moat. They rely 100% on locking in their user base.

Technical work lets you enter the field, but survival depends on squatting in the supply-chain routes or locking your customers away in your dungeons.


Social media's moat is network effect

Congrats! You just rediscovered something called the waterfall model.

Waterfall was bad due to the excessively long feedback loops (months-to-years from "planning" to "customer gets to see it/ we receive feedback on it"). It was NOT bad because it forced people to think before writing code! That part we should recover, it's not problematic at all.

If people actually read the original 1970 paper by Royce, they would see that it describes an iterative process with short feedback loops.

The bad rep comes from defense/government contracting, where PRDs were tied to money and change requests (CRs) were expensive; see http://www.bawiki.com/wiki/Waterfall.html for better details.


When you do most of the thinking before you start implementing the whole thing, and you believe that's enough, you've missed the unknown unknowns, which were a big talking point in the mid-2000s, back when the anti-waterfall discourse got going (and for good reason).

But I expect the AI zealots to start (re-)integrating Extreme Programming (later rebranded as Agile) back into their workflows, somehow.


That's not what's considered waterfall, though. Specs are always required for any work, even if they're only in your head, even if the work takes 15 minutes. It's the length of the feedback loop and the resistance to spec change that makes waterfall, and by his use of tracer bullets I very much doubt it's the case here, if there was any doubt at all to have.

Did you know that agile is just waterfall scaled down to two weeks? Now you know!

No /s here so just in case this is a serious point:

Agile is a set of four principles for software development.

Scrum is the two-week development window thing, but Scrum doesn't mandate a two week _release_ window, it mandates a two week cadence of planning and progress review with a focus on doing small chunks of achievable work rather than mega-projects.

Scrum generally prefers lots of one-to-three-day tasks; I've yet to see Scrum training that doesn't warn against repeatedly picking up two-week jobs. If that's been your experience, you should review how you can break work down further to get to "done" on pieces of it faster.


All good points here (and yeah I didn't add /s, hopefully "now you know!" was sufficiently obvious over-the-top).

All that said, in most orgs I've worked with, they were following agile processes over agile principles - effectively a waterfall with a scrum-master and dailies.

This is not to diss the idea of agile, just an observation that most good ideas, once through the business process MBA grinder, end up feeling quite different.


> All that said, in most orgs I've worked with, they were following agile processes over agile principles - effectively a waterfall with a scrum-master and dailies.

In my experience, they're all waterfall in scrum skin, except they also lose the one thing that was a strength of the old-school method: building up a large, well thought out, thoroughly checked spec up front.

So in the end, "business process MBA grinder" reshapes any idea to adapt to leadership needs - and so here, Agile became all about the things that make software people predictable cogs in the larger corporate planning machine. They got what they need anyway, but we threw away the bits that were useful to us.


> Agile is a set of four principles

Twelve :-) Twelve principles and four values


:) I keep saying it - the AI will cost us all dearly, but not in the ways the AI boosters are saying it will....

Technology for greed vs technology for need. Greed has its cost.

You never needed a godzilla or a megatron to get on with your life. But the sellers of those monsters would make every attempt, in connivance with the authorities, to make it a basic necessity to use their services. That's a survival strategy for the monsters. The owners can't keep the monsters in cages for too long, even if the owner is a state actor.

Those buttons at the top link to different domains altogether, but present the same page. So it is one page with multiple domains, instead of one domain with multiple pages.

How do you connect back from VPS to local LLM?

History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. Colonial empires proved it only a few centuries back. The invading alien powers are fuelled by the inviting natives.

AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, can communicate across huge distances without time lapse, can do huge amounts of work without time lapse, has no physical mass of its own, no respect for time, distance, mass, or thinking work, and is not a living thing but can think... just the perfect alien-creature qualities.

Why are they allowed to invade Earth? Business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, channels that can be bribed with toys (a business edge) in return for the keys to dominance over the human race.


Ugh, this entire way of treating AI like a magical alien invasion is the problem. It's just a statistical model, text in, text out (and it's humans that feed the input and act on the output). It's not some alien invasion that can't be stopped; it's just another technology that we as humans need to figure out how we want to use. Seriously, people need to stop trying to anthropomorphize AI, because doing so is one of the biggest hurdles to practical, common-sense AI adoption IMO.

It is definitely not "just" a statistical model. It is inextricably linked to the datasets it is trained on. Datasets that these companies possess, but that ordinary people do not. That is one half of where they get their power (the training techniques being the other, but those tend to bubble out to the general public, or at least the interested public).

How they were created doesn't change what they are, or how humans choose to use them.



Again, all the things you listed are just humans acting like humans, not aliens. Thinking these things don't fall within your own human nature is rather arrogant, don't you think?

> It's just a statistical model, text in, text out (and it's humans that feed the input and act on the output).

You're not thinking long-term. What happens when AI is put in charge of systems that interact with the physical world?


That is a choice a human made. Imagine if someone proposed sending the outputs of a random number generator to a space laser and had it fire at will, would we blame the number generator for the destruction it causes? You may say that LLMs are not random number generators, and I would somewhat agree, but at least in their current state and level of understanding we have about how they derive their output they might as well be.

So, imagine that some humans make this choice and then AI autonomously takes over and humans can't stop it anymore. Is that enough to treat AI in such a situation as a magical alien something that can threaten your or my survival?

One thing that the whole AI debate has shown me is how many people completely lack any sort of imagination.


My point is that wild imagination about the current state of LLMs is the problem. We wouldn't even consider connecting a random number generator or a statistical model to a weapons system, but if we start thinking of it as an intelligence, some actually would be tempted to do so.

I'm sorry, but do you realize it's 2026, not the 1980s anymore? Whatever you call intelligence, if LLMs don't pass your "intelligence test", there are a lot of people who won't pass it either.

And I'm pretty sure there are plenty of countries that would make soldiers out of those people and give them weapons.


The definition of intelligence hasn't changed since the 1980s. Most would say that true intelligence requires intentionality, which is not something LLMs are capable of; defining intelligence can turn into a fairly deep philosophical debate (which I have no interest in having).

>The invading alien powers are fuelled by the inviting natives.

And the massive amounts of people (software engineers, lawyers, doctors, etc) currently being paid as contractors to help train the next AI models. They're essentially the inviting natives who are being paid in trifles to tell them the secret ways of the natives farther inland. Sucking out all of the tribal knowledge of the industry like a vacuum.


> internal competition and in-fighting of the natives.

What about diseases which killed up to 95% of the population? I think you are basically correct, except for the historical analogy.


The initial Spanish conquest of the Inca empire by just 168 Spaniards was not a question of disease as much as a war of succession the Incas fought amongst themselves, which Pizarro knew to exploit. Throw in horses, steel, and gunpowder and you have a one-sided affair.

Actually this is another good counterexample! As I recall, Incas lost battles against the Spaniards where they had something like 100x the numbers. It's true that they were initially divided, but they quickly united against the Spanish--and it didn't really help. The technological advantage was insurmountable.

How could it have been? It wasn't like they had machine guns. In the best case, I believe it takes something like a full minute to reload a musket. A Zerg rush would be a sufficient tactic: a 100-yard dash means your horde of unarmed natives is through the musket range in maybe 10-15 seconds and pulling limbs off the Spaniards already.

Why this wasn't done is, I think, the big mystery, and it lends credence to the idea of the Spaniards having significant force numbers through allies.


Don't forget horses, armor, and steel weapons. It seems like Incan weapons had a lot of trouble penetrating Spanish armor, while the reverse was not true. Also, the Incas didn't just lack cavalry; they lacked the weapons and tactics to counter cavalry (such as pike formations.)

That said, I was thinking of the Battle of Cajamarca, which was actually a Spanish ambush. 100x was probably overstating it; under other circumstances (e.g. rough terrain) Spanish technology had less of an edge.


You don't have to penetrate the armor with such a manpower advantage. Just throw four or ten people on each Spaniard and rip them limb from limb. No need to penetrate any armor; they could just take it off or stab sticks in between the plates. The conquistadors were not fully armored either.


Turns out I misremembered. The Incas never fully united, and even though the Spaniards had a huge technological advantage in some battles, the war as a whole was more evenly matched. Technology, disease, and infighting ALL played a part in the Spanish victory.

> The technological advantage was insurmountable

How's that playing out in the Middle East in 2026?


Pizarro might have been illiterate but I did not get the impression that he was a moron. That last bit is a crucial ingredient.

This is not true of everywhere that was colonized; see Africa, or India. It would not have been possible, even with a very great tech advantage, to sustain military campaigns so far from Europe without a safe port to base supplies from, not to mention the manpower. These conquests were very much made possible by what was essentially a standard playbook: allying with some natives against others, and using trade imbalance, violence, strong-arming, and other means to turn those "allies" into protectorates, and eventually colonies.

Right. I am not saying diseases were a factor in every conquest. Just refuting parent saying that conquest is "only possible" through infighting. It's not - overwhelming technological advantage or disease are also sufficient even against a united culture.

Yeah. Basically conquest is possible when the victim is weakened. There are many ways to become weakened. Infighting and disease are common causes of weakening.

Wait, you think AI won’t eventually have full control over a bio lab, where it can manipulate an unsuspecting tech to produce and release a bioweapon to accomplish that explicit goal?

Because I think that seems virtually inevitable at this point.


Humans will give a slop machine control of a lab full of CRISPR machines because they think it might make them a dollar? It wouldn’t take Supreme Super Intelligence for that to go badly.

They don’t have to hand over control to lose control to AI. People are easily manipulated, and AI has proven itself able to manipulate people. How long until a tech is tricked or coerced into doing something dumb on a planet scale, based on intentional misinformation given by its apparently benevolent AI assistant?

> benevolent AI assistant?

"Volent" is the problem there. Whose fault is it that someone was tricked by a bot?


>History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives.

Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it:

  Whatever happens, we have got
  The Maxim gun, and they have not.

The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de-facto king of the world. Everybody else is livestock.

I used to think this, but the AI labs sure seem neck-and-neck in the model race. Doesn't appear that anyone is developing an enormous lead. So I've become skeptical of the runaway king-of-the-world-maker model scenario.

The open models seeming to be ~6 months behind is very encouraging, too.


AI progress can potentially be extremely non-linear because of feedback effects. The first to build an AI smart enough to accelerate building even smarter AIs wins (or loses along with everybody else if it's more successful than they expected).

People have said this, but so far if anything the opposite has been empirically true. OpenAI had a huge lead and it just didn't matter, Anthropic and Google both caught them and now they're neck and neck. It seems like compute overhang forecloses the possibility of runaway progress which eliminates all your competitors.

Any feedback process has a hard threshold for instability. The PA system doesn't howl until the microphone is close enough to the loudspeaker. The atomic bomb doesn't explode until the fissile material reaches critical mass. If you don't know where the threshold is you can't extrapolate.

Compute is a limiting factor now, but there have already been huge improvements in compute efficiency, e.g. mixture of experts. It seems extraordinarily unlikely that there are no more to be found. And compute capacity continues to increase too.


This would imply that evolution, which is also an arms race that disrupts and obsoletes the status quo, is due to some “weakness”.

AI doesn’t actually come from the outside.

The fact that its economics have strong winner-take-most aspects doesn't mean you can eliminate the current winners and end up anywhere different, because it's actually a natural, decentralized progression of improving efficiency.

So that framing makes no sense.

However, the thesis for the potential for violence is sound. I don’t see a way out of that, given unending disruption, with no coordinated responsible response.

I do not think this essay is hype.

This moment requires great leadership and competence, but that is not what is getting elected.

The last two decades of patience with massive businesses scaling up profitable conflicts of interest, and centralizing gatekeeper and dependency powers that offer no recourse to any individuals they mistreat, strongly suggest we are incapable of dealing with AI fallout. Which will only accelerate and add to those trends.


It reads like someone discovered analogies and decided they’re a substitute for thinking.

The entire argument lives and dies on one move: calling AI an “alien.” And it’s not even consistent. It starts with “alien” as in foreign invader, then quietly upgrades it to “space alien,” and from that point on everything just inherits whatever sci fi trait sounds dramatic. That’s not reasoning, that’s a word doing a costume change and dragging the argument along with it.

And honestly, the quality of comments on HN feels like it’s been tracking the broader decline in cognitive performance. The long running Flynn Effect has stalled or reversed in parts of the US. Some datasets show small but real drops in IQ related measures over the past decade. You read threads like this and it’s hard not to feel like you’re watching that play out in real time.


I get the point of your metaphor, but it's missing the forest for the trees.

> Ai has established itself through the weak channels that are filled with greed,

That explains the prolific AI use at incompetent agencies like the DoJ, DOGE, and others under the current administration.


Unless the agent code is open-sourced, there is hardly any transparency into how the agent is spending your tokens or how it calculates them. It's like asking your lawyer why they charged some amount.

Lawyers can give you a breakdown by the minute in some cases. A better example can be military contracting.

You can insert a proxy in between and look at precisely what it is sending if you’re so inclined

CC accepts http endpoints so doesn’t require anything too complicated
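For instance, here is a minimal sketch of the proxy approach using mitmproxy. This assumes mitmproxy is installed and that your client (Claude Code here) honors the standard `HTTPS_PROXY` variable and Node's `NODE_EXTRA_CA_CERTS` for trusting the interception certificate; the exact variables may differ for other agents.

```shell
# Start mitmproxy's web UI (listens on 8080 by default,
# UI on http://127.0.0.1:8081). Assumes mitmproxy is installed.
mitmweb --listen-port 8080 &

# Route the agent's traffic through the proxy and trust
# mitmproxy's CA cert so TLS interception works.
# The cert path is mitmproxy's default; adjust if yours differs.
export HTTPS_PROXY=http://127.0.0.1:8080
export NODE_EXTRA_CA_CERTS="$HOME/.mitmproxy/mitmproxy-ca-cert.pem"

# Launch the agent; every API request and response (prompts,
# tool calls, and the usage/token fields) now shows up in mitmweb.
claude
```

From the captured responses you can tally the reported input/output token counts yourself rather than trusting the agent's own accounting.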


Was there any write-up on the actual goals and accomplishments of this mission? I'm sure there is some very valuable scientific data and there were observations made, but what exactly are those (other than 'wow' media)?

Getting the public excited also has its place. Also, sending humans so far from Earth has enough challenges to make it worthwhile. Wikipedia (https://en.wikipedia.org/wiki/Artemis_II) has the experiments listed.

> sending humans so far ...

I see that they travelled 1.6% more distance compared to a tech that was used 56 years ago. If NASA is really excited about this, I think they are having a slow news day. I'm sure there must be other goals.


In general, I never allow Claude to manage the complexity. Claude is a fantastic coder, but very bad at higher-level work. I engage Gemini or Qwen top models for anything that happens before getting to write code. Claude gets a very strict and elaborate requirement and design spec that it needs to execute without any variation.

Claude can get too creative and bloat its way through non-coding tasks, as these tasks cannot be "sandboxed" with full specs the way coding tasks can.

