Ugh, this entire way of treating AI like a magical alien invasion is the problem. It's just a statistical model, text in, text out (and it's humans that feed the input and act on the output). It's not some alien invasion that can't be stopped; it's just another technology that we as humans need to figure out how we want to use. Seriously, people need to stop anthropomorphizing AI, because doing so is one of the biggest hurdles to practical, common-sense AI adoption IMO.
It is definitely not "just" a statistical model. It is inextricably linked to the datasets it is trained on. Datasets that these companies possess, but that ordinary people do not. That is one half of where they get their power (the training techniques being the other, but those tend to bubble out to the general public, or at least the interested public).
Again, all the things you listed are just humans acting like humans, not aliens. Thinking these things don't fall within your own human nature is rather arrogant, don't you think?
That is a choice a human made. Imagine if someone proposed sending the outputs of a random number generator to a space laser and letting it fire at will; would we blame the number generator for the destruction it causes? You may say that LLMs are not random number generators, and I would somewhat agree, but given their current state and our level of understanding of how they derive their output, they might as well be.
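To make the analogy concrete, here's a minimal, purely illustrative sketch (every name here is hypothetical, not any real system) of what "wiring a random number generator to a weapon" means. The point is that the generator carries no intent; the consequential part is the wiring, which a human chose to build:

```python
import random

def fire_decision(threshold: float = 0.5) -> bool:
    """A purely random 'decision' -- no intent, no understanding, just a number."""
    return random.random() < threshold

def space_laser(fire: bool) -> str:
    """Hypothetical actuator: the danger comes from this wiring, not the RNG."""
    return "FIRE" if fire else "hold"

# The morally relevant choice is connecting these two functions at all.
random.seed(42)
print(space_laser(fire_decision()))  # prints "hold" (seeded draw is ~0.639, above 0.5)
```

Swap `fire_decision` for any opaque model and the structure of the argument is unchanged: responsibility sits with whoever built the connection, not with the component producing the output.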
So, imagine that some humans make this choice and then AI autonomously takes over and humans can't stop it anymore. Is that enough to treat AI in such a situation as a magical alien something that can threaten your or my survival?
One thing the whole AI debate has shown me is how many people completely lack any sort of imagination.
My point is that wild imaginations about the current state of LLMs are the problem. We wouldn't even consider connecting a random number generator or a statistical model to a weapons system, but if we start thinking of it as an intelligence, some would actually be tempted to do so.
I'm sorry, but do you realize it's 2026, not the 1980s anymore? Whatever you call intelligence, if LLMs don't pass your "intelligence test", there are a lot of people who won't pass it either.
And I'm pretty sure there are plenty of countries that would make soldiers out of those people and give them weapons.
The definition of intelligence hasn't changed since the 1980s. Most would say that true intelligence requires intentionality, which is not something LLMs are capable of. Defining intelligence can turn into a fairly deep philosophical debate (which I have no interest in having).