As a side effect of bailing out banks, they have a lower risk profile than entities that don’t get bailouts. It’s essentially an additional subsidy on top of the already generous ‘money creation’ they’re allowed to do. It’s impossible to compete against this. The riskier the bets, the stronger the subsidy, so of course they’ll crank risk up to 11. Since the fates of banks and pensions are tied, you can’t punish the banks without also punishing pensioners, and pensioners are a strong voting bloc. It’s a distorted economic system that can only either crash or become more distorted. Since becoming more distorted enriches the already wealthy, that’s the only option that will be taken. The only thing that can stop it is to run out of resources to spend trying to save it. Importing a large number of foreigners is a rather creative (desperate) solution. I don’t know how long this will last but I’m confident I will see a major economic calamity in my lifetime.

I have this problem with NixOS, as one of my build servers doesn’t have enough RAM. There doesn’t seem to be a way to know whether a compilation is likely to be RAM-heavy and either route it to a tagged server with more RAM or use fewer threads on servers with less RAM.
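
The closest workaround I know of is Nix’s system-features mechanism: advertise a feature like big-parallel only on the machine that has the RAM, and cap concurrency on the small one. A derivation that declares requiredSystemFeatures = [ "big-parallel" ] will only be scheduled on builders that advertise it; the catch is that most packages don’t declare it, so you still can’t tell in advance which builds will be RAM-heavy. A sketch (hostnames and limits below are made up):

    # On the coordinating machine: only dispatch "big-parallel" builds
    # to the builder that actually has the RAM for them.
    nix.buildMachines = [
      {
        hostName = "big-ram-builder";  # hypothetical 64 GB box
        system = "x86_64-linux";
        maxJobs = 4;
        supportedFeatures = [ "big-parallel" ];
      }
      {
        hostName = "small-builder";    # hypothetical 8 GB box
        system = "x86_64-linux";
        maxJobs = 1;                   # one build at a time
      }
    ];

    # And on the small machine, cap per-build parallelism in nix.conf:
    #   cores = 2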

This seems to be AI-written, or co-written; hard to tell, though. It seems AI is converging on a more terse, information-dense style that is closer to my own, which is good, but I do worry that it’ll make my writing look like an AI’s.

"AI style" is an artifact of certain writing styles being overrepresented in training data. I expect in the long run it will be impossible to distinguish AI writing reliably. https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...

I did predict as much; I figured it would occur as a side effect of improved information density. As the models got smarter they would have more useful points to make. On one hand it feels validating; on the other hand I am a bit worried that my written work will look like AI. While I would no longer consider it slop, I do worry about a loss of relative advantage.

I don’t agree, it seems pretty normal.

I also don’t really like policing writing style when there aren’t any glaring errors.


> This seems to be AI written, or co-written, hard to tell though.

I didn't get that at all. Calling out AI for the sake of it is the new virtue signal, unfortunately.


I don't know whether it's a sign of it being AI or not but I did find it a bit weird that within the first 3 sentences there were 2 different "less like X and more like Y" statements:

> the reason is that this is less like painting a wooden fence, which is easy, and more like changing the colour of a carbon-fibre Formula 1 part, which requires re-calculating the weight, strength and aerodynamics.

and

> this is less like making ice cubes and more like baking a complex soufflé where every degree of temperature and milligram of ingredients matters.

Not a problem, but it felt odd enough that I noticed it, so maybe that's what got them thinking it was AI written/assisted?


I consider it very well written, perhaps too well written; I think we are departing the AI slop era. I’m not decrying it as slop not worth reading, I just think it’s an interesting development.

Neat, I see AOT; will this be able to target WASM? I’m guessing there will be a mode that doesn’t use Reflection.Emit, since AOT doesn’t support that? I would check myself but I’m away from my computer.

It’s not just a donkey, it’s a donkey’s ass.

The ‘let me google that for you’ is set to be replaced with ‘let me ask ChatGPT for you’.

I generally agree that humans and LLMs benefit similarly from programming language features. I would tweak that a bit and suggest that their ability floor is higher than the human lowest common denominator, so I would skew towards the more advanced human programming languages. There are many typing/analyzer features that would be frustrating for humans to use, given that they make type checking slower. This is much less of a problem for LLMs: they’re very patient, and they’re much better at internalizing the type system, so they don’t need to trigger it anywhere near as often.

Post-type-check analyzers can work with more than just the type information; you can really do whatever you want at this stage. The normal, highly optimized type checker handles the bulk of the checking, and the post-type-check analyzers work on the residual. You wouldn’t type check a file that doesn’t parse, and you wouldn’t run the analyzers on code that doesn’t type check.

The problem is that these checks can be rather slow, and people don’t want to wait a long time for their type checking and analyzers to finish. But LLMs can both wait longer and, by internalizing the logic, reduce the number of times they need to trigger them.

Edit: I’ll need to examine this project to know where (or if) they draw the distinction between normal type checking and a post-type-check analyzer. If they blend the two and throw the whole thing into Z3, it’ll work, but it’ll be needlessly slow.

Edit: What I’m calling a post-type-check analyzer they’re calling a contract verifier, and it’s a distinct stage, with ‘check’ (type check) then ‘verify’ (Z3).
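
To make the two-stage idea concrete, here is a minimal sketch of what a ‘verify’ stage can look like, assuming the z3-solver Python package; the function and contract are hypothetical and not taken from this project:

    # Sketch: the cheap type checker has already passed this code; the
    # verifier now asks Z3 whether the contract can be violated.
    from z3 import Int, If, Not, Solver, unsat

    x = Int("x")

    # Body of a hypothetical `abs` function, encoded symbolically.
    result = If(x >= 0, x, -x)

    # Contract: ensures result >= 0. Search for a counterexample.
    s = Solver()
    s.add(Not(result >= 0))

    if s.check() == unsat:
        print("contract verified: no counterexample exists")
    else:
        print("contract violated, e.g.:", s.model())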


The same pre-war consensus also thought that war with Russia was unthinkable; it is Russia that focused on artillery tactics, so the two assumptions went hand in hand.

It’s my opinion that artillery is out of date, and by the end of the Ukraine war it will be even more out of date. It’s hard to make artillery more cost-effective than it already is, yet there are still many more opportunities to increase drone effectiveness.


Artillery is just one piece of the puzzle, and it has its place, with drone spotting. You can't jam a shell.

But once your artillery positions can't be protected from drones, it's game over for sure.


It would be better to think of it as ‘agreeableness’; agreeable people are more likely to shift their views to agree with those they are talking to.

I would call it obedience, and it's not the same as friendliness.

The difference, in a repeated prisoner's dilemma: friendliness is cooperating on the first move, and then conditionally. Obedience is always cooperating.
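
A minimal sketch of that distinction (the strategy names are mine, not standard terminology):

    # Repeated prisoner's dilemma: "C" = cooperate, "D" = defect.
    # `history` is the list of the opponent's past moves.

    def friendly(history):
        # Tit-for-tat: cooperate first, then mirror the opponent's last move.
        return "C" if not history else history[-1]

    def obedient(history):
        # Always cooperate, regardless of what the opponent did.
        return "C"

    # After a defection, the friendly player retaliates; the obedient one doesn't.
    print(friendly(["C", "D"]))  # -> "D"
    print(obedient(["C", "D"]))  # -> "C"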


Agreeableness is a Big Five personality trait, so a lot of the formal research into personality uses it as one of the dimensions.

Yeah but I would argue it's different from both friendliness and obedience.

Do you have a standard and a body of work you can point to, to aid in communicating these thoughts to others? At the very least there should be a reversible projection to the Big Five standard.

I don't think the Big Five applies to LLMs. They don't share people's morality or common sense, and the traits are predicated on that.

BTW: https://claude.ai/share/78a13035-0787-42a5-8643-398b26887e42


Lol, you convinced an LLM to agree with you. I use the Big Five as a way of communicating where there is a common reference and a large body of work. How people think they think and how they actually think are two different things; people are much closer to LLMs than they think they are. I can't provide evidence for this for a variety of reasons, so at this point we're just going to have to agree to disagree.

Actually, it's the other way around: I used an LLM to think about it independently, to check whether my intuition made sense.

I agree with its arguments (and I generally find LLMs argue better than I do; that's why I use them).

It's disappointing that you dismiss it without providing a counterargument.


I have privileged access to information that I cannot share; I would rather keep my access than win some argument online.

> and agreeable people are more likely to shift their views to agree with those they are talking to

Agreeable people are more likely to shift their expressed views to agree with those they are talking to.

If they're more likely to shift their views, we call them "gullible", not "agreeable".

But this is a distinction you can't apply to language models, which don't have views.


Agreeable people are also the most suggestible in that they are the most likely to actually change their views. These traits share the same axis.

My point is that LLMs are not humans, so projecting intuitions from human psychology onto LLMs is not helpful.

Your point was that humans don’t display such behavior, yet it has been extensively studied and they do. There is plenty of evidence that highly agreeable people will agree with you on incorrect ideas and conspiracy theories. The trait name ‘agreeableness’ is what you’ll need to search for to find that evidence.
