I wasn't going to comment, but this is just too dumb on too many levels.
The hotline is not the way to deal with suicidality. Suicidality is a longer process, something you can ask your GP about, and most of that help is covered under most Western versions of universal health care.
The hotline is an idea that intervenes in the last steps of a suicide process. The idea can reach into the moment where people have convinced themselves they're stuck, and it lets them reach out with extremely low effort and a minimal barrier to entry.
If you have some better 'idea' we can spread into the culture that does this better, then by all means enlighten us.
---
You could have made a case and started a discussion about how too many people see the existence of the hotline as _the_ way to deal with suicidality, but you didn't. You just decided to spread some shallow vibe nonsense.
In the vein of redneck network engineering: I was lacking a WiFi dongle for a server, then I realized I could plug its Ethernet directly into a Mac mini and set up IP forwarding, making it a strong contender for the most expensive dongle I'll ever use.
No - it indicates gross incompetence of the people in charge of the war and its communication.
Any media outlet worth its salt would spend a brief moment explaining why invading Kharg would mean mass US casualties, while there are no critical objectives achievable only by seizing Kharg.
If one of your media sources just echoed "reports of the US looking to seize Kharg island" without that context, it was wasting your time to grab your attention.
So this is a tangent on a thought I had after reading the title, but it might be a cool idea that I'll not have the time to do anything with, so feel free to use it:
Human-checkable fingerprints for pubkeys/hashes don't really work. None of the schemes I've seen hold up against somebody willing to spend compute to get a near-enough collision to fool most people most of the time.
But we could take those random bits, transform them, feed them into a seeded image-generation model, and then have a person remember/compare the deterministic output image.
You might even make the case it's the perfect machine for creating a human-memorable image artifact from random data.
There are a bunch of methods for transforming a hash into something that's easier to compare. You've probably already seen the RandomArt thing that openssh uses for comparing host keys on first use. Some apps produce a sequence of emoji for fingerprint comparison. It's a small but fertile little research niche.
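To make the emoji-style idea concrete, here is a minimal sketch (the 256-symbol alphabet is a placeholder I made up; real schemes curate symbols that are hard to confuse visually):

    import hashlib

    # Hypothetical symbol alphabet: 256 consecutive emoji code points.
    # Real schemes pick visually distinct, easily named symbols instead.
    EMOJI = [chr(0x1F400 + i) for i in range(256)]

    def emoji_fingerprint(pubkey: bytes, symbols: int = 8) -> str:
        """Hash the key and map each of the first few digest bytes to one symbol."""
        digest = hashlib.sha256(pubkey).digest()
        return " ".join(EMOJI[b] for b in digest[:symbols])

Note that eight symbols only encode 64 bits of the digest, which is exactly where the parent comment's "near-enough collision" objection bites: the shorter you truncate for human convenience, the cheaper it gets to grind out a key whose fingerprint looks close enough.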
I can't offhand think of anything that an LLM image generator would do to improve the process; it'd be an interesting research task. You'd need a way to transform the 256-bit hash into LLM input in a way that would maximize the perceptual difference in generated images. The problem is that it's absolutely critical that two different implementations work the same, which means the spec would need to specify the exact set of model weights to use.
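To illustrate the derivation problem, here is a purely hypothetical sketch of turning a key hash into deterministic model inputs; none of these names are an existing API, and a real spec would additionally have to pin the exact weights, sampler, and prompt template:

    import hashlib

    # Hypothetical word tables; a real scheme would want far more entropy here.
    ANIMALS = ["fox", "owl", "crab", "whale", "lynx", "heron", "mole", "toad"]
    COLORS = ["crimson", "teal", "ochre", "violet", "slate", "lime", "navy", "amber"]
    SCENES = ["desert", "harbor", "forest", "glacier", "rooftop", "canyon", "reef", "tundra"]

    def fingerprint_image_inputs(pubkey: bytes) -> tuple[str, int]:
        """Derive a deterministic (prompt, seed) pair from a key's hash."""
        digest = hashlib.sha256(pubkey).digest()
        seed = int.from_bytes(digest[:8], "big")  # deterministic 64-bit sampler seed
        prompt = (
            f"a {COLORS[digest[8] % 8]} {ANIMALS[digest[9] % 8]} "
            f"in a {SCENES[digest[10] % 8]}"
        )
        return prompt, seed  # both go into the pinned, fixed-weights generator

Even in this toy version the hard part is visible: only the prompt words are guaranteed to produce obviously different images, while two different seeds with the same prompt may render images a human can't reliably tell apart.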
> Day 1, 14:47 UTC — Among the exfiltrated credentials: the maintainer of vulpine-lz4, a Rust library for “blazingly fast Firefox-themed LZ4 decompression.” The library’s logo is a cartoon fox with sunglasses. It has 12 stars on GitHub but is a transitive dependency of cargo itself.
I got a bit curious, and here is an incomplete list of crates that are part of the cargo build and already have a build.rs, so compromising one wouldn't stand out too much:
flate2
tar
curl-sys
libgit2-sys
openssl-sys
libsqlite3-sys
blake3
libz-sys
zstd-sys
cc
As a nice bonus - if you get rights for xz2 you can compromise rustup.
-sys crates are just bindings and doing something else in them is highly suspect. The rest I recognize as being owned by a Rust maintainer like alexcrichton or rustlang itself.
With crates.io using GH as its IdP, I think there would be much farther reaching consequences to account pwning in that scenario. I agree, though, that the security model for crates.io is only as strong as the weakest link there, and would pray someone like Alex is using physical tokens or the like for his MFA and can't be conned by a well-crafted email.
sys crates are also mostly generated and lack a lot of eyeballs. Sneaking something into the build.rs of a sys crate would not be difficult and would land in the builds of everything downstream of it.
I had pondered the same thing about other package ecosystems in the past, in general. Now with the benefit of hindsight we can comfortably say that the absence of known (!) attacks doesn't really say anything about how relatively difficult an attack would be. Are -sys crates, or build script attacks, particularly potent? Who knows. When I did a cursory search, the only attempts I saw were at runtime rather than build time[1]. Which raises a good point; pwning a developer machine or CI box with a build script may be quite valuable, but if you might get that and prod with a runtime exploit, is the build time exploit that much more valuable? Guess it depends! (Of course, I personally think having at least optional build time sandboxing is even better than hoping it won't be valuable to attack.)
Of course, crates.io has surely had some malicious packages. (I'd assume it isn't all that unlikely there could be some undiscovered right now; it's definitely large enough for something like that to slip under the radar, even if it is relatively small compared to say, NPM.) But, I think it really hasn't had its XZ backdoor moment, its left-pad, where you really get to see how well it does or doesn't handle a serious challenge. Since I have actually not published on crates.io, I'm not really sure how the security posture is, but if it's more similar to other programming language repositories than it is to Linux repos, I dunno exactly why it would be hard to believe a high-level compromise is possible and could slip in (really, anywhere, be it a build script or otherwise.). Of course, "would not be difficult" is all relative. I'm sure many of these attacks are not really all that simple, but a lot of them aren't exactly groundbreaking either. It was well executed and took quite a lot of time, sure, but there wasn't all that much about the XZ backdoor that was novel. (Except maybe the slyness with which the payload was hidden in test files. That was pretty cool.)
You can defend a claim that literally anything is a supply-chain risk with this logic. Don't use vim to edit your config files, because you don't have any way to know that someone couldn't slip a "reflections on trusting trust" compiler attack into clang so that your macOS binary distributed by homebrew detects when you're editing an npm.json and exfiltrates your ssh keys so that they can push rogue builds!
Yes. Your comment reads to me as a defense of the comment above the one you replied to:
>>> sys crates are also mostly generated and lack a lot of eyeballs. Sneaking something into the build.rs of a sys crate would not be difficult and would land in the builds of everything downstream of it.
>> Surely that's why we see evidence of all these build script attacks, since it's so easy?
> Now with the benefit of hindsight we can comfortably say that the absence of known (!) attacks doesn't really say anything about how relatively difficult an attack would be.
Given that you were responding to a critique of what seems to be pure conjecture with the argument that we don't know for sure that it's not feasible, it's not clear to me why that wouldn't be a sufficient defense of any claim of a potential vulnerability, without regard for merit. You don't propose any alternative other than ignoring the only hard evidence we have, so although you might not have intended it this way, it's not clear to me what discussion you could expect to happen with that standard of evidence other than throwing up our hands and saying everything is screwed.
> Given that you were responding to a critique of what seems to be fully conjecture with the argument that we don't know for sure that it's not feasible,
But wait, that isn't what's on the table. Inserting malware into a build script itself is absolutely feasible; you can demonstrate that part locally. Gaining sufficient access to a developer or CI machine that has credentials to publish such malware to package repositories is also, well, feasible. If you are using GitHub Actions, all it would take is literally any action you run in any CI script in a job leading up to publishing getting pwned, a task that is easy if you can compromise any one of them due to the lack of a real way to pin actions, and because of vectors like the ever-enduring pwn request.
The unknown quantity is how easy it is. That part is hard to say, and also inherently relative. For me pulling off such an attack would probably be hard. For the folks behind the XZ backdoor? Well, I do not wield a crystal ball, but it sure seems like what they accomplished for that was significantly more than what would've been needed to pwn the Rust ecosystem. They didn't even need temporary access to publish a crate, they flat-out took over an upstream the old fashioned way; found a weak link and worked their way up the chain with social engineering and good old hard work.
What exactly makes you think the feasibility of pwning a crate is somehow up for debate?
> it's not clear to me why that wouldn't be a sufficient defense of any claim of a potential vulnerability without regard for merit.
You have inverted my logic. What I said was that the absence of an attack is not evidence of difficulty (or infeasibility).
> You don't propose any alternative than ignoring the only hard evidence we have,
What you have isn't "hard evidence" at all. It's not evidence in the first place. It is the literal lack of evidence.
This shouldn't be comforting, because there is a period of time between when a package repository goes online and its first major supply chain attack attempt, and you never really know when that will come. Crates.io is currently vastly smaller and lower-traffic than NPM, for example, so if you are looking for a vast number of potentially easy targets it is the wrong place to look. That may not always be true, and to some attackers it may not matter much. It isn't just inherently safe.
> so although you might not have intended it this way, it's not clear to me what discussion you could expect to happen with that standard of evidence other than throwing up our hands and saying everything is screwed.
The main takeaway is that hope is not a strategy. Will crates.io get pwned like NPM does periodically now? Probably. Can we do anything about this? Sure, lots of things, although I'm not sure people will love them.
- You could abandon a unified namespace/central repository and mimic Go's approach. Doesn't prevent compromise, but it helps avoid issues like typosquatting.
- You could require secure attestation of the build process. Don't mistake this suggestion as me vouching for it, but Bazel has implemented this for BCR.
- You could... Have moderated, curated repositories, possibly with graduated release channels like Linux distros do. Again... Could, not should, but it absolutely helps enable more scrutiny and time for things to bake. It's feasible.
- You could enforce reproducible builds with provenance information. This would stop someone from publishing releases that don't match the actual source repo, a technique often used in these supply chain compromises.
As for what Crates.io should do, I don't know. All I know is it would be a huge mistake to just do nothing and assume it won't get pwned due to the absence of attacks. Security will never be perfect. Best we can do is acknowledge this and continue to iterate.
We do in fact see them a lot. Typically they target Python or Node because those ecosystems are much more popular than Rust. But build.rs provides exactly the same opportunities to attackers for Rust.
No, we don't. We see build system attacks, such as injecting malicious scripts into their CI and getting malicious code into the artifact for use at runtime. You don't see someone doing a drive-by PR to a `setup.py`.
Not a drive-by PR, but once a package is compromised it often does spread to its reverse-dependencies via mechanisms like setup.py at build time. There was a case like this with setup.py less than two months ago: https://www.stepsecurity.io/blog/forcememo-hundreds-of-githu...
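To make the mechanism concrete: a setup.py is ordinary Python that runs when the package is built or installed, so anything in it executes on every downstream machine that pulls the compromised version. A deliberately harmless sketch (hypothetical package name):

    # setup.py -- executed as plain Python during `pip install` / build.
    from setuptools import setup

    # Anything placed here runs at build time on the installing machine.
    # A real attack would hide a payload; this sketch only drops a marker file.
    with open("ran-at-build-time.txt", "w") as marker:
        marker.write("arbitrary code executed during install\n")

    setup(name="example-pkg", version="0.1.0", py_modules=[])

This is the same shape of opportunity a build.rs gives an attacker in the Rust ecosystem: code that runs with the developer's or CI runner's privileges before any of the "real" library code is ever called.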
Lots of npm supply chain attacks propagate at build time via post-install hooks, too.
I actually think there is a second level to this. Yes HTML will get you most anywhere, but I found that letting the LLM define its own language is also unreasonably effective.
Currently working on a dumb little mobile game with isometric view and sound:
- told Codex to write a tool that lets it place blocks in a prepared three.js document and have Chromium dev tools take a screenshot. It made up a little JSON structure that defines blocks/colors and some other effects, and it outputs 2.5D tilesets.
- told it to create a uv Python script that would let it define sounds and music, and it made a YAML format that lets it create noises.
We completely shot past the SVG pelican test. Codex has created both perfectly adequate prototype art of soldiers/knights/priests and a prototype soundtrack.
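For a sense of what such a model-invented format can look like, here is a purely hypothetical sketch in the same spirit (the actual JSON structure Codex produced isn't shown above), written as the Python that would emit it for the prepared three.js page to render:

    import json

    # Hypothetical block format: one cube per entry, placed on an isometric grid.
    scene = {
        "grid": {"width": 8, "depth": 8},
        "camera": {"projection": "isometric", "zoom": 1.5},
        "blocks": [
            {"x": 0, "y": 0, "z": 0, "color": "#8b5a2b", "effect": "none"},
            {"x": 1, "y": 0, "z": 0, "color": "#3a7d44", "effect": "glow"},
            {"x": 1, "y": 1, "z": 0, "color": "#c0c0c0", "effect": "outline"},
        ],
    }

    # The renderer side reads this file, places one cube per entry,
    # and the screenshot tool captures the resulting 2.5D tile.
    with open("tileset.json", "w") as f:
        json.dump(scene, f, indent=2)

The point is less the specific schema than that the model converges on a format it can reliably read back and edit itself.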
They can map things like this. They are amazing translation layers. As long as it is a shape of problem or data they are trained on they can translate. The DSL they made up is shaped like some other data format they know for that latent space. It seems amazing, and it is, but it is also a core feature of how LLMs work. The problem is it works until it doesn’t. Fuzzy can only get you so far before it decoheres without rigor.
Consider whether your ID can contain a timestamp beside a random value. The answer is usually yes. UUIDv7 is fine.
If you've spent the time to really work through the whole problem and have written down a proof of how that leads to an unacceptable info leak: congratulations, your system is complex and slow enough that you might as well take a strong cryptographic hash, or UUIDv5 if you're lazy.
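As a minimal sketch of what "a timestamp beside a random value" means in UUIDv7 terms, here the layout is built by hand (if your language or library already ships a UUIDv7 generator, use that instead of rolling your own):

    import os
    import time
    import uuid

    def uuid7() -> uuid.UUID:
        """48-bit Unix-ms timestamp, then version/variant bits, then randomness."""
        ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
        rand_a = int.from_bytes(os.urandom(2), "big") & 0xFFF            # 12 bits
        rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 bits
        value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
        return uuid.UUID(int=value)

The timestamp prefix is what makes these IDs roughly sortable and index-friendly, while the 74 random bits keep them unguessable enough for most uses; the info leak to reason about is exactly that creation-time prefix.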
Every time I see a comment chain like this I'm annoyed. In the last 3 decades we never truly found the words to define what kind of skill-, problem-, and people-space exists in the industry, and AI has literally added a whole axis to that space, so we're more unable to communicate than ever.
Having said that, and feeling more with you than the other guy, there is nothing for you to "disagree" with.
Mediocre was always buggy and broken in some ways, but for all intents and purposes it was good enough. Today somebody with a year of study can reasonably deploy something where the appearance of taking ownership and shipping a full stack of features reaches the bar of good enough.
Consider 10 years ago: did you believe it was likely that, in the quality distribution of software, we would over time create proportionally more quality? I don't think so, and AI didn't meaningfully change that trend.
It changed the work dynamics, and is still changing them, and combined with our inability to communicate that is going to be an annoying mess.
Don't let the annoyances blind you to what LLMs can do for your point in space, or to where most of the points lie for the rest of the world.
The problem with AI isn't that it's mediocre, I can work with mediocre. The problem with AI is that it produces absolutely stellar world-class code with two hidden 0days in it.
I can't work with that sort of surprise. I'm tuned to consistency, and I can work with consistently bad, but not with "95% absolutely amazing, 5% abysmal".
And I say this as someone who develops exclusively with LLMs now.
I am using Claude Code with Opus 4.5 and I have to correct it every day. It produces working code but it makes mistakes. The code is more verbose than it should be, misunderstands/ignores edge cases, etc. Daily.
And I am not a stellar world-class programmer. I am pretty average. I just read what it produces.
With junior programmers I typically just look for high-level patterns that are commonly wrong. Sure, if they are touching our cross-thread communication code I need to spend a lot of time on that, because it is so complex nobody gets it right - but we only have a tiny amount of that, and most people look at it and run the opposite way (even me - I wrote it, but I still do my best not to touch it when I can avoid it - that is hard hard hard).
I think we should care that our engineers have put in the effort to understand the code they are responsible for producing. I don't care specifically about how they get that knowledge (I am using AI to learn myself, for example). But I disagree with the implicit assumption in the statement, which is, in my view, "humans don't need to understand the code any more" (because someone fresh out of university might think they understand, but they really don't).
I could argue all the ways my personal experience disagrees, but let's just apply Occam's razor:
Most people agree big orgs regularly have dysfunctional incentives. We've seen it happen a thousand times.
Your suggestion requires we also assume 10x faster delivery by people spending $200 vs $1,000 - something I've yet to witness or hear a credible account of.
So while that might be true in a small number of cases, in general it's foolish to go with the "10x delivery speed" hypothesis.