Fair, but from the user side it still hurts. Setting up an Ed25519 signing context used to be maybe ten lines. Now you're constructing OSSL_PARAM arrays, looking up providers by string name, and hoping you got the key type right because nothing checks at compile time.
Yeah. Some of the more complex EVP interfaces from before and around the time of the forks had design flaws, and with PQC that problem is only going to grow. Capturing the semantics of complex modes is difficult, and maybe that figured into the motivations. But OSSL_PARAMs on the frontend feel more like a punt than a solution: to maintain API compatibility you still end up with all the same cruft in both the library and the application, it's just more opaque and confusing figuring out which textual parameter names to use and not use, when to refactor, etc. You can't tag a string parameter key with __attribute__((deprecated)). With the module interface decoupled and a faster release cadence, exploring and iterating on more strongly typed and structured EVP interfaces should be easier, I would think. That's what the forks seem to do. There are incompatibilities across BoringSSL, LibreSSL, etc., but also cross-pollination and communication, and over time interfaces get refined and unified.
Lockfiles help more than people realize. If you're pinned and not auto-updating deps, a package getting sold and backdoored won't hit you until you actually update.
The scarier case is Dependabot opening a "patch bump" PR that probably gets merged because everyone ignores minor version bumps.
I mitigate this with a latest-minus-one policy or a minimum-age policy, depending on exactly which dependency we're talking about. Combined with explicit hash pins where possible instead of mutable version tags, it's saved me from a few close calls already, most notably last year's tj-actions breach.
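For GitHub Actions specifically, the hash-pin version of this looks like the snippet below. The action name and SHA here are made up for illustration; the point is the shape:

```yaml
steps:
  # Mutable tag: whoever controls the tag controls what runs in your CI.
  - uses: example-org/example-action@v4

  # Immutable pin: the full commit SHA, with the tag left as a comment
  # so humans (and bots) can still see what version it's meant to be.
  - uses: example-org/example-action@2f4a1b3c5d6e7f8091a2b3c4d5e6f7a8b9c0d1e2  # v4.1.2
```

A retagged release can't touch the pinned form; an attacker would need to land a commit and get you to bump the SHA.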
I wish the PRs the bot opens could include a diff of the source code of the upgraded libraries (right in the PR, because even if in theory you could manually hunt down the diffs in the various tags, in practice nobody does).
It also doesn't bother checking what's already in your project. Grep around a bit and you'll find three `formatTimestamp` functions all doing almost the same thing.
This clicks for message parsing too. Had field lookups in a std::map, fine until throughput climbed. Flat sorted array fixed it. Turns out cache prefetch actually kicks in when you give it sequential scans.
Biggest reason is usually the toolchain. Debuggers, sanitizers, profilers all just work when your target is C. Go through LLVM and you get similar optimization, but now you own the backend. With C, gcc and clang handle that part.
Are you saying that by providing a globally available service, they're engaging in vendor lock-in, because otherwise you'd have to build your own globally available and distributed service...? ...yeah?
But if you wanted to run Workerd on EC2 or Google Cloud or whatever, you could...so not really sure how that applies here.
You cannot say you’re the spiritual successor to WordPress if your software doesn’t support running plugins out of the box on arbitrary installs. WordPress is very easy to host and scale: you only need a basic server and a CDN. A spiritual successor would follow that and also have all the new shiny stuff.
Spent yesterday pruning dependencies in a project. Cut half of them and everything still worked. Makes you wonder how much stuff we pull in without thinking about it. Same thing with AI-generated PRs, honestly: one bad suggestion and it ships.