I've always liked China's business model for music. In China, all music is free to stream and download. Musicians make their money the more traditional way, through performances, merchandise, promotions/advertising, etc.
If the operators of Anna's Archive live somewhere like Russia or China, there's a good chance nothing will ever come of any of this legal action. Anna's Archive's biggest challenge is just maintaining availability of infrastructure.
If they were not physically in Russia or a similar country outside the jurisdiction of the court, then they have likely moved to one or operate from one.
At this point, the court is just a willing instrument of corporate anger and an assistant to help vent their frustration. The secondary purpose is to erode rights and privacy in service of a continual surveillance state, and to gain as much control over the DNS infrastructure as possible.
Chevron hired a private prosecutor who was friends with the judge who took the case, to prosecute Donziger after he won a case outside of the US against Chevron.
That’s a big if. My bet is that they are in Central or Northern Europe, just like the Pirate Bay people. Unlikely anyone in Russia or China would care to offer a service primarily to the benefit of the western world. I bet there are similar sites in the Runet or behind the Great Firewall we don't even know about and that simply don't bother catering to us.
Yep. You can also see that in the design language and the written English on their sites and blog posts. Something created by people with a Russian or Chinese background would approach a myriad of little things differently.
Z-library was/is very likely run by Russians. They were even arrested by FBI, but escaped. Archive.is is likely run by a Russian. LibGen was run by Russians.
None of them are Anna's Archive, and one is not like the other. Z-library, LibGen maybe; Archive.is might be eastern Europe but almost certainly not Russia. Just because it's advantageous in some cases to appear Russian or Chinese doesn't mean it's true. Some are better at camouflage than others; sites like https://migflash.com/ not so much.
There is no "strong case" in this article. Yeah, the guy linked to it has a Slavic name and likely speaks Russian. Guess what? That's true for most of eastern Europe, and you will find plenty of people matching these criteria all over the rest of Europe.
> Unlikely anyone in Russia or China would care to offer a service primarily to the benefit of the western world.
Russians are huge on the piracy scene and have been for decades, primarily because it’s an effective way for the Russian Federation to thumb their nose at the Americans. China has more than a billion people in it. I’m sure between the two of them there is at least one person that identifies with citizen of the world style liberalism (and, if I could venture to be an optimist, probably a lot more than one).
What are you on about? rutracker, libgen, sci-hub, z-lib are all Russian/ex-Soviet projects and cater heavily to westerners. I'm 99% sure archive.is and anna's-archive are also in this category.
While the technology is young, bugs are to be expected, but I'm curious what happens when their competitors mature their products, clean up the bugs, and stabilize them, while Claude is still kept in this trap where a certain number of bugs and issues are a constant fixture due to vibe coding. But hey, maybe they really do achieve AGI and get over the limitations of vibe coding without human involvement.
This is the biggest bottleneck for me. What's worse is that LLMs have a bad habit of being very verbose and rewriting things that don't need to be touched, so the surface area for change is much larger.
Not only that, but LLMs do a disservice to themselves by writing verbose code and decorating lines with redundant comments, which wastes their context the next time they work with it.
I have had good luck asking my agent "now review this change: is it a good design, does it solve the problem, are there excessive comments, is there anything else a reviewer would point out?" I'm still working on what prompt to use, but that is about right.
It's kind of weird; I jumped on the vibe coding opencode bandwagon, but using a local 395+ w/128 running qwen coder. Now, it takes a bit to get the first tokens flowing, and the cache works well enough to get it going, but it's not fast enough to just set it and forget it, and it's clear when it goes in an absurd direction and either deviates from my intention or simply loads some context where it should have followed a pattern, whatever.
I'm sure these larger models are both faster and more cogent, but it's also clear that what matters is managing their side tracks and cutting them short. Then I started seeing the deeper problematic pattern.
Agents aren't there to increase the multifactor of production; their real purpose is to shorten context to manageable levels. In effect, they're basically trying to reduce the odds of longer-context poisoning.
So, if we boil down the probability of any given token triggering the wrong subcontext, it's clear that the greater the context, the greater the odds of a poison substitution.
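That intuition can be sketched with a toy independence model (the per-token probability `p` here is made up, and real attention failures aren't independent events, so treat this as illustration only):

```python
# Toy model: if each token independently has a small probability p of
# steering attention into the wrong subcontext, the chance that at least
# one token does so grows quickly with context length.
def poison_odds(context_len: int, p: float = 1e-5) -> float:
    """P(at least one 'poisoned' token) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** context_len

for n in (1_000, 10_000, 100_000):
    print(n, round(poison_odds(n), 3))
```

Even with a tiny per-token probability, the odds of at least one bad trigger approach certainty as the context grows, which is exactly the pressure toward shorter, scoped subcontexts.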
Then that's really the problematic issue every model is going to contend with because there's zero reality in which a single model is good enough. So now you're onto agents, breaking a problem into more manageable subcontext and trying to put that back into the larger context gracefully, etc.
Then that fails, because there's zero consistent determinism, so you end up at the harness, trying to herd the cats. This is all before you realize that these businesses can't just keep throwing GPUs at everything, because the problem isn't compute-bound; it's contextual/DAG-limited the same way a brain is limited.
We all have intelligence and use several orders of magnitude less energy doing mostly the same thing.
yes, this is accurate for the US and “works”, but it's against code here. You'll get mildly shocked by metallic cabinets and fixtures, especially if you're barefoot and become the new shortest path to ground.
old construction in the US sometimes did this intentionally (either because the house was so old it didn't have grounds, or to “pass” an inspection and sell the place), but if a licensed electrician sees this they have to fix it.
I'm dealing with a 75-year-old house that's set up this way. The primary issue this is causing is that the 50-amp circuits for the HVACs are taking a shorter path to ground inside the house instead of in the panel.
As a result, the 50-amp circuit has blown through several of the common 20-amp grounds and neutrals and left them with dead light fixtures and outlets, because they're bridged all over the place.
If an HVAC or two can do this, I'd advise against it for your 3200-watt AI rig.
In the EU, you don't want to try to energize your ground. They use step-down transformers or power supplies capable of taking 115-250V (their systems are 240-250V across the load and neutral lines, not 120V across the load and neutral like ours).
In the US, you're talking about energizing your ground plane with 120V, and I don't want to call that safe… but it's REALLY NOT SAFE to make yourself the shortest path to ground on, say, a wet bathroom floor with 220-250V.
> I’m dealing with a 75 year old house that’s set up this way
I can’t tell what practice you’re referring to. Are you perhaps referring to older wiring that connects large appliances to a neutral and two hots but no ground, e.g. NEMA 10-30R receptacles? Those indeed suck and are rather dangerous. Extra dangerous if the neutral wiring is failing or undersized anywhere.
But even NEMA 10-30R receptacles are still 120V RMS phase-to-ground. (And, bizarrely, there’s an entire generation of buildings where you might find proper 4-conductor wiring to the dryer outlet and a 10-30R installed — you can test the wiring and switch to 14-30R without any rewiring.)
The exception for residential wiring is when the neutral feed from the utility transformer fails, in which case you may have 240V phase-to-phase with the actual Earth floating somewhere in the middle (via the service’s ground connection), which can result in phase-to-neutral and phase-to-ground measured anywhere in the house varying from 0 to 240V RMS.
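That open-neutral behavior can be sketched as a simple series voltage divider (the load resistances below are made up for illustration; real loads are reactive and vary constantly):

```python
# Open-neutral sketch: with the utility neutral lost, the two 120V legs
# become a series divider across the 240V phase-to-phase supply, and the
# "neutral" point floats according to the load imbalance.
def leg_voltages(r_leg_a: float, r_leg_b: float, v_service: float = 240.0):
    """Return (V across leg A, V across leg B) with an open neutral."""
    i = v_service / (r_leg_a + r_leg_b)   # same current through both legs
    return i * r_leg_a, i * r_leg_b

# Balanced loads: each leg still sees 120V and nothing seems wrong.
print(leg_voltages(10, 10))
# Badly imbalanced: one leg sags toward 0V, the other swings toward 240V.
print(leg_voltages(5, 55))
```

This is why a lost neutral is so destructive: lightly loaded legs see wild overvoltage while heavily loaded ones brown out, and the split shifts every time an appliance switches on or off.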
> wet bathroom floor
A GFCI receptacle adds a considerable degree of safety and can be installed with arbitrarily old wiring. It’s even permitted by code to install one with no ground connection as long as you label it appropriately — look it up in your local code.
it’s worse than no ground connection. There’s no neutral connection so they replaced neutrals with ground.
I believe that’s kinda naughty.
It works, but it energizes your ground plane and people do get mildly shocked. that’s making me a little nervous.
So holes have been drilled in ceilings and walls, and single-wire neutrals or grounds have been fished down the walls, repeatedly, by yours truly, but there's still at least one “GFCI” outlet that's wired this way
that they're balking at getting an electrician back out here for.
bridging neutral to ground because the neutral line's dead, uh, “works”, to be technical, but whoever did this moved on years ago and heaven only knows how many outlets or fixtures this was done in. I'm just finding out one by one as someone goes “hey, this stopped working!” and you pull it and the neutral or ground blew like a fuse.
So that's my whole point: this is an extremely bad idea for a 3200-watt computer.
yes, they are all getting snipped and blank wall plated and marked as hazards that need to be remediated with a Dymo labeler as I discover them.
I don’t work here I just live here and have kind of a slummy owner who doesn’t want to do anything about any of it and doesn’t care if the plumbing or electrical works.
But they paid some guy like $4000 to install a totally unnecessary subpanel that’s bridging conflicting phases into the same circuits because he didn’t figure out this was what was going on. Dios Mio. I would have fixed the whole house for $1000. Miracle this hovel hasn’t burned to the ground yet.
I’m putting up with it for now but should probably bail before it does.
Late reply: I think you misunderstood my comment. I was replying to:
> It definitely comes in at a higher voltage.
The voltage supplied to a US house is 120V RMS measured phase-to-ground. You will not find a higher voltage in your house. This does not mean that it’s appropriate to run any non-negligible current from phase to the ground (green / “equipment grounding conductor”) wires.
One can get vaguely close to an accurate understanding by imagining that there are four wires coming out of your main panel: +120V, -120V, 0V white (the “actually use me” wire) and 0V green (a safety wire where any current more than a few mA or maybe tens of mA depending on application is at least a mistake). There’s no 240V to be found.
This explanation falls apart pretty quickly — the US system is AC, not DC.
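For the curious, a quick numeric sketch of the actual split-phase AC picture the DC analogy glosses over (the two hots are the same sinusoid, 180 degrees apart):

```python
import math

# Split-phase AC sketch: the two "hot" legs are the same 60 Hz sinusoid,
# 180 degrees out of phase, each 120V RMS measured to neutral/ground.
def leg_a(t: float) -> float:
    return 120 * math.sqrt(2) * math.sin(2 * math.pi * 60 * t)

def leg_b(t: float) -> float:
    return -leg_a(t)   # same waveform, 180 degrees out of phase

# At a quarter cycle (t = 1/240 s) each leg is at its ~170V peak
# (120V RMS), while hot-to-hot the difference is doubled: 240V RMS.
t = 1 / 240
print(round(leg_a(t)))             # phase-to-ground peak
print(round(leg_a(t) - leg_b(t)))  # phase-to-phase peak
```

So phase-to-ground never exceeds 120V RMS, but between the two hots (e.g. at a dryer or range outlet) you do get 240V RMS, which is the supply the floating-neutral scenario above divides up.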
I'm personally appreciative of these comments. It's good that people make claims, get challenged, and both sides walk away with informative points having been made. It's entirely possible both sides here are correct and wrong in their own ways.
I wonder if requiring it twice a month would fix both issues, since it's too frequent to plan around (versus quarterly), while frequent enough to allow transparency (versus annually).
Alan Kay's argument against static typing was that it was too limited and didn't capture the domain logic of the sorts of types you actually use at a higher level. So you leave it up to the objects to figure out how to handle messages. And Ruby is a kind of spiritual descendant of Smalltalk.
the problem is that nobody listened to Alan Kay, and people write dynamic code the way they'd write static code, just without the types.
I always liked Rich Hickey's point that you should program on the inside the way you program on the outside. Over the wire you don't rely on types or expect the entire internet to be in type-check harmony; it's on you to verify what you get, and that's what Alan Kay thought objects should do.
That's why I always find these complaints a bit puzzling. Yes, in a dynamic language like Ruby, Python, Clojure, or Smalltalk you can't impose global meaning, but you're not supposed to. If you have to edit countless pieces of existing code just because some sender changed, that's an indication you've ignored the principle of letting the recipient interpret the message. It shouldn't matter what someone else puts in a map, only what you take out of it, the same way you don't care if the contents of the mail truck change as long as your package is in it.
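A minimal sketch of that principle (the `handle_shipment` function and its field names are hypothetical, just to show the shape): the receiver pulls out only the keys it cares about and validates them locally, so senders are free to add or change everything else.

```python
# Hypothetical receiver that only interprets the fields it actually uses.
# Senders can add, rename, or reorder other fields without breaking it.
def handle_shipment(message: dict) -> str:
    # Take only what this receiver needs, with local validation/defaults.
    package_id = message.get("package_id")
    if package_id is None:
        raise ValueError("shipment message missing package_id")
    destination = message.get("destination", "unknown")
    return f"routing {package_id} to {destination}"

# A sender later adds 'priority' and 'weight'; this receiver is unaffected.
print(handle_shipment({"package_id": "A17", "destination": "Oslo",
                       "priority": "high", "weight": 2.3}))
```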
That's a terrible solution because then you need a bunch of extra parsing and validation code in every recipient object. This becomes impractical once the code base grows to a certain size and ultimately defeats any possible benefit that might have initially been gained with dynamic typing.
>then you need a bunch of extra parsing and validation code in every recipient object.
that's not a big deal, when we exchange generic information across networks we parse information all the time, in most use cases that's not an expensive operation. The gain is that this results in proper encapsulation, because the flipside of imposing meaning globally is that your entire codebase is one entangled ball, and as you scale a complex system, that tends to cost you more and more.
In the case of the OP, where a program “breaks” and has to be recompiled every time some signature change propagates through the entire system, that is a significant cost. Again, if you think of a large-scale computer network as an analog to a program: what costs more, parsing an input, or rebooting and editing the entire system every time we add a field somewhere to a data structure that most consumers of that data don't care about?
this is how we got microservices, which are nothing but a way to introduce late binding and dynamism into static environments.
> when we exchange generic information across networks we parse information all the time
The goal is to do this parsing exactly once, at the system boundary, and thereafter keep the already-parsed data in a box that has "This has already been parsed and we know it's correct" written on the outside, so that nothing internal needs to worry about that again. And the absolute best kind of box is a type, because it's pretty easy to enforce that the parser function is the only piece of code in the entire system that can create a value of that type, and as soon as you do this, that entire class of problems goes away.
This idea of using types whose instances can only be created by parser functions is known as Parse, Don't Validate, and while it's possible and useful to apply the general idea in a dynamically typed language, you only get the "We know at compile time that this problem cannot exist" guarantee if you use types.
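A rough Python approximation of the pattern (`Email`, `parse_email`, and `send_welcome` are made-up names; Python can only enforce the "only the parser constructs this type" rule by convention, not at compile time):

```python
from dataclasses import dataclass

# "Parse, don't validate" sketch: Email values are (by convention) only
# created via parse_email, so any Email seen downstream is known-valid.
@dataclass(frozen=True)
class Email:
    value: str

def parse_email(raw: str) -> Email:
    """The single place in the system that turns raw input into Email."""
    raw = raw.strip()
    if "@" not in raw:
        raise ValueError(f"not an email address: {raw!r}")
    return Email(raw)

def send_welcome(to: Email) -> str:
    # No re-checking here: the type records that parsing already happened.
    return f"welcome mail queued for {to.value}"

print(send_welcome(parse_email("  ada@example.com ")))
```

In a statically typed language the compiler would reject `send_welcome("raw string")` outright; here the type annotation only documents the contract, which is exactly the gap the parent comment is pointing at.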
> The goal is to do this parsing exactly once, at the system boundary
You are only parsing once at the system boundary, but under the dynamic model every receiver is its own system boundary. Like the earlier comment pointed out, microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively. Yes, you are only parsing once in each service, but ultimately you are still parsing many times when you look at the entire program as a whole. "Parse, don't validate" doesn't really change anything.
> but under the dynamic model every receiver is its own system boundary
I'm not claiming that it can't be done that way, I'm claiming that it's better not to do it that way.
You could achieve security by hiring a separate guard to stand outside each room in your office building, but it's cheaper and just as secure to hire a single guard to stand outside the entrance to the building.
>micro services emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamicism natively
I think microservices emerged for a different reason: to make more efficient use of hardware at scale. (A monolith that does everything is in every way easier to work with.) One downside of microservices is the much-increased system boundary size they imply -- this hole in the type system forces a lot more parsing and makes it harder to reason about the effects of local changes.
> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.
Same thing, no? That is exactly what Kay was talking about. That was his vision: infinite nodes, all interconnected, sending messages to each other. That is why Smalltalk was designed the way it was. While the mainstream Smalltalk implementations got stuck in a single-image model, Kay and others did try working on projects to carry the vision forward. Erlang had some success with the same essential concept.
> I'm claiming that it's better not to do it that way.
Is it fundamentally better, or is it only better because the alternative was never fully realized? For something of modern relevance, take LLMs. In your model, you have to have the hardware to run the LLM on your local machine, which for a frontier model is quite the ask. Or you can write all kinds of crazy, convoluted code to pass the work off to another machine. In Kay's world, being able to access an LLM on another machine is a feature built right into the language. Code running on another machine is the same as code running on your own machine.
I'm reminded of what you said about "Parse, don't validate" types. Like you alluded to, you can write all kinds of tests to essentially validate the same properties as the type system, but when the language gives you a type system you get all that for free, which you saw as a benefit. But now it seems you are suggesting it is actually better for the compiler to do very little and that it is best to write your own code to deal with all the things you need.
> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.
Scaling different areas of an application is one thing. Being able to use different technology choices for different areas is another, even at low scale. And being able to have teams own individual areas of an application via a reasonably hard boundary is a third.