Hacker News | linsomniac's comments

I like to think of it more like a bytecode.

>I can maintain the Python code myself and I can execute it everywhere. [and share it more easily]

Python can be kind of a pain in the butt to execute everywhere because of libraries. I thought uv script headers and shebangs were going to fix a lot of that, but I'm still running into issues (machines firewalled off so uv can't grab the deps; some code that just doesn't seem to work in uv on a Mac...). And once the code splits out into multiple files and modules, sharing it starts looking like sharing any other codebase.
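For anyone who hasn't seen the uv script headers mentioned above: they're PEP 723 inline metadata plus a uv shebang. A toy example (deliberately stdlib-only, so it also runs as plain Python):

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Toy single-file script: `uv run` reads the header above and (when
dependencies are listed) fetches them into an ephemeral environment."""
import json

data = {"cached": 3, "missed": 1}
print(json.dumps(data, sort_keys=True))
```

With real third-party deps listed in `dependencies`, this is exactly the case that breaks behind a firewall: uv has to reach the package index at run time.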

Don't think I'm a Python detractor; I'm a PSF Fellow, I love Python, and Claude has been writing quite good Python for a while now. But I just tried a serious project with Claude writing golang (an apt proxy/cache that is resilient against upstream DDoSes, a fairly complex piece of software), and I must say it did a fantastic job. I ended up with an executable I can easily run and copy around.

I'm still going to be using Python for a lot, but I can definitely see myself having Claude write golang for more things in the future.


The Ubuntu DDoS got me to thinking: If we had a critical need to respin machines (like our data center caught fire), we would have been in for a real challenge. We run apt-cacher-ng, but it did nothing for us during the DDoS, and worse: Every few weeks or a month ac-ng will go out to lunch and we have to fix it.

So: ac-ng didn't reduce the impact of the DDoS, but it does cause outages of its own when there is no DDoS. Worst of both worlds.

So I'm working on an apt-cacher that goes to great lengths to keep working when the upstream is down. It checks the repo metadata, keeps a list of your "hot packages", and downloads those before flipping the new metadata live, effectively creating a snapshot. During a DDoS it won't let you download a package you've never downloaded before, but packages you do download regularly (machine re-installs, apt updates) it will ensure are available in the cache.
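The "prefetch hot packages before flipping metadata" idea above can be sketched roughly like this. All names here are hypothetical illustrations, not apt-cacher-ultra's actual API (which is Go, not Python):

```python
# Sketch of the "snapshot flip": new repo metadata only becomes live once
# every hot package it references is present in the local cache. If any
# fetch fails (e.g. upstream DDoS), we raise and keep the old snapshot.

def flip_metadata(new_metadata, hot_packages, cache, fetch):
    """Prefetch hot packages, then return the new live metadata snapshot.

    new_metadata: dict of package name -> version in the candidate snapshot
    hot_packages: set of package names we've served before ("hot")
    cache: dict of (name, version) -> bytes already on disk
    fetch: callable downloading one (name, version); raises on failure
    """
    for name in hot_packages:
        if name not in new_metadata:
            continue  # package dropped upstream; nothing to prefetch
        key = (name, new_metadata[name])
        if key not in cache:
            cache[key] = fetch(*key)  # raising here aborts the flip
    # Only now does the new metadata become the live snapshot.
    return dict(new_metadata)


# Usage: only the hot package is prefetched before the flip.
cache = {}
fetched = []

def fake_fetch(name, version):
    fetched.append((name, version))
    return b"deb-bytes"

live = flip_metadata({"curl": "8.5", "vim": "9.1"}, {"curl"}, cache, fake_fetch)
```

The key property is that the flip is all-or-nothing: a mid-download upstream failure leaves clients on the previous, fully-populated snapshot.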

I'm calling it apt-cacher-ultra. It's pretty early days; it'll probably be another week before it's ready for a beta. I'm running it successfully in my dev cluster right now.

https://github.com/linsomniac/apt-cacher-ultra


>And honestly, everybody else's stuff is in use-1

Yeah, but why put your eggs in that basket? I moved all our services from us-east-1 to us-west-2 (Oregon) a decade ago and haven't looked back.


Not OP, but I do single-region us-east-1 for a few reasons:

1. The severity and frequency of us-east-1 outages are vastly overstated. It's fine. These us-east-1 outages almost never affect us. This one didn't; not even our instances in the affected AZ. Only that recent IAM outage affected us a little bit, and it affected every other region, too, since IAM's control plane is centrally hosted in us-east-1. Everybody's uptime depends on us-east-1.

2. We're physically close to us-east-1 and have Direct Connect. We're 1 millisecond away from us-east-1. It would be silly to connect to us-east-1 and then take a latency hit and pay cross-region data transfer cost on all traffic to hop over to another region. That would only make sense if we were in both regions, and that is not worth the cost given #1. If we only have a single region, it has to be us-east-1.

3. us-east-1 gets new features first. New AWS features are relevant to us with shocking regularity, and we get them as soon as they're announced.

4. OP is right about the safety in numbers. Our service isn't life-or-death; nobody will die if we're down, so it's just a matter of whether they're upset. When there is a us-east-1 outage, it's headline news and I can link the news report to anyone who asks. That genuinely absolves us every time. When we're down, everybody else is down, too.


Sometimes you need capacity and have to take it where it's available, not where you would like it to be. Unfortunately, the days of cloud bursting, and of thinking of the cloud as an unlimited resource where you can spin machines up and down at will, are vanishing. Power availability and supply-chain lead times combined with unprecedented demand are the reason for this. That's why you see all the hyperscalers recently reporting on their "backlog" in their earnings reports.

But it’s okay to be down when the whole internet is down.

90% of our customers are located in us-east-1. Latency to us-east-1 is more important than being up when everyone else is down.

I was half expecting Fowler to tie it in to right-sizing agent teams.

I've typically leaned towards Python for my agentic programming, because the LLMs have been good at it and I'm familiar with it if I need to take a look. But I'm just finishing up an apt-cacher replacement and decided to use golang, and the experience has been really great.

I'm using CC + Opus 4.7 at max effort, and it produced a working apt cacher from the first phase of development; so far there have only been a few things I've had to ask it to fix. This is over ~52 KLOC (counted by "wc -l"), going on day 3 of it working on the project. This includes: a caching proxy, garbage collection, the "http://HTTPS///" kludge (apt-cacher-ng semantics), a MITM https proxy, an admin website + metrics, deep validation of metadata with rejection of invalid updates, and snapshots of upstream state with delayed metadata updates until the "hot packages" are available...
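For context on the "http://HTTPS///" kludge: apt-cacher-ng lets clients encode an https origin inside a plain-http proxy path, so apt can talk http to the cacher while the cacher talks https upstream. A hypothetical sketch of that rewrite (my illustration, not code from either project):

```python
# apt-cacher-ng convention: a client requests
#   http://cacher:3142/HTTPS///host/path
# and the proxy fetches https://host/path from upstream.

def upstream_url(request_path: str) -> str:
    """Map a proxy request path to the real upstream URL (sketch)."""
    prefix = "/HTTPS///"
    if request_path.startswith(prefix):
        return "https://" + request_path[len(prefix):]
    # Default: a plain-http mirror, e.g. /archive.ubuntu.com/ubuntu/...
    return "http://" + request_path.lstrip("/")

print(upstream_url("/HTTPS///example.com/ubuntu/pool/main/c/curl/curl.deb"))
```

Supporting this convention is what makes a replacement drop-in for existing sources.list entries that already use the apt-cacher-ng style.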

10/10, would go again.

FYI: My agent loop is: "Work on next step, have codex review it, compact", and then a couple rounds at the end of a phase to review the code against the spec, and a couple rounds at the beginning of a phase to create the spec.


>not intellectually curious or open

This checks out. I once was at a conference where they (Azure) had a giant booth. A fairly well known person in the community brings me over to talk to his manager who is working the booth. "We should hire him, he's really smart." Within a minute of talking to this manager he says "You're a Linux guy? We do Windows." and physically turns away from me, conversation over. You know, fair enough, was an easy way to find that it wasn't a good fit. But the lack of curiosity about "what do you bring to the table" was pretty stunning.

Be curious.

edit: Clarifying "they"


Wait, is this Azure or GitHub who had the booth? If it was GitHub, I’m super confused and there must have been some serious missing context. I was at GitHub from 2020-2023 and am not aware of _any_ Windows usage in the service. The only meaningful Windows footprint was for client dev (`gh`, GitHub Desktop, etc.) and even there, Windows was the exception. Service side is all Linux; most engineers worked from a Mac.

If the context was an Azure booth, I’m still mildly surprised (they’ve long been invested in beyond-Windows) but not shocked.

(Edit: I forgot about the Actions stack. Some of that was on Windows. I was pretty far removed from that world and much closer to the classic Ruby monolith side.)


Sorry about the ambiguity: I was replying to the Azure part; this was an Azure booth.

Oof, that’s rough, especially considering that GitHub used to be a Linux shop. I wonder what happened to all the Rails folks who built the OG platform.

They’re happy and vested probably :)

Happy and definitely gone, haha. Not my circus not my monkeys.

If they were curious they wouldn't "do Windows"

Your story (and the other posts commenting on lack of intellectual curiosity) fits into a larger model of the world that I subscribe to. Being labeled "well-known" or "smart" doesn't seem to require intellectual openness anymore. In fact, openness seems to be penalized. Being open means potentially exposing yourself to scenarios where you are not the smartest or most authoritative person, and that reduces your authority, so you avoid those scenarios to preserve it. Even when you are not "the authority", being open can be a threatening signal to the authority, since you and your openness could be a vector for ideas and scenarios that reduce their authority. So long as authority is solidified by this lack of openness, actually being open can limit your career potential.

Seeing this happen in real time is helping me understand how authoritarian regimes/institutions/movements rise to power.


Wow, why anyone would build a serious SaaS platform in this day and age on Windows is beyond me.

Not sure if you consider 5.7.0 (6 months old) "seriously outdated", or are talking about Ubuntu 24.04 (the previous LTS). I recently looked and decided 5.8.2 (3 weeks old) didn't have anything compelling to make me want to try to shoehorn it in.

Ubuntu 24.04. The new LTS had dropped only two weeks earlier. LTS users had a very outdated podman (4.9, two years old) and couldn't use quadlet types like build units (added in v5.2.0, Aug 2024).

We are switching our Docker systems over to using Podman, primarily to get rid of the machinations we have to do to keep "apt update" from taking down services if there's a new Docker version. We're rolling them up from 24.04 to 26.04 and just using the podman packages on 26.04.
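For anyone who hasn't run into quadlets: they're unit files that podman's systemd generator turns into ordinary services, which is what makes the "apt update restarted my containers" class of problem go away. A minimal assumed example (hypothetical image and ports), dropped into /etc/containers/systemd/web.container:

```ini
# /etc/containers/systemd/web.container -- quadlet generates web.service
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, this behaves like any other systemd service: `systemctl start web` runs the container, and upgrading the podman package doesn't bounce it.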

I see, at least the good thing with 26.04 is that you are set for a while.

We keep talking about AI fatigue and burnout.

Am I the only one who is finding quite the opposite? I feel like a kid again, back when I had no responsibilities and infinite time to play around and build things. Being able to look at my existing tooling, say "there's a rough edge here", and then whip out the equivalent of a Milwaukee Bandfile [1] and smooth it out is making it fun to go to work again.

[1] https://www.milwaukeetool.com/products/details/m12-fuel-1-2-...


Curious... Unless it's a really desirable movie, I typically won't go to the theater to see it. ;-)
