Hacker News | ianpurton's comments

That site is everything that's wrong with the internet at the moment.

A dizzying array of adverts and popups.


And it was very persistent that I should enable those. No thank you.

Not a single popup; I didn't find any ads beyond a text-only banner on top asking to subscribe. Some whitespace where ads might go, though. No adblock.

I'm in the EU, though; maybe they actually respect GDPR. Or maybe it is just a glitch.


Because the model that generated that list was trained before the M5 came out.

> if you face a problem in development, db, storage or anything

Developers won't pay for it. I'll begrudgingly pay for hosting, and I pay for AI tokens and domain names, and that's it.

And every SaaS idea can now be copied with this prompt:

Build me an open source clone of -> https://your-saas.com

You need a moat these days more than ever.


When he switches from Kubernetes in the cloud to Nginx -> App Binary -> SQLite, he trades operations functionality for cost.
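To make that stack concrete, the front of it can be a single nginx server block like this sketch (the domain, port, and file path are hypothetical):

```nginx
# Hypothetical /etc/nginx/conf.d/app.conf for the Nginx -> app binary stack.
server {
    listen 80;
    server_name example.com;                  # hypothetical domain

    location / {
        proxy_pass http://127.0.0.1:8080;     # the app binary; SQLite lives on disk
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

That's the entire "ops layer": one reverse proxy in front of one process.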

But actually, you can run Kubernetes, Postgres, etc. on a VPS.

See https://stack-cli.com/ where you can specify Supabase-style infra on a low-cost VPS on top of K3s.


I think his argument is that the functionality is unnecessary. You don’t need dynamic service scaling because your single-instance service has such high capacity to begin with.

I guess it’s all about knowing when to re-engineer the solution for scale. And the answer is rarely ”up front”.


Dynamic scaling is not really even available on single-node Kubernetes.

I was thinking more of:

Running multiple websites, i.e. one application per namespace. Tooling, e.g. k9s for looking at logs. Upgrading applications, etc.
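Concretely, the per-site isolation is just a Namespace object (the name here is hypothetical):

```yaml
# One namespace per hosted website, so each app's resources,
# quotas and RBAC are isolated from the others.
apiVersion: v1
kind: Namespace
metadata:
  name: site-a
```

Each site then gets deployed with `kubectl -n site-a apply -f app.yaml`, and k9s can be scoped to a single namespace when browsing logs.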


Namespaces exist in Linux [0]; they weren't invented by K8s.

You can view application logs with anything that can read a text file, or journalctl if your distro is using that.

There are many methods of performing application upgrades with minimal downtime.

0: https://www.man7.org/linux/man-pages/man7/namespaces.7.html
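As a sketch of that "minimal downtime without an orchestrator" point: a plain systemd unit (the unit and binary names here are hypothetical) already gives you crash restarts and log access out of the box:

```ini
# /etc/systemd/system/myapp.service (hypothetical unit name)
[Unit]
Description=My app binary
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=always        ; restart the process whenever it exits
RestartSec=2          ; wait 2s between restart attempts

[Install]
WantedBy=multi-user.target
```

Then `journalctl -u myapp -f` tails the logs, no k9s required.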


That's true. The reason I like k8s is that once you've gone up the learning curve, you can apply that knowledge to cloud deployments, on-prem, or in this case a VPS.

The author's stack left me thinking about how he will restart the app if it crashes, versioning, containers, infrastructure as code.

I've seen these articles before... the Ruby on Rails guys had the same idea and built https://kamal-deploy.org/

Which starts to look more and more like K3s as time goes on.


I'm thinking even simple containers have automatic restarts. I wouldn't deploy to prod using "docker start", but I wouldn't look askance at someone using "docker compose" for that purpose.
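For what that buys you, a minimal compose file (the service and image names are hypothetical) is enough to get crash restarts:

```yaml
# docker-compose.yml — "docker compose up -d" gives automatic
# restarts with no orchestrator.
services:
  app:
    image: ghcr.io/example/app:1.0   # hypothetical image
    restart: unless-stopped          # restart on crash, but not after a manual stop
    ports:
      - "8080:8080"
```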

Namespacing is great; look at how Notepad++ was hacked. They were sharing a non-namespaced deployment with other applications, IIRC.

> I bet we will move from CLIs to something else in about 3-6 months.

My bet would be OpenAPI specs. The model will think it's calling a CLI, but we intercept the tool call and proxy it with the OAuth credentials.

There are some implementations already out there in Open WebUI and BionicGPT.
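A sketch of the shape I mean, with a hypothetical search endpoint; the oauth2 security scheme is standard OpenAPI, and the proxy would fill in the token:

```yaml
openapi: 3.0.3
info:
  title: Example tool API    # hypothetical API
  version: "1.0"
paths:
  /search:
    get:
      operationId: search    # exposed to the model as a callable "tool"
      security:
        - oauth: [read]      # the proxy attaches the OAuth token here
      parameters:
        - name: q
          in: query
          schema: {type: string}
      responses:
        "200":
          description: Search results
components:
  securitySchemes:
    oauth:
      type: oauth2
      flows:
        authorizationCode:
          authorizationUrl: https://example.com/authorize
          tokenUrl: https://example.com/token
          scopes:
            read: Read access
```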


If you look for jobs on LinkedIn, you'll see a lot are posted by recruiters.

It's usually pretty easy to get one to call you, and they can give you an idea about the market.


This is a really good idea.

Taking something that is basically a lonely, depressing activity and putting a social aspect around it.

Well done.


I don't work at OpenAI, but I use Codex as I imagine most people there do too.

I actually use it from the web app, not the CLI. So far I've run over 100 Codex sessions, a great percentage of which I turned into pull requests.

I kick off Codex for one or more tasks and then review the code later. They run in the background while I do other things. Occasionally I need to re-prompt if I don't like the results.

If I like the code, I create a PR and test it locally. I would say 90% of my PRs are AI-generated (with a human in the loop).

Since using Codex, I very rarely create handwritten PRs.


Do you use any tools to help with the code review part?


I imagine they mean a remote KVM. So you remote into a PC sitting in a basement in someone's house in the US. You then make all your outgoing internet traffic from that setup, and your IP address would look legit.


I've never worked on a monorepo that has the whole organization's code in it.

What are the advantages vs. having a monorepo per team?


* Tooling improvements benefit everyone. Maybe that's a faster compiler, an improved linter, code search, code review tools, bug database integration, a presubmit check that formats your docs - it doesn't matter, everyone has access to it. Otherwise you get different teams maintaining different things. In 8 years at Microsoft my team went through at least four CI/CD pipelines (OK, not really CD), most of which were different from what most other teams in Windows were doing to say nothing of Office - despite us all writing Win32 C++ stored in Source Depot (Perforce) and later Git.

* Much easier refactors. If everything is an API and you need to maintain five previous versions because teams X, Y, Z are on versions 12, 17, and 21 it is utter hell. With a unified monorepo you can just do the refactor on all callers.

* It builds a culture of sharing code and reuse. If you can search everyone's code and read everyone's code you can not only borrow ideas but easily consume shared helpers. This is much more difficult in polyrepo because of aforementioned versioning hell.

* A single source of truth. Server X is running at CL #123, Server Y at CL #145, but you can quickly understand what that means because it's all one source control and you don't have to compare different commit numbers - higher is newer, end of story.


> What are the advantages vs. having a monorepo per team?

If you have two internal services, you can change them simultaneously. This is really useful for debugging with git bisect, as you always have code that passes CI.

I might write a detailed blog about this at some point.
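A toy illustration of that bisect point in a throwaway repo (the repo contents and the "breakage" condition are made up for the demo): because every commit is one consistent snapshot, `git bisect run` can mechanically find the commit that introduced a failure.

```shell
# Build a throwaway repo with five commits, where commits with
# state >= 4 are "broken", then let bisect find the first bad one.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5; do
  echo "$i" > state.txt
  git add state.txt
  git commit -q -m "commit $i"
done
# Bad = HEAD (state 5), good = HEAD~4 (state 1).
git bisect start HEAD HEAD~4 > /dev/null
# Exit 0 means "this commit is good"; bisect does the rest.
result=$(git bisect run sh -c 'test "$(cat state.txt)" -lt 4')
git bisect reset > /dev/null
echo "$result" | grep "is the first bad commit"
```

In a monorepo the same command works across service boundaries, since a cross-service change is one commit rather than two commits in two repos that may not line up.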


One of the big advantages is visibility. You can be aware of what other people are doing because you can see it. They'll naturally come talk to you (or vice versa) if they discover issues or want to use it. It also makes it much easier to detect breakages/incompatibilities between changes, since the state of the "code universe" is effectively atomic.


Not sure I get it. If you are using a product like GitHub Enterprise, you are already quite aware of what other people are doing. You have a lot of visibility, source-code search, etc. If you have CI/CD that auto-creates issues, you can already detect breakages, incompatibilities, etc.

State of the "code universe" being atomic seems like a single point of failure.


GitHub search is insanely bad, and it cannot do things like navigating to definitions between repos in an org.


If you want code search and navigation over a closed subgraph of projects that build into an artifact, OpenGrok does the job reasonably well.


Imagine team A vendors team B's code into their repo and starts adding their own little patches.

Team B has no idea this is happening, as they only review code in repo B.

Soon enough team A stops updating their dependency, and now you have two completely different libraries doing the "same" thing.

Alternatively, team A simply pins their dependency on team B's repo at hash 12345 and then just never updates. How is team B going to catch bugs that their HEAD introduces in team A's repo?


This is already caught by multi-repo tooling like GitHub today. If you vendor in an outdated version with security vulnerabilities, issues are automatically raised on your repo. Team B doesn't need to do anything; it is team A's responsibility to adapt to the latest changes.


Curious, because I haven't seen this myself. Do you mean GitHub detects outdated submodule references? Or GitHub detects a copy of code existing in another repo, where said code has had some patches upstream?


GitHub has Dependabot https://docs.github.com/en/code-security/dependabot/dependab... which can also raise PRs, though your mileage may vary greatly here depending on your language.

You can also configure updates of dependencies: https://docs.github.com/en/code-security/dependabot/dependab...

These work with vendored dependencies too.

(In our org, we have our own custom Go tool that handles more sophisticated cases, like analyzing our divergent forks and upstream commits and raising PRs not just for dependencies but for features. It only works when the upstream refactoring is moderate, though.)
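For reference, a minimal Dependabot config is just a few lines; the ecosystem value here is one example, and npm, pip, cargo, etc. are also supported:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "gomod"   # which ecosystem to watch (example: Go modules)
    directory: "/"               # where the dependency manifest lives
    schedule:
      interval: "weekly"         # how often to check for updates
```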

