Not a single popup, and no ads beyond a text-only banner at the top asking me to subscribe. Some whitespace where ads might go, though. No adblock.
I'm in EU though, maybe they actually respect GDPR. Or maybe it is just a glitch.
I think his argument is that the functionality is unnecessary. You don’t need dynamic service scaling because your single-instance service has such high capacity to begin with.
I guess it's all about knowing when to re-engineer the solution for scale. And the answer is rarely "up front".
That's true. The reason I like k8s is that once you've gone up the learning curve, you can apply that knowledge to cloud deployments, on-prem, or in this case a VPS.
The author's stack left me thinking about how he will restart the app if it crashes, plus versioning, containers, and infra as code.
I've seen these articles before... the Ruby on Rails guys had the same idea and built https://kamal-deploy.org/
Which starts to look more and more like K3s as time goes on.
I'm thinking even simple containers have automatic restarts. I wouldn't deploy to prod using "docker start", but I wouldn't look askance at someone using "docker compose" for that purpose.
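To the crash-restart concern upthread: compose can handle that with a restart policy. A minimal sketch (service name, image, and port are placeholders):

```yaml
# docker-compose.yml -- hypothetical single-service app
services:
  app:
    image: myorg/myapp:1.4.2   # placeholder image tag
    restart: unless-stopped    # restart automatically if the process crashes
    ports:
      - "8080:8080"
```

`unless-stopped` restarts on crashes and host reboots, but respects a manual `docker compose stop`.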
I don't work at OpenAI, but I use Codex as I imagine most people there do too.
I actually use it from the web app, not the CLI. So far I've run over 100 Codex sessions, a great percentage of which I turned into pull requests.
I kick off codex for 1 or more tasks and then review the code later. So they run in the background while I do other things. Occasionally I need to re-prompt if I don't like the results.
If I like the code I create a PR and test it locally. I would say 90% of my PRs are AI-generated (with a human in the loop).
Since using Codex, I very rarely create hand-written PRs.
I imagine they mean a remote KVM. So you remote into a PC sitting in a basement in someone's house in the US, route all your outgoing traffic through that setup, and your IP address looks legit.
* Tooling improvements benefit everyone. Maybe that's a faster compiler, an improved linter, code search, code review tools, bug database integration, a presubmit check that formats your docs - it doesn't matter, everyone has access to it. Otherwise you get different teams maintaining different things. In 8 years at Microsoft my team went through at least four CI/CD pipelines (OK, not really CD), most of which were different from what most other teams in Windows were doing to say nothing of Office - despite us all writing Win32 C++ stored in Source Depot (Perforce) and later Git.
* Much easier refactors. If everything is an API and you need to maintain five previous versions because teams X, Y, Z are on versions 12, 17, and 21 it is utter hell. With a unified monorepo you can just do the refactor on all callers.
* It builds a culture of sharing code and reuse. If you can search everyone's code and read everyone's code you can not only borrow ideas but easily consume shared helpers. This is much more difficult in polyrepo because of aforementioned versioning hell.
* A single source of truth. Server X is running at CL #123, Server Y at CL #145, but you can quickly understand what that means because it's all one source control and you don't have to compare different commit numbers - higher is newer, end of story.
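The "do the refactor on all callers" point above can be sketched as a one-shot codemod landing in a single atomic commit. Everything here is hypothetical (file layout, helper names); real monorepos typically use dedicated refactoring tools rather than sed:

```shell
set -e
# toy "monorepo" with two callers of a shared helper
workdir=$(mktemp -d); cd "$workdir"
mkdir -p src
printf 'result = OldHelper(x)\n' > src/caller_a.py
printf 'OldHelper(y)\n'          > src/caller_b.py
# one pass over every caller -- no versioned clients left behind
grep -rl 'OldHelper(' src/ | xargs sed -i 's/OldHelper(/NewHelper(/g'
```

In a polyrepo world the same rename means coordinating releases across every consuming repo instead of one commit.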
> What are the advantages vs having a mono repo per team?
If you have two internal services you can change them simultaneously. This is really useful for debugging with git bisect, since you always have code that passes CI.
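The bisect workflow can be sketched on a toy repo; `git bisect run` walks the history automatically given a check script that exits 0 on a healthy tree (repo contents and commit messages here are made up):

```shell
set -e
# toy repo: four commits, bug introduced in the third
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo "ok" > app.txt;       git add app.txt; git commit -qm "c1"
git tag known-good
echo "feature" >> app.txt; git commit -qam "c2"
echo "broken" >> app.txt;  git commit -qam "c3 introduces bug"
echo "more" >> app.txt;    git commit -qam "c4"
# mark HEAD bad and the tag good, then let bisect drive the search;
# the check exits non-zero exactly when the bug is present
git bisect start HEAD known-good
git bisect run sh -c '! grep -q broken app.txt' | tee bisect.log
git bisect reset
```

The point upthread is that this only works smoothly when every commit is a consistent, CI-passing snapshot of all services at once, which an atomic monorepo gives you for free.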
I might write a detailed blog about this at some point.
One of the big advantages is visibility. You can be aware of what other people are doing because you can see it. They'll naturally come talk to you (or vice versa) if they discover issues or want to use it. It also makes it much easier to detect breakages/incompatibilities between changes, since the state of the "code universe" is effectively atomic.
Not sure if I get it. If you are using a product like GitHub Enterprise, you are already quite aware of what other people are doing. You have a lot of visibility, source-code search, etc. If you have CI/CD that auto-creates issues, you can already detect breakages, incompatibilities, etc.
State of the "code universe" being atomic seems like a single point of failure.
Imagine team A vendors into their repo team B's code and starts adding their own little patches.
Team B has no idea this is happening, as they only review code in repo B.
Soon enough team A stops updating their dependency, and now you have two completely different libraries doing the "same" thing.
Alternatively, team A simply pins their dependency on team B's repo at hash 12345 and then just never updates... How is team B going to catch bugs that their HEAD introduces in team A's repo?
This is already caught by multi-repo tooling like GitHub today. If you vendor in an outdated version with security vulnerabilities, issues are automatically raised on your repo. Team B doesn't need to do anything; it is team A's responsibility to adapt to the latest changes.
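If this is referring to Dependabot, it can watch vendored git submodules and open update PRs when the pinned commit falls behind upstream. A sketch of the config (directory is an assumption about where the submodule lives):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "gitsubmodule"  # track submodule pins
    directory: "/"                     # assumed submodule location
    schedule:
      interval: "weekly"
```

Note this covers pinned submodules; copy-pasted vendored code with local patches is harder for any tooling to track.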
Curious because I haven't seen this myself. Do you mean, GitHub detects outdated submodule references? Or, GitHub detects copy of code existing in another repo, and said code has had some patches upstream?
(In our org, we have our own custom Go tool that handles more sophisticated cases, like analyzing our divergent forks and upstream commits and raising PRs not just for dependencies but for features. It only works when upstream refactoring is moderate, though.)
A dizzying array of adverts and popups.