veidr's comments | Hacker News

So true.

I had this gaming PC — used once a year for Excel and Dropbox exchanges with my accountant, but other than that, purely a gaming PC — and it never had an issue, from 2020 or 2021 until last month.

So I decided to move it to the living room and connect it to our big TV instead of the small TV — same LG manufacturer, same 4K res, mind you — and now it just freezes every 3-4 days. And by freeze I mean: the screen still shows whatever it was showing when it froze, no USB mouse or keyboard does anything, it can't be RDP'd to, it can't be pinged... holding down the power button is the only answer.

(I have swapped all the cables, just to be sure.)

The only differences: moved it 20 meters physically, connected it to a slightly newer TV. ¯\_(ಠ_ಠ)_/¯

macOS and Linux also do suck, but both are AFAICT way more predictable, and less random


TBH your problem sounds like a hardware issue. Maybe the PC's new location is warmer due to a more enclosed space, triggering more unrecoverable hardware faults.

I agree it sounds like that, and having had that same thought, I kept the living room at 20℃ or less for a week — but nah.

My best guess at this point is that the 2025 LG TVs have some different HDMI ARC something-something compared to the 2019 model it was plugged into before.

But also, my point is that there's no way a human with 3 kids and a job could ever know... it either starts working or I get a PlayStation or a different PC or whatever.

Or just tell my kids, "Hey, Death Stranding works on your Mac now, so shut the fuck up until you finish that whole game." ¯\_(ಠ_ಠ)_/¯


You could look into EDID settings, lots of weird quirks around that spec.

> macOS and Linux also do suck, but both are AFAICT way more predictable, and less random

macOS, maybe, as long as you're only using Apple hardware. As soon as you use 3rd-party peripherals, you're in for very interesting bugs that never get confirmed by Apple and then suddenly disappear with a macOS update (if you're lucky).


yeah — I have my kids on Macs, bc I'm lazy, but just the ones with only two USB ports and nothing else — otherwise it's a never-ending, unresolvable nightmare unless it's just some Apple thing you're plugging in

Vernor Vinge has some hits and some misses, but A Deepness in the Sky is one of the hits (best to just take the plunge and read it without googling — it's good either way, but better if you don't even read the back of the paperback).

Then, a bit further afield, but one that for me at least exercised what I liked about the Culture series, even though stylistically different: Spin by Robert Charles Wilson.


I think A Fire Upon the Deep would be a more enjoyable starting place for someone who likes the Culture series, even though A Deepness in the Sky is generally considered the better novel.

I can understand where you are coming from, but I myself am coming from a quite different place. I'm a long-time Deno fan, and to me Bun was less interesting because a.) it seemed like a much-less-ambitious Deno, and b.) I don't want to learn Zig, so I wasn't likely to try to hack on Bun itself, even just recreationally.

But, I warmed up to Bun over the last couple years almost against my own will — trying to maintain a pretty large body of TypeScript code in a runtime-agnostic way (including even Node, since 24.2). I don't want to make any specific TypeScript runtime a requirement for my TypeScript code, unless there are really good reasons to do so.

But Bun (like Deno) kept providing those reasons. Postgres, SQLite, S3, websockets, local secrets (Keychain/wallet), bundling, compilation, killer speed. So I (somewhat grudgingly) started using Bun more, and even made it a requirement for some of my projects (albeit, in ways I could walk back later if needed).
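Concretely, "runtime-agnostic" mostly means feature-detecting instead of importing runtime-specific globals; a toy sketch (the function name is hypothetical, not from my actual code):

```typescript
// Feature-detect the runtime instead of hard-depending on any one.
// `Bun` and `Deno` are ambient globals only inside their runtimes,
// so we probe via globalThis rather than importing anything.
export function runtimeName(): "bun" | "deno" | "node" {
  const g = globalThis as Record<string, unknown>;
  if (typeof g.Bun !== "undefined") return "bun";
  if (typeof g.Deno !== "undefined") return "deno";
  return "node";
}
```

Code written this way runs unchanged on all three, and the runtime-specific fast paths stay behind a single check.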

Today, I have a bunch of API servers and frontend app servers which are bun build --compile --bytecode single executables that can run and be deployed virtually anywhere.

I've been very happy with it so far. But also, I don’t think that the way I am doing it is super-common, and now that they are doing this, uh... extremely ambitious LLM port, I am perfectly positioned to regret all of my decisions around Bun if this port ends up sucking.

So I'm a little nervous, but... what if it doesn't suck? That would be cool, because a.) they will have shown something interesting about what is possible with LLMs (albeit only if you are a rounds-to-a-trillion-dollars-valuation frontier AI lab, lol, but still). And b.) going forward, Bun will be developed in Rust. We all have our own preferences, obviously, but to me, that's a win.

And if it does suck — that's super interesting too! It will be annoying to re-architect my Bun-specific shit for Deno, but for the world at large (and me, too) that's still interesting information!

Because Bun is perfectly positioned to do a huge LLM-powered port. They are one of the premier TS/JS runtimes, it's obviously an insane marketing pillar for the AI lab that bought them, they have unfathomable resources and access to the cutting-edge models that the rest of us don't get to play with yet, and for all intents and purposes, they have unlimited money to do this.

So if they can't do it — which will be really obvious, I think, if true — then it really just isn't possible yet, and all the naysayers were right.


>and to me Bun was less interesting because a.) it seemed like a much-less-ambitious Deno

I don't know; I've followed Deno, and it appeared incredibly low-ambition to me from the get-go.


lol — what you're saying doesn't make sense to me, but I'm sure it makes sense to somebody

What I was specifically referring to is Deno (originally) trying to fix the (glaring, fundamental) problems that Node imposes on the world, vs. just doing them faster.


Yes, but "fixing some fundamental Node problems" is a low bar, hardly a high mark of ambition, is it?

And to offer a counter example, something like Dart appeared much more ambitious to me.


I guess it depends on how you define ambition. If you are talking about in an absolute sense, yeah of course, the Dart project had to build a whole language, VM, and ecosystem. That's way more ambitious than Deno.

Though if you look relative to the team size and resources going into it, a project like Deno can still be considered ambitious. Creating an alternative ecosystem to nodejs is a large undertaking.


OK. But without changing programming languages, "fix some fundamental Node problems" vs. "don't fix those problems, just run them faster, and maybe inline the most popular dependencies"...

Surely we can agree that one of those positions is relatively less ambitious?


Well, it remains to be proven how they can make a business out of fixing Node's fundamental problems.

> what if it doesn't suck?

> And if it does suck

Why not both? How about this: perfectly fine for Anthropic, but sucks for everyone else.


well to me that would still count as "it sucks"

but sure anthropic might not agree


Is there much value in it being written in rust if it's all AI slop?

Well, "slop" is doing a lot of work there. If it's all incomprehensible garbage code that no human can understand? Then... yeah, very marginal value to me, in terms of hacking on it.

However, I think if it turns out that that's the case, then their port will fail in two ways (to paraphrase Hemingway): gradually, and then suddenly.

I don't think this port can be a success unless they end up — on the other side of it, not necessarily immediately — with maintainable Rust code.


if they succeed nothing will change for you

If they succeed, the software will be more reliable, with fewer memory issues, which are very likely significant security issues at least some of the time.

Given that we've seen Linux have a significant new exploit every other day now, thanks to LLMs getting better at weaponizing memory bugs, this seems significant.


No, and there's been a lot of confusion about that on this website.

They did cite Rust's safety as a motivating factor for the port. That doesn't imply trying to achieve that simultaneously with the language change — which is good, because that would be insane. (Or, if you prefer, even more insane.)

You cannot faithfully port a codebase to a new language while also radically re-architecting it. You have to choose.

They want the safety benefits of Rust going forward; i.e., after it's finished, when they then write new code in Rust.


Yeah, exactly. The typical approach is to do a mechanical translation, such as with c2rust, that is full of unsafe, and then gradually refactor safety in.
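To illustrate the shape of it (a toy example, not Bun's or c2rust's actual output): the mechanical translation keeps a C-style pointer-and-length contract, full of unsafe, and the later refactor replaces it with a compiler-checked slice.

```rust
// Toy illustration only: the flavor of a mechanical C-to-Rust
// translation vs. the same function after safety is refactored in.

// Mechanical translation: pointer and length travel separately,
// and the caller must uphold the validity contract.
unsafe fn sum_raw(ptr: *const i32, len: usize) -> i32 {
    let mut total = 0;
    for i in 0..len {
        // SAFETY: caller guarantees `ptr` points to `len` valid i32s.
        total += unsafe { *ptr.add(i) };
    }
    total
}

// Refactored version: a slice carries both pointer and length,
// so the compiler enforces the contract at every call site.
fn sum_safe(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    let raw = unsafe { sum_raw(data.as_ptr(), data.len()) };
    assert_eq!(raw, sum_safe(&data));
    println!("{}", raw); // prints 10
}
```

The two functions are behavior-identical, which is why this kind of refactor can happen one call site at a time, after the port itself is proven.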

But nobody makes announcements and blog posts about running that.

There are several blog posts here: https://www.memorysafety.org/initiative/av1/

And the first post is about the team working on the project, with about two and a half sentences on c2rust, and making it very clear they just started.

The newer posts go into detail about the rearchitecting that follows.


And indeed, the Bun team has not done that.

Did they not make the announcement? And they definitely promised a blog post, even if it's not out yet.

Not on their blog, website, or twitter, so no?

You have no idea if it was a lie or not. I routinely have my clanker fleet spend a couple days toiling on some crap that I assume I will throw away, but it turns out pretty awesome, so I keep it.

It's entirely plausible that when that comment was posted, he doubted it would work well enough to keep.

(Sensible default for LLM code, btw. But sometimes it works great.)


We have hundreds of projects that run on Bun. (Some are Bun-specific for whatever reason, but most are "runtime-agnostic TypeScript code that runs on Bun, Node 24.2+, and Deno", but that means they run their test suites on Bun, in addition to the other two.)

Out of curiosity, I installed the canary Bun and just ran a bunch of them. It didn't take me long to find one that works on stable Bun and crashes on "canary" Bun.

      schematic git:(main)  bun upgrade --canary
    [1.55s] Upgraded.
    
    Welcome to Bun's latest canary build!
    
    Report any bugs:
    
        https://github.com/oven-sh/bun/issues
    
    Changelog:
    
        https://github.com/oven-sh/bun/compare/0d9b296af...19d8ade2c
    
      schematic git:(main)  bun run main.ts serve
    Schematic Editor running at http://localhost:4200
    Bundled page in 25ms: src/web/index.html
    frontend TypeError: Cannot destructure property 'isLikelyComponentType' from null or undefined value
        at V0 (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:2534)
        at reactRefreshAccept (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:6090)
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:8766:27
        at CY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8973)
        at nY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:9285)
        (...more like this...)
        at m (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8773)
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6482
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6548
        from browser tab http://localhost:4200/
    ^C
      schematic git:(main)  bun upgrade --stable
    Downgrading from Bun 1.3.14-canary to Bun v1.3.14
    [2.02s] Upgraded.
    
    Welcome to Bun v1.3.14!
    
    What's new in Bun v1.3.14:
    
        https://bun.com/blog/release-notes/bun-v1.3.14
    
    Report any bugs:
    
        https://github.com/oven-sh/bun/issues
    
    Commit log:
    
        https://github.com/oven-sh/bun/compare/bun-v1.3.14...bun-v1.3.14
      schematic git:(main)  bun run main.ts serve
    Schematic Editor running at http://localhost:4200
    [browser] Version mismatch, hard-reloading
    Bundled page in 20ms: src/web/index.html
    
    # working fine as usual... ¯\_(ಠ_ಠ)_/¯
I mean, "passes the test suite" is one thing. And a good thing. But... "doesn't break any (or even, say, 99.5%) of the apps deployed around the world that are built on Bun" is a pretty radically different thing.

It's hard to feel like this is responsible behavior, but I will reserve judgement for now, and see how long they persist in this "canary" phase.

If they extend it for a lengthy period, and even like, fix bugs on the Zig version and the Rust "canary" version, then... I would be mollified to a great extent, since it is so easy to switch between the Zig stable version and the Rust canary version.

As a pretty heavy user of Bun, I'm actually pretty psyched for it to switch to Rust... but given the abruptness and speed so far, I can't quite shake the "new AI dealer getting high on his own supply" vibe.

But I hope they enter an intensive phase of prioritizing any and all "canary" bugs, and come out on the other side with a better product, and an even faster rate of improvement (which has honestly been pretty wild already).

(Yes, of course, I will have my clanker file a bug report with repro... but that may take a few days.)


This bug was already reported very soon after the merge.

It's also a recipe for failure for ports in general. Same goes for the "not idiomatic Rust" comments above — that would be nonsense.

You want to port it as faithfully as possible to the original, porting it bug-for-bug, quirk-for-quirk. Then, over time, after the port has been proven to be as identical to the original as possible, you can gradually fix those kinds of internals.

That's why TypeScript's tsgo native port is so good.


tsgo will inherit many benefits from Go, even if it is never fully "idiomatic".

This is in direct contrast to this port, which requires significant re-architecting (or made "idiomatic", if you wish) in rust to achieve any of the benefits of the language. You can't re-architect one step at a time.


I don't think you want to achieve any benefits of Rust in the initial port. Because at this scale you will definitely introduce new, and probably subtle, bugs that are not present in the Zig version.

You just want it to be the same, to the maximum extent the language allows. E.g. 1000+ unsafe is the right move, for now.

Reaping the benefits of Rust is for _future_ development.


That's my point: I don't see any hope of removing the 10,000+ unsafe calls, especially not one step at a time.

As such, this is a publicity stunt.


They could, but maybe they never will. I have no idea.

But the point is, in 2027, 2028... your new code doesn't have to suffer from these frankly 1970s issues.

You could also gradually fix the internals — if you wanted to.


The irony being that machine translation of code between languages also dates from the 1970s.

By having Codex port Deno to Zig, you mean?

This is funny, but Copilot is still an interesting case-study and (probably) failed predictor of where we are headed.

We all know, and have known for a long time, that the AI labs selling dollars for a nickel are going to pull that rug, and up that price, at some point.

Copilot, though, has been consistently the weakest mainstream AI coding offering. Inferior to Cursor or Windsurf at editor completions, inferior to Codex, Claude, OpenCode, blah blah blah, at agentic coding and also the old-school chat-style...

And now it's no longer cheap AND it sucks even more than it did all along — the new $39/month plan is not only worse than all its competitors, but worse than its own $10 plan was a month ago — by a lot.

The thing is, you can't jack the price up unless you're good enough — at least on some axis, to some customer segment — to justify it. And when you're not good enough, and you have vastly superior competitors who are not doing that yet... you're just forfeiting the game.

Which, I agree, Copilot should do — it's the Windows Phone of AI coding assistants, after all — but it still seems weird to me to just commit humiliating suicide rather than trying to make some deal with one of those superior competitors.

Instead of just jumping into a dumpster and lighting yourself on fire.


I suspect Microsoft will renege if enough people cancel.

Even before yesterday, I assumed they made money via the gym model. I'd have months where I'm too busy to use my Copilot subscription in any meaningful way.

Canceling and restarting is too much of a hassle.

But with the pricing update I'd probably use up the $10 plan within 3 days.

I don't know if anything else is integrated so well into GitHub, though. I might keep the $10 plan just for the occasional GitHub AI PR.


If their pricing turns out to be what they claim, and the Copilot CLI has accurate token counts, they had the best deal around.

Just today, when I wasn't being especially chatty with GHCP, I used about 12 requests to get a few thousand lines of changes across 3 projects I'm juggling. The last Copilot project repo I closed had, in 3 hours, burned 38M input tokens, 28M cached, and around 400K out. For GPT5.4, high. That's like $135 in half a day, for 1 of 3 instances. No crazy tool use, just lots of docs and unorganized code. GHCP charged like 70 cents for that on the old plan.


> it's the Windows Phone of AI coding assistants, after all

It seems everything Microsoft does is like this nowadays. They just can't seem to win anymore.


Microslop lost their way after their ole acquisition investments and have instead placed a bet on vibing their way into other industries.


It's at least considerate of them to jump into the dumpster first, less of a mess to deal with.


turns out it was spelled "lusage" the whole time

