China already operates like this. Low-cost specialized models are the name of the game: cheaper to train, easier to deploy.

The US has a problem of too much money leading to wasteful spending.

If we go back to the '80s/'90s, remember OS/2 vs Windows: OS/2 had more resources, more money behind it, and more developers, and the result was a bigger system that took more resources to run.

Mac vs Lisa. The Mac team had constraints; the Lisa team didn't.

Unlimited budgets are dangerous.


Though I do agree with you, I just came back from a trip to China (Shanghai, specifically), and at the couple of AI events I attended, the overwhelming majority of people were using VPNs to access Claude Code and Codex :-/

Parent's point was about deployment, not agentic coding.

On Mac vs Lisa, I generally agree, but wasn't there also strong tension over budget vs revenue between the Mac and Apple II teams? The Apple II had an even more constrained budget per machine sold, which fueled the conflict between the two teams. (Apple II team: "We bring in all the revenue and profit, we offer color monitors, we serve businesses and schools at scale. Meanwhile, Steve's Mac pirate ship is a money pit that also mocks us as the boring Navy establishment when we are all one company!")

By the logic of constraints (on a per-unit basis), the Apple II should have continued to dominate Mac sales through the early '90s, but the opposite happened.


Perhaps it's because American hyperscalers want unlimited upside for their capital?

Betting that hardware will not evolve to exceed the performance requirements of today's software has historically been a very bad bet, just as betting that someone will rewrite today's software to be slower tomorrow is a bad bet.

Eh, but as hardware evolves, the software follows suit. We've had an explosion of compute performance, and yet software crawls through the same tasks we did a decade ago.

Better hardware ensures that software that is “finished” today will run at acceptable levels of performance in the future, and nothing more.

I think we won't see software performance improve until real constraints are put on the teams writing it, and until leaders prioritize performance as a North Star for their product roadmaps. Good luck selling that to VCs, though.


> Low cost specialized models

Can you elaborate on this? Is this something that companies would train themselves?


You can fine-tune a model, but there are also smaller models fine-tuned for specific work like structured output and tool calling. You can build automated workflows that are largely deterministic and only slot in these models where you specifically need an LLM to do a bit of inference. If frontier models are a sledgehammer, this approach is the scalpel.
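
A minimal sketch of what that looks like in Python, assuming an OpenAI-compatible endpoint serving a small local model (the base_url, model name, and task here are placeholders, not any particular product):

    import json
    from openai import OpenAI

    # Hypothetical local endpoint (e.g. vLLM or Ollama in OpenAI-compatible mode).
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    def extract_invoice_total(email_body: str) -> float | None:
        # Deterministic pre-filter: skip the model for obvious non-invoices.
        if "invoice" not in email_body.lower():
            return None
        # The single LLM slot: a small model asked only for structured output.
        resp = client.chat.completions.create(
            model="small-extractor",  # placeholder for a fine-tuned small model
            messages=[
                {"role": "system",
                 "content": 'Return JSON of the form {"total": <number>} '
                            "with the invoice total."},
                {"role": "user", "content": email_body},
            ],
            temperature=0,
        )
        # Deterministic post-processing: validate the output in plain code.
        try:
            return float(json.loads(resp.choices[0].message.content)["total"])
        except (json.JSONDecodeError, KeyError, TypeError, ValueError):
            return None

Everything around the one model call stays deterministic, testable code; the LLM only sees the narrow step that genuinely needs inference.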

A common example: people are moving simple tasks in their OpenClaw setups, like tagging emails or summarizing articles, off expensive Anthropic APIs and onto cheaper models.

Combined with memory systems, internal APIs, or just good documentation, these smaller models can handle a lot of tasks that don't actually require much compute.
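
A minimal sketch of that routing idea in Python (the task names, model names, and default policy are all hypothetical):

    # Route each task type to the cheapest model that can handle it.
    CHEAP, FRONTIER = "small-local-model", "frontier-api-model"

    ROUTES = {
        "tag_email": CHEAP,
        "summarize_article": CHEAP,
        "refactor_codebase": FRONTIER,  # genuinely hard: pay for the big model
    }

    def pick_model(task_type: str) -> str:
        # Unknown tasks default to the cheap model; escalate only if it fails.
        return ROUTES.get(task_type, CHEAP)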



