
Hard for me to reconcile the idea that they don't have enough compute with the idea that they are also losing money to subsidies.



- Not enough compute for the requests they have

- Selling those requests for less money than the compute to serve them costs (because if you raise prices, clients go to OpenAI)

The statements don't contradict each other. They keep subsidizing to try to grow the customer base, but they can't serve the customer base they already have. They're betting that the customer base grows faster than it shrinks from people annoyed by rate limits (it probably will — the average user won't hit rate limits often enough to switch).
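The subsidy argument above is just unit economics: if each request is priced below its compute cost, every extra request deepens the loss. A toy sketch, with all numbers invented for illustration (not actual provider figures):

```python
# Toy unit-economics sketch of "subsidized" requests.
# All figures are hypothetical placeholders, not real provider numbers.
price_per_1k_requests = 10  # hypothetical revenue per 1,000 requests ($)
cost_per_1k_requests = 12   # hypothetical compute cost per 1,000 requests ($)

# Negative margin: growth in request volume scales the loss, not profit.
margin_per_1k = price_per_1k_requests - cost_per_1k_requests
print(margin_per_1k)  # -2, i.e. losing $2 per 1,000 requests served
```

Under these (made-up) numbers, growing the customer base only helps if prices rise or per-request compute cost falls before the cash runs out.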

They're probably expecting either a breakthrough in compute efficiency, or enough cash flow (IPO?) to buy more compute before it all comes crashing down.


they clearly aren't losing money, I don't understand why people think this is true

People think it's true because it is true, and OpenAI has told us themselves.

They (very optimistically) say they'll be profitable in 2030.


They're saying Anthropic doesn't have enough compute, not OpenAI. They said OpenAI specifically invested early in compute at a loss.

They are losing money because training the models costs billions.

Model inference compute over a model's lifetime is now ~10x its training compute for major providers, and that ratio is expected to climb as demand for AI inference rises.

For sure, and growth also costs money: buying data centers, etc.

They are constantly training new models and retiring older ones; they are losing money.

Which part of "over model lifetime" did you not understand?

That's not a sufficient condition for profitability if both inference and scaling costs continue to increase over time.



