I think he's saying that during interviews the candidates were being asked to dive deep into their preceding employers' tech stacks. Which does seem to be asking them to tread in dicey legal waters in a coercive situation.
I see. I've always struggled with this. I think a design interview on hypotheticals is better. Or "have you used X?" with follow-up questions about X. It's probably OK to say "we used Kubernetes," but not OK to describe the inner workings of a custom controller that speeds up their workloads, even if the candidate wrote the code.
Meta could turn WhatsApp into the next Slack. I know a lot of businesses (especially international ones) that use it for team communication. It's so much better than Teams.
I guess they think it's a small market, or maybe you can't really monetize enterprise with ads and it's all they know how to do.
Sorry, are all of those model 3 and Y vehicles robotaxis?
Or are you saying that because they produced 1.5 million non-robotaxi cars in 2025 that the estimate of producing 1 million robotaxis in the following year is pretty reasonable, because making them autonomous taxis is a minor feature bump...?
No, I'm saying that the original content is low-effort shitposting, and that Tesla has the ability to scale industrial production to over 1mm 'things' per year, as evidenced by production last year. I did the OP the mild courtesy of asking him to open up a useful conversation. For instance, "Is there going to be demand for 1mm robots, and if so, when?" Or "How much actual retooling is necessary in Fremont for this?" Both seem like useful and interesting things to talk about.
I think Tesla's issue is that they need the AI5 chip for robotaxi ops; the current chip just doesn't cut it. So if they have batches by the end of 2026 and start optimizing the models, by mid-2027 volume production you might have robotaxis coming online at about 100k per month. Waymo currently has fewer than 10k cars on the road.
Lots of ifs here. If they can enable hardware 4 for robotaxi ops then they can have 3m+ cars ready to go. But I am skeptical of it. And given that Elon’s top priority is scaling chips and AI5, I think that is proof that he thinks it is likely necessary too.
So 1m robotaxis by end of 2026 is theoretically possible but I think unlikely, and it’s more likely in the 200k-1m by end of 2027. If they pull that off, they could still be largest by then if Waymo doesn’t rapidly scale. Fun times!
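A quick back-of-envelope check of that ramp (all inputs are the hypothetical figures from the comments above, not verified production numbers):

```python
# Hypothetical robotaxi ramp: volume production starting mid-2027
# at the 100k/month rate suggested above.
rate_per_month = 100_000          # assumed production rate (commenter's figure)
months_in_2027 = 6                # July through December 2027
total_by_end_2027 = rate_per_month * months_in_2027
print(total_by_end_2027)          # lands inside the 200k-1m range estimated above
```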
My understanding is superficial, so do knock it down, but it seems to me that Tesla insists on vision-only self-driving, which vastly increases the ML requirement. Whereas Waymo has a lower total technology requirement by using both lidar and vision, and has moved faster. So when you say "Tesla needs the AI5 chip," I hear the rider "...to avoid a public volte-face."
I suppose that bulky lidar modules are undesirable in premium consumer goods, but I don't see that downside for taxis.
It might be that the Waymo sensor suite was key to launch; I don't have any actual knowledge. My impression though was that they basically wired in a large call center for years first to make decisions for the fleet, and have slowly narrowed the scope of those decisions. Elon definitely wasn't interested in staffing large call centers.
I also believe that Waymo relied on much more intensive mapping than Tesla does/did -- so you could imagine two really different graphs -- Waymo's quality and deployment starts higher, but it is perhaps capped against places they're willing to do the scanning, and by their labor Capex. They will be racing to lower the scanning requirements and lower the labor requirements. Meanwhile Tesla looks worse for a long time because they've bet on getting tech together for an 'everywhere' launch, and it's a J curve around quality -- useless at 3 9s, and very, very useful at 5 9s. If those are the actual dynamics, figuring out who will be the 'winner' needs the following strategy assessment:
1) A take on whether or not robotaxis are 'winner take all/most' (I propose they are not, switching cost for consumers is super low)
2) If you think both companies will get to 'good enough' a take on capital dynamics for at-scale launch (I think Tesla wins here, because Elon will rely on owner's capital for at-scale launch, or at least can if he wants to, while it seems very unlikely that Waymo will start selling their cars to individual operators at scale in a timely fashion)
3) An organizational assessment - if we assume that vision only ML will eventually work at all for 5 9s, can Waymo 'trim down' their data and labor stack faster than Tesla can scale up their vision-only ML?
Upshot - I wouldn't bet against Tesla being the dominant robotaxi in ten years. But I would be very surprised if it matters very much or they were the only one - eventually the stack will get commoditized. Tesla's solved almost all the hard problems of getting most of these on the road, except for that last 9 of reliability -- you'd have to really hate Elon to think they won't get there at all with the AI resources between SX/xAI and Tesla available.
Dynamics can make on the order of 25k robots a year though. Not enough to matter in a GDP sense. There is one US company that can currently scale this kind of manufacturing. So to my mind the question is: can Tesla ever get there on tech, and if so, can they be first to scale to a million units? You don't need them to have the best robot now. Or ever, really, if they're the first to scale.
Loudoun County in Virginia generates $1 billion in property tax revenues from data centers.
It funds half of all of their expenditures.
Can you imagine having half of your total municipal government budget being paid for by data centers?
Their citizens pay much lower property tax rates, and get much better schooling and police.
Henrico County (also VA) took $60 million in unexpected new revenues from data centers and created an affordable housing trust that is subsidizing low-cost housing.
Although these counties are figuring it out, it's an incredible failure of imagination for liberals in other states to look at an immense source of new funding that could support schools, housing, and health, and spurn it because they heard from a friend of a friend that data centers consume a lot of water, based on a discredited book with elementary math errors.
They're an anomaly that benefits from a number of factors: proximity to the federal government for contracting, early data centers that were built there and tended to congregate, and dumb luck.
They’re an outlier and don’t really prove much of anything.
Oregon has lots and lots of data centers and not much to show for it on any front, other than higher electric prices for consumers.
Oregon gave a lot of time-limited property tax breaks. They also don't have a sales tax.
So I would agree that giving away the #1 way that data centers contribute to the government isn't optimal, though you could argue it's a long-term play.
As the tax break terms expire, Oregon will get $450 million in annual property taxes from the data centers, or about 1.4% of the state budget.
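Spelling out what those two figures imply together (using only the numbers in the comment, which I haven't verified against Oregon's actual budget):

```python
# Implied state budget size from the figures above:
# $450M in annual data-center property taxes said to be ~1.4% of the budget.
annual_dc_tax = 450e6
budget_share = 0.014
implied_budget = annual_dc_tax / budget_share
print(f"implied annual state budget: ${implied_budget / 1e9:.1f}B")
```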
Hopefully they don't end up with a "Digital Detroit" when datacenters start closing.
Though even if the AI market collapses, the capital spent means they'd probably keep operating; paying for 30 employees is much different than paying for 3,000 at a factory. But the datacenter might be owned by the creditors at that time.
> that they consume a lot of water based on a discredited book with elementary math errors.
How exactly do you think they dissipate the heat of a continuous 100 MW or 1 GW power draw? I have no idea what book you're referring to, but you can do the math yourself; it's quite straightforward.
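For a rough sense of that math: if the full load were rejected by evaporative cooling (an upper-bound assumption; many sites use closed-loop or air cooling instead), the water rate follows from the latent heat of vaporization:

```python
# Upper-bound water use for a 100 MW data center, assuming ALL heat is
# rejected by evaporating water. Real facilities vary widely by cooling design.
power_w = 100e6                   # 100 MW continuous draw, all ending up as heat
latent_heat = 2.26e6              # J/kg, latent heat of vaporization of water
kg_per_sec = power_w / latent_heat
m3_per_day = kg_per_sec * 86_400 / 1000    # 1 m^3 of water = 1000 kg
print(f"{kg_per_sec:.1f} kg/s, or about {m3_per_day:,.0f} m^3/day")
```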
Basically, the author (of the book) compares a data center outside Santiago to usage of water by humans, erroneously imputing that the average human uses only 200 cc of water per day.
Perhaps part of the problem here is that most towns that have proposals for AI data centres (including my own) have the developers demanding 10 year tax abatements, so we aren't going to see any of that tax revenue.
It's surprising to me how much LLM "personality" seems to matter to people, more than actual capability.
I do turn to Anthropic for ideation and non-tech things. But I find little reason to use it over codex for engineering tasks. Sometimes for planning, but even there, 5.4 is more critical of my questionable ideas, and will often come up with simpler ways to do things (especially when prompted), which I appreciate.
And I don't do hard-tech things! I've chosen a b2b field where I can provide competent products for a niche that is underserved and where long term relationships matter, simply because I'm not some brilliant engineer who can completely reinvent how something is done. I'm not writing kernels or complex ML stacks. So I don't really understand what everyone is building where they don't see the limits of Opus. Maybe small greenfield projects with few users.
> I'm not some brilliant engineer who can completely reinvent how something is done
With an honest evaluation of your own capabilities, you are already far above average. Also, it's hard to see the insane amount of work that was often necessary to invent the brilliant stuff, and most people cannot shit that out consistently.
> It's surprising to me how much LLM "personality" seems to matter to people, more than actual capability.
> I do turn to Anthropic for ideation and non-tech things. But I find little reason to use it over codex for engineering tasks. Sometimes for planning, but even there, 5.4 is more critical of my questionable ideas, and will often come up with simpler ways to do things (especially when prompted), which I appreciate.
Aren't you saying here that the LLM personality matters to you, too? Being critical of you is a personality attribute, not a capabilities one.
Not necessarily. Criticism is the analysis, evaluation, or judgment of the qualities of something. This is a matter of intellectual act. However, you could say that being habitually critical can be partly a result of "personality" or temperament.
(Of course, strictly speaking, LLMs have neither temperament, "personality", nor intellect, but we understand these terms are used in an analogical or figurative fashion.)
It's not that expensive. The Starlink Mini is around $200, and service is $50/mo for 100 GB.
I've been somewhat skeptical of the addressable market (doesn't fiber + cell tower network offer good enough coverage?) but I know so many people who have put it on their RV, their boat, or are using it rurally that I've started changing my mind. And the service really is better than cell phone networks, which are far too patchy to provide reliable service at decent speed.
And you can put it on standby mode for $5/mo, so you're not even really locked into $50/mo if you're occasionally doing travel where you want to stay connected.
And in places like Africa, they've had to tightly rate limit new customers because demand is so high.
Yeah, as an RVer, I can tell you that you would probably be surprised by how much of the country does not have readily available cell service. And even if it does, they might not have it on your network.
I was paying more to have SIM cards for all of the big three, and getting much less out of it.
The markets are additive. The great thing about Starlink is that it is GLOBAL. Meaning if you want to offer it for ships and planes (where there are no alternatives), you might as well also offer it to RVs. And to rural people. And to the military. And you can do so in every country on the whole planet at the same time.
Having a few thousand sats to cover the whole planet is crazy efficient.
If you look at just the satellites, the build + launch costs are about $2.5M ea, which is impressive to be sure. But they only last 5 years, so that's $500k per year replacement costs. Then if you look at their capacity, they still can't meet their FCC / RDOF broadband designation speeds, but let's be generous and say they can serve 1000 simultaneous users per satellite (their current ratio, let's say it's good enough, incl. oversubscription ratio). So that already means 50%-100% of the entire monthly Internet bill from a consumer is going to just be replacing satellites. Let alone everything else to be an ISP.
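Spelling out the arithmetic in that comment (all inputs are the commenter's estimates, not verified SpaceX figures):

```python
# Per-subscriber satellite replacement cost, using the estimates above.
sat_cost = 2.5e6          # build + launch cost per satellite (estimate)
lifetime_years = 5        # assumed satellite lifespan
users_per_sat = 1000      # "generous" simultaneous users per satellite
annual_replacement = sat_cost / lifetime_years        # $500k/yr per satellite
monthly_cost_per_user = annual_replacement / users_per_sat / 12
print(f"${monthly_cost_per_user:.2f} per user per month just for replacement")
```

Against a $40-80/mo bill, that is indeed roughly 50-100% of revenue going to satellite replacement alone, under these assumptions.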
This is very basic math. They need to launch more satellites if they want to hit their RDOF throughput goals and serve customers in the remaining areas. The most valuable extra-rural areas were low-hanging fruit and are already drying up. The future addressable market is denser, more competitive suburban areas, which further limits the number of users per satellite because everyone shares the same spot-beam spectrum.
But as you know well (given your personal connections to SpaceX, it seems, as you always defend them on HN), Starlink is about Golden Dome, not consumer internet, so the private markets will fund it.
Yes and unless you're paying Starlink say $300/mo, they are taking a loss to serve you internet. Cities are especially difficult for them because more users are in the same spot beam so everyone shares the spectrum and they need even lower oversubscription ratios.
Yeah I don't know about the math. I've seen numbers that differ significantly from yours, but none which make it profitable at a reasonable price. I am sure he will continue to drop launch costs and I assume satellite improvements will make them able to serve more people, maybe orbit longer as they get smaller.
Complete nonsense. They didn't start in 2015, and didn't get Google's investment into Starlink, because hopefully some president would want Golden Dome in the future. Starlink is a good business and has plenty of military value without Golden Dome.
100 Mbps down / 15-35 Mbps up, unlimited data, includes hardware rental: €29/month in Europe, $39/month in the US.
200 Mbps down / 15-35 Mbps up, unlimited data, includes hardware rental: €49/month in Europe, $69/month in the US.
400+ Mbps down / 20-40 Mbps up (QoS higher priority), unlimited data, includes hardware rental: €69/month in Europe, $109/month in the US.
A good high-speed fiber connection is obviously better quality and value; but if you don't have one, then Starlink is absolutely the most competitive option you're going to get.
I don't have a lot of data points, but in metropolitan France at least I think you would always be better off with either a fiber or a 5G subscription, because it will be cheaper for more throughput, and because fiber is very widespread.
In Germany I think you are still better off with a cable subscription, which also seems to be widespread in my experience and is cheaper than Starlink, even if it's not as good as the French deals (for fairness I only take into account offers without a contract, but if you don't mind one you may be able to get even cheaper offers).
It's odd because I no longer really like ChatGPT. For chat-type requests, I prefer Claude, or if it's knowledge-intensive then Gemini 3 Pro (which is better for history, old novels, etc).
But GPT 5.3 Codex is great. Significantly better than Opus, in the TUI coding agent.
I don't know about Opus, but Codex suddenly got a lot better to the point that I prefer it over Sonnet 4.6. Claude takes ages and comes up with half baked solutions. Codex is so fast that I miss waiting. It also writes tests without prompting.
Why not? It's a physical building with lots of equipment that produces products shipped to its customers.
Its products are sequences of electrons, instead of atoms. But so are power plants. And in the context of what happens when they're hit by missiles, a factory, data center, and power plant all behave the same.