
> the proof is that LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions

> Sometimes the interpolated detail is wrong (and indeterministic)

... You consider incorrect, non-deterministic results to be "reliable"?



Do you consider the implementation of such specs by another human to (always) be correct and deterministic?

Heck, if I reimplement something I worked on a month ago, it's probably not going to be exactly the same. Non-determinism needn't be a problem, as long as the output falls within certain boundaries and produces working results.
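The "within certain boundaries" point can be sketched concretely: two structurally different implementations of the same small spec can both pass the same tests. (The spec and both implementations below are hypothetical examples, not anything from the thread.)

```python
# Spec: return the unique items of a list, preserving first-occurrence order.
# Two different implementations -- analogous to two non-identical outputs
# for the same prompt -- both satisfy the same acceptance tests.

def dedupe_v1(items):
    # Explicit loop with a membership set.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def dedupe_v2(items):
    # dict keys preserve insertion order in Python 3.7+.
    return list(dict.fromkeys(items))

# The implementations differ, but both fall within the spec's boundaries:
# the same tests accept either one.
for impl in (dedupe_v1, dedupe_v2):
    assert impl([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert impl([]) == []
```

In that sense "non-deterministic but working" just means: any output the process produces still lands inside the set of implementations the tests accept.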



