A helper tool that I can ask a question and which responds with relevant information gleaned from the vast collection of human-gathered knowledge and experience would be fantastic.
What we have instead is something that often gets things mostly right, if you don't look too hard at it. And the poisoned output of this thing seeps back into the knowledge pool, reducing its accuracy and therefore usefulness.
The problem with LLMs is the dissolution of human knowledge into a sea of slop.