• 2 Posts
  • 6 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • The concern with LLMs as any sort of source of truth is that they have no concept of facts or truth. They simply read training material and then pattern match to come up with a response to input. There is no concept of correct information. And unless you fact check it, you will not know whether it is correct or its reasoning is sound. Using this to teach is dangerous IMO. Using the word reasoning is anthropomorphising it too; it’s just pattern matching.

    Could we develop some adversarial system that fact checks it in the future? Possibly. But I don’t know of one that’s effective. Besides, good luck determining what is true when your training set is the internet. Or having it account for advances in understanding.
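
    To make that idea concrete, here’s a rough sketch of what a generate-then-verify loop could look like. Everything in it is made up for illustration (generate_answer and verify_claim are hypothetical stand-ins, not any real model or fact-checking API); the hard part is that verify_claim needs an independent source of truth, which is exactly what the LLM can’t provide on its own.

    ```python
    # Illustrative only: a toy generate-then-verify loop.
    # generate_answer and verify_claim are hypothetical placeholders,
    # not calls to any real model or fact-checking service.

    def generate_answer(question: str) -> str:
        # Stand-in for a generator LLM producing a fluent-sounding answer.
        return "The Eiffel Tower is 330 metres tall."

    def verify_claim(claim: str) -> bool:
        # Stand-in for an independent checker. In reality, building a
        # trustworthy reference to check against is the unsolved problem.
        trusted_facts = {"The Eiffel Tower is 330 metres tall."}
        return claim in trusted_facts

    def answer_with_check(question: str) -> str:
        answer = generate_answer(question)
        if verify_claim(answer):
            return answer
        return "Could not verify an answer for: " + question

    if __name__ == "__main__":
        print(answer_with_check("How tall is the Eiffel Tower?"))
    ```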

    From the article you linked:

    The incredible capabilities of large language models like ChatGPT are centered on how they have been trained on a vast corpus of knowledge. They provide us with an unparalleled resource for information and guidance. As your virtual professor, LLMs can guide you through the intricacies of each subject for deeper understanding of complex concepts.

    That’s a very naive take on LLMs. It assumes that because the training material is valid, its output is valid. It is not!

    I worry about a future where LLMs become the basis of information exchange just because their outputs “look right”.

    Show me a system that can guarantee correct answers and I’m 100% on board.