I used to think typos meant that the author (and/or editor) hadn’t checked what they wrote, so the article was likely poor quality and less trustworthy. Now I’m reassured that it’s a human behind it and not a glorified word-prediction algorithm.
This is not true. You do not know all the options that exist, or how they really work. I do. I am only using open source offline AI; I do not use anything proprietary. All of the LLMs are just a complex system of categories combined with a network that calculates what word should come next. Everything else is external to the model. The model itself is nothing like an artificial general intelligence. It has no persistent memory. The only thing it actually does is predict what word should come next.
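To make "predict what word should come next" concrete, here is a minimal toy sketch. It is not how any real LLM is built (real models use learned neural networks over tokens, not word-pair counts, and the corpus and function names here are invented for illustration), but the output step is the same idea: produce a likely next word, nothing more, with no memory beyond the context it is shown.

```python
# Toy "next-word predictor": count word-pair frequencies in a tiny corpus,
# then repeatedly pick the most frequent follower. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram counts: for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a few words greedily; the "model" keeps no memory beyond
# the single previous word it is handed each step.
word, out = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))  # -> "the cat sat on the cat"
```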
Do you always remember things exactly as they were? Or do you remember an abstraction of them?
You also don’t need to know everything about something to be able to assess its risks and possibilities, btw.