Say it with me again now:
For fact-based applications, the amount of work required to develop and subsequently babysit the LLM to ensure it is always producing accurate output is exactly the same as doing the work yourself in the first place.
Always, always, always. This is a mathematical law. It doesn’t matter how much you whine or argue, or cite anecdotes about how you totally got ChatGPT or Copilot to generate you some working code that one time. The LLM does not actually have comprehension of its input or output. It doesn’t have comprehension, period. It cannot know when it is wrong. It can’t actually know anything.
Sure, very sophisticated LLMs might get it right some of the time, or even a lot of the time for very specific topics with very good training data. But its accuracy cannot be guaranteed unless you fact-check 100% of its output.
Underpaid employees were asked to feed published articles from other news services into generative AI tools and spit out paraphrased versions. The team was soon using AI to churn out thousands of articles a day, most of which were never fact-checked by a person. Eventually, per the NYT, the website’s AI tools randomly started assigning employees’ names to AI-generated articles they never touched.
Yep, that right there. I could have called that before they even started. The shit really hits the fan when the computer is inevitably capable of spouting bullshit far faster than humans are able to review and debunk its output, and that’s only if anyone is actually watching and has their hand on the off switch. Of course, the end goal of these schemes is to be able to fire as much of the human staff as possible, so it ultimately winds up that there is nobody left to actually do the review. And whatever emaciated remains of management are left don’t actually understand how the machine works nor how its output is generated.
Yeah, I see no flaws in this plan… Carry the fuck on, idiots.
Did you enjoy humans spouting bullshit faster than humans can debunk it? Well, brace for impact because here comes machine-generated bullshit! Wooooeee’refucked! 🥳
To err is human. But to really fuck up, you need a computer.
A human can only do bad or dumb things so quickly.
A human writing code can do bad or dumb things at scale, as well as orders of magnitude more quickly.
And untangling that clusterfuck can be damn near impossible.
The reaper may not present his bill immediately, but he will always present his bill eventually. This is a zero-sum thing: there is no net savings, because the work required can be front-loaded or back-loaded, and you, sitting there at the terminal in the present, might not know which. Yet.
There are three phases where time and effort are input, and wherein asses can be bitten either preemptively or after the fact:
1. Loading the algorithm with all the data. Where did all that data come from? In the case of LLMs, it came from an infinite number of monkeys typing on an infinite number of keyboards. That is, us. The system is front-loaded with all of this time and effort – stolen, in most cases. Add to that the time and effort spent by those developing the system and loading it with said data.
2. At execution time. This is the classic example, i.e. the algorithm spits something patently absurd right into your face. We all point and laugh, and a screenshot gets posted to Lemmy. “Look, Google says you should put glue on your pizza!” Etc.
3. Lurking horrors. You find out about the problem later. Much later. After the piece went to print, or the code went into production. “Time and effort were saved” producing the article or writing the code. Yes, they appeared to be – then. Now it’s now. Significant expenditure must be made cleaning up the mess. Nobody actually understood the code, but now it has to be debugged. And somebody has to pay the lawyers.
Your statement is technically true but wrong in practice, because your statement applies to EVERYTHING on the Internet. We had tons of error-ridden garbage articles written by underpaid interns long before AI.
And no, fact-checking is quicker than writing something from scratch. Just like verifying Wikipedia sources is quicker than writing a Wikipedia article.
For something created by a human - yes. For something created by a text generator - hell no.
Can you elaborate on that?
For example, in code, machine errors are sometimes much harder to detect or diagnose because they are nothing like what a human would do. I would expect the same in text: everything looks correct, because that’s what it is designed to produce. Except in code you have a much higher chance of quickly finding out that there is an error somewhere, while with text you don’t even get a warning that you need to start looking for errors.
A-MEN. Well put. I wouldn’t use so many words, though, I’d just settle for “Fuck LLMs and fuck the dipshits who label it AI or think it has anything to do with AI.”
The cost, however, is not the same. I can totally see the occasional lawsuit as the cost of doing business for a company that employs AI.
This is almost certainly what we’re looking at here. It’s the Ford Pinto for the modern age. “So what if a few people get blown up/defamed? Paying for that will cost less than what we made, so we’re still in the black.” Yeah, that’s grand.
Further, generative “AIs” and language models like these are fine when used for noncritical purposes where the veracity of the output is not a requirement. Dall-E is an excellent example: all it’s doing is making varying levels of abstract art, and provided nobody is stupid enough to mistake what it spits out for an actual photograph documenting evidence of something, it doesn’t matter. Or, “Write me a poem about crows.” Who cares if it files crows in the wrong taxonomy, as long as the poem sounds nice.
Facts and LLMs don’t mix, though.
While that works for “news agencies”, it’s a free money glitch for the consumer when it’s used in a customer support role.
Edit: clarification
Pretty sure an airline was forced to pay out on a fake policy that one of their support bots spouted.
Okay, yes, I agree with you fully, but you can’t just say it’s a mathematical law without proof; that’s something you need to back up with numbers, and I don’t think “work” is quantifiable.
Again, yes, they need to slow down, but I have an issue with your claim unless you’re going to back it up. Otherwise you’re just a crazy dude standing on a soapbox.
I can see how it might be seen as easier to correct/critique than to produce the original work. This is actually true, the same way it’s easier to iterate on something than to create the thing wholesale.
I definitely find it easier to extend or elaborate on something “old” than to crap out a new thing, although I can see how that is not always the case if it’s too “legacy”. ChatGPT is intriguing because it can arguably generate many of the parts modularly; you would just need to glue them together properly and ensure all the outputs are cohesive and coherent.
For example: if you’re a lawyer and you generate anything, you must at the very least:
Read, not dictate
Ensure all caselaw cited
a) definitely exists and
b) is relevant to the facts and arguments it is being used to support
I think it’s worse than that. The work is about the same. The skill and pay for that work? Lower.
Why pay 10 experienced journalists when you can pay 10 expendable fact checkers who just need to run some facts/numbers by a Wikipedia page?
You will only guarantee what you answer for.
Since they have the power to make it so, they own the good part and disown the bad part.
It’s warfare logic: the collateral damage of a FAB-1500 is high, but it makes even the imps in hell tremble when it’s dropped.
And to be treated more gently you need a different power balance: either make them answer to you, or cut them out. You can’t cut out a bombardment, though, and with the TRON project in Japan, MS specifically has already shown that it is willing and able to use lobbying to force itself onto you.
Reminiscent of the Soviet “they pretend to pay us, we pretend to work” thing. Or medieval kings reducing the precious metal content of their coins. The modern Web is not very transparent, and the income is ad-driven, so it’s not immediately visible that generated bullshit isn’t worth nearly as much as something written by a human.
What I’m trying to say is that, the way it’s all interconnected and amortized, this Web is going down as a whole, not just the people poisoning it for short-term income.
That’s intentional: they don’t want to go down alone. Where more insularity exists, such people go down and others don’t, which is why they’ve almost managed to kill that insularity. It will still work the good old evolutionary way, just more slowly.
The other big thing is that once it does start spouting bullshit, or even just latches onto a phrase or string of words, it’s so hard to get it out. You really just have to start your instance over or purge the memory. They develop these obsessions so easily, sometimes without even sacrificing relevancy to the topic entirely.
LLMs are useful for recalling from a fixed corpus where you dictate that they cite their sources.
They are ideal for human-in-the-loop research solutions.
The whole “answer anything about anything” concept is dumb.
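To sketch what “recall from a fixed corpus, where you dictate they cite their sources” can look like with a human in the loop, here’s a minimal example. Everything in it is a hypothetical placeholder: the two-document corpus, the naive keyword scoring, and the callLLM stub standing in for whatever model client you actually use.

```typescript
// Minimal sketch of "recall from a fixed corpus, with forced citations".
// Hypothetical placeholders throughout; not any particular product's API.
type Doc = { id: string; text: string };

const corpus: Doc[] = [
  { id: "style-guide-4.2", text: "Headlines must name the primary source in the first sentence." },
  { id: "corrections-policy", text: "All factual corrections are reviewed by a human editor before publication." },
];

// Naive keyword-overlap retrieval; a real system would use embeddings or full-text search.
function retrieve(query: string, k = 2): Doc[] {
  const words = query.toLowerCase().split(/\W+/).filter(Boolean);
  return corpus
    .map(doc => ({ doc, score: words.filter(w => doc.text.toLowerCase().includes(w)).length }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.doc);
}

// Stand-in for whatever model client you actually use; returns a canned reply here
// so the sketch runs end to end.
async function callLLM(_prompt: string): Promise<string> {
  return "Factual corrections are reviewed by a human editor before publication [corrections-policy].";
}

async function answerWithCitations(question: string): Promise<string> {
  const docs = retrieve(question);
  const context = docs.map(d => `[${d.id}] ${d.text}`).join("\n");
  const prompt =
    "Answer ONLY from the sources below and cite the id in brackets after every claim.\n" +
    "If the sources do not contain the answer, say so.\n\n" +
    `${context}\n\nQuestion: ${question}`;
  const draft = await callLLM(prompt);

  // Human-in-the-loop guard: every bracketed citation must point at a retrieved document,
  // otherwise the draft goes back to a person instead of straight out the door.
  const cited = [...draft.matchAll(/\[([^\]]+)\]/g)].map(m => m[1]);
  const unknown = cited.filter(id => !docs.some(d => d.id === id));
  if (cited.length === 0 || unknown.length > 0) {
    return `NEEDS HUMAN REVIEW (unverified citations: ${unknown.join(", ") || "none cited"})\n${draft}`;
  }
  return draft;
}

answerWithCitations("Who reviews factual corrections?").then(console.log);
```

The point of the guard is that the model never gets the last word: anything it cannot tie back to the corpus you gave it gets routed to a human instead of published.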
Simply false in my experience.
We use Copilot at work and there is no babysitting required.
We are software developers / engineers, and it saves countless hours by writing boilerplate code, giving code blocks based on a comment, and sticking to our coding conventions.
Sure, it isn’t 100% right, but the owner and lead engineer calculates it to be around 70% accurate, and even when it misses the mark, we have a whole lot fewer key presses to make.
Using Copilot as a copilot, like generating boilerplate and then code-reviewing it, is still “babysitting” it. It’s still significantly less effort than just doing it all yourself, though.
Until someone uses it for a little more than boilerplate, and the reviewer nods that bit through because it’s hard to review and not something a human / the person who “wrote” it would get wrong.
Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.
Undoubtedly. Hell, even when you do mark it as such, this will happen. Because bugs created by humans also get deployed.
Basically what you’re saying is that code review is not a guarantee against shipping bugs.
Agreed, using LLMs for code requires you to be an experienced dev who can understand what it pukes out. And for those very specific and disciplined people it’s a net positive.
However, generally, I agree it’s more risk than it’s worth
How is it more effort to automate boilerplate code? Seriously, the worst part of being a programmer is writing the same line of code all of the time, especially when you know that it won’t actually cause anything interesting to happen on the screen; it’s just background stuff that needs to happen.
When I used to develop websites I don’t think I could have lived without Emmet, which was basically the predecessor to Copilot.
Well, you have to actually set up the boilerplate, plus Copilot is generally more intelligent and context-aware, especially for small snippets when you’re already coding.
Surely boilerplate code is copy/paste or macros, then editing the significant bits—a lot less costly than Copilot.
That would still be more effort.
So, for example, we use a hook called useChanges() for tracking changes to a model in the client; it has a very standard set of arguments.
Why would we want to waste time writing it out every time when we can write the usual comment, “Product Model”, and have it do the work?
Copy and paste takes more effort because we WILL have to change the dynamic parts every time, and macros would take longer because we’d have to create a macro for every different convention we have.
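To make that concrete, here’s roughly the kind of expansion being described. The useChanges() name comes from the comment above, but its argument list isn’t given there, so the option shape (plus the tiny stand-in implementation that lets the sketch run on its own) is invented for illustration.

```typescript
// Hypothetical sketch of the useChanges() convention. The hook name comes from the thread;
// its real argument list isn't given, so this option shape and the stand-in implementation
// exist only to make the example self-contained.
type ChangeOptions<T extends object> = {
  model: string;                             // name used for change tracking
  initial: T;                                // starting state of the model
  onDirty?: (changed: Partial<T>) => void;   // fired when a field is edited
};

function useChanges<T extends object>(options: ChangeOptions<T>) {
  const current: T = { ...options.initial };
  return {
    setField<K extends keyof T>(key: K, value: T[K]) {
      current[key] = value;
      const changed: Partial<T> = {};
      changed[key] = value;
      options.onDirty?.(changed);
    },
    get values() { return current; },
    reset() { Object.assign(current, options.initial); },
  };
}

interface Product { id: string; name: string; price: number }

// Product Model   <- the "usual comment"; everything below is the boilerplate a tool like
// Copilot is being asked to expand, following the team's convention.
const productChanges = useChanges<Product>({
  model: "Product",
  initial: { id: "p-1", name: "Widget", price: 9.99 },
  onDirty: changed => console.debug("Product changed:", changed),
});

productChanges.setField("price", 12.5);
console.log(productChanges.values);
```

The only part a human actually types is the comment; the call that follows is the convention-shaped boilerplate, which is exactly the part worth automating and exactly the part that still needs a glance in review.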
If you can’t see the benefit of LLMs as a TOOL to aid developers, then I would hazard a guess you are not in the industry, or you just haven’t even given them a go.
I will say I am a new developer and not amazing, but my boss, the owner and lead engineer, is a certified genius who will write flawless code over damn Teams to help me along at times, and if he can benefit from the time saved then anybody would.
My PhD was in neural networks in the 1990s and I’ve been in development since then.
Remember when digital cameras came out? They were pretty crappy compared to film—if you had a decent film camera and knew what you were doing. I feel like that’s where we’re at with LLMs right now.
Digital cameras are now pretty much on par with film, perhaps better in some circumstances and worse in others.
Shifting gears from writing code to reviewing someone else’s is inefficient. With a good editor setup and plenty of screen real estate, I’m more productive just writing than constantly worrying about what the copilot just inserted. And yes, I’ve tested that.
Clearly what works for our company ain’t what would work for you, even if I think what you’re claiming is preposterous.
My boss was working on Open Source back in the BSD days and is capable of very low-level programming. He has forgotten more than I’ll ever know, and if he can find LLMs a useful tool for our literal company to improve productivity, then I’m inclined to stick with what I have seen and experienced. Just not having to go and search documentation alone is a massive time saver. Unless obviously you know everything, which nobody does.
What if I told you that typing in software engineering encompasses less than 5% of your day?
I’m a developer and typing encompasses most of my day. The owner and lead engineer has a lot of meetings and admin work, but he still spends around 30% of his time writing code and scaffolding new projects.
I’m a developer and typing encompasses most of my day as well, but increasingly less of it is actually producing code. Ever more of it is in the form of emails, typically in the process of being forced to argue with idiots about what is and isn’t feasible/in the spec/physically possible, or explaining the same things repeatedly to the types of people who should not be entrusted with a mouse.
Total bullshit. We use LLMs at work for tasks that would be nearly impossible and require obscene amounts of manpower to do by hand.
Yes, we have to check the output, but it’s not even close to the amount of work it would take to do it by hand. Like, by orders of magnitude.
Yeah. I’m not sure that statement applies. It’s easier for humans to check something than to come up with something in the first place. But the thing is, the person doing the checking also needs to be proficient in the subject.
I disagree with the “always” bit. At some point in the future AI is actually going to get to the point where we can basically just leave it to it, and not have to worry.
But I do agree that we are not there yet. And that we need to stop pretending that we are.
Having said that, my company uses AI for a lot of business-critical tasks and we haven’t gone bankrupt yet. Of course, that’s not quite the same as saying that a human wouldn’t have done it better. Perhaps we’re spending more money than we need to because of the AI, who knows?
…Nnnnno, actually always.
The current models that are in use now (and the subject of the article) are not actual AIs. There is no thinking going on in there. They are statistical language models that are literally incapable of producing anything that was not originally part of their training input data, reassembled and strung together in different ways. These models can’t actually generate new content, they can’t think up anything novel, and of course they can’t actually think at all. They are completely at the mercy of whatever garbage is fed into them, and they are by definition not capable of actually “understanding” their output because they are not capable of understanding at all. These being statistical models also means that the output is always, to some extent, dependent on an internal dice roll, and the possibility of rolling snake eyes is always there no matter how clever or well tuned the algorithm is.
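That “internal dice roll” is literal: at every step the model assigns a probability to each candidate next token and then samples one. Here’s a toy sketch of that sampling step; the candidate tokens and their probabilities are made up for the example, not taken from any real model.

```typescript
// Toy illustration of the "dice roll": sampling the next token from a model's probability
// distribution. The candidate tokens and probabilities are invented for the example.
const candidates: Array<{ token: string; prob: number }> = [
  { token: "cheese", prob: 0.70 },
  { token: "tomato", prob: 0.25 },
  { token: "glue",   prob: 0.05 },  // nonsense, but never impossible
];

// Temperature rescaling (equivalent to softmax over logits divided by T): higher temperature
// flattens the distribution, making unlikely continuations more likely to be rolled.
function applyTemperature(probs: number[], temperature: number): number[] {
  const scaled = probs.map(p => Math.pow(p, 1 / temperature));
  const total = scaled.reduce((a, b) => a + b, 0);
  return scaled.map(p => p / total);
}

// The dice roll itself: pick one token according to the adjusted probabilities.
function sampleToken(temperature: number): string {
  const probs = applyTemperature(candidates.map(c => c.prob), temperature);
  let roll = Math.random();
  for (let i = 0; i < candidates.length; i++) {
    roll -= probs[i];
    if (roll <= 0) return candidates[i].token;
  }
  return candidates[candidates.length - 1].token;
}

// "Put ___ on your pizza." Run it enough times and the nonsense eventually comes up.
const picks = Array.from({ length: 1000 }, () => sampleToken(1.0));
console.log("glue sampled", picks.filter(t => t === "glue").length, "times out of 1000");
```

Lowering the temperature squeezes the distribution toward the top choice, but as long as the nonsense option has any probability at all, a long enough run will eventually roll it.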
This is not to say humans are infallible, either, but at least we are conceptually capable of understanding when and more importantly how we got something wrong when called on it. We are also capable of researching sources and weighing the validity of different sources and/or claims, which an LLM is not – not without human intervention, anyway, which loops back to my original point about doing the work yourself in the first place. An LLM cannot determine if a published sequence of words is bogus. It can of course string together a new combination of words in a syntactically valid manner that can be read and will make sense, but the truth of the constructed text cannot actually be determined programmatically. So in any application where accuracy is necessary, it is downright required to thoroughly review 100% of the machine output to verify that it is factual and correct. For anyone capable of doing that without smoke coming out of their own ears, it is then trivial to take the next step and just reproduce what the machine did for you. Yes, you may as well have just done it yourself. The only real advantage the machine has is that it can type faster than you and it never needs more coffee.
The only way to cast off these limitations would be to develop an entirely new real AI model that is genuinely capable of understanding the meaning of both its input and output, and legitimately capable of drawing new conclusions from its own output also taking into account additional external data when presented with it. And being able to show its work, so to speak, to demonstrate how it arrived at its conclusions to back up their factual validity. This requires throwing away the current LLM models completely – they are a technological dead end. They’re neat, and capable of fooling some of the people some of the time, but on a mathematical level they’re never capable of achieving internally provable, consistent truth.
I think people don’t yet grasp that LLMs don’t produce any novel output. If they did, considering the amount of knowledge they hold, they’d be making incredible new connections and insights that humanity never made before. Instead, they can only explain stuff that was already well documented.
deleted by creator
And so are humans, so what’s your point?