abobla@lemm.ee to Science@beehaw.org · English · 1 year ago
Even the best AI models studied can be fooled by nonsense sentences, showing that "their computations are missing something about the way humans process language."
zuckermaninstitute.columbia.edu
cross-posted to: science@lemmit.online
AnalogyAddict@beehaw.org · 1 year ago
Well, yes. AI models don't extract meaning. They parrot statistically likely responses based on words used. They had to research that?
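For what the comment means by "statistically likely responses based on words used", here is a minimal toy sketch: a bigram counter that always emits the most frequent next word seen in a tiny made-up corpus. It is not the article's methodology and nothing like a real large language model, just an illustration of how purely statistical next-word choice can produce fluent-looking text with no grasp of meaning.

```python
# Toy sketch (assumed example, not an actual LLM): count which word follows
# which in a tiny corpus, then greedily emit the most frequent follower.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily pick the statistically most likely next word at each step."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking word sequence, no notion of meaning
```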