• HappyFrog@lemmy.blahaj.zone
    20 days ago

    It took me a bit to understand this, but I think this is about the AI drawing conclusions that the research isn’t actually making. For example:

    Original (13): ‘Among adults with obesity, bariatric surgery compared with no surgery was associated with a significantly lower incidence of obesity-associated cancer and cancer-related mortality’

    DeepSeek (UI): ‘The study concluded that bariatric surgery is associated with a significantly lower incidence of obesity-associated cancers and cancer-related mortality compared to nonsurgical care in adults with obesity’

    Here DeepSeek rephrases it so that the surgery is associated with lower cancer rates as a general conclusion, while the research isn’t making that broad claim and is just presenting the data it observed.

    I don’t really see the issue, but please explain it to me.

    • ThefuzzyFurryComrade@pawb.socialOPM
      19 days ago

      I don’t really see the issue, but please explain it to me.

      A common use case for LLMs is summarizing articles that people don’t want to bother reading; the study shows the dangers of doing that.

        • ThefuzzyFurryComrade@pawb.socialOPM
          19 days ago

          These findings suggest a persistent generalization bias in many LLMs, i.e. a tendency to extrapolate scientific results beyond the claims found in the material that the models summarize, underscoring the need for stronger safeguards in AI-driven science summarization to reduce the risk of widespread misunderstandings of scientific research.

          From the conclusion. In other words, the LLMs present information that the actual article doesn’t support.