Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The findings come as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

  • Sterile_Technique@lemmy.world · 9 months ago

    I mean, the thing we call “AI” nowadays is basically just a spell-checker on steroids. There’s nothing really to trust or distrust about the tool specifically. It can be used in stupid or nefarious ways, but so can anything else.

    • reflectedodds@lemmy.world · 9 months ago

      Took a look and the article title is misleading. It says nothing about trust in the technology and only talks about not trusting companies collecting our data. So really nothing new.

      Personally I want to use the tech more, but I get nervous that it’s going to bullshit me/tell me the wrong thing and I’ll believe it.

    • SkyNTP@lemmy.ml · 9 months ago

      “Trust in AI” is layperson for “believe the technology is as capable as it is promised to be”. This has nothing to do with stupidity or nefariousness.

      • FaceDeer@fedia.io · 9 months ago

        It’s “believe the technology is as capable as we imagined it was promised to be.”

        The experts never promised Star Trek AI.

            • Aceticon@lemmy.world · 9 months ago

              Most of the CEOs in Tech, and even Founders in Startups, overhyping their products are laypeople, or at best people with some engineering training who made it in an environment that is all about overhype and generally swindling others (I was in Startups in London a few years ago), so they’re hardly going to be straight-talking and pointing out risks & limitations.

              The era of the Engineers (i.e. experts) driving Tech and the messaging around Tech ended decades ago, at about the time when Sony Media took the reins of the company from Sony Consumer Electronics, the quality of their products took a dive, and Sony became just another MBA-managed company (so, late 90s).

              Very few “laypeople” will ever hear or read the take on Tech from actual experts.

    • EldritchFeminity@lemmy.blahaj.zone · 9 months ago

      I would argue that there’s plenty to distrust about it, because its accuracy leaves much to be desired (to the point where it completely makes things up fairly regularly) and because it is inherently vulnerable to biases due to the data fed to it.

      Early facial recognition tech had trouble distinguishing between the faces of black people, people below a certain age, and women, and nobody could figure out why. Until they stepped back and took a look at the demographics of the employees of these companies: they were mostly middle-aged and older white men, and those were the people whose faces they used as the data sets for the in-house development/testing of the tech. We’ve already seen similar biases in image generators, which show a preference for thin white women as what counts as an attractive woman.
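
      The skewed-dataset failure mode is easy to reproduce on toy data. This sketch (the group names, feature values, and 95/5 split are all invented for illustration) trains a trivial one-feature classifier on data drawn mostly from one group, then checks per-group accuracy:

```python
import math
import random

random.seed(0)

def sample(group, n):
    # Two invented populations whose single "face feature" overlaps.
    mean = 0.0 if group == "A" else 2.0
    return [(random.gauss(mean, 1.0), group) for _ in range(n)]

# Training set skewed 95/5 toward group A, mimicking an in-house
# dataset built mostly from the developers' own demographics.
train = sample("A", 950) + sample("B", 50)

def fit(points, group):
    vals = [x for x, g in points if g == group]
    return sum(vals) / len(vals), len(vals) / len(points)

params = {g: fit(train, g) for g in ("A", "B")}

def predict(x):
    # Gaussian score plus a prior learned from the skewed data.
    def score(g):
        mean, prior = params[g]
        return math.log(prior) - (x - mean) ** 2 / 2
    return max(("A", "B"), key=score)

def accuracy(group):
    test = sample(group, 2000)  # a balanced test set per group
    return sum(predict(x) == g for x, g in test) / len(test)

print(accuracy("A"), accuracy("B"))  # group B fares far worse
```

      The prior learned from the lopsided training set shifts the decision boundary away from the minority group, which is the same mechanism, in miniature, as a face dataset built mostly from one demographic.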

      Plus, there’s the data degradation issue. Supposedly, ChatGPT isn’t fed any data from the internet at large past 2021, because the amount of AI-generated content from then on causes a self-perpetuating decline in quality.

    • TrickDacy@lemmy.world · 9 months ago

      “basically just a spell-checker on steroids.”

      I cannot process this idea of downplaying this technology like this. It does not matter that it’s not true intelligence. And why would it?

      If it can convince most people that it has learned information and can repeat it, that’s smarter than like half of all currently living humans. And it is convincing.

      • nyan@lemmy.cafe · 9 months ago

        Some people found the primitive ELIZA chatbot from 1966 convincing, but I don’t think anyone would claim it was true AI. Turing Test notwithstanding, I don’t think “convincing people who want to be convinced” should be the minimum test for artificial intelligence. It’s just a categorization glitch.
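
        ELIZA’s trick was almost embarrassingly simple. A few pattern-and-reflect rules already produce the style that convinced people in 1966 (a toy sketch in the spirit of the DOCTOR script, not Weizenbaum’s actual keyword-ranked program):

```python
import re

# A handful of reflection rules: match a pattern, echo part of the
# user's own words back, with zero understanding involved.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def eliza(text):
    text = text.lower().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # stock reply when nothing matches

print(eliza("I need sleep."))  # Why do you need sleep?
```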

        • TrickDacy@lemmy.world · 9 months ago

          Maybe I’m not stating my point explicitly enough, but it is actually that names and goalposts aren’t very important; cultural impact is. I think the current AI has already had a lot more impact than any chatbot from the 60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.

          • nyan@lemmy.cafe · 9 months ago

            Calling a cat a dog won’t make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete “intelligence” (artificial or otherwise) doesn’t make it smart.

            Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word “intelligence” with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That’s dangerous—potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It’s about as effective as bailing the ocean with a teaspoon, yes, but some of us see even that as better than doing nothing.

              • nyan@lemmy.cafe · 9 months ago

                How about taking advice on a medical matter from an LLM? Or asking the appropriate thing to do in a survival situation? Or even seemingly mundane questions like “is it safe to use this [brand name of new model of generator that isn’t in the LLM’s training data] indoors?” Wrong answers to those questions can kill. If a person thinks the LLM is intelligent, they’re more likely to take the bad advice at face value.

                If you ask a human about something important that’s outside their area of competence, they’ll probably refer you to someone they think is knowledgeable. An LLM will happily make something up instead, because it doesn’t understand the stakes.

                The chance of any given query to an LLM killing someone is, admittedly, extremely low, but given a sufficiently large number of queries, it will happen sooner or later.

                  • nyan@lemmy.cafe · 9 months ago

                    Half of the human population is of below-average intelligence. They will be that dumb. Guaranteed. And safeguards generally only get added after someone notices that a wrong answer is, in fact, wrong, and complains.

                    In part, I believe someone’s going to die because large corporations will only get serious about controlling what their LLMs spew when faced with criminal charges or a lawsuit that might make a significant gouge in their gross income. Until then, they’re going to, at best, try to patch around the exact prompts that come up in each subsequent media scandal, which is so easy to get around that some people are likely to do so by accident.

                    (As for humans making up answers, yes, some of them will, but in my experience it’s not all that common—some form of “how would I know?” is a more likely response. Maybe the sample of people I have contact with on a regular basis is statistically skewed. Or maybe it’s a Canadian thing.)

                  • Eccitaze@yiffit.net · 9 months ago

                    “if you even ask a person and trust your life to them like that, unless they give you good reason they are reliable, you are a moron. Why would someone expect a machine to be intelligent and experienced like a doctor? That is 100% on them.”

                    Insurance companies are already using AI to make medical decisions. We don’t have to speculate about people getting hurt because of AI giving out bad medical advice, it’s already happening and multiple companies are being sued over it.

              • Krauerking · 9 months ago

                Because one trained in a particular way could lead people to think it’s intelligent and also give incredibly biased data that confirms the bias of those listening.

                It’s creating a digital prophet that is only rehashing the biases of the creator.
                That makes it dangerous if it’s regarded as being above the flaws of us humans. People want something smarter than them to tell them what to do, and bestowing that designation, via the word “intelligent”, on a flawed chatbot that simply predicts the most coherent next word is not safe or a good representation of what it actually is.
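
                “Simply predicts the most coherent word” is literally the whole mechanism. A toy bigram model (with a made-up three-sentence corpus; the crudest possible relative of an LLM, shown only to make the point concrete) does nothing but emit the most frequent continuation it has seen:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real LLM differs in scale and context
# length, not in the kind of objective: "which token comes next?"
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # No reasoning, no world model: just the most frequent successor.
    return follows[word].most_common(1)[0][0]

def generate(word, length):
    out = [word]
    for _ in range(length):
        word = next_word(word)
        out.append(word)
    return " ".join(out)

print(generate("the", 4))  # the cat sat on the
```

                A real LLM swaps word counts for a neural network conditioned on long contexts, but the training objective is the same flavor of next-token prediction.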

    • Politically Incorrect@lemmy.world · 9 months ago

      ThE aI wIlL AttAcK HumaNs!! sKynEt!!

      Edit: These “AI” can’t even make a decent waffle recipe, and “it will eradicate humankind”… for the gods’ sake!!

      It isn’t even AI at all; the name the corps gave it is just clickbait.

      • SlopppyEngineer@lemmy.world · 9 months ago

        AI is just a very generic term and always has been. It’s like saying “transportation equipment”, which can be anything from roller skates to the space shuttle. Even the old checkers programs were described as AI in the fifties.

        Of course a vague term is a marketeer’s dream to exploit.

        At least with self driving cars you have levels of autonomy.

      • Feathercrown@lemmy.world · 9 months ago

        Before ChatGPT was revealed, this was under the umbrella of what AI meant. I prefer to use established terms. Don’t change the terms just because you want them to mean something else.

        • FarceOfWill@infosec.pub · 9 months ago

          There’s a long, glorious history of things being called AI until computers can do them, at which point the research area is renamed to something specific that describes its limits.