• ∟⊔⊤∦∣≶@lemmy.nz · +27/−1 · 1 year ago

    5 links later, here are the actual rules (apparently): https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682

    And I’d just like to say that ‘AI’ is a marketing term; all the generative models are just complex digital Galton boards. Put a thing in, a different thing comes out. But if you leave them alone, nothing happens. Is that really intelligence, or just data transformation?

    I’m far more concerned about the things that do things when you leave them alone… Drones, Boston Dynamics robots, that kind of thing.
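    The Galton-board analogy above is easy to make concrete: balls fall through rows of pegs, each peg deflects a ball left or right at random, and a bell curve piles up in the bins at the bottom. The sketch below is just a toy illustration of “put thing in, different thing comes out” — the function name and parameters are not from the thread, and with no balls dropped in, nothing happens.

    ```python
    import random

    def galton_board(n_balls, n_rows):
        """Drop n_balls through n_rows of pegs; each peg sends the
        ball left or right with equal probability. Returns the count
        of balls landing in each of the n_rows + 1 bins."""
        bins = [0] * (n_rows + 1)
        for _ in range(n_balls):
            # A ball's final bin is how many times it bounced right.
            position = sum(random.random() < 0.5 for _ in range(n_rows))
            bins[position] += 1
        return bins

    counts = galton_board(10_000, 10)
    # The middle bins fill far more than the edges: the output
    # approximates a binomial (bell-shaped) distribution.
    ```

    Deterministic transformation of random input into a predictable distribution — no goals, no initiative, which is the commenter’s point.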

    • SendMePhotos@lemmy.world · +10/−3 · 11 months ago

      AI is a term I was taught a long time ago to mean that a computer has transcended and become sentient: Artificial Intelligence.

      VI, simulated intelligence, is what I think we almost have now. We’re at the point where so many people think AI can think for itself that we may as well call it VI, just to get on with the progress.

      • KingRandomGuy@lemmy.world · +11 · edited · 11 months ago

        I’m a researcher in ML and that’s not the definition that I’ve heard. Normally the way I’ve seen AI defined is any computational method with the ability to complete tasks that are thought to require intelligence.

        This definition admittedly sucks. It’s very vague, and it comes with the problem that the bar for requiring intelligence shifts every time the field solves something new. We sort of go, “well, given that these relatively simple methods could solve it, I guess it couldn’t really have required intelligence.”

        The definition you listed is generally more in line with AGI, which is what people likely think of when they hear the term AI.

        • SendMePhotos@lemmy.world · +2/−1 · edited · 11 months ago

          Maybe a better-defined term should be used. The term “AI” makes me puke in my mouth, because most people associate it with “omg the robots are coming!” when it’s really just a program. So much so that people have made videos of themselves implementing AI inside video games (see: Matrix game) and then contemplating whether the computer program is suffering.

  • AutoTL;DR@lemmings.world [bot] · +6 · 1 year ago

    This is the best summary I could come up with:


    LONDON (AP) — European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

    Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

    The European Parliament will still need to vote on it early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.

    Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

    However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals, including OpenAI’s backer Microsoft.

    Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.


    The original article contains 846 words, the summary contains 241 words. Saved 72%. I’m a bot and I’m open source!

  • Muffi@programming.dev · +7/−3 · 11 months ago

    Cars are destroying the world way faster than any “AI”. Can we regulate those to hell first?

      • wewbull@feddit.uk · +4 · 11 months ago

      Doing one thing does not stop you from also doing a second thing in parallel, and the best time to regulate a technology is when it’s emerging. That way you have a chance to stop it getting out of hand.

  • sugarfree@lemmy.world · +6/−3 · 1 year ago

    They believe they can regulate AI, but it remains to be seen whether that’s true, especially since the rules don’t come into force until 2025. That’s a very long time in AI.