There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.

What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.

  • lloram239@feddit.de · 1 year ago

    > Why are they so threatening?

    Simple example: A lot of artists would like their images not to be used for AI training and would like legislation to prevent that. The problem is that such legislation would grant a monopoly on AI to the Googles, Facebooks, and Adobes of this world, as they are already sitting on mountains of data and have ToS that allow them to use it for training. Any Open Source project that doesn’t have the data and would need to rely on web scraping would be illegal.

    That’s the issue. A lot of criticism of AI is extremely short-sighted and ignorant, often not even understanding the very basics of how it all works.

    Another more fundamental problem: What are you going to do? AI is just a collection of algorithms and math. Do you want to outlaw math? Force humans to use less efficient tools? Technological progress is not something you can easily control, especially not in advance when you don’t even know what changes it will bring.

    > Imagine if we had taken an extra five minutes before embracing Facebook and all the other social media that came to define “Web2.0.”

    We did, and nothing ever came of it. Projects like https://freenet.org/ or https://freedombox.org/ have been around for a decade or two. But the masses want convenience.

      • lloram239@feddit.de · 1 year ago

        > we are spreading FUD

        You are spreading FUD right in this post: “poorly regurgitating our shit without our permission”

        • There is no need for permission, that’s not how copyright works
        • It’s not “poorly regurgitating”, it’s for the most part original and of high quality

        You wanna be taken seriously? Stop repeating the same nonsense as everybody else.

        And as for “replace us at our jobs”: that’s not a problem, that’s called progress. If you want UBI or something along those lines, go fight for that; don’t make stupid arguments against AI.

        > That does not mean everyone’s critique is ignorant.

        Well, maybe; I’m still waiting to read one that isn’t, since everybody just keeps repeating the same nonsense, including you right now.

        The only sensible one I’ve heard so far was from Hinton, who simply suggested putting about as much money into AI safety research as we do into AI research. Nothing wrong with that, though it will probably just show us more ways in which AI can go wrong rather than how to prevent it.

          • lloram239@feddit.de · 1 year ago

            > Oh yeah, I forgot copyright law was this static and beautiful thing that never gets reassessed or re-contextualized.

            So you want it changed in such a way that it grants Google, Adobe, and Co. exclusive ownership over powerful AI models and kills all Open Source efforts? Congratulations for proving my point.

            And you wonder why nobody takes you people seriously…