I’m sure there are some AI peeps here. Neural networks scale with size because the number of combinations of parameter values that work for a given task grows exponentially (or, even better, combinatorially - factorially, if that’s a word?) with the network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
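
To make the combinatorial point concrete: just permuting the hidden units of a single layer gives n! different parameter settings that compute exactly the same function. A minimal sketch (NumPy, toy sizes, not tied to any particular model):

    import numpy as np

    # Toy 2-layer MLP: x -> ReLU(x @ W1) @ W2
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 3, 5, 2
    W1 = rng.normal(size=(n_in, n_hidden))
    W2 = rng.normal(size=(n_hidden, n_out))
    x = rng.normal(size=(4, n_in))

    def forward(W1, W2, x):
        return np.maximum(x @ W1, 0.0) @ W2

    # Permuting the hidden units gives a different point in parameter
    # space but exactly the same function - and there are 5! = 120 such
    # points for this tiny layer alone.
    perm = rng.permutation(n_hidden)
    print(np.allclose(forward(W1, W2, x),
                      forward(W1[:, perm], W2[perm, :], x)))  # True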

Here’s what I mean by alignment:

  • The ability to specify a loss function that captures what humanity actually wants
  • Some strict or statistical guarantees on how far the trained model can deviate from that objective, as well as on potentially unaccounted-for side effects (rough sketch of the statistical part below)
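
As a rough sketch of what the “statistical guarantee” bullet could mean (a hypothetical setup: per-example loss bounded in [0, 1], i.i.d. evaluation data), a Hoeffding-style bound tells you how many eval samples you need before the measured loss is provably close to the true expected loss:

    import math

    def hoeffding_eval_size(eps: float, delta: float) -> int:
        """Number of i.i.d. eval samples needed so that, with probability
        at least 1 - delta, the measured average loss (bounded in [0, 1])
        is within eps of the true expected loss:
            P(|L_hat - L| > eps) <= 2 * exp(-2 * n * eps**2) <= delta
        """
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    # e.g. to certify the deviation is under 1% with 99.9% confidence:
    print(hoeffding_eval_size(eps=0.01, delta=0.001))  # 38005

Note this only bounds deviation from the loss you wrote down - it says nothing about whether that loss is the one humanity wants, or about side effects the loss doesn’t measure, which is exactly the hard part.
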
  • Zo0@feddit.de · 7 points · 1 year ago

    That’s a future problem for general AI. Right now it’s still very difficult to make an AI in a specific subject that does its job perfectly. That’s why even the commercial AIs we have are (or should be) treated more like an ‘assistant’.

    • preasketOP · 6 points · 1 year ago

      Sure, tbh I think ChatGPT is overhyped. It can be useful, but it’s nowhere near AGI. I even have the controversial opinion that the rate of progress will not be exponential but logarithmic, because I think data will be the constraint.

      • Zo0@feddit.de · 4 points · 1 year ago

        I’m not gonna go too deep into it because I’m not qualified to, but I think the issue currently at hand is that we’re throwing stuff at the wall to see what sticks. Most of the AI models currently used in different branches are being used because they showed promise in the original problem they were designed for. All these tools you see today were more or less designed more than 30 years ago. There’s a lot of interesting stuff being done at an academic level today, but we (understandably so) don’t see it in everyday conversation.

        • preasketOP · 4 points · 1 year ago

          The idea of backpropagation and neural nets is quite old, but there’s significant new research being done now, primarily in node types and computational efficiency. LSTM, transformers, ReLU - these are all much more recent than backprop itself.
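
          A compressed sketch of two of those pieces (NumPy, toy shapes, not any particular library’s implementation): the ReLU activation and the scaled dot-product attention at the core of transformers.

              import numpy as np

              def relu(x):
                  # ReLU: keep positive pre-activations, zero out the rest
                  return np.maximum(x, 0.0)

              def attention(Q, K, V):
                  # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
                  d = Q.shape[-1]
                  scores = Q @ K.T / np.sqrt(d)
                  weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
                  weights /= weights.sum(axis=-1, keepdims=True)
                  return weights @ V

              rng = np.random.default_rng(0)
              Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
              print(relu(Q).shape, attention(Q, K, V).shape)  # (4, 8) (4, 8)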

          • Zo0@feddit.de · 2 points · 1 year ago

            Haha, reading your other replies, you’re too humble for someone who knows what they’re talking about.