If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

  • rumba@lemmy.zip · +28/−1 · 14 days ago

    We’re hitting the end of free/cheap innovation. We can’t just make a one-time adjustment to training and get a permanently, substantially better product.

    What’s coming now are conventionally developed applications using LLM tech. o1 is trying to fact-check itself and use better sources.

    I’m pretty happy it’s slowing down right at this point.

    I’d like to see non-profit, open systems for education. Let’s feed these things textbooks and lectures, and model the teaching after some of our best minds. Give individuals 1:1 time with a system, 24x7, that they can ask whatever they want as often as they want, and have it keep track of what they know and teach them what they need to advance.

    • quixote84@midwest.social · +3/−1 · 13 days ago

      That’s the job I need. I’ve spent my whole life trying to be Data from Star Trek. I’m ready to try to mentor and befriend a computer.

    • anonvurr@lemmy.zip · +1 · 11 days ago

      I mean, isn’t that already included in the datasets? They’re pretty much a mix of everything.

      • rumba@lemmy.zip · +1 · 11 days ago

        Not everything in the dataset is retrievable; it’s very lossy. It’s also extremely noisy, with a lot of training data that isn’t education-worthy.

        I suspect they’d train a purpose-built model mainly on the material they actually want to teach, especially from good educators.