TL;DR: (AI-generated 🤖)
The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.
I’ll also add that I’m not actually sure that Yudkowsky’s suggestion in the video – monitoring labs with massive GPU arrays – would be sufficient once one starts talking about self-improving intelligence. I am quite skeptical that the kind of parallel compute capacity used today is truly necessary for the tasks we’re doing – rather, we need it because we are doing things inefficiently, not yet understanding how to do them efficiently. True, your brain works in parallel, but it is also vastly slower – your brain’s neurons fire at maybe 100 or 200 Hz, whereas our computer systems run with GHz clocks. I would bet that if we had figured out the software side, a single CPU in a PC today could act as a human does.
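To put rough numbers on that serial-speed comparison, here is a back-of-the-envelope sketch; the specific figures (a ~3 GHz core, ~200 Hz firing rate, ~86 billion neurons) are my assumptions, not the post’s, and are only meant to illustrate the speed-versus-parallelism trade-off.

```python
# Rough back-of-the-envelope, not from the original post: the figures below
# are assumed values used only to illustrate serial speed vs. parallelism.

neuron_rate_hz = 200        # upper end of the ~100-200 Hz range cited above
cpu_clock_hz = 3e9          # assumed clock of a single modern CPU core
neuron_count = 8.6e10       # commonly cited estimate of neurons in a human brain

serial_advantage = cpu_clock_hz / neuron_rate_hz
print(f"One core is ~{serial_advantage:,.0f}x faster than a neuron, serially")
# -> roughly 15,000,000x

# What the brain lacks in speed it makes up for in parallelism:
brain_events_per_sec = neuron_count * neuron_rate_hz
print(f"Brain: ~{brain_events_per_sec:.1e} firing events per second")
# -> ~1.7e13, far more raw "events" per second than one core's ~3e9 cycles
```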
Alan Turing predicted in 1950 that we’d have the hardware for human-level intelligence by about 2000.
That’s ~1GB to ~1PB of storage capacity, which he considered to be the limiting factor.
He was roughly right about where we’d be with hardware, though we still haven’t figured out the software side.