
Jan 22, 2018

We explore AI Risk: the notion that computer intelligence will cross the human level, start compounding exponentially, and, if not trammelled, mulch everything we hold dear into paperclips. Why aren't Avi and Bugsby more worried about this?

References

Eliezer Yudkowsky

Public figures who've expressed concern: Elon Musk, Bill Gates, Hillary Clinton

Machine Intelligence Research Institute

OpenAI

Nick Bostrom - Superintelligence

Value alignment problem

Hippocratic Oath in software/civil engineering

Iron Law of Oligarchy

Superintelligence: the idea that eats smart people

Principal-agent problem

SSC AI risk persuasion experiment

EY AI box escape experiment