Content

Answering the other half of the questions you asked: Moral philosophy, biological intelligence enhancement, nukes, regulating compute clusters, lethal autonomous weapons, etc!

https://www.youtube.com/watch?v=0mxFgJGjk0E

Files

Answering Your AI Safety Questions, Part 2

The previous video: https://youtu.be/Q6jwEiyUmi0

Comments

GooGhoul

Doesn't "We could GM a super-human biological intelligence, but not a drastically more intelligent biological intelligence" forget the presence of a super-human biological intelligence that can do a better job than us? Intelligent explosions should be as likely in carbon as in silicon.

robertskmiles

I think that points to something important which I wish I'd expressed in the video. The methods proposed for biological intelligence enhancement tend to be things like embryo selection, which don't scale much with increased intelligence. You do embryo selection and get smarter scientists, but there's no good reason to expect smarter scientists to be that much better at embryo selection (and iterating the loop takes decades). So you get some increased intelligence, but there isn't really the same potential for an explosion.
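
To make that asymmetry concrete, here's a toy Python sketch (entirely illustrative: the starting level, step count, and gain values are made-up assumptions, not numbers from the video). A loop whose per-step gain scales with current capability compounds geometrically, while a fixed-gain loop like embryo selection only grows linearly:

def scaling_loop(level, steps, rate):
    # Each step adds a gain proportional to current capability:
    # smarter systems are better at producing the next improvement.
    trajectory = [level]
    for _ in range(steps):
        level += rate * level
        trajectory.append(level)
    return trajectory

def fixed_gain_loop(level, steps, gain):
    # Each step (one human generation, i.e. decades of real time) adds a
    # roughly fixed gain, because being smarter doesn't make embryo
    # selection much more effective.
    trajectory = [level]
    for _ in range(steps):
        level += gain
        trajectory.append(level)
    return trajectory

if __name__ == "__main__":
    steps = 10
    silicon = scaling_loop(100.0, steps, rate=0.2)    # compounding gains
    carbon = fixed_gain_loop(100.0, steps, gain=5.0)  # flat gains
    for i, (s, c) in enumerate(zip(silicon, carbon)):
        print(f"step {i:2d}: scaling = {s:7.1f}   fixed gain = {c:6.1f}")

With these made-up parameters, after ten iterations the compounding loop sits at about 619 while the fixed-gain loop sits at 150, and in the biological case each of those ten steps costs a generation of wall-clock time.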

Anonymous

I had trouble with the name mentioned at 1:02, but it's Eliezer Yudkowsky: https://en.m.wikipedia.org/wiki/Eliezer_Yudkowsky

Thank you for your answers, Robert! Was the e-book "Rationality: From AI to Zombies"?

robertskmiles

Yeah, sort of? That book is an edited volume based on his blog posts, which came out in 2015, whereas the ebook I read was just an unedited dump of all the posts in chronological order that someone put together in 2011.