Recommended Watch & Listen: 'Pause Giant AI Experiments' - Letter Breakdown w/ Research Papers, Altman, Sutskever and more (youtube.com)
This video covers not only the letter and its signatories but also the research behind it and how the top people at OpenAI have responded. Drawing on more than 20 research papers, articles and interviews, I go through the most interesting parts of the discussion on AI safety and the prospects ahead.

Featuring Ilya Sutskever's thoughts on alignment, Sam Altman's blog musings, Max Tegmark's proposals, Nick Bostrom and his Superintelligence quote, a senior Google Bard researcher on whether it is possible, Emad Mostaque's thoughts, as well as DeepMind and Demis Hassabis. The six-month moratorium was called for in an open letter signed by, among others, Emad Mostaque, Elon Musk, Max Tegmark and Yuval Noah Harari.

Open Letter: https://futureoflife.org/open-letter/...
Altman Interview (YouTube): Sam Altman: OpenA...
Altman Blog: https://blog.samaltman.com/machine-in...
Sutskever Interview (YouTube): Ilya Sutskever (O...
Hassabis Interview: https://time.com/6246119/demis-hassab...
Emad Mostaque: https://twitter.com/EMostaque/status/...
X-risk Analysis: https://arxiv.org/pdf/2206.05862.pdf
Richard Ngo: https://twitter.com/RichardMCNgo/stat...
The Alignment Problem: https://arxiv.org/pdf/2209.00626.pdf
Current and Near Term AI: https://arxiv.org/pdf/2209.10604.pdf
Bostrom Talk (YouTube): What happens when...
Tegmark Interview w/ Lex Fridman (YouTube): Max Tegmark: AI a...
Anthropic AI Safety: https://www.anthropic.com/index/core-...
NBC News Interview: https://www.nbcnews.com/tech/tech-new...
AI Impacts Survey: https://aiimpacts.org/wp-content/uplo...
Dustin Tran: https://twitter.com/dustinvtran
Nadella: https://www.fastcompany.com/90696770/...

https://www.patreon.com/AIExplained