Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get a free product tour
- LMNT: https://drinkLMNT.com/lex to get a free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off
TRANSCRIPT:
https://lexfridman.com/roman-yampolsk...
EPISODE LINKS:
Roman's X:
/ romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist:
• Lex Fridman Podcast
Clips playlist:
• Lex Fridman Podcast Clips
OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life
SOCIAL:
- Twitter:
/ lexfridman
- LinkedIn:
/ lexfridman
- Facebook:
/ lexfridman
- Instagram:
/ lexfridman
- Medium:
/ lexfridman
- Reddit:
/ lexfridman
- Support on Patreon:
/ lexfridman