Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
– Yahoo Finance:
– MasterClass: to get 15% off
– NetSuite: to get free product tour
– LMNT: to get free sample pack
– Eight Sleep: to get $350 off
TRANSCRIPT:
EPISODE LINKS:
Roman’s X:
Roman’s Website:
Roman’s AI book:
PODCAST INFO:
Podcast website:
Apple Podcasts:
Spotify:
RSS:
Full episodes playlist:
Clips playlist:
OUTLINE:
0:00 – Introduction
2:20 – Existential risk of AGI
8:32 – Ikigai risk
16:44 – Suffering risk
20:19 – Timeline to AGI
24:51 – AGI Turing test
30:14 – Yann LeCun and open source AI
43:06 – AI control
45:33 – Social engineering
48:06 – Fearmongering
57:57 – AI deception
1:04:30 – Verification
1:11:29 – Self-improving AI
1:23:42 – Pausing AI development
1:29:59 – AI safety
1:39:43 – Current AI
1:45:05 – Simulation
1:52:24 – Aliens
1:53:57 – Human mind
2:00:17 – Neuralink
2:09:23 – Hope for the future
2:13:18 – Meaning of life
SOCIAL:
– Twitter:
– LinkedIn:
– Facebook:
– Instagram:
– Medium:
– Reddit:
– Support on Patreon: