Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many other foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T, NEC Labs, and Facebook AI Research; he is now a professor at Columbia University. His work has been cited over 200,000 times.
The associated lecture that Vladimir gave as part of the MIT Deep Learning series can be viewed here:
This episode is presented by Cash App. Download it and use code “LexPodcast”:
Cash App (App Store):
Cash App (Google Play):
PODCAST INFO:
Podcast website:
Apple Podcasts:
Spotify:
RSS:
Full episodes playlist:
Clips playlist:
OUTLINE:
0:00 – Introduction
2:55 – Alan Turing: science and engineering of intelligence
9:09 – What is a predicate?
14:22 – Plato’s world of ideas and world of things
21:06 – Strong and weak convergence
28:37 – Deep learning and the essence of intelligence
50:36 – Symbolic AI and logic-based systems
54:31 – How hard is 2D image understanding?
1:00:23 – Data
1:06:39 – Language
1:14:54 – Beautiful idea in statistical theory of learning
1:19:28 – Intelligence and heuristics
1:22:23 – Reasoning
1:25:11 – Role of philosophy in learning theory
1:31:40 – Music (speaking in Russian)
1:35:08 – Mortality
CONNECT:
– Subscribe to this YouTube channel
– Twitter:
– LinkedIn:
– Facebook:
– Instagram:
– Medium:
– Support on Patreon: