#tesla #selfdriving #karpathy
Tesla is pushing the state of the art in full self-driving and, interestingly, has explicitly switched from a multi-sensor setup to a vision-only system. We discuss the highlights of Andrej Karpathy’s talk about Tesla’s FSD system: how to label petabytes of data, how to sample edge cases, how to train a neural network that has to run in real time, and why moving to cameras only is superior to multi-sensor approaches.
OUTLINE:
0:00 – Intro & Overview
1:55 – Current Auto-Braking system
3:20 – Full Self-Driving from vision only
4:55 – Auto-Labelling for collecting data
8:45 – How to get diverse data from edge cases
12:15 – Neural network architecture
16:05 – Tesla’s in-house supercomputer
17:00 – Owning the whole pipeline
18:20 – Example results from vision only
23:10 – Conclusion & Comments
Links:
TabNine Code Completion (Referral):
YouTube:
Twitter:
Discord:
BitChute:
Minds:
Parler:
LinkedIn:
BiliBili:
If you want to support me, the best thing to do is to share the content 🙂
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar:
Patreon:
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n