#gpt4chan #4chan #ai
GPT-4chan was trained on over 3 years of posts from 4chan’s “politically incorrect” (/pol/) board.
(and no, this is not GPT-4)
EXTRA VIDEO HERE:
Website (try the model here):
Model (no longer available):
Code:
Dataset:
OUTLINE:
0:00 – Intro
0:30 – Disclaimers
1:20 – Elon, Twitter, and the Seychelles
4:10 – How I trained a language model on 4chan posts
6:30 – How good is this model?
8:55 – Building a 4chan bot
11:00 – Something strange is happening
13:20 – How the bot got unmasked
15:15 – Here we go again
18:00 – Final thoughts
ERRATA:
– I stated that the model is better on the automated parts of TruthfulQA than any other GPT out there, which is incorrect. There exist some small GPT models with similar performance; I was mainly talking about the flagship models, such as GPT-3 and GPT-J.
Links:
Merch:
TabNine Code Completion (Referral):
YouTube:
Twitter:
Discord:
BitChute:
LinkedIn:
BiliBili:
If you want to support me, the best thing to do is to share the content 🙂
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar:
Patreon:
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n