#apple #icloud #neuralhash
Send your Apple fanboy friends to prison with this one simple trick 😉 We break Apple’s NeuralHash algorithm used to detect CSAM in iCloud photos. I show how it’s possible to craft arbitrary hash collisions from any source/target image pair using an adversarial example attack. This can be used for many purposes, such as evading detection or forging false positives that trigger manual reviews.
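For the curious, here is a minimal sketch of the collision attack in PyTorch. It assumes the extracted NeuralHash network is available as a differentiable module `model` that returns the 96 real-valued outputs which get binarized (by sign) into the final hash; `model`, `source_img`, and `target_img` are placeholder names, not code from the video.

```python
import torch

def forge_collision(model, source_img, target_img, steps=1000, lr=1e-2, eps=0.05):
    """Perturb source_img so its NeuralHash collides with target_img's hash."""
    # The target's hash bits, as +/-1 per bit (sign of the pre-binarization outputs).
    target_bits = torch.sign(model(target_img)).detach()

    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = model(source_img + delta)
        # Hinge loss: push each output past zero on the target bit's side,
        # with a small margin so the sign survives binarization.
        loss = torch.relu(0.1 - target_bits * logits).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the image still looks like the source.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
        if torch.equal(torch.sign(logits.detach()), target_bits):
            break  # the two images now hash to the same 96-bit value

    return (source_img + delta).detach()
```

Standard adversarial-example machinery; the only NeuralHash-specific part is optimizing the pre-sign outputs instead of the (non-differentiable) binarized hash.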
OUTLINE:
0:00 – Intro
1:30 – Forced Hash Collisions via Adversarial Attacks
2:30 – My Successful Attack
5:40 – Results
7:15 – Discussion
DISCLAIMER: This is for demonstration and educational purposes only. This is not an endorsement of illegal activity or circumvention of law.
Code:
Extract Model:
My Video on NeuralHash:
ADDENDUM:
Framing people is a bit more intricate than I make it out to be here. Apple has commented that there would be a second, server-side perceptual hashing scheme whose model would not be released, which makes forging false positives harder. Nevertheless, evading the system remains fairly trivial.
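Evasion is essentially the same optimization run in reverse: nudge the image so its own hash bits flip while keeping the change visually negligible. A sketch under the same assumptions as above (a differentiable `model` returning the 96 pre-sign outputs):

```python
import torch

def evade(model, img, steps=200, lr=1e-2, eps=0.01, flips_needed=1):
    """Minimally perturb img so its NeuralHash differs from the original."""
    orig_bits = torch.sign(model(img)).detach()
    delta = torch.zeros_like(img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = model(img + delta)
        # Drive the outputs across zero, i.e. away from their original signs.
        loss = (orig_bits * logits).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
        if (torch.sign(logits.detach()) != orig_bits).sum() >= flips_needed:
            break  # hash changed: the image no longer matches the database entry

    return (img + delta).detach()
```

Since matching is done on the exact hash, flipping even a single bit with an imperceptible perturbation is enough to dodge a database lookup.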
Links:
TabNine Code Completion (Referral):
YouTube:
Twitter:
Discord:
BitChute:
Minds:
Parler:
LinkedIn:
BiliBili:
If you want to support me, the best thing to do is to share the content 🙂
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar:
Patreon:
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n