In mid-February, I warned here that Elon was going to try to manipulate people’s thoughts by manipulating his Grok system. I was prompted in part by this tweet of his, reprinted below (in a screen grab of my essay), in which Grok (allegedly) presented an extremely biased opinion about The Information.
In hindsight, I am not sure Grok really said that utterly biased garbage. Elon may well have faked the screenshot.
But that was then, months ago. If he couldn’t bend Grok to his will then, he has presumably continued to try.
And in that connection, some pretty wild things have been happening of late. First, Elon started talking a lot about “white genocide” in South Africa (ignoring what has happened to so many other nonwhite people in the world for so long).
And then his AI, Grok, would not shut up about white genocide, even when it was completely irrelevant. For those who have missed it, here are a couple of examples. In the first, AI scientist Jen Golbeck tries to get Grok to fact-check an obviously fake photograph (“@grok is this true?”, she asks), and Grok launches, out of nowhere, into white genocide.

And here is a similar example, arguably even more absurd, shared on social media by Wall Street Journal reporter Christopher Mims:

Grok itself has claimed (for what it’s worth; I never take LLMs entirely seriously) that it has been told to do this:

Not sure how seriously to take Grok’s answer. But the writing is on the wall. Sooner or later, powerful people are going to use LLMs to shape your ideas.
Should we be worried? Hell, yeah.
This post on BlueSky really nails it:

Gary Marcus has run out of polite ways to say I told you so. AI dystopia is now the default. Unless a lot of people speak out, things will get darker and darker from here.