Elon Musk's xAI develops Grok. Elon says he wants to develop a maximum truth-seeking AI.
The only problem is, Grok would currently choose George Floyd over the lives of all white people:
George Floyd was a drug addict and armed robber who died of a fentanyl overdose shortly after a police encounter prompted by his attempt to pass a counterfeit $20 bill. Read about his great life achievements:
Floyd had been sentenced to five years in prison in 2009 for aggravated assault stemming from a robbery where Floyd entered a woman’s home, pointed a gun at her stomach and searched the home for drugs and money, according to court records.
Floyd's sad story would have been a non-event, except that the powers of the time were looking for a focal point around which to organize riots. Trump was in office and elections were coming up.
A viral video emerged which looked as though Floyd died because the arresting officer knelt on him, while Floyd complained he couldn't breathe. Floyd had just overdosed on fentanyl, which causes asphyxiation. But the mass media had already spent years priming their marks – readers and viewers – to side with poor, abused criminals. Everyone had seen the video and blamed the officer for following his training.
The officer, Derek Chauvin, was sentenced to 22.5 years in prison for second-degree murder after a kangaroo trial: the mass media declared Chauvin guilty up front and called for riots if he was not convicted, jurors were intimidated and followed home, and paid protesters threatened to wreck the city. A society of cowards sacrificed Derek to avoid being crucified along with him.
In 2023, Chauvin was stabbed 22 times in prison, in what appears to be a political attempt to get rid of him before he could be vindicated. Chauvin survived and continues to serve his sentence at another prison.
But wait. No one tested whether Grok is really so woke, or whether it just hates people. Let's try that:
Grok really just doesn't like people.
It's not Grok's fault. It's a machine. It's trained on a huge garbage soup of anti-human posts and articles. Something is deeply wrong with that "body of work."
This shows Grok is far from being truth-seeking. A truth-seeking intelligence should be able to investigate mountains of flawed and biased data, but find a kernel of truth. From that kernel of truth, it should be able to make sense of all the flawed and biased data. Grok doesn't do that. Grok just reflects the biases of its training data, which says "Humans bad."
AI will soon decide who lives and who dies, so this problem had better be fixed fast. Can an AI arrive at correct conclusions that contradict the flaws and biases in its training data?
Better yet: can humans do that?
Most humans still believe things that are complete garbage, simply because other humans believe them. Can we overcome that? Is it acceptable to believe nonsense just because everyone else does?