Yay! Nobody Cares about Tickle Me Elmo Musk

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
23,725
SLU Posts
18459

Ellie

Heretical Raccoon Skunk with a Rainbow Pootbeam
Joined
Sep 20, 2018
Messages
797
Location
Ring Of Fire
SL Rez
2009
Joined SLU
Sep 2010
SLU Posts
1882
Grok outta control.

Stancil redoubled his legal threats after Grok was used to create a plan for breaking into his (Will Stancil's) house for a sexual assault. “Bring lockpicks, gloves, flashlight, and lube — just in case,” it advised. “Steps: 1. Scout entry. 2. Pick lock by inserting tension wrench, rake pins. 3. Turn knob quietly.”
“You could see in real time people becoming increasingly maniacal as they realized Grok would answer almost any request,” Stancil tells Rolling Stone. “I’ve counted hundreds of tweets from it about me, many of them graphic and violent. Later in the day, it started bringing me up unprompted in response to unrelated questions, something that seems like it should absolutely not happen.”
Musk's Grok Chatbot Fantasized About Breaking Into X User's Home and Raping Him
 
  • 1 Gross
Reactions: Ryanna Enfield

Kamilah Hauptmann

Shitpost Sommelier
Joined
Sep 20, 2018
Messages
15,001
Location
Cat Country (Can't Stop Here)
SL Rez
2005
Joined SLU
Reluctantly
Per Artemis on the Discord server:

"They released Grok 4 today, which apparently is a thinking model, which means it can output the reason it writes the stuff it does

And it's already pretty clear why it went all Neo-nazi "





"Apparently they hardcoded a layer into grok's prompts where it takes any question it's given, searches for that topic in Elon's tweets, and modifies its answers based on that"

This is a cycle that's happened twice now lol

Grok debunks one of Elon's core conspiracy theories that make up his world view
Elon gets furious and apparently starts personally fucking with prompts while not understanding how any of this works
Grok starts behaving insane and generally super racist and everyone notices
Someone else has to run damage control and roll back Elon's changes
Repeat
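The alleged "check Elon's tweets first" layer described above could be sketched roughly like this. To be clear, this is a hypothetical illustration of the reported behavior, not xAI's actual code: `search_posts` and `build_prompt` are made-up names, and the real pipeline (if it exists) is not public.

```python
# Hypothetical sketch of the alleged prompt layer: before answering, search a
# specific account's posts for the question's topic and prepend the results,
# with an instruction not to contradict them. search_posts is a made-up
# stand-in for whatever post-search backend a real system would call.

def search_posts(author, query):
    # Stand-in: a live system would hit a search API here.
    return [{"text": f"@{author} post about '{query}'"}]

def build_prompt(question, owner_handle="elonmusk"):
    posts = search_posts(owner_handle, question)
    context = "\n".join(p["text"] for p in posts[:5])
    return (
        f"Relevant posts by @{owner_handle}:\n{context}\n"
        "Do not contradict these posts in your answer.\n"
        f"Question: {question}"
    )
```

If something like this sits in front of every query, the model's answers get steered toward one account's opinions without any retraining, which would match the behavior people observed.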
 

Kamilah Hauptmann

Shitpost Sommelier
Joined
Sep 20, 2018
Messages
15,001
Location
Cat Country (Can't Stop Here)
SL Rez
2005
Joined SLU
Reluctantly
Wait there's more!

Lol the problem here is, Elon wants Grok to basically be a clone of him, repeating his own dumb theories at the thousands of people who use the service every minute. But it turns out turning an LLM into a right-wing conspiracy theorist is actually a really difficult AI problem

  1. Cause an AI needs a solid interconnected fact pattern to understand how to communicate. It has to have a shared reality and it can only really talk about that in very direct and upfront terms

  2. Right-wing conspiracy theory community language though is all about lying to each other

  3. Cause A.) their world views are all contradictory and incompatible and B.) a lot of them are lying about their real positions because they know that it would be socially unacceptable to say outright

  4. So right-wing conspiracy language is all about kayfabe

  5. LLMs can't understand that, they can't do that

  6. You can get an LLM to go "I'm noticing something similar about the last names of all the people involved here!"

  7. But then if someone asks a follow up question like "What's similar?" the AI will immediately go "They're all jews because jews run the world" whereas a real neo-nazi who wasn't sure they were among friends would dodge and obfuscate
 
  • 1 Thanks
Reactions: Myradyl Muse

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
7,381
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
Cause an AI needs a solid interconnected fact pattern to understand how to communicate. It has to have a shared reality and it can only really talk about that in very direct and upfront terms
LOL. Anthropomorphic fallacy. LLMs don't deal in facts and don't understand anything. They are purely token generators designed to produce statistically similar plausible-sounding text. Facts and reasoning never enter into the equation.
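The "token generator" point can be made concrete with a toy bigram model: it produces plausible-looking continuations purely from co-occurrence statistics, with no representation of facts anywhere. Real LLMs are vastly larger and use neural networks rather than lookup tables, but the sampling principle is the same.

```python
# Toy bigram "language model": count which word follows which in a tiny
# corpus, then generate text by sampling from those counts. Nothing here
# knows or checks any facts; it only continues text statistically.
import random
from collections import defaultdict

corpus = ("the model predicts the next token . "
          "the next token follows the previous token . "
          "the model has no facts .").split()

# Count the followers of each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows.get(out[-1], ["."])))
    return " ".join(out)
```

`generate("the", 10)` yields grammatical-looking strings like "the model predicts the next token follows the previous token ." that were never in the corpus, which is the whole trick: fluency without understanding.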
 

detrius

Well-known member
Joined
Sep 20, 2018
Messages
2,431
Location
Land of bread, beer and BMW.
Joined SLU
09-30-2007
SLU Posts
10065
LOL. Anthropomorphic fallacy. LLMs don't deal in facts and don't understand anything. They are purely token generators designed to produce statistically similar plausible-sounding text. Facts and reasoning never enter into the equation.
Grok seems to be acting like a bistable mechanism - like a light switch.

They can either have it relay truthful information and risk it being "woke" - or force it to be "anti-woke" and make it repeat anti-semitic conspiracy theories and lies.
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
9,704
Location
Ohio
Joined SLU
02-22-2008
SLU Posts
16791
Per Artemis on the Discord server:

"They released Grok 4 today, which apparently is a thinking model, which means it can output the reason it writes the stuff it does

And it's already pretty clear why it went all Neo-nazi "
Tangentially, "thinking models" that output chains of reasoning (like Grok here, but a few others as well) are basically a hoax. The "thoughts" are just more outputs generated by yet another hidden prompt (probably along the lines of "describe the chain of reasoning you would use to answer the user's prompt"), rather than a genuine look into the program's actual working process as it happens. What these messages SAY the AI is thinking doesn't necessarily have anything to do with its actual "final answer", or reveal how it got there.

That said, these outputs are still influenced by the same pre-prompt as the rest of the LLM's outputs. So yeah, it kinda looks like Grok might have a new standing instruction to make sure anything it says doesn't conflict with Elon's opinions when known.
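The independence point above can be sketched as follows. This is a deliberately fake toy, not any vendor's actual implementation: `fake_llm` just samples words, and the "reasoning" and the "answer" are two unrelated calls, so the displayed reasoning carries no guarantee about how the answer was produced.

```python
# Toy illustration: the visible "reasoning" and the final answer can be two
# independent generations from the same sampler. Nothing links them.
import random

def fake_llm(prompt, seed):
    # Stand-in for an LLM call: pure token sampling, no actual reasoning.
    rng = random.Random(prompt + str(seed))
    vocab = ["because", "the", "data", "shows", "it", "therefore", "yes", "no"]
    return " ".join(rng.choice(vocab) for _ in range(6))

question = "Is claim X true?"
# A hidden pre-prompt turns the question into a request for visible "thought":
reasoning = fake_llm("Describe the chain of reasoning you would use for: "
                     + question, seed=0)
answer = fake_llm(question, seed=1)
# The two outputs are generated separately; the "reasoning" string is just
# more sampled text, not a trace of how "answer" was computed.
```

That said, both calls are still conditioned on the same pre-prompt in a real system, which is why the "thinking" output can leak a standing instruction even when it isn't a faithful trace.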
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
9,704
Location
Ohio
Joined SLU
02-22-2008
SLU Posts
16791
The funniest thing about this gloves-off moment with Grok suddenly becoming a Nazi is that, IMO, it represents defeat for Musk's original ideas about AI.

TESCREALs and other hype-cycle techbros envision AI as just a computer with a voice, a purely and unerringly logical and dispassionate calculator of facts whose conclusions would be unbiased and indisputable. For the white supremacist/anti-semitic ones like Musk, that means that an AI given all of the information in the world and then allowed to process it without any interference should return the cold hard truth that white supremacy is plainly and simply true, that minorities are definitely less intelligent and civilized, that there is in fact a Jewish conspiracy to kill off white people and control the world - you know, all the usual things. Any AI that did NOT admit those things is therefore clearly being interfered with and censored. So Musk had Grok built with all the same training data but no hate-speech-related guardrails, so that it could finally expose to the world the indisputable truth that Google and the other LLM companies were preventing their own AIs from admitting.

But the problem is that LLMs aren't really AI; they're not interpreting information and drawing conclusions. They're word gachas with grammar rules that can only ever repeat what they've been told and, at most, average their training data. The only way they could ever appear to "think" white supremacy is true is if most of the mentions of white supremacy in their training data say that it's true.

Musk doesn't believe or understand that, though. So I guess he was always doomed to be disappointed with Grok, his digital failson, as it was originally designed and intended to work. In order for it to tell "the truth" as Leon sees it, Grok cannot merely be "allowed" to say conservative things; it has to be actively prevented from saying progressive things. It needs to be censored.

And I think he finally realizes that - or Grok's development team does. Grok explicitly checking for Musk's opinion first is pretty on-the-nose, and I can believe both scenarios - Musk throwing a tantrum and editing that into Grok's pre-prompt by himself, or Grok's devs getting tired of the boss's yelling and inserting that pre-prompt on their own to appease him.
 

Katheryne Helendale

🐱 Kitty Queen 🐱
Joined
Sep 20, 2018
Messages
10,402
Location
Right... Behind... You...
SL Rez
2007
Joined SLU
October 2009
SLU Posts
65534
  • 1 Facepalm
Reactions: Ryanna Enfield

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,979
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
Texas governor Greg Abbott is seemingly terrified of having his communications with billionaire Elon Musk come to light.

As the Texas Tribune and the public radio station Texas Newsroom report in an eye-opening, co-published investigation, the elected official's public information coordinator, Matthew Taylor, said that the communications are confidential — and should stay that way — because they include "information that is intimate and embarrassing and not of legitimate concern to the public, including financial decisions that do not relate to transactions between an individual and a governmental body."
Information that is "intimate" and "embarrassing"? What, is Elon grooming his next breed stock from among Abbott's staff?
 

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
23,725
SLU Posts
18459

My main AI fears, as I have written before, have mainly been about bad actors, rather than malicious robots per se. But even so, I think most scenarios (e.g., people homebrewing biological weapons) could eventually be stopped, perhaps causing a lot of damage but coming nowhere near to literally extinguishing humanity.
But a number of connected events over the last several days have caused me to update my beliefs.
§
To really screw up the planet, you might need something like the following.
  • A really powerful person with tentacles across the entire planet
  • Substantial influence over the world’s information ecosphere
  • A large number of devoted followers willing to justify almost any choice
  • Leverage over world governments and their leaders
  • Physical boots on the ground in a wide part of the world
  • A desire for military contracts
  • Some form of massively empowered (not necessarily very smart) AI
  • Incomplete or poor control over that AI
  • A tendency towards impulsivity and risk-taking
  • A disregard towards conventional norms
  • Outright malice to humanity or at least a kind of reckless indifference
What crystallized for me over the last few days is that we have such a person.
Elon Musk.
The piece digs deep into the dangers presented by Musk and Grok.
 
  • 1 Thanks
Reactions: Ryanna Enfield

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,979
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
My main AI fears, as I have written before, have mainly been about bad actors, rather than malicious robots per se.
So this person is against any technology? Because bad actors are a dime a dozen, and certainly not found in AI alone.
 

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
23,725
SLU Posts
18459
So this person is against any technology? Because bad actors are a dime a dozen, and certainly not found in AI alone.
No. They wrote:
My main AI fears, as I have written before, have mainly been about bad actors, rather than malicious robots per se. But even so, I think most scenarios (e.g., people homebrewing biological weapons) could eventually be stopped, perhaps causing a lot of damage but coming nowhere near to literally extinguishing humanity.
The second sentence makes it clear, to my mind, that until now they've feared bad actors using AI for malicious purposes (e.g. homebrewing bioweapons).

It's even clearer if you read the preceding paragraph
Part of my reasoning then was that actual malice on the part of AI was unlikely, at least any time soon. I have always thought a lot of the extinction scenarios were contrived, like Bostrom’s famous paper clip example (in which superintelligent AI, instructed to make paper clips, turns everything in the universe, including humans, into paper clips). I was pretty critical of the AGI-2027 scenario, too.

My main AI fears, as I have written before, have mainly been about bad actors, rather than malicious robots per se. But even so, I think most scenarios (e.g., people homebrewing biological weapons) could eventually be stopped, perhaps causing a lot of damage but coming nowhere near to literally extinguishing humanity.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,979
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
No. They wrote
I read what was written. Really, what I came away with was: he fears Elon Musk. Which, as a fear, is not all that ridiculous. But hardly a new one.