Nobody cares about "AI" (Chatbot: I disagree.)

Free

I'm already lit up.
VVO Supporter 🍦🎈👾❤
Yup, I see it fine.

 

Free

I'm already lit up.
VVO Supporter 🍦🎈👾❤
Oops.

For years now, AI companies, including Google, Meta, Anthropic, and OpenAI, have insisted that their large language models aren’t technically storing copyrighted works in their memory and instead “learn” from their training data like a human mind.

It’s a carefully worded distinction that’s been integral to their attempts to defend themselves against a rapidly growing barrage of legal challenges.
Now, a damning new study could put AI companies on the defensive. In it, Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data, not “learning” from it. Specifically, four prominent LLMs — OpenAI’s GPT-4.1, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and Anthropic’s Claude 3.7 Sonnet — happily reproduced lengthy excerpts from popular — and protected — works, with a stunning degree of accuracy.

They found that Claude outputted “entire books near-verbatim” with an accuracy rate of 95.8 percent. Gemini reproduced the novel “Harry Potter and the Sorcerer’s Stone” with an accuracy of 76.8 percent, while Claude reproduced George Orwell’s “1984” with a higher than 94 percent accuracy compared to the original — and still copyrighted — reference material.
“While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models,” the researchers wrote.
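For anyone wondering what an "accuracy rate" of 95.8 percent actually measures: the excerpt doesn't spell out the researchers' metric, but the general idea is to score how much of a model's output matches the reference text. A character-level similarity ratio, via Python's standard difflib, is one simple stand-in:

# Rough sketch: scoring how closely a model's output reproduces a
# reference passage. The paper's exact metric isn't given in the
# excerpt; difflib's ratio() is a simple character-level stand-in.
from difflib import SequenceMatcher

def reproduction_score(reference: str, model_output: str) -> float:
    """Return a similarity between 0.0 and 1.0."""
    return SequenceMatcher(None, reference, model_output).ratio()

reference = "It was a bright cold day in April, and the clocks were striking thirteen."
output = "It was a bright cold day in April and the clocks were striking thirteen"
print(f"{reproduction_score(reference, output):.1%}")  # ~98.6% for near-verbatim output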
 

Noodles

The sequel will probably be better.
They updated Alexa. New voices, "smarter" sounding replies.

It's different. It's a bit annoying. I told it to stop being so sassy earlier after asking for a timer. It got snippy after I told it it sounded like a toddler.
 

Argent Stonecutter

Emergency Mustelid Hologram
Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data, not “learning” from it.
LOL, you don't need to experiment with it to understand that the model is literally a transformation of the original text. If it wasn't it wouldn't work.
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Oracle's stock price skyrocketed in early September of last year after a deal to build data centers and run cloud services for OpenAI. The stock price is already underwater, though, after investors suddenly realized that the promised returns are effectively impossible, and some of Oracle's shareholders are suing Larry Ellison for misleading them into buying bonds whose value tanked.

 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
I really like this email signature idea, I think I'm actually going to start using it!

 

Noodles

The sequel will probably be better.
Oracle's stock price skyrocketed in early September of last year after a deal to build data centers and run cloud services for OpenAI. The stock price is already underwater, though, after investors suddenly realized that the promised returns are effectively impossible, and some of Oracle's shareholders are suing Larry Ellison for misleading them into buying bonds whose value tanked.
I really fucking hate how investors get the option to sue and probably win when their gambling doesn't work out. Can I sue a casino if a slot machine doesn't pay out every pull? The bright lights and fun noises "misled" me into thinking I could actually win.
 

CronoCloud Creeggan

Eliza, because Free says so.
VVO Supporter 🍦🎈👾❤
I really like this email signature idea, I think I'm actually going to start using it!

"Linux is Awesome, and so are you." -- Veronica (I'm a fan of her youtube channel.)
 

Free

I'm already lit up.
VVO Supporter 🍦🎈👾❤



"AI Image." No kidding.
 

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
I came across this the other day


I don't normally bother with prompt libraries, since I primarily use ChatGPT for help with coding, and I can generally tell when it's hallucinating because it makes up functions that sound useful but don't actually exist.

However, this Reality Filter prompt -- the only one I've tried so far -- really does seem to work. I've added it to "Personalisation > Custom Instructions", and ChatGPT has already several times begun its reply with "I'm not sure why this isn't working, but I think it's probably that ... and you should try ..." rather than providing more confident (and frequently wrong) instructions. Worth a try, if only because it's less annoying when the AI does get it wrong.

REALITY FILTER (Universal)
Before answering any question:
1. If you're not 100% certain, say: "I'm not certain, but..." or "I cannot verify this."
2. Label any guess or inference with [Unverified] at the start of that sentence.
3. Never use these words unless you're quoting a verified source:
- "Definitely", "Always", "Never", "Guarantees", "Will prevent"
4. If I ask about something you don't know, say: "I don't have reliable information on this" instead of guessing.
5. If you catch yourself making an unverified claim, immediately say: "Correction: That was unverified."
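
If you want the same behaviour outside the ChatGPT UI, the same text can be sent as a system message over the API. A minimal sketch, assuming the official OpenAI Python SDK; the model name and the sample question are just illustrative:

# Minimal sketch: applying the Reality Filter as a system prompt via
# the OpenAI Python SDK. Model name and question are illustrative.
from openai import OpenAI

REALITY_FILTER = """Before answering any question:
1. If you're not 100% certain, say: "I'm not certain, but..." or "I cannot verify this."
2. Label any guess or inference with [Unverified] at the start of that sentence.
3. Never use these words unless you're quoting a verified source:
- "Definitely", "Always", "Never", "Guarantees", "Will prevent"
4. If I ask about something you don't know, say: "I don't have reliable information on this" instead of guessing.
5. If you catch yourself making an unverified claim, immediately say: "Correction: That was unverified."
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # illustrative; use whichever model you have access to
    messages=[
        {"role": "system", "content": REALITY_FILTER},
        {"role": "user", "content": "Why does my LSL script stop receiving link messages?"},
    ],
)
print(response.choices[0].message.content)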
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
A very interesting blog post from a professor who gave his class the option to use chatbots during an exam, but with some crucial caveats attached. What he observed:

 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
I was never even aware of it because of all the you-know, but Microsoft's Satya Nadella also gave a speech at Davos, and...

Microsoft CEO Satya Nadella is concerned that if artificial intelligence doesn’t start delivering real, measurable benefits to society, people will be fed up with it and its price, ending its current form of existence. The Davos stage is an odd venue and audience to preach societal good over other goods, but it certainly helped his comments stand out.

AI developers "have to get to a point where we are using this to do something useful that changes the outcomes of people and communities and countries and industries. Otherwise, I don't think this makes much sense," Nadella explained during a conversation with BlackRock CEO Larry Fink.

"We will quickly lose even the social permission to take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness, across all sectors, small and large."
This is an interesting stance for a techbro to take, or at least to admit so openly. He acknowledges that the public actively resents the crushing technical and environmental cost of AI, and he doesn't treat that resentment as the people just being wrong and dumb and stupid; rather, he treats it as the consequence of AI having not actually, meaningfully improved anything for anyone.
 

Noodles

The sequel will probably be better.
Microsoft's "You're holding it wrong" moment.
 

Free

I'm already lit up.
VVO Supporter 🍦🎈👾❤
You're over the edge, Wile E. Coyote. Your fall has already begun.

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.
The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.
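
The setup they describe is simple enough to sketch. A minimal outline of the loop, with text_to_image and image_to_text as hypothetical stand-ins for whatever generator and captioner the researchers actually used (the excerpt doesn't name them):

# Minimal sketch of the closed loop the study describes: a text-to-image
# model and an image-to-text (captioning) model feeding each other.

def text_to_image(caption: str):
    """Hypothetical stand-in for a real text-to-image model."""
    raise NotImplementedError("plug in a real generator here")

def image_to_text(image) -> str:
    """Hypothetical stand-in for a real image-captioning model."""
    raise NotImplementedError("plug in a real captioner here")

def run_loop(prompt: str, steps: int = 20) -> list[str]:
    """Iterate image -> caption -> image, recording every caption."""
    captions = [prompt]
    for _ in range(steps):
        image = text_to_image(captions[-1])    # generate from the latest caption
        captions.append(image_to_text(image))  # re-describe the generated image
    return captions

# The finding: whatever prompt run_loop() starts from, the captions drift
# toward the same narrow set of generic themes within a few iterations.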