Nobody cares about "AI" (Chatbot: I disagree.)

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
Oopsie.

Following backlash in a Hacker News thread, Microsoft deleted a blog post that critics said encouraged developers to pirate Harry Potter books to train AI models that could then be used to create AI slop.

The blog, which is archived here, was written in November 2024 by a senior product manager, Pooja Kamath. According to her LinkedIn, Kamath has been at Microsoft for more than a decade and remains with the company. In 2024, Microsoft tapped her to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.”
What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like Harry Potter books, the blog said.
Oh yes, stick with the well-known datasets. Nothing potentially illegal in that.
 
Last edited:

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
9,680
Location
Ohio
Joined SLU
02-22-2008
SLU Posts
16791
Amazon blames human employees for vibe-coded outages

Numerous unnamed Amazon employees told the FT that AI agent Kiro was responsible for the December incident affecting an AWS service in parts of mainland China. People familiar with the matter said the tool chose to “delete and recreate the environment” it was working on, which caused the outage.

While Kiro normally requires sign-off from two humans to push changes, the bot had the permissions of its operator, and a human error there allowed more access than expected.
 

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
23,631
SLU Posts
18459
Financial Times: How tech turned against women (Evernote Link because paywall)

In late December, over the course of just nine days, xAI’s Grok tool was used to generate and post online millions of non-consensual intimate images of women. Requests to alter women’s images to add bruises, blood and even bullet holes were instantly granted.

Racism was deeply intertwined with the misogyny: Democratic congresswoman Alexandria Ocasio-Cortez, Zendaya, Cardi B and other prominent politicians and celebrities were targeted with requests to portray them with white skin. A Jewish woman found that an AI image had been created showing her in a bikini standing outside Auschwitz. Millions of the images featured child sexual abuse.
Just a few weeks later, Waymo announced that it plans to debut its driverless cars in London by the end of 2026. These vehicles have been in development for years but will still have to prove that they meet strict safety standards, including protection from misuse via hacking or cyber threats, before being allowed on British roads. By contrast, AI tools that enable the harassment, humiliation, abuse and relentless hounding of women apparently require no such guardrails.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.
"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."
Life under our A.I. Overlords might turn out to be easier than we worried. Except for the losing our jobs to them part.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
I'm making a sad face.

Over the weekend, Summer Yue, the director of safety and alignment at Meta’s superintelligence lab, posted on Twitter that OpenClaw deleted her entire inbox despite her pleading messages to stop. OpenClaw (née Clawdbot and Moltbot) has become a popular open-source AI agent for AI evangelists despite the pretty obvious and troubling security vulnerabilities, and Yue wanted to give it a shot. So, according to her post, she set up a Mac Mini running the agent and offered it access to her inbox. You can probably see where this is going.
“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” she wrote. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.” OpenClaw basically went full HAL 9000 on Yue, pulling up just short of saying, “I’m sorry Summer, I’m afraid I can’t do that.” She shared screenshots of her conversation with the agent, showing her begging it to stop and being ignored, concluding with the bot acknowledging that it remembered being told not to delete anything without approval and “violated” that order anyway.
I'm still making a sad face.

Yue chalks it up to a "rookie mistake," which I don't find helps things.
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
9,680
Location
Ohio
Joined SLU
02-22-2008
SLU Posts
16791
1. "Summer Yue" is a SL name if I ever saw one, just saying
2. How on earth is Yue, maker of "rookie mistakes" such as allowing a chatbot to delete her entire company inbox, the director of safety and alignment at global technology megacorporation Meta? Do you really not need any proficiency whatsoever in using the company's chatbot to have an executive role in its design?
3. I really am starting to wonder just how comically far fanboys will go to avoid admitting that their chatbot is a bad piece of software that is unfit for purpose.
 

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
7,355
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
> Do you really not need any proficiency whatsoever in using the company's chatbot to have an executive role in its design?

In typical companies, once the money starts rolling in, they get managers who are experts at managing and rookies at the actual product.

In "Generative AI" companies, even the founders are gullible shills.
 
  • 1Agree
Reactions: Casey Pelous

Noodles

The sequel will probably be better.
Joined
Sep 20, 2018
Messages
5,829
Location
Illinois
SL Rez
2006
Joined SLU
04-28-2010
SLU Posts
6947
2. How on earth is Yue, maker of "rookie mistakes" such as allowing a chatbot to delete her entire company inbox, the director of safety and alignment at global technology megacorporation Meta? Do you really not need any proficiency whatsoever in using the company's chatbot to have an executive role in its design?
Silly, you don't need qualifications, you just ask AI for how to do things.

Like in this case you just enter "AI, your directive is to be more safe!"

And it works like magic!
 
  • 1Like
Reactions: Dakota Tebaldi

Casey Pelous

Senior Discount
VVO Supporter 🍦🎈👾❤
Joined
Sep 24, 2018
Messages
3,172
Location
USA, upper left corner
SL Rez
2007
Joined SLU
February, 2011
SLU Posts
10461
It's a different industry, but it sure reminds me of what has happened to Boeing. Nobody in Big Management has the foggiest idea how to build a wire harness or caulk a window or any of the other thousands of tasks that create an airplane.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
But what would AI Jesus do?

Pope Leo tells priests not to use AI to write homilies or seek likes on TikTok
In a question-and-answer session with clergy from the Diocese of Rome, the pope said priests should resist "the temptation to prepare homilies with artificial intelligence."

"Like all the muscles in the body, if we do not use them, if we do not move them, they die. The brain needs to be used, so our intelligence must also be exercised a little so as not to lose this capacity," Leo said in the closed door meeting, according to a report by Vatican News on Feb. 20.

"To give a true homily is to share faith," and artificial intelligence "will never be able to share faith," the pope added.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
Well this is...sort of expected.

Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises.
The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.

In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
Exclusive: Anthropic Drops Flagship Safety Pledge
Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.
But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance.

Feels this might be worse than when Google dropped their unofficial motto "Don't be evil." Also completely unmentioned in the Time article is the fight between Anthropic and Secretary of Defense Pete Hegseth.
 
  • 1Grumpy Cat
Reactions: Dakota Tebaldi

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
9,680
Location
Ohio
Joined SLU
02-22-2008
SLU Posts
16791
Callers to Washington state's licensing office who press 2 for Spanish language are getting an AI voice speaking English in a phony Latino accent

The Washington Department of Licensing said in a statement that it was trying to fix the Spanish option and figure out how it happened in the first place. It noted that the self-service option includes 10 languages and runs on a newer, AI-driven technology. It was not immediately clear if the issue had affected other languages; efforts by The Associated Press to use the phone service in some of the other languages Thursday did not prompt additional accented voices.

...

Thursday morning, the call line still put on the voice after a message, in English, acknowledging that some translation services are not functioning properly.

When an AP reporter followed prompts for Spanish-language options, he was met with an accented English voice that would only say numbers in Spanish.

“Your estimated wait time is less than ‘tres’ minutes,” the voice said.
Really you need to click the link and go to the article and play the sound clip, it's something special.
 

Free

*censored*
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
41,798
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
Uhm, sure. Why not?

Some Uber employees have created an AI version of their company's top executive, Khosrowshahi said on an episode of The Diary of a CEO podcast hosted by Steven Bartlett.

"One of my team members told me that some teams have built a 'Dara AI,'" Khosrowshahi said. "They basically make the presentation to the Dara AI as a prep for making a presentation to me."
The AI clone helps employees then make changes to their slides and other aspects of their presentation, he said. "They have Dara AI to tune their prep," Khosrowshahi said.

While it's not clear how widespread the use of the CEO bot is within Uber's corporate offices, it's the latest example of employees using AI in new ways to help prepare for high-pressure moments in the workplace.
 

Noodles

The sequel will probably be better.
Joined
Sep 20, 2018
Messages
5,829
Location
Illinois
SL Rez
2006
Joined SLU
04-28-2010
SLU Posts
6947
ChatGPT, if I have an AI version of our CEO, do we need to waste money on the human version anymore?
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
9,680
Location
Ohio
Joined SLU
02-22-2008
SLU Posts
16791
Exclusive: Anthropic Drops Flagship Safety Pledge

Feels this might be worse than when Google dropped their unofficial motto "Don't be evil." Also completely unmentioned in the Time article is the fight between Anthropic and Secretary of Defense Pete Hegseth.
Yeah I don't like how much credit Anthropic is getting for allegedly "standing up to" Hegseth and the Pentagon at the cost of their government contract. Anthropic's line in the sand was specifically mass domestic surveillance. They had no problem at all with mass surveillance of Europeans and other non-Americans.
 

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
7,355
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
Their "safety pledge" was worthless leswrongism anyway.