AI-generated images thread

Free

When The Onion asks, I answer.
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
32,599
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old "extremely technologically savvy" Wisconsin man who allegedly used Stable Diffusion to create "thousands of realistic images of prepubescent minors," which were then distributed on Instagram and Telegram.
The cops were tipped off to Anderegg's alleged activities after Instagram flagged direct messages that were sent on Anderegg's Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.

During the Instagram exchange, the DOJ found that Anderegg sent sexually explicit AI images of minors soon after the teen made his age known; prosecutors allege that "the only reasonable explanation for sending these images was to sexually entice the child."
 

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
5,529
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
"messages that were sent on Anderegg's Instagram account to a 15-year-old boy"

Sounds like they had spicier charges than just generated pictures.
 

Free

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
Among the images linked in the dataset, Han found 170 photos of children from at least 10 Brazilian states. These were mostly family photos uploaded to personal and parenting blogs that most Internet surfers wouldn't easily stumble upon, "as well as stills from YouTube videos with small view counts, seemingly uploaded to be shared with family and friends," Wired reported.
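For context on what "image-text pairs" means here: LAION-5B is distributed as metadata tables of links and captions, not image files, so audits like Han's work by sampling rows and checking where the links point. A minimal sketch of that kind of metadata audit, using made-up rows and assumed field names (URL, TEXT), not the actual LAION tooling:

```python
# Toy stand-in for a LAION-style metadata shard: the dataset stores
# links and captions, not the images themselves (field names assumed).
rows = [
    {"URL": "https://example.com/cat.jpg", "TEXT": "a cat on a sofa"},
    {"URL": "https://blog.example.org/family/photo1.jpg", "TEXT": "birthday party 2012"},
    {"URL": "https://example.net/landscape.png", "TEXT": "mountain at dawn"},
]

# An audit pass samples rows and inspects the source of each link; here a
# crude substring check flags URLs that look like personal-blog hosts.
flagged = [r for r in rows if "blog." in r["URL"]]
print(len(flagged))  # -> 1
```

The point of the sketch: removing a photo from the dataset only removes the link-and-caption row; the image itself stays wherever it was originally posted.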
 

Free

Six fingers by accident? Why not forty of them on purpose! Or how about some missing fingers?!

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.
A Reddit thread titled "Is this release supposed to be a joke? [SD3-2B]" details the spectacular failures of SD3 Medium at rendering humans, especially limbs like hands and feet. Another thread, "Why is SD3 so bad at generating girls lying on the grass?", shows similar issues for entire human bodies.
 

Jopsy Pendragon

. LOCK . HIM . UP .
Joined
Sep 20, 2018
Messages
2,010
Location
San Diego CA
SL Rez
2004
Joined SLU
2007
SLU Posts
11308
... I'm weird. I actually love some of the body horror results that AI image generators come up with. =D