Facepalm.

Like when you ask for a prostrate figure and it gives you anal prolapses.
OK, but then Eliza didn't ship you shit clothing and charge your Mastercard for the benefit.

Spicy autocomplete has been fooling people since the '60s, when Weizenbaum's own receptionist spent hours talking to Eliza.
Decentralized AI? Like Blockchain?

Emad Mostaque has resigned as CEO of Stability AI, the company behind Stable Diffusion, to pursue decentralised AI (which of course is bullshit). He left after several key people had already resigned before him.
Yeah, the real danger and problem with AI. We already exist in a world where at least half the population believes lies that fit their desired worldview. Now we have robots that will tell us all the lies we can swallow, endlessly.

And commenters on the thread are still talking about how it made an "educated guess" and how it was probably right and completely failing to understand what actually happened. I am so done.
Reminds me of a similar kind of thing I'd read about, where an AI was being trained to diagnose tuberculosis from x-rays. Evidently it got quite decent at making correct diagnoses, but it turned out that since most of the training and test x-rays showing positive tuberculosis results had come from poor-ish South and Central American countries (where tuberculosis is more common) with older and less-advanced equipment, the AI had "learned" that the visible hallmarks of an x-ray image from an older machine indicated probable tuberculosis.

LOL. This guy posted on Reddit about Claude making confident responses to being presented this MRI when actual doctors weren't willing to say anything was there, and commenter after commenter goes "See, Claude is smarter than the doctor." If you go to the source post on Twitter, the guy goes on to say that Claude was actually wrong: there really was no tumor there. It was just generating plausible responses, because finding a tumor was more common in its training set than not finding one, so that was the more plausible response. It wasn't actually doing any analysis at all.
And commenters on the thread are still talking about how it made an "educated guess" and how it was probably right and completely failing to understand what actually happened. I am so done.
I remember that story. And that was a legitimate machine-learning scheme with positive and negative examples and a clear testable output... not a parody generator using the image as a prompt.

Reminds me of a similar kind of thing I'd read about, where an AI was being trained to diagnose tuberculosis from x-rays. Evidently it got quite decent at making correct diagnoses, but it turned out that since most of the training and test x-rays showing positive tuberculosis results had come from poor-ish South and Central American countries (where tuberculosis is more common) with older and less-advanced equipment, the AI had "learned" that the visible hallmarks of an x-ray image from an older machine indicated probable tuberculosis.
And that's the problem. The notorious NKVD troikas of Stalin's pre-war purges, usually comprising the local NKVD chief, the local chief prosecutor, and a representative of the local Communist Party, worked to quotas of condemning so many thousand Kulaks/Ukrainians/Poles/whoever a week to death or the gulag for anti-Soviet activities, which generally left them around a minute to deliberate on each victim's fate. But at least there was some human intervention and human responsibility.

"This is unparalleled, in my memory," said one intelligence officer who used Lavender, adding that they had more faith in a "statistical mechanism" than a grieving soldier. "Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier."
Another Lavender user questioned whether humans’ role in the selection process was meaningful. “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
arstechnica.com
I for one do not want to be serviced by our AI overlords.

Depending on who you ask about AI (and how you define it), the technology may or may not be useful, but one thing is for certain: AI hype is dominating corporate marketing these days—even in fast food. According to a report in The Wall Street Journal, corporate fast food giant Yum Brands is embracing an "AI-first mentality" across its restaurant chains, including Taco Bell, Pizza Hut, KFC, and Habit Burger Grill. The company's chief digital and technology officer, Joe Park, told the WSJ that AI will shape nearly every aspect of how these restaurants operate.
"Our vision of [quick-service restaurants] is that an AI-first mentality works every step of the way," Park said in an interview with the outlet. "If you think about the major journeys within a restaurant that can be AI-powered, we believe it’s endless."