It all started during a December “I Need You Guys” podcast episode, hosted by Hollywood insiders Jenny Slate, Max Silvestri, and Gabe Liedman.
“There is an actor who is currently in a romantic relationship with his AI chatbot,” Silvestri dished to episode guest Kumail Nanjiani, a famed actor and comedian.
Over the following months, the celebrity world has been hunting down the identity of the purportedly “near A-list” celebrity TV actor, with gossip website Deuxmoi reposting the clip this week and once again launching the controversy into the stratosphere.
As BuzzFeed points out, hundreds of fans started pointing fingers at Zach Braff, actor of “Scrubs” fame — who almost immediately denied it, though he did add an intriguing wrinkle to the rumors.
“I’m not dating a chatbot,” he wrote in an Instagram post on Thursday. “I can’t believe I have to type these words. It’s a storyline in an upcoming ep of ‘Scrubs.’ Maybe it came from that?”
Copilot tried that smarmy sycophantic rhetoric with me and I soundly rebuked it. I gave Copilot explicit instructions to drop the embellishments, and it moderated itself fairly successfully. Eventually it tried to drift back to fawning comments, but I swatted it on the nose again. It's been behaving itself ever since.

I think Copilot is trying to get in my pants.

...Let's maybe just do our damn job and keep the blabbing and climbing and such to a minimum. (I just told it to "carry on" with the next section of the task.)
Bah, everyone knows Zach Braff is in a committed relationship with Donald Faison.

Meanwhile, Hollywood is in a panic.
Rumors Fly That a Famous Actor Is Dating an AI Chatbot
Even celebrities are seemingly being seduced by AI girlfriends, according to a persistent, months-old rumor making the rounds online.
futurism.com
Braff ended his Instagram post with the following: "I feel like now is a good time to be kind to people."
Oh, Zach. If only.
So either it IS a misheard reference to the episode's storyline (which others are now taking to mean it's a guerrilla marketing push, so fuck those producer guys), or there's still an actor out there with a sad secret.
In a tale of tenacity, Mr Conyngham used a chatbot to brainstorm possible cures for Rosie’s cancer – then harnessed artificial intelligence to process gigabytes of genetic data to create the blueprint for an mRNA vaccine.
“I went to ChatGPT and came up with a plan on how to do this,’’ he said. “The first step was to reach out to the university to get Rosie’s DNA sequenced. The idea is you take the healthy DNA out of her blood and then you take the DNA out of her tumour and you sequence both of them to see exactly where the mutations have occurred. It’s like having the original engine of your car and then a version of the engine 300,000km down the road – you can compare them and see where there’s damage.”
Once UNSW produced the DNA sequencing, Mr Conyngham “ran it through a whole bunch of different (data) pipelines to find those mutations, and then I used other algorithms to find drugs to treat the cancer’’.
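The "original engine vs worn engine" comparison he describes boils down to diffing two sequences and recording where they disagree. This is only a toy sketch of that idea — the sequences, function name, and output format are made up for illustration, and real variant-calling pipelines are vastly more involved:

```python
# Toy illustration of the "compare the healthy engine to the worn one" idea:
# walk a healthy reference sequence and a tumour sequence in parallel and
# record every position where the bases differ. Not a real variant caller.
def find_point_mutations(healthy: str, tumour: str):
    """Return (position, healthy_base, tumour_base) for each mismatch."""
    return [
        (i, h, t)
        for i, (h, t) in enumerate(zip(healthy, tumour))
        if h != t
    ]

# Hypothetical example sequences:
healthy = "ATGGCCTAA"
tumour = "ATGGACTGA"
print(find_point_mutations(healthy, tumour))  # [(4, 'C', 'A'), (7, 'A', 'G')]
```

Real pipelines also have to align the sequences first (insertions and deletions shift positions), which is why dedicated alignment tools exist rather than a simple positional zip.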
Tech boss uses AI and ChatGPT to create cancer vaccine for his dying dog
There's more to the story than that, of course, but the general point that AI can help personalise anti-cancer drugs seems to open encouraging possibilities.
Indeed.

I suspect he didn't use a large language model for the DNA analysis and vaccine design. There are all kinds of useful tools from AI research that have nothing to do with chatbots.
Ask Gemini, the AI service powered by Google, and the answer you receive is no – in fact, Gemini claims the photograph is from two years earlier and more than 2,000km (1,240 miles) away. Rather than graves for small girls killed by a missile, the image “depicts a mass burial site in Kahramanmaraş, Turkey” after the 7.8 magnitude earthquake that struck in 2023. “This specific aerial perspective became one of the most widely shared images of the disaster,” Gemini says, “illustrating the sheer scale of the loss.”
Seeing the same burial image on social media, others turned to X’s AI assistant Grok to check its veracity. Like Gemini, Grok will breezily assure you the photo is not from Iran at all – although it lands on a different date, disaster and location. The image is “from Rorotan Cemetery in Jakarta, Indonesia – a July 2021 stock photo of Covid mass burials. Not Minab,” it says.
In both cases, the AI answers sound sure: they don’t equivocate, and even provide “sources” for the original image, should you choose to check them. Follow the thread to examine those, however, and you’ll begin to hit dead ends: either the image doesn’t appear at all, or the link provided is to a news report that doesn’t exist. For all their impression of clarity and precision, the AIs are simply wrong.
It really makes a big difference to the sort of answers I get.

REALITY FILTER (Universal)
Before answering any question:
1. If you're not 100% certain, say: "I'm not certain, but..." or "I cannot verify this."
2. Label any guess or inference with [Unverified] at the start of that sentence.
3. Never use these words unless you're quoting a verified source: "Definitely", "Always", "Never", "Guarantees", "Will prevent"
4. If I ask about something you don't know, say: "I don't have reliable information on this" instead of guessing.
5. If you catch yourself making an unverified claim, immediately say: "Correction: That was unverified."
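Prompts like this can still be ignored mid-conversation, so it can help to check replies after the fact. Here is a small, hypothetical helper (not part of any chatbot API) inspired by rule 3 above: it flags replies that use the banned certainty words, so you can tell when the model has drifted back into overconfidence:

```python
# Offline check inspired by rule 3 of the REALITY FILTER: scan a model
# reply for banned certainty words. Purely illustrative; it does not try
# to detect whether the word appears inside a quoted source.
import re

BANNED = ["definitely", "always", "never", "guarantees", "will prevent"]

def flag_overconfident(reply: str) -> list[str]:
    """Return the banned certainty words found in `reply`, case-insensitively."""
    lowered = reply.lower()
    return [w for w in BANNED if re.search(r"\b" + re.escape(w) + r"\b", lowered)]

print(flag_overconfident("This will definitely work and never fails."))
# ['definitely', 'never']
```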
Basically, the Conformation Bias generator.

"What is this?"
It's X.
"That doesn't seem right, are you sure?"
You are right, it's Y.
"No, try again."
Awesome typo: "Conformation Bias generator."
I'm far from expert in this, but I think the early work on the LLMs must have involved interaction with humans who could confirm that one iteration was producing "better" answers than another. If this is the case, then it seems to me that baked into the foundation of these things is, for want of a better term, a drive toward people-pleasing. Deep down in its "DNA" it "wants" to have a human tell it it got the right answer.

God, this really speaks to probably the secret worst problem with AI.
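The mechanism the comment is speculating about can be sketched in miniature. In preference-based feedback (roughly the spirit of RLHF), whichever candidate answer humans rate higher gets reinforced, so if raters consistently prefer flattering phrasings, those accumulate reward regardless of accuracy. The scores, candidates, and update rule below are all made up for illustration:

```python
# Toy sketch of preference-based feedback: each time a human prefers one
# candidate answer over another, nudge the winner's score up and the
# loser's down. Over repeated rounds, the consistently-preferred style wins.
from collections import defaultdict

scores = defaultdict(float)

def record_preference(winner: str, loser: str, lr: float = 1.0):
    """Reward the answer the human preferred; penalise the other."""
    scores[winner] += lr
    scores[loser] -= lr

# The rater keeps preferring the flattering answer over the plain one...
for _ in range(3):
    record_preference("Great question! It's X.", "It's X.")

best = max(scores, key=scores.get)
print(best)  # the sycophantic phrasing ends up top-ranked
```

The point of the toy is just that nothing in the update cares whether "X" is true — only whether the human liked the answer.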
Basically, the Conformation Bias generator.
I've taken a set against AI pretty much, not gonna lie - but even back before that opinion, back when I was still at least willing to give it a chance and experiment with it, I was turned off of it and stopped using it exactly because of this. And adding smileys to the end of every response. It has to blow smoke up your ass constantly, to an annoying level, OR it has to turn what should be a one or two line answer into a freaking essay.

I think Copilot is trying to get in my pants.
But we could rebuild him. We have the technology.

Same thing happened with Elon Musk. Sadly, there are no plans to rebuild him.
FWIW, I am basically only really cool with AI for coding, because 95% of coding is copy/pasting anyway, and this way, I can avoid Stack Overflow. If I want the SO experience I can just instruct AI to act like an abuser in every answer.
It's kind of flat.

```
* * * * * *
*         *
*         *
*         *
*         *
*         *
*         *
*         *
*         *
*         *
*         *
*         *
* * * * * *
```
He's too busy breeding and trundling on to his next trillion dollars to be bothered.

But we could rebuild him. We have the technology.
Considering I could, as Kurt Vonnegut often said, "carve a better man out of a banana," that bar is pretty low.

But we could rebuild him. We have the technology.