ChatGPT

Free

Deviled Girl
VVO Supporter 🍦🎈👾❤
Joined
Sep 22, 2018
Messages
30,167
Location
Moonbase Caligula
SL Rez
2008
Joined SLU
2009
SLU Posts
55565
I wonder what he actually did; the company's statement is predictably vague.
Someone appropriately asked ChatGPT for an answer.



Nondescript question, nondescript answer. Not all that helpful, ChatGPT...
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
8,018
Location
Gulf Coast, USA
Joined SLU
02-22-2008
SLU Posts
16791
There's all kinds of speculation flying around about what led to this, everything from a hostile takeover by Microsoft (unlikely imo, since Microsoft only found out after it happened) to some kind of sexual scandal involving Altman.

The most developed rumor that actually fits the few things that are known is this: it was an internal philosophical battle between Altman and OpenAI's engineering staff, and the engineers won. What allegedly happened was, sometime very recently, the developers made a very substantial amount of progress toward achieving AGI. In case you don't know what that is - you'll notice how smug wet blankets like Argent and me like to point out that things like ChatGPT that like to be called AI by their developers and the press aren't really AI, not the kind of AI that you know and dread from scifi novels and movies. LLMs like ChatGPT, when they give you answers to questions, don't actually, like, reason out the answer and give that to you; they mathematically predict a plausible continuation one small chunk of text at a time (a "token", typically a few characters), refining those predictions through a weighting system and...well you get the idea. There's nobody "home" inside ChatGPT's house, there's just some lights on and music playing and some cardboard cutouts that Kevin McCallister rigged up with wires and pulleys to cast shadows that look like people moving around in front of the windows.
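The token-by-token prediction loop described above can be sketched in miniature. This is a toy with a hand-written probability table, nothing like a real model's learned weights over a huge vocabulary, but it shows the shape of the loop: score candidate continuations, pick one, repeat.

```python
# Toy next-token predictor. Real LLMs learn these probabilities from
# data over ~100k-token vocabularies; this table is made up for show.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def predict_next(token):
    """Return the highest-probability continuation, or None if unknown."""
    candidates = bigram_probs.get(token)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start, max_tokens=5):
    """Repeatedly append the most likely next token, like greedy decoding."""
    out = [start]
    while len(out) < max_tokens:
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # "the cat sat down"
```

No reasoning happens anywhere in that loop; it just keeps emitting whichever continuation scores highest, which is the sense in which nobody is "home".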

But that doesn't mean that AI companies aren't, like, still actively trying to achieve the scifi-movie version of AI, the real deal. They have been, the whole time, and things like ChatGPT have always been byproducts of that research. This goal is referred to as "AGI", artificial general intelligence; it's the holy grail that they're really after. They had to start calling it something slightly different once the zeitgeist ran away with the original "AI" term.

Well anyways so, supposedly, OpenAI's chief engineers (I don't remember their names and I can't be arsed to go back and find them) are allegedly convinced that they've recently had some breakthroughs that have gotten them much closer to AGI than is being let on. I don't know how much I believe that personally, but what I think doesn't matter - THEY (allegedly) think so, and they (allegedly) demonstrated the most recent work to Altman and he was (allegedly) impressed and understood the significance of the progress. However, despite that understanding or maybe because of it, Altman (allegedly) deliberately refrained from informing the board of directors about that progress and the engineers' assessment of it.

Why would Altman (allegedly) do that? It's because OpenAI's bylaws forbid any developed AGI tech from being commercialized, and apparently(?) impose a bunch of safety protocols that would impact things like pace of development if it's suspected that AGI has been or is close to being achieved. That's a big problem if you're Sam Altman, because you're now pulling in serious money from commercializing ChatGPT and have big plans for a whole "store" rollout and all that jazz - the moment an upcoming version of ChatGPT gets labeled "AGI" by OpenAI, it can't be sold on as an upgrade to Microsoft for another bajillion dollars. So you have what I guess you could call a "perverse incentive" to hide or downplay or deny your engineers' progress as much as you can, for as long as you can, so that you can keep making money on it.

Supposedly, the engineering team leaders are highly upset by this. They (allegedly) believe that Altman has dollar signs in his eyes and is turning away from the non-profit's original "betterment of humanity" mission and safety philosophy and is promoting an unsafe production pace in favor of making money. So the story is, the engineering leads went to the board directly, and managed to convince them that Altman wasn't being straightforward with them about the progress of development. And the board decided it was bad enough to fire Altman. Microsoft was not consulted because they would absolutely have tried to influence the board to prevent Altman's firing, seeing as they share his incentive to keep GPT commercial for as long as possible.

So that's the rumor anyways. I don't know if ANY of that is true; but, if it is, it WOULD explain the company's "lack of candor" statement, it would explain the suddenness of the action and announcement, and it would explain why Microsoft didn't find out until the rest of us did.
 

Free

you'll notice how smug wet blankets like Argent and me like to point out that things like ChatGPT that like to be called AI by their developers and the press aren't really AI
That's not smugness. It's called being real.
 

Free

On Friday, OpenAI fired CEO Sam Altman in a surprise move that led to the resignation of President Greg Brockman and three senior scientists. The move also blindsided key investor and minority owner Microsoft, reportedly making CEO Satya Nadella furious. As Friday night wore on, reports emerged that the ousting was likely orchestrated by Chief Scientist Ilya Sutskever over concerns about the safety and speed of OpenAI's tech deployment.

"This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity," Sutskever told employees at an emergency all-hands meeting on Friday afternoon, as reported by The Information.
Internally at OpenAI, insiders say that disagreements had emerged over the speed at which Altman was pushing for commercialization and company growth, with Sutskever arguing to slow things down. Sources told reporter Kara Swisher that OpenAI's Dev Day event hosted November 6, with Sam front and center in a keynote pushing consumer-like products, was an "inflection moment of Altman pushing too far, too fast."
 

Argent Stonecutter

Emergency Mustelid Hologram
Joined
Sep 20, 2018
Messages
4,955
Location
Coonspiracy Central, Noonkkot
SL Rez
2005
Joined SLU
Sep 2009
SLU Posts
20780
THEY ARE NOT BUILDING AGI

Spicy autocomplete is a dead end if your goal is AI.

Do they actually believe their snake oil in the C-suite? Dang.
 

Bartholomew Gallacher

Well-known member
Joined
Sep 26, 2018
Messages
4,702
SL Rez
2002
More news about that firing:

1. Microsoft as a key investor in OpenAI was not informed upfront about Altman's firing. It learned about it basically one minute before the press release was issued.
2. Greg Brockman, the other founder of OpenAI, also left the company, along with three of its senior AI researchers.

It is really unusual that Microsoft, as such an important investor, was not made aware of the firing in advance.

Of course the "why" is now the source of much speculation. Are they burning money? Or was there an internal dispute about, let's say, military/intelligence use of the technology, with which some people agreed and others didn't, so that one side has now gotten the upper hand? And if so, which side succeeded?

An important bit of information: OpenAI's board consists mostly of outsiders. After Altman and Brockman’s departures, its remaining board members are the company’s chief scientist, Ilya Sutskever, Quora CEO Adam D’Angelo, Tasha McCauley, the former CEO of GeoSim Systems, and Helen Toner, the director of strategy at Georgetown’s Center for Security and Emerging Technology.

 

Dakota Tebaldi

THEY ARE NOT BUILDING AGI

Spicy autocomplete is a dead end if your goal is AI.

Do they actually believe their snake oil in the C-suite? Dang.
Maybe so, maybe not; but one thing is hard to deny: OpenAI is owned by a non-profit and is supposed to be developing whatever it's developing with safety and humanitarian interests in mind first, according to its own stated business philosophy, whereas Altman was pretty obviously pushing hard to turn it into just the next big profitable Tech Company, likely with an eye toward market domination and monopoly, because that is the Techbro Dream. Whether achieving AGI turns out to be a pipe dream or not, getting rid of Altman, if he really was becoming profit-focused, is one of the most laudable moves any tech company has ever made.

Like imagine if years and years and years ago Google, the single original company which had "Don't be evil" literally written into its original business plan and company CoC, noticed what Lex Bezos was doing and went to him and were like "Bro, you're starting to be kind of evil, sorry" and fired him. The internet would be a different place right now. Yeah, we can be cynical and say that maybe it wouldn't be, and maybe some other company that was okay with being evil would just have risen in its stead, and today 80% of the internet would look exactly the same with Yahoo everywhere instead of Google - but maybe it wouldn't.
 

Argent Stonecutter

They seem to be sucked into the Less Wrong community, or something adjacent to it, and are thinking of safety in terms of "not creating Roko's Basilisk or a paperclip maximizer", and would express outrage if someone told them that corporations are already paperclip maximizers and anything they create will be used by corporations to maximize paperclips.

 

Veritable Quandry

Specializing in derails and train wrecks.
Joined
Sep 19, 2018
Messages
3,838
Location
Columbus, OH
SL Rez
2010
Joined SLU
20something
SLU Posts
42
It is a bit strange. He was a board member. The chair of the board quit in protest. Both seemed surprised. Did they call a board meeting and vote without informing two members? That is the sort of thing that keeps lawyers busy.
 

Bartholomew Gallacher

Like imagine if years and years and years ago Google, the single original company which had "Don't be evil" literally written into its original business plan and company CoC, noticed what Lex Bezos was doing and went to him and were like "Bro, you're starting to be kind of evil, sorry" and fired him. The internet would be a different place right now. Yeah, we can be cynical and say that maybe it wouldn't be, and maybe some other company that was okay with being evil would just have risen in its stead, and today 80% of the internet would look exactly the same with Yahoo everywhere instead of Google - but maybe it wouldn't.
The internet looks the way it does today because people are lazy, simple as that. It is we who put so much power into the tech bros' hands, because we are too lazy to use the fediverse: we use Facebook instead of something else, Twitter instead of Mastodon, Flickr instead of our own photo galleries, Discord instead of Mumble, Gmail instead of plain email...

Why? Because of network effects - everybody else is using it as well... and that's all we care about: our convenience, our laziness. Nowadays it would be impossible for a site like GeoCities to become a big success.

People are lazy; they want simple solutions for complex problems - and this is what made the tech bros so successful.
 

Dakota Tebaldi

It started with using Gmail instead of email; but I kind of disagree that it was about laziness. I think it started out as being about cost.

Back during the Web 1.0 days, your ISP had an email server and gave you an email address - something like yourname@yourISP.net. I was a RL kid back at the very, very end of Web 1.0, so my dad had one of those because the internet service was under his name. But what about the rest of us? Even back then, in order to sign up for things you needed a personal email address. Maybe some premium ISP tiers offered email addresses for the whole family (I don't know); but for the most part, if you weren't the actual billed owner of an internet account, the only way you could get an email address was by getting a Gmail or Hotmail or Yahoo account. They were free, and the trade-off was you had to put up with ad banners at the top of your inbox - but like, it was the internet, you were already used to that.

Just the fact that families existed at all meant that inevitably there were more people on the internet than were actually paying for an internet connection, so it only took a couple of years before there were just legions more web mail accounts than user email accounts on ISP servers. Eventually ISPs just stopped maintaining email servers because nobody was using them anymore - I don't think in my entire adult life I've had a single ISP that offered an email address with my account. So when Web 2.0, with its "targeted ads" and systemic mass private-data harvesting, rolled around, web mail companies like Google basically had a captive audience. They had a "free" product that by then was almost impossible to exist on the internet without.

Facebook, Twitter, "social media", all that jazz - those all came later. They were born because Google had already ruined the internet and set the stage for them.
 

Argent Stonecutter

Again, nothing OpenAI has done so far has been "responsible". If they were interested in being responsible they'd be paying for their source data. But of course their business model couldn't afford that.

Seriously, what they think of as being "responsible" has nothing to do with real world here-and-now concerns, and everything to do with transhumanism and singularitarian eschatology. It's basically religion.
 

Noodles

☑️
Joined
Sep 20, 2018
Messages
2,913
Location
Illinois
SL Rez
2006
Joined SLU
04-28-2010
SLU Posts
6947
Facebook, Twitter, "social media", all that jazz - those all came later. They were born because Google had already ruined the internet and set the stage for them.
I feel like a huge downward swing was when Google delivered a one-two punch that killed the personal blog. They used to have Google Blog Search as its own separate tab that returned JUST blogs; then they folded it into, I think, Google News. Now Joe Blow's blog about caterpillars or whatever is SEO-competing with 1000 corporate-owned news sites that have probably mentioned caterpillars in some passing way, so he never gets found.

Around that time they also killed Google Reader, the easiest RSS reader to use and to find. It was incredibly clean too, no ads. I feel like Google basically wanted to kill RSS in part for this reason: their ONLY business is selling ads, and RSS lets people read articles without them. Also, they were heavily forcing Google+ on everyone around that time, and within a month (I forget if it was before or after) they had rolled out G+ Pages. I feel like they killed Reader to "encourage" using Pages for your blog/news org/whatever. Except Pages still had no API, so to maintain your G+ page, you had to manually post a link to each new post.

Anyway, RSS and blogs both being effectively stabbed and stepped on by Google helped accelerate the death of privately run blogs and websites. I feel like Google also hated this corner of the web because a LOT of these sites were simply passion projects: Joe Blow was never interested in selling ads on his blog, he just liked caterpillars and really wanted to tell the world about them.
 
Joined
Sep 19, 2018
Messages
5,347
Location
NJ suburb of Philadelphia
SL Rez
2003
SLU Posts
4494
They were free, and the trade-off was you had to put up with ad banners at the top of your inbox - but like, it was the internet, you were already used to that.
Another huge advantage was that you didn't lose all of your email if you switched ISPs.
 

Argent Stonecutter

Before gmail I ran my own mail server, but the hoops you have to jump through to get your mail accepted by the likes of Google made that too hard.
 

Bartholomew Gallacher

Before gmail I ran my own mail server, but the hoops you have to jump through to get your mail accepted by the likes of Google made that too hard.
Well, since I am still feeding mail from my own mail server to Gmail: you need at minimum to have SPF or DKIM implemented. And you'd better have some rate limiting in place, because Gmail is really picky about that.

But creating an SPF record is done quite quickly. The problem starts when Gmail suddenly puts you on its internal shitlist for undisclosed and unknown reasons, and your users start crying about their valuable emails not reaching their destination. The same goes, by the way, for Hotmail.
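For reference, the minimum setup described above boils down to a couple of DNS TXT records along these lines; the domain, IP, selector, and key here are placeholders, not real values:

```
; SPF: only the domain's MX hosts and the listed IP may send mail for it
example.com.                  IN TXT "v=spf1 mx ip4:203.0.113.25 -all"
; DKIM: public key published under the selector the mail server signs with
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
```

With `-all`, receivers are told to reject mail from any other source; `~all` (softfail) is the more forgiving choice while testing.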

Which is really funny, because nowadays Gmail is one big source of spam.
 

Dakota Tebaldi

Again, nothing OpenAI has done so far has been "responsible". If they were interested in being responsible they'd be paying for their source data. But of course their business model couldn't afford that.

Seriously, what they think of as being "responsible" has nothing to do with real world here-and-now concerns, and everything to do with transhumanism and singularitarian eschatology. It's basically religion.
I agree - I'm just saying, the biggest evangelists for those beliefs and views at OpenAI were Altman and Brockman, and OpenAI just kicked them out (or tried to).