There's all kinds of speculation flying around about what led to this, everything from a hostile takeover by Microsoft (unlikely imo, since Microsoft only found out after it happened) to some kind of sexual scandal involving Altman.
The most developed rumor that actually fits the few things that are known is this: it was an internal philosophical battle between Altman and OpenAI's engineering staff, and the engineers won. What allegedly happened was, sometime very recently, the developers made very substantial progress toward achieving AGI. In case you don't know what that is - you'll notice how smug wet blankets like Argent and me like to point out that things like ChatGPT, which get called AI by their developers and the press, aren't really AI, not the kind of AI that you know and dread from scifi novels and movies. LLMs like ChatGPT don't actually, like, reason out an answer to your question and give it to you; they statistically predict a plausible answer one token at a time - a token being a short fragment of text, usually a few characters - picking each next token from a probability distribution the model learned from mountains of training text, and... well, you get the idea. There's nobody "home" inside ChatGPT's house, there's just some lights on and music playing and some cardboard cutouts that Kevin McCallister rigged up with wires and pulleys to cast shadows that look like people moving around in front of the windows.
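If you want to see the shape of that trick, here's a toy sketch. To be clear, this is NOT how GPT actually works under the hood - the real thing computes its probabilities with a gigantic neural network over a vocabulary of tens of thousands of tokens, and the little lookup table, weights, and prompt below are all made up. It just shows the loop: look at the text so far, get a weighted list of plausible next fragments, roll the dice, append, repeat.

```python
import random

# Toy stand-in for a language model: given the text so far, return
# candidate next tokens with weights. (A real LLM computes these
# weights with a huge neural network; this lookup table is invented.)
def next_token_weights(context: str) -> dict[str, float]:
    table = {
        "The cat": {" sat": 0.6, " ran": 0.3, " exploded": 0.1},
        "The cat sat": {" on": 0.8, " quietly": 0.2},
        "The cat sat on": {" the": 0.9, " a": 0.1},
        "The cat sat on the": {" mat": 0.7, " roof": 0.3},
    }
    return table.get(context, {".": 1.0})  # unknown context: just end the sentence

def generate(prompt: str, max_tokens: int = 10) -> str:
    text = prompt
    for _ in range(max_tokens):
        weights = next_token_weights(text)
        tokens = list(weights.keys())
        # Sample the next fragment in proportion to its weight -
        # no reasoning, no understanding, just weighted dice.
        token = random.choices(tokens, weights=list(weights.values()))[0]
        text += token
        if token == ".":
            break
    return text

print(generate("The cat"))  # e.g. "The cat sat on the mat."
```

Nothing in that loop "knows" what a cat is or why it would sit on a mat. It's weighted dice all the way down; the output only looks thoughtful because the weights were tuned on an ocean of human-written text.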
But that doesn't mean AI companies aren't still actively trying to achieve the scifi-movie version of AI, the real deal. They have been, the whole time, and things like ChatGPT have always been byproducts of that research. The goal is referred to as "AGI" - artificial general intelligence - and it's the holy grail that they're really after. They had to start calling it something slightly different since the zeitgeist ran away with the original "AI" term.
Well, anyways: supposedly, OpenAI's chief engineers (I don't remember their names and I can't be arsed to go back and find them) are convinced that they've recently had some breakthroughs that have gotten them much closer to AGI than is being let on. I don't know how much I believe that personally, but what I think doesn't matter - THEY (allegedly) think so, they (allegedly) demonstrated the most recent work to Altman, and he was (allegedly) impressed and understood the significance of the progress. However, despite that understanding - or maybe because of it - Altman (allegedly) deliberately refrained from informing the board of directors about that progress and the engineers' assessment of it.
Why would Altman (allegedly) do that? Because OpenAI's bylaws forbid any developed AGI tech from being commercialized, and apparently(?) impose a bunch of safety protocols that would slow the pace of development once AGI is suspected to have been achieved, or to be close. That's a big problem if you're Sam Altman, because you're now pulling in serious money from commercializing ChatGPT and have big plans for a whole "store" rollout and all that jazz - the moment an upcoming version of ChatGPT gets labeled "AGI" by OpenAI, it can't be sold to Microsoft as an upgrade for another bajillion dollars. So you have what I guess you could call a "perverse incentive" to hide or downplay or deny your engineers' progress as much as you can, for as long as you can, so that you can keep making money on it.
Supposedly, the engineering team leaders are highly upset by this. They (allegedly) believe that Altman has dollar signs in his eyes, that he's turning away from the non-profit's original "betterment of humanity" mission and safety philosophy, and that he's pushing an unsafe development pace in favor of making money. So the story goes, the engineering leads went to the board directly and managed to convince them that Altman wasn't being straightforward about the progress of development. And the board decided it was bad enough to fire him. Microsoft was not consulted because they would absolutely have tried to lean on the board to prevent Altman's firing, seeing as they share his incentive to keep GPT commercial for as long as possible.
So that's the rumor, anyways. I don't know if ANY of that is true, but if it is, it WOULD explain the board's "lack of candor" statement, it would explain the suddenness of the action and announcement, and it would explain why Microsoft didn't find out until the rest of us did.