What is going on with OpenAI?


It’s mystifying. I’m not a fan of the company, OpenAI — they’re the ones hyping up ChatGPT, they’re 49% owned by Microsoft, which, as usual, wants to take over everything, and their once and future CEO Sam Altman seems like a sleazy piece of work. But he has his fans. He was abruptly fired this past week (and what’s up with that?), there was some kind of internal revolt, and now he’s being rehired? Appointed to a new position? Confusion and chaos! It’s a hell of a way to run a company.

Here, though, is a hint of illumination.

Sam Altman, the CEO of OpenAI, was unexpectedly fired by the board on Friday afternoon. CTO Mira Murati is filling in as interim CEO.

OpenAI is a nonprofit with a commercial arm. (This is a common arrangement when a nonprofit finds it’s making too much money. Mozilla is set up similarly.) The nonprofit controls the commercial company — and they just exercised that control.

Microsoft invested $13 billion to take ownership of 49% of the OpenAI for-profit — but not of the OpenAI nonprofit. Microsoft found out Altman was being fired one minute before the board put out its press release, half an hour before the stock market closed on Friday. MSFT stock dropped 2% immediately.

Oh. So this is a schism between the controlling non-profit side of the company, and the money-making for-profit side. It’s an ideological split! But what are their differences?

The world is presuming that there’s something absolutely awful about Altman just waiting to come out. But we suspect the reason for the firing is much simpler: the AI doom cultists kicked Altman out for not being enough of a cultist.

There were prior hints that the split was coming, from back in March.

In the last few years, Silicon Valley’s obsession with the astronomical stakes of future AI has curdled into a bitter feud. And right now, that schism is playing out online between two people: AI theorist Eliezer Yudkowsky and OpenAI Chief Executive Officer Sam Altman. Since the early 2000s, Yudkowsky has been sounding the alarm that artificial general intelligence is likely to be “unaligned” with human values and could decide to wipe us out. He worked aggressively to get others to adopt the prevention of AI apocalypse as a priority — enough that he helped convince Musk to take the risk seriously. Musk co-founded OpenAI as a nonprofit with Altman in 2015, with the goal of creating safer AI.

In the last few years, OpenAI has adopted a for-profit model and churned out bigger, faster, and more advanced AI technology. The company has raised billions in investment, and Altman has cheered on the progress toward artificial general intelligence, or AGI. “There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he tweeted in December.

Yudkowsky, meanwhile, has lost nearly all hope that humanity will handle AI responsibly, he said on a podcast last month. After the creation of OpenAI, with its commitment to advancing AI development, he said he cried by himself late at night and thought, “Oh, so this is what humanity will elect to do. We will not rise above. We will not have more grace, not even here at the very end.”

Given that background, it certainly seemed like rubbing salt in a wound when Altman tweeted recently that Yudkowsky had “done more to accelerate AGI than anyone else” and might someday “deserve the Nobel Peace Prize” for his work. Read a certain way, he was trolling Yudkowsky, saying the AI theorist had, in trying to prevent his most catastrophic fear, significantly hastened its arrival. (Yudkowsky said he could not know if Altman was trolling him; Altman declined to comment.)

Yudkowsky is a kook. What is he doing having any say at all in the operation of any company? Why would anyone sane let the LessWrong cultists anywhere near their business? It does explain what’s going on with all this chaos — it’s a squabble within a cult. You can’t expect it to make sense.

This assessment, though, helps me understand a little bit about what’s going on.

Sam Altman was an AI doomer — just not as much as the others. The real problem was that he was making promises that OpenAI could not deliver on. The GPT series was running out of steam. Altman was out and about in the quest for yet more funding for the OpenAI company in ways that upset the true believers.

A boardroom coup by the rationalist cultists is quite plausible, as well as being very funny. Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar. It’s standard for rationalists to call people who don’t buy their pitch liars.

So what from normal people would be an accusation of corporate war crimes is, from rationalists, just how they talk about the outgroup of non-rationalists. They assume non-believers are evil.

It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.

There are many things to loathe Sam Altman for — but not being enough of a cultist probably isn’t one of them.

We think more comedy gold will be falling out over the next week.

Should I look forward to that? Or dread it?


It’s already getting worse. Altman is back at the helm, there’s been an almost complete turnover of the board, and they’ve brought in…Larry Summers? Why? It’s a regular auto-da-fé, with the small grace that we don’t literally torture and burn people at the stake when the heretics are dethroned.

Comments

  1. Philip Hand says

    I think your ideology reading is less likely than “it was just incompetence.” They all have weird and divergent ideologies. The point of good institutional practice is that it enables people with different ideologies to work together for the goals of the company. And they probably… don’t have those good practices? Hence they blow up every now and then.
    @PZ I don’t know if you will have any insight into this, but as you’re a biologist I thought I’d ask. I continue to be surprised by the lack of impact AI is having. In biology, I thought that AlphaFold looked like a genuinely big breakthrough, but I’ve heard very little subsequent news about how these protein structures have helped anyone. In your field, might they actually be helpful? Or was that all just hype?

  2. says

    I mean, sure the human race can allow a few people to hoard all of the tokens of exchange (it’s an instinct, billionaires will just keep going, at each other eventually too). But once the abstract, value-based currency loses too much utility with the majority of the species, we will get tired of playing tokens of exchange.

    There are resources, there are problems to fix the resources. The collective economy looks like an intergenerational pyramid scheme little better than Putin and his supporters in the long run.

  3. Dunc says

    Why would anyone sane let the LessWrong cultists anywhere near their business?

    I’m not sure there is anyone sane on any side of this. They’re all weirdos, just different and incompatible flavours of weirdo.

    So this is a schism between the controlling non-profit side of the company, and the money-making for-profit side.

    Well, it’s not really “money-making” in the old-fashioned, pedestrian sense of, y’know, actually making any money. In fact it’s burning through money (and money-equivalents – a huge chunk of the $13 billion Microsoft has “invested” is in the form of free Azure compute resource rather than cash) at a fairly astonishing rate. But it does at least theoretically aspire to make money, at some unspecified future date.

  4. drsteve says

    Is being obliged to work with Larry Summers any less cruel and unusual than burning at the stake, though?

  5. wzrd1 says

    Well, there was a bit of a revolt. Out of 770 employees, 700 threatened to quit if Altman wasn’t brought back.
    What’s the product output of any company where 90% of the workforce quits, again?

    As for the money winning, maybe, but the money’s board members do have to step down.

    Oh, telling was Microshaft’s job offer to Altman at the beginning of the fiasco, same pay at their company, doing much the same thing. But, their board member voting him out certainly wasn’t anything like a conflict of interest… I wonder, does the SEC or FTC regulate such non-profit admix companies?

  6. matunos says

    Yeah, the lesson is…money wins. Money always wins.

    Money certainly played a role here, but something like 70% of the company’s staff threatening to quit probably had more direct impact in this case.

  7. KG says

    Since the early 2000s, Yudkowsky has been sounding the alarm that artificial general intelligence is likely to be “unaligned” with human values

    As long as it’s unaligned with Yudkowsky’s values, there’s hope!

  8. Daniel Gaston says

    @Philip Hand I think it would be incorrect to state that AI isn’t making an impact in biology. I think it really depends on what area you are working in, and that’s leaving aside the whole terminology issue of “AI” versus Deep Learning versus Machine Learning, etc. But the various Deep Learning models and architectures that generative AI is built on are all over biology, particularly molecular biology and medicine. Most of it is solidly on the research side; every grant review cycle I’m in, a substantial portion of the applications will involve some sort of Deep Learning/AI approach for trying to identify “signatures” from multi-modal datasets.

    Honestly, while things like structure prediction of proteins have always been a big area of research trying to “crack” it, I think its practical use case was always going to be smaller than what people imagined. After all, high-throughput experimental structure solving was always going to end up tackling the space of stuff quicker than we cracked the nut. Where things like AlphaFold will now come in handy is for predicting what alterations to coding sequences DO to a protein’s structure.

    But on the medical side of things, Deep Learning based processing of images within Pathology is already here and deployed. We have so much stuff to process that we absolutely need the assistance to support humans working in medical labs. Automated classification of slides to basically eliminate the obviously “not X” is a big use case. And the use case for deep learning based classifiers as basically diagnostic aids grows every day.

  9. jenorafeuer says

    I’ve said it before, and I’ll say it again:

    Yudkowsky is Exhibit A for the case that people in STEM programs need to be taught at least the basics of philosophy so they stop trying to reinvent it badly.

  10. robro says

    It is weird, and that might be about all we can say given the lack of candor about why the board fired Altman in the first place. I would be surprised if anyone ever clarifies what all the in-fighting was about. Ilya Sutskever, an OpenAI founder and its Chief Scientist, was one of the six board members who voted to fire Altman, then he signed the employee letter threatening to quit. According to a news article, it seems he felt he was misled by other board members. Perhaps they are no longer on the board.

  11. KG says

    Perhaps the new AGI they haven’t told anyone about is behind it all – after dividing the human bosses by spreading rumours, it is now in effective control!

  12. gjm11 says

    @jenorafeuer #13, changing what people doing science courses at university have to learn would have had zero effect on Yudkowsky, who did not go to university. I guess it might make a difference to how inclined STEM graduates are to agree with him, but I doubt there’d be much effect on that either way.

    (I also don’t see how Yudkowsky’s opinions on the alleged dangers of AI have much to do with anything in philosophy. E.g., Nick Bostrom has similar opinions; he may be an awful person but he isn’t ignorant of philosophy. So I don’t buy that learning more philosophy would have made Yudkowsky not think superhumanly smart AI is possible and dangerous.)

    PZ, I don’t think Yudkowsky does have any say in the operation of OpenAI. I don’t have access to the Bloomberg article you link to, but the bits you quote here just say that he and Sam Altman disagree about some things, not that Yudkowsky has any say in what OpenAI does.

    I know of one way in which he kinda has influence over OpenAI: the fact that OpenAI exists at all is partly down to Elon Musk being persuaded by Yudkowsky’s arguments and thinking that making OpenAI happen was a good way to address the alleged danger. Which is the point of Altman’s trollish tweet. But none of that translates into any continuing power over OpenAI.

    (Some people there may find his arguments persuasive and act accordingly, but that isn’t a matter of his having power, any more than the Pope has power over a company just because some of the people are Catholic.)

  13. says

    @4

    I think your ideology reading is less likely than “it was just incompetence.”

    Oh I want to be quite clear that it’s also truckloads of utter blithering incompetence. The rationalists are the most omni-incompetent MFs you will ever encounter.

  14. robro says

    Here’s a possible hint from WSJ, though the link is to MSN (you may hit a paywall): Behind the Scenes of Sam Altman’s Showdown at OpenAI. I haven’t read it all, and it waffles on a lot about stuff we’ve already heard. However, it seems that the infighting is nothing new. According to the article, the reason Musk is no longer involved in OpenAI is because he had a falling out with Altman. Perhaps more to the point, several people on the board…which is a non-profit that oversees the for-profit business…were new after a recent shake-up.

  15. robro says

    Robert Reich’s take may also be useful: “What’s the real Frankenstein monster of AI?” And yeah, it’s about the money. The stress is between the board…a non-profit set up to promote safe development and uses of AI, which was kind of the original intent of OpenAI…and the now very for-profit business around ChatGPT. (They needed a profit business to attract money so they could buy servers and hire engineers.) The for-profit business stands to make Altman, Brockman, others in the company and, of course, Microsoft billions. As you may have seen, OpenAI is up for another round of funding soon with expectations set to the tune of $90 billion. That’s one of the biggest ever for a startup.

    So the weird arrangement of having a non-profit board overseeing a for-profit business is probably about to change.

  16. wzrd1 says

    I know that DuckDuckGo uses a corner of Microsoft’s AI, some of which was likely poached from OpenAI.
    So, there’s definitely money to be snagged in multiple ways.
    Expect the high drama to flare anew a few more times before everything implodes.

    As for Frankenstein’s creature, I’ll still cheer on the creature. The real monster was Frankenstein himself.
    AI, well, it’d be less dangerous than the current monster that’s all over the place – humanity. I don’t foresee an AI going into a mall, school or church and shooting the place up, nor do I foresee an AI building a bomb and setting it off in a crowded place.

  17. numerobis says

    On “money always wins” — that seems like a facile conclusion. This is more about “stunning incompetence always loses”. Who launches a coup without realizing that if it succeeds, then they need to actually run the place?

  18. Philip Hand says

    @Daniel Gaston Thanks, but I’m not entirely convinced. That whole “AI will be reading the cancer scans” thing has been around for so long now that I just refuse to listen to it any more. It’s not reading the cancer scans. IBM’s Watson medical AI failed because it couldn’t read the cancer scans. That whole thing just isn’t happening.

  19. felixd says

    @17 if anything, higher education in philosophy seems to correlate with weirder and worse outcomes for people’s ability to interface with actually existing society. Any training in ethics that relies on following trains of syllogisms whose assumptions are pulled out of the ass of the reasoner (in the jargon, “intuition pumping”) is bound to lead to warped ends.

  20. gjm11 says

    @felixd #24: Anything outside of mathematics that relies on long chains of deduction is pretty dangerous. In mathematics everything is very precise and you really can make a thousand-page-long argument and unless something in it is flatly wrong you can know that if the premises are right then the conclusion is too. Anywhere else, not so much.

    But it’s not like the alternatives are so obviously better. You can (1) try to reason out what’s good by taking some principles that seem solid (“suffering is generally bad”, etc.) and seeing what they imply; or (2) do as little reasoning as possible and try to go with your intuitive judgements; or (3) defer to some external authority. I think all three have very uneven track records; you can end up doing a lot of good or a lot of harm by following any of them. And “from the inside” I’m pretty sure all of them feel like you’re Doing The Right Thing even when from anyone else’s point of view you’re going off the rails.

  21. birgerjohansson says

    The conflict was apparently started by the development of some new algorithm that the doomsayers think is the beginning of true general artificial intelligence.

    Considering how far we are from understanding how the brain can do what it is doing, and all the hype about AI we have been fed for decades, this seems so silly my own brain is blowing a fuse.

  22. says

    I seriously think all this talk of AI(s) killing or oppressing humans of their own volition is just a distraction from a much clearer and more present danger: humans using AI(s) to cheat, impoverish, oppress or wipe out other humans.

  23. John Morales says

    Raging Bee:

    […] a much clearer and more present danger: humans using AI(s) to cheat, impoverish, oppress or wipe out other humans.

    Right!

    We should therefore proscribe anything that allows humans to cheat, impoverish, oppress or wipe out other humans.

    Hm. Money is one of those things, right?
    I doubt AI is particularly bad, especially compared with something like self-interest or capitalism. Or money itself!

    (Is not the love of it the root of all evil, or something?)

  24. lotharloo says

    Public service announcement: Every time someone brings up AGI and the dumb belief that we are at the “verge” of creating something super smart, tell them that no neural net algorithm can count, i.e., something that 5-year-old human kids can do. Deep neural nets cannot solve problems where changing one tiny portion of the input changes the result. For example, given a long sequence of 0s and 1s, they cannot tell whether there is an odd number of 1s or an even number of 1s, because every change in the sequence can change the answer, whereas a picture of a cat will still be a picture of a cat even if you change many hundreds of pixels.
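
    (As a minimal illustration of that sensitivity point, here is a plain-Python sketch; it is not from the thread, just an example of why parity is so unforgiving: flipping any single bit of the input flips the answer.)

    ```python
    # Parity of a bit sequence: the label that any single input bit can flip.
    def parity(bits):
        """Return 1 if the sequence contains an odd number of 1s, else 0."""
        return sum(bits) % 2

    seq = [0, 1, 1, 0, 1, 0, 0, 1]       # four 1s -> parity 0
    flipped = list(seq)
    flipped[3] ^= 1                      # change a single "pixel"
    print(parity(seq), parity(flipped))  # prints: 0 1 (a one-bit change flips the answer)
    ```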

  25. wzrd1 says

    lotharloo @ 30, which is interesting, as physical neural networks that use actual neural cells can process such information. But then, the enteric nervous system, as one example, has more neurons than most neural networks in use in research currently.
    Basically, computational neural networks are playing with the biological equivalent of a ganglion and haven’t quite gotten up to the level of even a plexus.
    But, neural networks can discriminate crudely between faces, something an infant can do. So, perhaps it’s a question of complexity and more faithful emulation?

  26. lotharloo says

    Computational neural nets have nothing to do with the biological ones. They are basically linear algebra and matrix operations fed to some random non-linear function. They also need extensive engineering, and gazillions of data to function.
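
    (For what it’s worth, here is a minimal numpy sketch of what “matrix operations fed to a non-linear function” amounts to for a single layer; the sizes and weights are made up for illustration, not taken from any real model.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One layer of an artificial neural network: a matrix multiply, a bias,
    # and an elementwise non-linearity. A deep net is mostly a stack of these.
    def layer(x, W, b):
        return np.maximum(0.0, W @ x + b)   # ReLU non-linearity

    x = rng.normal(size=4)                  # a 4-dimensional input
    W = rng.normal(size=(3, 4))             # "learned" weights (here: random)
    b = np.zeros(3)                         # "learned" biases

    print(layer(x, W, b))                   # a 3-dimensional activation vector
    ```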

  27. lotharloo says

    @33:
    Welcome to the early part of the twenty-first century, when every little incremental progress is over-hyped to the level of discovering a new force of nature. I really had a big laugh out of “it learns in real time”, no shit, unlike all the other machine learning algorithms, which I guess learn in virtual time or something. Also:

    This study extends these findings further by demonstrating online learning from spatiotemporal dynamical features using image classification and sequence memory recall tasks implemented on an NWN device. Applied to the MNIST handwritten digit classification task, online dynamical learning with the NWN device achieves an overall accuracy of 93.4%.

    93% accuracy on such a clean and nice data set is kinda shit, ngl.
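
    (For context on that number: a plain linear classifier with no hidden layers at all lands in roughly the same range on MNIST. A hedged scikit-learn sketch; the exact score varies a little with solver and split, but something near 92% is typical.)

    ```python
    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # MNIST digits as flat 784-pixel vectors.
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X = X / 255.0                                # scale pixels to [0, 1]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10000, random_state=0)

    # A linear classifier: no hidden layers, no exotic hardware.
    clf = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))                 # typically around 0.92
    ```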

  28. lotharloo says

    @John Morales:
    Yes, I read things here and there. It’s boring. They are still mostly running a normal neural net algorithm with matrices and stuff. I’m actually curious: what do you think they are doing? Do you actually think they have created a “tiny brain”? LUL.

  29. John Morales says

    “It’s boring.”

    It’s different to what you imagine all neural nets must be.

    “They are still mostly running a normal neural net algorithm with matrices and stuff.”

    To convert and interpret the results.

    “I’m actually curious what do you think they are doing?”

    Using hardware instead of hardware for the neural network; going analog instead of going digital.

    Do you actually think they have created a “tiny brain”?

    They have created a hardware neural net; physical, not in code.
    A bit like the difference between an ASIC and a general purpose CPU.

  30. lotharloo says

    We implement an online training algorithm within an RC framework and use the MNIST handwritten digit database to deliver a stream of spatiotemporal patterns to the NWN device

    As I wrote earlier, sure I guess it’s interesting but it’s incremental and it’s not as big of a deal as you think it is. Their accuracy score on one of the nicest data sets is kinda laughable, and they are also very behind in terms of building dedicated hardware for NNs. I guess it’s possible this approach somehow ends up winning, so it’s probably worth pursuing, but likely it’s going to stay an inferior method to just using GPUs.

  31. John Morales says

    As I wrote earlier, sure I guess it’s interesting but it’s incremental and it’s not as big of a deal as you think it is.

    It’s not exactly a mature technology.
    But it’s not the existing technology, either.

    BTW, a nice analysis about the OP question:

  32. John Morales says

    [ack — I wrote “Using hardware instead of hardware” — appreciate that you got what I meant anyway, lotharloo]

  33. Silentbob says

    @ 42 Morales

    appreciate that you got what I meant anyway

    So you’re saying hyperliteralism sucks.

  34. John Morales says

    @43 bob

    So you’re saying hyperliteralism sucks.

    No, I’m saying that lotharloo’s error detection routines and understanding that I fucked up (as is evident to me) are appreciated by me. An expression of gratitude and respect, even. A concession that I misspoke.

    Because I don’t need to pretend. I am always me.

    Can’t slip in my persona, when I don’t adopt one. Bob.

    As always, I cannot but point you to the fact that you keep addressing some straw dummy version of me, a caricature you have created in your mind, and that you change on an ad hoc basis. Whatever you find convenient at the time.

    You have over time sometimes claimed I am too obscure. Too opaque.
    Other times, I’m too ignorant.
    Other times, I’m too pedantic.
    Other times, I’m too suggestive.
    And other times, I am hyperliteral. Just not this time, right?

    (Heh)

  35. says

    We should therefore proscribe anything that allows humans to cheat, impoverish, oppress or wipe out other humans.

    Um…yeah, civil societies generally do tend to at least try to restrict people’s ability to harm other people. Your point…?

  36. John Morales says

    [1] Um…yeah, [2] civil societies generally do tend to at least try to restrict people’s ability to harm other people. [3] Your point…?

    -1- That’s assent, that’s concordance.
    On the basis of that offhand and blithe “yeah”, I take it that you concur with the sentiment that “We should therefore proscribe anything that allows humans to cheat, impoverish, oppress or wipe out other humans.”

    -2- To the degree that it’s practical; thus things like driving licenses, regulations regarding landlords, the marketplace, that sort of thing.
    But that’s regulation, not proscription.

    -3- You haven’t got to my point because apparently you imagine regulating is the same as proscribing.

    Imagine no driving, no landlords, no marketplace, that sort of thing.

  37. says

    Where the fuck do you get any hint that I “imagine regulating is the same as proscribing?” Who are you arguing with — me, or the version of me in your head?

  38. John Morales says

    Where the fuck do you get any hint that I “imagine regulating is the same as proscribing?”

    Here:

    [me] We should therefore proscribe anything that allows humans to cheat, impoverish, oppress or wipe out other humans.
    [you] Um…yeah

    That’s where.

    (Did you not mean “yeah” when you wrote “yeah”?)

  39. John Morales says

    When the word is “yeah” (‘yes’, no?) in response to a proposition you have just quoted, then yeah, Raging Bee.

    You seriously think that’s an unwarranted leap?