Link Roundup: September 2020

OpenAI’s latest breakthrough is astonishingly powerful, but still fighting its flaws | The Verge – GPT-3 is a new language AI with astounding power.  GPT-3 first grabbed my attention when I saw someone use it to produce a response to philosophers talking about GPT-3.  Sure, some cherry-picking is involved, but the result is more cogent than the average internet commenter.  To temper (or amplify) the hype, I suggest looking at this massive compilation of GPT-3 results, including experiments that failed.  Among other things, GPT-3 is apparently terrible at making cat puns.

Although not created by GPT-3, I also thought these image completions were incredible.

Beethoven Sucks At Music | 12tone (Video, 14 min)
Music Theory and White Supremacy | Adam Neely (Video, 44 min) – Now this is the music YouTube content that I am here for.  12tone explains some of the history that led Beethoven and other classical composers to be canonized.  Adam Neely discusses how “music theory,” as it is commonly understood, is really the theory of 18th-century European music.  The framing of 18th-century European music as the objective measure by which all music must be judged is structural white supremacy.  I have a passing interest in music theory, and it’s difficult to learn in the best of times, but I find it doubly frustrating because it fails to describe any of the music I listen to.  I feel like these videos have named the problem.

If you liked these videos, you might also appreciate the paper that Neely’s video is based on: “Music Theory and the White Racial Frame”.

A Man’s Place is in the Home | Impossible Me – There’s a feminist comic explaining how women are often expected to do the work of managing the household, and men will only do specific tasks when asked.  I like the comic and think it describes the dynamic of the household I grew up in.  However, there was a complicating factor that’s just impossible to ignore: my mother would actively complain when anyone would try to clean up without her asking.  I don’t want to make it out like my mother was to blame–incredible woman that she is and always has been–but I appreciated Abbey’s discussion of the issue, which allows space for this sort of problem.


  1. Rob Grigjanis says

    I find it doubly frustrating because it [music theory] fails to describe any of the music I listen to

    It doesn’t describe Penderecki or Hindemith?

  2. says

    To my knowledge, Hindemith composed based on a theory of his own invention that never gained traction, and I’m okay with never understanding that.

  3. PaulBC says

I love the image completions. Is there a link with more than thumbnails? I’m curious how it would go about guessing a context in that case. It seems like it first needs a conceptual model of the blocked part and then a way to generate instances of it consistent with what’s visible.

    I’m also not sure that humans ever attempt to complete images to that level of detail. The most I think I do is guess the rest of a word (and presumably the other half of a familiar animal or object), or make some assumptions about movement where my view is temporarily blocked. I doubt I would even think of water drops and ripples let alone try to complete the waves.

  4. PaulBC says

    @4 Interesting. I have trouble seeing this as “neural nets,” though maybe I am just that far behind the times. I don’t think even the human mind simply trains on things that look like cats and written papers and then infers images such as “cat holding up a piece of written paper.” It seems like you would have to know something about cats, know something about paper potentially containing writing, whimsically imagine a cat holding the paper with its paw (like the cat/newspaper meme). All this with a tabula rasa learning algorithm and a giant training set? Note: they did not say tabula rasa, but they didn’t really say what they did start with. Even people need to be taught what writing is (i.e. aside from those who invented it and perhaps the very rare prodigy).

    I am half skeptical on the whole thing (though the presenter says it’s not cherrypicked) and half very curious what the technique was, because they don’t describe it well enough for me to guess. If they had an open site where you could leave half a picture and come back for results I’d be more persuaded (of course that doesn’t rule out a chess Turk method). (I am not calling anything a hoax, just that when I see something this amazing, I would like a lot more supporting evidence.)

  5. says

    @PaulBC #5,
    I don’t know how this particular algorithm works, but I know a bit about neural networks. You basically train the computer to learn layers upon layers of concepts. On the bottom layers it might recognize things like lines and edges, in the middle layers it might recognize fur or eyes, and in the top layers it might recognize things like cats or paper. Or at least, that’s the idea. In practice, the semantics of each layer may differ significantly from the concepts humans would identify.
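    The stacking described above can be sketched in a few lines of plain Python. This is a toy illustration with hand-picked weights, not any real model’s architecture: each layer takes the previous layer’s outputs, computes weighted sums, and squashes them, so higher layers build on whatever the lower layers detect.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Toy 3-layer network with hand-picked weights.  In an image model the
# bottom layer might respond to edges, the middle to fur or eyes, and the
# top to whole objects -- here the layers just illustrate the stacking.
w1, b1 = [[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]
w2, b2 = [[1.0, -1.0], [0.4, 0.4]], [0.0, 0.2]
w3, b3 = [[0.7, 0.7]], [-0.3]

def forward(x):
    h1 = dense(x, w1, b1)      # lowest-level features ("edges")
    h2 = dense(h1, w2, b2)     # mid-level features ("fur, eyes")
    return dense(h2, w3, b3)   # top-level judgment ("cat?")

print(forward([1.0, 0.5]))
```

    Training consists of nudging all those weights so the top layer’s output matches labeled examples; the layer “semantics” emerge from that process rather than being designed in.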

    It’s common practice in neural networks not to start from nothing, but to start with a model that was already trained on a similar task.  But ultimately, yes, these models are built from nothing but a giant training set–at no point is there a human specifically teaching it what rules to apply to recognize a cat.
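    That reuse of a trained model (often called transfer learning) can be sketched minimally. The “pretrained” weights below are made up for illustration: the lower layer is copied in and frozen, and only the fresh top layer gets updated by gradient descent on the new task.

```python
import random

random.seed(0)

# Hypothetical pretrained lower-layer weights from a related task; in real
# systems these come from a model already trained on a large dataset.
pretrained = [0.6, -0.4]

def lower_layer(x):
    # Frozen feature extractor: the pretrained weights are reused as-is.
    return [xi * w for xi, w in zip(x, pretrained)]

# Only the top layer is initialized fresh and trained on the new task.
top = [random.uniform(-0.5, 0.5) for _ in range(2)]

def predict(x):
    return sum(f * w for f, w in zip(lower_layer(x), top))

# One gradient-descent step on a single (input, target) pair, updating only
# the top layer; the pretrained layer never changes.
x, target = [1.0, 2.0], 1.0
error = predict(x) - target          # prediction error before the update
lr = 0.1
top = [w - lr * error * f for w, f in zip(top, lower_layer(x))]
```

    After the update the prediction is closer to the target, even though most of the model’s weights were never touched–which is why starting from a pretrained model saves so much training.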

    For an accessible intro to neural networks, I might recommend 3Blue1Brown’s videos.

  6. PaulBC says

    @6 Thanks. Yeah, my exposure to machine learning stopped being current around 1995, so I can see how this would be very different. A bit beyond, say, recognizing disease in medical images. It is still very surprising. If it’s this good, it seems like it ought to be a breeze to infer the rest of a computer program and put people like me out of a job (which has honestly taken a lot longer than I would have ever imagined).

  7. PaulBC says

    As a point of reference, I have no trouble seeing intuitively how an image recognition algorithm like this could generate all kinds of things involving dog faces. Also (I just started watching the video) yeah, recognizing a “3” is definitely the kind of thing you can do with machine learning (and my ATM is very good at it when it comes to checks!).

    But cats holding papers, water droplets with ripples, appropriate shadows, appropriate reflection of ambient light on faces… this seems to require an enormous store of diverse world knowledge, in some cases exceeding human ability. Very few amateur artists completing a cat’s face would be as good with the lighting.
