Link Roundup: December 2025

Ways of Seeing (1972), John Berger, Episodes 1-4 | Video, 2 hours, via – An old television series about the interpretation of European painting.  Episode 1 talks about the mystique placed on traditional art, episode 2 is basically tropes vs. women in oil painting, episode 3 talks about the function of art as a class signifier, and episode 4 compares the tradition of oil painting to the modern tradition of advertising.  Pretty fun and insightful.

What struck me was that for all the cultural reverence placed on traditional oil painting, we (as in the general public) have a fairly poor understanding of where it came from, and how its original context made it the way it is.  And yet the origin is precisely what makes those paintings valuable in the first place.  So what does that say about art?  Do we value origin stories, or do we not?

How Dan Trachtenberg Built a Grand Unified Theory of Predator | Second Wind (video, 50 min) – I knew absolutely nothing about Predator before, so I learned a lot!  Darren Mooney discusses how it’s basically a slasher film where most of the victims are manly men, essentially Vietnam-era soldiers.  I had no idea that this horror series I was barely aware of had such commentary on masculinity, and now I’m glad I don’t have to watch the movies to learn about it.

So I’m in the Epstein Files | Rebecca Watson (video + transcript, 35 min) – Yes, it’s really true, Rebecca Watson is in the Epstein files.  It’s because she confronted sex pest Lawrence Krauss about his “dear friend” Epstein.  And then Krauss sought advice from Epstein on how to dodge accountability.  Epstein doesn’t even give good advice!  It’s quite clear that Epstein’s habit of doling out millions to fund research resulted in him being surrounded by ass-kissers.

The Most Hated Children’s Book | Big Joel (video, 43 min) – Joel reads and interprets a book (that I had never heard of) about a young boy who befriends a Jew in Auschwitz.  The premise seems fine at first, but Joel walks through the fundamental flaws of the book.  He suggests a compelling reading that runs opposite to the author’s intentions.

Scientific study on types of detransitioners | Thing of Things – Ozy discusses detransitioners and their motivations, as described by a study.  Coming from a cis ace perspective… I think the ace community has much healthier attitudes towards departures.  When people decide they weren’t ace after all, the typical response is “Good for you, glad you figured yourself out”.  My dearest hope as an ace activist is that if someone enters the community and eventually leaves, they leave in a better state than they started.

But when I say the ace community does better, that’s not necessarily to the ace community’s credit so much as it reflects the more hostile environment faced by the trans community.  Medical transition is harder to reverse than just telling people that you’re ace.  And detransition is very heavily politicized.  I think detransitioners’ lives would be improved if GCs (gender criticals) stopped wielding them as a rhetorical tool, but GCs obviously aren’t really interested in detransitioners’ wellbeing.

AI Water Use Imperfectly Explained | Hank Green (video, 23 min) – Hank discusses why there’s so much variation in estimates of AI water usage.  When it comes to water usage, it’s not just the amount that matters, but also what kind of water (e.g. municipal vs. industrial) and where.  And really, even the high usage estimates are dwarfed by the agricultural usage of water, particularly for corn.  It’s also odd that people have tunnel vision about water usage when power usage is arguably more concerning, though even that isn’t necessarily that large in the grand scheme of things.

I think complaints about AI environmental impact are highly vulnerable to whataboutism.  If you think AI water usage is high, then what about the totally unnecessary US corn subsidies?  And I wanna know, how does the power usage of AI compare to that of other technologies we take for granted, such as video streaming?  But it’s not that I think we should be unconcerned about AI.  It’s more that we’re treating AI as the shiny new problem.  Dropping everything to treat the shiny new problem as a unique world-burner just doesn’t serve humanity.  Suppose I’m wrong and AI really will have a massive environmental impact; then do you want people to drop AI as a concern ten years down the road when some other shiny new problem shows up?  What if we thought about numbers instead of novelty?
