Nice anti-Abrahamic rant

Brian Cox (not the astronomer) is an actor who often plays brusque, rude, loud characters, and he’s usually, at least superficially, a bad guy. So of course he turns out to be an atheist in real life.

Now, he’s got some spicy words for the Bible and religion, which he ultimately calls “stupid”—mostly because of the “patriarchal” lens it puts on the world.

“We created that idea of God and we created it as a control issue,” he continued on the podcast. “It’s also a patriarchal issue. That’s how it started and it’s essentially patriarchal. We haven’t given enough scope to the matriarchy and I think we need to move matriarchically.”

“We have to go more towards a matriarchy because the mothering thing is the thing which is the real conditioning of our lives,” he explained. “Our fathers don’t condition us ‘cause they’re too bloody selfish, but our mothers have to because they have an umbilical [cord],” he said, adding that women’s “umbilical relationship” to their children contrasts with a father’s: “Men do not have that, they’re just sperm banks—moveable sperm banks that walk around and come and go.”

A “matriarchal” society makes a lot more sense, Cox said, but the “propaganda” in the Bible gets in society’s way of this world view. “It’s Adam and Eve,” he continued, “The propaganda goes right the way back—The Bible is one of the worst books ever, for me, from my point of view because it starts with the idea that Adam’s rib—that out of Adam’s rib, this woman was created. I don’t believe it… ‘cause they’re stupid.”

Instead of this patriarchal worldview taken from the Bible, Cox said, society should “honor” women and “give them their place.”

He ultimately concludes that people “need” religion because “they need some kind of truth,” but “they don’t need to be told lies.” And The Bible is “not the truth,” he stressed.

That’s all a bit overly simplistic, but he was on a roll.

Maybe the other one, Professor Brian Cox, will get crankier as he gets older and less pretty and grows into his name?

Credentialed leftist at work

Somehow, I wouldn’t be at all surprised if this accusation is true.

Jerry Coyne SPIT on me yesterday. This man is a professor at @UChicago and a biologist and the university has been letting him spit on students and protestors. I just heard reports that someone matching his description was roaming the camp spitting on/near students last night

In his defense, his leftist “credentials” are, according to this article from 6 years ago, unimpeachable. OK, man, if you say so.

Very leftist. Very free speech.

The scene at UW Madison

My daughter works here!

I notice the wall of cops with shields lined up against a group of students sitting quietly, watching the spectacle. Who has shown up prepared for violence?

Administrators claim they are “balancing students’ right to protest with a desire to minimize disruptions to their campuses and enforce a state rule banning encampments”. I don’t see much evidence of “balance” when one side brings in hired guns to enforce their will.

While my daughter is somewhere on campus, I’m pretty sure she’s nowhere near the protest. She had knee surgery a short while ago and is only able to hobble…so as much as I support the protest, it would be unwise to face off against brutal enforcers who wouldn’t be at all averse to compounding her current problems with a little violence.

The truth about spiders

The director of Infested, Sébastien Vaniček, tells the inside story of making a horror movie about spiders.

And, yes, they really did use 200 spiders on the set—but not all at once. “They are able to shoot for 10 seconds, and then after that they are tired, and you can’t have anything from them,” Vaniček explained. “You have to understand them, and understand that they are really fragile creatures. They are always afraid and they want to hide. When they are on the floor, they will run and they will seek a place in the shadow. They are able to run for about 10 seconds, and after just 10 seconds they are completely tired. You can put them on the wall and let them stay, and I can film them because I know they won’t move.”

Truth. Most spiders are laid back and placid — their whole lifestyle is about being still and quiet, then darting forward opportunistically. That capacity for sudden movement might be part of the reason they scare some people, even though most of a spider’s life is spent at rest.

Protests everywhere

All across the country, students are rising up to protest US support for the genocidal state of Israel. The response is growing!

Unfortunately, Columbia University is leading the way in authoritarian counter-reaction.

New York police arrested dozens of people on two campuses Tuesday night after officers cleared out a Columbia University building occupied by protesters.

At Columbia, New York police used a massive armored vehicle to push a bridge into a window of Hamilton Hall, the building demonstrators began occupying the previous night. Officers then streamed over the bridge — quickly retaking the building.

Yeah, to their shame, the Columbia administration called in a tank to put down their students.

In happier news, the protests have spread to Antarctica.

Social media 1, ChatGPT 0

Way back in February, I made a harsh comment about ChatGPT on Mastodon.

I teach my writing class today. I’m supposed to talk about ChatGPT. Here’s what I will say.
NEVER USE CHATGPT. YOU ARE HERE TO LEARN HOW TO WRITE ABOUT SCIENCE, YOU WILL NOT ACCOMPLISH THAT BY USING A GODDAMNED CRUTCH THAT WILL JUST MAKE SHIT UP TO FILL THE SPACE. WRITE. WRITE WITH YOUR BRAIN AND YOUR HANDS. DON’T ASK A DUMB CYBERMONKEY TO DO IT FOR YOU.
I have strong opinions on this matter.

Nothing has changed. I still feel that way. Especially in a class that’s supposed to instruct students in writing science papers, ChatGPT is a distraction. I’m not there to help students learn how to write prompts for an AI.

But then some people just noticed my tirade here in April, and I got some belated rebuttals. Here, for instance, kjetiljd defends ChatGPT.

Wow, intense feelings. Have you ever written something, crafted a proper prompt to ask ChatGPT-4 to critique your text? Or asked it to come up with counter-arguments to your point of view? Or asked it to analyze a text in terms of eg. thesis/antithesis/synthesis? Or suggest improvements in readability? You know … done … (semi-)scientific … experiments with it? With carefully crafted prompts my hypothesis is that it can be used to improve both writing and thinking…

Maybe? The flaw in that argument is that ChatGPT will happily make stuff up, so the foundation of its output is on shaky ground. So I said I preferred good sources. I didn’t mention that part of this class was teaching students how to do research using the scientific literature, which makes ChatGPT a cheat to get around learning how to use a library.

I prefer to look up counter-arguments in the scientific literature, rather than consulting a demonstrable bullshit artist, no matter how much it is dressed up in technology.

kjetiljd’s reply is to tell me I should change the focus of my class to be about how to use large language models.

And if I were a student I would probably prefer advice on the use of LLMs from a scientific writing teacher who seemed to have some experience in the field, or at least seemed to … how should I say this … have looked up counter-arguments from the scientific literature …?

I guess I’m just ignorant then. Unfortunately, this class is taught by a group of faculty here, and I had a pile of sources about using ChatGPT as a writing aid that were included in the course’s Canvas page. I didn’t find them convincing.

Sure, I’ve looked at the counter-arguments. They all seem rather self-serving, or more commonly, non-existent.

So kjetiljd hands me some more sources. Ugh.

Here are a few more or less random papers on the topic – they exist, are they all self-serving? https://www.semanticscholar.org/paper/ChatGPT-4-and-Human-Researchers-Are-Equal-in-A-Sikander-Baker/66dcd18c0f48a14815edca1d715fa8be8909cca6 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10164801/ https://www.semanticscholar.org/paper/Chat

I read the first one, and was unimpressed. They trained ChatGPT on a small set of review articles and then asked it to write a similar review, and then had some people judge whether it was similar in content and style. Is ChatGPT a dumb cybermonkey? This article says yes.

I was about done at this point, so I just snidely pointed out that scientists scorn papers written by AIs.

Don’t get caught!

https://retractionwatch.com/papers-and-peer-reviews-with-evidence-of-chatgpt-writing/

I was done, but others weren’t. Chaucerburnt analyzed the three articles kjetiljd suggested. They did not fare well.

The first paper describes a trial where researchers took 18 recent human-written articles, got GPT-4 to write alternate introductions to them, and then got eight reviewers to read and rate these introductions.

Some obvious points:

– 18 pairs of articles is not a lot. With only a small number of trials, there’s a significant risk that an inferior method will win a “best of 18” over a superior method by pure luck.
– 8 reviewers, likewise, is not a very large number. Important here is that the reviewers were recruited “by convenience sampling in our research network” – that is, not a random sample, but people who were already contacts of the authors. This risks getting a biased set of reviewers whose preferences are likely to coincide with the researchers’.
– The samples were reviewed on dimensions of “publishability” (roughly, whether the findings reported are important and novel), “readability”, and “content quality” (here apparently meaning whether they had too much detail, not enough, or just right.)

What’s missing here?

None of the assessment criteria have anything to do with *accuracy*. There’s no fact-checking to evaluate whether the introduction has any connection to reality.

Under the criteria used here, GPT could probably get excellent “publishability” scores by claiming to have a cure for cancer. It could improve “readability” by replacing complex truths with over-simple falsehoods.

And it could improve “content quality” by inventing false details or deleting important true ones in order to get just the right amount of detail, since apparently “quality” doesn’t depend on whether the details are *true*, only on how many there are.

The reviewers weren’t even asked to read the rest of the article and evaluate whether the introduction accurately represented the content.

I daresay the human authors could’ve scored a lot higher on these metrics if they weren’t constrained by the expectation that their content should be truthful – something which this comparison doesn’t reward.

They also note “We removed references from the original articles as GPT-4’s output does not automatically include references, and also since this was beyond the scope of this study.” Because, again, truthfulness is not part of the assessment here.

(FWIW, when I tried similar experiments with an earlier version of GPT, I found it was very happy to include references – I merely had to put something like “including references” in the prompt. The problem was that these references were almost invariably false, citing papers that never existed or which didn’t say what GPT claimed they said.)

I concur, and that was my impression, too. The AI-written version was not assessed for originality or accuracy, but only on superficial criteria of plausibility. AI is very good at generating plausible-sounding text.
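Chaucerburnt’s sample-size point can be made concrete with a quick binomial calculation. This is a minimal sketch; the 40% per-pairing win probability for the “inferior” method is my own illustrative assumption, not a figure from the paper.

```python
from math import comb

def p_at_least(n, k, p):
    """Probability of at least k successes in n independent Bernoulli(p) trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose the inferior method only "deserves" to win a given head-to-head
# pairing 40% of the time. How often does it still take at least half
# (9 or more) of the 18 pairings by pure luck?
print(round(p_at_least(18, 9, 0.4), 3))  # → 0.263
```

In other words, under this assumption a clearly worse method still wins a majority of the 18 pairings about a quarter of the time, which is why such a small trial can’t tell you much.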

Chaucerburnt went on to look over the other two articles, which I hadn’t bothered to read.

The second article linked – which feels very much like it was itself written by GPT – makes a great many assertions about the ways in which GPT “can help” scientists in writing papers, but is very light on evidence to support that it’s good at these things, or that the time it saves in some areas is greater than the time required to fact-check.

It acknowledges plagiarism as a risk, and then offers suggestions on how to mitigate this: “When using AI-generated text, scientists should properly attribute any sources used in the text. This includes properly citing any direct quotations or paraphrased information”… – this seems more like general advice for human authors than relevant to AI-generated text, where the big problem is *not knowing* when the LLM is quoting/paraphrasing somebody else’s work.

It promotes the use of AI to improve grammar and structure – but the article itself has major structural issues. For instance, it has a subsection on “the risk of plagiarism” followed by “how to avoid the risk of plagiarism”.

But most of the content in “the risk of plagiarism” is in fact stuff that belongs in the “how to avoid” section.

Some of it is repeated between sections – e.g. each of those sections has a paragraph advising authors to use plagiarism-detection software, and another on citing sources.

On the grammatical side, it has a bunch of errors, e.g.:

“AI tools like ChatGPT is capable of…”

“The risk of plagiarism when use AI to write review articles”

“Use ChatGPT to write review article need human oversight”

“Conclusion remarks”

“Are you tired of being criticized by the reviewers and editors on your English writings for not using the standard English, and suggest you to ask a native English speaker to help proofreading or even use the service from a professional English editor?”

(Later on, it contradicts that by noting that “AI-generated text usually requires further editing and formatting…Human oversight is necessary to ensure that the final product meets the necessary requirements and standards.”)

If that paper is indeed written by GPT, it’s a good example of why not to use GPT to write papers.

The third article gets the same treatment.

The last of the three papers you linked is a review of other people’s publications about ChatGPT. It’s more of a summary of what other people are saying for and against GPT’s use than an assessment of which of these perspectives are well-informed.

(Of 60 documents included in the study, only 4 are categorised as “research articles”. The most common categories are non-peer-reviewed preprints and editorials/letters to the editor.)

It does note that 58 out of 60 documents expressed concerns about GPT, and states that despite its perceived benefits, “the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations.”

Not exactly an enthusiastic recommendation for GPT adoption.

Going a step further, Chaucerburnt reassures me that my role in the class is unchallenged.

I’ve seen people use AI for critique, and my impression is that it does more harm than good there.

If a human reviewer tells me that my sentences are too long and complex, there’s a very high probability that they’re saying this because it’s true, at least for them.

If an AI “reviewer” tells me that my sentences are too long and complex, it’s saying it because this is something it’s seen people say in response to critique requests and it’s trying to sound like a human would. Is it actually true, even at the level that a human reviewer’s subjective opinion is true? No way to know.

Beyond that, a lot of it comes down to Barnum statements: https://medium.com/@herbert.roitblat/this-way-to-the-egress-barnum-effect-or-language-understanding-in-gpt-type-models-597c27094f35

Many authors can benefit from generic advice like “consider your target audience”, but we don’t need to waste CPU cycles to give them that.

This term I had a couple of student papers, right at the end, that would not have benefited from ChatGPT at all. Once a student gets on a roll, you’ll sometimes get sections that go on at length — they’re trying to summarize a concept, and their answer is to keep writing until every possible angle is covered. The role of the editor is to say, “Enough. Cut, cut, cut — try to be more succinct!” I’ve got one term paper that is an ugly mess at 30 pages, but has good content that would make it an “A” paper at 20 pages. ChatGPT doesn’t do that. It can’t do that, because its mission is to generate glurge that mimics other papers, and there’s nobody behind it who understands the content.

Anyway, sometimes social media comes through and you get a bunch of humans writing interesting stuff on both sides of an argument. I’d hate to see how ugly social media could get if AIs were chatting, instead.

I never cared much for Nate Silver

Once upon a time, a lot of liberals were gaga for Nate Silver, who always left me cold. He seemed to be more of a numerologist, or a horse race handicapper, and I suspected he was juggling the numbers to fit his expectations (remember: being quantitative and provable with numbers does not make something true). It’s polite of him to now confirm that yes, he’s a soulless automaton with no speck of moral reasoning in his body.

Most people don’t form political opinions through deep examination of the issues or reasoning from first principles. It’s more like picking some particular fashion label or way of dressing. Especially for younger people, who face more peer pressure.

Right. People who oppose genocide are just doing it because it’s a fad, exactly like how they pick out jeans at the store. Maybe we live in a society where even the conservative students learn at an early age that “thou shalt not kill,” and we more progressive people tell our kids to treat others as you want to be treated, but nah, students can see bombings and killings and snipers taking out civilians and be unperturbed by any foundational moral principles.

I think Silver is projecting here. He gets his morality from a spreadsheet, so of course no one else could possibly make a decision by examining the issues.

Hey, bonus: doesn’t this remind you of the same arguments made against trans people? It’s a passing fashion, they can’t possibly have thought about the consequences, they’re only doing it because their friends are doing it. There’s no way other people actually think — they’re all NPCs who need to be told what to do by us people with our numbers, which are totally free of bias.

I am so happy to see students standing proudly on the right side of history

The Morris campus of the University of Minnesota is quiet. We’re small and rural, so I think we lack the critical mass to spark substantial protests, but universities in the Twin Cities are taking up our slack. They’re organizing, building an encampment, and delivering demands.

Students rallied and set up tents at the University of Minnesota Twin Cities campus, as well as at Hamline University in St. Paul, as anti-war protests continue into a second week.

At the U of M, hundreds of protesters called on the school to divest from weapons manufacturers and companies tied to the Israeli military. The students also want the school to end study abroad programs in Israel.

At around 7:30 p.m., police gave dispersal orders, prompting many to link arms around the grassy area in front of Northrop Memorial Auditorium, where more than 30 tents stood.

Good for them! They are demonstrating peacefully and righteously, although that doesn’t prevent campus police from moving in and arresting students. And, as usual, there are accusations that protesting Israel and Zionism is anti-Semitic — it’s not, but we have to recognize that there are anti-Semitic groups all across the country who are exploiting these protests.

Columbia University administrators are doing a fine job of showing how not to respond to student protests. They set deadlines for students to leave and have threatened them with the thugs called cops, and in response, the students ignored the deadlines and have occupied several campus buildings. Stupid administrators. Instead of listening and recognizing student grievances, they’ve managed to escalate the situation. The problem here is that the administrators are incompetent and don’t believe they have any obligations to the students. The students are the reason the university exists!

The real rioters are cops and college presidents. Students and faculty are linking arms and condemning genocide, while administrators shriek and wail in dismay and send in cops with clubs, guns, and gas to break them up.

The past week or so has been, in many ways, unfathomable: Palestine solidarity protests sprung up at college campuses across the country; Local and state police resorted to violence to break many of them up; Some universities changed their rules last minute just so they could criminalize previously benign student and faculty activity; Prosecutors in most jurisdictions with arrests won’t say if they’ll charge the protesters. Meanwhile in Gaza, multiple mass graves filled with hospital patients were uncovered.

On top of it all, Christian Zionists—in and out of Congress—tried to take over as the true defenders of Israel, while failing to mention why they so zealously defend it. (Hint: If the Jews return to Israel, it will hasten the return of Jesus and an armageddon. Just don’t ask them what happens to the Jews once armageddon happens. Another hint: We go to hell.) Republican House Speaker Mike Johnson suggested in a speech on Columbia’s campus that it might be time to send in the National Guard. Evangelical preachers led a crowd that yelled things like “Go home, terrorists!”, “Go back to Gaza!” and “You want to camp? Go camp in Gaza!” at student protesters. If this all sounds crazy, that’s because it is crazy.

Oh, and about those prohibited activities — here’s a list from the University of Florida.

As has been pointed out online, many of those prohibited activities would shut down tailgating at football games, which most universities regard as a sacred rite.

Faculty, with a few notable exceptions, have been supportive of students’ right to protest. In fact, the Barnard AAUP faculty voted unanimously to make a statement of “no confidence” in the college president. I had to gasp at that — a group of 102 faculty members all agreed on something? I can’t imagine the Morris campus senate doing anything like that, and it probably would take hours of wrangling back and forth to even get a tepid statement out of them. Things must be getting extreme at Barnard.

They tried to do something similar at Columbia, but fell short, and settled for a compromise resolution that was still pretty damning. That’s more like the fractious faculty I know.

At Columbia University, a proposal to censure university president Minouche Shafik fell short, but a resolution calling for an investigation passed by a vote of 62-14 on Friday, according to the New York Times. Shafik has been scrutinized since a decision last week to summon New York police to the campus and authorize them to dismantle an encampment, resulting in the arrest of more than 100 student protesters.

After a two-hour meeting on Friday, the university’s senate approved a resolution saying that Shafik’s administration had undermined academic freedom and disregarded the privacy and due process rights of students and faculty members by calling in the police and shutting down the protest.

I wonder what it takes to get college presidents to recognize how badly they are fucking up. We’ve got protests sweeping across the nation, they keep generating horrendously terrible optics by sending armored mobs of cops to beat up students and faculty and throwing them in jail, their faculty are sending them strongly worded complaints, and still they keep playing the same stupid games. I don’t have any kids in college anymore, but if I did, and if I saw them getting thrown down and handcuffed at the behest of some asshole college president, I’d be furious and looking to help my kids transfer to some place that isn’t a militarized camp run by wannabe fascists. If I had a kid looking to enroll in college, I’d be torn by what I see at Columbia: the students seem awesome, but man is that place mismanaged. Like a lot of schools right now.

The most encouraging thing I’m seeing about the protests is that wow, the kids are all right. They get it. They are doing the right thing. I would remind them of these words by Frederick Douglass:

Those who profess to favor freedom, and yet depreciate agitation, are men who want crops without plowing up the ground. They want rain without thunder and lightning; they want the ocean without the roar of its many waters. The struggle may be a moral one, or it may be a physical one, or it may be both. But it must be a struggle. Power concedes nothing without a demand; it never did and it never will.

Right now, I’m seeing some of the anti-protest Right whining about the encampments (“but they’re blocking the siiiiidewaaaalk!”) in the same way that people complained about Black Lives Matter marches (“but they’re blocking traaaaffic!”). Yeah? Too bad. You’re being confronted with a minor inconvenience while Palestinian people are seeing whole families murdered. Get over it.

Stop the genocide, the students will stop troubling your conscience. It’s that easy.