Social media 1, ChatGPT 0

Way back in February, I made a harsh comment about ChatGPT on Mastodon.

I teach my writing class today. I’m supposed to talk about ChatGPT. Here’s what I will say.
NEVER USE CHATGPT. YOU ARE HERE TO LEARN HOW TO WRITE ABOUT SCIENCE, YOU WILL NOT ACCOMPLISH THAT BY USING A GODDAMNED CRUTCH THAT WILL JUST MAKE SHIT UP TO FILL THE SPACE. WRITE. WRITE WITH YOUR BRAIN AND YOUR HANDS. DON’T ASK A DUMB CYBERMONKEY TO DO IT FOR YOU.
I have strong opinions on this matter.

Nothing has changed. I still feel that way. Especially in a class that’s supposed to instruct students in writing science papers, ChatGPT is a distraction. I’m not there to help students learn how to write prompts for an AI.

But then some people just noticed my tirade here in April, and I got some belated rebuttals. Here, for instance, kjetiljd defends ChatGPT.

Wow, intense feelings. Have you ever written something, crafted a proper prompt to ask ChatGPT-4 to critique your text? Or asked it to come up with counter-arguments to your point of view? Or asked it to analyze a text in terms of eg. thesis/antithesis/synthesis? Or suggest improvements in readability? You know … done … (semi-)scientific … experiments with it? With carefully crafted prompts my hypothesis is that it can be used to improve both writing and thinking…

Maybe? The flaw in that argument is that ChatGPT will happily make stuff up, so the foundation of its output is on shaky ground. So I said I preferred good sources. I didn’t mention that part of this class was teaching students how to do research using the scientific literature, which makes ChatGPT a cheat to get around learning how to use a library.

I prefer to look up counter-arguments in the scientific literature, rather than consulting a demonstrable bullshit artist, no matter how much it is dressed up in technology.

kjetiljd’s reply is to tell me I should change the focus of my class to be about how to use large language models.

And if I were a student I would probably prefer advice on the use of LLMs from a scientific writing teacher who seemed to have some experience in the field, or at least seemed to … how should I say this … have looked up counter-arguments from the scientific literature …?

I guess I’m just ignorant then. Unfortunately, this class is taught by a group of faculty here, and I had a pile of sources about using ChatGPT as a writing aid that were included in the course’s Canvas page. I didn’t find them convincing.

Sure, I’ve looked at the counter-arguments. They all seem rather self-serving, or more commonly, non-existent.

So kjetiljd hands me some more sources. Ugh.

Here are a few more or less random papers on the topic – they exist, are they all self-serving? https://www.semanticscholar.org/paper/ChatGPT-4-and-Human-Researchers-Are-Equal-in-A-Sikander-Baker/66dcd18c0f48a14815edca1d715fa8be8909cca6 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10164801/ https://www.semanticscholar.org/paper/Chat

I read the first one, and was unimpressed. They trained ChatGPT on a small set of review articles, asked it to write a similar review, and then had some people judge whether it was similar in content and style. Is ChatGPT a dumb cybermonkey? This article says yes.

I was about done at this point, so I just snidely pointed out that scientists scorn papers written by AIs.

Don’t get caught!

https://retractionwatch.com/papers-and-peer-reviews-with-evidence-of-chatgpt-writing/

I was done, but others weren’t. Chaucerburnt analyzed the three articles kjetiljd suggested. They did not fare well.

The first paper describes a trial where researchers took 18 recent human-written articles, got GPT-4 to write alternate introductions to them, and then got eight reviewers to read and rate these introductions.

Some obvious points:

– 18 pairs of articles is not a lot. With only a small number of trials, there’s a significant risk that an inferior method will win a “best of 18” over a superior method by pure luck.
– 8 reviewers, likewise, is not a very large number. Important here is that the reviewers were recruited “by convenience sampling in our research network” – that is, not a random sample, but people who were already contacts of the authors. This risks getting a biased set of reviewers whose preferences are likely to coincide with the researchers’.
– The samples were reviewed on dimensions of “publishability” (roughly, whether the findings reported are important and novel), “readability”, and “content quality” (here apparently meaning whether they had too much detail, not enough, or just right.)

What’s missing here?

None of the assessment criteria have anything to do with *accuracy*. There’s no fact-checking to evaluate whether the introduction has any connection to reality.

Under the criteria used here, GPT could probably get excellent “publishability” scores by claiming to have a cure for cancer. It could improve “readability” by replacing complex truths with over-simple falsehoods.

And it could improve “content quality” by inventing false details or deleting important true ones in order to get just the right amount of detail, since apparently “quality” doesn’t depend on whether the details are *true*, only on how many there are.

The reviewers weren’t even asked to read the rest of the article and evaluate whether the introduction accurately represented the content.

I daresay the human authors could’ve scored a lot higher on these metrics if they weren’t constrained by the expectation that their content should be truthful – something which this comparison doesn’t reward.

They also note “We removed references from the original articles as GPT-4’s output does not automatically include references, and also since this was beyond the scope of this study.” Because, again, truthfulness is not part of the assessment here.

(FWIW, when I tried similar experiments with an earlier version of GPT, I found it was very happy to include references – I merely had to put something like “including references” in the prompt. The problem was that these references were almost invariably false, citing papers that never existed or which didn’t say what GPT claimed they said.)

I concur, and that was my impression, too. The AI-written version was not assessed for originality or accuracy, only on superficial criteria of plausibility. AI is very good at generating plausible-sounding text.
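Chaucerburnt’s point about sample size is also easy to check with a back-of-the-envelope simulation. Here’s a rough sketch; the 60% figure for how often reviewers “truly” prefer the human-written version is a number I invented purely for illustration, and the best-of-18 framing is a simplification of the study’s rating scheme:

```python
import random

# Suppose reviewers genuinely prefer the human-written introduction 60% of
# the time (an invented number, purely for illustration). With only 18
# head-to-head comparisons, how often does the inferior method win anyway?
TRUE_HUMAN_PREFERENCE = 0.60
N_PAIRS = 18
N_SIMULATIONS = 100_000

inferior_wins = 0
for _ in range(N_SIMULATIONS):
    human_votes = sum(random.random() < TRUE_HUMAN_PREFERENCE for _ in range(N_PAIRS))
    if human_votes < N_PAIRS / 2:  # GPT takes a strict majority of the 18 pairs
        inferior_wins += 1

print(f"The inferior method wins by luck in {inferior_wins / N_SIMULATIONS:.1%} of runs")
```

Run that and the supposedly worse method comes out on top a non-trivial fraction of the time, purely by chance. That’s what a sample of 18 buys you.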

Chaucerburnt went on to look over the other two articles, which I hadn’t bothered to read.

The second article linked – which feels very much like it was itself written by GPT – makes a great many assertions about the ways in which GPT “can help” scientists in writing papers, but is very light on evidence to support that it’s good at these things, or that the time it saves in some areas is greater than the time required to fact-check.

It acknowledges plagiarism as a risk, and then offers suggestions on how to mitigate this: “When using AI-generated text, scientists should properly attribute any sources used in the text. This includes properly citing any direct quotations or paraphrased information”… – this seems more like general advice for human authors than relevant to AI-generated text, where the big problem is *not knowing* when the LLM is quoting/paraphrasing somebody else’s work.

It promotes the use of AI to improve grammar and structure – but the article itself has major structural issues. For instance, it has a subsection on “the risk of plagiarism” followed by “how to avoid the risk of plagiarism”.

But most of the content in “the risk of plagiarism” is in fact stuff that belongs in the “how to avoid” section.

Some of it is repeated between sections – e.g. each of those sections has a paragraph advising authors to use plagiarism-detection software, and another on citing sources.

On the grammatical side, it has a bunch of errors, e.g.:

“AI tools like ChatGPT is capable of…”

“The risk of plagiarism when use AI to write review articles”

“Use ChatGPT to write review article need human oversight”

“Conclusion remarks”

“Are you tired of being criticized by the reviewers and editors on your English writings for not using the standard English, and suggest you to ask a native English speaker to help proofreading or even use the service from a professional English editor?”

(Later on, it contradicts that by noting that “AI-generated text usually requires further editing and formatting…Human oversight is necessary to ensure that the final product meets the necessary requirements and standards.”)

If that paper is indeed written by GPT, it’s a good example of why not to use GPT to write papers.

The third article gets the same treatment.

The last of the three papers you linked is a review of other people’s publications about ChatGPT. It’s more of a summary of what other people are saying for and against GPT’s use than an assessment of which of these perspectives are well-informed.

(Of 60 documents included in the study, only 4 are categorised as “research articles”. The most common categories are non-peer-reviewed preprints and editorials/letters to the editor.)

It does note that 58 out of 60 documents expressed concerns about GPT, and states that despite its perceived benefits, “the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations.”

Not exactly an enthusiastic recommendation for GPT adoption.

Going a step further, Chaucerburnt reassures me that my role in the class is unchallenged.

I’ve seen people use AI for critique, and my impression is that it does more harm than good there.

If a human reviewer tells me that my sentences are too long and complex, there’s a very high probability that they’re saying this because it’s true, at least for them.

If an AI “reviewer” tells me that my sentences are too long and complex, it’s saying it because this is something it’s seen people say in response to critique requests and it’s trying to sound like a human would. Is it actually true, even at the level that a human reviewer’s subjective opinion is true? No way to know.

Beyond that, a lot of it comes down to Barnum statements: https://medium.com/@herbert.roitblat/this-way-to-the-egress-barnum-effect-or-language-understanding-in-gpt-type-models-597c27094f35

Many authors can benefit from generic advice like “consider your target audience”, but we don’t need to waste CPU cycles to give them that.

This term I had a couple of student papers at the end that would not have benefited from ChatGPT at all. Once a student gets on a roll, you’ll sometimes get sections that go on at length — they’re trying to summarize a concept, and their answer is to keep writing until every possible angle is covered. The role of the editor is to say, “Enough. Cut, cut, cut — try to be more succinct!” I’ve got one term paper that is an ugly mess at 30 pages, but has good content that would make it an “A” paper at 20. ChatGPT doesn’t do that. It can’t do that, because its mission is to generate glurge that mimics other papers, and there’s nobody behind it who understands the content.

Anyway, sometimes social media comes through and you get a bunch of humans writing interesting stuff on both sides of an argument. I’d hate to see how ugly social media could get if AIs were chatting, instead.

Too much social media

Once upon a time there was Twitter, and it was fine. There was much to dislike about it, but it had the advantage of being the one central repository of all the chatter, for good and ill, and I coped with the badness by doing a lot of blocking.

Then it became “X,” and it was terrible and vile, and Elon Musk is a neo-Nazi idiot, so I left, cleanly and completely. That was a good decision on my part. So I started exploring the other social media options.

I got on Mastodon. It’s a bit clunky, and I still don’t understand some of the details, but I’m comfortable there. I like the diversity of content. Sometimes people are too weirdly judgmental, but it’s not my site, so I’ll adjust. It’s still on my recommended list.

I’m also on BlueSky, which is probably the most like the old Twitter. It’s more centralized than Mastodon, has a good ebb and flow of topics, and there’s actually a Science Bluesky. I’m sticking with it a while longer; we’ll see how it shapes up.

Then there’s Threads. I don’t know about Threads. It has a very different dynamic — people take the name literally, and there are a lot of threads that go on and on over multiple comments, and it’s beginning to bug me. Shouldn’t you just start a blog? People do write a lot, which is a positive. It’s a Zuckerberg production, which is a COLOSSAL NEGATIVE. I killed Facebook long ago, that was enough.

So, anyway, there can be only one, and I’ve decided to axe Threads. That means that in my head it is now a duel to the death between Mastodon and BlueSky.

Who else is on social media? What do you prefer? Don’t bother to tell me to abandon it all, I’m accustomed to my frequent tiny blips of interaction.

At least you quickly know to not bother reading the rest

Here’s a blatant example of AI polluting the scientific literature:

I, too, would like to know “How come this meaningless wording survived proofreading by the coauthors, editors, referees, copy editors, and typesetters?”

OK, typesetters are forgiven, it’s their job to print exactly what they are given, but the others? No sympathy. I think the answer is that there is so much trash poured out on their desks that their eyes glaze over and they end up rubber-stamping everything, because the alternative is madness. They’d have to read this junk.

The AI apocalypse is already here

I’m not alone in seeing how the internet has been degenerating over the years. The first poison was capitalism: once money became the point, and content was rewarded for volume rather than quality, the flood of noise started to rise. Then came the “algorithm”, initially a good idea for managing the flow of information, but one that was quickly corrupted by people gaming the rules. SEO became a career in which people engineered that flow to their own benefit. And Google smiled on it all, because they could profit as well.

The latest evil is AI, which is nothing but a tool to generate profitable noise with which to flood the internet, an internet that is already choking on garbage. Now AI is beginning to eat itself.

Generative AI models are trained by using massive amounts of text scraped from the internet, meaning that the consumer adoption of generative AI has brought a degree of radioactivity to its own dataset. As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models which are, on some level, permanently locked in 2023, before the advent of a tool that is specifically intended to replace content created by human beings.

This is a phenomenon that Jathan Sadowski calls “Habsburg AI,” where “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.” In reality, a Habsburg AI will be one that is increasingly more generic and empty, normalized into a slop of anodyne business-speak as its models are trained on increasingly-identical content.
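You can watch that inbreeding happen in miniature with a toy model: fit a distribution to a handful of samples, generate new samples from the fit, refit on those, and repeat. Here’s a crude sketch, with a Gaussian standing in for a language model; the sample size and number of generations are arbitrary choices of mine, not anything from the article:

```python
import random
import statistics

# Toy "Habsburg AI": each generation is fit only to a small sample produced
# by the previous generation. A Gaussian stands in for a language model.
random.seed(42)

mean, stdev = 0.0, 1.0                # generation 0: trained on "human" data
for generation in range(1, 101):
    samples = [random.gauss(mean, stdev) for _ in range(5)]
    mean = statistics.fmean(samples)  # refit on our own output
    stdev = statistics.stdev(samples)
    if generation % 20 == 0:
        print(f"generation {generation:3d}: spread of output = {stdev:.4f}")
```

The spread of what the model can produce collapses as the generations go by, which is the statistical version of a model getting blander and more generic with every generation.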

After all, the whole point of AI is to create slop that will be consumed because it looks sorta like the slop that people already consumed. So make more of it! We’re in competition with the machines that are making slop, so we can outcompete them by just making more of it. It’s what biology would look like if there were no natural selection, and if energetic costs were nearly zero — we’d be swimming in a soup of goo. As Amazon has discovered.

Amazon’s Kindle eBook platform has been flooded with AI-generated content that briefly dominated bestseller lists, forcing Amazon to limit authors to publishing three books a day. This hasn’t stopped spammers from publishing awkward rewrites and summaries of other people’s books, and because Amazon’s policies don’t outright ban AI-generated content, ChatGPT has become an inoperable cancer on the body of the publishing industry.

That’s a joke. Limiting authors to three books a day? How about limiting it to one book a month, which is more in line with the human capacity to write? You know that anyone churning out multiple books per day is not investing any thought into them, or doing any real research, or even aspiring to quality. Amazon doesn’t care; they exist only to skim a few pennies of profit off each submission, so sure, they’ll take every bit of hackwork you can throw at them. Take a look at the Kindle search page sometime — it’s nothing but every publisher’s slush pile amplified ten-thousandfold.

The Wall Street Journal reported last year that magazines are now inundated with AI-generated pitches for articles, and renowned sci-fi publisher Clarkesworld was forced to close submissions after receiving an overwhelming amount of AI-generated stories. Help A Reporter Out used to be a way for journalists to find potential sources and quotes, except requests are now met with a deluge of AI-generated spam.

These stories are, of course, all manifestations of a singular problem: that generative artificial intelligence is poison for an internet dependent on algorithms.

The only algorithm I want anymore is “Did PZ Myers subscribe to this creator? Then show the latest from them.” I don’t want “X is vaguely similar to Z that PZ Myers subscribed to” and I sure as hell don’t want “Y paid money to be fed to everyone who liked Z”, but that is what we do get.
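For what it’s worth, the whole algorithm I’m asking for fits in a dozen lines. A hypothetical sketch (the Post fields and the function name are mine, not any platform’s actual API):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    timestamp: datetime
    text: str

def my_feed(posts: list[Post], subscriptions: set[str]) -> list[Post]:
    """Creators I subscribed to, newest first. No 'vaguely similar to',
    no 'paid to be shown to you', no engagement ranking."""
    return sorted(
        (p for p in posts if p.author in subscriptions),
        key=lambda p: p.timestamp,
        reverse=True,
    )
```

Reverse chronological, subscriptions only. Everything the platforms bolt on top of that is there for their benefit, not ours.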

One hope is that all the AI-based companies will eventually start cannibalizing each other. That may have already begun: two AI image companies, Midjourney and Stability AI, are fighting because Stability skulked into the Midjourney database to snatch up as much of their data as they could.

Here’s a prompt for you: two puking dogs eating each other’s sick and vomiting it back up again, over and over.

The era of beautiful airplanes

When I was a young kiddo, up through high school, I had two passions: biology and airplanes. You can guess which one won out, but I still sometimes dream of flying. In those days, I’d bicycle out to one of the local airports — Boeing towns had no shortage of them — and just hang out at the chain link fence by the end of the runway, or bike around the hangars. It was a treat to take a long bike trip to the Museum of Flight, which at the time was a big hangar where people were reconstructing a biplane, but has since expanded into a magnificent complex with all kinds of planes.

I am suddenly reminiscing about this because YouTube randomly served up a video about one of my favorite old-timey airplanes, the P-26 Peashooter.

That great big radial engine, that lovely pre-war color scheme, and it’s wearing pants! Before retractable landing gear became a must-have for any high-performance plane, the wheels were outfitted with aerodynamic coverings, which I find irresistibly charming. Planes from the 1930s hit a sweet spot for me, so this random video in which nothing really happens was something I had to watch. It’s an odd trigger that reminds me of being 15 years old again.

So why did I give up my fascination with planes? One factor was that I only learned in high school that I was extremely near-sighted, and needed glasses — that felt like discovering that I was broken, and nature was telling me that certain pathways were closed to me. I was also getting deeper and deeper into that scholarly stuff, reading constantly, which probably contributed to my optical failures. I still sometimes think it would be awesome to take flying lessons, except a) no time, b) no money, and c) age has taught me that there are many things that look easy, but actually require a great deal of skill and discipline to do well. Flying is one of those things that is unforgiving of dilettantes.

But still, those aircraft from the Amelia Earhart era give me a little tingle.

You couldn’t pay me to ride in a Tesla

Let alone buy one. They’re over-engineered and clumsily designed, as we can see in the example of this stupid, pointless death.

Angela Chao, Sen. Mitch McConnell’s billionaire sister-in-law, spent her last minutes alive frantically calling her friends for help as her Tesla slowly sank in a pond on a remote Texas ranch, according to a report.

Chao, the billionaire former CEO of dry bulk shipping giant Foremost Group, tragically died at the age of 50 on Feb. 10 after accidentally backing her car into the pond while making a three-point turn.

When the car lost power, she couldn’t get out while the car filled with water.

The windows are made of laminated glass, which sounds like a plus, but it’s so tough that it isn’t easily broken. The doors open electronically, with a clever little button. There are manual releases for the front doors, but they’re not obvious, and you need to have read the manual to know about them. The manual releases for the back doors are buried in a very nonintuitive place, and further, owners are warned that using them too much can damage the finish.

Apparently, changing gears is done on a touchscreen. Why? Multiple generations of Americans have been trained on simple levers and buttons that are familiar and reliable. There is a virtue to simplicity and obvious controls.

Manual controls are probably cheaper, too, but not as flashy.

Thrashing Boeing, deservedly

Portrait of a modern Boeing plane

Everett, Seattle, Renton, Kent, Auburn — growing up in the Seattle area, we knew the chain of Boeing towns, where so many of our family members worked. My father worked in several of those plants as a diesel mechanic, my mother wired the planes, my brother works in the wind tunnel unit, my sister was in marketing — we had a lot of Boeing pride. Before about 1990, when booking flights, I’d actually preferentially select Boeing planes over Airbus, because it was reassuring to be on a plane where I could imagine my Mom building cable assemblies with loving care.

No more, of course. Boeing, a company of engineers, merged with McDonnell-Douglas, a company run by profit-seeking military contractors, and oh boy, those flightless chickens have come home to roost, where “home” is no longer a series of factories but skyscrapers in Chicago. They got the John Oliver treatment this week.

I wouldn’t fly in a Boeing MAX plane myself. I’m just a timid little biologist, though…it’s a bad sign when your own former engineers refuse to fly in them. Ed Pierson, ex-Boeing engineer, says:

Last year, I was flying from Seattle to New York, and I purposely scheduled myself on a non-MAX airplane. I went to the gate. I walked in, sat down and looked straight ahead, and lo and behold, there was a 737-8/737-9 safety card. So I got up and I walked off. The flight attendant didn’t want me to get off the plane. And I’m not trying to cause a scene. I just want to get off this plane, and I just don’t think it’s safe. I said I purposely scheduled myself not to fly [on a MAX].

Our recommendation from the foundation is that these planes get grounded — period. Get grounded and inspected and then, depending on what they find, get fixed.

The people to blame are the executives at Boeing.

Boeing’s board of directors — they have a fiduciary responsibility to make sure that their products are safe, and they’re not in touch. They’re not engaged. They don’t visit the sites. They don’t talk to the employees. They’re not on the ground floor. Look, these individuals are making millions of dollars, right? And there’s others between the C-suite and the people on the factory line. There’s hundreds of executives who are also very well compensated and managers that should be doing a lot more. But their leadership is a mess. The leadership sets the whole tone for any organization. Public pressure needs to continue.

That board of directors, and all those executives, don’t know what they’re doing. They ought to all be fired, and the company put in the hands of good engineers who prioritize safety and quality, but instead you’ve got accountants who just want to make lots of money, doing their “fiduciary duty.” Ironically, all that short term emphasis on profit is destroying the company, trashing their reputation, and killing people. I also wouldn’t invest in Boeing any more.

Hey, why are stock buybacks even legal? The executives seem to be more interested in artificially pumping up their stock prices than in, you know, building airplanes.

Oh well. It’s all great news for Airbus.

AI is hell

Can we just knock it off with the AI nonsense? Somebody is profiting off this colossal sinkhole of useless frippery, but I don’t know who. Look what it’s doing to our environment, all because people like Mark Zuckerberg have decided it’s cool, and the future of profiteering.

The amount of water that A.I. uses is unconscionable, particularly given that so many of its data centers are in desert regions that can ill afford to squander it. A.I. uses this much water because of the computing power it requires, which necessitates chilled water to cool down equipment—some of which then evaporates in the cooling process, meaning that it cannot be reused. The Financial Times recently reported academic projections showing that A.I. demand may use about half the amount of water consumed by the United Kingdom in a year. Around the world, communities are rightly beginning to resist the construction of new data centers for this reason.

Then there’s A.I.’s energy use, which could double by 2026, according to a January report by the International Energy Agency. That’s the equivalent of adding a new heavily industrialized country, like Sweden or Germany, to the planet.

Microsoft’s own environmental reports reveal these immense problems: As the company has built more platforms for generative A.I., its resource consumption has skyrocketed. In 2022, the company’s use of both water and electricity increased by one-third, its largest uptick ever.

We should be asking what we gain from glossy, shiny artificial artwork, or from bizarre texts cobbled together by machines that don’t actually understand anything. Search engines are already corrupted by commercialized algorithms; do we really need to add another complex layer that adds nothing to anything? Tell me one thing AI adds to improve the world.

Do we need this?

Dozens of fake, artificial intelligence-generated photos showing Donald Trump with Black people are being spread by his supporters, according to a new investigation.

BBC Panorama reported that the images appear to be created by supporters themselves. There is no evidence tying the photos to Trump’s campaign.

One photo was created by the Florida-based conservative radio show host Mark Kaye.

“I’m not out there taking pictures of what’s really happening. I’m a storyteller,” Kaye told BBC. “I’m not claiming it is accurate. I’m not saying, ‘Hey, look, Donald Trump was at this party with all of these African American voters. Look how much they love him.’”

Maybe what we really need is a Butlerian Jihad.

Perpetually growing meat!

This is mildly interesting: scientists have modified muscle cells in culture so that they produce their own growth factors. This is a major cost reduction, because now you won’t need to constantly supplement your vat of muscle cells with a relatively expensive reagent.

Cellular agriculture – the production of meat from cells grown in bioreactors rather than harvested from farm animals – is taking leaps in technology that are making it a more viable option for the food industry. One such leap has now been made at the Tufts University Center for Cellular Agriculture (TUCCA), led by David Kaplan, Stern Family Professor of Engineering, in which researchers have created bovine (beef) muscle cells that produce their own growth factors, a step that can significantly cut costs of production.

Growth factors, whether used in laboratory experiments or for cultivated meat, bind to receptors on the cell surface and provide a signal for cells to grow and differentiate into mature cells of different types. In this study published in the journal Cell Reports Sustainability, researchers modified stem cells to produce their own fibroblast growth factor (FGF) which triggers the growth of skeletal muscle cells – the kind one finds in a steak or hamburger.

Keep in mind that this works for cultured meat cells, which is completely different from the artificial meat made from plants that you can buy in stores right now. I have a few reservations about it.

This is basically a tool to remove a regulatory limit on muscle growth. When this happens in vivo, we call it cancer. I suspect the marketing department will balk at labeling it “tumor meat”.

The technique amplifies one cell type. Edible meat has texture and is made up of a mix of cell types, fat and connective tissue. This is a way to make large quantities of something that is equivalent to ‘pink slime’ or, as the marketers call it, ‘lean finely textured beef.’ We already do this! I guess it’s a good thing to be able to produce large quantities of protein in the form of ‘pink slime’ more cheaply, and without the need to slaughter animals to do it.

Without a regulatory limit on growth, though, don’t be surprised if a news headline later announces that Boston has been eaten by a giant ever-growing blob of immortal meat.

Never mind the Vegas Eye, I want to see the Sludge Tunnels

The only reason to visit Las Vegas is for the exotic spectacles (don’t gamble, it’s a scam) and Elon Musk may have added another one.

Despite a decade of dreaming, Elon Musk has only built one tiny Hyperloop tunnel in Las Vegas — and the people who built it say it’s filled with dangerous chemical sludge.

As Bloomberg reports, the Boring Company’s scarce output — which thus far amounts only to driving Teslas around a few miles of neon-lit tunnel underneath Sin City as they ferry convention attendees at no more than 40 miles per hour — has also come with a massive buildup of waste, the consistency of a milkshake, that’s said to burn the skin of anyone who comes in contact with it.

In interviews with the news source, Boring Company workers who declined to give their names on the record for fear of retribution said that in some parts of Musk’s Vegas tunnel system, the sludge would sometimes be up to two feet high. If it got over their work boots or onto their faces, they said, it would burn their skin.

The article doesn’t say where the toxic sludge is coming from, which makes me wonder what is leaking. The Daily Mail — not a reliable source at all — is reporting that the sludge is made of chemical accelerants, and that the workers’ complaints were made while the tunnel was under construction.

It was a crap project anyway, and a poor solution to transit problems, so shut it down already. Hyperloop One, Virgin’s project, has already been declared dead. I hope somebody in Minnesota is paying attention to the news, and is ready to kill the Minnesota hyperloop project.

Once the Boring Tunnel nonsense is shut down, we can get back to the serious business of mocking Elon Musk’s Flaming Vehicles of Fiery Death.

Two men were left in serious condition after the Tesla they were traveling in went off an overpass and burst into flames on the 134 Freeway in the Griffith Park neighborhood of Los Angeles Sunday night.

See, it’s like a metaphor for Musk’s career and an entertaining fireworks display in one! (the occupants of the car survived, fortunately. Let’s not have the spectacle of drivers on fire or tunnel workers melting.)