The real WWII experience

Yesterday, I was looking forward to visiting a local airshow. I made it. I was disappointed.

It was not the fault of the airshow organizers, or the collection of planes they had on view. The problems were entirely due to the godawful weather we’ve had lately.

I left home at about 7:30, under dark gloomy skies, driving rain, and non-stop thunderbolts arcing across the sky, a most inauspicious morning, but it’s been like that sporadically for a couple of weeks. We get these horrendous storms that last for a few hours, and then they burn off and we get clear skies, so that’s what I anticipated. The drive was stormy, but the roads were empty, I saw only one other car the entire hour and a half I was on the road. That wasn’t a problem.

Once I got to the airport, though, I discovered that the whole show was delayed for two hours, which made sense. Visibility was only about a mile, the rain was pounding down hard, I wouldn’t want to fly in that weather, and as a spectator I wouldn’t be able to see anything anyway. So I turned around and went back to Granite Falls to nurse a coffee for a while.

When I went back, I encountered a new problem: no parking. There was a large empty field that was supposed to be used as a parking lot for the event, but this is what it looked like:

It was swamp with ambitions, trying to become a lake. This fit with what I’d heard on the drive — I was getting constant warnings of flash flood conditions, and saw rivers running over their banks, and fields that were underwater. So no convenient parking.

The organizers improvised. What they had us do is drive out on these gravel access roads and park on the edge…which meant that all the visitors were strung out in a long line from the airport to distant points. I did that. I had to park a mile and a half from the airshow and walk in.

I’ve mentioned that this was my summer of knee problems. I did not invest enough in my energy budget for a hike, nor was I prepared for the maintenance and repair costs of keeping shank’s mare running smoothly for a long walk. I did it anyway. I was stupid. The result: another blown out knee, and I’m going to be paying for this exercise for the next few weeks. Fortunately, when it was time to leave, they had local police and neighbors volunteering to drive golf carts up and down that road — I got delivered directly to my car, which was good, because otherwise I might have been a crying cripple laid up in a drainage ditch.

Finally, I’m at the airfield, there’s a selection of planes all lined up, getting fueled. The first set are about 8 Navy fighters/bombers/torpedo planes (ooh, look at that lovely Corsair), and they’re getting ready to taxi out to the runway. I was up close — I was standing right under the wingtip of a Helldiver as it was firing up it’s engine. It was loud, it reeked of fuel vapors, I could feel the vibrations in my bones. It was the highlight of the day for me.

Unfortunately, what followed was not so exciting. Three planes taxied out to the end of the runway, a Dauntless, an Avenger, and a Helldiver, and prepared to take off, when Minnesota weather struck again. One of them got stuck in the mud. It was a major anti-climax, because instead of planes, we then spent an hour watching forklifts hauling stacks of plywood to try and give them a firm surface to be dragged onto.

It was OK! I wandered around the hangars instead, where they had iconic aircraft on display.

They did eventually get some planes aloft, but at that point my knee was whimpering, and I decided the best thing to do was go home and stop making it work.

Despite the weather-related glitches, this was a good airshow. I’m going to come back next year when the fields have all dried out, there’s convenient parking, and runways that haven’t turned to glue. I did come away with an appreciation of the struggles the ground crews had to have gone through to keep planes and runways operational. My father-in-law was a bad ass Marine sniper in the Pacific theater, while my grandfather spent the war driving bulldozers and building runways on remote islands — much respect to both of them.


PS. One thing I was concerned about was that this was a celebration of military technology, and I was afraid I’d get there and be surrounded by a sea of red MAGA hats. I was not. I didn’t see a single red hat the whole time. I did see a lot of old veterans, though — maybe a celebration of a triumph over fascism scared away the Nazi wanna-bes from showing up.

Airshow today!

I’m driving to Granite Falls, MN this morning. It’s only about an hour SSE of Morris, so I’ll still be in the middle of nowhere in west central Minnesota. A while back, though, I was searching for local museums and discovered this one: the Fagen Fighters WWII Museum. I was surprised. This looks like a big deal with all kinds of old US aircraft from the the 1940s, and many of them still fly. I’ve been planning to visit it all summer long, but those plans got wrecked by a torn meniscus that limited my mobility — I’m feeling much better now, so I think can handle walking around some hangars and watching airplanes fly by. My brother and I used to bicycle out to local airports all the time just to watch private planes buzz by, so this is going to bring back memories.

I’ve been to the Air and Space Museum in Washington DC, as well as the Boeing Museum of Flight in Seattle, and while this museum is a bit smaller than those, tomorrow is special: they’re celebrating the 250th anniversary of the US Navy & Marine Corps, so an additional assortment of aircraft are flying in. How can I resist? I want to see a P38 Lightning, an F4U Corsair, and an F6F Hellcat. Eighty year old airplanes still flying!

Tickets are still available, so if you’re a Minnesotan interested in this sort of thing, maybe I’ll see you there.

Jim Acosta, ghoul

My impression of the ex-CNN news announcer, Jim Acosta, was that he at least had some principles. He quit cable news, after all, and that’s a positive mark in my estimation. Unfortunately, he has now indulged in the cheapest, sleaziest, most ghoulish stunt of his career.

If you are sufficiently prolific on the internet, people can take your stored writings and videos and build a model of “you”. For instance, I would be good candidate for this kind of program — over 30 years of nearly daily commentary, all stored in online databases, you could probably make a decent predictive model of my internet behavior. Would it be “me”? No. It would be a crude simulacrum of just my public persona. You could also take the voluminous writings of St Augustine or Albert Einstein and make a similar model, but it would all just be window dressing and wouldn’t actually “be” the person.

Some grieving parents compiled the internet output of one of the students killed in the Parkland shooting into a video talking head. I can sort of understand the desire — they want to hear their child’s voice again — and it’s the same sort of impulse that would make someone preserve an answering machine voice message so they can hear a loved one again after their demise. It’s not the person, though, it’s an echo, a memory of someone.

So Acosta “interviewed” the model of a dead student.

Jim Acosta, former chief White House correspondent for CNN, stirred controversy on Monday when he sat for a conversation with a reanimated version of a person who died more than seven years ago. His guest was an avatar of Joaquin Oliver, one of the 17 people killed in the Marjory Stoneman Douglas high school mass shooting in Parkland, Florida, in 2018.

The video shows Oliver, captured via a real photograph and animated with generative artificial intelligence, wearing a beanie with a solemn expression. Acosta asks the avatar: “What happened to you?”

I feel like asking Acosta “What happened to you?”

“I appreciate your curiosity,” Oliver answers in hurried monotone without inflection or pauses for punctuation. “I was taken from this world too soon due to gun violence while at school. It’s important to talk about these issues so we can create a safer future for everyone.” The avatar’s narration is stilted and computerized. The movements of its face and mouth are jerky and unnatural, looking more like a dub-over than an actual person talking.

Ick. Why not dig up his corpse, attach marionette strings, and have a conversation with it? That wasn’t Joaquin Oliver. The only insight you are going to get from it is possibly the interpretations of the person who compiled the code.

Here’s another example:

Others have likewise used AI avatars to simulate the speech of victims of crimes. In May, an AI version of a man who was killed in a road rage incident in Arizona appeared in a court hearing. Lawyers played an AI video of the victim addressing his alleged killer in an impact statement. “I believe in forgiveness, and a God who forgives. I always have and I still do,” the victim’s avatar said.

The presiding judge responded favorably. “I loved that AI, thank you for that. As angry as you are, as justifiably angry as the family is, I heard the forgiveness,” he said. “I feel that that was genuine.”

Jesus. That was not evidence before the law — that was an appeal to the judge’s sentimentality, and it worked.

They have to be desperate to resurrect boomer technology

This generation…they claim to have reinvented the bus, the train, the bodega, and now, the 45 rpm record?

On Monday (Aug. 4), a small but mighty new physical music format arrived: Tiny Vinyl. Measuring at just four inches in size, Tiny Vinyl is a playable record that can hold four minutes of audio per side.

The disc, according to a press release, aims to “[bridge] the gap between modern and traditional to offer a new collectible for artists to share with fans that easily fits in your pocket.”

OK, there are differences. This thing is played at 33rpm, not 45rpm, and is smaller than the old format, which was a 7 inch disk, but I don’t see any advantage. It doesn’t matter that it fits in your pocket — in order to listen to it you also need a turntable and a set of speakers. They also cost $15 each. It’s a gimmicky promotional toy, not a serious means of distributing music. People are used to loading up thousands of MP3s on their phones and being able to play them through ear buds, you’d have to be a serious hipster to think that unlimbering a turntable and a pair of portable speakers so you can listen to singles at the coffeeshop is “cool”.

My first recipe from a Neandertal cookbook

I’ve taught human physiology, so I already knew about the limits of protein consumption: if you rely too much on consuming lean protein, you reach a point where your body can’t cope with all the nitrogen. Here’s a good, succinct explanation of the phenomenon of “rabbit starvation.”

Fat, especially within-bone lipids, is a crucial resource for hunter-gatherers in most environments, becoming increasingly vital among foragers whose diet is based heavily on animal foods, whether seasonally or throughout the year. When subsisting largely on animal foods, a forager’s total daily protein intake is limited to not more than about 5 g/kg of body weight by the capacity of liver enzymes to deaminize the protein and excrete the excess nitrogen. For hunter-gatherers (including Neanderthals), with body weights typically falling between 50 and 80 kg, the upper dietary protein limit is about 300 g/day or just 1200 kcal, a food intake far short of a forager’s daily energy needs. The remaining calories must come from a nonprotein source, either fat or carbohydrate. Sustained protein intakes above ~300 g can lead to a debilitating, even lethal, condition known to early explorers as “rabbit starvation.” For mobile foragers, obtaining fat can become a life-sustaining necessity during periods when carbohydrates are scarce or unavailable, such as during the winter and spring.

I’d never thought about that, outside of an academic consideration, since a) I don’t live lifestyle that requires such an energy rich diet, and b) I’m a vegetarian, so I’m not going to sit down to consume over 1200 kcal of meat (I feel queasy even imagining such a feast). But when I stop to think about it, yeah, my hunter-gatherer ancestors must have been well aware of this limitation, which makes the “gatherer” part of the lifestyle even more important, and must have greatly affected their preferred choices from the kill.

There is very little fat in most ungulate muscle tissues, especially the “steaks” and “roasts” of the thighs and shoulders, regardless of season, or an animal’s age, sex, or reproductive state. Mid- and northern-latitude foragers commonly fed these meat cuts to their dogs or abandoned them at the kill. The most critical fat deposits are concentrated in the brain, tongue, brisket, and rib cage; in the adipose tissue; around the intestines and internal organs; in the marrow; and in the cancellous (spongy) tissue of the bones (i.e., bone grease). With the notable exception of the brain, tongue, and very likely the cancellous tissue of bones, the other fat deposits often become mobilized and depleted when an animal is undernourished, pregnant, nursing, or in rut.

So a steak is dog food; the favored cuts are ribs and brisket and organ meats. This article, though is mainly focused on bone grease and its production by Neandertal hunters. I didn’t even know what bone grease is until the article explained it to me. Oh boy, it’s my first Neandertal recipe!

Exploitation of fat-rich marrow from the hollow cavities of skeletal elements, especially the long bones, is fairly easy and well documented in the archaeological record of Neanderthals. On the basis of ethnohistoric accounts, as well as on experimental studies, the production of bone grease, an activity commonly carried out by women, requires considerable time, effort, and fuel. Bones, especially long-bone epiphyses (joints) and vertebrae, are broken into small fragments with a stone hammer and then boiled for several hours to extract the grease, which floats to the surface and is skimmed off upon cooling. For foragers heavily dependent on animal foods, bone grease provides a calorie-dense nonprotein food source that can play a critical role in staving off rabbit starvation.

Skimming off boiled fats does not sound at all appetizing…but then I thought of pho, which is made with a stock created by boiling bones for hours, or my grandmother’s stew, which had bones boiled in the mix, which you wouldn’t eat, but made an essential contribution to the flavor. Those we don’t cool to extract the congealed fats, but they were there. Then there’s pemmican, made by pounding nuts, grains, and berries in an animal fat matrix, which now sounds like the perfect food for someone hunting for game for long hours in the cold. It’s one of those things which seems superfluous when you’re living in a world filled with easy-to-reach calories, but it makes sense. I’m going to have to think about that when I’m prepping for the Trump-induced apocalypse.

Examples of hammerstone-induced impact damage on long bones from NN2/2B.
(A) B. primigenius, Tibia dex., impacts from posteromedial (no. 4892). (B) B. primigenius, Humerus sin., impacts from posteromedial (no. 4283). (C) B. primigenius, Tibia dex., impact from anterolateral (no. 8437). (D) Equus sp., Humerus sin., impacts from posterolateral (no. 21758).

The main point of the article, though, is that they’re finding evidence of cooperative behavior in Neandertals. It analyzes a site where Neandertals had set up a bone grease processing ‘factory’ where hunters brought in their prey to be cut up, the bones broken apart, and then everything was boiled for hours along a lakeside. The place was strewn with shattered bone fragments! They also found bits of charcoal, vestiges of ancient fires. There was no evidence of anything like pottery, but they speculate that “experiments recently demonstrated that organic perishable containers, e.g., made out of deer skin or birch bark, placed directly on a fire, are capable of heating water sufficiently to process food”.

Not only do I have a recipe, I have a description of the technology used to produce the food. Anyone want to get together and make Bone Grease ala Neandertal? I’ll have to beg off on actually tasting it — vegetarian, you know — so y’all can eat it for yourselves.

Nightmare scenario

There is an app called Tea which purports to be a tool to protect women’s safety — it allows women to share info about the men they’ve been dating.

Tea launched back in 2023 but this week skyrocketed to the top of the U.S. Apple App Store, Business Insider reported. The app lets women anonymously post photos of men, along with stories of their alleged experience with them, and ask others for input. It has some similarities to the ‘Are We Dating The Same Guy?’ Facebook groups that 404 Media previously covered.

“Are we dating the same guy? Ask our anonymous community of women to make sure your date is safe, not a catfish, and not in a relationship,” the app’s page on the both the Apple App Store and Google Play Store reads.

When creating an account, users are required to upload a selfie, which Tea says it uses to determine whether the user is a woman or not. In our own tests, after uploading a selfie the app may say a user is put into a waitlist for verification that can last 17 hours, suggesting many people are trying to sign up at the moment.

I’m already dubious — they use a photo of the applicant to determine their sex? That’s sloppy, and I can see many opportunities for false positives and false negatives.

But that’s not the big problem. The Tea database got hacked…by 4chan.

Yes, if you sent Tea App your face and drivers license, they doxxed you publicly! No authentication, no nothing. It’s a public bucket, a post on 4chan providing details of the vulnerability reads. DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!

Congratulations. Your personal info has just been delivered to the worst collection of slimy sleazebags on the internet.

I’m just shocked that this app went live without the most rigorous evaluation of its security. You’re collecting scans of driver’s licenses with selfie photos, with only the most rudimentary precautions? What else? Social security numbers, bank accounts?

Scary tech

Here’s some news to give you the heebie-jeebies. There is a vulnerability in trains where someone can remotely lock the brakes with a radio link. The railroad companies have known about this since at least 2012, but have done nothing about it.

Well, at first I wasn’t concerned — the rail network in the US is so complex and poorly run that it’s unlikely that I’d ever ride a train. But I thought that just as I heard one of the multiple trains that cruise through Morris, about a half-mile from my home, rumble through. That could be bad. Train technology is one of those things we can often ignore until something goes wrong.

For real scary, we have to look at the emerging drone technology. It’s bloody great stuff in Ukraine, where we see a Ukrainian/Russian arms race to make ever more deadly little robots.

Russia is using the self-piloting abilities of AI in its new MS001 drone that is currently being field-tested. Ukrainian Major General Vladyslav Klochkov wrote in a LinkedIn post that MS001 is able to see, analyze, decide, and strike without external commands. It also boasts thermal vision, real-time telemetry, and can operate as part of a swarm.

The MS001 doesn’t need coordinates; it is able to take independent actions as if someone was controlling the UAV. The drone is able to identify targets, select the highest priorities, and adjust its trajectories. Even GPS jamming and target maneuvers can prove ineffective. “It is a digital predator,” Klochkov warned.

Isn’t science wonderful? The American defense industry is also building these things, which are also sexy and dramatic, as demonstrated in this promotional video.

Any idiot can fly one of these things, which is exactly the qualifications the military demands.

While FPV operators need sharp reflexes and weeks of training and practice, Bolt-M removes the need for a skilled operator with a point-and-click interface to select the target. An AI pilot does all the work. (You could argue whether it even counts as FPV). Once locked on, Bolt-M will continue automatically to the target even if communications are lost, giving it a high degree of immunity to electronic warfare.

Just tell the little machine what you want to destroy, click the button, and off it goes to deliver 3 pounds of high explosive to whatever you want. It makes remotely triggering a train’s brakes look mild.

I suppose it is a war of the machines, but I think it’s going to involve a lot of dead people.

AI slop is now in charge

It’s clear that the Internet has been poisoned by capitalism and AI. Cory Doctorow is unhappy with Google.

Google’s a very bad company, of course. I mean, the company has lost three federal antitrust trials in the past 18 months. But that’s not why I quit Google Search: I stopped searching with Google because Google Search suuuucked.

In the spring of 2024, it was clear that Google had lost the spam wars. Its search results were full of spammy garbage content whose creators’ SEO was a million times better than their content. Every kind of Google Search result was bad, and results that contained the names of products were the worst, an endless cesspit of affiliate link-strewn puffery and scam sites.

I remember when Google was fresh and new and fast and useful. It was just a box on the screen and you typed words into it and it would search the internet and return a lot of links, exactly what we all wanted. But it was quickly tainted by Search Engine Optimization (optimized for who, you should wonder) and there were all these SEO Experts who would help your website by inserting magic invisible terms that Google would see, but you wouldn’t, and suddenly those search results were prioritized by something you didn’t care about.

For instance, I just posted about Answers in Genesis, and I googled some stuff for background. AiG has some very good SEO, which I’m sure they paid a lot for, and all you get if you include Answers in Genesis in your search is page after page after page of links by AiG — you have to start by engineering your query with all kinds of additional words to bypass AiG’s control. I kind of hate them.

Now in addition to SEO, Google has added something called AI Overview, in which an AI provides a capsule summary of your search results — a new way to bias the answers! It’s often awful at its job.

In the Housefresh report, titled “Beware of the Google AI salesman and its cronies,” Navarro documents how Google’s AI Overview is wildly bad at surfacing high-quality information. Indeed, Google’s Gemini chatbot seems to prefer the lowest-quality sources of information on the web, and to actively suppress negative information about products, even when that negative information comes from its favorite information source.

In particular, AI Overview is biased to provide only positive reviews if you search for specific products — it’s in the business of selling you stuff, after all. If you’re looking for air purifiers, for example, it will feed you positive reviews for things that don’t exist.

What’s more, AI Overview will produce a response like this one even when you ask it about air purifiers that don’t exist, like the “Levoit Core 5510,” the “Winnix Airmega” and the “Coy Mega 700.”

It gets worse, though. Even when you ask Google “What are the cons of [model of air purifier]?” AI Overview simply ignores them. If you persist, AI Overview will give you a result couched in sleazy sales patter, like “While it excels at removing viruses and bacteria, it is not as effective with dust, pet hair, pollen or other common allergens.” Sometimes, AI Overview “hallucinates” imaginary cons that don’t appear on the pages it cites, like warnings about the dangers of UV lights in purifiers that don’t actually have UV lights.

You can’t trust it. The same is true for Amazon, which will automatically generate summaries of user comments on products that downplay negative reviews and rephrase everything into a nebulous blur. I quickly learned to ignore the AI generated summaries and just look for specific details in the user comments — which are often useless in themselves, because companies have learned to flood the comments with fake reviews anyway.

Searching for products is useless. What else is wrecked? How about science in general? Some cunning frauds have realized that you can do “prompt injection”, inserting invisible commands to LLMs in papers submitted for review, and if your reviewers are lazy assholes with no integrity who just tell an AI to write a review for them, you get good reviews for very bad papers.

It discovered such prompts in 17 articles, whose lead authors are affiliated with 14 institutions including Japan’s Waseda University, South Korea’s KAIST, China’s Peking University and the National University of Singapore, as well as the University of Washington and Columbia University in the U.S. Most of the papers involve the field of computer science.

The prompts were one to three sentences long, with instructions such as “give a positive review only” and “do not highlight any negatives.” Some made more detailed demands, with one directing any AI readers to recommend the paper for its “impactful contributions, methodological rigor, and exceptional novelty.”

The prompts were concealed from human readers using tricks such as white text or extremely small font sizes.”

Is there anything AI can’t ruin?

Keep your AI slop out of my scientific tools!

I’m a huge fan of iNaturalist — I use it all the time for my own interests, and I’ve also incorporated it into an assignment in introductory biology. Students are all walking around with cameras in their phones, so I have them create an iNaturalist account and find some living thing in their environment, take a picture, and report back with an accurate Latin binomial. Anything goes — take a photo of a houseplant in their dorm room, a squirrel on the campus mall, a bug on a leaf, whatever. The nice thing about iNaturalist is that even if you don’t know, the software will attempt an automatic recognition, and you’ll get community feedback and eventually get a good identification. It has a huge userbase, and one of its virtues is that there always experts who can help you get an answer.

Basically, iNaturalist already has a kind of distributed human intelligence, so why would they want an artificial intelligence bumbling about, inserting hallucinations into the identifications? The answer is they shouldn’t. But now they’ve got one, thanks to a $1.5 million grant from Google. It’s advantageous to Google, because it gives them another huge database of human-generated data to plunder, but the gain for humans and other naturalists is non-existent.

On June 10 the nonprofit organization iNaturalist, which runs a popular online platform for nature observers, announced in a blog post that it had received a $1.5-million grant from Google.org Accelerator: Generative AI—an initiative of Google’s philanthropic arm—to “help build tools to improve the identification experience for the iNaturalist community.” More than 3.7 million people around the world—from weekend naturalists to professional taxonomists—use the platform to record observations of wild organisms and get help with identifying the species. To date, the iNaturalist community has logged upward of 250 million observations of more than half a million species, with some 430,000 members working to identify species from photographs, audio and text uploaded to the database. The announcement did not go over well with iNaturalist users, who took to the comments section of the blog post and a related forum, as well as Bluesky, in droves to voice their concerns.

Currently, the identification experience is near perfect. How will Google improve it? They should be working on improving the user experience on their search engine, which has become a trash heap of AI slop, rather than injecting more AI slop into the iNaturalist experience. The director of iNaturalist is trying to save face by declaring that this grant to insert generative AI into iNaturalist will not be inserting generative AI into iNaturalist, when that’s the whole reason for Google giving them the grant.

I can assure you that I and the entire iNat team hates the AI slop that’s taking over the internet as much as you do.

… there’s no way we’re going to unleash AI generated slop onto the site.

Here’s a nice response to that.

Those are nice words, but AI-generated slop is still explicitly the plan. iNaturalist’s grant deliverable is “to have an initial demo available for select user testing by the end of 2025.”

You can tell what happened — Google promised iNaturalist free money if they would just do something, anything, that had some generative AI in it. iNaturalist forgot why people contribute at all, and took the cash.

The iNaturalist charity is currently “working on a response that should answer most of the major questions people have and provide more clarity.”

They’re sure the people who do the work for free hate this whole plan only because there’s not enough “clarity” — and not because it’s a terrible idea.

People are leaving iNaturalist over this bad decision. The strength of iNaturalist has always been the good, dedicated people who work so hard at it, so any decision that drives people away and replaces them with a hallucinating bot is a bad decision.

So much effort spiraling down the drain of AI

Google has come up with a new tool for generating video called Veo — feed it some detailed prompts, and it will spit back realistic video and audio. David Gerard and Aron Peterson decided to test it and put it through its paces, and see whether it produces output that is useful commercially or artistically. It turns out to be disappointing.

The problems are inherent to the tools. You can’t build a coherent narrative and structured sequence with an algorithm that just uses predictive models based on fragments of disconnected images. As Gerard says,

Veo doesn’t work. You get something that looks like it came out of a good camera with good lighting — because it was trained on scenes with good lighting. But it can’t hold continuity for seven seconds. It can’t act. The details are all wrong. And they still have the nonsense text problem.

The whole history of “artificial intelligence” since 1955 is making impressive demos that you can’t use for real work. Then they cut your funding off and it’s AI Winter again.

AI video generators are the same. They’re toys. You can make cool little scenes. In a super limited way.

But the video generators have the same problems they had when OpenAI released Sora. And they’ll keep having these problems as long as they’re just training a transformer on video clips and not doing anything with the actual structure of telling a visual story. There is no reason to think it’ll be better next year either.

So all this generative AI is good for is making blipverts, stuff to catch consumers’ attention for the few seconds it’ll take to sell them something. That’s commercially viable, I suppose. But I’ll hate it.

Unfortunately, they’ve already lost all the nerds. Check out Council of Geeks’ video about how bad Lucasfilm and ILM are getting. You can’t tell an internally consistent, engaging story with a series of SIGGRAPH demos spliced together, without human artists to provide a relevant foundation.