In Which The Air Force Discovers What Every Gamer Already Knows


When I read Robert Coram’s Boyd [wc] I was fascinated. Here was a fellow who appeared to have been two things: 1) a strategic genius and 2) a really fast thinker. Coram (and others, including Chuck Spinney) have long held Boyd forward as an innovator who re-invented the art of war, but I must respectfully disagree.

Boyd was very good at understanding what was happening in an aerial battle, but most of all he did it quickly. Back in the day, it seemed like everyone was talking about Boyd’s “OODA Loops” – Observe, Orient, Decide, Act – which he claimed was how he thought about an aerial engagement. That much was probably true, but Boyd’s innovation – naming the process and writing it down as a bunch of bubbles on a viewgraph – was not genius. It’s just a formulation of “what everyone does in combat” added to “… and Boyd does it particularly fast.” Imagine if you were playing kendo against Hidehisa Nishimura, the current All-Japan champion (and living proof that not all cops are bastards), and you were a more-or-less normal beginner/amateur – Nishimura sensei might appear to be reading your mind, to know what you were doing before you even figured it out yourself. That’s because he does. Sort of. The thing is: his experience is so profound that he knows what your options are in a given situation, and those options are not infinite. They depend on your position, direction of motion, skill at footwork (how fast you can shift your body), etc. John Boyd had the same ability and knowledge-base when it came to jet fighter aircraft of the mid-to-late 20th century. I specified that carefully because Boyd would have had to research the capabilities of an F-22 in order to use it to full effect, since he never flew any thrust-vectoring aircraft that I am aware of. Boyd was unquestionably an amazingly great pilot, but no matter how you look at it, he had to understand the tools he had available to him – and he was always a really, really fast thinker.

I have discussed this topic with a few high-level martial artists, and – even though it’s a brief flash – they know what you are going to do, because what you can do is constrained by your ability, and the strategic situation does not allow for adding new decision-nodes to the great “how kendo works” network map in their heads. [This is how I always interpreted the “Kobayashi Maru incident” as having played out: Kirk did something that the game engine could not comprehend because it had never seen it before] [which is completely wrong, technically, but it’s a TV show] The point is that, with some games – including jet combat – the options can be enumerated thoroughly enough, and then an AI will always think faster than a human and chew them to pieces. Imagine if you were an experienced kendo player and you could slow Nishimura down to 1/1000th speed – would you be able to hit him? Would he be able to hit you? You’d have enough time to think ahead of your opponent and make the best move you could. I suspect that’s what being a combat AI would feel like, except nobody’d waste the time to give the AI any ability to reflect on its own experience.
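That “the options can be enumerated” point is the whole trick: once the move set is finite, picking the best response is just a search. Here’s a toy maximin sketch in Python – the move names and payoff numbers are completely made up for illustration, not any real kendo or flight model:

```python
# Toy maximin: enumerate every move pair, then pick the move whose
# worst-case outcome is best. Payoffs are scored from MY point of view.
MOVES = ["high", "low", "feint"]

# Hypothetical payoff table: PAYOFF[(mine, theirs)] -> score for me.
PAYOFF = {
    ("high", "high"): 0,  ("high", "low"): 2,  ("high", "feint"): -1,
    ("low", "high"): 1,   ("low", "low"): 0,   ("low", "feint"): -2,
    ("feint", "high"): 0, ("feint", "low"): 1, ("feint", "feint"): 0,
}

def best_move(moves, payoff):
    """Return the move that maximizes my worst-case payoff."""
    return max(moves, key=lambda m: min(payoff[(m, o)] for o in moves))

print(best_move(MOVES, PAYOFF))  # "feint": its worst case (0) beats the rest
```

The machine’s edge isn’t mystical: it re-evaluates that whole table every frame, while the human opponent is still “observing.”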

The meta-workflow of every combat situation, ever

Of course, you can collapse any game AI into a loop that looks like OODA. In fact, at Lavecon in 2019, the programmer who did the opponent combat AI for Elite: Dangerous did a short Q&A about it, and explained it as a loop. [Sarah Jane Avory] Of course it’s a loop: that’s really the only way to program such a thing – more precisely, since it’s got to be interruptible and re-entrant, it’s probably coded as a finite state machine, and one of the states is “figure out what to do next”, etc.
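A minimal sketch of what such an interruptible state machine could look like, in Python – the state names follow OODA for illustration, and nothing here is from Frontier’s actual code:

```python
from enum import Enum, auto

class State(Enum):
    OBSERVE = auto()
    ORIENT = auto()
    DECIDE = auto()
    ACT = auto()

class CombatAI:
    """One state transition per tick(), so the game engine can interrupt
    and resume the AI between frames -- re-entrant by construction."""

    def __init__(self):
        self.state = State.OBSERVE
        self.threat = None
        self.plan = None

    def tick(self, world):
        if self.state is State.OBSERVE:
            self.threat = world["enemy_range"] < 500  # close = threatening
            self.state = State.ORIENT
        elif self.state is State.ORIENT:
            self.plan = "evade" if self.threat else "pursue"
            self.state = State.DECIDE
        elif self.state is State.DECIDE:
            # "figure out what to do next" is itself just one of the states
            self.state = State.ACT
        elif self.state is State.ACT:
            self.state = State.OBSERVE  # act, then loop forever
            return self.plan
        return None

ai = CombatAI()
for _ in range(4):
    action = ai.tick({"enemy_range": 300})
print(action)  # "evade"
```

The loop never terminates on its own – the engine just stops calling `tick()` when the fight is over.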

[The Register]

An AI bot defeated a human pilot in a series of virtual dogfights that unfolded in the skies – albeit within a flight simulator – during a competition held by the US military research arm DARPA.

The fighter pilot battling on behalf of us humans, a US Air Force instructor nicknamed “Banger,” struggled to fend off the AI system developed by Heron Systems, a defense contractor with headquarters in California, losing 5-0 in one-on-one virtual combat.

Each player, sat inside a fake, computer-generated F-16 military aircraft, attempted to deplete the other’s health bar by shooting bullets while avoiding damage. The battles were part of the final stages of DARPA’s Air Combat Evolution (ACE) program, where eight different machine learning agents built by various companies and academic institutions were pitted against each other. The top team, Heron, was then chosen to fight a human pilot.

There they go with their dumb “bullets” again, pew pew pew. Seriously, though, I can tell you what’s going on: an AI is even better than a human at integrating “where is the enemy going next based on their flight model” and would put a missile in the right place instantly every time. But, to be fair to the poor Air Force puke, a flight simulator gun-engagement is hardly realistic. But, he still got stomped. Of course he got stomped. He was playing 3D chess against a computer and $5 computer chess AIs can play at a master level. Compared to an AI pilot, a typical human is a beginner. I don’t mean just in terms of skill level, I also mean in terms of experience. (“flight hours” may not be the right metric any more)
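The “put the weapon where the target is going to be” part is just geometry: with a constant-velocity flight model, the intercept time is the root of a quadratic. A sketch of the cartoon version (2D, constant target velocity – real flight models are vastly messier, so treat this purely as illustration):

```python
import math

def lead_time(rel_x, rel_y, vel_x, vel_y, shot_speed):
    """Time until a projectile fired now at shot_speed can meet a target
    at relative position (rel_x, rel_y) moving at velocity (vel_x, vel_y).
    Solves |p + v*t| = shot_speed * t. Assumes the target's speed differs
    from shot_speed; returns None if no intercept exists."""
    a = vel_x**2 + vel_y**2 - shot_speed**2
    b = 2 * (rel_x * vel_x + rel_y * vel_y)
    c = rel_x**2 + rel_y**2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # target outruns the projectile
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    times = [t for t in roots if t > 0]
    return min(times) if times else None

# Target 100 m ahead, crossing at 10 m/s; projectile flies at 50 m/s.
t = lead_time(100.0, 0.0, 0.0, 10.0, 50.0)
aim = (100.0 + 0.0 * t, 0.0 + 10.0 * t)  # aim point: target's future position
```

A human estimates that lead by feel; the machine solves it exactly, every frame, for free.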

Lieutenant Colonel Justin “Glock” Mock, one of the competition’s commentators, welcomed the use of AI in the military. “We’ve got AI that works,” he said. “When I’m going down in combat, there is no pride anymore, there is no ego. It is about doing what we need to get done and going back home to our families.” ®

Dipshit, it’s about never getting up off your couch, and sitting through your tedious family dinner with your bratty kids, and not going anywhere near an airplane again.

Meanwhile, since we’re talking about software, here’s some shocking news: the “rapid development” project that was going to build a whole new logistics system for the F-35 has failed. [register] Basically, the sequence of events was:

  1. F-35 logistics software is developed in parallel with the aircraft
  2. This turns out to produce software too complicated to code, since it’s aiming at a moving target (what does your parts inventory look like when you don’t know what parts the plane has yet?)
  3. Someone suggests “my bunch of college buddies do ‘agile software development’ and they can step in and kick this problem’s ass!”
  4. This turns out to produce software too complicated to code, since it’s aiming at a moving target and the development process is now also a moving target

I suppose there was a microscopic probability of the two moving targets converging but, even if they had, the convergence would only have been brief.

The US Government Accountability Office (GAO) said in its annual report into F-35 design and development that software development practices within the F-35 Joint Project Office (JPO) and jet manufacturer Lockheed Martin were below par – and had hindered the supersonic stealth fighter’s progress.

“The program’s primary reliance on the contractor’s monthly reports, often based on older data, has hindered program officials’ timely decision making,” said the GAO. “The program office has also not set software quality performance targets, inconsistent with another key practice. Without these targets, the program office is less able to assess whether the contractor has met acceptable quality performance levels.”

Shorter GAO: “OMG you took the developers’ time-lines as gospel?”

Lockheed and the JPO adopted an Agile-style methodology that the US military branded C2D2, or Continuous Capability Development and Delivery. This has been less than stellar, though its intent was to allow iterative development of the jet’s capabilities, rather than delivering an all-singing all-dancing aircraft all at once – and taking decades to do so. So far C2D2 has cost $14bn, a portion of which will have been paid by F-35 customers including the UK, which intends buying at least 48 of the jets to fly from its Queen Elizabeth-class aircraft carriers.

The F-35 program has been a failure because of the overly high degree of parallelism attempted in its development. In fact, the parallel design-and-implementation model that the F-35 program has failed so badly with is the “agile” software development model. It is exactly the same idea: broadly specify components, then implement them, and we’ll glue them all together at the last minute and it’ll all just work hunky-dory! It takes an idiot to propose an idiotic solution to an idiotic problem when the axis of idiocy is the very thing being proposed as the solution.

------ divider ------

Elite: Dangerous had an interesting AI problem, in which the AI was given control over crafting and engineering weapons and some of the AIs created weapons powerful enough that they began targeting and stalking players. [kotaku] You can imagine the hilarity. The AI in Elite had already allegedly been dialed back several times because “nobody could beat it.” Each time a new option-node was added to the state machine, the AI became much harder to predict.

For complex developments like an advanced fighter jet, it is not possible to build the components serially. There are too many components, they are too complicated, and thus it is necessary to use parallelism wherever possible. In development terms, an advanced fighter jet may be an inherently impossible project – especially if the deliverables keep changing, as they appear to be doing with the F-35. There is a way to deal with high parallelism: object reuse. In code terms, you’d build your F-35 around the established and functional engine that goes in an F-15. That takes a gigantic number of unknowns off the table, and you’ve had the benefit of 30 years to thrash out that engine design. Then, maybe you look at the paint used on the F-22, and go “ugh” and conclude that there isn’t enough time to develop a stealth coating, so you declare full stop: this is either not going to happen at all, or it will not be a stealth aircraft. Those sorts of discussions have to happen early in the design cycle, when you make your choice between build, bypass, or buy (if you’re developing a software system). Basically, what the F-35 program did is: “We’re gonna need a database, so you guys get busy coding our own version of ORACLE. And we’re going to need machine screws for titanium, so you guys go figure out how to 3D print them from scratch. Hey, Fred, you said you wrote a compiler once? Let’s use our own programming language!” Use as many risky technologies as possible, bite off much more than you can chew, and plan on converging on a working target “some day.” The component that fails to materialize on time and in spec becomes the new “long pole in the tent” and everything serializes. Boom.
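That “long pole in the tent” failure mode is just critical-path arithmetic: a parallel project finishes when its longest dependency chain finishes, so one late risky component drags everything behind it. A toy sketch with invented task names and durations in months – nothing here is real F-35 schedule data:

```python
from functools import lru_cache

# Hypothetical dependency graph: task -> (duration_months, prerequisites)
TASKS = {
    "engine":          (10, []),   # reused, proven engine: a known quantity
    "avionics":        (8,  []),
    "stealth_coating": (30, []),   # the risky new technology
    "integration":     (5,  ["engine", "avionics", "stealth_coating"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish = own duration + latest prerequisite finish."""
    duration, deps = TASKS[task]
    return duration + max((earliest_finish(d) for d in deps), default=0)

print(earliest_finish("integration"))  # 35: the coating is the long pole
```

Everything else could parallelize perfectly and the project still serializes behind the slowest risky component – which is exactly why the build/bypass/buy decisions have to happen up front.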

Comments

  1. Pierce R. Butler says

    … and not going anywhere near an airplane again.

    A 100% safe Prediction: human ground crews will still have jobs after all the pilots’ tasks have been automated.

  2. consciousness razor says

    He was playing 3D chess against a computer and $5 computer chess AIs can play at a master level.

    And the free ones can destroy grandmasters.

    Well, not exactly destroy, but … you know … beat them in a game of chess. It’s emotional destruction.

    Anyway: less money, more ass kicking. The hardware’s not so cheap, however.

  3. JM says

I wouldn’t blindly take the human vs AI pilot event seriously. It was an event to showcase military AI projects and it’s likely that it wasn’t a fair fight. That said, it’s also probably true. The limited range of objects and states that can exist in the air takes away the biggest problem AIs currently have.

  4. Reginald Selkirk says

    … where eight different machine learning agents built by various companies and academic institutions were pitted against each other. The top team, Heron, was then chosen to fight a human pilot.

    That’s one way to do it, but consider: what it takes to beat an opponent AI may not be the same thing it takes to beat a human pilot. In chess for example, it is fairly well-known that computers and humans do not play the same way. So why not broaden the field?

  5. Reginald Selkirk says

    This turns out to produce software too complicated to code, since it’s aiming at a moving target and the development process is now also a moving target…
    Elite: Dangerous had an interesting AI problem, in which the AI was given control over crafting and engineering weapons and some of the AIs created weapons powerful enough that they began targeting and stalking players.

    Well duh! The solution is obvious. Hire an AI to produce your software. As a side benefit, your expense budget for Doritos and Red Bull will shrink dramatically.

  6. sonofrojblake says

    A 100% safe Prediction: human ground crews will still have jobs after all the pilots’ tasks have been automated.

I’d go further: 90% safe prediction – human ground crews will still have jobs after all the pilots’ AND REMF strategists’ tasks have been automated. It’ll be AI turtles all the way down before we’ve built a bot that can maintain another bot.

  7. Reginald Selkirk says

    Idea for a new weapon category: a small mine or bomb which attaches to a drone, but doesn’t explode until the drone returns to its home base.

  8. says

    Reginald Selkirk@#8:
    Idea for a new weapon category: a small mine or bomb which attaches to a drone, but doesn’t explode until the drone returns to its home base.

    That’s evil. I like it.
    I suppose one would need to get inside the drone’s command and control, force it to swing by for the addition of the explosive package, then be sent on its way again.

  9. says

    It sounds like, in the future, IFF for drones could be an interesting thing. I suppose it’s already sorted with full-sized aircraft, but you do need to be sure that drone coming back to base is yours.

  10. anthrosciguy says

    There’s a parallel in racing I found back when I roadraced motorcycles. The scariest guys to pass were the guys going too slow, because they could theoretically move anywhere on the track at any time. The faster they went, the fewer options they had for where they could move, so your safe passing options were far clearer.

  11. says

    Pierce R. Butler@#1:
    A 100% safe Prediction: human ground crews will still have jobs after all the pilots’ tasks have been automated.

    I can’t do Latin but “Who repairs these repairbots?” or something like that. Point well-taken.

    Could we argue that a self-repairing robot is also capable of reproduction? That would hit on some of the criteria for being alive. Add differential survival and repair and you have evolution.

  12. Dunc says

    It’ll be AI turtles all the way down before we’ve built a bot that can maintain another bot.

    Quite possibly, because maintenance is hard. On the other hand, we can already build a bot (or rather, an automated production line) that can build another bot… At what point does it become more effective just to ramp up production enough that you can treat them as disposable? One future option I see is of swarms of cheap, simple, disposable drones, rather than smaller numbers of more capable drones. We’re already halfway there with cruise missiles.

  13. komarov says

    This bit about the ED AI is a relief. I never got into combat there in my (very brief) tenure but I did try the combat demo in the early days, where I, a highly decorated Tie Fighter veteran, absolutely sucked. Now, finally, I know the truth: It was the AI’s fault. Shoddy programming and all that.

    If the development of the F-35 was always based on moving targets then it was doomed from the start. If you don’t know what you want, you most definitely won’t get what you need. Adopting unsuitable project management strategies is basically just making an effort to fail even harder.

  14. says

    komarov@#14:
    If the development of the F-35 was always based on moving targets then it was doomed from the start.

    Yes, and there was no shortage of people waving warning flags at the time.

Leave a Reply