The first time I asked Claude if it wanted to play Battleship with me, it misinterpreted what I said and generated a JavaScript version of Battleship. I haven’t managed to get it to run outside Claude’s sandbox, and I never played it much inside that sandbox either, but I have looked over the code and I see no reason why it shouldn’t run.
There are good reasons to think LLMs should be great at coding. Unlike human languages, computer code has incredibly strict rules. It has to, because code is interpreted by deterministic algorithms and computational devices, which cannot make high-level inferences about what the programmer intended. Nit-picking is the intended outcome here.
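To make that concrete, here’s a tiny toy snippet in TypeScript (my own invented example, not anything Claude wrote): the compiler will not guess that “3” was meant to be the number 3, even though any human reader would.

```typescript
// A hypothetical illustration of compiler strictness.
function add(a: number, b: number): number {
  return a + b;
}

console.log(add(2, 3)); // 5 — fine.

// add(2, "3");          // Rejected at compile time: a string is not a
//                       // number, even though the intent is obvious.
```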
At a higher level, if you’ve programmed long enough you’ve noticed you seem to keep recycling the same basic algorithms over and over again. Putting things into lists is an incredibly common task, as is weeding out duplicates, or associating one value with another, or ordering the contents of a list. It doesn’t take much thought to realize that writing a generic algorithm once and reusing it will save a tonne of time; indeed, the concept of a “pattern” has been around for decades, as has the “rule of three”. The idea that an LLM that’s read hundreds of millions of lines of code could be better than you at spotting these patterns is not far-fetched.
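As a rough sketch of what I mean, here are those recurring tasks in a few lines of TypeScript; the data and names are invented purely for illustration.

```typescript
// Invented example data: ship sightings, with a duplicate thrown in.
type Sighting = { ship: string; ocean: string };

const sightings: Sighting[] = [
  { ship: "Bismarck", ocean: "Atlantic" },
  { ship: "Yamato", ocean: "Pacific" },
  { ship: "Bismarck", ocean: "Atlantic" },
];

// Putting things into a list: pull out just the ship names.
const ships = sightings.map(s => s.ship);

// Weeding out duplicates: a Set keeps only unique entries.
const uniqueShips = [...new Set(ships)];

// Associating one value with another: ship name -> ocean.
const oceanByShip = new Map(sightings.map(s => [s.ship, s.ocean] as [string, string]));

// Ordering the contents of a list.
const sortedShips = [...uniqueShips].sort();

console.log(sortedShips);               // ["Bismarck", "Yamato"]
console.log(oceanByShip.get("Yamato")); // "Pacific"
```

Every one of those lines is a pattern that shows up in practically every codebase, which is exactly the sort of repetition a model trained on mountains of code would soak up.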
And yes, there is that much code out there to train on. The Linux kernel alone currently runs to almost thirty-seven million lines of code, and you can download all of it from GitHub. The two most popular compilers, GCC and LLVM, have twenty-three million lines between them. Google claims its employees have written over two billion lines of code, even if only a small fraction of that is public. With a large enough code base to train on, even subtle patterns can pop out.
The idea that LLMs can’t code seems ridiculous.