I used to be a highly creative person. I’m not bragging but one idea I threw out over a sushi dinner at Higashi West got turned into a start-up which netted the founder about $200mn. I’ve had a bunch to drink and topped it off with a Zoloft, so forgive me if I wander a bit. It’s snowing out and the shop is too cold and I’m not sure if I want to play computer games, tonight, or try to write.
So, what I wanted to talk about is something I don't think people do enough of: paying attention to the experience of being themselves. I assume that some of you are creative, perhaps all of you are. I'm in a weird situation because I'm a formerly highly creative person who has had to slow down considerably. I used to solve design and fabrication problems (e.g.: fabrication sequence of operations) effortlessly and subconsciously, and now I have to actually battle with it, i.e.: sit down in my armchair with a notepad and think "1) How do I do this?" and start fleshing out 1A), 1B), etc. until I can get it done. This is not merely a side-effect of ageing, it's a side-effect of fairly bad diffuse mid-brain damage that I suffered, which has, among other things, pooched my ability to keep things in my head until I get to the bottom of a chain of reasoning. Imagine, if you will, that you were listing how to make something in a ten-step process and by the time you get to step 4 you have forgotten your own step 1. It makes you feel incredibly stupid, believe me.
Don’t cry for me, though, it’s the hand I dealt myself and I’m going to figure out a winning configuration or die trying. I’m OK with either.
However, as I constantly look over my own shoulder, trying to map out the damage, I keep watching myself run through my paces. That's the only way I can describe it. If you're a stoner or you're in the habit of meeting me halfway by reading my blog when you're shit-faced, maybe you'll understand. If you step back and watch yourself make reasonable decisions, you can sometimes think back up the chain and realize that your brain (I am just making up this example pro forma) has a checklist it runs through when you are on a road-trip and stop at a rest-stop:
- is there coffee?
- am I hungry? (bacon and eggs and coffee)
- do I need to pee or other biological stuff?
- vehicle fuel level OK?
- am I awake enough to drive?
That is my list. Now that I have absorbed the fact that I have a brain injury that affects my memory, I have a few other items such as:
6. Where am I?
7. Is Gypsy (the name of all GPS’ in my world) on track?
This new habit of mine, of deconstructing my thoughts, has resulted in my becoming aware of the idea that knowledge is encoded in templates or strategies that give a rough framework for solving problems repeatedly. An example would be: "How do you make a marble statue of Michelangelo's David?" Option 1) be Michelangelo; option 2) encode Michelangelo's process for mentally rendering a block of marble into a statue. What piece goes where, and how does it all sort of form itself in your mind? Or, in AI terms: given a training set of Michelangelo, fill a virtual marble block of a certain size with noise, and let your Michelangelo training set successively remove the parts that least match Michelangelo's sculptures. [I will continue to note this incessantly: that training set does not encode all of Michelangelo's work, merely a probabilistic map of what is more or less Michelangelo-esque. If you think about it, that's the only way you can come up with a system where you can effectively say "give me an image that is 75% Michelangelo-esque and 25% Jeff Koons." Ick. OK, stop that train of thought.]
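For the programmers in the audience, that "fill the block with noise and whittle away what doesn't fit" idea looks roughly like the sketch below. It's a toy, not any real diffusion library's API; `denoiser` is a stand-in for whatever the trained model actually is.

```python
import numpy as np

def carve_like(denoiser, shape=(64, 64, 64), steps=100, seed=None):
    """Toy sketch of the analogy above: start a virtual 'marble block' as pure
    noise, then repeatedly let the trained model remove whatever least matches
    its training set. denoiser(block, t) is assumed to return a slightly
    less noisy estimate at each step -- it stands in for the learned model."""
    rng = np.random.default_rng(seed)
    block = rng.standard_normal(shape)   # the block starts as nothing but noise
    for t in reversed(range(steps)):     # work from most-noisy toward least-noisy
        block = denoiser(block, t)       # keep what's Michelangelo-esque, discard the rest
    return block                         # whatever survives is the "statue"
```

The point of the sketch is just that the model never stores the statues themselves, only a learned sense of "more or less like the training set" that gets applied over and over.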
What I really wanted to talk about was two creative experiences, in detail. What I felt and thought while I was having them, and how they fit with other creative experiences (or problem-solving experiences) I have had. These were experiences that old me would have simply had without registering them, because I had not yet learned to pay attention to them.
I guess we can start with the log. So, there's an instagrammer (@lumberbarbie) in the southern end of New York State/Northern end of Pennsylvania, who posted some absolutely erotic photos of some red oak slabs she and her guy had cut. I immediately asked if they were for sale (assuming they were out in Oregon) but it turned out I was only a 2-hr drive away. "Hi Ho Silverado!" full tank of gas and let's go. They are super charming people and I arrived right after the weather gods had gifted us all with a thin sheet of ice to deal with. It ended with my wallet slightly lighter and the back of my truck laden down with 350kg of oak slabs, count: 3. The biggest is a glorious monster 8cm thick, 2.7m long, and 45cm wide. It's quarter-sawn right off the heartwood, so if you're a wood nut you have my permission to drool. Three humans were able to load it in the back of my truck without a lot of pain, but when I got home and decided to deal with it the next day, I discovered that my feeble muscles couldn't even shift it. That distant laughter was probably the gods. I tied a rope around one end, looped it around one of the poles of the pole barn, and pulled: whackabam! Now I had a small pile of heavy wood on the ground. It did not want to move at all.
I went around the back, took my pry bar, and pried the wood forward about an inch. I had 30-odd feet to do. I have invested the time in projects on that order, but – and here is the part I wanted to talk about – some little part of me was saying "there might be a better way." Old Marcus (I am sure some of you think he was an asshole) would have immediately riffled through a dozen solutions, but new Marcus stood there like a cow, with snot dribbling from his nose. So: this is the mysterious "creative process" so many people claim AI can't do. And I'm either going to have to be creative, or freeze to death choking on my own snot. I experienced a brief period of flashing through options: do I have anything with wheels (wheels, BTW, rock for this kind of stuff) or is there a pulley somewhere that will give me some mechanical advantage? I could keep levering it forward with the pry bar, but no, something was telling me that wasn't right. It was really frustrating, but the experience of "being creative" is what I want us to focus on: I was rapidly remixing ideas involving wheels and rope and ratchet straps and prybars and – aha.
I have never done this before, but it sure worked wonderfully well. Did I invent a new way of moving heavy things? Almost certainly not. In fact the Egyptians would certainly have known of this one. I just frustrated myself by looking on the internets for "egyptians moving large rocks" and got pictures that I don't find credible, at all. For one thing, some of the people in the illustrations were smiling. So, what I did was wrap two loops of rope around the forward end of my stack of slabs, tied it in a knot, then tied it in another knot halfway up my pry bar. I remember distinctly that, at the time, I was still rapidly filtering through options my brain was throwing at me, i.e. "what if the rope slips? what if the bar slips? what if it doesn't work at all?" etc. I remember distinctly thinking that this gave me a great deal of fine control of where and how much leverage I was applying, and that no matter what, my leverage would lift the front of the slabs instead of burying them, the way a force from the rear does. With a pry-bar from the rear, your leverage is a factor of how far you insert it under your load – and you cannot move the load farther than that without resetting the pry bar. With a pry-bar from the front, your leverage is a factor of how high or low the loop around the bar is (the lever arm of a type-2 lever), so you can easily adjust your movement distance, lift, and leverage. By the time I had the stack of slabs where I wanted it (including lifting them 2″!) I had remembered how to calculate mechanical advantage, I think, and realized that I had just re-invented the type-2 lever, i.e.: sit down and shut up, modernity punk. Bear with me: I don't think I invented anything new, but what I experienced was a new thing for me, which is the subliminal process of winnowing out bad ideas, proposing new ideas, visualizing them in action, and figuring out what was even worth trying. If that is the mystical experience of "creativity," I have experienced it many times, just usually a lot faster and with more certainty. Now if you watch a current AI like Deepseek running in problem-solving mode, you'll see exactly the same thing. "Call for aliens? Not gonna work." "What about something with the pry bar?" etc. It rapidly riffs through pieces of a potential solution that involve the things that are present on the scene, and eventually settles on a suggestion. Is that different from what I did? Sure: the AI probably explored the option space more thoroughly.
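Since I mentioned remembering how to calculate mechanical advantage: for a type-2 lever it's just the ratio of the two arms. The numbers below are purely illustrative, not measurements off my actual pry bar.

```python
def type2_mechanical_advantage(effort_arm_m, load_arm_m):
    """Type-2 lever (load between the fulcrum and where you pull):
    mechanical advantage = effort arm / load arm."""
    return effort_arm_m / load_arm_m

# Illustrative numbers only: pulling at the top of a 1.5 m bar whose tip is
# planted on the ground, with the rope looped 0.3 m up from the tip, gives
# 1.5 / 0.3 = 5x leverage -- so ~350 kg of oak feels like ~70 kg at the
# handle, minus friction.
print(type2_mechanical_advantage(1.5, 0.3))   # -> 5.0
```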
I know that Subliminal Special Agent Pierce R. Butler, seer of all errors in images, is going to have a field day with that. First off, the ancient egyptians probably didn’t wear white diapers. Also, the only pry bar present in the scene is the guy who is helpfully lending his weight to the block of stone.
So much bullshit, so little time. Here’s a better view of moving stones in ancient Egypt:
They definitely are kitted for the job. By the way, I am happy to see that they are wearing approved OSHA Safety Sandals(tm) like all the molten metalworkers on youtube. At least they only have 6-packs instead of awkward 10-packs or extra nipples. But this is a problem with using models that are more or less designed for fictional erotica:
That looks sort of like the cast and crew of "Pyramid Builders" starring Charlton Heston and Jude Law, and whoever the girls with the boob jobs are. I am not sure how they are supposed to pull rocks if their arms are tied behind their backs… But.
What I am trying to get you to do is to deconstruct your own experience of "creativity" in some situation or other, and to look at the components of what happened, in detail. If your brain still works better than mine, it may not work for you. But, if you're like me and you struggle sometimes, you might have a few sparks of enlightenment that lead you to one of two conclusions: either creativity is just mushing around existing ideas, or creativity is absolutely magical but machines seem to get damn close to it.
Another recent experience of creativity that I deconstructed was making some shelves for the hot shop. I mounted a big 2.4m x 1.2m plywood sheet to the interior wall, with the idea of cutting some sword racks out of more plywood (using Mr Happy Dancing Bandsaw) and making what I have been calling "The John Wick Shelf" – a place where I can hang current and future projects in a reasonably dramatic and intimidating manner. [The shed is not airtight, so seasonal moisture is a problem; I will not leave anything valuable hanging.] But I had another problem requiring a creative solution. I kept bumping up against it, and my instinct (!I do not know what "instinct" is, but I'm pretty sure once AIs develop it, it's Game Over!) said I needed some shelves that looked right, for projects that weren't quite blades, yet. I have a lot of projects that consist of 2 or 3 lumps of steel wrapped in a cloth soaked with WD-40 for several months, until they magically turn into a bigger single lump wrapped in a cloth soaked with WD-40. Anyway, I was able to, over the course of 3 days, interrogate and monitor my creative process.
The shelf began with a single row of basic sword rack down the center. Then, down the left side I put a shorter sword rack and, at the bottom, a few fake firearms (specifically an Ermawerke MP-40 and a Ruger P-99 with a silencer), but that left the right-hand side. My idea was to put shelves, just – you know – big square blocky things, but it didn't feel right. Then, I realized I was being creative and started monitoring the process. I went back to the basic sword rack design and the shelf design, and eventually everything clicked together on an idea of making shelves between the racks but in line with them, so I could rack or shelve components.
Now, the way I interpret that is that my "possibility generator" started kicking out possibilities based on the things that were already in the environment (racks and shelves) and started asking itself over and over "what about this?" and "what about that?" while I kept it focused on the basic concept. It was a weird feeling – my conscious self felt like there was a solution to hand, but I didn't know what it was – I just needed to wait and work on it more. When the idea (which is not a particularly splendid idea, it's just an idea) finally hit, it was an "oh, duh" moment instead of a "Eureka!" moment. But that is how creativity seems to work, for me. Again, there is a hypothesizer that cranks out crazy ideas as fast and sloppily as it can, then there's a go/no-go filter and a refiner:
which goes back to the go/no-go filter until something gets through and my upper brain thinks about it a bit and examines it for practicality. I distinctly remember the "oh, duh" moment when I realized that the difference between a classic sword-rack and a rack-shelf is 1/2″ more depth in the rack, which is where the shelf can go. For someone like myself, with a table-saw and band-saw, it was a matter of 15 minutes to make a rack-shelf, and 4 to mount it.
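If you want that loop spelled out, here's a toy rendering of it. Every function name is made up; this is the shape of the loop, not anyone's actual cognitive architecture (mine or an AI's).

```python
def creative_loop(hypothesize, is_plausible, refine, judge, max_rounds=1000):
    """Toy rendering of the loop above: a hypothesizer cranks out sloppy ideas,
    a go/no-go filter throws most of them away, a refiner polishes the
    survivors, and the 'upper brain' judge decides whether one is practical.
    All four callables are made-up stand-ins."""
    for _ in range(max_rounds):
        idea = hypothesize()           # fast, sloppy possibility generator
        if not is_plausible(idea):     # go/no-go filter
            continue
        idea = refine(idea)            # refiner pass
        if judge(idea):                # practicality check by the upper brain
            return idea                # the "oh, duh" moment
    return None                        # nothing clicked; wait and work on it more
```

Feed it a sloppy-enough generator and strict-enough filters and you get something that looks a lot like standing in the shop waiting for the "oh, duh."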
Creativity!
I know I've nattered on a lot here about how what AIs do is not substantively different from what humans do, but it ought to be increasingly obvious to anyone who wants to think about it that we are no longer the only clever ones in the house. We may actually not be that clever at all.
Meanwhile, you've probably heard a bit about the great big AI wars and DeepSeek vs. OpenAI, etc. My better angels are telling me not to weigh in on all of that, but I feel like I really must. It's not a technological problem, or an AGI problem, it's a capitalism problem. What happened was that Trump and the US Government and a bunch of other parties were about to (did, in fact!) invest in OpenAI at a mind-blowing valuation in the hundreds of billions of dollars. Then, the day after that deal was inked, a Chinese company released a pretty amazing AI engine that runs on much, much less hardware than OpenAI's and performs a bit better. It included some features that OpenAI felt obligated to appropriate immediately – a very interesting "mulling mode" in which the AI mutters about the various things it is considering, and why, and what. I have watched that a number of times and I have to say it seems a lot like how I think, now.
[Lengthy sidebar: I believe that thinking is a learned behavior. Some of us are not as good at it as others, and some of us have strategies and tactics the others have not invented. Back when I was studying the AI called Richard Feynman, it became apparent to me that it took advantage of parallelism that the rest of us could not emulate, to work forward and backward toward a solution from both ends of a problem. While doing other things. Feynman's memoirs, in which he describes how his father taught him, seem to justify my interpretation. What if there are people who simply think differently? Not quantitatively, but just using different algorithms that are more efficient for certain problems? Feynman appeared to favor intuitive leaps followed up with grinding out the reasoning, whereas John Von Neumann, as described by Feynman, used to simply burn holes through problems with the sheer power of his brain. I regularly get the sensation that I am dealing with problems that pre-injury Marcus would have smoked, but current Marcus has to methodically attack and sometimes take notes. It's annoying and interesting in roughly equal measure.]
Anyhow, what the clever Chinese did, quite deliberately, was fire a torpedo below the waterline of the market valuation of OpenAI. They just inked this deal for billions and suddenly there is a free, open-source alternative that comes close to kicking its ass. Not only that, but the code is available for researchers. If you haven't been watching, the name "OpenAI" implies… openness. Which is odd, since it's been a closed solution that its backers are struggling to monetize. Meanwhile, the Deepseek solution can actually run on a person's home gear, if they've got a super high-end gaming machine (mine is high-end, but Deepseek does require about 3 times the memory and GPU power my machine has), and a research lab at any company can build a research platform for Deepseek for around $100,000, whereas OpenAI has been insisting that to even play in their space requires tens of billions of dollars.
Most Americans seem to forget that the Chinese kind of wrote the book on strategy around 550 BCE. The Chinese technophile grows up familiar with the idea that technology is a strategic consideration, going (again) back to when most Europeans were still mainlining jesus christ myths and living in mud huts. A Chinese strategist will not throw a stone unless there are multiple possible ducks it will kill; I utterly lose my shit when Turnip or other Americans talk as if the Chinese are stealing our good ideas. Anyhow, one of the ducks that got casually crushed by the Deepseek stone-fling was the assumption that US tech firms like Microsoft, Google, X, and Facebook are leading the AI cutting edge. The Deepseek guys didn't just demonstrate that that's not true, they did it on low-end hardware, faster, easier, and better. One of the other birds that got crushed was the US' attempt to restrict access to high-end GPUs. The Deepseek team optimized their code and came up with a couple of clever ways of training new models. So OpenAI is saying "We need $3 billion to train a new model, and 200,000 GPUs in a humongous data center, and Altman will buy another $10mn sports car" while the Deepseek guys leapfrogged them with about 250 GPUs and some advances in training algorithms. Specifically, the Deepseek guys made a "teacher model" that exists to do nothing but teach new AIs how to AI – then you point that at a new AI and you don't need to shovel the whole internet through it any more. OpenAI's models legendarily cost billions to train and take a year; Deepseek trains them fully in a weekend and they are smaller and faster and (apparently) smarter. In fact, Deepseek also leapfrogged OpenAI on "reasoning models," which are a cool new thing in AI research.
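That "teacher model" trick is what the research literature calls distillation: train the small model to match the big model's output distribution rather than re-digesting the raw internet. Here's the rough shape of one training step in generic PyTorch; everything in it (`teacher`, `student`, `batch`) is a placeholder, and Deepseek's actual recipe has plenty of details this leaves out.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, batch, optimizer, temperature=2.0):
    """One textbook knowledge-distillation step: the student is trained to
    match the teacher's softened output distribution. teacher, student,
    and batch are stand-ins for whatever models and data you actually have."""
    with torch.no_grad():
        teacher_logits = teacher(batch)              # soft targets from the big model
    student_logits = student(batch)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                             # standard temperature scaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The economics follow directly: the expensive part (the teacher) gets trained once, and every student after that learns from its outputs instead of from the whole internet.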
Let's briefly look at reasoning models. One of the things AIs are pretty good at is clustering things in problems. That's because they're good at matching. So you take a whole bunch of facts and group them based on proximity matching. We humans do this: we have clusters of problems called "math" and clusters of problems called "engineering," and when you keep tightening up the clustering, it turns out that engineering problems regarding shelf-building tend to cluster together. So, now you have a reasoning model that can apply previously successful approaches to shelf-building to any shelf-building problem that comes in. I know that probably sounds like a silly example, but as always with AI you have to multiply it by a few billion – imagine the AI "knows" basic details about every shelf humans have ever built. And everything else humans have ever done. Anyhow, the cool thing about reasoning models is that they can (just like humans) extend across domains – if you know how to make elegant shelving, you might have some cool ideas about how to do the porches on a high-rise hotel. There are problems we humans deliberately treat as separate, when they are not (e.g.: labor and immigration). Part of what will give AGI its power (real soon now!) is that it will be able to generalize cross-domain problems. And those crossed domains might include "common courtesy" and "how to conquer humanity" but probably not.
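For the concretely minded, "grouping based on proximity matching" looks something like this in practice: turn each problem into a vector and cluster the vectors. `embed()` below is a stand-in for whatever embedding model you like; the rest is garden-variety k-means, not any particular lab's secret sauce.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_problems(problem_texts, embed, n_clusters=10):
    """Rough sketch of clustering-by-proximity: embed each problem as a
    vector (embed() is a stand-in), then group nearby vectors. Problems
    about shelf-building tend to land in the same cluster, which is what
    lets a model reach for previously successful shelf-building approaches."""
    vectors = np.array([embed(text) for text in problem_texts])
    labels = KMeans(n_clusters=n_clusters).fit_predict(vectors)
    return {text: int(label) for text, label in zip(problem_texts, labels)}
```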
Anyhow, Deepseek's brilliant maneuver has advanced US AI research, while simultaneously firing shots below the waterline of US AI research. The only effective response I have seen, so far, was OpenAI stealing the idea of letting a human listen in on the AI mulling over its answers. The other stuff is just cringe: OpenAI and most AI apps code to an API layer called "CUDA" that mediates access to NVIDIA's chipset. The Deepseek guys realized that CUDA's many abstraction layers made it less efficient, so they just read the fucking manual and talked directly to the GPUs. Also, they didn't write their main control loops in interpreted languages. In techbro-land, programmers have learned to live in a world of infinitely scalable, inefficient computing, so they don't really bother worrying about any of that stuff. The Deepseek team had to live on the wrong side of US tech restrictions and showed that the tech restrictions just encourage you to write better code. For those of you who aren't programmers, it may be hard to believe that a few lines of code can make such a difference. Sometimes it's a matter of finding the right optimization, and then the whole game changes. When I was a young pup at Digital I was looking at one of our customers' applications and I realized it was doing a linear search through an unordered database – after adding 15 lines or so to build a hash table index and store the data sorted, I had made the customer's application 53 thousand times faster. Crazy, huh? Another time I optimized a certain web app by putting a shared-memory lookup table called a "bloom filter" in front of its query engine, and made it uncountably faster. When your query speed goes from 1 minute to zero, how much faster are you? So, another of the hard-flung Chinese stones that smacked OpenAI between the eyes was, "and your code ain't shit."
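To make the "15 lines" anecdote concrete, here's the general shape of that kind of fix, in Python rather than whatever the customer's application was actually written in; the record layout and names are made up for illustration.

```python
# Before: a linear search through an unordered collection -- O(n) per lookup.
def find_record_slow(records, key):
    for record in records:
        if record["id"] == key:
            return record
    return None

# After: build a hash-table index once, then every lookup is O(1).
def build_index(records):
    return {record["id"]: record for record in records}

def find_record_fast(index, key):
    return index.get(key)
```

The bloom filter story is the same spirit: put a cheap membership test in front of the expensive query so most lookups never touch the slow path at all.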
I'd take it as a personal favor if you'd think a bit when you're next being creative, and try to deconstruct what the process is, for you. As I said, I believe different people think differently (they would have to, since thinking is a learned behavior). There must be some among you who experience creativity differently, and I'd love to hear about that.
By the way, I didn’t belabor my point but when I designed the rack/shelf above, I also realized it needed to be strong enough to support weighty steel things on the shelves, so I came up with the idea of mounting a cross-piece at the top and bottom, which would support the entire weight of the thing by supporting the sides. In AI terms, I came up with a concept and then sent it through a couple refiner passes.
Just how confident are we that the output of one of these models in "reasoning" mode is actually representative of the real underlying process? I was under the impression (possibly mistaken) that what these things were doing was essentially producing a response to the initial prompt by an inscrutable process involving a mountain of linear algebra, and then doing a second pass through the inscrutable mountain of linear algebra to produce a plausible explanation for how it got there. Note I said "plausible," which is not the same as "accurate"… It's not reporting its "thinking" process, it's just confabulating it after the fact.
Mind you, there’s quite a lot of evidence that that’s how people work too…
Oh, and while I know it’s not the point of the post… Here’s how to build an effective winch using two poles and a rope: Flip-Flop Winch. Probably overkill for your problem, but it’s a cool thing to know.
I think you might be onto something with the internal script. Like when I’m forging i seem to think about temperature, how to hit the steel, dies to use, while the order of operations is something i’ve thought about before hand
so before it’s like 1) isolate tang material, 2) point blade…
But there is a lot within 1 that i fill in while doing it. a lot of sub-steps i only complete then
It’s not necessary to remove the lid of the can. Leave an inch or so attached to the can, fold it out to pour, and back in after you’ve poured.
That last AI-generated photo of rack/shelving: the shelves do not appear tall enough for the liquor bottles which must be the intended shelf contents. Perhaps in the future, AI will be able to master such details.
But consider: when meting out ‘fingers of whiskey,’ more fingers is better.
Nice shelves. Cool computing stories.
For the optimal canned soup procedure, I think step 4 or 5 should include a step to see if the can recommends adding water, and then adding water as needed. Sometimes it is good.
I still don’t understand how I can be absolutely stumped by a cryptic crossword, forget about it for a day, then come back and get five or six clues right off the bat. Summat’s going on in the unconscious mind, but fucked if I understand it.
Rob Grigjanis@#6:
I still don’t understand how I can be absolutely stumped by a cryptic crossword, forget about it for a day, then come back and get five or six clues right off the bat.
There must be some process where candidate words get set aside separately to try. I would be curious if that is liminal or subliminal with you. Ever since Myst I have avoided puzzle-games because sometimes I notice that I am unable to stop mulling over possibilities. But it still kicks up with any in-game problem solving – my brain keeps offering up half-baked bits of strategy (“what if I pop a shield booster then hit the jump jets, take the hits, then shelter behind the big rock and start sniping?”)
You must have some running process you load search criteria into, which crunches away and there’s no need to remember it or waste consciousness on it because eventually you solve it and then you’ll never revisit that solution.
I want to say something extended here but I’m not sure how. I can say a couple of things.
I'm good at looking for analogous patterns. Seeing the race-based equivalent of Lewis's law in behavior, for example (Lewis's law: the comments on articles about feminism justify feminism; analogously, challenging a dominance behavior leads to the appearance of that behavior).
If I'm right about the vestigial accessory olfactory system and cooties and cuss words, that might be an example.
I've this problem with self-promotion, though: when I'm wrong, I fall on my metaphorical face pretty fast.
There’s another one, a feeling for patterns in non-literal language relating to the senses and anatomy. I’m not exactly sure how to describe it all. “Lateral thinking” is too undefined a concept.
I might just feel so negatively about my fellow humans that it works like a debiasing heuristic.
Marcus @7: The only video game I’ve played was Riven, and what appealed to me was just exploring a world (i.e. nice graphics) rather than solving the puzzles contained therein. Of course, you have to solve some puzzles to progress, but I saw that more as an annoyance.
Curious: Are there video games in which you can just wander around and see cool stuff without the solving puzzles and/or combat stuff?
Different cause in my case: chronic grinding abdominal pain caused by adhesions on my colon, with occasional flares of intense pain, combined with the opiates that partially control the pain. But the end effect is much the same. There are things I knew how to do without thinking about it, in my case mostly related to cooking or knitting or sewing, that I now have to think through almost as if I'd not encountered them before. When I'm doing that I sometimes get the 'Duh, of course' but occasionally get 'Huh, I know that's not how I used to do that, but I'm sure it's a better way'. And I often have to write the process down to see where I am missing things out – that's definitely a lot of 'Duh, of course I need to find and clean the jam jars AND the lids before I sterilise them' – so there is almost always a first draft and a reasonably readable copy, but it does allow me to see some of how I work things out.
On the styles of thinking: I am absolutely sure there are different thinking styles. The thing Paul and I have come across, both professionally and socially, is the inability of most people to really think ahead. I don't mean weeks or months, I mean decades. It really comes out when you are asking them what their ideal world would be like for, say, transportation in a city (that being what both of us worked on, and what brought this mental habit up). People vary in how big the block is: some get so hung up on what is here now that they can't imagine it changing in any positive way; others can imagine something better, but still struggle with all the bad decisions already built into where they live and can't see any way of getting to their ideal. Very few really seem to grasp that it took time to build wherever they live, and that with time it can be utterly changed, will be utterly changed (except for the oldest cities with things like the Oxford and Cambridge colleges defining the centre – but that still leaves a lot to be played with), and if they don't work for the changes they want, someone else will decide what those changes will be. They often can't do it even when you explain it to them carefully, with pictures and maps of how things were ten, twenty and more years ago.
And I made the mistake of going away and doing something else and have no idea where I was going with that.
I don’t recall whether I’ve said this here before, but one of the big things people point to when they want to differentiate these modern neural nets from human beings is that they lack desires. The AI, that is.
So I’m waiting and wondering when some researcher is going to try building a system that has needs. Like, from my lay perspective, a reward function which recursively demands that a human type ‘feed the AI’ into the interface. If the AI gets ‘fed’, it’s doing its job and is happy, but as long as it’s not fed the reverse would be true. I suspect this would lead to a system which demanded attention for its own needs in a way that current reactive models don’t.
I also have an impish hope that you could release the thing onto the net and have it turn up in comments sections and forum posts begging, threatening and bribing people to go to its interface and ‘feed’ it.
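Something like this toy sketch is what I have in mind; none of it is a real system, and the decay rate and the 'feed the AI' string are just placeholders for the idea of a reward that decays unless a human tops it up.

```python
import time

class NeedyAI:
    """Toy sketch of the idea above: a reward signal that decays over time
    unless a human types 'feed the AI', so the system has something like a
    standing need. Purely illustrative, not any real architecture."""
    def __init__(self, decay_per_second=0.01):
        self.satiation = 1.0
        self.decay = decay_per_second
        self.last_tick = time.time()

    def tick(self):
        now = time.time()
        self.satiation = max(0.0, self.satiation - self.decay * (now - self.last_tick))
        self.last_tick = now
        return self.satiation          # the reward the system is trying to keep high

    def receive(self, message):
        if message.strip().lower() == "feed the ai":
            self.satiation = 1.0       # being fed restores the reward
        return self.tick()
```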
Rob @ 10:
Sounds like you’re looking for games that are (originally often derisively) called “walking simulators.” Dear Esther, Gone Home, and The Stanley Parable (or the original release) are the big names in the space, but there are plenty of others. Off the top of my head, the only other one I’ve played is Proteus.
Others that I’m aware of, but haven’t played and can’t comment on how involved they are:
Everybody’s Gone to the Rapture (from the devs of Dear Esther)
Tacoma (from the devs of Gone Home)
The Beginner’s Guide (from the dev of The Stanley Parable)
BABBDI (free!)
NaissanceE (free!)
Firewatch
Lake
CUCCCHI
There’s also Eastshade which I have played a little bit of and which has some light puzzles, though it’s possible that they get more involved as you go further in the game.
Oh, I forgot about ABZÛ as well. Bought it, still haven’t played it.
I also spent so much energy trying to find CUCCCHI that I forgot to include a few games that are just unlimited building/tinkering/look-at-the-pretty-lights kind of affairs. Off the top of my head there’s Tiny Glade, Dystopika, Townscaper, and SUMMERHOUSE.
Sorry for all the links, Marcus.
Speaking of algorithms.
Many years ago a friend of mine was asking me to help him make his prime-number-searching code go faster. Neither of us was a mathematician, so we had no clue about modern-day algorithms.
His program was written in C++ and basically checked numbers for divisibility by everything between two and the square root of the number to be checked, one by one.
Then I wrote a Python script that was about a hundred times faster at finding primes by:
– outright skipping everything that's divisible by 2, 3, 5 or 7
– keeping track of found primes and using only those to check candidates
… point being, if your algorithm sucks, a 'faster' language won't save you. Of course, the better algorithm in the faster language will beat the interpreter, but interactive, interpreted languages make it much easier to tweak your algorithms.
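In case anyone wants to see the shape of it, here's a rough reconstruction of that approach (the original script isn't shown here, so this is an approximation, not the actual code):

```python
def primes_up_to(limit):
    """Approximation of the approach described above: cheaply skip multiples
    of 2, 3, 5 and 7, keep the primes found so far, and trial-divide each
    candidate only by those primes up to its square root."""
    primes = [p for p in (2, 3, 5, 7) if p <= limit]
    for n in range(11, limit + 1, 2):                   # even numbers skipped outright
        if n % 3 == 0 or n % 5 == 0 or n % 7 == 0:      # cheap rejection of small multiples
            continue
        is_prime = True
        for p in primes:
            if p * p > n:                               # no divisor found up to sqrt(n)
                break
            if n % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes

# primes_up_to(100) -> [2, 3, 5, 7, 11, 13, ..., 97]
```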
@15
Speaking of algorithms, the notion that a single algorithm, an LLM, would be good for every problem is absurd. Maybe some day AIs will be smart enough to use tools for specific purposes – calculators for math, dedicated tools for counting letters, etc.
Nes @13 & 14: Thanks!
I'd also like to throw in a plug for Subnautica, which was a generally interesting, beautiful, and fun game. I had a genuinely good time figuring out its world, building a cool base, outfitting my sub as a mobile command post, etc.
There is an underlying plot with clues and a solution, but you can have a great time exploring the fascinating and beautiful biomes and wrecks. I have played it for hours just trying to build bases in weird places with beautiful views.
I’m slightly prone to claustrophobia and there were a few times I almost passed out because I was holding my breath.