As you possibly recall, I suck at writing fiction. So I enlisted the help of ChatGPT.
I paid a brief visit to my old friend Gary McGraw, who used to work in computer security with me but has switched to focusing on AI applications in that field. He’s my “go to guy” when I have questions about AI, and I was surprised that his view of ChatGPT, GPT-3, etc., is that they are toys.
When I started noodling with Midjourney, I used Ronald Reagan and Marilyn Monroe as prompts, because I figured there were lots of pictures of them.
Over at Pharyngula, special jackbooted operative raven floated a dangerous idea: [pha]
I suppose Tucker Carlson wants one of the M&Ms to wave a swastika flag around or carry an AR-15 rifle or something. The right-wingnut patriot M&M.
If everything you read on the internet was written by AIs, would you care?
I’ve been struggling with a problem: “what happens if someone tells an AI to ‘code a better version of yourself’ and – whoosh – the singularity happens?”
One of the kids in the wargaming group went off on vacation in the midwest and came back with a new game: Dungeons and Dragons.
This is going to be interesting. No, I lied, it’s going to be entirely predictable and fairly ho-hum. But I’ll be interested.
A typical AI model like GPT-3 now contains beelyuns (billions) of decision-points – it’s a huge probability map of all of the potential answers that have generally been given before.
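If you want a feel for what a “probability map of potential answers” means, here’s a toy bigram model in Python. This is a vast oversimplification (GPT-3 has billions of learned parameters, not a literal lookup table, and the corpus here is made up purely for illustration), but the spirit is the same: given what came before, score the likely continuations and pick one.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus; real models train on a large chunk of the internet.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Build the "probability map": count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word appearing after `word` in the corpus."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

def sample_next(word):
    """Pick a continuation at random, weighted by those probabilities."""
    probs = next_word_probs(word)
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'rat': 0.25}
```

Scale that idea up by a few billion decision-points and a transformer architecture, and you have the general shape of the thing: it answers with whatever has generally been said before.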
I’m fascinated by how the AI models get better extremely fast once they are exposed to a few thousand users frantically working at them.