This is something I had heard nothing about, until last week, when I stumbled over it as a result of a Google search (I was researching links for the Lituya Bay mega-tsunami).
It’s funny how much effort we put into building redundant, reliable systems (e.g., “cloud computing”) that scale and replicate well – yet they are subject to the simplest of attacks that can disable them.
I’m not afraid to read a book (if I can handle it), but I feel you need to know something of a field in order to know which books are definitive and represent a consensus.
[This is a second attempt at this posting; the first went way off into the weeds. This is a tricky topic!]
… is anyone actually surprised by this? Disappointed, sure – but surprised?
I paid a brief visit to my old friend Gary McGraw, who used to work in computer security with me, but has switched to focusing on AI applications in that field. He’s my “go to guy” when I have questions about AI, and I was surprised that his view of ChatGPT3, etc., is that they are toys.
Field-expedient repairs are sometimes expected. You haven’t got all the gear to make a proper fix, so you log a maintenance report saying something like, “I did not have the correct threaded bolt to replace it correctly, so I forced the wrong bolt onto the nut with a pipe wrench, just to hold the thing together until we got home.”
I’ve been struggling with a problem: “What happens if someone tells an AI to ‘code a better version of yourself’ and – whoosh – the singularity happens?”
One of the kids in the wargaming group went off on vacation in the Midwest and came back with a new game: Dungeons and Dragons.
This is going to be interesting. No, I lied, it’s going to be entirely predictable and fairly ho-hum. But I’ll be interested.