One of the aphorisms guiding tech companies is to ‘move fast and break things’. Rewards accrue to those companies that are first out of the gate with something new and so products are rushed out without being fully tested, the assumption being that any faults can be corrected based on feedback from consumers. In other words, the people who buy the early versions of the product serve as so-called beta testers, whether they want to be or not.
These situations rarely have life-or-death consequences. With most products, such as devices and apps, usually the worst that can happen is that users are annoyed or frustrated by the glitches but are willing to tolerate them as long as they get upgrades that purportedly take care of the problems.
But tech-based products are increasingly being marketed as solutions in areas where that tech culture attitude is not suitable, with sometimes dangerous consequences. I wrote recently about AI systems being used to try to treat the problem of loneliness by acting essentially as therapists, sometimes giving dangerous advice out of misguided attempts at being supportive. This can have tragic real-world consequences, such as one case in which a ChatGPT chatbot urged a teen to kill himself. The family is now suing OpenAI, creator of ChatGPT.

