One of the aphorisms guiding tech companies is to ‘move fast and break things’. Rewards accrue to those companies that are first out of the gate with something new and so products are rushed out without being fully tested, the assumption being that any faults can be corrected based on feedback from consumers. In other words, the people who buy the early versions of the product serve as so-called beta testers, whether they want to be or not.
These situations rarely have life-or-death consequences. With most things such as devices and apps, usually the worst that can happen is that the users are annoyed or frustrated with the glitches but are willing to tolerate them as long as they get upgrades that purportedly take care of the problems.
But there is now a growing number of areas where tech-based products are being marketed as solutions to problems for which that tech culture attitude is not suitable, with sometimes dangerous consequences. I wrote recently about AI systems being used to try to treat the problem of loneliness by acting essentially as therapists, sometimes giving dangerous advice out of misguided attempts at being supportive. This can have tragic real-world consequences, such as one case where a ChatGPT chatbot urged a teen to kill himself. The family is now suing OpenAI, the creator of ChatGPT.
Adam [Raine], from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.
It also offered to help him write a suicide note to his parents.
OpenAI’s response clearly shows the tech mentality at work, where any problems that surface while the product is in use are corrected by upgrades, rather than prevented by exhaustive testing before release.
In a blogpost, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations. Adam and ChatGPT had exchanged as many as 650 messages a day, the court filing claims.
Jay Edelson, the family’s lawyer, said on X: “The Raines allege that deaths like Adam’s were inevitable: they expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, Ilya Sutskever, quit over it. The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86bn to $300bn.”
OpenAI said it would be “strengthening safeguards in long conversations”.
“As the back and forth grows, parts of the model’s safety training may degrade,” it said. “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
But that really is not good enough.
Another area where we see this encroachment of tech culture where it does not belong is with Tesla, which seems to see itself as primarily a tech company rather than a car company. Its CEO Elon Musk seems to view the car as a mobile computer, and thus makes grandiose claims and rushes the product out without the rigorous safety testing a car company should do (as was pointed out in an earlier post), since for a tech company being first is the paramount concern. For Musk, having pleasing design features and sleek elegance (such as retractable door handles) seemed to be more important than safety, as has been seen in accidents where the cars catch fire and their doors cannot be opened. His personal design preferences also seem to have resulted in the ugly monstrosity known as the Cybertruck (it boggles the mind that anyone would think it has a pleasing look), even though the result has a host of safety concerns that have resulted in the vehicle not being street-legal in most of Europe.
It is interesting to compare the philosophy of Tesla with that of Apple. The latter’s co-founder Steve Jobs was also obsessed with design, caring deeply about how Apple products looked, and that mindset is carried on to this day, down to even the packaging that is used, with a resulting clean and minimalist look. But Jobs emphasized that one should start with what you want the user experience to be and then find the technology that meets that need, rather than finding ways to shoehorn what you think is exciting new technology into a product. Thus the primary goal was that the product work well for the user, and the attractive design features were then overlaid on that.
When Jobs returned to Apple as an advisor in 1997 after a seven-year absence, the company was not doing so well. After his presentation at a developers’ conference, he was told by an audience member, in quite an insulting way, that he did not know what he was talking about and that certain decisions he had made were wrong. The speaker also expressed skepticism about what he could possibly bring to the company. Rather than respond angrily (and Jobs was famously prickly and prone to anger and even abusive behavior with people in the company who did not meet his exacting standards), Jobs gave a thoughtful response that I think any tech company would benefit from hearing.
Apple was not always the first on the market with the idea for a new product or feature, the main exceptions arguably being the graphical user interface and mouse of the Macs (which Apple famously adapted from earlier work at Xerox PARC and elsewhere, but was the first to bring to a mass market) and the multitouch screen of the iPhone. What it mostly did was take ideas that were generated by others and make them better, even if its own product came out later. The goal was to have products that worked ‘right out of the box’, and Apple mostly succeeded. As an Apple user myself with a Mac laptop, iPhone, and Apple Watch, I can testify to how nice the products look, how intuitive, trouble-free, and easy to use they are, and how seamlessly they work together. When my daughter gifted me the Apple Watch last year, I was impressed with how, with minimal input from me and in a matter of minutes, it synchronized all its data with that on my computer and phone and was good to go. The downside is that Apple products are more expensive than their competitors’ and that once one gets locked into the Apple ecosystem, it is hard to break free.
Apple had a massive, expensive, decade-long project called Project Titan to develop its own electric autonomous vehicles, but it pulled the plug on it in 2024 because of the difficulties it encountered. It may be that it simply could not meet the standards it has for its tech products given the very different requirements of cars. The Tesla model of ‘get out quickly with what you have and fix things later’ may not have fit with Apple’s own culture of product development, which requires that things work well ‘right out of the box’.
One of the biggest differences between Jobs and Musk was that Jobs, generally, understood that he wasn’t primarily a technical person and was willing to listen to the engineers under him. He was a lot more ‘I want something that can do this, what’s the best way we can do it’ rather than Musk’s ‘I want something that does this and I’ve already figured out how we’re going to do it so just do as I say’. Jobs, as you note above, certainly had a lot of his own issues, but he was mostly fine with being an ‘ideas man’ and letting other people handle the implementation details; it’s part of how he and Wozniak got as much done as they did, because Wozniak very definitely was the technical genius behind early Apple.
My understanding is that a good chunk of SpaceX’s successes, and even Tesla’s, was the result of having people working there whose primary job ended up being distracting and herding Musk so that he didn’t micromanage things to death and people could do their jobs. In at least some cases that involved a few people at Tesla actively ignoring Musk’s orders and continuing to research radar-based solutions anyway, despite Musk’s insistence that the self-driving system would be optical-only.
It’s not just the ‘move fast and break things’ attitude; it’s also the ‘I am an unparalleled genius and know what I am doing better than any of you peons’ attitude that’s the problem. And that’s part and parcel of the U.S. political ‘don’t trust anybody who is actually an expert’ anti-intellectualism.
This is what I had said in your original post: “There will also be a small subset of people who will commit some act of harm (either on themselves or on others) because the ChatGPT equivalent told them spectacularly harmful things.”
Anyone and everyone could have predicted this, and yet the tech companies will make a calculation along the lines of: profit vs. would we lose this case if someone sues us (yes)? How much in fines would we have to pay (juries might fine us quite a bit)? Would we reduce the fines on appeal (yes, we already bought the judges)? Would anyone go to jail (ha ha, good one)?
and then go ahead and do it. The level of psychopathy is unbelievable.
I think we have to change the way we report these -- it’s not “the tech company”, it’s not “AI”. It is Greedy Asshole (CEO), Greedy Soulless Scum (CEO), Nazi Hypocrite Cowardly Psychopath (CEO), Blood-Sucking Vampire CEO, etc., and their sycophants who are doing this -- we need to put names and suitably rude adjectives in media coverage.
Till there is actual artificial intelligence, we should not cover these events as “AI” did this and an inanimate “company” did that -- name the people.
The problem was never the new technology, but humanity. That is why technology will soon be beta testing the new humanity. /s
I feel like this fascination with a tech company mentality is similar to the fascination with management in the 1980s. To put it briefly, the idea was that management experience was this magical thing all on its own, and that you could reasonably be expected to go from being a shift lead at a fast food restaurant to running a factory floor. Of course this was not the case.
My takeaway from things like this is that they showcase how ignorant the people making those claims are about the actual work their companies do. And to the extent that they can make it work anyway regardless of how ridiculous the idea is, it shows they’re useless parasites and their company would be better off without them. Or if they own it, everyone involved would be better off if they were hands-off owners.
@lanir:
I knew someone who had a pretty standard rant about that…
His comment was that the concept of ‘management’ as something completely independent of what was being managed, and the whole idea of an MBA, really dates back to WWII and the way government/military contracts were handled. There was enough specialized paperwork that it became common to hire specialist managers specifically to be able to handle that contract paperwork, and that needed to be done no matter what the company was producing, as long as it was going to ‘the war effort’, which a lot of things were (and a lot of things that weren’t were still being produced by companies that made things that were going to the military, so they still needed those people).
And then that concept -- that you needed specialized management to deal with contracts and external paperwork, people who didn’t need to actually understand what the company was doing, just who it was doing business with -- metastasized, in large part because all the people who had got training in handling that during the war didn’t want to become less important as the number of government contracts decreased. So essentially you had an entire cadre of middlemen whose sole purpose was to be ‘the people who understood the complex parts of business’, and they managed to convince enough other people that they were actually necessary (as well as actively protecting their jobs by making things too complicated for anybody else to understand) that it’s next to impossible to get rid of them anymore unless you’re too small a business to ‘need’ one.
And the dongles. Don’t forget the dongles.
😉
@2: In Liebeck v. McDonald’s Restaurants (the “hot coffee case”) the jury found McDonald’s conduct (continued sale of excessively hot coffee in spite of multiple incidents of scalding injuries) so egregious that it awarded $2.7m in punitive damages. But in the US punitive damages are generally limited to about four times the compensatory damages (except in cases of exceptionally egregious conduct), and in many jurisdictions outside the US they are not allowed at all. The argument seems to be that “excessive” punitive damages are forbidden by the due process clause of the constitution. It seems unlikely that an award large enough to deter Tesla could be made, and even if it was I fear it would be overturned by the Roberts court.
For an earlier case of a high punitive damages award ($125m, reduced to $3.5 million by judicial discretion), see Grimshaw v. Ford Motor Co. Wikipedia makes it seem as if the recklessly unsafe design of the Ford Pinto is disputable in fact. See also Hardy v. General Motors ($50 million in compensatory damages and $100 million in punitive damages).
Wikipedia tells me that US jurisprudence doesn’t include corporate manslaughter. Wikipedia also tells me that the US has a “collective knowledge doctrine” model of corporate manslaughter. I’m not sure how to reconcile these two statements.
I want a self-driving car. Driving is a tedious chore and I’m not as good at it as I’d like. (I say this not because I believe myself to be a bad driver, but because I’m (a) human and (b) an engineer with expertise in the area of human error as it pertains to the operation of highly hazardous processes.) I therefore would love it if someone could provide me with a vehicle that would just do it for me, at a higher degree of safety than I can manage myself.
About fifteen years ago, buoyed by the confident predictions of the tech bros who promised SDCs “real soon now” and misled by their success at delivering things like laptops, smartphones, drones and other toys, I confidently predicted to my then-girlfriend that by the time our then-unborn children needed to learn to drive, they wouldn’t have to bother, and furthermore by the time THEIR children needed to learn to drive it would be illegal. I foresaw (and indeed still foresee) a world where at some point SDCs will be 50% of the cars on the road but only responsible for 1% of the accidents, injuries and fatalities. You can argue with the specifics of those two numbers and how soon it’s going to happen, but when we reach a certain point it will become unarguable that no human should, any longer, be permitted to barrel along in public in two tonnes of steel under their sole and direct control, and that only machines should be allowed to do so. Fifteen years ago I figured full self driving would probably be here by now, and would be more or less ubiquitous ten years from now, with the “no more humans at the wheel” maybe 30 years behind that, just possibly within my lifetime.
Now… now I don’t believe I’ll ever see a full self driving car as I understand it -- which is to say a car like the one I have now, but that will simply respond to a typed or spoken destination and take me there without my input. Something that will let me read or sleep on the way to or from work. That is to say -- I don’t think the tech bros are going to crack it fully even for another fifty years. I hope I’m wrong.
But it’s like fusion power -- for all the time I’ve been hearing about it, it’s been ten years away. Because, like fusion power, although it seems simple in principle (the sun runs on fusion and you can explain the concept to a bright child, and some truly dangerous morons can nevertheless pass a driving test), it turns out that in practice it is fractally difficult.
Strap in: lesson incoming. What follows is simplified.
As I said, I’m an engineer. There’s a thing I have to do when designing a plant that handles high hazard materials. It’s called a Layers of Protection Analysis. The idea is that if something happens that could be bad, you first work out how bad it could be. Loud noise? Notable. Cut finger? Minor. LOST finger? Serious. Lost LIFE? Major. Several lives? Extremely serious. Dozens? Catastrophic.
There’s a principle which states that as you move up the chain of seriousness, the frequency at which these events are acceptable decreases. It’s broadly acceptable to scare the shit out of someone every ten years or so. It’s broadly acceptable to cut their finger every hundred years, maim them once per thousand, kill them every ten thousand and kill a crowd every hundred thousand years. Factors of ten for each level.
There are industry standard expectations of how often various protective measures will fail -- cheap valves once every ten years, particularly good valves once in a hundred years for example.
So if I have an event (backflow of a hazardous substance into a utility supply, say) that, if I don’t do something about it, is going to potentially break your leg and I expect it to happen pretty much once a year, I can’t just put one valve as the only layer of protection -- I need more. I need to specify in the design at least three INDEPENDENT layers of protection -- things that inherently CANNOT fail at the same time for the same reason. Or if I’m going to use fewer layers, then one or more of them would have to have a failure rate of one in a hundred years, not one in ten.
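If it helps, the factor-of-ten arithmetic in the last couple of paragraphs can be sketched in a few lines of Python. This is purely a toy illustration of my own (the function and the numbers are made up for this example; a real LOPA worksheet is considerably more involved and tracks each layer's probability of failure on demand separately rather than assuming they're all alike):

```python
def layers_needed(initiating_freq, tolerable_freq, pfd_per_layer=0.1):
    """Toy LOPA arithmetic: how many independent protection layers, each
    failing with probability pfd_per_layer when called upon (a 'once in ten
    years' valve is roughly 0.1), are needed to bring an initiating event
    frequency (events/year) down to a tolerable target frequency?"""
    reduction_needed = initiating_freq / tolerable_freq   # e.g. 1000x
    per_layer_reduction = 1.0 / pfd_per_layer             # e.g. 10x per layer
    layers, achieved = 0, 1.0
    # Keep adding layers until the combined risk reduction is enough
    # (the small tolerance guards against floating-point round-off).
    while achieved < reduction_needed * (1 - 1e-9):
        achieved *= per_layer_reduction
        layers += 1
    return layers

# The example above: a leg-breaking event expected roughly once a year,
# tolerable only about once per thousand years, needs a thousand-fold
# reduction -- three standard 'once in ten years' layers, or fewer if
# you pay for better ones.
print(layers_needed(1.0, 1e-3))                      # -> 3
print(layers_needed(1.0, 1e-3, pfd_per_layer=0.01))  # -> 2 with 'once in a hundred years' layers
```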
I’d love to see the LOPA for any claimed self-driving car. You can’t argue that the potential consequence of failure is anything less than catastrophic, and the frequency of dangerous events seems like at least one per year would be generous. On that basis then, you’re going to need either
(a) several provably extremely reliable layers of protection or
(b) about six completely independent layers of protection with no possible common mode failure.
I am not an expert in self-driving cars, but you’d have a hard time convincing me that they have even two. Currently, what information is available seems to consist of cars having a software problem and failing catastrophically, sometimes killing multiple people, when that single layer of protection fails.
If I suggested building a chemical plant that negligently, I’d be lucky if all that happens was that I was fired. I’m baffled why a far more dangerous and automated system is allowed anywhere near the public, yet it appears on-road testing is ongoing in the USA. I’m just happy that legislation is currently stopping it from happening in most of the civilised world, but I worry that Trump’s aggressive attacks on countries that don’t kowtow to US tech oligarchs may weaken those defences.
I’ll buy a self-driving car when they’re proven safer than humans (they’re not) AND when liability in the case of failure is clear and not with the user (it’s not). Until then, no thanks.
Even in business terms, being “first to market!!1!” is highly overrated.
How many here are reading this with the Mosaic browser on an Osborne laptop?
@Pierce R. Butler:
Well, Marc Andreessen was one of the people who developed Mosaic at NCSA, and he and at least one of the other NCSA people went off to found Netscape (mostly because the NCSA mission meant they couldn’t really commercialize Mosaic). Then Microsoft bought a licence to Spyglass Mosaic to use for their first version of Internet Explorer, agreed to pay a percentage of their revenues, and then gave it away for free so they didn’t have to pay anything, and basically put the final nail in the coffin for Mosaic itself. After Microsoft even made Internet Explorer briefly available on Unix systems like Solaris (but only the versions that didn’t run on x86 processors), Netscape basically realized that they weren’t going to be able to sell this to make money either and open-sourced the Netscape code as the Mozilla project, which eventually produced Firefox.
So, while Mosaic doesn’t exist anymore, both of the two biggest modern browsers started as Mosaic derivatives.
jenorafeuer @ # 10 -- good points!
Since tech seems driven at least as much by individual careerism as corporate calculation, more will probably aspire to become the next Marc Andreessen than to beware of becoming the next Adam Osborne.
We have always underestimated the skill that humans need to develop to drive a vehicle like a car — perhaps because it involves innumerable other skills we developed early in life which we never realised we were developing. When we try to explain exactly what’s involved to a simple-minded state machine with a von Neumann architecture on four wheels, it turns out to require sophisticated analytical skills to be developed first — a.k.a. difficult.
… Enter the arrogant CEO afflicted by over confidence, entitlement, and greed.
The end result is a childlike car that gets itself into trouble, bleats “This isn’t working: you fix it” and just hands the now rapidly developing fatal situation back to its inattentive and distracted parent with one second to go.
To echo a point made above: the safety protection feature must be highly reliable, but in this case it’s a human who isn’t paying attention, has never been trained to supervise a self-driving car, and who, according to the car’s own marketing, is a worse driver than the car itself.
Because we already accept a rate of catastrophic safety failures from personal motor vehicles that is far higher than we would accept in almost any other context.
Waymo runs right by me with multiple cars many many times a day. I walk through my neighborhood about an hour every day and we are suffused with these vehicles.
They are NOT the cars I regularly and systematically notice doing sudden freaky stupid evil screwed-up drug-addled ethanol-blurred *murderous* things. Perhaps human beings -- those squishy pink and bloody OSes that are operating these cars -- have way more points of failure than #8 @sonofrojblake thinks they do.
I grew up in Silicon Valley and all three of my brothers either learned how to write software or are engineers, or both. “Test-fix-test” and “THINK” were engineering principles promulgated by Hewlett and Packard. Yes, that’s right, my brothers and parents met those guys -- there really were two of them, those guys. “Move fast and break things” is venture-capitalist-talk. How Silicon Valley started out, and how it ended up, *are not the same*.
I would assert that underneath what Mano is actually talking about is capitalism. The costs of driving insurance in this country are high enough. Insurance companies are refusing to subsidize flood zones and fire zones and hurricane zones nowadays. In terms of sheer money, they don’t have to amortize self-driving if they perceive it in their interest not to do so.
Or, in a non-capitalist POV our legislatures could get their hands out of those companies’ pockets and demand the information they say they don’t have to properly regulate? Gosh golly jinkies.
Anecdotes are not data. But I have looked at the reported data and it’s impressive. See later.
For information, when doing a HAZOP or LOPA, if a layer of protection is “a human will sort it”, then the assumed failure rate for that layer is -- it’s DEFINITELY going to fail at least once a year, even in the context of sober, trained, motivated operators. Human error happens. You have to design automated plants on that basis. It’s too late to design the roads and the cars on them to that standard, but I can dream.
My point though is that if you, a human, fail while operating a motor vehicle, you can be held personally and singularly accountable -- YOU were driving. WHEN (not if) a SDC fails, fatally -- who gets banned from driving? Who goes to jail? Who, not to put too fine a point on it, suffers, apart from the victims? Dmitri Dolgov? Tekedra Mawakana? I doubt it. SDC makers have a hill to climb convincing the public that this is OK. When a kid is killed by a car, until now there’s always been someone to blame. But we’re heading for a world where “kid getting killed by a car” is something that just… happens every now and then, just like “kid getting killed by cancer”, and we just have to get used to it. The car won’t be crushed to make sure it doesn’t happen again. A company will “suffer” a fine, probably, or an interruption to its business (Cruise aren’t doing so well), but that drags in a lot of people who didn’t necessarily have anything to do with the error that caused the crash.
And the thing is, as long as “kid getting killed by a car” keeps happening BUT happens significantly less frequently with SDCs than it does with human driven cars, I’m OK with them. But that’s only because I’m an engineer who understands statistics. If the general population thought like me there’d be no point running lotteries because nobody would buy a ticket. It’s a tricky one.
What does the actual data say?
Well, I’ll admit I’m impressed. Tesla do their level best to cover up what happens with their vaunted “autopilot” SDC attempt and there’ve been quite a few high-profile nasty failures. Waymo, on the other hand, seem to be aiming for transparency and… “Waymo’s vehicles have seen 84% fewer crashes with airbag deployment, 73% fewer injury-causing crashes, and 48% fewer police-reported crashes compared to human drivers.”.
(Source: Forbes https://www.forbes.com/sites/bradtempleton/2024/09/05/waymos-new-safety-data-is-impressive-and-teaches-a-lesson/ )
From that source: “Waymo earlier asked SwissRe, a large reinsurance company, to do a risk analysis of their results. This did involve judging fault because fault would trigger insurance liability claims. Impressively, this report found no liability for Waymo from any of the injury crashes, meaning they were not at fault. (This study covered 4 million miles, not 22 million.)”
That’s pretty persuasive.
I still look forward to the glorious self-driving future, if only because it will usher in a golden age of motorcycling, when SMIDSY (sorry, mate, I didn’t see you) is a thing of the past. I might even allow my kids to ride a motorbike if that happens… no way I’d let them in a world like this one. As I never tire of saying, if I learned one thing in a month on the major trauma ward, it was “don’t ride a motorbike”.