I did most of my growing up in the 70s, when telephones were a mysterious and interesting thing and long distance calls were expensive.
I remember the early bag-brick cell phones (I had a friend who used to ostentatiously put his on the table in restaurants so everyone could see it) and I remember how unreliable the early cell technology was. Around the same time, I was expanding my life into networking – in 1981 I was the “mjr” on the internet, until Marcia J. R. at the Wilmer Eye Clinic also started using that, and I emailed her (naturally) and asked her to stop. Johns Hopkins had internet connections (except back then it was ARPANET) because of its military research at the Applied Physics Laboratory (nuclear weapons) and its long relationship with Aberdeen Proving Ground. So, when I discovered email, it was a magical thing: fast, mostly reliable, simple. It worked and has kept on working ever since.
At my first job, I missed ARPANET access and arranged to set up a uucp connection to the University of Maryland, which we also ran ‘B’ news over – the USENET bulletin board/broadcast system. That was a bit less reliable than ARPANET: uucp messages got transmitted over the blazing fast 4800 baud modem we had assigned to the task (a Telebit Trailblazer, capable of 19.2K if it was talking to one of its kin, which it was, but for compressed traffic it was really 4800 baud) – but the email worked. I could email one of my friends who worked at UMD, “hey, anyone for lunch?” and the message would get there in time for us all to make it to the designated food place. It was mostly reliable, though sometimes there would be mysterious break-downs and a system administrator (i.e.: me) would have to figure out what was jamming the queue, burp it, and clean things up. “Jamming” could be as simple as the time a researcher tried to send a file that was an entire megabyte and everything ground to a halt, or it could be as complicated as the discovery that Telebit Trailblazers would hang up the connection if you sent an email containing:
+++ATH0

on a line by itself. I’m sure at least one of you remembers that: it’s the Hayes modem instruction to “hang up” – the modem is only supposed to disconnect if it sees that in the command channel, not the data channel; Telebit had to send everyone a firmware update, which involved plugging the modem into a PC and running a special program to download new firmware to the modem.
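For anyone who never fought that battle: Hayes-compatible modems watch for an escape sequence that drops them back into command mode, and the spec requires a guard time (a pause in the data) around the “+++” precisely so that ordinary traffic can’t trigger it. Here’s a toy sketch – not Telebit’s actual firmware, just an illustration of the bug class – of what goes wrong when a modem pattern-matches the data stream without the guard-time check:

```python
# Toy model of the bug class (NOT real modem firmware): a Hayes-style
# modem drops to command mode on the escape sequence "+++", and "ATH0"
# means "hang up". The Hayes spec requires a guard time of line silence
# around "+++"; firmware that simply pattern-matches the data stream
# will hang up on ordinary data -- such as the body of an email.

ESCAPE_AND_HANGUP = b"+++ATH0"

def buggy_modem_hangs_up(data: bytes) -> bool:
    """Time-independent scan: any '+++ATH0' in the data kills the call."""
    return ESCAPE_AND_HANGUP in data

email = b"Subject: neat trick\r\n\r\ntype\r\n+++ATH0\r\non a line by itself\r\n"

print(buggy_modem_hangs_up(email))            # True: the connection drops
print(buggy_modem_hangs_up(b"hello, world"))  # False: harmless data
```

The fix Telebit shipped amounted to honoring the guard time, so the sequence only counts when it arrives surrounded by silence rather than embedded in someone’s mail.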
But, it all mostly worked. It was reliable enough that you could schedule lunch or make friends online (as I did) and even marry one of them (as I did). I’m tempted to argue that if a technology is not reliable enough to flirt over, humans will find something else, because the flirt must go through. But I’m not sure if that’s true; I am imagining an ancient Roman flirtation involving slaves running back and forth across the Palatine, sweating and cursing under their breath – they carry news that must get through.
My first pager came in 1985; I was “on call” at a major hospital’s data center and we had a rotational pager that one of the systems administrators had to take home each night, in case the paper roll in the printer jammed, or something really dramatic happened. Mostly, though, things worked. But I recall, at the time, thinking that the pager had its own completely different feed-architecture for moving the data around, and that was redundant – it ought to have been collapsed into a common mechanism with special cases for applications built atop it. I guess it was around 1987 or 1988 that I was starting to think architecturally. I was growing annoyed that there were so many ways of doing basically the same thing, each of which had its own separate legerdemain* – things had not exploded, yet; that happened in 1991/1992 when browsers hit the scene. Ah, browsers! Remarkably bandwidth-inefficient, badly-coded applications, with a host of embedded security flaws in their design (the idea of having your machine safely render someone else’s data is fundamentally stupid). And the explosion that followed was interesting to live through.
Do you remember where you were on August 9, 1995? I do. I was in the Dallas, TX, data center of Sterling Software working on a consulting gig with Kent L., and there was a big monitor with CNN showing the ticker for Netscape, which had gone public that morning. As Netscape’s stock screamed up and up, Kent said, “looks like the beginning of a bubble” and I said, “it’s beta-test code. That whole thing is based on beta-test code! This is crazy!” Kent said, “everything will change.” He was right; that was the beginning of what I call “the endless beta-test”: reliability and predictability mean nothing compared to the size of your user base and shoveling buggy feature-sets out the door. I thought, at the time, that having systems that were good mattered; it turned out that what really mattered was having systems that were big and ubiquitous. (I still think Facebook looks like shit, btw)
That was the shot below the waterline for systems architectures. Since features and user base were the new coin, there was nothing discouraging vendors from dividing the market so as to conquer corners of it. That’s how and why we wound up with AOL Instant Messenger, which couldn’t talk to Netscape’s instant messaging, which couldn’t talk to email (which was fast and reliable enough that it could have served as a common routing architecture for all those messages) – system architecture was dead. The lurking miasma that was about to come and kill systems, period, hadn’t metastasized yet. But it was starting.
Today I carry around an expensive supercomputer that’s light, wafer-thin, and connects to the internet via one or two kludgy mechanisms that are designed not to be good, but to enable my activity to be captured by either Apple or Verizon and shared with their marketing partners or the FBI. My supercomputer runs shitty application protocols, none of which have a sensible architecture, which are designed to track my activity and sell traces of potential economic interest to Amazon, or Google, or whoever is selling fucking spam. What I am saying is that spammers won the internet: the entire system has been built and re-built to carry their useless bullshit. If you think about the fact that 95% of the internet is spam and banner ads, you cannot avoid the conclusion that we’d have 19 times more available bandwidth if we just got rid of that crap entirely. The same applies to the CPUs in our devices – CPUs that are running badly coded, inefficient subroutines embedded in the apps we run, that track our dwell time on images and report back to marketers. I have an app that exists as a call-out in my phone’s browser, that pre-parses every bit of HTML my phone gets so it can edit out spam – oh, boy, I bet that would be fucking fast if I didn’t have a goddamn supercomputer to throw at it – and, what a stupid, stupid architecture: get the data, hand it off, then read it back and render it for the user. It gets even better when (depending on your anti-spam architecture) it hits a page that hasn’t been seen before, and ships the HTML off to a cloud server somewhere, which scrubs it and updates the AI’s knowledge-base, then sends it back – however many trips back and forth across the network, so you can avoid having the internet work the way the assholes at Google and Facebook and whatnot want it to work.
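That “19 times” is just the ratio of junk to useful traffic – a back-of-envelope check:

```python
# Back-of-envelope check on the bandwidth claim: if 95% of traffic is
# spam and banner ads, the junk is 19x the size of the useful 5% that
# remains -- so clearing it out frees 19 times the bandwidth we keep.
spam_percent = 95
useful_percent = 100 - spam_percent             # 5
junk_to_useful = spam_percent / useful_percent  # 95 / 5
print(junk_to_useful)  # 19.0
```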
And, if you are too successful at avoiding the bullshit, they’ll just hold the internet hostage until you turn your anti-spam off. The current trend among stupid sites (including some of the big ones) is that they’ll try to guilt-trip you if you’re running an ad blocker. They decided on a fundamentally stupid business model (“hey, let’s give our whole publication away and just ask people to throw money at us!”) and now they want to guilt-trip you or hold your internet hostage until you read their useless bullshit. No, I don’t want to refinance. No, I do not need children’s daycare; I don’t even have kids. Yes, I know your ads would be better “targeted” if I looked at all of them, but instead I am sitting here thinking “I know why those ISIS guys get mad enough at modern civilization that they throw people off buildings.” It’s extreme but it’s an important message, too.
So, I watched messaging (email, txt, cell phones, etc) bloom from a bunch of small technologies that mostly barely worked into massively successful technologies that mostly barely work. Here, where I live, it’s Verizon country and I can’t get broadband that isn’t mind-blowingly expensive, because the FCC has decided that it can trust Verizon not to gouge me even though Verizon has a bandwidth monopoly. Last night my internet went out twice – which is uncommonly shitty for the $280+/month I pay for it. Today, my email (which is hosted at a cloud hosting service) blew up and I can’t reach support because: email is down. The other day I replied to a message on the FTB super secret backchannel and it bounced because Google apparently decided that maybe I am a spammer, after all. I’ve only been using this account (and never sent spam, I swear!) for 20 years, but I can’t expect an AI that was trained yesterday to understand that. We, the collected nerds of the world, built a system that is massive and fast but unreliable and full of garbage when, if we had built a system that was well-architected and appropriately layered, it would not have needed to be anywhere near as massive or fast or expensive.
Meanwhile, I never answer my land line and my cell phone gets 2 or 3 “unknown caller” calls a day. Both of those systems have substantially lost their usefulness for me. I know Verizon could track the unknown callers, but the fact is that the unknown spam callers are more important to Verizon’s bottom line than I am. And, since “social media” has the same problem – it’s a conduit for ads – it is structurally impossible to make it anything but garbage; it’s supposed to be garbage and it always will be. I struggle to keep email useful but I wonder if it’s worth the battle. I don’t answer my phone or check the messages, and I throw 90% of my surface mail away. What a brave new world of shit the marketing people have built for us!
Anyway, if you try to email me and don’t get a reply it’s because my replies are bouncing.
Rob Pike, back in the late 80s, said “distributed computing is when a system you’ve never heard of before is able to take you offline.” Back in the 80s, when I heard that, I thought, “damn skippy, Rob!” But last time I checked, Rob’s working for Google, which is the ultimate instance of that distributed glarp-wad young Pike was warning about.
*Legerdemain, from the French léger de main, literally “light of hand” – a light touch, sleight of hand
PS – you kids get offa my lawn. Oh, wait, sorry, those are cows. Never mind.