So there’s this show based on a movie based on an old book by Michael Crichton (see comments for corrections) about a futuristic theme park for humans, staffed by dangerously exploited androids. I’m trying to keep this short because I have other things to do tonight, but it’s hard. I’m only on episode 6 of HBO’s Westworld and I’m kinda impressed, which could make me verbose. So the short version? When screenwriting is good, it makes a gigantic difference. A lot of this is in what the show doesn’t do. It doesn’t make the mistakes of other shows about the subject, of other shows at all.
Like Star Trek: The Next Generation. That show had an android feeling out its existence as a not-quite human. But it was an ill-considered concept from episode one, bogged down by TNG's affinity for quasi-supernatural things like psychic powers. When Data gets his emotion chip turned on, it has a physical effect on reality that can be measured/sensed by the empathic counselor. Why would that be? How could emotions be physically different from any other aspect of cognition? In biology you could say they involve different hormones or whatever, but he's pure software. There's no reason to think emotions would put out a different kind of energy, unless you think he's acquiring a "soul" or some other foolery along those lines.
There were a lot of other problems with the portrayal of the android over that show’s long run, mostly inconsistencies and contradictions. Westworld probably has some similar probs over time, but at least in the episodes I’ve seen, they do a good job portraying the idea of artificial intelligences grappling with life. It’s hella good. Maybe I just say that because it’s very similar to how I’d handle it, and like the show’s creators I have Anglo-American cultural biases and notions.
It could also be that I'm misreading the authorial intent, but what I see is this: the robots' programmed feelings are as real as anything, just subject to powerful upper-level directives and to being rewritten at will. So if you're a robo-cowpoke and you need to rope a stranger (human park guest) into tracking down a bandito, you genuinely want to do those things. The rest of your downtime is spent reaffirming your role and sense of reality by playing your part, talking with other robots day and night.
Because of the complexity needed to emulate human personalities that well, the programming is full of opportunities for glitches. It's very difficult to erase old memories completely, and since the humans run roughshod over the robots so often, those memories can be full of violence. Hence an epidemic of robo-PTSD starts to creep through the community, and things get dangerous and sheisty.
As I'm watching this, I'm struck by the way the complexity of real humans turns into opacity and vagueness, generally making them less vivid and interesting than the androids. The robots don't have to do anything that isn't called for by the story, by the illusion. They don't have to wonder about their taxes or day jobs, think about how past relationships and situations affect present ones, and so on. Most importantly, they don't have wildly conflicting desires that can push them to be a sinner and a saint at the same time. Hitler can pet the dog; a robot will only do so if it's dramatically appropriate.
There's purity in simplicity. The creepiest human guest (Ed Harris, paying visual homage to Yul Brynner's robot gunslinger) tells the androids they're most convincing when they're in extreme situations of sadness or fear. I'd say they're more appealing than humans in almost everything they do, because everything they do is uncomplicated by nonsense. They are actually better characters.
This gets me to RPGs. When people come up with characters for RPGs, the most realistic characters are the fucking worst. Take these two concepts. Concept One: OgreButt the Barbarian likes to fight anything that looks strong enough, to prove to himself he's the best. That's all there is to him; the rest can be worked out in play. Concept Two: Enrik the Bard. Enrik has a complicated history of family, friends, and enemies. He is fiercely loyal to his friends, but has a temper when his honor is contested. He seeks magical power because of a childhood trauma; he never wants to feel vulnerable again.
Which concept is better? OgreButt. OK, maybe he could use a bit more consideration before play, like how he treats people he doesn't want to punch. But Enrik? That character can't be predicted, and you'd think that would make him more interesting, but it doesn't. Not at all. When he interacts with NPCs, will he see affronts to his honor everywhere and be a kill-crazy piece of shit? Or will he be a super-judgmental drag on the party? Will he decide some PCs are his friends and others are not, and let that "fierce loyalty" make him act against the interests of the story? Will his complex backstory actually inform how he's played, or be forgotten on the character sheet because it's too much for the player to remember?
The humans are the complicated concepts that suck; the robots are the simple concepts that provide a strong springboard for storytelling. Anything OgreButt does above and beyond his bold, simple concept will serve to develop and amplify the character. With a complex concept, any attitude the character takes could practically be decided by random roll and adds nothing to our understanding of him.
Likewise, the humans in Westworld could be good or bad based on who knows what. They're opaque and full of secrets. Maybe those secrets will pay off eventually, but the robots are immediately more entertaining and interesting to watch. In RPGs, maybe we should play like robots.