
Explaining Westworld’s unsettling Season 3 titles


The makers of HBO’s Westworld brought in a team of experts from A.I. Fiction led by Dr. Pinar Yanardag to add some hallucinatory dreamlike elements to their Season 3 titles. An artificial neural network (ANN) binge-watched Seasons 1 and 2 and then set about interpreting what it had seen to create new, similar content as best it could.

Main titles designer Patrick Clair and Dr. Yanardag spoke to SyFy Wire about the experience.

So how did Project A.I. Binge-Watch work?

DR. PINAR YANARDAG: We made this A.I. algorithm watch the first two seasons of Westworld. And by watching, I mean we processed the episodes and fed them in in the format the algorithm requires, and it learns what these things look like. Once it learns, it can hallucinate new content that is not in the episodes at all. It’s really fascinating.
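The team’s actual pipeline isn’t described in detail here, but the “processing and feeding in” step Dr. Yanardag mentions typically means turning decoded video frames into uniform, normalized arrays a generative model can train on. A minimal illustrative sketch (NumPy only; the frame size, crop, and value range are assumptions, not the production setup):

```python
import numpy as np

def preprocess_frames(frames, size=64):
    """Center-crop each frame to a square, downsample by striding,
    and scale pixel values to [-1, 1] -- a range many generative
    models expect for training input."""
    batch = []
    for frame in frames:
        h, w = frame.shape[:2]
        side = min(h, w)
        top, left = (h - side) // 2, (w - side) // 2
        crop = frame[top:top + side, left:left + side]
        step = max(side // size, 1)          # crude downsampling by striding
        small = crop[::step, ::step][:size, :size]
        batch.append(small.astype(np.float32) / 127.5 - 1.0)
    return np.stack(batch)

# Stand-in for decoded episode frames (random pixels at a 720p-like shape).
fake_frames = [np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
               for _ in range(4)]
dataset = preprocess_frames(fake_frames)
print(dataset.shape)  # (4, 64, 64, 3)
```

Note that only pixels go in, matching the interviewer’s point below: the model sees visuals, never audio or dialogue.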

And by watching, you mean just visuals, no sound, correct? So the A.I. algorithm is not getting any ideas from Dolores about how to break free or take over …

YANARDAG: [Laughs] Yes. Just the visual content, without any sound.

PATRICK CLAIR: I will say it took me a while to understand what was going on, exactly. The way the algorithm analyzes these images from across the whole series, it kind of puts the colors and shapes into an abstract three-dimensional space, and it’s able to draw relationships between shapes, like shapes of faces being similar. The most useful metaphor I came up with is that it’s like a big 3D block of Westworld cheese, right? The holes inside the cheese are kind of the shapes that it sees. If you cut a slice of cheese, you might get something that looks like a face, or a horse, or the lab. Now, it’s more complicated than a 3D block of cheese, and it exists in many dimensions, which is very hard to conceptualize.

But I think what’s interesting is that the network starts to create relationships that we wouldn’t logically create — like what looks like a woman’s leg with a boot on it, like some of the women from the saloon, and then a few seconds later, a rock formation. It’s not either of those things. It’s probably inspired by both of those things. But that’s where you start to get this really interesting, intuitive quality coming out of this A.I.
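The morphing Clair describes, a booted leg drifting into a rock formation, is what you get when you walk between points in a trained generator’s latent space: intermediate codes render images that blend both concepts. The specific model used on the titles isn’t named, so this is a generic sketch of linear latent interpolation, with the latent dimension and the “leg”/“rock” codes purely hypothetical:

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Walk in a straight line through latent space from z_a to z_b.
    Fed through a trained generator, the intermediate codes would
    render images that are neither concept, but inspired by both."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_a + t * z_b for t in ts])

rng = np.random.default_rng(0)
z_leg = rng.standard_normal(128)   # hypothetical code that might render a leg
z_rock = rng.standard_normal(128)  # hypothetical code that might render rocks
path = interpolate_latents(z_leg, z_rock)
# path[0] is exactly z_leg, path[-1] is exactly z_rock;
# everything in between is a weighted blend of the two codes.
```

In practice many generators use spherical rather than linear interpolation to stay in the high-density region of the latent distribution, but the blending intuition is the same.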

A lot of A.I. researchers work in areas of hard science, engineering, trying to figure out how to make an automated car drive, but this is in the field of creativity. It’s actually going to be applicable to more soulful, human things, like figuring out emotionally provocative metaphors, or the right way to communicate with someone in a sensitive way. These are really beautiful, abstract, somewhat fuzzy results with Westworld, but if you take this technology and turn it on to a much greater sample of things, the machine will probably find patterns beyond what we can consciously identify, perhaps things we can only feel intuitively. It might yield insights about human emotional states that we haven’t been able to articulate ourselves.

Any insights that it yielded that you couldn’t articulate before?

CLAIR: It reminds you of how much of what we watch is about watching people. Almost any sample we take out of this algorithm is some form of a human face, which, of course, makes a lot of sense in hindsight. But I sort of naively assumed that we’d get these very strange, warped Western landscapes, and they’re there, a little bit, but I would say 95 percent of what you see is some kind of human face. I did a bit of reading on it, and a massive part of our brain is devoted to processing faces, decoding faces. You get a really raw, immediate, emotional result from deconstructing faces.

YANARDAG: This is a big research problem in A.I., in the sense that they’re vulnerable to the data that you feed to them. They are able to learn based on what you feed to them, so they’re able to learn things like biases and discrimination that exist in the data. Sometimes the algorithms can be racist and sexist.

I thought one of the things that distinguished human consciousness from artificial intelligence was our ability to dream. So what does it mean if A.I.s can hallucinate?

YANARDAG: It’s mimicking the data. It’s not a real hallucination, but it’s a step towards it. In the near future, maybe these algorithms can go beyond hallucinating and start to get creative. But we’ll still be far ahead of them. Algorithms won’t be able to mimic our creativity, our decision-making skills, our empathy, or other such features, for a long time.

CLAIR: The scale of artificial intelligence is so much smaller than the scale of human intelligence. Dreams are extraordinarily complex on a visual and conceptual level, and just as advanced as an episode of Westworld, and that’s what your brain is doing when it’s switched off! As we learn that machines can do things like interpret images in an intuitive way, it helps us to understand the human brain even more.

Tragically, in the middle of this, my mom had a stroke. And speaking to her doctors — she’s doing really well now — it was fascinating to realize that this A.I. idea of building pathways and connections was exactly what I needed to understand for my mom’s recovery, how she needed to build new pathways to articulate the fingers on her left hand. The more steps we take in terms of neural networks and developing machine learning, the better we’re going to come to understand our own brains.

Do you think it’s possible for an artificial intelligence to attain consciousness, and under what conditions?  

CLAIR: I’m not worried about a Terminator scenario. [Laughs]

YANARDAG: I think the machines will definitely get much smarter, and they will learn faster, but I really am skeptical that artificial general intelligence will truly become a reality in our time. But the progress is fascinating. We can think about a future where humans and machines can work better together. Here, we just showed an example of a human-A.I. collaboration in a creative way, but there are many areas where A.I. can truly change the world for the better and improve our lives.

Which areas would you like to see developed?

YANARDAG: Medical applications. There is huge potential there. A.I. can really change the way we approach healthcare, because these algorithms can consume your medical history and predict what you need, in time for you to get help.

CLAIR: Diagnostic healthcare is the area where these things will surface really quickly. I have a friend with an extremely rare condition, and it could help there. But there are two others I’d love to see. One of the dads from my son’s daycare was using machine learning to come up with an extremely cool bespoke sports car that could be 3D-printed. I thought that was cool! And if, in the process of making animation, it could be as easy as me just telling an algorithm what I wanted it to be, and then it kind of making it? That would be cool, too. I look forward to that time, but I don’t want it to be able to direct the title sequence, too, because then I’d be out of a job. [Laughs]


