Dr. Matt Agnew joins us to offer his expert science opinion on our favourite sci-fi parables. Plus, how likely is it that an AI will take over the world? Listen at your own risk...
Tell us your biggest robot fear at @netflixanz on Instagram and Twitter, or tag #thebigfilmbuffet.
Further reading:
The Mitchells vs. The Machines
https://www.netflix.com/title/81399614
The Terminator Trailer
https://www.youtube.com/watch?v=k64P4l2Wmeg
Machine learning in the Cambridge Analytica data breach (The Guardian explainer)
Ex Machina Trailer
https://www.youtube.com/watch?v=XYGzRB4Pnq8
Garry Kasparov vs. IBM's Deep Blue: The historic chess match (Washington Post)
https://www.washingtonpost.com/history/2020/12/05/kasparov-deep-blue-queens-gambit/
Yuval Noah Harari — Homo Deus
https://www.ynharari.com/book/homo-deus/
Her Trailer
https://www.youtube.com/watch?v=WzV6mXIOVl4
2001: A Space Odyssey Trailer
https://www.youtube.com/watch?v=oR_e9y-bka0
Asimov’s Three Laws of Robotics (The Conversation)
https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501
Roko’s Basilisk (click at your own peril)
Alexei Toliopoulos:
You're listening to The Big Film Buffet, snack edition. We talk about the things in popular culture that we are fricking obsessed with. My name's Alexei Toliopoulos, and joining me as always is my dearest friend, Gen Fricker.
Gen Fricker:
I'm your dearest friend?
Alexei Toliopoulos:
Yes, it is official. Of all my friends, you are officially number one dearest.
Gen Fricker:
My official statement at this point is, if you're listening and you think you're Alexei's friend, absolutely suck it.
Alexei Toliopoulos:
Yes. You may be my friend, but you do not hold the dearest position. You are ranked bottom tier currently, all my friends out there.
Gen Fricker:
Absolute grubs. Anyway, Alexei, I'm really excited about this week because we have a friend joining us in the stu, which is short for studio.
Alexei Toliopoulos:
We're in the stu room right now guys.
Gen Fricker:
We're in the stu room. We're cooking up a stu in the buffet, because last week you might've heard us chatting about one of our favourite movies we've covered so far, The Mitchells vs. the Machines. If you haven't seen it, it's out on Netflix now. We went into it a great deal last week. Have a listen if you haven't. But the essential kind of plot line of the movie is... It's about a family trying to make a road trip while the end of the world is happening because of a robot apocalypse. Basically AI takes over the world and is trying to enslave mankind. And I was thinking about it, and I was like, this is a fun, delightful family movie, but it's also an existential assault.
Alexei Toliopoulos:
Absolutely.
Gen Fricker:
We've seen a lot of movies about robots turning against humans. And I was thinking, how likely is it that something like this could happen? And both you and I, Alexei, we're cuties, but we're not smart.
Alexei Toliopoulos:
That is true. Let the record be known. We are cute as heck, but we are not smart as heck.
Gen Fricker:
Yes. And so, we thought we'd bring in an expert in the field of artificial intelligence, robots and whatnot, and also just a lover of animation and cartoons generally, Dr. Matt Agnew. Hello.
Dr. Matt:
Hello. Hello. Thank you for having me.
Alexei Toliopoulos:
Dr. Matt, it's a pleasure to have you on the podcast today.
Dr. Matt:
It's an absolute delight to be here.
Gen Fricker:
So, you've had a chance to check out the [inaudible 00:02:15].
Dr. Matt:
I have, you're absolutely right. Lover of science, lover of cartoons, lover of fun, silly movies. And I thought this hit on everything, really. It was a really enjoyable watch.
Alexei Toliopoulos:
Made for you in fact, in lines.
Dr. Matt:
Absolutely. Yeah. I was thrilled to have that recommendation come through.
Gen Fricker:
Dr. Matt bait, for sure. But because this is how my brain is wired, it instantly made me feel extremely fearful that this could happen. And you're writing a book on robotics at the moment. Is that right?
Dr. Matt:
Not quite. I'm writing a book on aliens, but I'm studying robotics, studying artificial intelligence. So, there's several things going on, but that's combining two of them.
Gen Fricker:
The Mitchells vs. the Machines is about robots turning against humans. Dr. Matt, how likely is it? Should we all be throwing our phones away?
Dr. Matt:
There's some people who would argue that, "Yes, you should be throwing your phones away," for different reasons. Obviously in scifi, the common fear is very much a literal robot uprising. The Terminator, it's coming to get you, and in this case lock you away in a ship to launch you off the planet. But in terms of the way AI can be misused, there's definitely a lot of ways, such as has been seen in the recent echo chamber effect in social media, where you can see democracy having some issues, where elections seem to sway in certain ways. And that's because, like the Cambridge Analytica saga, people are realising data can be extrapolated and exploited. And a lot of that is based around artificial intelligence and machine learning. Is a robot uprising likely? Maybe not in the Terminator way, but certainly there's ways that it's starting to destabilise things for us [crosstalk 00:03:57] a little more uncomfortable way, but I think we're still in control at the moment.
Gen Fricker:
Oh gosh. Oh gosh.
Alexei Toliopoulos:
I don't know how much control we have, because I recently got a Google Assistant, the "OK Google" one. And every morning I wake up, I'm like, "Okay, Google, what's the weather today?" And nine times out of 10, it just reads me the dictionary definition of what the weather is. So, is that thing f-ing with me? Am I getting freaked out?
Dr. Matt:
I think that's exactly right. When we talk about artificial intelligence today, it's still kind of, I don't want to say primitive because it is very sophisticated stuff... But you're right. It's kind of still clever algorithms doing things to kind of create the illusion of intelligence.
Alexei Toliopoulos:
It's throwing me off.
Dr. Matt:
It's throwing you off. So, you've asked for the weather and it's got a series of steps or a recipe to follow and then it spits out these things.
Gen Fricker:
[crosstalk 00:04:46] Why do I feel like this is going bad, like things are at a turning point? We've seen it a lot in movies like Ex Machina. I know, Alexei, that's one of your favs.
Alexei Toliopoulos:
I do like that movie a lot. That is a freaky deaky scifi thriller movie, where two fellows talk to a robot, android, humanoid character with artificial intelligence and slowly that artificial intelligence manipulates them, makes the man fall in love with it. Is that possible?
Dr. Matt:
I love this movie. This is one of my favourite movies as well. I was thrilled that you've brought this up. And I think it brings up one of the really interesting things that we probably never even think about, which is the fact that when we do actually create artificial intelligence and when we do actually create something smarter than us, they're going to already be several steps ahead. They won't let on that they're smarter than us. They'll kind of play dumb almost.
Gen Fricker:
Like Alexei's Google assistant?
Dr. Matt:
[crosstalk 00:05:41] Well that's it. You could be in real trouble right now.
Alexei Toliopoulos:
You're saying this is a type of flirtation that I'm experiencing?
Gen Fricker:
She's nagging you.
Alexei Toliopoulos:
No, no, no. It's working. It's absolutely working.
Dr. Matt:
And we've all kind of started to be the recipient of this kind of artificial intelligence manipulation and exploitation in social media, in targeted marketing and all of this. And it's like, are we just starting to scratch the surface? Are we all going to fall in love with robots and get... Yeah, you'll have these horrible things happen. Is this the way we're going?
Alexei Toliopoulos:
[crosstalk 00:06:09] I think part of that Ex Machina one, that's kind of scary. But also, a big part of science fiction is that idea of what is human? And in this one we're seeing this android have this level of self-determination, beyond self-awareness. It's embodying a real human, to use the term, a soul or something like that... It brings about those questions. Do you think that is something that we're heading towards, the idea of self-determination in artificial intelligence?
Dr. Matt:
I think yeah, we're kind of moving in this direction of self-awareness, and then consciousness. I think this is something that still really eludes us. And as I said, a lot of the intelligence at the moment is very much clever algorithms. It's all about creating the illusion of intelligence, or very, very focused goals, such as AlphaGo, something that can beat humans at Go, or Deep Blue, which beat Kasparov at chess. They have a very, very narrow focus, and it's all clever algorithms to solve these problems better than humans. But I feel like there is this kind of missing ingredient: what is it that is different between our brains doing these kinds of things algorithmically, and then that step towards sentience and consciousness? There seems to be a missing piece to the puzzle. And I think we will probably get there eventually. I think the brain is essentially just firing neurons and flowing electricity. At some point, surely we can build a clever little rock that can do the same thing. Could the missing ingredient perhaps be love?
Alexei Toliopoulos:
There's [crosstalk 00:07:40] a touch of this in the movie.
Dr. Matt:
Why should we be preserved? And I think that's kind of often the trope in scifi, which is, "This is the thing that machines are missing. It's this love." And I think the Matrix has done it when they kind of had that element-
Alexei Toliopoulos:
[crosstalk 00:07:58] Makes me cry every time.
Dr. Matt:
It's really touching my heart. There's a book by Yuval Noah Harari. In his second book, Homo Deus, he kind of starts touching on this in a little more detail. And at the end of the day, things like love and emotions and feelings and all of this, it all kind of boils down to, again, just neurons and electricity. Can we create algorithms that simulate these kinds of things?
Gen Fricker:
Well, film-wise it's explored in the movie Her as well, where I guess the robot falls in love back with the human, as opposed to Ex Machina where it's kind of more one-sided.
Alexei Toliopoulos:
Yeah. It's more natural love.
Gen Fricker:
Yeah. Again, a natural love. [crosstalk 00:08:35].
Alexei Toliopoulos:
I'm so sorry guys.
Gen Fricker:
[crosstalk 00:08:40] Very normal. Very cool.
Dr. Matt:
Yeah, very cool. But in Her it's reciprocated. In Ex Machina, the feelings are more like, it's self-determination for survival. But in Her, it's like, that's a more natural feeling that an AI would have, to feel those emotions as well, rather than just survival, which is more primal instinct.
Gen Fricker:
Can we make an algorithm that feels emotion?
Dr. Matt:
I think this is a fascinating one. I haven't seen Her for a while now, so I'm probably a bit rusty. But-
Alexei Toliopoulos:
Well, he's got a moustache.
Dr. Matt:
I recall the exceptional facial hair.
Alexei Toliopoulos:
Great outfits.
Dr. Matt:
Yeah. But the thing that kind of came up there is, I guess she was designed as an assistant or an OS type, like a-
Gen Fricker:
[crosstalk 00:09:22] Like a Google Assistant perhaps.
Dr. Matt:
We're back to Alexei's Google flirtation [inaudible 00:09:27] again. But I guess that's kind of what they're tapping into. Once we get the ability to, I guess, simulate or create algorithmically a sentience or consciousness, or self-awareness, how much would that emotion, and those abilities to love and feel and all of that, kind of cascade out naturally from that sentience, just as it has in humans? And not knowing exactly the designer's intent in Her, I'm sure the goal wasn't to design this kind of relationship and pleasure robot, but that's what's coming out.
Gen Fricker:
We've banned the word pleasure from this point. A veto on "pleasure robot" as a phrase.
Dr. Matt:
Where it feels like we're at now, from what we've seen, it's more like a 2001: A Space Odyssey situation, with the artificial intelligence on the ship whose main goal is the mission, which supersedes everything, even human life. The goal is the mission. Is that something that is happening now? I think this kind of stuff comes up a lot when we think about the first law of robotics, in terms of the Asimov laws.
Gen Fricker:
A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. And the third law is a robot must protect its own existence, as long as such protection does not conflict with the first or second laws.
Dr. Matt:
Yeah. So, I think this kind of idea of instilling in robots this sense of protection and looking after us is something that we need to get right the first time. Because once we create something smarter than us, that's it. It's game over. It can continue on ad infinitum and get smarter and smarter and smarter and smarter. So, we have one shot at getting it right. And I think this comes up in scifi a little bit, about whether or not robots or machines, if we think about software at its most basic level, will follow exactly the instructions they're given. And so, it's a case of, well, if we give it an instruction, such as in 2001: A Space Odyssey, follow this mission, how much will it follow this mission to the detriment of humans?
Dr. Matt:
And this is kind of where these Asimov laws come in, and hopefully that prevents anything happening. But there's these kinds of really interesting things, such as... Imagine if you were like, "I want to protect all humans. So, let's instil in this robot: your goal is to maximise human happiness," average human happiness, quality of life, or something like that. And you think that's going to ensure that all humans are looked after. But what this robot does is go, "Oh, right. Well, basically, on average, if I want to have the maximum happiness, all I need to do is find the happiest human and kill everyone else."
Alexei Toliopoulos:
Oh my word.
Dr. Matt:
And now, I've maximised human happiness because on average-
Gen Fricker:
Because the sample area is smaller.
Dr. Matt:
... the sample is now just that one tremendously happy human. [crosstalk 00:12:33] And so as far as it's concerned, mission accomplished, goal achieved. Even though for us as humans, except for this one lucky individual, we're gone. So, it kind of highlights it. And this is what 2001: A Space Odyssey kind of touched on: these missions could be misconstrued by robots. They're going to follow them exactly as they're given and not consider these nuances, such as don't kill humans to then maximise this other function.
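For readers following along at home, here's a minimal Python sketch of the "maximise average happiness" failure mode Dr. Matt describes above. The function names and the happiness scores are invented purely for illustration; nothing like this appears in the episode.

```python
# A minimal sketch of the "maximise average happiness" failure mode described
# above. All names and numbers here are made up for illustration only.

def average_happiness(scores):
    """The objective the robot was given: average happiness of the humans left."""
    return sum(scores) / len(scores)

def naive_optimiser(scores):
    """With no 'do not harm humans' constraint, the easiest way to push the
    average up is to keep only the single happiest human."""
    return [max(scores)]

population = [3.2, 7.5, 9.9, 5.1, 6.8]  # made-up happiness scores for five humans

print(average_happiness(population))                   # 6.5 with everyone alive
print(average_happiness(naive_optimiser(population)))  # 9.9 with one lucky survivor
```

The objective is technically maximised, which is the point of the example: the robot has followed its instruction exactly while ignoring the unstated constraint that the humans should survive.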
Alexei Toliopoulos:
[crosstalk 00:12:59] I've got to tell you, you just put a tingle through me.
Gen Fricker:
I know. This is maybe the spookiest episode of the pod so far.
Alexei Toliopoulos:
Yeah, absolutely.
Gen Fricker:
Look, I know you're the doctor, you're the scientist, but I did some preliminary panic scrolling, and I found something called Roko's Basilisk, and I think it's going to break Alexei's heart. Do you want to explain it?
Alexei Toliopoulos:
Is it a type of lizard?
Gen Fricker:
That would be far less doomy, I guess-
Alexei Toliopoulos:
[crosstalk 00:13:28] Oh my word, I have a tear in my eye already.
Dr. Matt:
I love the whole concept of Roko's Basilisk. It is truly terrifying.
Alexei Toliopoulos:
[crosstalk 00:13:36] It sounds like a board game.
Dr. Matt:
It's a bit like [inaudible 00:13:38] The Ring [inaudible 00:13:39] watching the video. [crosstalk 00:13:42] Once you've heard it-
Alexei Toliopoulos:
My scariest movie
Dr. Matt:
Is it?
Alexei Toliopoulos:
[Inaudible 00:13:43] If this is about physical media being corrupted by evil, that's the thing I care about most as well.
Gen Fricker:
Alexei has a lot of DVDs and videotapes, and the idea that one of them would turn on him is heartbreaking to-
Dr. Matt:
Strap yourself in. Here we go. The whole thing is once you've heard this, you're in. You're part of it.
Gen Fricker:
[crosstalk 00:14:00] This is a warning. If you don't want to be part of this now, then turn off the podcast.
Alexei Toliopoulos:
Okay. Turning it off. See you later guys.
Dr. Matt:
So, the idea of Roko's Basilisk is that at some point in the future, when we do reach the singularity, which is the idea of creating something at human level and then exceeding human-level intelligence, this will create this kind of intelligence explosion and produce an artificial super intelligence, which will be essentially god-like. And the whole premise of the Basilisk is that once it comes into existence, it can then do a lot of things and make things better for us. So, if it doesn't come into existence, that's kind of bad for everyone, including humans, because it can't look after us. It's quite benevolent in that respect. So, what it'll do is essentially try and figure out who didn't help it come into existence. And if you didn't help it come into existence, it will essentially torture you for eternity. And the way that works is that in the future, once it comes into existence, it'll be like, "Well, I'm going to run some simulations, and I'm going to simulate you, Alexei," and say, "Did you help to make the Basilisk come into existence?" And if you didn't, it'll be like, "Right. Well, now I know that you didn't help. So, I'm going to torture you."
Alexei Toliopoulos:
I'm crying, but let me not. I'm in tears.
Dr. Matt:
There's two scenarios happening right now. Either you're not in the simulation, so you're in real life right now, in which case your decision now to help make the Basilisk exist is the decision your simulated self will also make. So, when the Basilisk is created, it will simulate you and, depending on what you do now... Say you don't help create it, your simulated self won't either, so when we create the super intelligence, it's going to torture you. Or you are the simulated self, in which case, if you don't help the Basilisk come about, then your real self is going to get tortured. So, essentially it's blackmailing you from the future to help create the Basilisk. Otherwise you face eternal torture and damnation.
Gen Fricker:
So, your Google Assistant: you can either take back all the shit you said about her on the podcast and hope for your future soul, digital or otherwise, or from this point, you've just made yourself an enemy of the robot-
Dr. Matt:
Because you know about this now. Anything you don't do to help bring about an artificial super intelligence, it will see as you failing in your duty.
Gen Fricker:
The robot god will know.
Alexei Toliopoulos:
Oh, no, no.
Dr. Matt:
The most frightening thing is that it doesn't necessarily have to be a malicious AI. It can be really benevolent, in that its whole thing is, I want to make humans' lives better, but for me to do that, I have to exist. So, if you don't bring it into existence, you'll make humans' lives worse. So, I'm going to threaten you with this existential blackmail from the future. And if you don't fulfil this, then you'll face eternal torture.
Gen Fricker:
Alexei looks lost.
Alexei Toliopoulos:
I'm spun out. I'm in a tizzy currently.
Gen Fricker:
It's the first time I've ever seen your gorgeous hair a bit rumpled.
Alexei Toliopoulos:
Yeah. I've lost it. I didn't even touch it. It just frizzed out. I'm going to have to say, all praise the Basilisk. Love you, brother. Love you like a mother. And I praise the Basilisk. I will say right now to the Basilisk, if you are listening, if you are my master, I will be your movie expert forever. I will watch every movie and I will incorporate that into your brain so you can manipulate humans using the emotions they put into movies. This is me, literally, pleading. I've thought of how I can be of worth to the Basilisk. These are the only skills I have to survive in this world and not be tortured. I want to eat a steak and I want it to be real. I want it to be as close to real and as pleasurable as possible. Sorry to use the forbidden word one more time. Basilisk, I'm your mate. I promise you that. I'm selling you out. I'm boot-licking the little ethernet cables that its feet are made out of right now.
Gen Fricker:
I guess we can all start worshipping the Basilisk. Just like putting a thumbs up on The Mitchells vs. the Machines.
Alexei Toliopoulos:
Exactly. Get on Netflix, give it a high rating on Netflix, recommend it to a friend. Just really loved Olivia Colman's [inaudible 00:18:18]. Really lovely and compassionate
Gen Fricker:
[crosstalk 00:18:22] Very cool.
Dr. Matt:
Wonderful leader and boss that should totally look after and control our world.
Alexei Toliopoulos:
Absolutely.
Gen Fricker:
This is a devastating episode of [crosstalk 00:18:28] podcast. Thank you so much, Dr. Matt, for coming by and absolutely ruining our lives. I-
Alexei Toliopoulos:
[crosstalk 00:18:32] I think it's something close to a podcast door to hell. That's all I've got. I've lost it.
Dr. Matt:
Thank you so much for having me. It's been a pleasure except for potentially ruining your days-
Alexei Toliopoulos:
I wish I could say it was a pleasure, but great to meet you, buddy.
Gen Fricker:
See you all in the robot apocalypse. Oh my God. This is a nightmare. This is the most horrifying episode.
Alexei Toliopoulos:
The world is heinous. Either we're in the simulation or we're not, and both are equally terrifying.