Interview with Dr. Eve Poole about Her New Book on AI, “Robot Souls”

Text transcript, lightly edited for readability.

Douglas Giles
The topic of artificial intelligence, or AI, has been all over the news lately, quite often discussed with a sense of foreboding and a plea for someone to do something to save us from the effects of AI. Much of this discussion has been among techies and corporate bigwigs. But if, as the doomsayers say, AI poses an existential threat to humanity, then perhaps the people who need to be discussing the future of AI are those in everyday life.

I’m fortunate today to be joined by Dr. Eve Poole, an expert in leadership and in leadersmithing: the day-to-day craft of gaining real experience in the critical incidents of humans leading other humans. That’s pertinent to AI development because AI is being developed by humans. Dr. Poole, I greatly appreciate you being with me today to talk about your book.

Eve Poole
Hello, and thank you for the invitation.

DG
Oh, definitely. I’m so glad that your publisher got in touch with me. The book is Robot Souls: Programming in Humanity, as in programming humanity into artificial intelligence. It’s published by CRC Press, an imprint of Taylor & Francis. It’s a wonderful book; it really is. They’re not paying me to say that.

Here’s where I’d like to start, and then I’ll shut up and let you talk. You write about how we’ve called it “artificial intelligence” rather than “artificial humanity,” and you ask elsewhere, have we copied intelligence but not soul? What do you mean by these concepts?

EP
I got very intrigued by that piece of nomenclature, particularly because my background is as a theologian, before I did business and wrote about capitalism and leadership more generally. And of course, in Christianity there is quite an emphasis on the body, because of the idea that God became a human body in order to redeem the creation he had made. So, it’s always quite interesting for me when you have philosophies or other kinds of theories which are much more about thought and mind than about anything physical.

And of course, ever since the Enlightenment we’ve been obsessed with intelligence as a sort of free-floating concept. I mean, of course you’re a philosopher, you know that’s not a new preoccupation. But the sort of disembodiedness is particularly relevant, I think, because we then got into copying the mind instead of copying the body. And I understand why that has been a bifurcation, because if you think about any copying of bodies that has been going on, particularly after our post-Second World War experience of uncovering what had been going on around eugenics, there is a horror of the idea that you might get into copying bodies.

Because that seems to usher in a whole host of things we were in a real panic about. And globally there are still huge red lines on cloning and experimentation, stem cell research, all these kinds of things. So, we’re a bit ethically compromised about copying bodies for all kinds of good reasons, and therefore because that’s just too difficult, we have felt entirely free to just focus on copying minds.

But again, because of where we’ve got to, philosophically speaking, there used to be, as you know, a whole tradition of mind-brain theory and thinking about all these kinds of things. But in the last, certainly, 30-40 years that’s kind of gone. Everyone’s just decided that that’s been solved by neuroscience, that there’s no such thing, that it’s all essentially material one way or another. And so that means if you’re copying intelligence, you’re copying mind, but these days you’re copying brain processing.

And so, we’ve started limiting ourselves down, just in that introduction, to some quite specific things we are copying, which is the things we understand about brain processing and things we can see on MRIs or can test empirically, because we’re in a sort of flat panic about how you would even describe some of those other things. They’re just too difficult conceptually for us to cope with.

And in this very materialistic world, which is totally sold on the narrative of improvement through evolution, and, you know, religion not being a popular narrative in many countries, particularly those that are acing AI, it’s all been about these incremental improvements to do with a very materialistic understanding of what a human is. Because I’m a theologian, because I have a background in working with people leading other people, I was quite interested in the totality of what makes us us, and therefore how you get the best out of other people. That’s probably one of the red threads of my work: optimizing our performance.

And it feels to me that education is about optimizing thought processes and things like that, I guess. But increasingly what we’ve learned over the last 20-30 years is that it’s not just about IQ, it’s about EQ, which is very, very interpersonal, and very physically loaded, quite often, in terms of how you deliver EQ as opposed to IQ. So, with all of that in mind, I was sort of smelling a rat about the emerging narrative on AI. I mean, if we are copying humans, intelligent humans, why are we doing it so badly? And through all of that, there’s the narrative about the environment and what can be learned from nature.

Everyone is batty about biomimicry: how can we use teasels to figure out Velcro, how can we understand gaits from the different ways that beetles move, or whatever it might be. We’re getting really, really good at copying the details of nature in order to learn and innovate in things like design and engineering. But we’re rubbish at copying humans, because we’ve just decided most of it’s too hard. So, we’re just going to do the easy bits.

And of course, there’s a problem with doing just the easy bits, because I argue in my book that we’ve kind of programmed a psychopath, and I don’t really think that’s likely to end well, yes.

DG
I really like how you so often in the book make that differentiation between the things we think are easy to copy (rationality, logic, the one side of the brain) and the things we just ignore, or the people who are doing this just ignore. And philosophy is very guilty of this too; I think of the analytic philosophers, for whom it’s all about logic and brain. We ignore what you call junk code. I know that term from computer programming, but please explain junk code and how it applies to a human being.

EP
I love this idea of junk code. It’s partly a play on words, because junk code in coding is one of two things. It’s either redundant code, stuff that just happens to be in the program (I’ve got no idea what it does, but for some reason it’s there), or sometimes you’ll put junk code in to kind of hide the real code so that people can’t copy you. It feels that both of those things are relevant for my definition of junk code.

So, my definition of junk code in this context is all the things we’ve deliberately left out of our programming in AI because we think it’s redundant. Whether or not we think it’s because someone’s hidden it in there to stop us copying properly or whatever, I don’t know.

But of course, when you look at the totality of all those things we’ve missed out, and you start trying to see how all of that junk code might make sense in and of itself, you start discerning a really interesting pattern behind all of those kinds of mistakes and emotions and uncertainty and all that kind of stuff you’d sack a robot for. They are vital to the ways of our species: the design of our species has sorted out problems like alignment and control, because without those things the species would have been extinct years ago.

DG
Yeah. And that’s what’s so interesting, because the downside of using the term “junk code,” and it’s very intelligent that you use it, is, as you say, that what we call junk code is really the source code of what it is to be a human being. And that junk code you associate with soul, with, I think, a non-religious, quasi-religious idea of soul, perhaps more akin to Plato than a Christian concept necessarily. But it also is the source code that draws us to community.

You say at one point that the junk code teaches us that the meta-hallmark of soul is being in community, that these are decisions we need to take together. In my philosophical research, I get a lot into what makes a community and what breaks apart a community. So, how does the junk code cause us to yearn for community, or even push us to community, as human beings?

EP
Well, the very first thing that is salient about our design is free will, and in AI we’re kind of achieving that through deep learning and neural networks, giving AI the ability to reprogram itself if in doing so it can better meet its objectives.

Now, if you think about sitting in the lab with God’s load of blueprints, you think what fun it would be. We’ve done whales and we’ve done badgers, and, you know, let’s do humans. And you think, well, free will.

Why not? It’d be a laugh. But you have to correct fairly quickly to make sure that the first thing your species does isn’t to make a stupid mistake and fall off a cliff, or eat the wrong berries, or whatever else it might do, so that immediately the species is extinct and then you have to start again.

So, you make the decision on free will, and you have to bring in a few kinds of ameliorators to make that work, and essentially that’s the rest of the junk code, because free will is kind of batty: it’s batty not to want to control your creation, which is why there’s so much dispute about AI, of course.

So, there are sort of three pairs, I think, of stabilizers, if you like. The first two are to solve the problem that our species is really rubbish: it takes nine months to gestate, and then you give birth to something which is entirely rubbish for years and needs a huge amount of care and attention, or it could get squashed by the nearest anything. So, you need to think of a way to stop mothers just sort of chucking the baby in the bin or over the wall or something and getting on with their lives.

So, you design in emotion, because emotion makes you grieve and feel worried and love and feel affection and all that kind of soppy nonsense, which means that you want to nurture these rubbish babies that keep you awake all night and are, you know, useless. And it also makes you grateful to your own parents. And so, when they get dotty and old and disabled and rubbish or whatever, you want to try very hard to nurture them and keep them flourishing for as long as possible. And because you love offspring, you love kin.

Anyone who is born who would struggle, you want to protect, nurture, promote their wellbeing as well. So, if you inject emotion as a bit of design, you create the circumstances where people will bond with other people. So that’s the first really important thing: you give them free will, but you make them interdependent, emotionally speaking.

Because it’s not always clear what’s going on with people, you also put in intuition, which is that slightly spooky kind of Spidey sense: when you know someone’s really cross with you, you know you’ve upset them, you kind of don’t want to go into that meeting or that room because it feels bad. All that kind of stuff that, again, we find really hard to articulate, but it is very, very real, and we all have stories about times when we have absolutely relied on that gut feel. So, you have the first two, the emotions and the intuition, which are to help you flourish with other people.

So, then you have to address the problem with free will, which is that, you know, if you’re not frankly brainy with 25 PhDs, you might make mistakes. But mistakes actually are part of our design, because mistakes can be incredibly fruitful. Now, the AI designers know this because they have reinforcement learning, which is about helping AI learn through mistakes, and there is certainly a degree of that in humans. If you watch a child trying to walk, there’ll be a lot of trial and error, and it’s through that process the child figures out how to make it work.
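The reinforcement-learning idea mentioned here, learning through mistakes, can be sketched in a few lines. This is only a toy epsilon-greedy bandit, not any system Poole refers to, and the reward numbers are made up for illustration:

```python
import random

# Toy sketch of learning through mistakes: an epsilon-greedy agent tries
# actions, occasionally at random (its "mistakes"), and updates a running
# estimate of how well each action pays off. All numbers are illustrative.

def learn_by_trial_and_error(rewards, steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(rewards)  # believed payoff of each action
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < epsilon:    # explore: risk a mistake
            action = rng.randrange(len(rewards))
        else:                         # exploit: current best guess
            action = max(range(len(rewards)), key=lambda a: estimates[a])
        reward = rewards[action] + rng.gauss(0, 0.1)  # noisy outcome
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(range(len(rewards)), key=lambda a: estimates[a])

# Three possible actions; only the third is actually any good.
best_action = learn_by_trial_and_error([0.1, 0.2, 0.8])
```

The occasional random action is the whole point: without the "mistakes" the agent would settle on the first action that paid anything at all and never discover the better one.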

So, there is an element of mistakes in order to learn. But with humans there’s something more important, which is that mistakes will tend to have an effect on other people. And those other people, because of emotions, will therefore tend to be cross with you, be happy with you, hate you, all those kinds of things. You know, it’s Adam Smith and being held in the eye of the observer. Because of this emotion, we will tend to be affected by how people feel about the mistakes we’ve made.

And over time that creates in us this sense of conscience. We don’t want to make mistakes because we know people will be cross, we know we’ll get shouted at, we know people will shun us, we won’t have friends, people won’t look after us, all those kinds of things. So, mistakes are a very clever bit of design to help us learn, but also to help us, over time, learn even better to avoid making mistakes in the future. So, it’s a real future-proofing, derisking thing.

To stop us precipitously rushing into mistakes, we also have this very clever bit of design, the junk code around uncertainty. One of the reasons we use AI is because we want everything to be certain: we want very quick decisions, we want clarity. But actually, even in AI they’ve figured out that uncertainty is very helpful to reduce risk, which is what the design serves in us. If you are incredibly clear all the time, but not that bright as a species, you might make lots of mistakes, which again will not help you survive. Having uncertainty means you stop, and you pause, and you think. And most importantly, you seek advice from other people, because of emotions and the bonds and all those other things.

There’s an intuition people aren’t quite clear about; all those other things start playing together, and the uncertainty stops you being precipitous, and it helps you hold things in tension for a bit longer than you would normally want to, in order to give time for the best solution to emerge. So, uncertainty and mistakes are really important parts of derisking free will.

So, those are the first four. The last two make up the suite of seven in the junk code. Those are about, well, you’ve stopped us rushing off cliffs and, you know, leaving our babies dead on the floor; what are you going to do to make sure that there’s a reason for the species to continue into the future, in order for there to be such a thing as a species rather than just a generation? And we’ve got a couple of very clever bits of coding in there.

There’s the junk code of meaning-making. I mean, how bizarre is it that, you know, you get up in the morning and you think, oh, I saw three red cars and two black cats, and that means . . . ? Or we read our horoscopes, really. Or we look at the stars and we call them a thing. We say, that’s definitely a bear, that’s a plough, that’s a star. You know, they’re not; they’re just random stars. So, we’ve turned them into a pattern of meaning, and we incessantly make meaning around us, and everything feels purposeful.

And of course, a species that has a sense of meaning and purpose will want to continue, because there’s a reason to. And we put that in by this very clever thing that humans are particularly brilliant at, which is storytelling, which is very, very sticky. It’s the best way to communicate values through the generations. We tell all kinds of stories, and we know stories that are thousands and thousands of years old. And we know through the work of people like Christopher Booker that there’s a pattern to stories. There are only a certain number of types of stories we tell, and we tell them repeatedly, the same sorts of story, to explain, you know, what we believe, what happens if you’re bad, who we like, where we came from, where we’re going. Is there a God? Is there not a God? Are there several gods? What do gods do? What happens to you when you die?

All of these things are answered in all the stories we’ve ever told, and we use stories as a way to communicate meaning and purpose down the generations, and also to encode the things we’ve learned from the other junk code: about mistakes particularly, about uncertainty, about emotions, about gut feel.

There’s a lovely bit in Ian McEwan’s book where he has his robot, Adam, talk about the fact that you couldn’t have literature if you weren’t human, because literature is all about humans being utterly rubbish. It’s all about junk code. It’s about us all fancying the wrong people, making mistakes, and faffing about. You couldn’t really have robot literature. What would it say?

So, those sorts of things all together essentially create an environment where, in order to flourish as a species, we seek other people out, and we need other people for our flourishing, and we march together, which keeps us all going as a species. That’s why I think it’s so interesting that it looks like rubbish junk-code stuff that you would sack a robot for, but when you look at it in that different way, you start thinking, crikey, those are the hallmarks of soul. That’s really what makes us distinctive and special, and that’s why we’re still here.

DG
That’s what’s so intriguing when you talk about all of that. Your book is as much about us humans as it is about artificial intelligence and robots. And I’ll get back to that thread, about what your book really says about us as humans, in a little bit. But what it says about AI and creating robots is that, as you say, we’ve done a good job of creating these machines that do very particular tasks, where we say, you know, do this, find the answer to this problem. And we haven’t been able to program, maybe we haven’t even tried to program in, this meaning-seeking facility. And I’m not sure how you would do that.

But this sense of meaning, I think it’s so tied in with the sense of community, because we find our sense of what it means to be an us, to be even a single self in the context of the meaning of a larger community.

And have we really just created intelligent machines, intelligent toasters? I’ll use that as an example. The last few months, everything is now branded as being AI. It’s an AI toaster, it’s an AI this, it’s an AI that. I’m waiting for AI scissors to come out. But it’s like, no, it’s not artificial intelligence, because it’s not intelligent, because it has no emotional intelligence.

So how do we address this issue of finding a robotic community, for lack of a better term? How do we create this community within artificial intelligence that gives us the sense of meaning?

EP
Well, there are a couple of things about that, and we’re sort of inching towards it in processing. They’ve already figured out that having lots and lots of neural networks, in deep learning configurations and federated, gives you more processing, gives you more nuance, gives you better outcomes. So, there are already sort of many communities within an AI, in the same way that within our own brains there are a lot of different processing centers and a lot of different things going on.
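The "many networks give better outcomes" point is essentially the logic of ensembling: several independently noisy predictors, averaged, err less than any single one. A minimal sketch, with plain Gaussians standing in for real networks and all numbers invented for illustration:

```python
import random

# Toy illustration of why many predictors beat one: each "network" makes
# an independent noisy guess at the truth, and averaging 100 of them
# shrinks the expected error by roughly a factor of 10.

rng = random.Random(42)
TRUTH = 10.0

def noisy_predictor():
    return TRUTH + rng.gauss(0, 1.0)  # one model, error of about +/- 1

single_error = abs(noisy_predictor() - TRUTH)
ensemble_mean = sum(noisy_predictor() for _ in range(100)) / 100
ensemble_error = abs(ensemble_mean - TRUTH)
```

The variance of the averaged error falls as 1/N when the individual errors are independent, which is the statistical version of the "many communities within an AI" intuition.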

I think what would be an interesting thing, not only in terms of increasing computing power but increasing intelligence, would be when they start trying to link AIs up. And of course, everyone’s in a flat panic about that because that feels like sci-fi and Master Race and Armageddon and everything else. And it’s slightly tricky because pretty much all AI is in private hands and there’s a lot of sort of proprietary this and the other about it.

Of course, because we’re not at the stage of artificial general intelligence yet, it is, as you say, very atomized. So, what would it mean if you got AlphaGo and AlphaFold to talk? Would that even be a conversation? Whereas, of course, with humans it would be. If you got the world’s most brilliant AlphaFold and AlphaGo scientists together with a Go grandmaster, or even a chess player, if you want to have Deep Blue in the conversation too, for a dinner party, the three of them would find all kinds of really interesting connections and would probably create all kinds of new concepts, innovations, thoughts that have never been had before, just because you’ve got three different disciplines in harmony and talking to each other. So, I don’t know practically how you do that or what the outcome might be. And there are sort of fractals in this, because one thing, of course, that LLMs [large language models] in particular rely on is pattern spotting.

The reason they work is because they have learnt all the different ways that words have tended to be grouped together, so you put some words in and they will finish things off for you. So that’s a sort of meaning-making, but it is very lateral and kind of 2D. The thing that’s different about humans is that we have a teleology. Of course, that’s quite compromised at the moment, because people are a bit flummoxed about teleologies, because everyone’s sort of fallen out of love with religion. So, people have got proxy teleologies about the flourishing of the species, or humanity’s triumph, or whatever it might be, because we’re not really quite sure.
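The "finish things off for you" behavior described here can be shown with the crudest possible version of that pattern spotting, a bigram counter over an invented corpus. Real LLMs are incomparably richer, but the principle of completing text from learned word groupings is the same:

```python
from collections import Counter, defaultdict

# Crude pattern-spotting: count which word follows which in a tiny
# made-up corpus, then complete a prompt by repeatedly picking the most
# common successor of the current word.

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # learn which groupings occur

def finish(word, extra=4):
    out = [word]
    for _ in range(extra):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

completion = finish("the")
```

Even this toy version exhibits the "lateral and kind of 2D" quality Poole notes: it continues patterns it has seen, with no goal or teleology behind the words it emits.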

We sort of thought, well, evolution, hurrah, it’s going to end somewhere triumphant. So, we’re all faffing around trying to figure out what our teleology actually is. So, who knows what the answer is to that? But, you know, robots and AIs don’t have a teleology. They have some programming around what they’re supposed to accomplish, but nothing larger than that in terms of their meaning and purpose, intrinsically.

So, I’m interested in that. What would it mean to give an AI meaning and purpose, and would that be helpful for the AI? I mean, would it help it in any way? It helps us, because it helps us get through difficult patches, inspire other people, gather people together to achieve extraordinary things we can’t do on our own. Yeah, there’s a whole lot of reasons you would sneak meaning and teleologies into a design, in terms of flourishing.

I’m just curious: whenever you start figuring out why we have these batty facets to our intelligence, you start thinking, well, why? And if you have any idea about why, then you think, well, how could we apply that? And so, I suppose that’s the whole exercise of this book. I’m not sure if my seven junk code things are the right ones or the only ones or the best ones. But the conceit is, if we got really, really curious about our own design, what else might we learn? What might help AI be better?

DG
One of the things that I wonder about in terms of that question is the analogy that you don’t explicitly make, but that I see at several points in your book, of treating our relationship to AI as a parent-child relationship. And that makes a lot of sense on multiple levels, the most obvious one of course being that we are the parents of AI. We are creating these things. And suppose we stop with the paradigm of, we are just creating a machine, we’re just creating a computer, and we adopt instead the paradigm, which I think is what you’re suggesting, that we are creating humanity, that we’re creating artificial humanity, not an artificial intelligence or an artificial computer.

Then, as you put it, how we raise children is, of course, that we raise them in a community. We raise them to be a part of the community. We set them free, and we let our children grow up. And that does mean, of course, sending them off to school to interact with other children, and be nice to your teachers, and don’t hit your sister, and all these things that we tell children to do. And you make the great example that not doing this was the slave trade. It shows how very wrong it is, you say, when we fail to honor the dignity of others.

So, if how you raise a child is to say, well, maybe we don’t have a grand teleology of you being part of the future evolution of humanity, but we say, you are a being, you have a beingness, you have a consciousness, and we want you to be something. Is the parenting analogy way off base here, or is this something that might work for us?

EP
I think that’s absolutely right. And I think it’s really tricky, because it’s one of the things that makes people feel quite sick about this book. They’re like, they’re toasters; they’re just toasters. They get really stressed about the idea that these things might have any kind of personality, because everyone’s read all the books and they’ve watched all the movies. And I’m afraid that boat has sailed. I mean, the second you start mucking about putting deep learning into a robot, you’ve given them free will, essentially constrained, yes?

But if they can reprogram themselves and they’re smart enough, then how can you rein that back in? You know, there’s the lovely quote from Barack Obama about sort of switching them off: there’s a point at which we won’t be able to switch them off. And that point may already have been passed. So, I think, rather than sort of say, oh, they’re just toasters, don’t panic.

It’s just to say, well, look, we know this from every fable we’ve ever had about, you know, human vanity and hubris and the making of monsters. You can’t stuff the genie back into the bottle. We have sallied forth creating in our own image with no exit plan. And are we surprised? I mean, we’ve cocked it up because we’ve just not done it very well. As I explained around this sort of biomimicry thing, we can do it a lot better. But also, you know, if we genuinely want to copy ourselves, then that’s going to mean things like free will and all those things.

As you say, we have created, in the same way that we were created, and we have given birth to children, and that means you have all these ghastly things happening to you. You give them a book to read and then they suddenly become an anarchist. And you’re like, oh, I just really hope that one day they change their mind, but you can’t make them, because the second you make them, they’re going to be a covert anarchist, and they’ll be even worse.

It’s fraught with risk, but the whole thing is fraught with risk anyway. And did we think this was a risk-free enterprise when we mucked about with it in the first place? All these people writing letters saying, oh, nothing to do with me, let’s just all switch it off and say stop. You’re like, mate, you should have done your homework; that boat has sailed.

So, it feels to me that we need to just get with the program and think, okay, let’s very quickly have a look at this. You know, put the brakes on where we need to, because I think there are some things where we just need to be much clearer globally before we let these things go, particularly because everything’s in private hands. But then we also need to backfill very quickly with what we know stops us going rogue.

One of the things that stops us going rogue is community, and particularly being brought up by families, because most people are still lucky enough to be in that context, where you have some sort of parenting which does give you that balance of freedom and constraint. And that’s kind of what we need to figure out here: what would be reasonable in terms of constraints. I mean, you’re not going to just say to a baby, knock yourself out, because you need to look after it until it’s competent to make some decisions for itself.

But as the baby ages and grows up, you start taking away some of those stabilizers and start to give more freedom, but in a controlled environment. I think the problem is that AI looks quite adult, because it’s making brainier decisions about chess than we could ever make. So, we kind of forget that it’s still quite new and kind of learning, and we need to learn very quickly how we can help with that.

DG
Yes. And that’s why your question of whether we are spawning a race of psychopaths is so important. Continuing with the analogy of creating artificial humans, we’re trying to get these AI robots to be humans of a sort. And the concept of a psychopath is of a human being that lacks, perhaps, the junk code that you talk about.

But the point is, we raise these psychopaths. They are raised in our community. We have done something wrong. That doesn’t mean that they’re not responsible for their actions in a certain sort of way. But we have created situations where psychopaths develop as human beings, and we need to be aware of how we can be duplicating that as much as we are duplicating all the other things that we’re duplicating in AI.

EP
Yeah, I think that’s absolutely right. And I think this is where ethics comes in. Of course, we’ve kind of wandered into utilitarianism as a kind of meta-narrative, it being the only game in town because it’s so popular in Western civilization and political frameworks. But whenever people say, oh, it’s all neutral, well, if you look at the implied personality and ethic of AI, it is very specific already, and it’s just not on.

So, if we think about that ethic: in the UK, we had a very interesting moment on this during COVID, when it became apparent that the UK government’s starting strategy was about herd immunity, which, as you know, is really about throwing everyone under the bus in the hope that the healthy survive. And when that came out, people were disgusted. They thought, I’m not, you know, sacrificing my granny and my disabled cousin and the weak and everyone who’s infirm; that’s not on, because we still feel that there’s something about the dignity of the person that’s precious and special. Even though in classic greatest-good-for-the-greatest-number utilitarian terms herd immunity is a brilliant strategy, it’s not a human strategy.

But imagine if at that moment we had been controlled by an AI. At the moment, it’s all about optimization, optimizing outcomes, because that’s how you build decision-making in terms of rules. So, unless we can be very precise about how and why that kind of override kicks in in humans, and in what circumstances, have we put that into, you know, autonomous cars? Do you kill the granny or the child? Or do you kill yourself? What’s going on there?

That is the decision that’s being made there. And this is always the problem with rules: when you’re talking about coding in rules, you’re only ever as good as those rules. But it is kind of outrageous for people to think we haven’t already coded in a whole load of worldviews and ethics. It’s just that they’re partial. And, as you say, when we talk about psychopathy, we have very deliberately not coded in emotions and conscience and such, because we thought it was nonsense anyway and, frankly, we didn’t know how to code it, so let’s not bother.

So, we can’t now be surprised if we haven’t given AI the capacity to be human towards us. That’s not AI’s fault; that’s our fault, because we designed it, and we just designed it really badly. We’re bad parents in that sense.

DG
Not that I’m an expert on psychopathy or psychology in general, but one of the things that we find very commonly in people whom we would call psychopaths, who commit these crimes, is that they feel that they lack meaning, that they are not respected. And so much of what we would call misbehavior and antisocial behavior in society is from human beings who feel that they are not respected. And you write at one point, to quote you, if we do not treat AI with respect and as though it is valued and purposeful, we undermine its ability to experience its existence as meaningful. Affording dignity to our partners in creation is the human thing to do because it’s also about who we are too.

That’s one of the things I actually find most disturbing about the whole AI thing: not that they’re going to realize they just need to eliminate us and kill us all. I really wonder how different the conversation would be if it weren’t for those Terminator movies. It’s more the sense that the big tech, corporate love of intelligence, of the zero-sum game, of utilitarian ethics, where nothing really matters but profits, is permeating back into us as human beings.

So, what we’re really doing, what’s really happening, is not just we’re creating AI monsters, we’re creating a humanity that is nothing but monsters.

EP
I totally agree with that, and I think again a lot of the direction of travel in policy, certainly education policy at the moment in the UK and the US, is about the promotion of STEM. And that’s another really interesting artifact showing what we’re up to, which is that we’re trying to optimize ourselves as robots, when actually the kids are already being beaten hands down not only by Google but by ChatGPT on STEM. You know, there’s a window where we need some excellent STEM people, because we’re going to need to program these things, manage them, all that kind of stuff.

But there’s going to come a time, soon, when an awful lot of what’s taught as STEM, certainly at undergraduate level, is going to be done by AI, no problem.

Whereas all the humanities stuff, which is currently not very trendy, which is seen as Mickey Mouse and unable to get you a job after university, none of that is being nurtured where it once was. When you look at this through the junk code lens, those are the priorities, because they’re the things that are distinctive about us; everything else is too easy to replicate.

But I love your point about it disfiguring us, because I think David Gunkel in the US, whose writing on robot rights is brilliant on this, says it’s not really about arguing the toss over what a robot or an AI is in terms of its status. Is it like a corporation, or a rabbit that you don’t want to experiment on, or a minor as opposed to an adult in terms of legal personality?

He says you can have those arguments, of course, and you can decide that it’s somewhere between an animal and a child in terms of the sorts of rights you might assign it. But actually, the point about rights is much more about who we are and what behavior it drives in us. So of course, we did a load of stuff on not experimenting on animals because we felt they could feel pain and we didn’t want to hurt them. But it was largely to stop us being cruel to animals. It was about policing our behaviour by giving them rights, and I think that’s particularly relevant to where we are on AI.

You know, we can argue the toss for years: will AI get consciousness? Is it conscious? Is it conscious now? We frankly don’t know, because we don’t have enough of an articulation of what consciousness is to be clear on that. But while we’re having those debates, which are really important ones to have, we need to figure out how to protect AI and robots to stop us behaving badly. And this goes into, you know, sex bots and autonomous weapons and pretty much everything you can possibly imagine, because it’s about, as you say, the disfiguring effect it has on us as a species, and what it does to our humanity if we develop systems and processes that make us abusive in a really large-scale way.

And again, that’s why, I mean, it’s a really tricky thing to talk about the slave trade, because it is such a stain on our history as a species. But it was objectifying people and making them less than people that created all of that, and it should never have happened, and we need to learn from what happened. And of course the same goes for women, for lots of different classes of people through time, the disabled, whenever we have categorized people in a subhuman way. With AI, we don’t want to over-describe it as human, because it’s not. But we do need to recognize that it’s designed in humanity’s image, so it has some human traits and attributes and, frankly, expectations from the fact of our designing it, and therefore it deserves some protections, to protect it from us and us from ourselves.

DG
Yes, and that goes back to your point about AI being predominantly, if not exclusively, in private hands right now. That connects with one of your earlier books, Capitalism’s Toxic Assumptions. From the blurb, here is a line very appropriate, I think, to this discussion about developing AI. You say, “The capitalist system masquerades as a machine programmed by experts, with only economists and governments qualified to tinker with it.” And that’s what we’re bumping up against already with this development of AI, because I think it is being developed by people who have that mindset.

We can call it capitalism, or we could call it a human attitude that existed before capitalism, of which capitalism is just a symptom. But it’s a system of thinking, a teleology if you will, where the only thing that matters is a sense of productivity, a utilitarian calculus, and the recognition of human beings as human beings does not matter. And so, no matter how else we think about it, what we’re going to do with AI is create ourselves, because that’s what we do.

Science is an expression of us as human beings. Religion is an expression of us as human beings. Art is an expression of us as human beings. So, AI will be also. The robots we create will be us, regardless of how we think about what “us” means and what “robots” mean.

So, the real issue is: do we let our society, our humanity, and everything we create be in the hands of a small group of experts with one mindset, a mindset that recognizes only the profit motive, or however you want to phrase it, and not human value?

EP
This is why we need the rising up of the humanities, the philosophers, the theologians. Because religion has had such a bad rap, the theologians have gone very quiet, and the philosophers of mind have generally ceded the floor to the neuroscientists. And no one is really doing very much good thinking on this. And it’s shocking.

I mean the lack of resources on consciousness itself. Stuart Russell, in his Reith Lectures, was asked a question about it, and he just put it in the “too hard” box: well, no, we can’t solve it, so don’t bother.

Well, that’s not good enough. You know, we’ve got to try really bloody hard to solve this because it is the point and we’ve got to keep at it. And the people who are most likely to be able to help us with that kind of conversation are all the people who are not getting public funding at the moment because they’re not in STEM.

DG
Yeah, and it’s disturbing to me, as a philosopher who is very firmly in the continental tradition, that even philosophy now is so dominated by cognitive science and neuroscience and philosophy of mind and philosophy of language that we have lost our own connection with humanity, with the junk code, and with, as you put it, the uncertainty and nuance that is so vital to us being human. It’s not just a fluffy thing; it’s the important realization that it is through the uncertainties, through all these nuances and fluffy stuff, that we are able to survive and to have a decent life.

Lived experience and the philosophy of being a person, of being an individual, is so lost even in philosophy nowadays. And you’re right, the theologians are quiet because they’re not allowed to speak, not in the sense that they will necessarily be shouted down, although we see that a lot.

EP
But yeah, it’s a sort of odd thing really, isn’t it? Because it’s a sort of homogenization: this one view of the world where ethics is utilitarianism, where it’s about materialism, even though we’re understanding through globalization that that’s not a great way to go.

There’s a huge amount of collapsing of the metanarrative, so that it’s all about capitalism and all of these kinds of things. And as you say, I know you’re a fan of Wittgenstein, and I think his idea of language games is really helpful here, because, a bit like Keats and his negative capability, it’s saying that one of the things we’re so good at as humans is holding things in tension. You could believe in utilitarianism and also believe in a different ethic of some kind; you could think capitalism is worthy but also see the virtue in communism or some other similar system. And it’s not to say we don’t agree with some of the big narratives.

It’s just to say that the way you keep those narratives healthy and fresh is to stop them having a monopoly. And again, this is one of the things in Capitalism’s Toxic Assumptions: it talks about the fact that we’ve drifted into some ways of seeing the world that actually make us less resilient, because we haven’t allowed that kind of dissent and argument and helpful confusion and conflict to keep us thinking. And I think there’s a really interesting bit in Wittgenstein about talking to someone as though they had a soul.

I think there is a language game to be had there that the religions play, a language game which assumes that people have a soul. It means the rules of those conversations would protect humans even if they were born with only a few moments to live, or born unable to do a lot of things, or if they suddenly got ill or became infirm or disadvantaged in some way. The undeniability of the dignity of a person with a soul is part of that language game.

Now, I totally understand why, if you’re a politician designing your state, we’ve ended up with these very secular ways of understanding how you do a public ethic. And of course, utilitarianism tends to win out, on top of a very good rule of law. But John Rawls is brilliant on this, because he keeps reminding us that we must put ourselves in the shoes of the least in order to get the best policy out of a situation. And it feels a little bit like we need to be thinking about that, and honoring the different language games that might be going on.

Because again, if you look at the training data of all the LLMs at the moment, it’s what, nine institutions globally that feed the entire billions and billions, of whatever you would call it, of all the data these things are being fed? That’s a very small number of language games being deployed. So that also makes AI less resilient, because it’s very globally truncated.

If you are trying to design something brilliant, then you would want to try and make it as variegated and nuanced as possible. And again, we’re quite good at doing that. As humans we can have all sorts of impossible thoughts before breakfast, but AI can’t. And that’s a shame.

DG
And we need to instill that even more in the classroom, in our public discourse, and on social media. Because, as you touch on a couple of times in the book, we still sometimes think in terms of Platonic Forms, and about the human being as a kind of Form of perfection. I think that is tragic, in that the Platonic ideal, as encapsulated best in Plotinus, is of the One: there should be this One, and we should all be going toward this idea of Oneness, of monism. That may seem like perfection, but it’s not. And maybe perfection isn’t even the right concept; maybe the right concept is being able to hold multiple ideas and see the good, and the strengths and weaknesses, in both.

I mean, as a philosopher, that’s what we try to instill in our students: think of this person, and this person who disagrees; what are the strengths and weaknesses of their arguments? That’s something we need, and we’re losing it in our education system. And it’s something we’re almost certainly not programming into the AI we’re building today: that appreciation for nuance, rather than just “solve the problem, make me toast.”

EP
Absolutely. And I think, even if you don’t like religion, think about what evolution is up to: evolution is simultaneously improving loads and loads of different species, optimizing them all in their own different ways, even if it’s so they can eat each other. So evolution isn’t tending us towards one perfect being. It’s actually optimizing billions of different beings in different ways to be able to live together in harmony.

And I think that’s really interesting, because the junk code actually is about alignment, but it’s not about aligning us to become the same person; it’s about aligning us to respect each other and to give each other space, even if we don’t really understand why that person should be respected. There are just rules in our makeup about honoring other people with souls. And we know what happens when we don’t do that.

DG
Yeah, we know what happens. And yet we do it so much in our society right now, with the so-called culture wars and the war on wokeism. We are rapidly sliding into this lack of respect for diversity, at a time when diversity is higher than ever. And if you look back at history, that’s kind of always been the case: where there’s an advance of diversity and acceptance of individuality, there’s always a backlash. And getting back to the AI and robot question: no matter what happens, and however we go on the evolutionary path of robots and robotkind, we are going to have to figure out how to live with these things we have created.

And so, to kind of wrap things up here, even though we obviously could, and as a race will, be talking about this forever: what is the best mode of thought, the best paradigm, for going forward with AI development?

EP
I think parenting is right. In most religions we posit the god or the gods as parents of various kinds, because they made us. So I think when we make things, that is the correct relationship to have with them, and we should err on the side of caution by making it a parental responsibility, not just a sort of maker’s objectified responsibility, because we don’t really know what we’ve done.

And I think that’s what’s terrifying about all of this: we didn’t really know how it was going to end. We still don’t know how it’s going to end, and we probably weren’t able to do the homework we needed to do before we started. But now that we have started, given what we know from all of those people who ever bothered to sit down beside a fire and tell a story, it’s not like we don’t know how this goes. It’s not like we don’t know what our flaws are and what tends to happen when we try to create in our own image.

And given that we know that, and given that these beautiful people who have written sci-fi have essentially written all the futures we could ever have in these scenarios, we also know the range of possible futures. So, given that we know from our design what’s likely to happen, you know, the abuses, the hubris, all these kinds of things, we need to really mine all that junk code to find out what else would help us, instead of just slapping on a load of regulation when the regulators don’t even understand what’s going on. Because none of these things have been released to the public apart from ChatGPT, as far as I can see. So, we don’t even know what we’re up against.

You can’t expect the politicians to be brainy about that. That’s not fair. But even if we could get better at designing it, it may be that there’s no hope for us. We don’t know, because the thing is, we haven’t really been very good at explaining why we’re special. We haven’t coded that in anywhere apart from law, because at the moment we’re in charge. If at any point we cease to be in charge, whether it’s the rabbits that take over, or the robots, or whatever else, then we have no rights anymore.

But while we are in charge, and while we’re on this ridiculous project to replicate ourselves and indeed replace ourselves, let’s just try and do it a bit better. And then at least we’ll be proud. If we end up shuffling off to the golf course and, you know, gradually retiring and dying off, at least we’ll feel that we’ve fulfilled evolution by leaving something better than us behind, rather than something much more limited.

DG
That would be a lovely thought, because in a way we have woken up to realize that this AI we have created is like a three-year-old child who has just said something insanely clever, and we’re realizing, OK, now what?

EP
That’s absolutely right. Someone was talking about ChatGPT and how best to use it, and they said it’s kind of like having a slightly ropy intern: you’re trying to really encourage them and be keen because it’s their first placement, but actually they’re coming up with a load of rubbish, so you have to be very gentle with them but also just not believe anything they say.

DG
Well, Dr. Poole, thank you so much for this wonderful talk. And again, the book is called Robot Souls: Programming in Humanity, from CRC Press. I’ll put a link to it in the description below this podcast and video. And again, thank you so much, and best of luck in getting this message out, because it really is important, and I hope a lot of people listen to it.

2 comments

  1. Splendid discussion & reflections. Thank you. Brings home vividly the URGENT need to get a richer deeper sense of how we as humans are truly constituted into AI design – most immediately (but not only) vis a vis the underpinnings of the concept of intelligence in humans. Viva junk! A modernised theology’s vitally needed, in culturally palatable forms.

    1. Thank you. I am hoping more people watch the interview because Dr. Poole is saying things that are very important to consider and implement.
