Meghan O'Gieblyn

Thu, 12 Oct 2023 10:00:00 -0000

Meghan O’Gieblyn: Will AI Destroy Humanity?

Transcript

Are robots going to destroy humanity?

Thanks to the rise and implementation of artificial intelligence (AI), the common sci-fi trope of a machine-perpetrated apocalypse has taken on new gravity in recent days. But is ChatGPT really going to rebel against humans, or even change things very much at all?

“We're at the point where we do have technologies that are incredibly powerful,” says writer and commentator Meghan O’Gieblyn. “They're able to do things that they weren't programmed to do.”

In this episode, Meghan discusses AI in great detail, and lays out what she believes to be the social, political, ethical, and even theological issues at stake as humanity learns to live with new technology.

Episode Transcript

Lee

[00:00:00] I'm Lee C. Camp and this is No Small Endeavor, exploring what it means to live a good life.

Meghan

The sort of the existential risk or the existential threat of AI has been something that's been speculated about for a long time.

Lee

That's Meghan O'Gieblyn, award-winning author and contributor to an array of news and opinion outlets and a highly respected commentator on artificial intelligence.

Meghan

Now we're at the point where we do have technologies that are incredibly powerful and incredibly opaque that have demonstrated emergent qualities, in the sense that they're able to do things that they weren't programmed to do.

Lee

Today, our discussion on artificial intelligence raises the question, what does it mean to be human? And what sorts of ethical, political, and religious issues arise in the midst of this new season of human history?

Meghan

It's important that enough people at least have basic knowledge to understand what's at stake.

Lee

All coming right up.[00:01:00]

I'm Lee C. Camp. This is No Small Endeavor, exploring what it means to live a good life.

Whether it's simply a personality trait of mine to worry, or whether it goes along with a vocation as an ethicist, I often find myself carrying about a heavy emotional load with regard to pressing social and political issues of the day. And in recent years, a sharply increasing load of cognitive weight, if you will, with regard to artificial intelligence.

What seems clear is that we are on the precipice, maybe already over the precipice, of a new world. And it may be a new world in which we could see a demise of the human species itself. Or so it is that many of the engineers working on artificial intelligence, too many of them to my mind, believe that the demise of the human race is possible.

In what I suspect will be only the first of a good number of episodes in which we will feature conversation around AI, [00:02:00] we feature today Meghan O'Gieblyn, acclaimed author who has thought deeply about the intersection of AI technology and what it means to be human, as well as the ways in which theological categories like redemption, the mysteries of God, wisdom, and the like surface in fascinating ways in the discourse around AI.

As always, you can share your thoughts with me by emailing lee@nosmallendeavor.com.

Meghan O'Gieblyn is an award-winning author, having received three Pushcart Prizes and having had her writing featured in the Best American Essays anthology. She writes for numerous outlets, including Harper's Magazine, The New Yorker, The Guardian, and The New York Times. And she writes Wired's advice column, Cloud Support.

Today, we're discussing her book, God, Human, Animal, Machine, subtitled, Technology, Metaphor, and the Search for Meaning.

Welcome, Meghan!

Meghan

Thanks so much for having me.

Lee

Yeah, so pleased to have you. And, um, I'm just [00:03:00] so grateful for your book. It has taught me a lot. And it's fascinating to me.

I actually was trained in my undergraduate days with a computer science degree. And so, for many years, for many years I actually coded in order to afford my teaching habit and my grad school habit.

Meghan

Oh, nice.

Lee

So, I thought about technology for a long time. And then, of course, I thought about theology for a long time.

But I don't know that I've ever encountered someone who's bringing the discipline of theology to the history of philosophy of science and technology the way you do, and I just find it super helpful, and I really thank you for the book.

For a long time, without digging into it, I have noticed the ease with which eschatological metaphors are used with regard to technology. But you point us to so many other theological categories and presumptions that were at play that are just super helpful.

I want to begin, though, at a different place, and namely, the fact that we are hearing from so many people about the [00:04:00] potential dangers of artificial intelligence. And just this week, for example, the Center for AI Safety issued this one single sentence, quote: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," end quote. And it was signed by all manner of top AI researchers, scientists, and executives in the field.

And this seems to be kind of the strongest of language, that this could lead to our extinction as a species. So I'd like to start there and just ask you, what do you make, what do you make of that?

Meghan

Yeah, it's funny. There was, you know, this is the, sort of the existential risk or the existential threat of AI has been something that's been speculated about for a long time, both in science fiction, I think, which is where most people get their familiarity with that idea of sort of a robot apocalypse. And then also within, you know, the field itself of [00:05:00] AI.

And up until recently, it seemed like those ideas were very much in the realm of theory or speculation, right? So, uh, Nick Bostrom, the Oxford philosopher, came up with this sort of famous scenario called a Paperclip Maximizer, and it was this idea that you could have, you know, some form of AI that was trained to do something relatively simple and benign, like, his example was to maximize the number of paperclips in its possession, right? So it seems like a neutral goal. But, you know, if the correct guardrails weren't on that, or if the objective was worded in a careless way, there's a way in which a simple goal like that could get really destructive, right? Where, you know, all of a sudden the AI starts to-- if it had, you know, unlimited power and access to resources, it could basically decide that all of humanity had to be destroyed so that it could create paperclip factories and, you know, just sort of take this goal to its logical [00:06:00] end.

And that, for a long time, has been used as sort of the classic example. And, you know, when people are talking about existential risk and AI, this idea that it's not just about, you know, robots developing sentience or, you know, conscious, sort of, malevolent goals or this evil desire to, to kill humans. It could just be a matter of creating this technology that we don't totally understand and that has powers and abilities that are difficult to anticipate in advance.

And so I think that's where a lot of the anxiety is coming from, because we are now-- I think Bostrom came up with this example, you know, maybe 15 years ago, and now we're at the point where we do have technologies that are incredibly powerful and incredibly opaque that have demonstrated emergent qualities in the sense that they're able to do things that they weren't programmed to do. They sort of unexpectedly evolved these capacities.

One example, you know, ChatGPT, there's been a lot about this algorithm in the news lately. It was designed [00:07:00] for a very simple task, which is to sort of predict the next word in a sentence, to predict language. But in the course of doing that, it also learned how to code because it turns out that a lot of its training data contained information about, you know, computer programming. It learned how to translate languages, it learned how to do some basic reasoning.

So there's a lot of things that we don't understand about the technologies and that the people who are building them also don't fully understand. And so I think that's where a lot of that fear comes from, about where this is going and what could possibly-- how it could possibly go wrong for all of us.
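
[Editor's note: a minimal, hypothetical sketch of the "predict the next word" objective described above. This is not how ChatGPT is actually built--GPT models are large neural networks trained on enormous corpora--but a toy bigram counter hints at what "learning to predict the next word from data" means. The tiny training text and names below are invented.]

```python
# Toy next-word predictor: count which word follows which in a tiny
# training text, then predict the most frequent continuation.
from collections import Counter, defaultdict

training_text = (
    "the model learns to predict the next word "
    "the model learns patterns from data "
    "the next word is chosen by probability"
).split()

# Map each word to a count of the words seen immediately after it.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation, if any."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

if __name__ == "__main__":
    print(predict_next("the"))      # a frequent continuation, e.g. 'model'
    print(predict_next("predict"))  # 'the'
```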

Lee

So one of the, I think, presumptions by some is that it seems that the development of such technology is seen as a sort of escape from traditional and/or conservative norms. But one of the things that stood out so counterintuitively in what I've heard a variety of people say, and it came out in several illustrations that you give in the book, [00:08:00] is that at one level there's a sort of potentially troubling social conservatism, in that you point to the way in which these algorithms rely on historical data. At one point you say, "their decisions often reflect the biases and prejudices that have long colored our social and political life."

Could you unpack that for us and tell us how that counterintuitive bit of observation comes to pass?

Meghan

Yeah. I think for a long time there was this ideal that AI and machine learning was going to be purely objective and you, you know, you had, you had this idea that it was going to sort of transcend all of these biases and flaws that, that we have as a culture and society.

And, you know, this was particularly true in conversations about-- there was a lot of excitement around, you know, 2016, 2017, about using deep learning systems in the justice system to make, you know, sentencing decisions and to decide who gets parole, things like [00:09:00] that.

And what they discovered is that, you know, far, far from the expectation, which is that these machines were going to be totally neutral and get around the problem of, you know, systemic racism and biased judges and all of this, it was actually the decisions that the algorithms were making were sort of recapitulating the trends that we see in the justice system, which is that, you know, Black defendants were given higher sentences than white defendants.

You know, all of these sort of biographical data that shouldn't be part of the decision was being, you know, absorbed by the algorithms and, you know, the, the, the systems were trained on past court decisions. So, you know, they ended up sort of just solidifying and reifying a lot of those problems that were already present in the justice system.

And this is true throughout, um, you know, different realms in which machine learning has been implemented. In financial institutions, you know, deciding who gets loans and who doesn't. Hiring decisions, trying to decide who [00:10:00] is going to be an attractive job candidate or not. There's been a lot of writing in the past few years about bias and algorithmic bias and how these systems are basically just reflecting back a lot of the problems in our society already.

And in a way I think that they're more dangerous because there's this, again, this veneer of objectivity because the machine made it and because it supposedly has access to more data than a human does. It seems as though there's more authority in those decisions, when in reality they're, you know, making the same sorts of errors that we've been making for a long time.
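
[Editor's note: a toy, invented illustration of the pattern described above--a model trained on historical decisions reproducing their disparities. Nothing here reflects any real dataset or deployed system; it is a sketch of the mechanism only.]

```python
# The "training data" is invented: past decisions in which group
# membership, an attribute that should be irrelevant, correlates with the
# recorded outcome. A majority-vote "model" then reproduces that disparity.
from collections import Counter

# (group, prior_record, historical_decision) -- invented for illustration.
historical_decisions = [
    ("A", "none", "deny"), ("A", "none", "deny"), ("A", "none", "grant"),
    ("A", "minor", "deny"),
    ("B", "none", "grant"), ("B", "none", "grant"), ("B", "none", "grant"),
    ("B", "minor", "grant"),
]

def predict(group: str, prior_record: str) -> str:
    """Predict by majority vote over matching historical cases."""
    matches = [decision for g, record, decision in historical_decisions
               if g == group and record == prior_record]
    return Counter(matches).most_common(1)[0][0]

if __name__ == "__main__":
    # Identical records, different group: the learned decision differs
    # because the historical decisions it imitates differed.
    print(predict("A", "none"))  # 'deny'
    print(predict("B", "none"))  # 'grant'
```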

Lee

There was a famous case I think you pointed to in 2016 with an AI chatbot by Microsoft that kind of illustrated this, I think.

Meghan

Oh, Tay?

Lee

Yes.

Meghan

The AI-- yeah, where they, uh, released... this, this was, yeah, one of the first chatbots that was sort of released into the wild, where Microsoft changed, uh, created this chatbot called Tay, and it was just supposed to learn-- it, they [00:11:00] released it on Twitter and it was just supposed to learn from how people were speaking on Twitter and it was supposed to sort of mirror back what users were posting about and, and the sorts of jokes that they made, et cetera, which, which seemed at the time... I mean, now it seems incredibly naive, but at the time, I think it seemed like an innocent enough idea. And within, I think, 48 hours, it had started posting hate speech and denying the Holocaust and sort of mirroring back just the worst tendencies that you see on that platform.

And yeah, that, that still is a case-- it was a very brief and early case, but it's one that's still referenced because I think it really distills the ways in which AI tends to reflect and mirror back a lot of the problems that we see, especially online and in these online spaces where they gain all of their training data.
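
[Editor's note: a toy sketch of the failure mode behind Tay--a bot that learns, unfiltered, from whatever users send it. The real system was far more sophisticated; the class and messages below are invented for illustration.]

```python
# A bot that keeps learning from raw user input can only mirror back
# what it is fed, including coordinated bad-faith input.
import random

class EchoLearningBot:
    def __init__(self) -> None:
        self.memory: list[str] = []       # everything users have ever said

    def learn(self, user_message: str) -> None:
        self.memory.append(user_message)  # no filtering, no moderation

    def reply(self) -> str:
        # Respond by replaying past user input: the bot is only as good,
        # or as bad, as what it has been fed.
        return random.choice(self.memory) if self.memory else "Hello!"

if __name__ == "__main__":
    bot = EchoLearningBot()
    for message in ["nice weather today", "tell me a joke", "<abusive post>"]:
        bot.learn(message)
    # Roughly one reply in three now repeats the abusive post verbatim.
    print(bot.reply())
```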

Lee

In one of your chapters, or a couple of your chapters, [00:12:00] you unpack, in the history of the rise of artificial intelligence, sort of the move toward artificial intelligence, the notion of algorithm and data on the one hand versus science and theory on the other hand. And you point to a 2008 article by Chris Anderson, editor of Wired Magazine, that was entitled, "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete." And I had never thought about it in these terms, but it seems very provocative.

Could you unpack that for us and explain, for those who are unfamiliar with this, what's going on, like I have been, could you kind of explain some of this to us?

Meghan

Yeah. So um, Chris Anderson published this essay in Wired, you know, shortly after-- I think it was largely in response to this incredible progress that Google had made with translation, um, in the twenty-teens, where, you know, they realized that there was-- the, the person who was in [00:13:00] charge of the Google Translate system that translated English into Mandarin boasted at one point that nobody on his team understood Chinese.

And the idea was that, you know, that all they did was just feed the model tons of English and Mandarin documents right next to each other and it was able to learn how to translate just based on that. And so there was no human knowledge that actually went into training the algorithm on what to pay attention to or, you know--

Lee

Grammar rules...

Meghan

no grammar rules, nothing about the structure of language.

It was all just pure data, and it was particularly the volume of data fed into the algorithm that allowed it to learn in that way. And I think Anderson in a way glimpsed, earlier than a lot of people, the potential of machine learning, where he said, you know, we don't need the scientific method anymore.

He really made this huge leap forward and said, since the Enlightenment basically, we've relied on this idea of science that is based on theory. You come up with a [00:14:00] hypothesis - a human comes up with a hypothesis. Um, and then you test it and see if it works.

And his conclusion was that now that we have these machine learning algorithms, we don't have to have theory anymore. We can just sort of train the models on data that we've picked up in the wild, let it sort of discern patterns and consistencies within that data, and let it, you know, let's see what sort of conclusions the algorithm comes up with. And to some extent, it's true, you know, where-- like the way in which something like ChatGPT works, there's a certain extent to which the people who made it don't really understand what it's paying attention to, how it knows how to predict language, like what sources it's drawing from.

There's a lot that is, you know, because these systems are black box models, there's a lot of mystery about what happens between the input and output just because the calculations and the feedback it's getting from the world is very complex. [00:15:00]

But they do have a tendency to be right in a lot of fields most of the time, right? And so-- at least it seems so in the early days, right? So, you know, there was a lot of studies done about how, you know, AI, you know, machine learning systems could predict cancer better than human doctors, you know, or that it could predict voting patterns. It could do all of these sort of really remarkable things.

But the problem there is that you don't really know why it's giving you a certain answer. And so there was a certain level of faith that you have to put in these algorithms and trust that the output they're giving you is correct. And that was sort of what, what Chris Anderson was calling attention to at the time.

And it did at the time seem like a very provocative argument, where, you know, he's saying, we don't need to understand how the world works anymore. It's enough to know that, you know, these systems which understand the world on a level that we can't are making certain predictions, and those predictions are very often right, so we should trust them.

Since then, I [00:16:00] think there's been a little bit more awareness of how often they're wrong. You know, there's been a lot of attention to how, you know, even something like ChatGPT is often just-- hallucinates or creates, you know, gives you a lot of false misinformation when you ask it a very simple question.

So I think that, you know, I, I think there's, there's this hope that, oh, we just need to build the models and make them better and, and more effective and eventually that problem will go away.

I don't think that that's really the case. I think there's a case to be made that those problems are, are endemic to the technology and that the bigger models actually tend to hallucinate a lot more.

Lee

Yeah.

Meghan

Yeah.

Lee

You're listening to No Small Endeavor and our conversation with Meghan O'Gieblyn.

I love hearing from you. Tell us what you're reading, who you're paying attention to, or send us feedback about today's episode. You can reach me at lee@nosmallendeavor.com. [00:17:00]

You can get show notes for this episode in your podcast app or wherever you listen. These notes include links to resources mentioned in the episode, as well as a PDF of my complete interview notes, and a full transcript.

We would be delighted if you'd tell your friends about No Small Endeavor and invite them to join us on the podcast, because that will help us extend the reach of the beauty, truth, and goodness we're seeking to sow in the world.

Coming up, Meghan and I continue to discuss the existential threat AI may pose to humanity, what the liberal arts, such as theology and ethics, have to contribute to the conversation around these new technologies, and what might still distinguish humans from machines.

There's, um, ways in which this points back to what you indicated earlier with, uh, talking about Nick Bostrom, where, uh, in 2003 you noted that he warned, there's no innate link [00:18:00] between intelligence and human values. A superintelligent system could have disastrous effects even if it had a neutral goal and lacked self-awareness.

Are there other ways in which this whole notion of dataism, as I think some people refer to it, as opposed to a quest for scientific understanding-- could you kind of point to the way in which that might be a sort of existential threat, or should we see it as an existential threat to the human endeavor?

Meghan

I mean, it's certainly a threat to the sort of ruling paradigm of knowledge and humanism that has been around since, you know, the Renaissance or the Enlightenment. I mean, this, this notion that we, that we need to understand the world, and that, you know, science is a part of our sort of quest to, to better understand things.

And, you know, I think that that's, that's something that is being slowly eroded as we're increasingly asked to put our faith in these opaque [00:19:00] technologies that nobody understands, right? It's almost a return, and I'm not the first to make this point, but it's, it's almost a return to sort of a pre-Enlightenment epistemology, where you have to sort of rely on these, you know, very mysterious runic forces like the stars or, you know, oracles, in order to reach understanding.

And I guess the question that I was interested in exploring in my book is like, well, where do human values fit into that? At what point do we have these completely mysterious opaque systems that are no longer reflecting our human values and what we want? And what are the trade-offs there? Because I think with any sort of technology that, you know, transcends our own abilities and powers, there's also trade-offs - we're relinquishing a lot of control, we're no longer creating meaning, or creating systems in which we can find meaning. And so that really was a starting point for me for the book, and thinking through those questions.

Lee

Yeah.

Meghan

Yeah.

Lee

Yeah.

So [00:20:00] my field's ethics, and I do a lot with kind of virtue ethics traditions. And then have thought a lot about the Enlightenment and the way in which Enlightenment has impacted the way we think about what ethics is, and the quest for a universal, objective ethics, and so forth.

And, and knowing that that story has not turned out so well, right? That the, the quest for a supposedly universal objective ethic, separated from any sort of notion of tradition has, has failed. And so I've always found it, just as an outsider, listening to people who talk about technology and who talk about AI, at least at the popular level, a sort of naivete and a sort of idealism that has shocked me.

You know, in the, in the sense of-- a lot of times, like, you'll hear, I won't call his name, but one famous thinker, famous in the science writing world, you know, he said, well, basically, all we have to give it is three simple rules and it will be fine. And I'm [00:21:00] thinking, this seems outlandish to me and ignorant about the sort of complexities that there are in having any sort of informed conversation about ethics, morality, human values, whatever kind of language we want to put around it, and how complicated that is.

And so, then reading your book, it's like, when you're looking not just at popular conversation, but you're looking at the history of philosophy of science or the history of various philosophical conversations about difficult problems like the mind, body, dualism, and so on, it's even much, much more complicated.

And so how do you have any sort of sense of hope, or do you, that we can come to some sort of shared commitments to guardrails or shared commitments to limits, so that this will not just simply overwhelm humankind?

Meghan

[00:22:00] Yeah. That's a really tough question. To be honest, I don't have a lot of optimism or hope about it.

But to the extent that I do, I think, you know, it's gonna rely on... yeah, humans coming together and trying to figure out how we want to regulate this technology. I think that's one of the toughest questions right now too, is like what sorts of values are being programmed into, into the AI, you know, because they are, they're, they're black boxes in a sense.

And that's where this, you know, dream of sort of universal ethics or objectivity comes from - this idea that somehow if we just, you know, give these systems enough data, they'll be able to see the world at this sort of higher level from the sort of Archimedean point, right, that we can't access as humans.

But we've already seen, you know, the, the ways in which in order to-- I guess one thing is that in order to release the models, you know, as products, they've had to be very, very highly fine-tuned. [00:23:00] And in order to do that, you need, you know, it's a process called reinforcement learning where you have to, you know, basically instill the models with values. And this is how you ensure that, you know, chatbots don't, you know, spew racist, sexist, you know, tirades.

And, and the big question now is like, well, who gets to decide what those values are?
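
[Editor's note: a heavily simplified, hypothetical sketch of the "instill the models with values" step Meghan describes. Real reinforcement learning from human feedback trains a reward model on human preference data and then updates the language model itself; the stand-in below only selects the best of several candidate replies under a crude hand-written reward, with no weights updated, and all term lists and candidates are invented.]

```python
# Best-of-n selection under a crude "reward model" -- a stand-in for the
# idea of steering outputs toward chosen values, not real RLHF.
DISCOURAGED_TERMS = {"insult", "slur"}          # stand-in for enforced values
ENCOURAGED_TERMS = {"help", "please", "sorry"}

def reward(reply: str) -> float:
    """Crude reward model: penalize discouraged terms, reward encouraged ones."""
    words = set(reply.lower().split())
    return len(words & ENCOURAGED_TERMS) - 10.0 * len(words & DISCOURAGED_TERMS)

def choose_reply(candidates: list[str]) -> str:
    """Return the candidate the reward model scores highest (best-of-n)."""
    return max(candidates, key=reward)

if __name__ == "__main__":
    candidates = [
        "that question is an insult",
        "happy to help with that question",
    ]
    print(choose_reply(candidates))  # the polite candidate wins
```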

Lee

Yeah.

Meghan

And, you know, there's been a lot of talk about, well, you know, we're going to have these, these major companies that are creating sort of the base models.

You know, I think it's only a handful of corporations right now that have the computational resources to create technology that is this powerful, but once those algorithms are in the wild and people can sort of, you know, fine-tune them how they want, we're going to have a lot of competing-- basically, the way I see it is we're going to sort of just supercharge the political fractures that we already see on the internet.

And, you know, we've, we've seen what, like, social media has done to those, you know, to the, the capacity for public dialogue and discourse [00:24:00] and the way it's polarized a lot of the country. And I think once you throw an even more powerful technology, AI, into that mix, you know, I think, if anything, it's just going to sort of supercharge that.

Lee

Yeah.

Meghan

Yeah. I guess the other thing I'll say about that is that I, you know, there's been a lot of talk about how, like, AI could potentially solve problems for us. Like, oh, AI is going to solve climate change for us, for example. You know, OpenAI refers to this, this, you know, we, we can't give up AI research because there's so many upsides. It's going to solve all of these problems for us.

But, you know, even, like, scientific problems, like for me, climate change is, it's a political problem. We know what we have to do to, to, to solve it. We just don't have the political will.

And so I think that that's, that's a dangerous place to be where we're, you know, thinking about offshoring or relinquishing these problems that are problems of human values and human meaning and things that we have a stake in as human [00:25:00] beings, and trying to look to some sort of godlike technology to solve them for us, which I, I don't-- I guess I, I guess my answer is I don't really have a lot of optimism about that potential.

Yeah.

Lee

So, uh, let me move to a more explicit conversation about the ways in which metaphor and theology are at work in your book, um, and you've already begun to allude to this, in the way in which we've kind of given a sort of transcendent power to the black box, a sort of divine unquestioning of the will of the black box and so forth.

But talk to us a little bit more about how metaphor and theology has been central in your thinking about technology, especially when the conventional wisdom, it seems, would have it that things like poetry, metaphor, the liberal arts, theology, [00:26:00] or the questions raised by theology have nothing to do, really, with science and technology.

Meghan

Yeah. I mean, the book really grew out of my realization, in writing tech criticism and reading a great deal of it, that a lot of religious metaphors were creeping into discussions about technology.

So this could be from, you know, anything from speaking of black box algorithms as god-like or divine, which was something that was, was happening a lot around 2016, 2017.

Or you know, people talking about this, the possibility of a technological singularity, this moment where we were going to basically transcend our human forms and, and take on, you know, some sort of post-human existence, which seemed to me very similar to-- there's a lot of comparisons to the Christian resurrection and the afterlife.

And, you know, so that was, I guess, the seed of the book. And I was curious about where, where did those ideas come from? How did these religious ideas get [00:27:00] into, you know, this thinking about science and technology, particularly among people who claim to have no religious affiliation whatsoever, who were deferring to that kind of language?

And, you know, through the process of researching the book, I really came to appreciate in a new way the extent to which so much of the hard science and technological thinking rested on metaphors. And one of the most foundational metaphors, and one that I talked a lot about in the book, is this idea that the mind is a computer, right?

That this is, you know, where we get the idea of neural networks, which is one of the first forms of artificial intelligence. It was supposed to be loosely brain-- loosely based on the neural networks of the human brain, that, you know, there was, there was some sort of parallel between these systems and the brain. And, you know, some of the earliest people working on AI called computers electronic brains.

There's some truth to that metaphor, and it's been incredibly useful to the development of AI. [00:28:00] But I'm also interested in how those metaphors reflect back on us as humans and how we start to think about our own brains as computers, right? And we defer to this language all the time in everyday speech often without really recognizing that we're using metaphors.

You know, when I say, oh, I have to like process new information. That's, that's a, that's a term that's taken from computation that, that wasn't around until we had computers, and we didn't used to think about our brains as processing information, right? And that's actually not in fact how our brains work at all.

So, um, so yeah, I was interested in how, you know, there's this sort of weird dance going on where we see ourselves through the lens of the technologies that we're creating, and then we sort of project humanity onto, onto machines also through anthropomorphizing them. And there's sort of this feedback loop taking place between us and our tools.[00:29:00]

Lee

Another category, theological category, that you point to quite a bit is your grappling with the notion of free will, especially vis-a-vis your history in grappling with Calvinism, which you ultimately rejected. Uh, but would you unpack that for us some?

Meghan

Yeah, I mean, I first became really fascinated with this idea of free will when I was studying theology.

I was at a conservative Bible college and had taken a few classes with professors who were very much taken with, like, New Calvinism. This was the early 2000s, when, you know, that idea was really sort of sweeping evangelicalism.

And I became really unsettled by this idea of predestination, you know, and the idea that certain people were chosen to be saved and that you really-- personal belief wasn't something that you had control over. That, you know, you were either elect or you weren't and there was nothing [00:30:00] you could do about it.

And that was sort of, it wasn't the, the full extent of my, my loss of faith, but it, it was the seed, it was one of the seeds of it. And, you know, I ended up leaving Christianity and I, I was reading a lot of, you know, New Atheists at the time, Dawkins and, uh, Christopher Hitchens and, you know, realized that like, oh wow, a lot of, you know, people in the secular world also don't believe that we have free will. Um, and that this isn't something that is distinctive of, of Christianity. It was sort of like this problem that I thought was a theological problem was actually much larger.

And, you know, that's a question that comes up a lot, um, in discussions about technology too. I think that, you know, because we have these technologies that are to some extent deterministic, um, that we think about, you know, input, output and think about, you know, the ways in which a lot of things that we think are complex, spontaneous, creative, you know, our, our own human capacities, which seem to us very free, are actually things [00:31:00] that can be programmed into machines.

And, you know, you see this, especially with this latest iteration of generative AI, where, you know, all of this really incredible output, you know, images and animation and language, all of these things that we've considered really crucial to our free thought as humans, turn out to be something that you can program or something that sort of arises from these stochastic processes.

And, so yeah, that's, that's, that was also sort of a starting point for the book, is thinking through how those, the questions about technology and are we just machines, how that's reflected in religion and vice versa.

Lee

Yeah. Yeah. Thank you.

We're going to take a short break, but coming right up: if the human brain is conceived as a computer, what is the role of the body? Plus more discussion around the all-important [00:32:00] question surrounding this topic: what does make us human?

One question I've asked myself a number of times that I'm wondering-- and again, I'm new to coming to this field that you've been thinking about for years, but, you know, from a virtue ethics perspective, things that we count as indispensable to living a sort of life that we would call a flourishing human life or a life worth living, so traits like courage or compassion or justice as fairness, all of those sorts of habits or dispositions are inseparable from bodily experience.

Because there is like, courage makes no-- the whole notion of courage is grounded in the notion of fear and how one rightfully or not navigates fear, right? [00:33:00] Or the notion of compassion is by definition grounded in the capacity to relate in some way to the pain of another human being. Justice as fairness is similar.

So it's always confused me about how we could presume that, if we think of intelligence as somehow related to that sort of capacity, that sort of practical capacities, it doesn't seem to me to make any sense of how we can ever have a notion of virtuous intelligence that is thus disembodied.

That the body seems to be, not a tangential fact about existence, but is central to what it means to live a virtuous life. But thoughts on that, or pushback on that, or folks who have thought about that at some length?

Meghan

That's fascinating. Um, and yeah, I mean, I completely agree with that.

I, I, I think it's interesting that there's, it seems as though there's been a lot more focus lately in [00:34:00] psychology on the role of the body, right? And understanding sort of how, um, things like emotional intelligence are part of intelligence, and a lot of that stems from yeah, our, our embodied existence in the world, the sort of biological experience of fear or anger or, or what have you. Um, and that, that's happening sim-- simultaneous to this effort to create these completely disembodied, very intellectual machines.

And I think that goes back to, sort of, that Nick Bostrom quote that you mentioned, which is like, there's a possibility of sort of creating these machines that can understand the world on a very high level, but it's a very purely abstract intellectual level. And even though they come from sort of guidelines or rules that we've given them, they're not going to be the same as the way that we experience them as embodied creatures.

You know, I wrote a little bit about embodiment in the book. It's something I've been thinking a lot more about since I finished. But, you know, there was an effort, it's interesting, in like the '90s, uh, when Rodney Brooks was at MIT, there was a big push for embodied [00:35:00] AI, and this idea that, you know, in order to sort of create intelligence, you've got to create robots that can interact with the world, that have some sort of sense of their environment, that have feedback with the environment.

And there was this idea at the time that, you know, if you created robots that were able to walk and interact with humans and even have facial expression, that eventually consciousness was going to sort of emerge from that process, from that interaction with the world.

And it didn't get very far at the time, but you know, I think it's interesting now, we've totally moved from that idea back to this sort of disembodied algorithm where it's like, okay, we just have these huge brains that we're going to feed tons and tons of data to. And to me, there's sort of, uh, a hollowness or an emptiness that I find to a lot of the output.

And I see this, especially with the algorithms that produce language. You know, chatbots. There's something-- and I can never quite put my finger on what it [00:36:00] is without sort of deferring to mystical ideas about the soul or the spirit or something, but there's something that just seems really empty and hollow about that, the language that AI produces.

And I was thinking about that in terms of the role of embodiment, you know, like all of human language is really built on metaphors that we draw from sensory experience in the world, right? So when I say, like, you know, she's a very warm person, or she's a very cold person, that relies on my knowledge of what it feels like to, you know, be warm or cold.

And even this goes down to, you know, George Lakoff and Mark Johnson did work on this in the '90s, on, like, the way in which language is built on sensory metaphors. Even saying something as simple as like, the future is ahead of us or the past is behind us, that relies on my knowledge of being sort of immersed in a spatial environment and being able to move around.

And you can teach AI statistically how to predict the next word in a sentence, but the output that it's creating has none of that more sensory bodily awareness behind it. And to me, that's really what makes a lot of the [00:37:00] output strike me as not quite as human as I would like it to be.

Lee

Yeah. I can see-- I just found as you were describing that, sort of two different, completely different reactions.

One is the, sort of, ruminating on, given that lack of embodied observation or capacity, therefore, maybe all of this stuff will not come to the sort of terrible apocalyptic visions that we sometimes imagine. And it might not be that big a deal.

The other-- and the other was, oh my, this is all the worst. Right? Because of the, the utter lack of capacity for whatever this intelligence is to truly relate to, what, some of the most basic things that make us human.

Meghan

Yeah. And I mean, it's, it's possible that both are true. I mean, I've, I vacillate a lot between those two things too, in thinking, yeah, a lot of the, you know, we're in a hype cycle right now. A lot of the things that we're, we're hearing about AI, about it's sort of, you know, it's going to become [00:38:00] superhuman, it's going to, uh, you know, have all the capacities that we have as humans, that that's very far-fetched and the limitations are deeper and more pervasive than we think.

But yeah, I think there's also the possibility - and this is the darker one - is that those, you know, the, the power is still there and the impact it's going to have on society despite those limitations, and that, you know, I think a lot of the danger in AI is precisely the fact that it will have a large impact on the world, despite the fact that it lacks a lot of what we value in ourselves as humans, which is, you know, the ability to empathize, to sort of think intuitively, to understand what it means to be an embodied person in the world.

Lee

Yeah, thank you. So in this last one or two questions, I wanted to turn, if I may, to... there's a passage somewhere in the middle of your book where you talk about your inability to talk about the things you're talking about without reference to the subjective self. That is, the use of [00:39:00] "I," the word "I," the pronoun "I."

I wanted to unpack that just a little bit with you, if I may. And with much of modernity, and even part of modernity, right, in certain ways, there's this quest to have some sort of view from nowhere, or this sort of view in which we can see things the way we presuppose God sees things.

And that, in many ways, the modern world, especially the postmodern world, has taught us that that's a futile quest and that there is no-- and even now quantum physics is teaching, quantum mechanics is teaching us, right, that there's, there's no observation of anything apart from the subjective observing self.

And thus, it's quite natural then, that as a writer about these issues, you would want to and need to come back to the, to the notion of "I." Uh, but did you want to unpack that for us just a little bit?

Meghan

Yeah. I've always had a lot of ambivalence about the, the subjective point of view, because, you know, I'm, on one hand I'm a personal [00:40:00] essayist and I've always written about the world through the lens of the "I." It's an approach to writing that feels very natural to me, but that I also feel a little bit of ambivalence about because it's, it's so, um... I've noticed that personal writing is not taken as seriously, particularly if you're writing a, you know, a book about science and technology, it seemed like... you know, I, I didn't want the, the questions I was raising to be reduced to sort of my own idiosyncratic background, my upbringing, my, my, you know, especially my Christian background.

And I, you know, actually the first draft I tried to write of the book was without my personal perspective in it at all. I wanted it to be authoritative. Again, this sort of view from nowhere. And very quickly discovered that that wasn't possible for me, that I got-- that the questions I was exploring were just far too abstract and I kept continually losing track of why I was interested in them. What is at stake? Right?

And that for me is really my, my anchor, I think, as a writer, is like, what is-- what in the real world, what in my experience [00:41:00] am I trying to figure out here? And it was amazing - once I introduced the "I" back into the book, everything sort of snapped into clarity, where I thought, oh no, this is, this is why I wanted to write this book. This is why this is important.

And it helped me think, even on an objective level, more clearly, because again, I had a rooted position in a time and place, an embodied, you know, presence in the book. And, you know, it occurred to me once I did that, that a lot of the problems I was exploring about, you know, consciousness and technology, and to some extent there's, you know, a chapter about physics too... that those were also about this tension between the subjective and objective point of view, right? This is, basically, where the hard problem of consciousness comes from. We understand the brain from the third person point of view. We understand, you know, a lot of its functionality, how it works.

But there's this subjective experience, you know, the, the sort of, you know, our first person experience of the world. Qualia - the sounds, sights, smells that we have that just, there's no way to account for how that arises from the mechanics of the [00:42:00] brain.

And again, you mentioned the problem in physics, right? The observer phenomenon where, you know, the, the, the results of an experiment, which should be objective, change depending on who's, who's looking and when.

And it occurred to me that, yeah, a lot of what we're doing in trying to create super intelligence through AI is trying to create this god-like third person perspective, this objective view from nowhere, um, that we've traditionally sought in religion.

And, you know, I, I talk a lot about Hannah Arendt in the book, and she had this idea, she called it the Archimedean point, right? This extent to which humans were always trying to sort of transcend - not just the first person point of view of the individual, but that of humanity as a species - that we're trying to sort of transcend our limitations and sort of try to see the world, through science and technology, from an objective point of view.

And her view, and what I've come to become convinced of too, is that you really lose something essential when you do that. On one hand, it's this great gift that we're able to do this as humans, compared [00:43:00] to other species. We're able to detach from the limitations of our first person point of view or even the point of view of our species, and consider the cosmos as a whole, consider the much larger world.

But we also, you know, there's an extent to which you do that too much. You lose what makes us human and you lose those values and the distinctive point of view that gives us meaning. And I think that's why a lot of what we're pursuing right now in AI seems like it's, it's sort of this failed quest that's going to create things that are completely meaningless to us.

Again, because once you get a little bit too far from that point of view, you know, it, it doesn't have anything-- there's, there's nothing at stake for us as humans in that output. Yeah.

Lee

So, if you'll allow me then to ask one more question about your own "I," I, I look at this work of yours and you're raising so many disconcerting questions.

I wonder if you could, uh, perhaps point us toward what's keeping [00:44:00] you somewhat grounded. I can imagine that someone might, might fall into a fetal position grappling with all of these questions that you're grappling with so well.

Meghan

I fall into a fetal position about once a month, I think, so I don't know that I've avoided that.

I mean, I guess the one thing that has given me hope is the human commentary and criticism and concern that has arisen around these technologies, which wasn't present when I was writing the book.

I think, you know, these conversations were happening in a very small, sort of cloistered environment of researchers and experts. And now I think there's, there's, you know, just because there's been a lot of public alarm about the technologies, and also a lot of excitement about them, I think that there's been a much larger sense of public awareness about the risks, and, you know, what we want and what we have to decide as, as humans about, you know, how we're gonna move forward with this [00:45:00] technology.

I think my fear, particularly with the way in which alignment research is happening-- so alignment is basically this question of how do we ensure that AI is aligned with human values. And there's a very strong push in the world of corporate AI to automate alignment research, which is to basically let AI decide how to best align AI with human values.

Which sounds absurd, but there's been a lot written about this.

Lee

And terrifying.

Meghan

OpenAI. Yeah, terrifying.

So, I mean, to me, the, the counterpoint to that is to get as many humans involved in this conversation as possible and to insist on, you know, allowing our voices to be heard and, and that we make those decisions for ourselves.

So, I guess that's, that's, um, when I'm not in the fetal position, those are the things that, that get me out.

Lee

Yeah. Last question: what would you suggest for those who are lay people in this area, who are, uh, you know, going about wanting to live their lives... what would you wish that folks who are not paying [00:46:00] attention to this conversation would do, or would not do?

Meghan

I think the-- it's important to remember that these technologies are not inevitable or predetermined.

I think that that is the narrative that I find most pernicious, is there's sort of this assumption that, well, this is the future. That they're here, we can't do anything about it.

There's a lot of decisions that still need to be made in terms of what we're willing to use them for, whether we're willing to use them at all. So I would hope that, that people are, at least-- I mean, I understand firsthand how it's exhausting to be immersed in this news cycle, and to be reading about them and keeping up with everything is very overwhelming. But I think that it's important that enough people at least have basic knowledge to understand what's at stake, and, you know, what, what they're willing to forfeit or adapt to when it comes to, to AI.

Lee

[00:47:00] Been talking to Meghan O'Gieblyn, author of God, Human, Animal, Machine, subtitled Technology, Metaphor, and the Search for Meaning.

Meghan, thanks so much for your time. Thanks for the wonderful book and grateful for your sharing with us today.

Meghan

Oh, thanks so much for having me. It was really great.

Lee

You've been listening to No Small Endeavor and our interview with award-winning author Meghan O'Gieblyn on her book God, Human, Animal, Machine.

We gratefully acknowledge the support of Lilly Endowment Incorporated, a private philanthropic foundation supporting the causes of community development, education, and religion.

And the support of the John Templeton Foundation, whose vision is to become a global catalyst for discoveries that contribute to human flourishing.

Our [00:48:00] thanks to all the stellar team that makes this show possible. Christie Bragg, Jakob Lewis, Sophie Byard, Tom Anderson, Kate Hays, Mary Eveleen Brown, Cariad Harmon, Jason Sheesley, Ellis Osburn, and Tim Lauer.

Thanks for listening, and let's keep exploring what it means to live a good life, together. No Small Endeavor is a production of PRX, Tokens Media, LLC, and Great Feeling Studios.