What I Think About AI
No surprise here
I had a start on this letter last week, but when Monday morning rolled around and it wasn’t finished, I decided to delay it for a week. I’m still not sure that it’s where I want it to be, but that’s the humility involved in writing. You write what you know, knowing you never know enough.
If you’ve been paying attention to the news recently, you know everyone is talking about AI. If you listened to Ross Douthat’s most recent interview on Interesting Times, with Daniel Kokotajlo, the executive director of the A.I. Futures Project and a former researcher at OpenAI, you know we’re preparing for human extinction within a decade (if we’re lucky). I’ve just started listening to the debate about AI that Bari Weiss moderated and posted on her podcast, and I think it’s going to be a rich and animated conversation about whether the benefits of this new technology will outweigh its costs. (I have an opinion about that question, and it’s going to be no secret here, as you keep reading.)
A year ago, I was doing research on AI and the liberal arts as an assignment for the alumni magazine of Wheaton College. That piece was my first introduction to large language models and their developing capabilities. I had never used ChatGPT before that assignment (and I’ve used it perhaps three times since, mostly as a joke). As I’ve written before, this Substack is written without the help of AI, which might be the dumbest professional decision, given that Axios is telling some of its staff (though not its reporters): "You are committing career suicide if you're not aggressively experimenting with AI." I continue to think that the best work I can offer to my readership is deeply human work. And this was my conviction before AI hallucinated a list of great summer reads that somehow ran in two major newspapers.
There is one line of thinking regarding this technology that goes something like this: “At every technological inflection point, people have declared it to be the end of the world. They were wrong then, at the time of the printing press and the electric lightbulb and the internet, and they are wrong now.” Certainly, my own children don’t appreciate all the dangers I might foresee, as they experiment with AI. They simply think I’m old and out of touch.
I recently had a conversation with one of my teenage sons about the list of the top twenty Christian books that ChatGPT generated for him when he prompted it. He read the list to me, and I could agree, if somewhat begrudgingly, that the list was pretty good. There were familiar titles from C.S. Lewis and Frederick Buechner. There were older classics like Pilgrim’s Progress.
“You’d be helped by reading any book on that list,” I said. But before he got the impression that I was condoning the use of AI for every question that popped into his head, I asked him to consider what other means he might have used to generate a similar list.
My son scratched his head, a little puzzled.
“Imagine that you could have asked someone!” I said. “Imagine you could have put this question to one of your parents, to your youth pastor, to one of your Bible teachers at school. You probably wouldn’t have just gotten a list of the best books to read, but you might also have heard some stories along the way, of when they read the book, of why it was meaningful in that particular season of their life, of how things changed as a result.”
“ChatGPT gave you knowledge, but those conversations—and those relationships—might have given you wisdom.” At this final point, I’m quite sure I got an eye roll.
“The list was pretty good,” he said.
If I’m going to be honest, it’s astonishing to me how naively we tend to think about AI these days. For one, people don’t often consider the meaningful differences between “information” and “truth” and “wisdom.” And the questions we’re asking are so absurdly simplistic. Will this help me work faster? Will I be more productive? More efficient? These are always the questions put to new technologies, and perhaps we can say that they served us well when it came to the cotton gin and the steam engine and the motorized car.
But here’s the thing with technological innovation: even in the cases I’ve mentioned, we’ve lost, even as we’ve gained. We got the car—and then we got the suburbs and garages and isolation and loneliness. We got the internet—and then we got alternative facts and increased political polarization and virtual bullying and 24/7 work. Truthfully, I’m not all that sure that my life is substantially better now than before I left for college without a screaming demon of distraction in my pocket at all times.
I have to hand it to the Amish that they at least have considered various technologies in relation to their communal values. Years ago, I read a book by Eric Brende called Better Off: Flipping the Switch on Technology. Brende, an MIT graduate, and his wife went off the grid and lived among the Amish for 18 months, then wrote about the experience. As I remember it, he decided to stay with the Amish after that experiment ended, and with a little bit of internet digging this morning as I write, it seems Brende now lives in Saint Louis and has remained mostly off the grid.
The questions Brende (and others) have asked are much more interesting to me as a Christian. How will a new technology form (or deform) me? What role should work play in my life? What is meaningful work? How can I learn to want less materially, so I can get off the treadmill of working more to buy more to spend more? What information is most useful to me? And when is too much information counterproductive to the goal of growing in wisdom?
What skills will AI technology take from me that I need to practice in order to flourish as a human and grow as a Christian?
In the debate about AI, what shocks me most, quite honestly, is how contented people are to hand over their brain function to machines. As someone who is companioning her mother into the fog of dementia, I want nothing less than to alleviate stress on my brain now. To stretch my brain now—to exert it, to strain it—is to strengthen it for tomorrow. All the brain research emphasizes this. Researchers tell us to learn a foreign language, play an instrument, and read fiction. As we engage cognitive challenge, we protect ourselves from cognitive decline.
And yet, just as the industrial revolution gave us labor without back-breaking effort, so now the AI revolution will give us cognitive capacity without strain. But is this really what we should learn to want? Perhaps we need only to consider how much we have needed physical work for physical health (oh, the irony that CrossFit mimics the work we once did in the fields) to see that the promise of AI may actually be a threat, especially to our soul formation.
If we have learned to despise the least physical effort (because who can even endure the effort to find the remote and turn off the light?), we will learn to despise all cognitive effort. And this can only mean that we will find it too taxing to practice many of the habits that help us grow as disciples of Jesus. We won’t read the Bible through in a year. We will put down seminal books like Charles Taylor’s A Secular Age—because the print is small on its nearly 900 pages. We will lose our ability to learn what we don’t yet know, and we will be lulled into thinking that learning simply means having different information instead of being transformed.
AI will cultivate the vice of acedia on a massive scale—and this is just one of the reasons I’ll be okay to be left behind, trying to think and write and create at a distinctly human pace.
If this resistance is a privilege, that won’t stop me from exercising it.