Genre— Non-technical AI speculation and projection, discussion of literature in general.
How EA?— Not EA but very AI
In his diary, Franz Kafka wrote about a mountain. In his image, he wavered alone at the top while he watched stronger, more collaborative people winding their way up below him, slow and steady. In my memory of this passage, the mountain was a metaphor for literary achievement and Kafka was literally a fly, wobbling and short of energy. Let’s pretend this is the case.
My slightly twisted memory of the passage came up when I was fearing a particular consequence of the development of artificial intelligence: the likelihood that it will eclipse our literary and philosophical ability. As AI develops, we can expect this mountain of literary achievement to grow, while Kafka, and the rest of humanity, remain at the same height. It’s only occurring to me now, but maybe the constitutionally stronger people below Kafka represented his fear of succession, of others catching up with and overtaking him. Maybe they can represent AI too.
I’ll argue that literature matters for our humanity, that AI could genuinely succeed us in these areas, and that something about that act of succession is profoundly damaging.
I- Literature, philosophy and humanity
The heights of literature (and philosophy) represent idealised human thinking for me. Language is ubiquitous and constant, and it is precious that with it we can change people’s minds, give them experiences of beauty, and make them better, or worse.
Thoughts can seem insubstantial when you’re on your own. Great literature and, more rarely, great philosophy give them weight. When I imagine myself on Kafka’s mountain, I see the figures above me as solid, and those below as transparent: when I look into my own work and the work of those below, I see the author and precisely what they are trying to do.
The possibility of greatness in writing, in giving thought shape, is key to my feeling solid in the world. Not, I realise, because I might become great at writing myself, but because others have, and therefore there is a ceiling to thought which is in principle reachable, and not too far away. They are humans like me, after all.
II- Artificial Intelligence can surpass us.
The idea that AI could surpass us, especially in literature or the arts, is often met with stubborn resistance. It is frequently those who, like me, care about writing and thinking who call ChatGPT a ‘stochastic parrot’, a creator of senseless bricolage. Though ChatGPT does produce plausible answers, they admit, it cannot be creative. They hold that there is still some unique human capability which the AI doesn’t have, and never will. They know they won’t be surpassed.
But we should not be confident of this. We can point out chatbots’ limitations, but the fact is that GPT-4 can answer more questions, with more accuracy, than any human. It writes faster than us, and better than most of us. It can pass law exams. Further— AI has only developed these skills in the past few years. We may have reached the limit of AI’s writing and thinking potential, but there is no evidence for this yet. It is at least plausible that the continued improvement of these systems will lead to them surpassing our ability, even in literature. That is unless there is some clear theoretical limit on their skills. I’ll consider three.
One potential limit to AI’s literary achievement is that literature has no singular goal. The goals of literature are amorphous, constantly changing, and decided by a culture made up of influential humans. Books can be critiqued for their prose style, their contents, their political positioning, or even whether their characters are relatable. How is an AI to learn what we want, when we want so much at once?
The question seems difficult, but it is one that every writer has to confront. Although there is no unified goal to literature, there is a range of metrics which are generally important: a story that aims to compel us through a narrative can be better or worse at doing so, a book which comments on its time can be insightful or trite, and a passage that wants to let you into someone’s head can be believable or wrong. The moment there is a goal, there is a task. When there is a task, a future AI will likely complete the task better than us.
Perhaps future human literature will attempt to twist and convulse itself, become more experimental and less compelling, doing what it can to hide its implicit aims from the machines that are watching and learning. But who wants to read this?
Another candidate for a hard limit is the fact that AI writers won’t have a biography. As literature stands, the writer is rarely separated from the work. We make celebrities of authors, consume their life stories, and ask them questions about areas far from the subjects of their books. It is hard to see how a book written by an AI could compete in this market. But how big will this market be, if AI is creating works more compelling, more insightful, more interesting than the work of humans?
Whatever it is that makes humans great at literature— their ability as psychologists, their memory, their experiences, their style and natural turn of phrase, their candour, their fearlessness— nothing is here that in principle cannot be learned. And if there is something that the AI can’t learn, then does it make sense to say that it is communicated in literature?
This brings me to the final limit I’ll consider. We cannot write what we cannot think. The cadence of literary writing is always at least a cousin of speech. These facts may suggest that, in order to make sense to human readers, AI’s literary writing could never be of a standard that humans couldn’t match. If whatever we read must be somewhat linked to our speech, and if what is written must be thinkable, then AI may not surpass us so starkly in quality. But— even our greatest writers have dead pages, make bad decisions in their writing, and show their biases and weaknesses in ways that weaken our view of them. An AI needn’t do this. Even if future AIs hover only just above us on Kafka’s mountain, they’d still be painfully better than us.
III- Why does it matter if AI surpasses us?
Why do humans usually fear succession? The older generation worries that the younger, whom they see as weaker and stupider than themselves, will grow into their roles, push them out, and realise just how flawed they are. With the potential for rapid AI advances in the next decade or so, the young face the same fear.
After our work has been surpassed by AI, what happens to us? Our greatest writers, who appeared solid, fixed in the firmament, may begin to seem transparent, fraudulent or limited. This will be especially true of philosophy (see appendix): AI’s effortless brilliance will mock us.
The growth of Kafka’s mountain will affect everyone, not just those at the top. It matters to me that I am a good thinker, and that, if I spent my life developing my skills, I might be very good. To an extent, I can care about the development of my abilities for their own sake, but as with fashion, appearance, or kindness, it is impossible to fully separate the value of cultivating writing and thinking from its effect on other people. Part of what it means to be a ‘good’ thinker is to be better than others.
If the difference between human and AI writing becomes as stark as the difference between human and mechanical calculating, then the place of writing, and thinking, in our lives will have to change. Human calculators are novelty acts. How do we stop human writers from being the same?
A future where those who care about writing and thinking are not spiritually impacted by the growth of AI capabilities is one where we manage to separate the value of personal growth from the abilities of others. This may be possible, but we need to start thinking about it now, carving out the space for human thought and art before all thought is automated.
Appendix: AI will be a better philosopher than us
Perhaps it also sounds absurd that AI might be able to do better philosophy than us. But I’m more confident about this than about literature.
A philosopher reading another philosopher's work can see, fairly easily, the mistakes, mental lacunae, biases, and evidence of narrow reading. These issues are easily avoided by an AI system which can read all of the available literature, is well-versed in the areas of knowledge that the philosophy is brushing up against (unlike almost any philosopher), and can anticipate every counter-argument and have a response ready, and so on.
Whether anyone will bother to give the AI this task is a different question. If we genuinely do philosophy because we want to know the answers to the questions it poses, then most philosophy will be done by AI in the future. If, as I suspect, some people just find it interesting and fun, then we are as likely to hand the labour of philosophising over to AI as we are to ask it to solve our sudoku puzzles.
Still— the stories philosophers tell themselves will have to change if they want to continue to think and write without the help of AI.