Inhuman Oracles and the True World
By Glenn Arbery|March 10th, 2023|Categories: Catholicism, Glenn Arbery, Senior Contributors, Technology, Wyoming Catholic College
In these strange times, the arrival of ChatGPT, a dispassionate voice that draws upon vast resources of knowledge (far beyond human capacity), might seem like a good thing.
Late last semester, the mother of one of our freshmen sent me an article about a professor who had stopped assigning essays. He had realized that with the advent of ChatGPT, he would not be able to tell real student writing from texts by GPT, which stands for “Generative Pre-trained Transformer.” GPT’s “large language model” draws upon massive amounts of data, including millions of books, to construct responses to complex prompts within seconds.
Several weeks ago, the 99-year-old Henry Kissinger published a cautionary essay about ChatGPT in the Wall Street Journal. Co-authored by former Google CEO Eric Schmidt and Daniel Huttenlocher, Dean of the Schwarzman College of Computing at MIT, the essay explains that the capacity of artificial intelligence to generate extensive answers was discovered by accident. “These models,” they write, “are trained to be able to predict the next word in a sentence, which is useful in tasks such as auto-completion for sending text messages or searching the web. But it turns out that the models also have the unexpected ability to create highly articulate paragraphs, articles and in time perhaps books.” As these distinguished authors put it, “the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text,” and “the precise sources and reasons for any one representation’s particular features remain unknown. By what process the learning machine stores its knowledge, distills it, and retrieves it remains similarly unknown.”
I asked ChatGPT whether Christian warfare in the Song of Roland (which our sophomores just read) is compatible with Catholic teaching. In just a few seconds, it distinguished between “just war” and “holy war,” asserted that the poem problematically depicts Charlemagne’s enemies as “uniformly evil,” and concluded that “while the Christian warfare depicted in the Song of Roland may be consistent with certain aspects of Catholic teaching on just war, it also contains elements that conflict with the Church’s teachings on the conduct of war and raises theological questions about the compatibility of the concept of holy war with Christian doctrine.”
The answer seems intelligent and fair—which makes it alarming. Why? Because nobody wrote it. It is apparently impossible to find out exactly where ChatGPT found its information about The Song of Roland and holy war, sorted through it, and generated an answer. According to Kissinger, Schmidt, and Huttenlocher, the developers themselves do not know exactly how it works, and “the complexity of AI models has been doubling every few months.” Just to make sure, I asked ChatGPT if it was intelligent, and it answered that “the text generated by GPT can appear to be intelligent, as it is designed to mimic human-like language patterns and generate coherent and contextually appropriate responses to prompts. However, it is important to note that GPT is not capable of true understanding or consciousness, and its responses are generated based on statistical patterns learned from large datasets.”
It’s hard for us to get our minds around what this statement means. GPT simply generates texts that it does not understand. Its self-description warns that “the text generated by GPT may contain errors or biases, as the model is only as accurate and unbiased as the data it was trained on.” Could a student at WCC use ChatGPT to “write” a paper about, say, matter as potency in the work of St. Thomas Aquinas? Sure: I just had it generate one (interesting, informative, completely unsourced, possibly untrue). Will it happen with our students? With our technology policy and the “reality ethos” we try hard to instill, we certainly hope not. We count on honor and integrity. But suppose that the temptation to consult the oracle is too great. One way to ensure that GPT does not exert undue power is to rely upon the Socratic method alone (the recent proposal of Jeremy Tate, CEO of the Classic Learning Test). If cogent writing remains a good, a professor can give a prompt in class and have students write essays by hand. But perhaps the stronger challenge will be to achieve heightened prudence with respect to the model’s oracular statements and a more penetrating attention to the difference between soulless information and good writing. Teachers can prompt students to show what is lacking or misleading in an essay generated by GPT.
A word about what I have called the reality ethos: our students spend their first three weeks at WCC backpacking in the wilderness. On campus, they do not have cell phones. They read books with care and passion. As much as possible in a world intoxicated by technology, we let them experience the given world—the created order in its beauty and complexity and limitation—and the human presence unmediated by the constant temptation to be virtually elsewhere. As St. Thomas Aquinas puts it, “The first effect wrought by God in things is existence itself”—that gift completely ignored by contemporary culture. Milton’s Satan voices the argument of many today: “We know no time when we were not as now;/Know none before us, self-begot, self-rais’d/By our own quick’ning power.”
Republished with gracious permission from the Wyoming Catholic College Weekly Bulletin.