
By Jeffrey R. Di Leo

Professor of English and Philosophy at the University of Houston-Victoria

Jeffrey R. Di Leo is Executive Director of the Society for Critical Exchange and its Winter Theory Institute, founder and editor of symplokē, editor-in-chief of American Book Review, and the author, editor, or co-editor of over thirty-five books, including Corporate Humanities in Higher Education: Moving Beyond the Neoliberal Academy (2013), Higher Education under Late Capitalism: Identity, Conduct and the Neoliberal Condition (2017), Catastrophe and Higher Education: Neoliberalism, Theory, and the Future of the Humanities (2021), and Contemporary Literary and Cultural Theory: An Overview (2023). His forthcoming book, Selling the Humanities, will be out this fall.

November 2023


“They are going to close down the English department.” “We are going to lose our funding for the Writing Center.” “Students are going to use ChatGPT for all of their written assignments.” “It’s the end of writing instruction.”

The release of ChatGPT on November 30, 2022, resulted in a complete schemozzle among English faculty. Around the country, many of our colleagues feared that machines were going to take over their jobs and that students would stop writing.[1] It did not help matters that many faculty were still unhappily teaching in isolation in an unfamiliar pedagogical environment because of the pandemic. Paranoia and fear about the specter of ChatGPT, amplified by weariness and fatigue, spread like wildfire among our colleagues.

A year later, however, while there is still high anxiety among English faculty that their jobs will be eliminated, the fires of fear tying that threat directly or solely to ChatGPT have abated. There are plenty of threats to job security and faculty well-being beyond machine writing. Just ask the professors whose jobs have been threatened for teaching critical race theory. Coupled with the ongoing cheeseparing austerity measures in higher education, the assault on tenure and DEI initiatives, declining enrollments, and the continued defunding of the humanities, the threat of ChatGPT to job security is just another day in neoliberal academe.[2]

 Moreover, a recent report seems to indicate that use of this software may be on the decline.[3] Regardless, and more importantly, a full spring academic semester conducted with this artificial intelligence fully and widely available to undergraduate students did not convince university administrators or state legislators that it is time to close down their writing centers and English departments. Not yet at least.

If anything, there has been less handwringing and more talk of adapting and recalibrating to its usage in all forms of writing across the academy. Many instructors are now exploring how best to use ChatGPT in their courses, rather than just ranting in the hallways about “the end of writing instruction.” This process of classroom adaptation, of course, will be an ongoing and important one.[4]

Value and Academic Integrity

Still, for some, adaptation and recalibration become more challenging and less feasible as artificial intelligence gets better at simulating the skill and intelligence required for good research and writing. Why? Because if the kind of research and writing that is required of undergraduates can be done well by a computer, then there is presumably less motivation and incentive for students to write their papers without the aid of artificial intelligence. If a computer can write the paper better than I can, and my aim is to turn in the best possible paper to my instructor, then why not seek the aid of a machine in this endeavor?

This line of reasoning is a compelling one for many students. Compelling, that is, provided that the honor code and plagiarism policies of the university permit the use of artificial intelligence to aid students, in part or in full, in the composition of their papers. The same goes for all forms of university discourse, test preparation, and writing where ChatGPT can be utilized. If the university allows students, for example, to use ChatGPT to prepare for or complete their essay exams or to produce first drafts of their papers, then the only check on its use is the student’s own values.

If a student finds little or no value in studying for a test or writing a paper without the aid of artificial intelligence that will make the task easier for them, and does not care about learning the material and mastering the art of writing, then in the absence of an honor code or some other university prohibition on the use of ChatGPT, that student will use it without a second thought. And who can fault them? The value a student places on learning is itself learned. If the message of corporate visions of higher education is that learning is secondary to skills training, then the skill of using a computer to do one’s writing outweighs learning how to write without a machine.[5]

In the workplace, does it matter if the memo was written by ChatGPT or a human? The key is that its style and message align with the professional protocols and expectations of the workplace. And, all things equal, if a machine assisting a human can produce this memo faster and better than a human alone, workplaces that value efficiency will go with ChatGPT. Thus, if writing in the workplace with the assistance of machines is more efficient than writing without their assistance, and it is a skill that employees can master, then it is sure to be one that will be sought by employers. Furthermore, this means that corporate universities that believe their prime directive is vocational training will strive to train their graduates on how to use machine writing in the workplace.

One of the ways to push back on the prime directive of the corporate university is to draw clear institutional policy lines between student academic work that can and should use ChatGPT and work where its use is a violation of academic integrity. Moreover, these policies should explain why these lines have been drawn and how their observance is important to the achievement of the institution’s educational vision for the student. Just as the mathematics instructor must decide whether students can use a calculator to complete their assignments, the writing instructor must decide whether students can use ChatGPT to complete theirs. Such decisions are informed by educational values, institutional policies, and pedagogical choices—and time should be taken in class by the instructor to explain to students the reasons for them.

The Efficiency Paradigm

ChatGPT has not changed our values regarding education, writing, and learning. Rather, this new form of machine writing has laid bare our existing values. Efficiency is a key value of the corporate university. It is one that takes many different forms in academe. For example, we tend to strive for efficiency in course scheduling, not its opposite. Efficiency in time toward degree not its opposite. Efficiency in writing, not its opposite.

 And, conveniently enough, efficiency generally serves the needs of academic austerity measures. If we can only offer one section of a course rather than two, then the department can expend less money on instruction. If a student has a path to completing their degree in four years, then this is always preferable to five or six because it saves the student some educational expense, improves the university’s time-to-degree figures, and ultimately, costs state universities and colleges less to educate students. If you can write a better paper in a shorter amount of time and with less instructional cost, then this serves the ends of austerity better than producing the same writing in a longer amount of time with more instructional cost.

 Similar efficiency thinking holds for research as well. Fifty years ago it was unthinkable that an undergraduate could write a research paper without physically going to the library. But today not only is it possible, but more research material is available online than can be found by stepping into all but a few Ivy League libraries. Not only is this research material widely available online, it can also be searched by engines like Google, which makes using a library card catalogue for this purpose akin to walking rather than flying across the country. If one wants a book or an article on the history of the computer, for example, a search engine can locate one for you. It can also search its contents for you.

 I am old enough to recall a time when the only way to find a passage in a book was by turning printed pages. Now this is done by a machine, which can also locate in a matter of seconds any word or sequence of words in the book. By comparison, research with printed books and articles is very labor intensive, time consuming, and inefficient. In short, much of what was previously done by humans under the term “research,” now can be done by a machine that is prompted by a human.

But whereas few colleagues are upset that they don’t need to go to a library to do their research, and are more than happy to allow a machine to do their searches for them, when a related technology makes it easier for a student to write a paper based on the same books and articles, everyone gets up in arms. I might be mistaken, but both the digital library and machine writing seem to be cut from the same cloth: digital research databases. Without these digital research databases, ChatGPT would not be able to produce a good research paper, and faculty would need to trudge up to their library for many of the books and journals they use in their research.

Call me lazy, but I’d much rather download an article than go to the library. So why, then, are students who use ChatGPT to jump-start their research papers considered lazy by some? Aren’t they just being smart with their limited time, like their professors who use books and articles online rather than trudging to the library every time they need something?

The Typewriter and the Computer

The Two Ages of the Research Paper

The process of writing a research paper did not change last fall with the release of ChatGPT. It has slowly been changing for at least the past forty years.

Back in the early 1980s, few used a computer to write a research paper. The big technological leap I took during this period was from a manual to an electric typewriter. The electric typewriter was a state-of-the-art IBM. It even had “correction” tape built in that allowed me to fix typos without using Liquid Paper or “corrasable” typewriter paper that allowed type to be erased with a gum eraser. A few years later, I upgraded to a typewriter with internal memory. This seemed like a big technological advance, that is, until I purchased a microcomputer complete with a dot matrix printer at the end of the 1980s. As an indication of how essential it was to research and writing at the time, consider that I took out my first (and last) student loan to purchase it.

But typing or printing one’s paper during this period was merely the end of the time-intensive research paper writing saga for the average undergraduate. Until relatively recently, everything you needed to read in order to conduct your research was disseminated through paper and ink.

If you were lucky, all were ready at hand through the required texts for the class. Some may have been purchased at the university bookstore, whereas others were on reserve at the library. Hopefully, you got to the reserve desk before your classmates, or you would have to wait your turn for the reserve items.

For others, the material they needed to write their paper was not in these required texts, and they had to go to the library and seek it out. The card catalogue was the starting point for locating books, and bound bibliographies the starting point for articles. But the starting point was sometimes far from the ending point of the search, as it was not uncommon for the books you needed to be checked out and the articles you sought to be missing—the journal misshelved or (mercy, mercy me) the article torn from the journal.

For the lucky ones who were able to locate their research materials in a timely manner, the reward for this was more time for note taking on the readings. My preference was to make notecards with key passages and points, but others had their own preferred method. These reading notecards then gave way to a paper outline, and then a paper draft, each of which was handwritten. And often, the drafting process was put on repeat cycle a few times. When, and only when, a suitable handwritten draft was completed would typing ensue.

Research and drafting in this environment was time-consuming. The computer, however, changed all of this. Multiple drafts were possible on a computer without nearly as much effort as rewriting pages by hand. Printouts simulated typed pages, giving the illusion of finality to a draft that in many cases would be corrected several times in additional printouts. Then, with the advent of digital books and articles, the physical legwork of library research gave way to computer printouts or cutting and pasting from one digital file to another, which of course is much less physically taxing and time-consuming.

Moreover, the need to consult a physical dictionary for the correct spelling of words or a printed style manual or grammar book to assure adherence to assigned format and proper grammar vanished—it could all be done by increasingly advanced word processing programs.

In short, for the past forty years many of the elements of research and writing a paper have become increasingly accomplishable through software and databases accessible via computers. If we do not question students on the source of their correct spelling or proper grammar, why would we question the source of their outline or the search techniques of their research? If the goal is production of a good research paper, and no classroom or university rules are being broken, what does it matter whether they were assisted by Microsoft Word, Google Books, or ChatGPT? To make a fuss about using computers to assist in writing is to split technological hairs in an age where the printed word on the library shelf has yielded research authority to the digital word in the data structure.

And just as it is impossible to get onto the internet with a typewriter, so too is it now impossible to ask students who live in the age of the computer—a machine with the potential now to do their writing for them—to go back to writing as though they lived in the age of the typewriter.

OK, maybe not impossible, but nevertheless unreasonable, particularly if we expect them to have a modicum of success with writing in the workplace. Learning how to write in the workplace means learning how to use, if not master, the technological tools at hand. We need to stop worrying about ChatGPT and learn how to work with it in support of our writing and research.

ChatGPT and Writing

ChatGPT does not spell the end of writing. If anything, it is a new beginning for it.

It is something to be embraced in the same way that we have embraced spell-checkers and search engines. If used properly, both can make writing better by reducing spelling errors and increasing the odds of using the most relevant material to make our arguments or tell our stories.

How we come to spell words correctly or use excellent scholarship in our writing has become less important than the expectation that we do so.

If anything, machines have raised the bar on writing and research. Today ChatGPT cannot produce a better research paper on Titus Andronicus than the best Shakespeare scholars in the world. In time, however, there is every reason to believe that the results of computer-generated writing will increasingly mirror the results of human-generated writing. Will this then signal the end of writing and research? I don’t think so, and one need look no further for evidence of this than the example of computer chess.

For years, computer programmers have worked to develop computers that can play chess better than the best players in the world. One result is that although there are now many computer programs that can play chess at the highest level, few people have stopped playing chess simply because a computer can play it too. Rather, people use chess programs to help them get better at the game. Beginners can learn to play chess against a computer, whereas players at more advanced levels can improve through the challenge of playing against one.

And, of course, this does not just hold for chess but also for many other human endeavors. Just about anything a human can do, including writing, can on some level be simulated by a computer. And in just about every case, the human endeavor has not ended or suffered because of the presence of computer simulation. In fact, in most cases, the simulation has in some way benefitted the human activity.

Take, for example, sports video games. Just about every sport has been simulated. Nevertheless, none of the real sports have ended as a result of simulation. Junior high students still play basketball and high school students still enjoy soccer in spite of the presence of popular video simulations of these sports such as FIFA and NBA 2K. Nor has college baseball been eliminated by MLB The Show or professional football by Madden. Moreover, these games and others afford people of all ages, capacities, and abilities the opportunity to participate in sports, which now also includes intercollegiate esports.

So if computer simulation of sports has not ended sports, why would anyone believe that ChatGPT portends the end of writing as a human activity? Or obviates the need for universities to employ writing instructors? After all, as far as I can tell, even though a computer can simulate a football game, universities have not fired their football coaches or closed their programs as the consequence of the release of Madden. So why would one expect anything different to occur with the release of a Chat Generative Pre-trained Transformer, otherwise known to us as ChatGPT? There are many things to worry about in higher education today. But worrying that ChatGPT is going to shut down departments and end writing instruction should not be one of them—that is, of course, until they shut down the football program.

    

    1. Regarding its potential impact on jobs, a blog post published about a month after the release of ChatGPT captures the anxiety of the time well: “When I imagine what a finely tuned version of ChatGPT might look like I can’t say it feels very comfortable and I can’t imagine how it does not mean job/income loss in some way or another. Now it could also mean job creation but none of us really have any idea.” See Autumm Caines, “ChatGPT and Good Intentions in Higher Ed,” December 29, 2022. https://autumm.edtech.fm/2022/12/29/chatgpt-and-good-intentions-in-higher-ed/.

    2. See, for example, recent liberal arts cuts at West Virginia University. https://www.post-gazette.com/news/education/2023/09/25/wvu-humanities-liberal-arts-cuts-west-virginia-university/stories/202309240143.

    3. Gerrit De Vynck, “ChatGPT Loses Users for First Time, Shaking Faith in AI Revolution,” The Washington Post, July 7, 2023. https://www.washingtonpost.com/technology/2023/07/07/chatgpt-users-decline-future-ai-openai/.

    4. Regarding these adaptations, the Harvard University Division of Continuing Education notes that ChatGPT is “constantly changing, so whatever adaptations we bring into the classroom must be flexible enough for those changes.” This emphasizes the point that the process of adaptation will be an ongoing one. See “ChatGPT in Your Classroom: Setting Expectations and Boundaries,” Harvard University Division of Continuing Education, May 4, 2023. https://teach.extension.harvard.edu/blog/chatgpt-your-classroom.

    5. On this point, see Jeffrey R. Di Leo, Corporate Humanities in Higher Education: Moving Beyond the Neoliberal Academy (New York: Palgrave Macmillan, 2013).

    6. I’d like to thank Benjamin Bertram, Marco Di Leo, Orlando Di Leo, Judson Merrill, and John Muthyala for their helpful comments on this essay.
