Section: Features

Kenyon professors discuss implications of ChatGPT, AI art

Professor Elkins participated in a faculty panel | COURTESY OF KENYON COLLEGE

With the advent of ChatGPT, chats on campus look different from how they did even a few months ago. Like their peers around the globe, Kenyon students have begun to explore OpenAI’s recently released chatbot, finding ways to bring their conversations — and even their coursework — to the world of artificial intelligence. For some professors on campus, this is less concerning than it sounds.

Decades ago, researchers thought it would be easy to teach a computer to generate and understand language, having already achieved the remarkable feat of teaching it computation. But the task turned out to be incredibly difficult.

Professor of Comparative Literature and Humanities Katherine Elkins, who is also director of the Integrated Program in Humane Studies (IPHS), works in the field of natural language processing, which includes both the understanding and the generation of language. Over the past 10 years, she has worked extensively with Visiting Instructor of Humanities and Affiliated Scholar in Scientific Computing Jon Chun on computation research. 

Elkins said deep learning, a type of machine learning, seems to be effective in teaching technology to understand and generate language. “This is a neural net that is loosely modeled on the human brain, in which, instead of telling a computer what to do, we feed it massive amounts of data and it learns on its own,” she said. ChatGPT, an artificial intelligence (AI) tool that users can converse with, seems to be able to do just that by generating natural-sounding language probabilistically. 

Kenyon students have been using ChatGPT since it came out last November, Elkins said. “I can tell you that students have used it to apply for jobs, and write the cover letter, and so far, every letter written by the AI has gotten them an interview.” 

Using ChatGPT is not plagiarism because it doesn’t reproduce work that already exists, Elkins said. However, that doesn’t mean that the work it generates belongs solely to the student: “It depends on how much you think that prompt engineering requires human skill,” she said. Much of the skill in using AI lies in knowing how to communicate in natural language when delivering a prompt.

Kenyon students in the Digital Humanities Lab have played a crucial role in collecting data about how generative pre-trained transformer (GPT) models work. Elkins has worked with students to program the technology to perform like a particular writer. “We had students do Sex and the City, we had students do Seinfeld, we had them do Chekhov, Oscar Wilde, Taylor Swift lyrics, all kinds of stuff,” she said.

When Elkins participated in a roundtable discussion of GPT-3 last October, she was the only panelist able to provide specific, novel examples of the program’s responses to prompts — thanks to the help of Kenyon students and their original research. “We don’t really know how it works,” she said. “So right now we’re just collecting tons of data and then trying to develop theories to explain what we’re seeing because old models of linguistics and language don’t seem to apply. And so Kenyon students have been some of the first to be collecting all of that data.”

Kenyon community members have also used AI to create artwork. Around 2018, an intriguing and slightly unsettling website called “This Person Does Not Exist” caught the attention of Studio Art Technology Specialist and Visiting Assistant Professor of Art Emily Zeller. The website features AI-generated faces that do not belong to real people but look realistic, and this introduced Zeller to the world of AI. 

Zeller began playing around with AI art-making with the collaborative art tool Artbreeder, which at the time was known as Ganbreeder. She liked that users could ask the software to create interesting images based on unique commands: “You could put in, I want something that’s 3% jellyfish and 25% grasshopper, or negative 30% comic book or just weird things, and then it would make something that was always strange looking.” To her, the art felt slightly warped. Even today, AI cannot generate hands or teeth very well.

When photography was invented, many painters expressed concerns that it wasn’t real art because it involved a machine, Zeller noted. The same thing happened again when Adobe Photoshop came out. “So this is the latest version of that,” she said. “I don’t feel like it’s really coming for artists’ jobs.” However, she can understand concerns when it comes to digital artists whose work takes hours of labor — with AI, users can generate an image in just a few seconds. It is also concerning to Zeller that users can prompt the technology to create images based on the work of real, living artists. 

Whether AI-generated art is the intellectual property of the user is a different conundrum. The United States Copyright Office recently declared that users cannot copyright AI-generated images because they “are not the product of human authorship.” To Zeller, the process is collaborative between the technology and the human. Zeller said that when humans spend time generating and editing various prompts, or deleting and combining new visuals, the process counts as artmaking. “I think people who are really engaging with it are making art, but it’s really important to be transparent [that it] is AI-generated,” she said. Without that transparency, the artist claims the work as their own without knowing where the data the technology was trained on came from.

One thing that Elkins and Zeller both agree on is that it is important for students to be informed about new technology. “One of the things that we believe about our [IPHS] courses that we teach is that students like you at Kenyon need to understand how the technology works to be a good citizen, to weigh in on these kinds of things,” Elkins said. Zeller also encourages Kenyon students to explore the tools available, but not to turn in work that does not technically belong to them.

Both professors are cautiously optimistic. “I think it’s gonna be a really interesting time to be alive,” Elkins said. “With many dangers and many opportunities.”