Opinion: Maybe generative AI isn’t a complete academic disaster
If you’re a student today, chances are you have an opinion on generative AI. I know I do, especially as an English major, with people speculating left and right about whether AI will take over any job I could have in the future.
The ability of tools, such as ChatGPT, to generate a wall of text on seemingly any subject is mind-blowing, and it’s no wonder some students choose to use it to clarify their thinking, improve their work or just save time.
I wouldn’t count myself as part of that group. Or at least I wouldn’t have until recently, when I found myself in a class with UO English Professor and Digital Humanities Minor Director Mattie Burkert, where we all collaborated to produce a classroom AI policy.
In the past, the consensus from my teachers and professors on AI use has been “don’t even try it.” As someone with a raging ego at times, I’ve been just fine with that. There’s no way a machine can do my work better than me, right?
This professor-endorsed shift away from the blanket ban was shocking to me.
In Burkert’s class, we watched a video on the benefits and ethical problems of generative AI use in schools. I had only ever heard of people using it to write entire essays, but the video opened my eyes to the different, and possibly more ethical, uses of the tool divided into three categories: research, ideas and writing.
The research category included requests for helpful sources or background information on a topic, while ideas covered paper outlines and problem-solving instructions. These possibilities were exciting to me, and I began to see how AI could be more than just an enemy trying to steal my future jobs.
To test the capabilities of AI for these various applications, I decided to run a few experiments of my own with the free version of ChatGPT.
The highlight of the experiment was when I asked it how to solve a type of math problem I had been working on all week. It gave me the correct formula and an in-depth explanation of the variables involved as well as situations in which the equation would be useful. I was pleasantly surprised, but I don’t expect to use this feature much in the future.
Next, I described my college thesis idea and a few sources that helped me. When I asked for recommendations for more academic articles, it spat out a list of titles that initially seemed helpful. When I researched them, I found that they didn’t exist. I confronted ChatGPT about this. It told me the articles were “theoretical.”
I had more success with the writing category, although it still wasn’t perfect. When I asked the AI to write a script for a five-minute presentation on the history of labor rights in the Pacific Northwest, the information was accurate. However, when I read it aloud, even while focusing on speaking slowly, it ran only half the length I needed. A prompt from a short assignment in one of my English classes yielded similar results: factually accurate information and a decent writing style, but no depth.
Clearly, even when generative AI theoretically has all the answers, it’s not the most reliable source. Even when it provides accurate information, it runs the risk of plagiarism, a controversy particularly evident in the art world.
Even with writing, AI doesn’t pull information out of thin air. As Burkert reminds us, “the AI system wouldn’t be able to train itself without some human data.” And this isn’t the only ethical issue its use presents or even the biggest one we should focus on.
As we talked, she described a multitude of problems to me, ranging from the labor issues it presents and the strain it places on creative workers to the environmental impact of the data centers it requires.
“We’re not having the conversations we need to have,” Burkert said. “[There are] huge questions and problems we need to grapple with about the cost-benefit, and instead we’re talking about people turning in essays.”
So, as a student caught in the middle of this confusing time, what are you supposed to do? Feel morally superior as you drudge through a concept you don’t understand? Embrace this new tool and submit essays with made-up sources and quotes?
With AI policies that vary between schools, classes and instructors, Burkert explains that the best thing you can do is take the time to go to every professor’s office hours and discuss it with them. Figure out what their expectations are for different assignments and what you’re supposed to get out of them.
As for me, I don’t expect to use AI in my classwork going forward unless an assignment calls for it. After experimenting, I don’t trust it to consistently be accurate and produce work up to my standards. Still, I’m not turning my back on it just yet. It presents more possibilities than I ever considered, and I’m excited to see where it goes in the future.
January 22, 2024
Sadie Tresnit, Opinion Columnist