ChatGPT, an artificial intelligence chatbot, has opened conversations at the University of Oregon about academic misconduct, learning opportunities and the power technology holds in the classroom.
Released at the end of 2022 by the AI lab OpenAI, ChatGPT allows users to generate text from prompts.
Two months after its release, ChatGPT had reached over 100 million users, according to the MIT Technology Review.
Thien Nguyen, assistant professor in the Department of Computer Science at UO, said the appeal is not about the technology itself but rather the accessibility.
“It’s not about the revolution of the technology but it is about how accessible it is to the public user,” Nguyen said.
As discussions of academic misconduct and the ethics of ChatGPT continue, UO professors are weighing in.
Maxwell Foxman, assistant professor of Media Studies and Game Studies in the School of Journalism and Communication, said ChatGPT will naturally complicate things but believes the emerging technology also offers interesting aspects to explore.
“I think it’s important to balance the advantages of ChatGPT with the requirements of teaching,” Foxman said.
Foxman said ChatGPT is just as appealing for educators as it is for students. He has been using ChatGPT for his own research, asking it to generate questions and critically appraising its responses.
“It is yet again another technology that will change the classroom,” Foxman said.
In his classes, Foxman said, some of his assignments are too complex for ChatGPT to handle. He also leaves room for student reflection in his assignments, something ChatGPT cannot provide.
But when AI or ChatGPT is used, it’s done with his permission.
He is not aware of any students who have been caught plagiarizing with ChatGPT at UO but said citing its use is important.
“Students need to recognize that keeping track of that process is going to be as important as citing or referencing a quote,” Foxman said. “Referencing and citation are going to come along with this.”
Sophomore Jillian Gray said she has used ChatGPT to generate ideas and brainstorm but still has reservations about it.
“I don’t trust it. I have too many fears and questions about it to use it outside of a classroom assignment or just for fun to see what all its limitations are,” Gray said.
She believes that people may overuse the technology without understanding some of the potential consequences.
“I’d be scared for people who just type in the prompt of their assignment into the site. What if ChatGPT just rips off another person’s work and spits it out to you and you’d never know?” Gray said.
Although some students may use ChatGPT to cheat on assignments, Gray doesn't think it'll become the norm.
“I think that some may overuse or abuse it but for the most part, students thrive on academic validation and success, and that’s something a site like ChatGPT can’t provide,” Gray said.
Dan Morrison, a professor in the SOJC, said he's been researching ChatGPT and other forms of reality-altering AI by collecting articles on its uses and effects. He's concerned about AI's potential as a source of plagiarism.
“If you turn in work you did not create, whether you steal it from someone else or whatever other means, that falls under the definition of plagiarism,” Morrison said.
The line between ethical and unethical with ChatGPT is thin, according to Morrison. “It’s here. Everybody’s going to use it but again, it’s a tool and if you use it ethically, fine, but it will never be ethical to use it to create an image or for a story. Never,” he said.
Morrison said journalists or writers who use AI run the risk of ruining their career.
“The biggest concern, of course, is that it’s incredibly easy to use. It’s not fail-safe, it makes mistakes, but it is powerful and is very quick,” he said.
As a result, plagiarism policies are being enforced more strictly than ever, with some professors filtering assignments through Canvas' plagiarism detector and then through AI detectors like Grover.
“If we catch students using it, and we probably will, they’ll flunk that class for sure and will probably be thrown out of the SOJC,” Morrison said.
There is no school-wide response to AI, but professors are encouraged to establish an explicit policy in their course syllabi specific to the use of AI tools, UO spokesperson Kay Jarvis said in an email.
Plans are being developed to add AI-specific content, such as advanced detection software, to the Academic Integrity Canvas modules, Jarvis said.
These Canvas modules outline measures to take to ensure academic integrity. "These resources are based on current research around why students engage in academic misconduct and what strategies have effectively increased academic integrity," a recently published page on AI from the Teaching Support and Innovation website says.
As for how UO will track students, the website says detection software could be used on students' assignments and that students should be aware of this.
For some teachers, ChatGPT is encouraging more out-of-the-box and complex thinking when it comes to crafting assignments and writing prompts.
Mattie Burkert, an assistant professor in the College of Arts and Sciences, said assignments should be designed with AI and its capabilities in mind.
“From the instructor’s point of view, it means crafting assignments that invite forms of thinking and writing that are complicated enough that the computer couldn’t do them well,” Burkert said.
Students can learn how to think critically even with AI's role in generating content, Burkert said. Burkert used another version of ChatGPT for the course's midterm assignment.
Students co-wrote a take-home essay with the AI on the dangers and ethics of new technology in the novel "Frankenstein" and reflected on the experience.
“I wanted them to reflect on the aspects of that novel that were applicable to that information and the ways we approach technology today,” Burkert said.
For the computer science department, ChatGPT is a learning tool for understanding how the technology is developing and how it can help the public.
Nguyen said ChatGPT has opened up research opportunities.
“In computer science classes, we talk more about how it works and the fundamental issues. It’s more about the technical issues and how it links to the social issues that it has,” Nguyen said.
AI sources like ChatGPT give students the opportunity to see how much technology has evolved over time and allows for hands-on experience, Nguyen said.
“If this is going to stay with us for a long time, then it’s about teaching the students how it works and getting them to use it in a more efficient way,” he said.
However, people must be aware of the limitations the emerging technology might have, Nguyen said.
“With every 100 answers, 99 of them could be correct and one of them could be wrong,” Nguyen said.
Nevertheless, Nguyen maintains an optimistic outlook on ChatGPT.
“We are lucky to be in a critical moment for technology development where we start seeing it can reach the public really quickly,” he said.