UO faculty and students are navigating the challenges of artificial intelligence, the "current technology" on campus. Many UO faculty and students believe AI can be "beneficial," even as the Student Advocacy Program has seen an influx of false academic misconduct reports tied to AI.
The Student Advocacy Program assists UO students with issues involving the university and guides students through UO's processes. The SAP offers two branches of services: peer-to-peer advocacy and attorney services.
“It’s really confusing being a college student,” SAP Coordinator Savannah Olsen said. “We are the place to go anytime you get snagged.”
SAP peer advocates are intended to provide a clear understanding of UO’s procedures. Peer advocates also use their expertise to help locate a variety of resources for students.
"Our peer advocates are kind of our resource navigation hub," Olsen said. "They are campus experts."
SAP attorney Kristi Patrickus provides students with a range of assistance, including legal counsel and direct support for students involved in UO's processes.
"Kristi helps with longer-term issues that can look like really serious conduct cases, for higher-level concerns where students are either really upset or are going to face longer-term issues with more major implications," Olsen said. "They can also look like the kind of issues where you've just hit a wall and you can't do it by yourself anymore."
Students utilize SAP for a variety of reasons, including getting an understanding of how to handle conduct violations, grading issues and housing disputes.
According to Olsen, plagiarism has been among the most common issues SAP assists students with since the Student Conduct Code was updated on Sept. 22, 2023. The change removed the faculty resolution option, which had allowed instructors to resolve academic misconduct concerns without first consulting Student Conduct and Community Standards about resolution options.
"Plagiarism is a big one that we see here; it's one of the biggest issues we're seeing on campus right now because the academic misconduct process just changed," Olsen said. Olsen added that under the former process, instructors would handle cases in an "informal way," including pressuring students to accept responsibility and imposing inconsistent consequences.
"They're accusing students, but they aren't following the rules that they're supposed to follow," Olsen said.
The current SCC requires the direct involvement of the SCCS in cases where a violation may result in a grade penalty. Faculty retain the ability to informally resolve misconduct concerns and are encouraged to discuss resolution options with the SCCS.
Following the policy change, SAP has continued to see "instructors still handling things in that informal way and still leaving that level of inconsistency," Olsen said.
According to Olsen, AI use is a factor in related academic misconduct reports, since AI detection tools aren't reliable.
“We’re absolutely seeing AI-related misconduct and we’re absolutely seeing accusations that aren’t AI academic misconduct, but students are being accused,” Olsen said. “AI detection tools aren’t very good and they don’t give you good information about whether a student is using AI.”
The current SCC advises faculty not to use AI detection tools, including TurnItIn or GPTZero, due to concerns that they may be ineffective and can produce false positives.
“It’s really difficult to tell if the student did do the work or didn’t do the work. It’s really difficult to tell to what extent they’re using [AI],” Olsen said. “The error detection tools cannot really tell if you’re using them or not.”
Olsen emphasized that it's important for professionals to educate themselves on how to use AI, since it is now part of the current technology.
“It’s important you understand how to use them [AI] well and how to use them in ways that are generating products that are unique and are valuable to the thing that you’re being asked to create,” Olsen said. “Academia would be making a mistake if we [professionals] tried to ignore that that is now a part of our lexicon of technology.”
Associate Vice Provost for Teaching Engagement Lee Rumbarger directs UO's Teaching Engagement Program and helps lead a group of faculty, the AI Community Accelerating the Impact of Teaching, which discusses the multitude of approaches faculty can adopt to address AI in their classrooms.
“[What] I’m doing is trying to create opportunities for faculty to engage with these [AI] tools and see what their [AI tools] capacities are, and to talk to each other,” Rumbarger said.
According to Rumbarger, it would not make sense to have a "one size fits all" AI policy that every professor must abide by, since each faculty member has different goals for their classes. Therefore, it is "important for every member of the faculty to be clear with students about what the policies are in their own class."
The Office of the Provost's Teaching Support and Innovation provides a variety of resources to help faculty members incorporate AI into their syllabi and class discussions, including AI prompts for teaching.
Rumbarger acknowledges the challenges that students may face when navigating the different kinds of regulations that each professor has on AI.
“The hardest for students is to go from one class where it’s like, ‘it’s a hard no’ to another class where it’s ‘absolutely part of our field,’ now to a third class that’s ‘somewhere in the middle,’” Rumbarger said.
Therefore, Rumbarger hopes the university will establish a clear understanding of how AI should be used in classrooms to help prevent confusion and challenges among students.
"It's good that people have different perspectives [on AI], and it's good if those perspectives are grounded in what people are trying to teach, but it's really unfortunate if we're not understanding of what it's like for students to navigate those different kinds of spaces," Rumbarger said.
Rumbarger is optimistic about the future of AI since faculty members have found "remarkably creative ways to think about it [AI] and to bring it [AI] into the classroom."
SOJC instructor Justin Francese allows the use of AI for preliminary research while upholding the academic misconduct rules on plagiarism. Francese has not permitted students to cite AI as a source in past assignments and will continue to enforce that rule.
Francese said that departments need to be better educated on AI in the best interest of students.
“The more that we can incorporate those [education] into our own teacher training, the better, [and] more effective we can be at teaching the tools,” Francese said. “I think that to do what we do best, which is [to] serve students’ career goals and learning goals, we can’t ignore [AI].”
Francese also said that "it's a good idea to assign the use of the tools in certain contexts because not only can students learn the tools [and] techniques of the technology, [but] how to use it [AI] for what the best practices are" for research, professional writing or grammar checking. "Teaching those uses is, of course, important as long as a faculty member know[s] what the best practices are."
Francese and his team of graduate assistants first use their personal instincts as a strategy to help detect plagiarism.
“We can double check our intuition by using the freely available AI checkers that are there and copy and paste students’ work into those and just to see,” Francese said.
However, Francese said that he would not treat the results of the AI detection tools as a definitive answer.
"I'm not going to assume that students are using or cheating first, just because the tool is there, while at the same time recognizing that this is a new tool to cheat, along with all the others that are there," Francese said. "That's why we always reach out first and then talk to them."
Additionally, Francese said that AI tools like ChatGPT have not caused an influx of academic misconduct in his classes.
Many UO students have said that their professors have encouraged students to use AI for academic reasons.
In UO student Jack McArthur's J462 Reporting II class, SOJC instructor Mark Furman allowed the use of an AI tool to help generate ideas for a class assignment.
“He had us use some sort of other AI. It was to help us generate ideas or reorganize something. It was interesting,” McArthur said.
McArthur believes that professors are allowing the use of AI because “in general, teachers were just as curious as students about it and they’re still trying to learn how to use it and they’re trying to learn how to limit it.”
However, McArthur said that professors may be increasing the number of claims against students using AI because "they're so sick of students not doing their work." Therefore, professors should "limit it or not let it get out of control."
Similarly, UO students Josef Pellegrini, Emilie Mendoza and Ethan Cupper said that they all think it’s a good idea to allow students to use AI.
"I think it's a great tool," Pellegrini said. "As long as you're really turning in your own work and not turning in work that AI did for you, then there's no issue in using AI in your work."
Cupper said that his professor allowed students the opportunity to use AI if they cited AI as a source.
McArthur and Mendoza agreed that they're not worried about being accused of plagiarism since their assignments have always been authentic.
“Overall, if students can utilize it properly, then it can be beneficial,” McArthur said.
[Correction: This article has been updated to clarify the roles of the Student Advocacy Program’s peer advocates and attorney.]
[Editor’s note: A photo caption attached to the print version of this article stated that students were “comparing answers,” which may be misleading. The caption has been amended to indicate that students were reviewing for upcoming exams.]