
How UO is incorporating AI into the classroom

How professors across departments are training students to work alongside artificial intelligence amid growing AI usage
Erinn Varga

In the last few years, the world of artificial intelligence has continued to change, providing benefits and causing confusion for the University of Oregon community.

Some programs at UO train students to use AI to process medical data, create artistic mock-ups, develop code and brainstorm ideas for writing topics.

Through the transition, professors across fields are finding creative ways to incorporate instruction on AI into their courses and address ethical questions along the way.

Leslie Coonrod, associate director of the Bioinformatics and Genomics Master’s Program, said AI has helped with discovering “the next big thing” in bioinformatics.  

“I like to say we’ve never taught this exact same class twice because we’re always integrating new techniques… we’re always looking for that next big thing on the horizon,” Coonrod said.

Since different departments use AI differently, UO's 2023–24 Communities Accelerating the Impact of Teaching conference decided that a "one size fits all" policy would be insufficient to address the needs of every department, according to Ramón Alvarado, an assistant professor of philosophy and the Data Science Initiative's data ethics coordinator and colloquium committee chair.

According to UO’s website, AI policy guidelines “strongly encourage instructors to have an explicit policy about GenAI in their course syllabus” and “reinforce their expectations in assignment instructions and in conversation with students,” but don’t list explicit rules.

Alvarado teaches computer science and data science students the philosophical and ethical implications of technology and AI.

To better understand what "technology" truly means, Alvarado said his Ethics of Technology class zooms out thousands of years.

“Most of my students come in thinking like, ‘Hey this is ethics of technology, you’re gonna talk about the internet, you’re gonna talk about social media, you’re gonna talk about AI, right?’ But because it is the ethics of technology in general… do I start with a rope? Or do I start with bread because that’s technology as well, right?” Alvarado said.

Alvarado said asking these questions is critical to thinking about things with a wider lens to realize “technology is almost inescapable.” 

A large part of Alvarado's classes is discussion-based, with students analyzing social biases in GenAI models and asking deep questions, including: What does it mean for an algorithm to be fair?

“Ethics is an exercise that needs to be done… continuously by us. And so it’s not something that you can just program a computer to do for you,” Alvarado said. “A lot of people think we can fix bias in machine learning by just getting the proper computational model for fairness and then we can just make an algorithm fair. No, you’re still gonna have to choose which kind of fairness you’re talking about.”

Casey Shoop, a professor of literature and philosophy, teaches a Clark Honors College 101 course that helps students analyze writing styles.

Shoop said he assigns his students to write an essay about themselves and then asks ChatGPT to write an essay about each of them, as an exercise in differentiating between a human-written and an AI-written piece.

Shoop said the error rate in the AI-written essays was significant, which prompted his students to think about what makes an essay truly personal.

“The technology (AI) has already insinuated itself into our lives and so I think the assignment has a practical dimension which is students need to know why that writing is not as good as it might be,” Shoop said. “And so learning what the limitations of the large language models are is useful as a way of thinking about how to write a successful piece of writing.”

Shoop also said it is useful to step back and think about the boxes students may put themselves in to adhere to technological standards.

“I think the uncanny and interesting thing… is not simply just that we are increasingly asking these systems to do more for us to be more like us, but that they’re making us more like them, that’s the true kicker in the course,” Shoop said. 

In the computer science department, senior instructor Juan Flores requires his students to produce their own original code, but also teaches them to use GitHub Copilot, Microsoft's AI coding assistant.

Flores said he only allows students to use the assistant for single sections of their coding projects because it is important for them to learn the fundamentals of programming beyond coding itself.

“(Students) need to know… what computer science is about, how to formulate the problem, what they need to solve,” Flores said. “Generally (that is more than) just the coding. Coding is like the last part. You need to design an architecture, you need to interview the stakeholders, you need to extract their needs and that’s still not well developed with AI. So coding is the last part. It’s like plowing the ground.”

No matter how a class may approach AI, journalism professor Damian Radcliffe said disclosing what AI model is being used is always important to avoid “undermining trust,” and to encourage collaboration among professors and students.

“I think it’s helpful for us as faculty, as instructors, to understand how students are using this technology so that we can support and guide that as much as possible,” Radcliffe said. “(Disclosing AI use) will in itself make you think about the implications of using this technology — both the good and the bad.”
