In this episode of Lehigh University’s College of Business ilLUminate podcast, we are talking with Greg Reihman and Cathy Ridings about the implications and possible impact of generative artificial intelligence, or AI, on universities and students.
Reihman is Lehigh University's vice provost for Library and Technology Services. In this role, he provides strategic, budgetary and organizational leadership of Lehigh's Library and Technology Services, an organization that includes Lehigh’s Libraries, Technology Services, the Center for Innovation in Teaching and Learning, and Information Security Services. Reihman also teaches courses in the Philosophy Department.
Ridings is an associate professor in the College of Business' Department of Decision and Technology Analytics, or DATA for short. Her primary research interest is virtual communities, including social networks, social capital, trust, knowledge management, and electronic commerce in this context. She also has secondary research interests in technology adoption and acceptance.
They spoke with Jack Croft, host of the ilLUminate podcast. Listen to the podcast here, and subscribe to and download Lehigh Business on Apple Podcasts or wherever you get your podcasts.
Below is an edited excerpt from that conversation. Read the complete podcast transcript [PDF].
For a related podcast, check out Brian Merchant on the Limitations of Technology.
Jack Croft: Let's start with a survey that was conducted recently by Educause, a non-profit association whose mission is to advance higher education through the use of information technology. The greatest concerns about generative AI expressed by higher education professionals who took the survey were, in order: academic integrity, or cheating, which was named by three out of four respondents; overreliance on or trust in outputs, the results and content generated by AI; inaccurate outputs; and AI-generated content becoming indistinguishable from, and replacing, human content.
Each of these was named as one of their greatest concerns by 60% or more of the respondents. So let's start at the top. How concerned are each of you about the threat AI poses to academic integrity at Lehigh and elsewhere?
Greg Reihman: I want to start with a story about one of my students. I'm currently teaching a course called Philosophy and Technology, so in addition to my administrative role at Lehigh, I also teach and research in this space. As ChatGPT and other generative AI tools were getting a lot of press, I said, "Go do your own research. Go dig into what's happening in this space around generative AI and higher education." And when they came back, there was one student who looked particularly crestfallen. I said, "What's going on?" And she said, "I was surprised that, with this exciting, remarkable, transformational technology, the first thing people are worried about is academic integrity."
I want to acknowledge this as a concern because, depending on how you are assessing your students, it can be a real one. But putting ourselves in the students' shoes, what she said to me was, … "Give us a little more credit." And I understand why that might not extend to every student in every situation, but it's the other elements that Educause pulls out, like the overreliance on or trust in outputs, that are more interesting to me.
So my advice on this is not to pretend that this tool doesn't exist. Acknowledge it with your students and talk about it: "How can it be a tool? How can it augment your intelligence and your research?" But don't automatically worry that it's going to replace all the kinds of activities that you most value in what your students do.
Cathy Ridings: Honestly, every technological advance brings academic integrity threats. Right? The arrival of the internet and of Google gave students tools they could use to plagiarize or cheat. So I look at this as yet another technological advance that, sure, students could use to commit academic integrity violations. And as professors, we have to understand what this is and, maybe, redesign our assignments or the way our classes are structured so it's not so easy for students to use this.
But we do have to be aware of it, and as Greg said, I like to give students credit and trust that they want to learn at Lehigh and produce their own work for me. I'm not going to stick my head in the sand and say, "Ignore it." And I'm not going to give my students a blanket instruction not to use it. Instead, I'm going to look at it as a tool that they could use in a certain way, and I'm going to educate them about that.
I think professors always have the option of in-class assessments, without outside help or sources, where students are required to do their own work. So we do have to be aware of this as yet another technology that can threaten academic integrity, but it could also be something students use to produce better work products and better assignments for us.
Croft: Greg, you had mentioned the second leading concern, which, for you, would actually rate ahead of academic integrity, and that's the overreliance on or trust in what comes out of generative AI. What are some of the main concerns you have there with how reliable the information is, particularly at this stage? I think everybody understands it's probably going to keep getting better.
Reihman: That's a fair point, Jack. It is going to keep getting better, and so whatever conclusions we draw this month, we're going to have to revisit and draw different ones next month. But the general idea, I would say, comes down to a phrase that is probably overused in higher education but maybe underappreciated: critical thinking.
Again, I don't want to be a naysayer, because there are also truly wonderful things it can do that will drop your jaw and have you scratching your head, thinking, "How is this possible? What is this doing?" and, "Wow." We should appreciate those "wow" moments when a technology arrives. And we should also keep our critical thinking caps on and say, "Hold on. That one's not quite right. That doesn't sound accurate to me."
And it doesn't take too long. Seeing it as a dialogue, instead of as asking a question and getting an answer, is a better approach. Ask it a question about some subject that you know a lot about, see what it comes up with, and you'll notice the cracks in what it's able to do.
I gave an exercise where I had my students come up with an annotated bibliography. It sounds like an old-school approach, but it was an annotated bibliography on research around ChatGPT in education, or generative AI in education. They came up with some great sources. Then I plugged the same question into ChatGPT, and I challenged them: "Compare what you came up with with what it came up with." And they had positive things to say about both; there were different kinds of answers. The grammar in ChatGPT's version was flawless, and it sounded highly professional: very polished, very persuasive. And then I pointed out to them that it had given us seven results, and six of the seven were academic articles that don't actually exist, some of them in journals that don't actually exist.
Ridings: I think that students do have to have the skills to evaluate the reputation and credibility of a source, and I make a big point of this in my courses. When I have my students write papers, part of their grade is that, when I look at the bibliography, the sources on it are credible and have a solid reputation. I think this is a critical skill that students need to have, and it existed even before ChatGPT.
So when they are looking at the output of ChatGPT, they have to ask themselves, "Does this seem reasonable?" That way they can evaluate it. They can't simply take it from ChatGPT, put it into an assignment, and submit it to a professor; they really have to look at it critically to see whether it seems reasonable. A lot of the classes I teach involve coding, where students write their own code, and I do the same thing in those assignments: it's your own code, and when it produces an answer, look at it critically. Does it make sense? Could there be a mistake in your code? Just because there's an output doesn't mean it's right. We have to look at the output of ChatGPT with the same eye.
Croft: In addition to the concerns, Educause also asked higher education professionals about the greatest opportunities related to generative AI use. The top answer, selected by 77% of respondents, was "improved efficiency of human work." But only 51% listed "improved quality of human work" as one of the greatest opportunities. I find that gap between efficiency and quality interesting, and I wonder, what do you think it tells us?
Ridings: I think the efficiency part means that AI-generated results give students, faculty and, let's face it, the business world the opportunity to have AI automate the repetitive and time-consuming tasks that we all have to do but don't necessarily like.
I think that, as the AI models advance, the quality will get better. The survey's results tell us that AI output isn't necessarily the same quality now as what humans produce, but I think that will change. For now, people are recognizing that the quality of AI work just isn't exactly the same as human work.
Croft: There's one more topic I'd like to make sure we cover before we wrap up here, and that's policies and standards. Universities and colleges across the country are starting to look at this, and I'm just wondering, what is Lehigh's approach to that? Are there policies and standards that need to be addressed with generative AI advancing?
Reihman: About 10 years ago, a group of faculty, staff, librarians, and technologists got together to ask a similar question about academic integrity and standards for student use. And so we're reconvening a group to look at: Does there need to be a modification of our policy around academic integrity? What counts as appropriate use, and what counts as inappropriate use?
And there are always two sides of that coin. One is educating our students so they're aware of what's acceptable. The easy answer is always: ask your professor, because it really is different from course to course, discipline to discipline, professor to professor. The other side of that coin is conversations with professors. At the Center for Innovation in Teaching and Learning, we work a lot with librarians, instructional technologists, and instructional designers to help faculty think through these questions. For some faculty members, this is what they do; for others, this is quite new. So we're trying to think about: What is an assignment that will tee students up in the right way? What is a way to talk with students and prepare them for this? ….
So the short answer is, no firm policy has been developed around this, but a lot of people have been in conversations about it, even before the arrival of ChatGPT.