In this episode of Lehigh University's College of Business ilLUminate podcast, we are speaking with Beibei Dong about research she's been conducting with her College of Business Marketing Department colleague Eric Fang on how artificial intelligence, or AI, is changing the way many of us get our news online.
Dong's research and teaching interests include digital marketing, services marketing, and international marketing. Her research has been published in the Journal of Marketing, Journal of the Academy of Marketing Science, Journal of Service Research, Decision Sciences, Journal of International Marketing, Marketing Letters, and many others.
She spoke with Jack Croft, host of the ilLUminate podcast. Listen to the podcast and subscribe to Lehigh Business on Apple Podcasts or wherever you get your podcasts.
Below is an edited excerpt from that conversation. Read the complete podcast transcript.
Jack Croft: A recent Pew Research Center survey found that 86% of U.S. adults say they get news from a smartphone, computer, or tablet often or sometimes, including 60% who say they do so often. When Americans go online to get their news, how is that news being delivered to them and who decides what they see?
Beibei Dong: Typically, we say news could be fed in two ways. One is called user subscription, and the other is AI-recommended. When we say user subscription, we refer to the situation where news stories are pushed from sources that readers have subscribed to. For example, if you use the Google News app, the app allows you to follow various news providers. This could include local communities, companies, major news outlets, and media platforms. You could also subscribe to a variety of topics, right? Such as health care, politics, education, and technology, and all this news will appear in one tab, what we call the "Following" tab.
Alternatively, news feeds could be pushed by recommendations, and these recommendations come from what we call artificial intelligence algorithms. Basically, the AI generates recommendations based on the readers' prior clicking or browsing behavior, and they are adjusted in real time as that behavior evolves. For example, the Google News app also has a "For You" page. When it says "For You," that's where Google uses AI to create digital profiles of its users and then deliver content that conforms to their preferences.
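To make the mechanism Dong describes a bit more concrete, here is a minimal, hypothetical sketch of a preference-profile recommender: each click nudges a reader's topic weights, older interests decay, and candidate stories are re-ranked toward the evolving profile. The class, field names, and scoring below are illustrative assumptions, not Google's actual "For You" algorithm or anything taken from the study.

```python
# Illustrative sketch only: a toy preference-profile recommender.
# Each click updates the reader's topic weights; the feed is then
# re-ranked so stories matching the evolving profile float to the top.
from collections import defaultdict


class NewsRecommender:
    def __init__(self, decay=0.9):
        self.decay = decay                 # how quickly older interests fade
        self.profile = defaultdict(float)  # topic -> interest weight

    def record_click(self, topics):
        """Update the reader's profile from the topics of a clicked story."""
        for topic in self.profile:         # existing interests decay over time
            self.profile[topic] *= self.decay
        for topic in topics:               # clicked topics gain weight
            self.profile[topic] += 1.0

    def rank(self, stories):
        """Order candidate stories by overlap with the current profile."""
        def score(story):
            return sum(self.profile.get(t, 0.0) for t in story["topics"])
        return sorted(stories, key=score, reverse=True)


# After a couple of clicks on technology stories, tech items rise to the top.
rec = NewsRecommender()
rec.record_click(["technology", "ai"])
rec.record_click(["technology"])
for story in rec.rank([
    {"headline": "Local election results", "topics": ["politics"]},
    {"headline": "New AI chip announced", "topics": ["technology", "ai"]},
    {"headline": "Hospital expands clinic", "topics": ["health care"]},
]):
    print(story["headline"])
```

Real systems use far richer signals than this, but the feedback loop is the same: what you click shapes what you are shown next.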
Croft: One of the things I found fascinating is that the research looks at the way we process information, and you discussed the two main ways you're examining. What are those two ways, and what do the differences between them tell us about how people reach conclusions and form opinions about topics?
Dong: In our research, basically we are saying that when you are reading news on the Google News app, it's either in the "Following" tab or the "For You" tab. We think that because there are some fundamental differences between these two models, the way you process information and navigate news will be different. And there are primarily two ways to do it. One is called central processing, and the other is called peripheral processing. This comes from a classic theory called the Elaboration Likelihood Model. The short name is ELM. ELM basically outlines two ways people process information.
In highly involved situations, people will be engaged in what we call central processing, where they spend high levels of cognitive resources to digest information in a systematic, rigorous, and comprehensive way. In the news feed context, when we're using central processing, readers care more about the comprehensiveness of the news, OK? So they will pay more attention to the core message the news is trying to deliver, and they will read that news very carefully and process it in a systematic, rigorous way.
In contrast, in low-involvement situations, people are more likely to adopt what we call peripheral processing. Peripheral basically means readers care more about efficiency, and they tend to use heuristics to quickly form conclusions. They will pay more attention to peripheral cues, which are typically not directly related to the content, to make their judgment. For example, in the news feed context, instead of going in depth into the news content itself, they may look more at the headline, the picture, and the author who wrote the news, and they will use that simple information to quickly arrive at some conclusions. These are the two different ways people process information.
Croft: You're focusing on four fundamental differences you've identified between the subscription model and the AI-driven recommendation model. Two of those, what you call source uncertainty and content relevance, have to do with the news itself, the what; and two, presentation fluency and lack of content control, have to do with the way the news is presented, which is the how. [For detailed descriptions of each of the four fundamental differences, please listen to the podcast above or read the full transcript.] When we put these four differences together, what do they tell us about the way people process information depending on whether they get their news feed from a user subscription model or one that's driven by artificial intelligence?
Dong: Actually, the four fundamental differences we talk about between the two news feed models all point to the same prediction. That is: In the AI model, viewers are less engaged, and they will opt for peripheral processing due to the high source uncertainty, high content relevance, fluent news presentation, and lower sense of control. In contrast, in the subscription model, viewers are more involved and engage in central processing because of the low source uncertainty, low content relevance, less fluent news presentation, and greater sense of control.
Croft: So should we be concerned about the increasing role that AI plays in how people get their news?
Dong: Yes. My answer would be yes. Even as we are celebrating what AI has given us, such as extremely individualized and personalized news content and a fluent, effortless viewing experience based on real-time adjustment to our mood, we need to be careful about the downsides. This includes the filter bubble. Our news world could be selective, biased, and controlled by the engineers who write the algorithms and the platforms that decide what they want us to learn. Ultimately, having a machine make the call for human brains is a bit scary.
Probably what is even more concerning is that AI reduces our cognitive effort in information processing. This could mean that AI actually makes us lazy. AI is also eating away at humans' ability for deep reading and critical thinking. While being fed everything we love to see in our comfort zone, we may gradually lose the ability to discern different views and to think independently and critically, right? Just think about the pre-teens and teenagers who are glued to TikTok these days, and TikTok is purely based on AI algorithms. We're kind of worried about the future of the younger generation in terms of their ability to think independently and critically. So this is something we need to think about as we look at how AI is interacting with humans.
Croft: Are there any lessons from your research that you think could have an impact on the future of newspapers, whether print or online?
Dong: The first thing I can say for sure is that the future of newspapers will not be on printed paper. We are moving away from it. It most likely will be online, but more accurately, it most likely will be on the mobile app, the mobile phone, as our entire world is now in a portable mode. So I think that's where we are heading in terms of the device, the medium that news comes from. The other thing is the content: I think the news will be different for everyone. It will truly depend on who is reading it, at what time of day, in what mood, using what device, and at what location.
It will be truly individualized, to the extent that it best caters to what you need and what you like. The bad news is that we're now largely in peripheral processing mode when we interact with news feeds dominated by AI. The news has to be short, colorful, and interactive to best serve readers who are looking for fast information and light reading. So we need to think about how to present the news in a way that is truly tailored to their reading preferences.
I just want to say one more thing. I want to use a quote that I really love from Stephen Hawking. He was a famous theoretical physicist and author, and he said, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."
So I think regardless of whether we like AI or not, AI is in our lives, and it is changing every aspect of our world. The future of newspapers, as we discussed today, provides a great example for understanding the interplay of AI and humans. But I think there is a lot left for us to explore in many different fields about how AI could influence us in a positive way and, hopefully, in a controllable way.