
In this episode of Lehigh University’s College of Business ilLUminate podcast, we are talking with Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, about artificial intelligence (AI) and the future of work.
Krishnan was on campus recently to deliver the Lehigh College of Business Year of Learning Lecture on this year's topic, Generative AI and Business: Opportunities and Challenges. He is currently Faculty Director of the Block Center for Technology and Society, a university-wide center at Carnegie Mellon examining the societal consequences of technological change. And he is an expert in data and decision analytics and digital transformation.
Krishnan spoke with Jack Croft, host of the ilLUminate podcast. Listen to the podcast here, and subscribe to and download Lehigh Business on Apple Podcasts or wherever you get your podcasts.
Below is an edited excerpt from that conversation. Read the complete podcast transcript [PDF].
Jack Croft: In the broader context of the business environment, as it is today and as it's rapidly changing, what have you seen that makes you optimistic and hopeful that AI really can be a powerful productivity enhancer for companies?
Ramayya Krishnan: I mean, just a few examples, right? I think beyond productivity, it can increase access and opportunity. Take Khan Academy, which is AI-powered education. Not every school has instructors who can deliver AP-level courses, but with Khan Academy, kids get access to really high-quality instruction. And initially, it was all just videos that Sal Khan produced.
But now … there's been an integration of AI into Khan Academy's content so that you have tutors that are AI tutors that help students learn, be it math, be it history, be it English. That's a great example of increasing access and opportunity.
A second example is in health. You have physicians engaging in consults with patients. So if you are my physician, there's a startup from CMU [Carnegie Mellon University] called Abridge that actually listens in on the consult and, at the conclusion of the consult, creates a summary, a transcript of what the physician and the patient discussed. The physician has to review it to make sure that it correctly summarized the content of the consult, and then uses that summary to fill in what the electronic health record system needs and what the claims system needs.
And that supposedly returns six to eight hours a week of what they call pajama time to a physician. This is the time that the physician spends at the conclusion of their workday filling in and providing summaries of the various consults they've had during the course of the day. Six to eight hours per week is a lot of time for physicians.
So those are two examples, one in education, one in health. And you can think of numerous other examples of AI yielding productivity benefits. I think the key issue is that as you think about responsible deployment of AI, you have to think carefully about deploying it in ways that are cognizant of the potential intended or unintended consequences that might come about from that deployment.
So there's a risk management piece that needs to be thought about as well. There is productivity, but you also need to consider how to assess and manage the potential risks of deploying AI. The simple example is the hallucination example I gave you: making sure that you don't accept everything it says, because generative AI in particular is what's called a stochastic system, meaning it makes predictions, and those predictions can have errors in them. While the number of errors is decreasing, it can still make errors. And some of them could be serious, especially in high-stakes settings.
Assessing these systems is still an ongoing effort; we are still figuring out how to reliably get assurance on them. It's an area where we are still learning how to do it in a way that would give us complete confidence that the AIs being deployed are being deployed responsibly.
I think most of the deployments you're seeing today are in settings where the benefit-cost assessment, weighing the productivity gains against the potential risks, favors deployment; that's where you're seeing most of the applications of the sort I was describing. You're not seeing it deployed in truly high-stakes areas, say loan granting, where explainability is an important criterion and models that are this hard to understand are therefore not being deployed. Or in diagnosis, where there are proofs of concept underway, but they're not being used in production yet because we are not fully confident about how to assure these systems in high-stakes settings.
Croft: Now, one of the main concerns that workers in particular have with AI is, "Oh my God, the machine's going to take my job." So first of all, how much of a threat is that? And what categories of jobs will probably be most vulnerable to having all or part of the work taken over by AI? And how do we as a society balance that against reskilling, retraining, and the other things, so that people aren't falling by the wayside?
Krishnan: This is a huge question for policymakers, right? Let me first start by saying it's really hard to predict how this will play out. Let me give you two examples. We've all seen the advent of automated teller machines [ATMs]. When they first came out in the '70s, you'll remember that everybody said there were going to be no more tellers in banks. And if you look at the number of tellers in banks today, we have more tellers than when ATMs came out in the '70s.
Are they doing the same thing that tellers in the '70s did? No. They do more than debits and credits. They do marketing and selling of products to customers. So even though the job title has remained the same, the set of tasks and skills it requires has evolved. That's potentially one kind of future that might emerge.
Another is on the turnpike. Since we're both in Pennsylvania, I can talk about the Pennsylvania Turnpike and the toll booth operators. You remember they used to hand out those tickets when you entered, and you would pay when you exited. Are those folks in those booths anymore? No, they aren't. That job has been entirely automated.
So you have one example of automation where the technology is reliable enough to perform everything that role called for, at a skill level and a reliability level that is acceptable, in which case the role disappeared. But on the other hand, with ATMs, you actually have more tellers, like I just told you, right?
So I think you're certainly going to see some roles that are going to be like the toll collector roles. In fact, OpenAI published an article co-authored by a colleague of mine at Penn, Daniel Rock, that said 80% of the U.S. workforce could have at least 10% of their tasks affected. Affected. That doesn't mean 80% of the workforce is going to be displaced; it means 80% of the workforce will have at least 10% of their tasks affected, and 19% of workers may see at least 50% of their tasks impacted.
So the question is, what does that mean? What do "affected" and "impacted" mean? Do they mean augmentation? Do they mean automation? I think these studies certainly get people thinking, "Hey, the machine's going to take my job," which is exactly what you started out with. But I think it's hard to predict what exactly may come about.
I think you will certainly see some jobs go the way of the tollbooth operator. I think some new jobs may come about that you and I are not even contemplating right now. And then you might have other jobs that get modified in ways that will require new tasks and new skills, like the bank tellers after ATMs.
Under that state of the world, my view is that we as a society need to monitor instead of predict, because predictions can go wrong here. We need to monitor so that we can respond quickly. That monitoring has to draw on both public and private data systems. By public I mean sources like the Bureau of Labor Statistics, which the Department of Labor runs; that's certainly one source of data.
But LinkedIn, Indeed, ADP, which does payrolls, Burning Glass: these are private entities that have a lot of data about the labor market. The private sector data sources are not representative, nor are they complete, but they're very high frequency, meaning that you get very quick feedback about what's going on in the market. The public sector data sources tend to be representative and complete, but they tend to be slow.
So I think by blending the two and monitoring that data, we'll have a much quicker sense of what's happening in the labor market. And that will allow policymakers and firms to ask the question, "What kinds of programs are going to be best able to help people acquire the new skills required by the new economy?"
And we'll have to combine that with some kind of safety net for people who are older, who might not have that much runway ahead of them, and for whom upskilling and reskilling may therefore be more challenging. So we need wage-insurance types of public policy that give people the wherewithal, the funding, to acquire these new skills, because it costs money to acquire new skills.
So it's a combination of giving people the time and money to acquire new skills and, at the same time, monitoring the labor market to see what's going on. Because we are, I suspect, going to go through changes in the labor market. And we should be proactively monitoring it, both as business leaders and as policymakers, to best support workers in managing this transition.