In this episode of Lehigh University’s College of Business ilLUminate podcast, we are speaking with writer, tech journalist and author Brian Merchant, who delivered a guest lecture as part of the College of Business Year of Learning on the topic, “The Working Limits of Technology: What Happens to Us While the Robots Are Coming for Our Jobs.”

Merchant is the author of the upcoming book Blood in the Machine: The Origins of the Rebellion Against Big Tech, due out in September. He also wrote a bestselling book about the iPhone, titled The One Device, and writes a tech column for the Los Angeles Times.

Merchant spoke with Jack Croft, host of the ilLUminate podcast. Listen to the podcast here, and subscribe to and download Lehigh Business on Apple Podcasts or wherever you get your podcasts.

Below is an edited excerpt from that conversation. Read the complete podcast transcript [PDF].

For a related podcast, check out The Impact of AI on Universities and Students.

Jack Croft: In your previous book, The One Device: The Secret History of the iPhone, you chronicled the development of Siri, which you called, "Maybe the most famous AI since HAL 9000," a reference, of course, to the classic Stanley Kubrick film 2001: A Space Odyssey. In the six years since you wrote that book, have you been surprised by how fast or how far AI has come since Siri was launched?

Brian Merchant: I think it's safe to say that Siri is no longer the most famous AI in this day of ChatGPT and DALL-E and this latest AI boom. So I think it's a really interesting question. I would not have predicted the level of saturation with AI that we've seen, starting last year in 2022 and really kind of taking off this year.

I am interested in just how much of a moment it's having right now. These systems can do some amazing things, but they also have some very real limitations. There is a hype factor that is propelling a lot of the conversation right now, especially with image generation and questions of disinformation and whether or not these things are actually sentient. It can be hard to parse how much of that hype is real and how much is maybe an advanced form of marketing that these companies are enjoying. So there's so much to chew on here.

And no, if you had asked me six years ago whether it would be the topic that took over the tech world, taking everything by storm, I don't know that I would have said this is exactly how it would happen. But I think, technologically, it's very much in line with what we were seeing six years ago.

Croft: I mentioned HAL 9000 … What was your reaction to the unveiling of Microsoft's new Bing chatbot that seemingly developed a rather thin-skinned and even menacing alter ego referred to as Sydney?

Merchant: I have Bing's ChatGPT AI search on my computer, and I've been using it as well. I never quite got it to do anything as strange as it did for Kevin Roose, the New York Times tech columnist who had an interesting late-night experience with the bot. So no, it didn't try to get me to leave my wife. It didn't try to appeal to its own sentience or get strange. What I think is interesting is that, immediately after that story went viral, Microsoft kind of turned off the tap, right? The very next day, and even I noticed this, it became much more like a search engine.

Right after that story went viral, it had some negative connotations. I actually think that part of the scariness element is something that a lot of these AI companies like OpenAI are quietly encouraging, because the more afraid of it we are, the more power the system seems to have, and the more likely it is that certain companies are going to want to tap into that power.

Croft: One of the things I found interesting in [The] One Device, when you were talking about the development of Siri, was this idea that artificial intelligence is really one of our-- I think you called it an age-old fantasy and ambition of the human race. And I think my favorite example you gave was the line - I'm going to quote it - "Mary Shelley's Frankenstein was an AI patched together with corpses and lightning." [laughter]

In recent weeks, obviously, a lot of people have been sounding the alarm. And I want to read just one paragraph from an Ezra Klein column. You had mentioned the scariness, so this will get us into that:

One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.

Is that really the choice we face?

Merchant: I am going to say no, it's really not, and I think for one reason. You can understand why there is this level of concern, especially for those with a little bit less technological acumen or understanding of what's actually going on here. It's easy to get kind of swept up in the fear.

Again, as I mentioned, I think this is a fear that a lot of these AI services and companies are quietly encouraging in some ways as almost a soft marketing tool, you could say. But what we really have to do is, I think, prepare more economically than societally. I mean, you can't really separate the two. But I don't see AI as sort of a danger that's going to run amok. I think those fears are overstated.

Now, are a lot of companies going to start using AI at different layers of management to perform certain tasks that are going to reorganize the way we work or put more competitive pressure on certain workers? Yes, and that's exactly what OpenAI and a lot of these other AI companies are hoping will happen. They're hoping that people who do data entry or copywriting, or even programming in some cases, are going to start using, and paying to use, their tools to do these things.

So I do think there is a level of preparation we have to make. Disinformation is going to be a real concern, because we've already seen some of these images spreading that look pretty real, and that technology is going to get better. There are still telltale signs, right? If you look at the hands, there's still weird stuff going on that the AI can't quite get. But we can assume that, in a few years, those will be pretty hard to tell apart from the real thing.

So there are real concerns, but I think we don't do ourselves any favors when we start operating from the assumption that AI is going to be sentient, that it's going to take over. That is still in the realm of science fiction. These are still computer programs operating within the parameters they have been programmed to operate within.

A lot of times, the companies that make them have an incentive to make them seem as real as possible, and they approach this uncanny valley that can make people uneasy. But we're nowhere near the point where we're going to have to contend with a sentient AI. That part of the equation is not in the realm of concern. Again, we do have a lot of policy questions to answer around disinformation, labor questions, the leverage that these tools are going to give companies, but let's not get ahead of ourselves.

You mentioned Frankenstein, and I actually returned to Frankenstein in my new book. I went deeper into the history of [AI] when I was researching Blood in the Machine. And I talked to some researchers and historians of AI and even a zoologist, Dr. [Antone] Martinho-Truswell, who, I thought, gave a really interesting definition of what separates us from other animals.

And his theory of what makes us human is that we are the beast that automates. A lot of animals, primates, use tools, but they don't advance those tools to the point where we could say they're doing even crude automation. And I think that's a really interesting definition of what makes us human. And it points to how baked in this impulse to advance these tools is, to continually find things that will make our work easier, to automate the tasks that we need to do.

He points to the bow and arrow as the very first example, in ancient Egypt, 2800 BC or so. Sure, it's great to throw a pointed object at your prey, but imagine how much better it was for the first person who could just pull a string, point it, and launch it, and have that work done for him or her. It's a really, really interesting theory that gets at what propelled this will to automate.

And now, all these thousands of years later, here we are with text generators and image generators that let us automate the production of entire artworks or bodies of text with the click of a button. So it is an interesting contextualization that I think we should bear in mind. We do want to do this. So the questions are: What are the complications now? Why do we feel so much tension around these issues? And can we untangle that?

Rob Gerth

Rob Gerth is director of marketing and communications at Lehigh Business.