Dr. Erik Gustafson: How Artificial Intelligence May Shape Future Reality

Dr. Erik Gustafson

What are the unforeseen consequences of integrating artificial intelligence? Erik Gustafson, Ph.D., explains AI’s influence on everything from industry to daily routines. This is a deep dive into how tools like ChatGPT are reshaping our understanding of knowledge and complicating people’s ability to sort fact from fiction. Confront the privacy and security conundrums AI brings to the table, and tackle its potential to sway political opinion – a particularly pressing issue in an election year.

Gustafson begins with AI’s disruption of the norm and ends with a reflection on its paradoxical role as both a harbinger of progress and a source of significant ethical challenges. This discussion transcends simple good-versus-evil binaries and instead examines AI’s intricate influence on society.

TRANSCRIPT

LANDESS: 0:04

It’s predicted that this year, we’ll see artificial intelligence transform industries and redefine human interaction with machines, but the technology also faces serious ethical challenges. I’m Mike Landess. To further discuss the benefits and potential downsides of AI, UT Tyler Radio Connects with University Assistant Professor of Communication Dr. Erik Gustafson. Endless possibilities with AI and huge ethical challenges, am I right?

GUSTAFSON: 0:29

Absolutely, and thanks for having me on, Mike. I think when we get to the conversation of ethics, oftentimes we’ve already had those changes happen, and the questions surrounding ethics, I think, really are at a flashpoint on college campuses particularly, and then sort of leaking out into other industries as well, where we first need to figure out what even are those challenges. What are the questions that we need to ask? While I think some people in a number of different industries, very brilliant people all around, have isolated some of those questions, we still have them tinted with the different emotions that come from something so new. We’ve yet to completely remove ourselves from the equation, or at least do the best we can to assess these things on their own merits as opposed to on our own fears or excitements about them.

LANDESS: 1:32

Perhaps the flashpoint of this international conversation came with the demonstrations of what ChatGPT was capable of. Tell us more about the upside and downside of this free-to-use AI system.

GUSTAFSON: 1:39

Well, one, ChatGPT — if you’ve ever peeked into it — is immensely helpful. It’s a great tool and a great aid for helping to find information, to conduct research, to learn. But it also poses a lot of different issues, because it replaces those skills that we used to have to exercise on our own: those research skills, that identifying what is credible and what’s not. With every new technological development, we get a new way of knowing, a new way of coming to know or creating knowledge, which means we lose a past way. And I guess we are in the stage of figuring out: is that new way of coming to knowledge and creating knowledge that ChatGPT allows us good, bad, ugly, somewhere in between? What do we do with it?

LANDESS: Sounds like it can be all of those things.

GUSTAFSON: Yeah, I had a great colleague who once said, “Yes, but.” Or in Communication Studies, we always say, “It depends; it’s contingent on context.” All the time.

LANDESS: 2:53

It depends. In a day and age in which we are bombarded daily with all kinds of information that may not have been professionally vetted, how will we know for sure, going forward, what is true and what isn’t?

GUSTAFSON: 3:05

I’m not sure we will, especially with the upcoming election, when we think about the different sorts of campaign messages that have already bent the truth intentionally. And then with the help of software like this that can exacerbate those difficulties of identifying what’s true and what isn’t, I’m not sure we will for a little bit. Right now, we have different fact checkers or AI checkers that have developed rapidly in concert with these technologies, but we also have programs that actually strip AI-generated content of the markers that would be caught by the detectors. So in concert, we have all these technologies and software specifically sort of running together, and they’re leapfrogging well ahead of our questions right now.

LANDESS: 3:59

I’m thinking of the 1950s and ’60s sci-fi movies in which machines would take over the world. They’d start talking to each other, and then they didn’t need humans anymore. I mean, is that even technically possible?

GUSTAFSON: 4:14

I think some of the most interesting predictions or sort of explorations into what the future will be seem so far-fetched that we can’t fully grasp them. They seem so fantastic, so far off. And yet if you transported someone from the ’50s or ’60s to here, they’d probably go, “What is all of this?” And I think we will have the same realization in another 50 or 60 years. So all of those at the time of their writing seemed so far-fetched. And then, all of a sudden, in the ’50s and ’60s — if we talk about the Turing machine and think about sort of this birth of computing, this birth of the first artificial intelligence, if you will — we already had it then, and it’s finally exploding now, 70 years later. I think we’ll see that with a whole host of other technologies that are running alongside this.

LANDESS: 5:19

It’s said that security and privacy are essential requirements of developing and deploying AI systems, but that’s also the biggest problem facing AI. It would feel a little bit like the foxes are essentially guarding the hen house at this point in time. Is that an overestimation of it?

GUSTAFSON: 5:36

I don’t think it’s an overestimation. I think it may be a way of characterizing exactly what’s always been the case, which is, when we look at technological developments, especially those that push the frontiers of our understanding of the world, we often see war and security at the forefront. One of the technologies running alongside artificial intelligence that is said to supercharge it in the next decade or two is quantum computation, which represents a fundamentally different way of computing from classical computation. But our original impetus for developing quantum theory and quantum mechanics was to create the atom bomb. It also helped us create MRI machines, helped us create the transistor, which was the fundamental unit for classical systems. But oftentimes, we do see the foxes are the ones funding the research that makes these things possible and oftentimes pushing those boundaries. And we get to find out about it later.

LANDESS: 6:43

So we’re just months away from what promises to be a very contentious presidential election. You mentioned this a moment ago. In theory, a voter in Smith County could get a phone call with Joe Biden’s voice asking them to vote for Donald Trump. It may sound ridiculous, but technically that’s possible, right?

GUSTAFSON: 6:59

You think it sounds ridiculous. Actually, last semester, I had a student create an AI-generated podcast, and he used the voices of Joe Biden and Donald Trump for the two podcast members. And you would be shocked at how good this sounded. It’s absolutely a possibility. It’s more than a possibility. It’s most likely a probability. And, to your earlier question, can we tell the difference? Sometimes not. Sometimes, yes. It’ll develop along with that human touch of the people in these campaigns saying, “Oh, if we tweak that, this will sound better and maybe no one will know.” It’s really hard to tell, but it’ll be very interesting.

LANDESS: 7:46

Yeah, exactly. Deepfakes with video are a little scary. I see some of them being used for comedy bits. You see those on the internet. They involve those two men, the president and Donald Trump. But someone with enough money and a determined enough agenda could certainly get a lot of misinformation and disinformation out there.

GUSTAFSON: 8:03

Absolutely. I think with prior elections, we’ve seen that explosion of this idea of fake news. And to our college students, to younger individuals, it’s second nature to them now to question these things, whereas some of us who are older have a touch more faith in them. But I think when we see these deepfakes, especially with video and audio, what we’re seeing is fake news pushed to its extreme. And it might reverse into its opposite, to use a phrase from Marshall McLuhan, a media scholar from the 20th century. If we take this so far, we push it into its opposite: instead of simulating these voices and crafting a message that creates trust, we put all these messages out there, and none of them create trust, accomplishing the opposite of what we thought.

LANDESS: 8:53

There’s been a call for international collaboration and ethical standards to take place. How quickly could safeguards be established and put into place, should that happen?

GUSTAFSON: 9:03

For the election, I’m unsure. For the legislative system in general, there have been talks that have started. The European Union just pushed an act through that mandates AI transparency, levies significant penalties, and sort of attempts to safeguard these things. In the U.S., we are still working on them, I believe. But legislation always travels slower than technological development. So whether the right safeguards were put in place prior to the election will be something we’ll see soon.

LANDESS: 9:43

Just over 30 years ago, the World Wide Web went into the public domain, a decision that fundamentally altered the entire past quarter century. Are we fretting over the unknowns about AI in the same way that some did about the internet years ago? Or are the concerns about AI more substantial?

GUSTAFSON: 10:01

I think if you look to any technological development, you’re going to find anxieties with it. If you go back to the ancient Greeks, Socrates bemoaned literacy because it removed knowledge from the human mind — and how do we know if someone’s smart if they can’t remember it? Try to tell a college student that today. And the internet is a great example, too, because we find its roots first being developed by the military to share documents in the late 1960s. And then in 1983, they switched a protocol, which is what we consider sort of the birth of the internet. And then, 20 or 30 years later, we see it integrated into every single facet of our lives. So I don’t think the concerns about AI are unwarranted. I think we’re just now realizing that this has been a long time coming — if only we could learn about these things before they exploded onto the scene, I guess.

LANDESS: 11:06

Any final thoughts you’d like to share about AI?

GUSTAFSON: 11:10

AI. It’s the next big scary thing. When we ask ourselves questions about it, it’s not productive to say this is awful or this is amazing. It’s more productive to weigh both of them. AI represents the tip of the iceberg for me.

LANDESS: 11:25

Thanks for listening as UT Tyler Radio Connects with Dr. Erik Gustafson of the University’s Department of Communication. For UT Tyler Radio News, I’m Mike Landess.

(Transcripts are automatically generated and may contain phonetic spellings and other spelling and punctuation errors. Grammar errors contained in the original recording are not typically corrected.)