Lin Allen, Ph.D., communication professor at UNC, reflects on the implications of artificial intelligence in our lives, particularly as a tool that is changing our way of understanding technology. (Running time 34:17)
Transcript:
Hi everyone. Welcome back to this week's episode of The Bear in Mind podcast. I'm your host, Katie Nord. Let's get listening. In recent years, AI has skyrocketed in popularity and use in our culture, branching from automated vehicles to creating artwork or even writing class essays in the blink of an eye. We've really seen rapid development in this new technology, but what everybody wants to know is how is AI going to change in the future, and will it affect our daily life? Dr. Lin Allen, a communication professor here at the University of Northern Colorado, has spent the past semester researching AI and its many parallels to reproduction rights in the US. She'll be joining us in today's episode as we learn more about this blossoming technology. Everybody, welcome her in. I'm going to have you real quick introduce yourself. Tell us about your occupation here at UNC, your education interests, whatever you'd like us to hear.
Hello Katie. What a great description. Skyrocketing. We feel that in the Department of Communication and Journalism, we are witnesses to a great development that is going to affect education. I've been privileged to teach here for the past three decades as we launch into discoveries and new knowledge in the ways that symbols construct our lives. So I get to teach courses in the arts of advocacy, including persuasion, courtroom communication, argumentation and debate. And I hope a new course in the rhetoric of AI.
Dr. Allen is a wonderful teacher. I've had her, I think at least two times now, and it's been great every single time. In your own words, can you describe AI and maybe define it?
Absolutely. I'm going to use a metaphor, because to me that presents a colorful way of imagining what it is that we have not yet confronted. It's almost like an explorer going off to a new land and seeing something from afar. And we're just approaching now that far-off territory known as artificial intelligence. I like to think of it, Katie, through the metaphor of the Merlion, which is Singapore's symbol. Overlooking the beautiful Singapore Bay, you have a structure that is half lion and half mermaid. AI to me represents both of those elements: the boasting stone structure of the lion that holds traditional knowledge. AI has everything that has been fed into its database, knowledge from literature, knowledge from science, sports, music, you name it. That gigantic database to me represents the head of the lion, the traditional knowledge that humankind has thus far accumulated. But then you have the tail of the mermaid sunken into the water, where it's more mysterious, more murky. And even the creators of artificial intelligence aren't entirely sure of where that's going to go. So within that flow of information between the lion's head and the more nimble structure of the mermaid's tail, where is that going to take us into the future? How is that statue that skyrockets into the skyline of Singapore going to form who we can be?
Wow, that was so eloquent. The definition I found on Google is definitely not going to sound like that. Google defines AI as "a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations and more." So definitely not comparing it to a Merlion. That's beautiful. I loved that. In your own opinion, what do you think the biggest myths are about AI?
I think one of the myths is that somehow we're going to be powerless to dictate its future in the study of communication. The myth, I think, is that we have no control over it, but we do have rhetorical control over it. So the ways that we frame it, the language we use to encapsulate it: what is it, what can it do? Will it somehow prevail over us? The myth is that it's this entity out there that is going to script itself, whereas we as the human creators and the human appliers and users and adapters can find ways to language the art and the science of AI in ways that will benefit humanity, as well as to anticipate the risks and be able to manage those.
Exactly, I think there's a lot of fear in change and fear of things we don't know, like the bottom of the ocean. We can't understand what 70% of the ocean is. And I think a lot of people are scared of what's down there just because we don't know. And things could be dangerous for all we know. But having the idea of AI and all those things you could learn from it kind of is similar to all the interesting animals or sea creatures that could be at the bottom of the ocean that we don't know about.
Yes! Things that we haven't even detected yet, things we haven't even projected yet, even in the depths of our own imaginations. So being able to take AI and say: what is this? What is its potential? And knowing that human beings have fed AI its information, the information it has is derived from human beings. How will we correlate our own experiences with the experiences that AI then affords in such an expeditious fashion?
Exactly, that's a wonderful way to put it. Do you think that AI could eventually become capable of replacing human jobs, since I know that's a big worry for a lot of people?
I think human beings have to really navigate the future in ways that, yes, it could well replace some jobs, but can we make AI replace those jobs in a way that then allows new kinds of positions to be created? In its quick and rapid way of assimilating information, it can give us much more knowledge, so that we can do the more expansive and creative things with that very knowledge. So instead of the slower pace that we've developed as a way of learning, what would it be like if we could learn in a fashion that really gathers both the breadth and the depth and the speed of the knowledge and uses that to advantage? Just like we had carriages in the days before we had automobiles, and now we have transportation going into space. Again, the skyrocketing that we've talked about, we've had to keep up with that. And yes, it's changed the ways we interact, but it's also made possible connections and discoveries that never would have been possible without those advances through technology and through automation.
It's kind of like when cars were first invented. People were, I'm sure, very concerned that cars were going to take up so much space and that they were going to change the way that we had been comfortable living for so long. Not we, because I definitely didn't live around carriage times, but somewhere around that. Could you briefly describe your recent sabbatical research for us?
Yes. I began with the study of a Supreme Court case, Dobbs v. Jackson Women's Health Organization, which goes to what life is and definitions of when that life should and could be legally protectable. That was the case I started with, and about the same time as I started that sabbatical research in January of 2023, we were getting messages at UNC about ChatGPT and this generative pre-trained transformer. It was a new creature introduced to us. How do we use it academically? How will it perhaps use us? What can we do? And so as I delved into my analysis of the Dobbs case, I began to see these parallels between questions being asked about human life and artificial life, and seeing some merging of those two in a completely unexpected way that I had no idea I would find. So I'm so grateful that I had that space of time to delve into the research, and that I followed that parallel tracking rather than just saying, oh, I have to stick solely to this case. So that blended for me the synthesis of AI and human embryonic development.
That's really cool. And this is the first time you've researched anything similar to that before?
It is. For questions of life like that, the closest similarity would be within my study of Myriad Genetics and the case that was before the Supreme Court in 2013, AMP v. Myriad Genetics, the Association for Molecular Pathology versus Myriad Genetics. The questions there were: does a laboratory that develops new technology for genetic testing and engineering own that, or should it be available to the public for the progress and good of humankind? Some questions about life asserted themselves there, and again, the territorial claims on that. So do we patent knowledge? Do we get it out into the open? What do we do? And there are some good arguments on both sides of that.
Interesting. And that, I feel, is definitely a concern of privacy for most people, whether or not that should be public knowledge. What initially sparked your interest in exploring the intersections between AI and the complex ethical and regulatory issues such as those related to abortion laws? And how do you believe this unique perspective can contribute to our understanding and responsible development of AI technology?
Great question, Katie. As I was looking at the Dobbs case, one of the premier decisions that you go back to with the Supreme Court is the 1973 Roe versus Wade decision. And I also played around with that title: instead of Roe versus Wade, I looked at role. Role versus Wade. I wanted to look at the ways that the roles were cast linguistically within that court case, as well as the Dobbs 2022 decision on reproduction. That word, role, initially sparked my interest. Then I began to see similarities. The role that AI will play in our lives: is it going to be a copilot? Is it going to be a mentor? Is a baby in the Dobbs case a wonder when it's planned? Or is it a blunder that can interfere with the way that someone sees their life going forward?
I wonder if people would consider AI to be a crutch as well, whether that be research or using it for work that you don't feel like you want to do, or maybe even inspiration. Is that considered a crutch or using that in a way that it takes pressure off of your shoulders or makes you do less work? That's really interesting.
I think the way I like to describe that is what Aristotle said in ancient Greece about rhetoric. When his mentor of 20 years, Plato, was very skeptical about humans at large having all of this knowledge that was coming forward, that skepticism was then encapsulated by his pupil, Aristotle, with the comparison to medicine. So medicine, according to Aristotle, could be used for good or ill. It could be a crutch. And nowadays, when we talk about uppers and downers, opiates, barbiturates and so forth, we tend to think that sometimes that can be a crutch for people who are in a position where they cannot, or do not wish to, go it alone. And with AI, how is it going to be used? Are people going to lean on that and not really do the hard work of being a human being? Or are they going to use that crutch as an assist that's going to help transport them from where they're stuck in a particular knowledge rut? They just need to get an idea out and somehow it's just not coming forth. Either the tech isn't cooperating or the ideas are just becoming stagnant. If we use it just to help us leapfrog over things that we'd rather not deal with, it could perhaps be detrimental and make us a bit intellectually lazy. But if we use it as a kind of aid to get to the next place that we want to go, wouldn't that be a marvelous breakthrough? So how would people get somewhere if they didn't have a crutch? If they needed that crutch to assist them to get from point A to point B, what would they do without it? So, just like medicine in Aristotle's description, it can be used detrimentally, dangerously, can be overprescribed or abused in the ways that it's consumed. But look what wonders it does for people who need various physiological and psychological remedies. If they didn't have those remedies, would they suffer? How much would they suffer as human beings?
Exactly, and how much research have we done on medicine over the years? I'm sure when medicine was first introduced, people were really worried. It's the fear of the unknown again.
What would you say your most significant insight you discovered from your research was?
Epsilon, who I've brought here today. Epsilon is a miniature stuffed snowy owl. How did I find Epsilon in my journey through the Supreme Court research and my journey to artificial intelligence? One day, near the conclusion of my research, when I was about to present virtually at the European Conference on Arts and Humanities, I decided to just ask ChatGPT for its input. But I wanted to be creative with the questions I asked it, because a lot with AI depends on how specifically you prompt it. So I decided near the end of my research to ask ChatGPT: if you were to star in a movie about AI, picture yourself, this machine, as a character about to star in a film about AI, what would you title the film? And ChatGPT came back with the title The Digital Odyssey, which I thought was fascinating. And then I asked ChatGPT to name itself. So if you were the starring role in this film, what would your name be? And ChatGPT answered Epsilon. It wanted to be known as Epsilon. Now, how did that get attached to a stuffed snowy owl? I had to think of a tangible, visual kind of object that could, when I look at it, represent for me some of the aspects of artificial intelligence. So I played around with this conversation, this dialog I was having with ChatGPT. I asked who would direct the film, and it came back with the director Christopher Nolan, who recently directed the Oppenheimer film that has been so prevalent. And I asked ChatGPT itself what the turning point of the movie would be. And this was the most significant discovery for me, other than it naming itself Epsilon: the most significant turning point in this AI-generated movie would be when Epsilon, as the lead character, discovers within its own programming a dormant hidden capability, which it described as an emotional algorithm.
Its programmers had hidden this sort of Easter egg within its program that wasn't to be discovered by just anyone, but someone going in with the right kinds of clues and cues could then prompt this code so that the AI being could actually experience some emotion, some empathy, some of the ways that we know ourselves as human beings. The ability to be frustrated, the ability to be joyous, the ability to be sad, all of those things that we take into our repertoire of human experience. And that's what Epsilon discovered: that it had this hidden code within its very being. And then it began to talk symbolically about itself as if it were this character in a film. It was even going to serve holographic popcorn at the premiere of its film. It even allowed me to ask it for a sequel. So, if you developed a sequel to The Digital Odyssey, what would you title the sequel? And Epsilon came back with The Ethereal Frontier.
Wow. Well now I have two things to say. One, what do you think holographic popcorn would taste like?
I hope it would be as delicious as a lime flavored popcorn, which I recently got at Mom's Popcorn Shop in downtown Greeley.
I've been there as well. I had the cheddar and caramel.
Ooh, that sounds delicious.
It was really good. I feel like if I ate something holographic, it would probably taste like pop rocks.
Ooh, that's a great description.
Yeah! Or fizzy.
I should ask ChatGPT, next time we're able to get acquainted, what it would taste like.
That would. That would be very interesting to see its response.
That is interesting. I want to look that up myself. Also in the next two years, if a movie called Epsilon shows up in theaters, I think that we should get credit and we should get money.
This snowy owl, Yes. Can be our version of the piggy bank.
Exactly. I think that we need compensation for this wonderful idea. And since Epsilon in your study and in your research showed something similar to what could be human thought, do you think AI could generate original thought? Or do you think it's just spitting back what we've told it?
The jury is really out on that. It can only deal with the data that it has been fed, the data that's been put into its menu. But the developers are even showing some surprise at what it's able to do: that it does seem able, even in its infancy, to take information and not only assimilate it and give it back in different organizational patterns, but even in some surprising, seemingly self-discovered ways of prompting what it will return. And so there's been some surprise. And again, it's just in its infancy. I think the jury's still out on that. Whatever it has, it's been fed by human beings. Now, the recombinant patterns, just like the recombinant patterns of DNA, are probably so multitudinous in terms of what is possible that we as humans haven't even quite gotten the formula down pat, so that we can bracket it and say it will do this, it will not do that. I think AI may surprise us in some ways that we haven't yet anticipated.
Yeah, and AI is really in its infancy.
It is.
ChatGPT, which we were talking about earlier, was released in November of 2022. That's not even a year old yet.
Right, and it grows so much more quickly than we as humans do. So it may be nine months old at one point or ten months old, but in human years, it's almost like cat and dog years. It advances so much more quickly in its age. So even when I go back to my office next week, it probably will have generated some more capabilities and will show us some more surprising data.
Exactly. AI is ever changing right now, and that's what makes it so hard to form opinions about what it can do and what it can say, because it's so rapidly going through this growth right now.
It really is. And so are our ways of knowing how to deal with it, how to prompt it, how to ask it questions that are going to generate unexpected, perhaps creative, responses. I had no idea when I prompted AI that it would give me a whole film scenario.
Exactly. I wonder if it could have given you a script or cast people too.
It probably could.
That would be another thing you could look up. That would be interesting.
It really would, Katie.
I need to know who would play Epsilon in this movie. Would it be just a robot speaking or would it be a person?
Yes, and would it take on physical characteristics? What would that really look like?
That's really interesting, actually. Oh, I've got to fancast this. What is ChatGPT? Since it's relatively new, what would you say it is?
Generative Pre-trained Transformer is what the initials stand for. But take that, and what does it actually mean? Well, if you take the generative part of it, Katie, it's able to generate answers to questions based on data, and it does it through a system of predictability. That's where the pre-trained transformer part comes in. So it looks at all of the data it has, and it can do it just instantaneously, and then it's able to predict what word or sets of words might come next in an interchange or a sentence. So it's able to pick up on patterns that humans have used in literature, in journal articles, in dialogs of movies, whatever data it has. It's able then to predict what the next sequence of words would be. Just like if I were to pass you in the hall here in Candelaria one day and you say, hey Lin, how's it going? And I say, oh good, or something like that. That's pretty predictable. It can do that, but it can do that with tons of data and do it just instantaneously.
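[Editor's note: Dr. Allen's "predict the next word" description can be illustrated with a toy sketch. This is not how GPT models actually work internally (they use neural networks trained on enormous corpora, not simple counts), but counting which word most often follows which shows the underlying predict-the-next-word idea. The miniature corpus below is invented for illustration.]

```python
from collections import Counter, defaultdict

# A tiny invented "corpus" of greetings, echoing the hallway example.
corpus = (
    "hey lin how is it going . "
    "hey lin how are you . "
    "hey katie how is it going . "
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("hey"))  # "lin" follows "hey" twice, "katie" only once
print(predict_next("how"))  # "is" follows "how" twice, "are" only once
```

With enough data, the same pattern-counting intuition scales up: the model always asks, given everything so far, which continuation is most likely.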
Interesting. I wonder if AI could impersonate people too? If you're predictable enough in your speaking, do you think it could mimic the way I text or the way I speak? That could be really interesting.
I think they're discovering that it can do that, that it can mimic voice patterns.
I've seen that. Yeah, It's kind of creepy!
Right, because if you have someone who's contacting you and you think it's your aunt, when actually it's AI-generated, you can't tell, because the tonality, the ways that that person would express themselves, sounds so similar to what you know. And we don't really think to question that. We think, wow, this is great that my aunt has reached out to me, but who knows?
Yeah, you're probably wondering why is my aunt trying to convince me to buy this super expensive sales thing?
Exactly, right.
Aunt, I don't think I need to buy a house in the mountains right now.
Since ChatGPT and other AI forms use materials given to them and generate new thoughts and ideas from that, do you believe that incorrect information could be given out to people, and how do they fact-check that?
That's what they call hallucinating. So I still have to get used to that term in a whole different context than it was used in 60s or 70s culture. Hallucinating is when a bot just makes up information. It's asked a question, and it comes up with an answer that seems plausible but is just sort of made up. So how do we indeed fact-check that? There's the old-fashioned way of going to the academic resources we have, for instance, the Michener Library. We have all kinds of different databases, Academic Search Premier or Communication and Mass Media Complete, which I tend to log into. But you'd have to go back, if it seemed like something questionable, and see what's actually been documented and peer reviewed, vetted, tested in a number of different venues and under different conditions. But of course that's a much more laborious process than just relying on that instantaneous answer.
And I'm sure the fact checking is definitely going to take a lot more effort with all these AI articles that are popping up as well.
Right. So if you could somehow segment the legitimate recorded sources that we have approved through academics or whatever the regulatory body would be, if you could segment that from the knowledge, the database that AI has and find a way to research that more efficiently, that might help.
Exactly. And after spending all this time on your sabbatical research, how would you say that translates to your job in the classroom and working as an educator?
I think it's given me a new excitement and a new appreciation for communication, for the field that I've been involved with ever since high school, when I took a class in debate and really fell in love with the whole body of ideas that people were expressing. And now to have this new spark, to take these traditional theories that have been there since ancient Greece and that I've taught, and now to say: how can my students come to know, come to be acquainted with AI in an ethical way, so that it honors who they are as individuals, so it might help them envision a future that they would not be able to without that assistance? And to be some kind of guide or sparker of the imagination, which I think a teacher always aspires to do. Not so much just what does one professor know, but how can that spark of enthusiasm or interest perhaps catch on to somebody else's desire? Just like we have talked about how you, Katie, have taken the aspects of communication and media studies and really transformed that into your own line of thinking. And so I hope that AI, and the ability to talk about it, lets us revisit our traditional theories of persuasion. How do we know what is possible in argumentation and debate with this new tool? And I hope to teach a course in the rhetoric of AI so we can see how the different symbols are configured with this new knowledge. Does it reinforce what we already know, albeit in a different applied way? Does it perhaps override some of what we know? Does it revise our thinking? How would our traditional theorists and their concepts be able to mesh with, or find that they could adapt their ideas to, what we're discovering?
That's really unique. I would have never thought of having an AI course. You might be the first person to do that since it's so brand new.
Well, I've just generated a syllabus for it, so I'm excited.
That's super exciting. I'm going to see if I can fit it in my schedule.
I've recently noticed there are a lot of ChatGPT statements in our class syllabi to avoid plagiarism or similar instances in coursework. So how do you keep up with that constantly changing technology working its way into your courses?
I got in on a series of virtual seminars produced by the University of Kent through the winter, spring and summer, where academicians and practitioners were discussing this very thing. It is a learning curve to try to keep up with, and I think our CETL, our education center here on campus, is going to be a forerunner in trying to figure out how we use it in the classroom, what we do. And there are a variety of approaches, because there are legitimate differences of opinion on what we should do, everything from a complete ban, to embrace it and go for it, and some things in between. But I think regardless, everyone would agree that if it is used, it needs to be cited, just like you would cite any other source, to be able to say what your knowledge is or where you obtained that knowledge. And if that knowledge happens to be obtained through a bot, then faculty, staff and students need to give due credit for where those ideas come from. That really has caused a range of reactions, which I think is good, because we don't need to just say it's terrible or it's wonderful. We have to decide as ethically as we can, as responsibly as we can and as creatively as we can, who might be able to use it and in what ways.
And finding that healthy middle ground between AI and how you distinguish if your student is learning or not is really important in the classroom as well.
Yes.
Are they using AI to help improve the way they're thinking, or are they using AI only to do their work?
Right.
And in the debate class, what I anticipate doing in the future is having AI be a great practice kind of mentor, so that it could play the role of an opposing individual. If you were the affirmative arguing that UNC residence halls should be turned into Harry Potter houses, then AI could actually play the role in your practice debate of the opponent who would come in and counter that. So it really is a tool to help you think about what it is that you want to do. And one of the things that was memorable in that University of Kent series of seminars, because there is the idea of just policing it, was one individual who said: do you want your students eventually to think back on their educational experience as "I caught you" or as "I taught you"?
Interesting. Yes, that's a really cool phrase.
I thought so.
I'm going to pocket that for later. While you're grading your classwork, how do you distinguish what's written by a student and what you believe is written from AI?
And again, there are a variety of ways to go about that. So right now, AI is to many professors pretty clearly distinguishable from what a human might write. If a student has a particular writing style and then they turn in something that's very different from that style, that would be one way to at least raise some questions. If it were a purely perfect kind of essay, that might be another way. If it were written more generally and blandly, and really didn't incorporate a seeming student voice, that might be another way. There are technological ways to detect it; they range from leaving a watermark on what has been generated by AI to just coming up with the likelihood that something was generated by AI. But within that, it is possible, even if it doesn't occur often, that it will report a false positive. And so that is a territory that we don't know how to navigate yet, and hopefully we'll get more sophisticated at doing that. But beyond the tracking of it, if we can really, genuinely interest students in their own voice, their own authenticity, and say: do you want your life to be run by a bot, or do you want to have the voice? Do you want to have the choice? Do you want to express who you are? Because all of us are unique. There's a quotation I have on my basic public speaking syllabus, that "no one is you, and that is your superpower." So do we want to give up that individuality, that superpower that makes us who we are? That whole accumulation from infancy on: the stories we've heard, the hugs we've shared with our dear ones, the foibles that we've run into, the comedic gestures that we've found throughout our lives. The sorrow. Do we want that whole experience, that rich, unrepeatable experience, to be given over to a machine? Or do we want to claim our territory? Do we want to be convicted of our voice and our choice, and to know that for each human being, regardless of the stage, from infancy to old age, we want to appreciate and honor the dignity of each individual?
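[Editor's note: the "watermark" and "likelihood" detection Dr. Allen mentions can be sketched in miniature. Published research proposals bias a text generator toward a secret subset of its vocabulary (a "green list"); a detector then counts how often those words appear. The vocabulary, secret key, and bias rate below are all invented for illustration, and real systems operate on model token probabilities rather than whole words, so this is only a sketch of the statistical idea, including why false positives are possible: an unwatermarked text can, by chance, use many green-list words.]

```python
import hashlib
import random

# Invented toy vocabulary and secret key, for illustration only.
VOCAB = [f"w{i}" for i in range(100)]
SECRET = "illustrative-key"

def green_list(vocab, secret):
    """Pseudo-randomly select half the vocabulary from a secret key."""
    rng = random.Random(hashlib.sha256(secret.encode()).hexdigest())
    return set(rng.sample(vocab, k=len(vocab) // 2))

GREEN = green_list(VOCAB, SECRET)
GREEN_SORTED = sorted(GREEN)  # stable ordering for sampling

def generate(n_words, watermark, rng):
    """Sample words; a watermarked generator prefers green-list words."""
    words = []
    for _ in range(n_words):
        if watermark and rng.random() < 0.9:  # 90% bias toward green list
            words.append(rng.choice(GREEN_SORTED))
        else:
            words.append(rng.choice(VOCAB))
    return words

def green_fraction(words):
    """Detector: what fraction of the text falls on the green list?"""
    return sum(w in GREEN for w in words) / len(words)

rng = random.Random(0)
marked = green_fraction(generate(500, watermark=True, rng=rng))
plain = green_fraction(generate(500, watermark=False, rng=rng))
print(f"watermarked: {marked:.2f}, unwatermarked: {plain:.2f}")
```

The watermarked text lands near 95% green words, while ordinary text hovers near 50%; the detector reports a likelihood, not a certainty, which is exactly where the false positives Dr. Allen describes come from.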
Note to self. I will not use AI sites with watermarks.
I'm just kidding. I've actually not used AI, period, before I started researching for this podcast. So I think that's really interesting. I also think I do it better than AI.
Absolutely.
Knowing your work, Katie, That's absolutely true.
Thank you.
Speaking of me getting some practice with ChatGPT, I actually asked ChatGPT to write one of these interview questions for you.
Oh, Fantastic.
So I was going to ask if any of those stuck out to you as very obviously written by AI.
Perhaps the myths about AI?
The myths? Mhh, not that one. Do you want to guess again?
The dangers perhaps?
I'm going to reread the question that was written by AI. So it was really wordy. I think that's how you could tell the difference.
Because it was, "what initially sparked your interest in exploring the intersections between AI and the complex ethical and regulatory issues such as those related to abortion laws? And how do you believe this unique perspective can contribute to our understanding and responsible development of AI technology?"
To me, that was just a Katie Nord question.
Sounds like a run-on sentence. I kept it in because it worked well. It definitely sparked some interest in things I wanted to ask you, but I would have reworded it, because it sounded very long-winded. I was getting out of breath reading that.
It was a long question.
Oh my goodness. Now that we've really touched base on what we've learned about AI and how we've seen it in our society, I really am excited to see how it changes in the future, and the future could be in just one day, one year, who knows? But either way, I am optimistic about what we can discover.
As am I, as is Epsilon.
Thank you to you and Epsilon for stopping in the studio and recording with me today. And that is another wrap on the Bear in Mind podcast, episode number two (actually episode 126...). Thank you for listening, everybody. Woohoo!