Shakespeare Unlimited podcast

Artificial Intelligence Goes to English Class, with Jennifer Black, John Ladd, and Laura Turchi

Shakespeare Unlimited: Episode 204

The Folger: Hey ChatGPT! Could you write a six line Shakespearean monologue in iambic pentameter about an interview with Jennifer Black, Laura Turchi, and John Ladd about the challenges and opportunities that ChatGPT presents in the English classroom? Thank you!

ChatGPT: Of course, I’d be happy to write a Shakespearean monologue on that topic! Here it is:

Oh, how ChatGPT may bring the bard to light,
And in the English classroom, set things right.
With Jennifer, Laura, John to lead the way,
They’ll show the world what ChatGPT can convey.
For in its code, a wealth of knowledge lies,
And students learn, to their great teachers’ surprise.

Based on that monologue, OpenAI’s ChatGPT tool obviously believes that it can be a positive addition to the classroom. Should you share its optimistic outlook? Our host Barbara Bogaev talks with Jennifer Black of Boise State University, Laura Turchi of Arizona State University, and John Ladd of Washington and Jefferson College about the challenges and opportunities that artificial intelligence tools offer English teachers.

Listen to Shakespeare Unlimited on Apple Podcasts, Google Podcasts, Spotify, SoundCloud, or wherever you find your podcasts.

Dr. Jennifer Black is a Lecturer in English Literature at Boise State University, where she teaches a broad range of undergraduate courses in literature and humanities. Her most recent publications focus on teaching Shakespeare online, leadership and ethics in Shakespeare’s plays, and flipping the college classroom.

Dr. John R. Ladd is an assistant professor in Computing and Information Studies at Washington & Jefferson College. His teaching and research focus on the use of data across a wide variety of domains, especially in cultural and humanities contexts, as well as on the histories of information and technology. He has published essays and web projects on cultural analytics and humanities data science, the history of data, and network analysis.

Dr. Laura Turchi is a teacher educator specializing in English Language Arts. She co-authored Teaching Shakespeare with Purpose: A Student-Centered Approach (Bloomsbury/Arden) with Ayanna Thompson and recently completed Teaching Shakespeare with Interactive Editions (forthcoming from Cambridge University Press). Turchi is Clinical Professor in English at Arizona State University, where she directs curriculum development for “RaceB4Race: Sustaining, Building, Innovating” at the Arizona Center for Medieval and Renaissance Studies.

From the Shakespeare Unlimited podcast. Published February 28, 2023. © Folger Shakespeare Library. All rights reserved. This episode was produced by Matt Frassica. Garland Scott is the associate producer. It was edited by Gail Kern Paster. Ben Lauer is the web producer. Leonor Fernandez edits our transcripts. We had technical help from Shane McKeon, Kristin Vermilya, and Voice Trax West in Studio City, California. Final mixing services provided by Clean Cuts at Three Seas, Inc.


AI Meets Shakespeare

To illustrate this episode of Shakespeare Unlimited, we asked DALL-E, OpenAI’s image-creation tool, to create portraits of Shakespeare.

We started out asking for images that met the masthead specifications for our new website: “A 1800 pixel by 600 pixel illustration of Shakespeare working on a laptop, sitting on a cliffside in a jungle.” But DALL-E would only make square images, so we went in a different direction, asking it to create a series of images that played with Shakespeare and technology. Check out the gallery to see how DALL-E responded to prompts like “3D render of William Shakespeare trapped in a CPU,” “William Shakespeare piloting a spaceship in the style of Cowboy Bebop,” and “Robot William Shakespeare in the style of John Singer Sargent.”


Transcript

MICHAEL WITMORE: How do you teach Shakespeare in a world where artificial intelligence makes plagiarism undetectable?

From the Folger Shakespeare Library, this is Shakespeare Unlimited. I’m Michael Witmore, the Folger Director.

It used to be that text generated by a computer sounded clunky, awkward, and artificial.

[CLIP from a sample of computer-generated text.]

ROBOT VOICE: My name is Dr. Sbaitso. I am here to help you. Say whatever is in your mind freely. Our conversation will be kept in strict confidence.

But now, artificial intelligence tools like OpenAI’s ChatGPT are much more powerful. Trained on vast amounts of text, these AI chatbots can generate text that reads as if it were written by a human being. For example, if you ask it to “define artificial intelligence in the style of William Shakespeare,” you’ll get something like this:

It is a mind, yet not of flesh and bone,
Whose thoughts and deeds are all their own,
It learns, it grows, it does create,
A marvel of our modern state.

With algorithms and with code,
It mimics thought, and quite well showed,
The power of the human brain,
Now harnessed in a machine’s frame.

That’s kind of fun, but AI can do a lot more than write middling poetry. It can also write term papers. When OpenAI released ChatGPT in the middle of the academic year, teachers and professors of literature scrambled to adapt their methods of evaluation. After all, plagiarism is a lot harder to spot if every word of a paper has been generated from scratch by a computer.

Beyond the challenges, AI also presents opportunities for teachers. What would a classroom exercise look like that intentionally involved AI chatbots in a conversation? What if students were encouraged to take AI-generated responses as a starting point in their critical work?

To explore some of these questions, we’ve brought together a panel of scholars who’ve given a lot of thought to pedagogy. Jennifer Black of Boise State University and Laura Turchi of Arizona State University led a workshop at the 2022 Shakespeare Association of America conference about teaching Shakespeare in the general education classroom. And John Ladd of Washington and Jefferson College will lead a seminar on Early Modern data at the upcoming SAA conference.

They’re interviewed by Barbara Bogaev.
———–

BARBARA BOGAEV: I do have to think everyone knows what ChatGPT is by now. But for our listeners who’ve been on silent yoga retreats for the past month or in a coma or something, what is ChatGPT and how does it work? John, I’ll ask you since you teach computer science.

JOHN LADD: Sure. ChatGPT is based on OpenAI’s large language model, which is called GPT-3. It predicts word distributions. It can give a, I guess, realistic-sounding response to human questions. The way ChatGPT works is you give it some kind of a question or prompt, and it responds with full sentences to whatever you have asked it.
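To make that prompt-in, response-out loop concrete, here is a minimal sketch in Python. It assumes the openai package as it existed in early 2023 and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are purely illustrative.

```python
# Minimal sketch of sending a prompt to a chat model and printing its
# full-sentence reply. Assumes the early-2023 openai package interface
# and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Define artificial intelligence in the style of William Shakespeare."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,        # higher values produce more varied wording
)

# The API returns one or more candidate replies; take the first.
print(response["choices"][0]["message"]["content"])
```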

BOGAEV: That was great. Thank you for that. And a lot of us, I’m sure—and definitely the three of you—have been playing around to see what this can do.

I had to do something for Valentine’s Day, so I had it write Valentines for my husband in the style of a Yelp review and a Taylor Swift song—that was offered in the New York Times. Really disappointing, I’ve got to say. But what about you folks? What have you done with it? Laura, I’ll ask you that.

LAURA TURCHI: I have a son who’s a poet and he’s been feeding in various prompts to try to write poems in the style of Larry Levis and Ellen Bryant Voigt. So, we’ve been talking about that, mostly from how disappointing it is, actually, what a surface resemblance there is, but what is missing? That’s been really inspiring to me in terms of thinking about teaching with this or teaching not so much about it, but with having ChatGPT in the universe and what that’s going to mean.

BOGAEV: And we’re going to talk about that. Yeah. It is interesting. And Jen, how about you? What have you tried on it?

JENNIFER BLACK: I’ve mostly put in prompts that I give to my students to see what kind of responses ChatGPT might write. And I’ll say, with some of them I thought, “Oh, this is pretty mediocre. I really hope my students write something a lot better than this.” But with some of them, they look a lot like the things that my students write. It’s gotten me really thinking about how this might change my teaching.

BOGAEV: Also, what we’re going to talk about for the rest of the hour. But what was your prompt?

BLACK: Well, I put in a prompt for one of my classes where we had just read three different texts, and students were supposed to try to find a common thread across the texts.

I asked it to write a paper comparing Bashō’s Narrow Road to the Deep North with Wu Cheng’en’s Journey to the West and Chikamatsu’s Love Suicides at Amijima. It’s a world literature class.

I said, “Find a common thread across these in at least 500 words using quotes.” And it wrote a pretty decent five-paragraph essay. I think if I were a high school teacher, I would probably say, “Oh, good structure. There’s a thesis. It used quotes.”

But it said, “They all have in common that there’s a journey.” So then I rewrote the prompt and then said, “Not a journey,” to see what it would come up with. And it said, “Oh, all three are reflective of the human experience.” And I’m like, “Oh, come on.”

BOGAEV: D-minus.

BLACK: Exactly. In fact, I went and talked to my class about it and just said, “This is what I did. I’m really glad that you guys wrote papers that were a lot more interesting than that.”

BOGAEV: What does this mean for homework for you? Are you thinking about how to change your essay assignments to make sure students write them themselves? Or only have them write in class on special computers that monitor their computer activities so you make sure you’re not using these bots?

BLACK: Not so much, in that way. I mean, I’ve thought a lot about and actually done a fair amount of research about what makes students plagiarize or what makes them turn to other people’s writing instead of their own.

I think that, you know, those same issues apply here in terms of when students feel like their own ideas are not what I actually want, then they turn somewhere else. When they think that their knowledge is not going to be sufficient, then they think, “Okay,” and they panic. Not all students of course, but some of them do.

So, I’m not so worried about that. I guess what I am concerned about is, am I asking students to write something that ChatGPT can’t write? Am I giving them the opportunity to think and not just regurgitate other people’s ideas?

Of course, I mean, there is some need to understand what other people think and have said. It’s making me look at my own assignments to think, “Well, how am I allowing my students to be humans as they write these assignments and not just collectors of other people’s ideas?”

BOGAEV: This is making me really feel for you teachers, because it’s a nuclear arms race with these chatbots.

John, help us out here. How should Jen be thinking to write assignments that are more human and that a bot can’t easily fulfill?

LADD: Sure. So, I’m also not especially concerned about the plagiarism issue in particular. That is, I think the things many classroom instructors are already doing to talk with their students about what constitutes original work and how to engage with those processes of writing will also work to steer them away from the temptation to go to ChatGPT.

I would say in general, those same principles of trying to think through how you are asking students to bring work together and really synthesize, that is to offer their own thoughts and commentary on the quotations and other ideas that they’re bringing together. That’s something that ChatGPT can’t do.

BOGAEV: Laura, does that speak to your concerns? And Jen too. You can jump in here. What are examples of questions that you are going to gravitate towards given this AI?

TURCHI: At my university, at Arizona State, there’s already, kind of, guardrails going up. In terms of final papers, the ability to check or at least predict whether a paper has been artificially produced in this way is… I mean, that’s already part of the conversation.

But I’m very interested in the high school level, because I work with a lot of future high school teachers. I mean, we’re already trying to teach media literacy and other ways of helping students understand where ideas come from in their own brains and ways of generating thinking that would rob their education, to the extent that they would hope to just get their answer from a search engine.

But, one of the things that happens in high school teaching, especially around Shakespeare and other difficult texts—the challenge, I think, is if they’re being thrown in, for instance, to a literary analysis assignment where they’re kind of cherry-picking a bunch of scholars and they don’t really have a background in what that scholarship is about or what the world of scholarship is like—again, at the high school level, especially in AP classes… Well, the fact that ChatGPT could generate such a paper might help them see—might help teachers see—why it’s important to engage with ideas more than just grab them out of context and kind of mush them together into a paper.

I think it’s an opportunity for students to understand the thinking that needs to happen and the expression of their own ideas and their own analysis of text.

BOGAEV: Okay, great. You brought up the first silver lining. You get a prize for that. Jen, getting back to what John was saying, does that reassure you? And, also, what kinds of assignments—or how are you changing your assignments? Could you give us some examples?

BLACK: Yeah, so I mean, I feel pretty optimistic about all this. In any case, I guess I am a technophile. I love the access to information that we have, that students have.

Laura and I have talked quite a bit over the past few years about some of the online resources that are available for students in relation to Shakespeare, and the way that they are giving more students access to Shakespeare, but also giving them confidence in their ability to come to understand without a deep background in it.

In terms of that, I’m hopeful that ChatGPT will be yet another tool that students can turn to to get ideas to, you know…

Like, I think so often with a prompt… so for instance, the one I put in most recently was a prompt for my non-major introductory Shakespeare class, where I’ve asked students, after reading Macbeth, to choose a decision that one of the characters makes in Macbeth and to think through, well, what are the other options that this character had? What are the effects of the choice that they made and what might have happened had they chosen differently?

I put that into ChatGPT. It came up with the pretty obvious answers first. So, I kept changing the parameters like, “Well, not Macbeth killing Duncan,” and, “Not Lady Macbeth convincing Macbeth to kill Duncan.” You know, to see what it would come up with.

In some ways, I feel like that might be a helpful thing for them to see. Well, what are the obvious things? Then, how can I get past those obvious things? I’m not expecting, you know, my freshman Intro to Shakespeare students to have something so groundbreaking to say. But, I do think it might be helpful for them to know like, “Okay, let me not just say the thing that everyone’s going to go to first.”

BOGAEV: Can I just say my reaction to what you just said, which is that it’s kind of like thinking of ChatGPT as a kind of dumb or dense friend that they can bounce ideas off of.

BLACK: A study partner.

BOGAEV: Yes, right. Go ahead.

BLACK: So, no, I do think that in a positive light, that could be a way. “Just let me toss my idea out and see what ChatGPT says, in a place that nobody’s going to judge this in any way.”

Then, also, it’s made me think: How can I focus more on the process of them coming up with their ideas rather than just on product? So, going through the brainstorming of ideas and looking for evidence to support those ideas rather than just, you know, answer this prompt and move on.

BOGAEV: Right. Or maybe inserting themselves more in—or yourselves more in the revision process for later drafts, right?

BLACK: Yes.

BOGAEV: So, they can explain their thinking and their process.

BLACK: Yeah. And, then, how can I engage them in discussion with each other about their ideas in a scholarly way? This addresses, partly, the plagiarism issue.

You know, if ChatGPT wrote their essay for them, are they going to be able to sustain their conversation about that essay through conversations with peers, whether that be online or in person?

BOGAEV: Let’s talk about this plagiarism issue, John, because there are plenty of plagiarism programs out there already that for years universities and schools have been using. Are the plagiarism programs up to figuring out whether ChatGPT wrote something?

LADD: That is kind of an open question. It’s the subject of ongoing research right now.

There certainly is already plagiarism detection software, and there are known issues with that software even before talking about ChatGPT, where it will give you a false positive and indicate that a student has plagiarized when maybe they haven’t.

That danger is increased maybe with trying to detect whether an essay or some piece of text was generated by ChatGPT. Initial attempts that were shared recently trying to do things like classify text as to whether or not it came from these models had kind of middling success to start.

I think the good news for classroom teachers is that most of the time, when you have relationships with your students—I encounter this in my classes all the time—you have a good sense of what their work looks like, what kinds of work they’re producing.

Particularly if you are doing what we’ve just been saying, which is having a scaffolded process that guides them through different parts of writing an essay. You will be able, I think, to have a sense of when this is happening, even without the use of a classifier or some kind of plagiarism detection.
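The detection attempts Ladd refers to are essentially binary text classifiers. The sketch below is a toy illustration of that idea under assumed conditions, using scikit-learn and a handful of placeholder labeled examples; real detectors train on far larger corpora and, as he notes, still have only middling success and produce false positives.

```python
# Toy sketch of a human-vs-model text classifier of the kind mentioned
# above. The labeled examples are placeholders; a real detector would
# need large, carefully collected corpora and would still misfire.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "All three works reflect the universality of the human experience.",    # model-like
    "Basho's travel diary keeps interrupting itself, which I found funny.", # student-like
    "In conclusion, the journey motif unites these canonical texts.",       # model-like
    "I wasn't sure what to make of the ending, so I reread the last act.",  # student-like
]
labels = ["model", "human", "model", "human"]

# Bag-of-words features feeding a simple linear classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["This essay explores timeless themes of love and loss."]))
```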

BOGAEV: Well, you all are pretty sanguine about this new technology. But I imagine, especially on the high school level, that some schools are not going to be. Are some teachers just going to stop assigning essays as homework? Laura, I’ll throw that out to you.

TURCHI: Well, I think the challenge will continue to be workload for high school teachers and how many papers they’re processing. Lots of really good high school teachers spend a lot of time on writing processes and do get to know students’ work, get to give them those opportunities, like Jen was just talking about.

There are kind of two places where that’s going to be complicated. Again, workload is one thing. I think the other is helping students understand that what they think about a text matters; that it’s worth chewing on difficult ideas over time, and that learning to express those complicated ideas and inform other people about them, express them, support them, is a complicated process.

Assignments may have to change, but if teachers were already only looking at final products, I’m not sure it’s going to make that much difference, because that was already open to plagiarism in all the ways we’ve talked about.

There have to be real opportunities for students to believe that their ideas matter. That is the first, best defense against plagiarism.

BOGAEV: Is there another silver lining here, though? I mean, might this be an opportunity to stop assigning big research papers in these AP classes for kids?

TURCHI: Yeah, I mean, I hope so. Yeah.

BOGAEV: You said that with some relief. A lot of relief, right? Why?

TURCHI: Well, because Shakespeare scholarship is what, 400 years old, 450 years old? It’s a huge, complicated, contentious conversation. At the high school level especially, unless that world is being taught, like, and students feel invited to be scholars themselves and to understand something about a scholarly process or where scholarship comes from—I mean, I would hope they are feeling invited into that, but if they’re not, then it’s just a set of ideas to cherry-pick from, and the writing is not very connected to what they’re really thinking about. It becomes a way of stringing together quotations that is not very interesting. I’m not sure what it’s really teaching students.

So yeah, I would say it’s a relief. I’d rather see something like what Jen was just saying: [Using] ChatGPT to generate a mediocre and even okay paper. And to do… a colleague of mine, Brandy Adams, talks about “interventions.” That is, thinking about the interventions between, say, a Shakespeare text and all the different things that mediate how the students understand it. To use ChatGPT as a way to think about those interventions seems a lot more useful and a lot more likely to bring critical media-thinking into a classroom. We know students need that long before college.

BOGAEV: You are getting to a really important context for this conversation, which is, I guess, the question, what exactly is the difference between secondary sources—like scholarly essays or Wikipedia articles or whatever—and some texts generated by an AI? So, can you quote AI?

TURCHI: If AI is part of the conversation, well, it’s an interesting… I mean, it is an interesting problem to imagine quoting AI, or the context where you would quote it. Maybe in reflecting on how, you know, what your voice is or something. That sounds kind of meta, but there’d be a chance to take it on as yet another text.

Ideally students are quoting people because somebody says something better than anybody else, and there’s an authority there. But there’s also a uniqueness and I don’t… so much of ChatGPT stuff sounds right now like it wants to be neutral or it wants to give the impression of neutrality and that is, again, it’s worth thinking about, but I’m not sure it’s worth citing.

BLACK: Can I just add to that, really quickly, that I think sometimes students think that’s what they’re supposed to be doing. I mean, that’s the reason that, for me right now, reading ChatGPT-generated writing sounds like what I’m urging my students not to do.

I keep saying to them, “I want you to take some risks.” I’d much rather you try out an idea that you’re not a hundred percent sure of than give me a paper nicely tied with a bow that you think, “Well, no one’s going to argue with this because…” Well, who wants to read something no one’s going to argue with? I mean, right now, that’s what ChatGPT does.

BOGAEV: Yeah. I mean, this is getting into the question of both creativity and also grading and expectations in schools and universities. How do you think this technology might change the way you grade and the way you weigh imagination and creativity and innovation of analysis maybe more heavily than logic?

I don’t know. Since this tech might be doing more of the basic stuff, you know, the thesis and the intro and the supporting paragraphs and the conclusion.

TURCHI: Well, I don’t know, John, about your practice, you might also really have something to say about this. But Jen and I have talked a lot about multimodal student projects and things that are not only written and why that kind of creative work may become more and more valuable as an expression of ideas and creativity and voice—and not reduced to predictable words, I guess is what that begins…

BOGAEV: Could you just remind us what multimodal means?

TURCHI: So, I heard the inflection in your voice when you said “group work,” and so we could certainly talk about the group work dimension of that, or how it could be.

But for me, multimodal might be a podcast, might be a video, a digital story being told. It might be any sort of way of expressing ideas through media that is not only a typed page, a Word document or whatever.

BOGAEV: Ah, okay. Okay. That makes sense. And John?

LADD: Yes, I consider it a big part of my work as a teacher to help students think about how to interact with new technologies in their lives. So whether or not they are drawn to use ChatGPT to replace their work within a particular assignment, they are certainly going to encounter large language model-generated text in the world on the internet.

How do we think about interacting with that text? What does it mean to interpret something that comes from those kinds of models? How do you recognize those kinds of things?

There are so many really fascinating and sometimes troubling issues surrounding these large language models. There are ethical and labor issues. There are environmental issues. There are technical issues that I try to bring into any discussion of ChatGPT that I have with my students, whether it’s in the context of teaching writing or the context of teaching a piece of technology.

So, as an answer to the question about whether ChatGPT seems like a threatening part of my practice as a teacher: I would say, instead, that right now it provides a really rich source of conversation about technology and how it interacts with society.

BOGAEV: What are some of those environmental or labor issues with ChatGPT or these AI technologies?

LADD: Yeah, so I was speaking specifically about a recent TIME magazine exposé about workers in Kenya that were paid $2 an hour to sort through toxic language and information so that it could be kept out of products like ChatGPT, so that these large language models wouldn’t wind up outputting the most abusive kinds of language. In order to stop them from doing that, real human beings—many of them underpaid and in countries other than the US—have to actually go through all of that material. It’s a very difficult and sometimes a psychologically-damaging task.

In terms of environmental issues, these models consume a very large amount of power, which can be environmentally damaging.

As well as the potential ethics issues of bringing this kind of text into the world when people aren’t prepared to interpret it in certain ways, or when they might take it at face value, which was something I think we were all just talking about, kind of worrying about our students interpreting text provided to them from a model as something that is somehow neutral.

BOGAEV: This is great. I mean, we’re talking so much about using this AI as a teaching tool. Laura, I want to give you a chance to jump in here. But I’m also thinking that I’ve read that professors are planning to teach… some professors are planning to teach newer or more niche texts that something like ChatGPT might have less information about now. For instance, teaching Shakespeare sonnets rather than Midsummer.

TURCHI: Yeah, I saw something about, 178 billion texts were fed into ChatGPT. I don’t know if that’s a real number. It seemed like an appropriately enormous, hard-to-imagine number.

BOGAEV: And half of them are about Shakespeare sonnets. [LAUGHTER]

TURCHI: No, but I mean, the idea that you would stay ahead somehow and letting that knock your syllabus out of whack, that seems crazy.

But so much of this does seem crazy. I mean, anytime folks are going to talk about a computer program that can “hallucinate,” or are worried when it starts hallucinating—which is, I think, the way you describe it: ChatGPT saying things that make no sense or have no bearing on the real world. We’re in a strange and interesting time.

BOGAEV: I do really feel for you, and I also am glad you brought up this word “hallucinations” because it gets to the issue of anthropomorphizing AI tech, and talking about a chatbot hallucinating can really deflect blame away from the tech industry, which should take responsibility for improving ways to stop this technology from spreading misinformation and false statements.

LADD: I agree and think that there is a whole array of terms that we use that make analogy between large language models and human brains. Beginning with the idea of a neural network—an algorithm designed in imitation of a human neural network, but one that actually doesn’t bear that close a relationship to how the human brain seems to work—all the way to terms like artificial intelligence. And then, yes, this idea of hallucination.

We have often used whatever the most advanced technology of the day was as an analogy for the human brain. At certain points, talking about the clockwork brain was maybe analogously useful. We talk about the computerized brain now. It doesn’t mean that the brain is a computer, and it doesn’t mean that the computer is a brain.

I think that it’s definitely worth—and I’ve done this myself—spending time with students, making sure that they understand that this kind of slippage of language often happens out in the world. But it does not mean that these models are thinking, nor does it mean that they’re hallucinating.

BOGAEV: Well, Jen and Laura, I want to give you time to answer the big question because you’ve been talking about some of your concerns and some of your hopes for this technology.

What do you think it will mean for English departments just more generally and will it have specific impacts on funding? I mean that in the sense that English departments are already struggling for funding on every level.

BLACK: I think it depends a lot on what we do with it and how we approach it, right?

So if we perceive it as a threat and we try to prevent students from using it, I mean, that’s like math teachers asking students not to use calculators ever, which doesn’t make any sense. I’m sure there are times in which it makes sense, but, you know, as a general rule, to say, “Don’t use the technologies that are available to you.”

You know, Laura and I have talked about having students use SparkNotes, translations of Shakespeare, or the MyShakespeare website. And thinking, “Well, how do we acknowledge these things are available and they have their uses? Let’s figure out how to use these in ways that are useful for you.”

I think if, as English and literature departments, we look at this tool as a tool and to say, “Well, what is this tool good for? And how can we use it in our classes?” then I’m not too concerned about it.

My concern, I guess, is the idea that people might start to think, “Well, students don’t need to learn how to write because ChatGPT, or whatever AI comes after, can just do that for them,” as if writing is just a creative product and, you know, the process of literary analysis and sharing of ideas is not valuable for its own sake. That would be problematic. I hope that is not the future that we are looking toward. I think, again, like I said, a lot of it depends on how we approach this and what we do with it.

TURCHI: Right now, I think every class—certainly every class in writing—is in some way a class in media literacy or critical thinking in media literacy. That’s got to be valued by the society that we live in. So, right, to the extent that a university business model would start to be, you know, product, maybe even more than it is now, I mean, I certainly share Jen’s sense that that would be probably catastrophic.

But I think that the financial side of this, or the monetary side of this, again, has something to do with the broader question of what we value in the humanities, value in literature, and how we can continue to fight the good fight about what is good, what is valuable, what represents critical thinking.

BOGAEV: I do want to ask you what hopes or possibilities you see this AI tech having for research and your own research.

LADD: One thing I can say about my research, which takes a computational and data-focused approach to early modern literature, is that there are many challenges to historical language research and historical language analysis that have posed problems for more traditional natural language processing.

Those problems might be getting a lot easier to handle now. Things like how widely spelling varies across the 17th century, for instance. Being able to analyze things at scale and do so more accurately, I think is an exciting research opportunity.

I also think it’s a nice, bounded task for a large language model, rather than trying to answer open-ended questions. Asking it to kind of classify terms and using that as part of a data analysis is really one of the things it might shine at.
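One hedged illustration of the bounded task Ladd describes: asking a chat model to normalize variant 17th-century spellings before further analysis. The spellings, prompt wording, and model name below are illustrative, and the early-2023 openai interface is assumed, as in the earlier sketch.

```python
# Sketch of a bounded LLM task: mapping variant early modern spellings to
# modern forms before analysis at scale. Spellings and prompt wording are
# illustrative; assumes the early-2023 openai interface and OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

variants = ["musick", "publique", "shewes", "vertue", "dooth"]

prompt = (
    "Normalize each of these 17th-century English spellings to its modern "
    "form. Reply with one modern spelling per line, in the same order:\n"
    + "\n".join(variants)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,          # keep the output as deterministic as possible
)

modern = response["choices"][0]["message"]["content"].splitlines()
print(dict(zip(variants, modern)))
```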

BOGAEV: How about you, Jen and Laura?

BLACK: The first thing that comes to my mind is, I do quite a lot of research about online teaching. You know, the kinds of things I can do in online teaching now, I couldn’t have done ten years ago. The technologies have made pedagogical approaches possible that weren’t possible before.

So, that’s my first question for myself: What does ChatGPT offer me in terms of helping my students learn that maybe I couldn’t have done before I had this technology? That’s going to take some exploration and playing around, which I think is exciting.

Then I also feel like, you know… I mean, Laura referenced earlier the years and years, the centuries of Shakespeare scholarship, and that is continuing. You know, I know that some students worry that everything has been said. “What could I possibly say about Shakespeare now?” But, because our world is continually changing, we have new lenses with which to look at Shakespeare.

So, that’s my other thought—well, like John was saying—what new lenses do these technologies offer that will allow students to find new things in Shakespeare plays? Or to filter… you know, to approach the plays through our contemporary perspective with the issues of identity and some of the questions we’ve been talking about.

Does it matter who is saying something well in Shakespeare plays? Does it matter which lines are delivered by whom? Does it matter that it’s Polonius, who gives all these lines in Hamlet that are so quoted by other people and yet is a pretty despicable character?

I think for future Shakespeare scholars, this could be pretty exciting. It might open up new avenues of engaging and interacting with texts that lead to more understanding and insight.

BOGAEV: Great. Laura?

TURCHI: I work with the Shakespeare Center of Los Angeles on a Shakespeare and Social Justice project. One of the things we’re working on really closely is paraphrasing Shakespeare as a stage towards talking back to Shakespeare, making it your own, making it relevant, all those important lenses Jen was just talking about.

I’m really thinking about those youth and their access to things like ChatGPT. Again, for me, it’s about scaffolding. Understanding to the point that you can talk back. That you can understand something well enough to incorporate it and also critique it or make it your own in an important way. I suspect ChatGPT is going to be a tool that youth are going to bring to this conversation.

So, that’s where I’m thinking some real careful research needs to happen about where students find their voices. Or, maybe, how students find their voices in a conversation with Shakespeare, those 400 years of scholars, everything else. Like, where do their lives matter or how can we help them feel their lives matter in the context of a great play?

BOGAEV: Oh, you three are inspiring me so much. I’m going to hang up here and then go ask ChatGPT what topics we should do next on our podcast. Yeah. Oh, thank you so much. I really appreciate it.

BLACK: Such a pleasure to get to talk with you all.

LADD: Yes. Thank you very much.
—————-
WITMORE: Our panelists were Laura Turchi of Arizona State University, John Ladd of Washington and Jefferson College, and Jennifer Black of Boise State University.

This episode was produced by Matt Frassica. Garland Scott is the associate producer. It was edited by Gail Kern Paster. Ben Lauer is the web producer, with help from Leonor Fernandez. We had technical help from Shane McKeon, Kristin Vermilya, and Voice Trax West in Studio City, California. Final mixing services provided by Clean Cuts at Three Seas, Inc.

If you’re a fan of Shakespeare Unlimited, remember to subscribe on your podcast platform of choice, so you never miss an episode.

Shakespeare Unlimited comes to you from the Folger Shakespeare Library. Home to the world’s largest Shakespeare collection, the Folger is dedicated to advancing knowledge and the arts. You can find more about the Folger at our new website, folger.edu.

Thanks for listening. For the Folger Shakespeare Library, I’m Folger Director Michael Witmore.