
The Future Is Yesterday

As generative AI tools make waves in fields as diverse as banking and the visual arts, the academy must decide how it will adjust to these new technologies—and quickly

On November 30, 2022, OpenAI, a privately held technology company in San Francisco, launched ChatGPT, a chatbot whose conversational format creates the illusion of a human-like back-and-forth with the user.

What makes ChatGPT and similar generative artificial intelligence (AI) programs such as Microsoft’s Compose and Google Bard different from the autocomplete feature on your smartphone is that they can generate seemingly original text based on specific input. Provided with a prompt such as “Write a five-paragraph essay about symbolism in Hemingway’s The Sun Also Rises for a college English class,” a generative AI program does just that.
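
For readers curious what this looks like in practice, here is a minimal sketch of sending such a prompt to a generative AI model through OpenAI’s Python package (the pre-1.0 interface that was current when ChatGPT launched); the model name and API key are illustrative placeholders, not a recommendation.

```python
# A minimal sketch of prompting a generative AI model via OpenAI's
# Python package (pre-1.0 interface); model name is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # an illustrative model choice
    messages=[{
        "role": "user",
        "content": "Write a five-paragraph essay about symbolism in "
                   "Hemingway's The Sun Also Rises for a college English class.",
    }],
)

# The generated essay comes back as ordinary text.
print(response["choices"][0]["message"]["content"])
```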

The initial capabilities of ChatGPT seemed so impressive that just six days after the app was introduced, a headline in The Atlantic asked, “Will ChatGPT Kill the Student Essay?” Author Stephen Marche has apparently made up his mind since initially asking that question: The online article now runs under the title “The College Essay Is Dead.”

Whether or not his prophecy ultimately proves true, Marche’s essay started a media firestorm of conflicting opinions—informed and otherwise—about how AI would either usher in a transformational new era for universities or sound their death knell. The predictions about how ChatGPT and similar technologies will affect higher education have been nonstop ever since, but what’s clear is that these tools have already taken root, and more of them are certainly on the way.

AI in the Academy

Pepperdine, like most universities, is adapting to these generative AI programs as quickly as possible. “Generative AI is disrupting teaching and learning by changing how skill and knowledge assessments are done and creating new educational use cases and pedagogical approaches,” says David Smith, senior associate provost of information technology (IT), online learning, and international program facilities. “It could also negatively affect the cultivation of creativity and inquiry among students and pose ethical challenges for teachers and learners. On the other hand, it will also offer opportunities to enhance personalized learning experiences, support faculty in research, expand access to knowledge and resources, and foster interdisciplinary research.”

Cognizant of both the opportunities and risks that generative AI tools pose to higher education, Jonathan See, Pepperdine’s chief information officer since 2012, recently established an AI Advisory Committee under the authority of the Office of the Provost. Comprising faculty members from all five of the University’s schools, as well as representatives from Human Resources, the Office of the General Counsel, the Office of Community Standards, and the IT department, the committee is charged with developing University-wide guidelines and best practices for the use of AI; it has already produced a faculty guide for creating AI-use statements for course syllabi.

“Pepperdine recognizes that these types of generative AI tools are here to stay,” says See. “So the question becomes, ‘How do we embrace and adapt to these tools so that we’re enhancing student learning and maintaining our academic standards?’ AI is only going to get better in the future, so how do we use these tools effectively while maintaining data privacy and security?”

To prepare for his task, See even asked an AI tool about “ethical considerations for uses of AI” to help him generate areas for the committee’s focus. “AI creates a certain efficiency, but it still requires the human element to ensure that the content generated meets University standards,” he says. “We recognize that the human element is critical and cannot be forsaken.”

Questions about the ethical uses of generative AI go beyond how the technology is used; many have concerns about AI-generated content promulgating biases and prejudices—intentional or not—embedded in the source texts that make up a given program’s dataset. “AI was developed by humans who have inherent biases and flaws,” says Hattie Mitchell (MPP ’12), visiting professor of education and policy at the School of Public Policy. “We should expect AI to represent some of the same biases and flaws that the humans who created it do—and early research indicates that it does. But if used with discipline, boundaries, and thoughtfulness, AI can be of tremendous added value to our work at the university level.”

Pepperdine’s faculty guide to the use of AI tools, which has been accepted by University Provost Jay Brewster and forwarded to the deans of each school to disseminate to their faculty members, provides general recommendations as well as sample syllabus statements in three categories: for classes in which the use of AI tools is allowed or encouraged, for classes that allow limited or selective use of AI tools, and for classes in which the use of AI tools is prohibited. Whether to include a syllabus statement regarding AI tools in a given course is left to the discretion of each instructor.

“Each school will update its academic integrity policies, curricula, and syllabi to acknowledge the impact of these new technologies,” See says. “The committee has provided high-level guidelines with flexibility, so we can adapt them as the technology changes. If we are too specific, the technology may outrun us.”

The inevitability of generative AI impacting education seems to be almost universally accepted. “Is it something we should be afraid of?” asks John C. Buckingham, eLearning instructional designer for the Pepperdine Graziadio Business School. “Yes and no. To some extent, we should embrace it as an institution. There’s definitely some fear in education circles that students are going to use this technology to cheat. That fear is understandable. AI is a disruptive force to education, and while it’s going to force education to adapt, it will present some good opportunities along the way.”

Assessing Assessments

One way in which educators are adapting to AI tools is in their means of assessing student learning. In educational theory, “authentic assessment” is a teaching approach that emphasizes the student’s ability to apply knowledge in new situations, rather than simply memorizing content. Tony DePrato (MA ’02), an educational technology expert and chief information officer at St. Andrew’s Episcopal School in Ridgeland, Mississippi, believes that generative AI may inspire instructors to use authentic assessment strategies more frequently and effectively.

“It could mean a return to the ‘blue book,’ where students write their response to a question or problem in front of a proctor,” says DePrato. “This kind of answer has to actually be read by the assessor, so it’s more work than a multiple-choice test. But that means it’s a more authentic assessment for the assessor as well as the student.”

Another adjustment that educators may have to make is more philosophical. “If teachers see themselves as experts who simply transfer knowledge to students and leave the difficult cognitive tasks for the students to do on their own, then of course the students are going to do that in the easiest way possible,” says Catlin R. Tucker (EdD ’20), an expert on blended learning, whose latest book, Shift Writing Into the Classroom with UDL and Blended Learning, will be published in January 2024. “But if you’re using class time to help students use these tools, then there’s unlimited opportunity for personalizing learning. It’s reimagining the approach to teaching and learning.”

Some educators are proactively taking on the challenge of incorporating AI tools into their students’ learning experiences. Artem Barsky (BSM ’18, MBA ’19), an adjunct professor of information systems technology management at the Graziadio School since 2020, encourages his undergraduate students to share their own experiences with ChatGPT in an online discussion forum, and he shares his uses of the tool as well.

“We discuss its strengths and its limitations,” he says. Barsky wants his students to learn how to adapt to this new technology quickly. “Can they automate processes? Can they give themselves more bandwidth to do other things? Can they build efficiencies?” he asks. 

Barsky also encourages his graduate students to use ChatGPT as a supplementary tool on their short-essay examinations (and to disclose if they did so), “but so far, they haven’t.” As to why not, he speculates that because his exams are timed, students may view adapting the answers that ChatGPT generates as an additional step that would be too time-consuming. Barsky believes that within a few terms, he’ll see more students use AI tools either to create or to review their first drafts. “The world is moving toward more integration of these tools,” he explains. “I want students to be as comfortable with this technology as they are with Google.”

When Teachers Become the Students

In an effort to support Pepperdine faculty in their understanding and use of generative AI tools, the Seaver College Center for Teaching Excellence and the University’s Technology and Learning team jointly held a three-day workshop, Teaching and Learning in the Age of AI, in June. Approximately 30 faculty members participated in the workshop, which mixed informative lectures with hands-on exercises that allowed faculty to explore how AI tools can be used by both students and instructors.

For instance, to simulate the student experience, participants used ChatGPT to write an original essay on a topic of their choice, to improve a poorly written rough draft, and to write multiple versions of an essay to find out if—and how quickly—ChatGPT could improve it. Faculty were also asked to experiment with AI tools to revise their test questions, to help craft a syllabus policy regarding AI usage, and to simulate student responses to test questions in order to improve and refine them. These exercises were designed to illustrate how both students and educators can use AI tools to do their work more efficiently.

“The full landscape of AI tools is going to change the educational environment over the next few years,” says Jordan Lott, who serves as senior manager of Pepperdine’s IT training and the Technology and Learning team and was a speaker at the June workshop. “Pedagogy and assessment will need to be adjusted. Very soon, it’s going to be hard to avoid having AI assist with your writing because it’s going to be built into all the tools. So looking at the process rather than the end result may help instructors see where student learning happened and where their knowledge of the content was applied.”

Many experts in the popular press have compared the rise of ChatGPT to the introduction of the calculator or the early years of the internet. At first, using these technologies was viewed as cheating, and some educators treated it as such. For example, students weren’t allowed to use calculators on the SAT until 1994. Almost 30 years later, calculators are so accepted as a valuable tool that the College Board website even provides tips on how to use them most effectively on the test.

“Once the calculator reached a certain level of penetration, it was no longer considered cheating,” says Lott. “Does it change the value of what’s being learned to use the tool? People can still be good at math while using a calculator. There can be value in reading high-quality content generated by these AI tools with the intent of improving it. I have improved my writing abilities by consistently reading the writing of colleagues who write better than I do. Why couldn’t ChatGPT serve the same function?”

The Way Forward

Some see generative AI tools as a much greater leap forward in technology than even the printing press. “Large language models like ChatGPT are able to speak to each other, to understand human emotion, to understand consequences; it’s totally different,” says Barsky. “We are building something that is likely going to become billions of times smarter than us. And if that’s the case, these tools may be successful in solving problems in ways we’ve never even considered.”

But others are more cautious. “There’s a lot of uncertainty and fear around AI tools, as well as a lot of hype,” says DePrato. “Some say, ‘It’s going to do everything!’ And others say, ‘We have to run away from it!’ My advice is to use the tools, to use them at scale, and to build an understanding of how the tools work. We have to get an understanding of where we are and where the technology is going.”

Tucker takes a practical approach to ChatGPT and other generative AI technologies, contending that AI usage can help teachers improve their students’ work. “We need to use these tools to maximize student creativity and potential,” she says. “If we block their use, we won’t understand how to use them, and we will create classrooms that are stuck in time, out of step with what’s happening outside the classroom. We don’t spend nearly enough time supporting students as they write, identifying what’s good, what isn’t, and why. ChatGPT may actually raise the bar because it can provide real-time support to students in their writing.”

So yes, the college essay—as we knew it—may be dead. Long live the college essay.

How It Works

The “GPT” in ChatGPT’s name stands for “generative pre-trained transformer.” In simple terms, the program generates responses to questions or prompts using a model that has been pre-trained on a large dataset and built on the transformer architecture, a type of machine-learning design meant to mimic cognitive attention. In humans, cognitive attention is the ability of our brains to determine what is the most important information to focus on amid all the stimuli we receive and to filter out the information that is irrelevant.
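
As a loose illustration of that idea (a toy sketch, not OpenAI’s actual code), the snippet below implements scaled dot-product attention, the core operation of a transformer, in plain Python with NumPy: each position in a sequence scores every other position and focuses most on the ones with the highest weights.

```python
# A toy version of the attention mechanism at the heart of a transformer.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled for stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into weights that sum to 1 per position,
    # the machine analogue of deciding what to "pay attention" to.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted blend of the values, dominated by the
    # positions judged most relevant.
    return weights @ V

rng = np.random.default_rng(seed=0)
tokens = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional embeddings
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)                # (4, 8)
```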

ChatGPT is a large language model (LLM), a type of computer program that can read, summarize, identify patterns in, and generate text based on a dataset made up of huge amounts of text, such as books, journal and magazine articles, and webpages, the vast majority of which was written by actual humans. An LLM predicts the next word in a sentence based on the words that have come before it—using probability calculations based on the text in its dataset—and then the next word after that, and so on. When an LLM generates the first word of a sentence, it doesn’t know what the last word of that sentence will be.
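
The toy program below mimics that loop with made-up word probabilities (a sketch of the principle, nowhere near a real LLM): it picks each next word from a probability distribution conditioned on the word before it, committing to every choice without knowing how the sentence will end.

```python
# A toy next-word predictor; the probabilities are invented for illustration.
import random

# Hypothetical table of P(next word | current word).
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down.": 1.0},
    "ran": {"away.": 1.0},
}

word = "the"
sentence = [word]
while not word.endswith("."):  # stop at the end of the sentence
    options = next_word_probs[word]
    # Sample the next word in proportion to its probability: the same
    # one-word-at-a-time logic an LLM uses, at a vastly smaller scale.
    word = random.choices(list(options), weights=list(options.values()))[0]
    sentence.append(word)

print(" ".join(sentence))  # e.g., "the cat sat down."
```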

And of course, this type of technology can also be used to create images, computer code, and more. If a human being can provide it with a large enough dataset to sample, a machine-learning program like ChatGPT or DALL-E (OpenAI’s image-generation app) can mimic and generate almost any type of output.