
How Emerson is—and isn’t—using artificial intelligence in the classroom

At Emerson, the integration of artificial intelligence on campus has been anything but cohesive. In response, the administration launched an AI Taskforce—and adopted a robot named Jibo.
Jibo sits on a desk in the XR-Studio
Yogev Toby

On the third floor of the Ansin building, a large square outlined in blue tape is centered on the floor between a camera and a wall-sized screen. With a few keyboard clicks, whoever steps into the square can be transformed into a “Simpsons” character, a robot, or anything else in between—with the help of artificial intelligence. 

The experience is part of Emerson’s XR-Studio, which was installed last year in the college’s Emerging Media Lab as part of Emerson’s efforts to formally integrate AI into the curriculum. 

AI’s rapid emergence in 2023 has exploded into a global arms race, prompting tech companies to invest hundreds of billions of dollars in its development. At Emerson, the administration is doubling down on its commitment to become a “leader in the ethical application of AI to communication and the arts,” Assistant Provost for Faculty Affairs Brooke Knight wrote in a statement to The Beacon.

Knight is leading Emerson’s AI Taskforce—a group of faculty, staff, and administrators assessing how the campus uses AI in order to develop a more cohesive approach to the technology. The team plans to survey Emerson faculty and staff during a day-long retreat and craft a unified stance on the topic.

As of this semester, Emerson does not have a unified approach to AI. According to Emerson’s Governance and Guidelines page, instructors can “work with,” “work around,” or “work against” AI. The college also suggests that faculty “familiarize themselves with the academic misconduct process” in case students don’t follow their instructor’s AI policy.

A total of 11 undergraduate and graduate classes across several majors, including journalism, visual media arts, and interdisciplinary studies, mention AI in their title or description, but some instructors have introduced the topic independently.

Back at the XR-Studio, visual media arts professor D. Pillis cradled a robot in his arms. The robot, named Jibo, reacted to Pillis’ movements with purrs and turns of its head, like a pet. MIT created Jibo as an emotional support robot that can talk, dance, and perform other basic functions. Pillis calls Jibo his “teaching assistant” and often takes him across campus for social interactions with students.

D. Pillis holds his “teaching assistant,” Jibo, at the XR-Studio (Yogev Toby / Beacon Staff)

“It’s my responsibility to enable my students [and myself] to flourish with technology, whatever shape it takes,” Pillis said.

Emerson hired Pillis last August with the intention of “leading virtual production technologies,” he said. His previous work at MIT’s Media Lab dealt with how AI models can help with human reasoning and offer solutions to social issues. Pillis teaches a course on virtual production, in which students utilize AI technology to create real-time computer-generated videos at the XR-Studio.

Marlboro Institute professor and AI Taskforce member Russ Newman said that using the technology is not an extraordinary task anymore. 

“Much like the computers we’re using now and the internet, [AI] is just there … It’s a thing that is quickly integrating itself into every manner of process right now,” he said. 

According to Newman, the task force was proposed by President Jay Bernhardt last year. Since then, the college has faced an enrollment decline, which has led to budget cuts and layoffs at the beginning of this academic year. Newman said that investing resources in AI might help increase enrollment. 

“Figuring out what our unique stamp on an education that happens to incorporate AI is, is only going to strengthen our overall standing,” he said.

Last March, Bernhardt spoke at an Emerson panel on AI and said, “It’s clear that [AI] will become a part of what [students] need to know to be successful.”

Some students, however, are reluctant to embrace the new technology. VMA professor Maurice Methot asked his undergraduate foundations students to experiment with an AI video generation tool used in the film industry. He said at least four students refused to participate in the assignment and requested to be excused.

“I’m still thinking about that,” Methot said. 

Olivia Altiok, a junior VMA student in Methot’s foundations class, recalled a class discussion about the ethics of art ownership.

“Does art truly belong to anyone? I don’t know, but it shouldn’t really belong to a computer,” Altiok said.

Methot has decades of experience experimenting with emerging technologies, such as algorithms and computer-generated sounds. He said his approach to AI is that of an artist: “It’s not where you got it, it’s where you take it.”

Methot believes AI is a great tool for artists, but that it won’t replace people anytime soon. 

“[For true creation] you have to live,” he said. “You have to have kids. You have to have breakups and love affairs.” 

Those who spoke to The Beacon noted that Emerson’s relationship with AI differs from that of other schools because of the college’s focus on communication and the arts. Because Emerson students are creators, some said, AI conflicts directly with their work.

Newman, however, said that Emerson students have a unique role as creators to help dictate the future of AI. 

“Creators need to have a sky-high view of how these industries are transforming and the role of policy,” he said.

Writing, Literature, and Publishing professor John Rodzvilla said he allows AI in his publishing class, but does not let students use it in his creative writing courses. 

“If you’re trying to learn how to write, having something [do the writing] for you is not helpful,” he said. 

Rodzvilla added that in a few years, students coming to Emerson will demand to learn AI skills to match the industry standard. 

Others, like WLP professor Steve Himmer, think AI runs counter to higher education and to Emerson’s values.

“It is literally the opposite of self-expression and intellectual work and creativity,” he said. “It’s fundamentally opposed to what I’m trying to do as a writer and as an instructor.” 

Himmer follows the “work against” option suggested by the school and refuses to let students use AI in any capacity. 

D. Pillis and Eugene Kuznetsov stand in front of an AI-generated video of themselves (Yogev Toby / Beacon Staff)

While the AI Taskforce is an administration-led project, some faculty members have created a different group that examines AI through a critical lens.

Marlboro Institute professor and AI Taskforce member Ioana Jucan created the Data Fluencies Theatre Project and the Critical and Creative AI Studies Lab as part of the Data Fluencies Project, a collaborative initiative between universities across the globe that investigates the impact of mis- and disinformation through the lens of data fluency.

Jucan’s theater project is an interdisciplinary multimedia artwork that incorporates live performance alongside AI. The critical and creative lab is a research space for artists and scholars that acts as a think tank on both the dangers of AI and its potential impact on society.

“These tools are very powerful. Personally, both as a teacher, an artist, and a researcher, I want to understand what they are and what they’re doing and engage with them critically,” Jucan said. 

Jucan said that being critical does not necessarily mean being negative. She emphasized the need to focus on the risks, societal implications, and politics that might exist underneath the surface of AI. 

“[We need to come] with a solid critical thinking framework to understand these tools, not just as technical systems, but as socio-technical systems that have real-world consequences,” she said.

At the XR-Studio, Pillis picks up his teaching assistant, Jibo, and steps inside the boundaries of the blue tape. Instantly, the screen behind him displays a rapidly morphing shape. In milliseconds, the computer recognizes Pillis as human and transforms his display into the father of computers—Alan Turing. Pillis raises Jibo to the camera’s line of sight, and the robot’s image quickly morphs into a 1940s computer, where AI first began. 

“[These technologies are] not going to disappear,” Pillis said. “So you have to figure out constructive ways to work with them.”

About the Contributor
Yogev Toby
Yogev Toby, Projects Editor
Yogev Toby (He/Him) is a junior journalism student and Projects Editor for the Berkeley Beacon. After moving to the United States from Israel, Yogev completed his associate’s degree at Portland Community College and transferred to Emerson. Yogev has years of experience in field reporting and multimedia journalism from his service as a combat photographer; he specializes in writing, photography, and videography. He is also the managing editor of WEBN TV. Outside of journalism, Yogev enjoys hiking, rock climbing, and watching films.