‘They’re logical things, but they’re not human beings’: What is ChatGPT, and what should we be concerned about?

By Meg Richards, Staff Writer

ChatGPT, a new artificial intelligence program with the capability to write almost anything, can write your essays for you. Emerson community members, along with those at other colleges, are adjusting to the new challenges and opportunities the program might provide.

Amid ChatGPT’s growing presence in the academic and journalistic landscape, concerns have been raised about students using it to cheat and plagiarize.

Zhao Peng, a journalism professor who teaches a digital journalism course, said that when it comes to cheating, the problem could be even bigger than it appears.

“It may hinder students [from thinking] deeply,” Peng said. “I’m more concerned with the thinking process because of the way [ChatGPT] provides answers. Students will put less and less time and effort into thinking. Verification is a very important part, and ChatGPT provides answers right away. People will take it and not give a second or third thought about it.”

In a survey conducted for the International Center for Academic Integrity over a period of 12 years, Dr. Donald McCabe of Rutgers University found that 58 percent of graduate and undergraduate students admitted to plagiarism. That figure spans 2003 to 2015, before the prevalent use of AI tools like ChatGPT. Since the introduction of such software, professors have voiced concerns about the threat AI poses to education.

First-year visual and media arts major Luke LaTorre doesn’t think ChatGPT poses a substantial threat to academic integrity.

“Even if you [use] it to write your essay, you’ve got to tweak it and make it your own,” LaTorre said. “In my opinion, it’s really obvious when you use ChatGPT.”

GPTZero is a tool that detects whether AI was used to plagiarize in a student’s work. The company partners with many colleges and universities, though it has not yet worked with Emerson. Peng said she plans to suggest the software to the school next semester.

“Because ChatGPT is still an algorithm, there is math behind it,” Peng said. “If you know the math, you can detect the probability between the words used, you can detect if students used ChatGPT.” 
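For readers curious what that math looks like, here is a minimal sketch of the statistical idea Peng describes: a detector scores how predictable a text is to a language model, since uniformly predictable (low-perplexity) prose is one weak signal of machine authorship. The GPT-2 model and the perplexity measure below are illustrative assumptions, not GPTZero’s actual method.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Illustrative assumption: GPT-2 stands in for whatever model a real detector uses.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Average per-token surprise of `text` under the model; lower = more predictable."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model score its own next-token predictions.
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    # Detectors treat unusually low perplexity as one signal, among others,
    # that a passage may be machine-generated.
    print(perplexity("The quick brown fox jumps over the lazy dog."))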

A sophomore journalism major, who agreed to speak on the condition of anonymity, has used ChatGPT in the past but stopped because of concerns about how AI will affect students’ well-being in education. The student’s identity has been verified by the Beacon.

“For a while, I was using it here and there, [but] I’ve been seeing everything on social media and it’s been scaring me,” they said. “After using it [for a while] I realized this really isn’t benefiting me in any way… In the end, I wasn’t really learning anything.”

They were referring to teachers’ ability to detect cheating with tools like GPTZero, as well as to apps built to counteract those detection tools.

The student pointed to one such program, QuillBot, a paraphrasing tool they said can rework AI-generated writing so that detectors like GPTZero no longer flag it.

Although the anonymous student never used ChatGPT to plagiarize entire assignments, they began to feel too dependent on the software.

“What’s starting to scare me is, I’d be thinking of people in middle school who are needing to grow,” the anonymous student said. “And they’re using ChatGPT, and if they continue to do that and turn their assignments in with little to no effort, they’re not going to be learning anything, and it’s just going to mess them up for the real world.”

According to a survey of 1,000 college students by the online magazine Intelligent, 30 percent use ChatGPT to write essays, and 60 percent of those students use it to help with more than half of their work.

The use of AI to generate writing isn’t confined to academic institutions. According to Peng, publications such as the BBC have already incorporated AI programs, including ChatGPT, into their reporting. Certain features of the software serve as helpful tools for journalists, Peng said, and could even revolutionize reporting.

“This model is groundbreaking, but it’s not the first time journalists have been using AI tools for reporting,” she said. “This process could save you [a lot of] time.”

Although AI might take over some journalistic tasks, such as daily stock updates and copyediting, it will never replace journalists, Peng said.

“For journalism we need to do interviewing, writing, editing, commenting, and video-shooting,” she said. “For these five big pillars, ChatGPT can only help with writing. ChatGPT can’t do interviewing—it can’t read tone.”

As with social media and other technological advances, Peng predicted that bigger news outlets will push ahead with AI while smaller ones struggle to keep up.

“I’ve already seen some media outlets increase the use of [AI],” she said. “Still, this only happens in big news outlets. For local [news], they may not have professionals who know how to incorporate ChatGPT. With this revolution… it will make the distance between international news outlets and local news media even bigger.”

LaTorre, likewise, doesn’t see AI threatening the job security of journalists and authors.

“I don’t think it can be creative,” he said. “It can’t replace creative [jobs]. They need a personal touch, or otherwise they won’t be successful.”

While some argue that ChatGPT could jeopardize the integrity of academics and education, Peng noted that it also poses another, larger threat: political indoctrination. She worries that algorithmic bias could polarize users much as Facebook has in recent years.

“The algorithm interacts with you,” Peng said. “If they sense that you’re liberal or conservative, they’ll tailor [the content] to your inclinations.”

Multiple reports have documented such bias, from models lacking safeguards to ensure inclusive language to models that, when prompted, confirm users’ existing biases.

Just as Facebook and Twitter generate money from clicks and engagement, so does AI. Software like ChatGPT might aim to confirm users’ biases to maximize engagement, producing a polarizing effect, Peng said.

“You know on social media, some people will get a biased [feed] because of the algorithm. It may generate an impact on marginalized groups,” she said. “People get more and more polarized. It’s because of the algorithm. When we consume, we’re feeding the algorithm. If the data is biased, the response you get will be biased.”

Peng said that data collection and distribution make up another big market for AI.

“People don’t realize that they’ve given all their personal information to the algorithm,” she said. “Do you know who sits behind the algorithm? It’s very disconcerting.”

According to Peng, these websites collect personal data, such as IP addresses, and distribute it to third parties without users’ consent. Ultimately, the goal is to make money by selling users’ data under the guise of making their lives marginally easier, she said.

Italy recently banned OpenAI’s GPT models over breaches of data protection laws and other privacy concerns. It is the first country to put a policy of this kind in place, but potentially not the last. Publications in the U.S. are just beginning to shed light on these breaches.

LaTorre, who uses ChatGPT to draft tweets and get music recommendations, has no concerns about ChatGPT collecting or distributing his data. He believes that opening a new chat in the same tab erases his history.

The anonymous student interviewed, on the other hand, said AI poses “scary” dangers. 

“Not every single one of those AIs is completely accurate, or good, or useful,” they said. 

“Computers are literally going through your phone, taking your information, and seeing what you like,” they continued. “A lot of things should be confidential—this is your phone, you should have privacy at all times and it’s really scary that you don’t. Journalists aren’t writing about it, news isn’t publishing about it… Nobody knows that there’s an algorithm, or that they can track it on other sites. No one knows that detail, and people should know about it.”