ChatGPT is a helpful tool for students when used ethically

The emergence of ChatGPT, an artificial intelligence chatbot that can converse with users and produce written texts, has raised concerns about students turning in AI-written essays. However, when used responsibly, ChatGPT is a useful tool for increasing productivity without sacrificing ethics.
ChatGPT, created by OpenAI, can effortlessly produce rhyming poems, letters to loved ones, emails to bosses or essays covering nearly any topic taught in a college classroom. The implications of students turning in AI-generated work have already caused professors to reassess their classroom policies. Northern Michigan University professor Antony Aumann told The New York Times he would now require students to write first drafts under classroom supervision.
Other professors are embracing the technology, acknowledging that it would be tough to prevent students from using the free service. Professor Ethan Mollick of the University of Pennsylvania told NPR he is requiring students in his entrepreneurship classes to use ChatGPT as a brainstorming tool for generating project ideas.
Mollick has the right idea. ChatGPT’s revolutionary ability to brainstorm stops just short of academic dishonesty. Turning in work written by an AI would be plagiarism, just like turning in work written by another human. Still, picking the binary brain of ChatGPT to help stir up ideas is an effective and efficient way to produce quality work.
A study published in Frontiers in Artificial Intelligence found that participants generated more ideas when chatting with what they believed was an AI than when they chatted with what they believed was a human. The lack of perceived judgment from a computer freed participants to produce more creative and diverse ideas.
In math class, once students master the basic mathematical operations, they rely on a calculator to increase speed and accuracy, allowing more space for real learning. ChatGPT is no different. It is a tool that needs human guidance, both in the input of message prompts and the decoding of responses.
An AI can spit out scores of responses. A human must know what to do with them.
While ChatGPT is a useful tool, the responses it generates are not infallible. I asked ChatGPT for some sources that might help my research. ChatGPT obliged, providing five links to articles about AI ethics.
Upon clicking the first link, an article supposedly from The Verge, the webpage displayed an error. In fact, all five links led to errors. “Are these real articles?” I asked ChatGPT.
“I apologize for the confusion,” ChatGPT responded. “As an AI language model, I do not have the ability to provide links to real articles. I was providing hypothetical article titles that could potentially be used as sources for research.”
It gave me fake articles. OpenAI acknowledges that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Sometimes ChatGPT gets it wrong.
However, ChatGPT often gets it right. I gave ChatGPT links to some articles I had written and asked for more ideas that would match my interests. Within seconds, ChatGPT produced five ideas that ranged from generic subjects to topics that could viably lead to interesting articles.
A topic is just a starting point. Researching, typing sentences and crafting a unique voice are areas where the writer must take over, lest they stray over the line into plagiarism.
The same method can be used for writing class assignments. ChatGPT can offer ideas to get you started on your own class projects. The exercise is no different from springboarding ideas off your roommate to see if one sticks. Ideas are the building blocks of writing. The richer you can make your idea, the stronger your work. ChatGPT does not inhibit learning; it expands it.
Using ChatGPT for college assignments puts added responsibility on college students to use the tool appropriately and not turn in work written by AI. Plagiarism is never acceptable, yet universities have already caught students turning in AI-written essays.
In his video, “I tried using AI. It scared me,” YouTuber Tom Scott explains that advances in technology typically follow a sigmoid curve, an S-shaped function that begins with slow growth, then rapidly shoots up, before tapering off at the end.
The first part of the curve represents the first several models of new technology, the bugs and breaks and slow adoption by the public. Then, the curve suddenly shoots upward as more people use the product and advancements happen in rapid succession. Eventually, progress tapers as the technology reaches its limit of improvement.
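The S-shaped curve Scott describes is commonly modeled by the logistic function. As a minimal sketch (the parameter names here are illustrative and not taken from Scott's video):

```python
import math

def sigmoid(x, midpoint=0.0, steepness=1.0):
    """Logistic function: slow early growth, a rapid rise in the
    middle, then a taper toward a fixed ceiling (here, 1.0)."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Sampling the curve shows the three phases Scott describes:
# early values barely move, middle values jump, late values level off.
for x in [-6, -3, 0, 3, 6]:
    print(f"x={x:+d}  f(x)={sigmoid(x):.3f}")
```

Where a given technology sits on its curve only becomes clear in hindsight, which is exactly the uncertainty the next paragraph raises.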
It is impossible to know how far along the AI sigmoid curve we are, but ChatGPT could represent the infancy of a resource that will become as commonplace as email. If we are at the beginning of the curve, AI use should not be something we fight, but something we implement in our daily lives.
Featured Illustration by Allie Garza