Welcome to part 1 of Crowdmark’s ‘Rise of ChatGPT’ series, which explores the impact of chatbots on education and grading in 2024.
Since launching in November 2022, ChatGPT has generated wide discussion about automation, human creativity, and work.
The education sector, like other text-driven industries, is facing its share of changes as professors, researchers, and students determine how and where such tools can be used appropriately.
For starters, what is ChatGPT?
Created by OpenAI (with major investment from Microsoft), ChatGPT is a chatbot built on a large language model. That means it’s a text-driven program designed to answer user questions and draft copy in response.
To be effective, large language models require massive amounts of text to analyze, from which they learn human speech patterns, expressions, and biases.
Once trained, the interface can respond to conversational prompts it is fed (e.g., “Write me a 200-word poem in the style of Edgar Allan Poe on the wonder of pumpkin scones”).
Curious about the result? Read our pumpkin scone poem here.
ChatGPT can also produce code (which is also text-based), although David Gewirtz noted in his experiments for ZDNet.com that it’s best at “assisting with specific coding tasks or routines rather than building complete applications from scratch” — at least, for now.
According to Reuters, ChatGPT is now “the fastest-growing consumer software application in history, gaining over 100 million users” in approximately two months.
Where can tools like ChatGPT benefit student learning?
Automation tools excel at pattern recognition at scales that overwhelm the human mind (e.g., analyzing images for the presence of cancer tumors). In chatbots, that built-in strength translates into catching routine writing mistakes.
“Few humans enjoy repeatedly correcting the same spelling or grammar mistakes,” says Paul Mitchell, product designer for Crowdmark. “In the last thirty years, we’ve grown accustomed to seeing spell check and simple grammar tools built into composition programs like Microsoft Word or Google Docs. Tools like ChatGPT are the next evolution of the automated technology that helps us with the rote work of correction.
“At the same time, students need to know when they’ve used an incorrect term or dangled a participle,” continues Mitchell. “If chatbots can help students catch errors before submitting their work—and, more importantly, grasp why the error is happening and how to prevent it—educators can focus on deepening students’ topical understanding while developing their ideas.”
Unlike spell check, ChatGPT also has the sophistication to analyze and summarize larger blocks of text and flag logical gaps. One popular prompt pattern is to submit a short document and ask ChatGPT to assume the role of an experienced professional in the relevant field, highlighting risks, gaps, or missing key information.
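For readers who would rather script this pattern than type it into the chat window, here is a minimal sketch using OpenAI’s Python client. The model name, file name, and prompt wording are illustrative assumptions, not a specific recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical student draft to be reviewed
with open("draft_essay.txt") as f:
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model works
    messages=[
        {
            # The "assume a role" instruction described above
            "role": "system",
            "content": (
                "You are an experienced reviewer in this field. "
                "Identify logical gaps, unsupported claims, and missing "
                "key information in the document. Do not rewrite it."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

The system message does the heavy lifting here: framing the chatbot as a reviewer rather than a rewriter keeps the feedback diagnostic, which is the use case the quotes above describe.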
Is allowing students to proofread with a chatbot all that different from letting a friend or relative act as a second set of eyes?
“In the same way that using a chatbot would never replace a lawyer’s advice, its feedback doesn’t replace an instructor’s time and energy,” says Jamie Gilgen, software engineer for Crowdmark. “But you can use chatbots to highlight information you might have missed and to prime you for a stronger follow-up discussion.”
“There’s also huge potential to explore the impact of tools like ChatGPT for students with learning disabilities,” says Mitchell. “It’s possible that, in the future, chatbots may help students to articulate and refine their ideas, automating more of the early feedback process before submitting for an instructor’s comments.”
Where are chatbots’ limitations for education?
Can tools like ChatGPT help students discover their unique writing voice? And how do you prevent students from using it to cheat?
Questions like these bring us into murkier use-case waters.
“Like all large language models, ChatGPT knows only what it’s been fed or scraped from the Internet,” says Crowdmark CEO Michelle Caers. “A chatbot’s mimicked response can be incorrect, provide information that’s completely fabricated when asked to go deep on a topic, and/or replicate known biases. Educators and students should be fully aware of these risks.”
Beyond fact-checking anything a chatbot produces, students should also be aware that similar prompts may produce similar output. Talking with a chatbot might be one way to brainstorm a list of possible ideas for a paper, but their peers may be having parallel conversations. Without heavy revision, two chatbot-generated essays on the same topic may sound alike.
Likewise, while Gewirtz notes in his ZDNet.com article that ChatGPT is capable of handling specific coding tasks and building simple routines (which makes checking student-generated work a challenge for Computer Science instructors), it’s not so great at producing complex programs. It may, in fact, “produce absolutely unusable garbage.”
Using a chatbot’s capabilities to test and debug your own code may be the safer use case.
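As a rough sketch of that safer use case, a student might paste a failing function and its error into a single prompt and ask for an explanation. The function below is a deliberately simple, hypothetical example, and the model name is again illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical student code with a simple bug: crashes on an empty list
buggy_code = """
def average(xs):
    return sum(xs) / len(xs)
"""

prompt = (
    "This Python function raises ZeroDivisionError when given an empty "
    "list. Explain why and suggest a fix:\n\n" + buggy_code
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Asked this way, the chatbot explains a bug in work the student wrote themselves, which keeps the learning with the student rather than handing the assignment to the machine.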
How do you stop students from plagiarizing with chatbots?
Caers maintains that there are two ways to look at the connection between chatbots and plagiarism: prevention and detection.
“In the prevention mindset, educators create assessments that require students to be analytical and provide their point of view on a topic while showing their work,” says Caers. “Crowdmark’s functionality supports this approach.”
“The second approach is to run submitted student work through software designed to detect and punish cheating,” says Caers. “Catching plagiarism is a long-standing challenge for academic institutions. AI-generated text often has a distinctive voice, especially if students haven’t done extensive editing to adapt its output.
“However, in a punitive scenario, the learning opportunity is lost. The student gets a grade of zero and often has no recourse after being caught. There’s really no shortcut for authentic student writing.”
In our next installment in this series, we’ll explore the specific role of chatbots in grading.