Ignoring ChatGPT and its cousins won’t get us anywhere. In fact, these systems reveal issues we too often miss
In my spring lecture course of 120 students, my teaching assistants caught four examples of students using artificial-intelligence-driven language programs like ChatGPT to complete short essays. In each case, the students confessed to using such systems and agreed to rewrite the assignments themselves.
With all the panic about how students might use these systems to get around the burden of actually learning, we often forget that, as of 2023, the systems don’t work well at all. The fraudulent essays were easy to spot: they contained spectacular errors, included text that did not respond to the prompt we had issued to students, or simply sounded unlike anything a human would write.