New chatbot changes how we interact with technology


Photo by Mohamed Hassan from PxHere.

A new chatbot released by OpenAI harnesses the power of machine learning to produce convincingly human responses. In the short time since its release, people have already shown it capable of producing works of journalism, stories, and even working code.

ChatGPT comes amidst a slew of other machine learning controversies, many of which are concerns over the potential for these models (trained machine learning algorithms) to replace humans in their jobs. In addition, much of the data used to train these same models has not been properly licensed, and its originators have not been compensated.

Pioneer Computer Science teacher Ted Emch shared that he had tried out the chatbot on his own. “I gave it a few problems myself and it wrote good code for them. So I was stunned absolutely, I was blown away,” Emch said.

Teachers are worried about the potential for students to cheat on writing assignments using ChatGPT. “I’ve already had students pass off ChatGPT paragraphs as their own,” says Pioneer Literature and Humanities teacher Amy Vail. She notes that she was able to quickly identify these papers by their difference in writing style. However, “If I were grading tons of quick answers, I might not be as careful,” she says.

Despite these concerns, there is still room for these tools to be put to good use, in ways that would benefit everyone. “People will start to learn to use it in ways that are actually helpful, and not dishonest,” continued Emch.

For example, in the field of computer science, this bot opens many opportunities for increased efficiency by automating the menial, repetitive tasks that come with programming. Optimizations like this would give developers more time to focus on moving projects forward and writing better code. “I don’t think there’s anything wrong with having a tool that makes our lives easier,” says Emch. “It’s saving you time.”

For those worried about the annihilation of their jobs by these machines, there are still many steps that would need to be taken for these models to come anywhere close to being ready to operate on their own. While the model can produce impressively accurate results, it states its answers with confidence even when they are completely wrong; for this reason alone, humans are still necessary. “You at no point want to hand over your trust to it. You must be smarter than it at all times,” warns Emch.

ChatGPT is also known to “produce pretty wooden, cliche responses,” says Vail. “It cannot create original content, really, not yet anyway. It just does mash-ups.” Vail says that when her 12-year-old daughter tried to get ChatGPT to write a “book blurb” for “a [fictional] book in which the main character is a girl named E.J. who loves moss,” the request crashed the bot.

But when Vail and her daughter added instructions to write a book blurb about a girl named E.J. who loves moss even though her parents and teachers think it’s not appropriate for girls, it produced a very convincing book blurb almost instantly. “There are just so many plots like that,” says Vail. “So ChatGPT could easily plug in moss and make it happen.”

While ChatGPT may not be the downfall of humanity, the future of machine learning is unpredictable, and it’s still unclear how fast we will advance. “This is a step closer to things that we need to be concerned about,” says Emch. “We need to be concerned if we have a bot that’s constantly changing its own code that it’s running.”