
It all started with a story about a girl named Elara...
And another...
And another.
If you're not familiar with this tale, you probably haven't spent much time grading papers lately. And if you have, you probably know exactly where this is going...
Elara is a main character that frequently appears in stories generated by ChatGPT. At least, that has been the experience of one high school English teacher on the social media platform Reddit.
Frustrated by students' frequent use of the chatbot, the teacher threatened to give a zero on any assignment that seemed AI-generated. However, it was difficult to prove that an assignment had been written by AI.
So they had to get creative.
The teacher did some research... and realized ChatGPT had some predictable habits...
For example, every time they asked the chatbot to tell them a story, it wrote about a girl named Elara in the woods.
The teacher created a simple assignment – write a story about anything you want. Instead of telling the students they would get a zero for using AI, the assignment instructions included this warning in smaller text...
If your main character's name is Elara, -99 points.
Sure enough, multiple students submitted stories about Elara. That's not exactly a common name... and the instructions were clear.
A grade of 1 out of 100 was all it took to get the message across.
This teacher's experience, shared in a viral online post, is just one example of the rising tension between educators and AI.
While some resort to clever tricks to catch cheaters, others are grappling with a far bigger challenge – how to prove students are using AI at all.
As tools like ChatGPT become more sophisticated, they're raising new questions about academic integrity. Schools must now decide whether to treat the use of AI as an honor-code violation.
Violations like plagiarism have become easy to detect using simple Internet searches and specialized software.
But proving the use of AI is an entirely new challenge...
Many universities reported a surge in suspiciously well-written essays following the release of ChatGPT in late 2022. Some professors have turned to new AI-detection features from popular academic-integrity software providers like Turnitin.
However, these tools often produce false positives. That has left the debate around academic integrity gridlocked.
At the same time, some educators are embracing AI.
Professor Christian Terwiesch at the University of Pennsylvania's Wharton School made headlines for testing an earlier version of ChatGPT on the final exam of his Operations Management MBA course. It scored somewhere between a B and a B-minus.
Another Wharton professor, Ethan Mollick, now requires students to use AI in their assignments, the same way some teachers require students to use a calculator. His new policy treats the use of AI as an "emerging skill."
The academic war over AI is just beginning. No matter the outcome, it's going to change how education works.
And as this battle rages in schools, a similar one is playing out across corporate America...
Companies are choosing whether to welcome AI with open arms... or keep ignoring it.
Folks, it's tempting to blindly buy into the former camp at any cost. Nobody wants to miss their shot to profit from this revolutionary technology.
But it's not that simple. Much like the essays lazy students churn out by feeding prompts into ChatGPT, some companies' AI plans have a lot more marketing than substance.
Before you invest in a business, make sure it's using AI in a productive way. Dig into the numbers. Check if a new tool is really attracting more customers or improving efficiency.
Companies are clamoring to grab attention with their supposed AI usage. But it takes more than slick marketing to generate long-term outperformance.
Regards,
Rob Spivey
June 24, 2025