Boston, MA— A graduating student from Northeastern University has sparked fresh discussion around AI use in higher education after accusing her professor of relying on artificial intelligence to create course content — despite instructing students not to.
Ella Stapleton, a final-year business major, recently filed a formal complaint with Northeastern’s D’Amore-McKim School of Business. She alleged that one of her professors used AI tools, including ChatGPT, to generate lecture slides, even though he discouraged students from using such tools themselves.
Stapleton claimed her suspicions arose when she encountered repeated typos, bizarre images — including distorted figures with extra limbs — and a citation in the presentation referencing “ChatGPT.” She pointed to this as evidence that AI had been used without transparency.
“He told us not to use AI, yet his own slides were made with it,” Stapleton told the New York Times, expressing her frustration over what she viewed as hypocrisy.
In response, she demanded a refund of approximately $8,000 — the tuition cost for the course. However, the university declined her request after a series of internal reviews and meetings.
Professor Confirms AI Use
Professor Rick Arrowood later acknowledged using multiple AI tools while preparing his lecture materials, including ChatGPT, the Perplexity AI search engine, and Gamma, a platform for designing slide presentations.
Reflecting on the incident, Arrowood admitted he should have exercised more care. “Looking back, I should have reviewed the content more thoroughly,” he said. “I believe faculty should be open and responsible when using AI in teaching. If others can learn from my experience, that’s worth something.”
Northeastern’s Official Statement
In a broader response, Northeastern University emphasized its institutional approach to artificial intelligence. Renata Nyul, the university’s Vice President for Communications, stated, “Northeastern supports the use of AI to enrich teaching, research, and operations. The university offers a wide range of resources to ensure responsible AI use and continues to refine relevant policies across its departments.”
AI tools like ChatGPT have become widely used on college campuses since the chatbot's release in late 2022, with students using them to write essays and complete assignments. In turn, many institutions, including Northeastern, have adopted strict guidelines limiting or banning student use of generative AI in coursework.
This case has reignited a broader conversation: Should educators hold themselves to the same AI standards they set for students? And how transparent should faculty be when integrating AI into their teaching?