Statement on Generative “AI”

Programming is a skilled craft, and you are here because you seek to learn and practice it. Generative “AI” tools can be useful for professionals, but students who use them to speed things up tend to harm their own learning. One study found that writers using them engaged less with the material and remembered less about it (Kosmyna et al., 142–143). Another found students using chatbots as a “crutch” and then, when problems arose, “either unable to detect these failures or unwilling to spend the effort needed to check correctness” (Bastani et al., 11, 16). In a third, users of generative tools were seen as lazy, damaging their professional reputations (Reif, Larrick, and Soll, 5–6).

Artisanal, compliant, and tested software is good software. Code that is autocompleted, even with a programmer’s acquiescence, is never guaranteed to be relevant, appropriate, or compliant. By design, chatbots cannot reason. Any semblance of reasoning in their output is a “brittle mirage” built with “fluent nonsense,” in the words of Zhao et al. (14–15). What is more, because their training corpora are already fairly comprehensive, the models are unlikely to perform any better in the future than they do today (Coveney and Succi, passim).

Code generation only works well when an expert human — the programmer — supervises. Neither prompt engineering nor chain-of-thought reasoning nor strategic repetition can eliminate this requirement. One interactive system for increasing generator replicability “with humans in the loop” uses a strict data pipeline and multiple qualifying rounds of assessment by human experts (Shah, passim). If such a formal methodology seems complicated, that reflects the complexity of the design problems at hand.

You probably know that “AI” is environmentally costly and legally dubious. It is also artificially cheap; venture capital seeking to capture market share underwrites today’s loss-leading free services. Generative inferences are likely to become very expensive once users are locked in, perhaps over $100,000 per developer per year (Szyszka, passim). The development of data centers that employ almost no one but require huge amounts of power and water is meeting opposition worldwide.

Because of all these problems, I promise not to use any generative tools to create the materials for your courses or to evaluate your homework. The Student Code of Conduct, in its paragraph about plagiarism, requires you to do the same. Please keep your self-interest in mind: you will learn more if you do the work yourself.

Bibliography

Bastani, Hamsa, et al. “Generative AI Can Harm Learning” (The Wharton School Research Paper, 2024).

Coveney, P. V., and S. Succi. “The wall confronting large language models” (arXiv, 2025).

Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (arXiv, 2025).

Reif, Jessica A., Richard P. Larrick, and Jack B. Soll. “Evidence of a social evaluation penalty for using AI” (Proceedings of the National Academy of Sciences, 2025).

Shah, Chirag. “From Prompt Engineering to Prompt Science with Humans in the Loop” (Communications of the ACM, 2025).

Szyszka, Ewa. “Future AI bills of $100k/yr per dev” (blog.kilocode.ai, 2025).

Zhao, Chengshuai, et al. “Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens” (arXiv, 2025).