Hallucinations
AI now produces reports and imagery that carry institutional confidence without institutional understanding. In a recent example, a Utah police department testing an AI tool to draft reports found itself explaining why the software confidently claimed an officer had transformed into a frog, a misinterpretation traced back to a movie playing in the background of body camera audio. Such errors are not harmless curiosities; they reveal how easily AI can weave unrelated inputs into narratives that sound plausible, even in serious contexts. At the same time, real human experiences point to deeper psychological risks. One former worker at an AI image startup recounts how hours of prompting generative models rewired her sense of self and triggered manic behavior and a brief psychotic episode: after seeing fantastical AI imagery, she believed she could fly and lost touch with her baseline reality. These cases are not metaphors; they are early signals that systems optimized for engagement and fluency can reinforce both machine hallucinations and human ones. Higher education's role is to remind us that coherence is not the same as comprehension, and that measurable output does not guarantee verifiable insight.