A Palo Alto, California, lawyer with nearly a half-century of experience admitted to an Oakland federal judge this summer that legal cases he referenced in an important court filing didn’t actually exist and appeared to be products of artificial intelligence “hallucinations.”
Jack Russo, in a court filing, described the apparent AI fabrications as a “first-time situation” for him and added, “I am quite embarrassed about it.”
A specialist in computer law, Russo found himself in the rapidly growing company of lawyers publicly shamed as wildly popular but error-prone artificial intelligence technology like ChatGPT collides with the rigid rules of legal procedure.
Hallucinations — when AI produces inaccurate or nonsensical information — have posed an ongoing problem in generative AI.