If you’re in charge of an editorial team, you’re used to objections from the rank and file about using AI. “It gets things wrong.” “I don’t know what it’s doing with my data.” “Chatbots only say what you want to hear.”
Those are all valid concerns, and I raise them often in my introduction-to-AI classes. Each one opens a discussion about what you can do in response, and it turns out to be quite a bit. AI hallucinations require careful thought about where to apply fact-checking and human-in-the-loop review. Enterprise tools, APIs, and privacy settings can go a long way toward protecting your data. And you can prompt away the default sycophancy by telling the AI to give you critical feedback.
There’s another objection to AI that’s been growing, however, and you can’t just prompt your way out of it.