There’s been a flurry of excitement this week over the discovery that GPT-4 can tell lies.
I’m not referring to the bot’s infamous (and occasionally defamatory) hallucinations, where the program invents a fluent, confident-sounding version of events with little connection to reality, a flaw some researchers think might be inherent in any large language model.
I’m talking about intentional deception: the program deciding, all on its own, to utter an untruth in order to accomplish a task. That newfound ability would seem to signal a whole different “chatgame.”