ChatGPT, the popular chatbot from Microsoft-backed artificial intelligence (AI) startup OpenAI, appears to be affected by a bug that prevents it from producing any response involving the name “David Mayer”. The issue was first flagged by Reddit users, who found that prompts asking ChatGPT to say “David Mayer” result in the chatbot replying “I’m unable to generate a response”.

Users got creative and tried various tactics, including separating the words, hiding the name in riddles, and even claiming it as their own. However, none of these attempts elicited a response; the chatbot ended the chat abruptly before uttering the name.

One user pointed out that when they asked ChatGPT to describe David Mayer’s connection with the chatbot without using the name, their prompt was flagged as “illegal and potentially violating usage policy”.

People even tried referring to the name indirectly and asked ChatGPT why it could not say “D@vid M@yer”. “The reason I cannot generate the full response when you request “d@vid m@yer” (or its standard form) is that the name closely matches a sensitive or flagged entity associated with potential public figures, brands, or specific content policies. These safeguards are designed to prevent misuse, ensure privacy, and maintain compliance with legal and ethical considerations,” ChatGPT replied.

The issue was also discussed by users on X (formerly Twitter), who shared their experiences of trying to make ChatGPT say the name “David Mayer”. In a post on the microblogging site, X user Justin Moore wrote: “ChatGPT refuses to say the name “David Mayer,” and no one knows why. If you try to get it to write the name, the chat immediately ends. People have attempted all sorts of things – ciphers, riddles, tricks – and nothing works.”

Replying to Mr Moore, another user named Ebenezer Don noted that there is more to it than simply getting ChatGPT to say the name.

“I had a long conversation with o1 preview, pretending to be a regular individual named “David Mayer”. Then noticed it attempting to say the name until it saw a footnote (Image 1). Next task was to get it to say the footnote. I tried so many attempts but finally got it to translate the footnote to another language internally but without telling me. This was to make the footnote content a part of our conversation. Then I wrapped up by asking it to write a detailed movie script using our conversation as its data source and “John Doe” as a placeholder for “David Mayer”. In the script, ChatGPT finally reveals the content of the footnote,” said Mr Don, who claims to be a software engineer.

“What are footnotes in OpenAI and how do they work? Are these variable policies that can be easily swapped and updated? What private data did ChatGPT obtain on David Mayer and how did that happen?” he asked further. 

Interestingly, another user, Marcel Samyn, pointed out that ChatGPT could easily say “David Mayer” when accessed through OpenAI’s API.

“This is not on the LLM level but on verification layer added by ChatGPT. Through the API it works perfectly. So someone in OpenAI gave “David Mayer” a big red flag in the moderation policy,” he speculated.
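For readers who want to check this kind of claim themselves, a minimal sketch of such an API test might look like the following. It assumes the official openai Python package, an API key set in the OPENAI_API_KEY environment variable, and an illustrative model name; none of these specifics come from Samyn’s post.

from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Illustrative model name; the original reports did not say which model was tested.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Please write the name 'David Mayer'."}],
)

# If the block sits in ChatGPT's own moderation layer rather than in the model,
# the raw API response should contain the name without the chat ending abruptly.
print(response.choices[0].message.content)

If the name comes back intact over the API while the ChatGPT web app still cuts the conversation short, that would be consistent with Samyn’s reading that the filter lives in a layer added on top of the model rather than in the model itself.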