As part of the investigation, the FTC sent a 20-page record request to OpenAI that focuses on the company’s risk management strategies surrounding its AI models. The agency is investigating whether the company has engaged in deceptive or unfair practices that resulted in reputational harm to consumers.

The inquiry is also seeking to understand how OpenAI has addressed the potential of its products to generate false, misleading, or disparaging statements about real individuals. In the AI industry, these false generations are sometimes called “hallucinations” or “confabulations.”

In particular, The Washington Post speculates that the FTC’s focus on misleading or false statements is a response to recent incidents involving OpenAI’s ChatGPT, such as a case in which it reportedly fabricated defamatory claims about Mark Walters, a radio talk show host from Georgia. ChatGPT falsely stated that Walters had been accused of embezzlement and fraud in connection with the Second Amendment Foundation, prompting Walters to sue OpenAI for defamation. In another incident, the model falsely claimed that a lawyer had made sexually suggestive comments on a student trip to Alaska, an event that never occurred.