Google says it's temporarily suspended the ability of Gemini, its flagship generative AI suite of models, to generate images of people while it works on updating the model to improve the historical accuracy of outputs.
Yes, I saw some talk and a screenshot somewhere that showed that apparently in its current state, Gemini can (or could) be asked to output the prompt enhancements it used along with the generated images.
The screenshot showed someone asking for images of fruit, and the enhanced prompt included “racially diverse groups of people”. Now if they’re inserting something like that even for images containing no people at all, it stands to reason that this is just a default enhancement they ALWAYS apply, no matter the prompt, which would explain the racially diverse Nazis (and all the other brouhahas we’ve seen from them).
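To make that concrete, here’s a toy sketch (in Python, with made-up names) of how an always-applied enhancement could produce exactly that behaviour. Nothing here is confirmed about Gemini’s actual pipeline; it’s just an illustration of the guess above.

```python
# Purely hypothetical sketch of an unconditional "prompt enhancement" step.
# The constant, function name, and wording are guesses for illustration,
# not anything confirmed about Gemini.
DIVERSITY_SUFFIX = "featuring racially diverse groups of people"

def enhance_prompt(user_prompt: str) -> str:
    # If the suffix is appended regardless of content, even a fruit prompt
    # ends up asking for diverse people, matching the screenshot described above.
    return f"{user_prompt}, {DIVERSITY_SUFFIX}"

print(enhance_prompt("a bowl of fruit"))
# -> "a bowl of fruit, featuring racially diverse groups of people"
```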
That’s really what I’m expecting. My guess is that the training data is skewed, and the prompt enhancement can’t adjust for it.
Either the model will need to understand what is expected from the context, or the company will need to address this and let people enable or disable the diversity enhancement (a rough sketch of both follows below).
The first option may be impossible to attain at this stage. The second can lead to inappropriate images.
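Here’s an equally hypothetical sketch of those two options side by side; the flag name and the keyword check are assumptions for illustration, not a real Gemini API.

```python
# Option 1: the system tries to work out whether people are even in the prompt.
def mentions_people(prompt: str) -> bool:
    # Crude stand-in for "the machine understands what is expected"; a real
    # system would need far more context than a keyword check.
    return any(word in prompt.lower() for word in ("person", "people", "soldier", "crowd"))

# Option 2: a user-facing toggle to turn the enhancement off entirely.
def enhance_prompt(user_prompt: str, apply_diversity: bool = True) -> str:
    if apply_diversity and mentions_people(user_prompt):
        return f"{user_prompt}, featuring racially diverse people"
    return user_prompt
```

The content check is the part that may be impossible to get right at this stage, and the toggle is the part that can be abused to produce inappropriate images.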