I wanted an image to promote a presentation I’m giving: ‘Content Marketing in the Age of AI’.
I gave the same prompt to two different generative AI systems: ‘I want an image of a slightly clunky, unattractive robot with a megaphone, plus a slick part-android, part-human speaker. The crowd is listening to the human-android speaker and the robot is unhappy.’


One gave me a skimpily-dressed, blonde, female speaker in fuck-me boots. The other gave me a blue-eyed white-man-android in a suit, speaking to white mostly-men.
Forget the random hand in the middle of the first image; there are far bigger issues visible here. Some serious stereotyping about who might possibly appear on stage and hold the interest of the crowd!
Somehow it’s more obvious in images than in words, but the same kinds of stereotypes and distortions creep in when you ask AI to write for you as well. Because AI looks for the ‘most likely’ option, it tends to reinforce the status quo, biases included. The same issue was flagged in earlier technologies such as Google Translate, and UNESCO has highlighted it in writing produced by LLM-based AI systems.
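To see why ‘pick the most likely option’ does more than just mirror the training data, here’s a back-of-the-envelope sketch. It’s a toy example with made-up numbers, not how any real image or text model is implemented, but the arithmetic is the point: a system that always takes the single most common option doesn’t just reflect a 70/30 imbalance, it turns it into 100/0.

```python
# Toy illustration (hypothetical numbers, not a real model):
# always choosing the single most likely option amplifies an imbalance.
from collections import Counter

# Suppose 70% of "conference speaker" examples in the training data are men.
training_examples = ["man"] * 70 + ["woman"] * 30

counts = Counter(training_examples)

# A system that samples in proportion to the data would say "woman" ~30% of the time.
# A system that always takes the most likely option never says it at all.
most_likely = counts.most_common(1)[0][0]
outputs = [most_likely for _ in range(100)]

print(most_likely)            # man
print(outputs.count("woman"))  # 0 -- the 30% minority vanishes entirely
```

That gap, between ‘reflecting the data’ and ‘always picking the majority’, is one simple way stereotypes get hardened rather than merely repeated.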
Of course the stereotypes and biases start with human culture, not with the AI systems, but it’s worth remembering that this amazing ‘guru’ technology has been trained on the good and the bad of humans. It reflects both back at us.
Be careful with AI. Use it if you want to or need to, but don’t let it out on its own!