Bochu Ding, Duke Culture I/O Lab & Explainable AI
What does it look like to be a ___?
This isn't a real person.
It was generated by Stable Diffusion XL (SDXL), a powerful text-to-image model capable of producing photorealistic faces.
This project explores what models are trained to "see" — what characteristics, generalizations, and biases are embedded in them.
By prompting the model with single-word social designations, I examine how it interprets identity through visual representation.
These are all portraits of the prompt "an immigrant."
For each label, I generated thousands of portraits.
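For the curious, here is a minimal sketch of how such a batch could be produced with the Hugging Face diffusers library. The project itself was built as a ComfyUI workflow (see the reference below), so the checkpoint, prompt phrasing, step count, and file layout here are illustrative assumptions, not the project's exact settings.

```python
import os

import torch
from diffusers import StableDiffusionXLPipeline

# Load an SDXL base checkpoint (an assumed stand-in for the
# project's actual ComfyUI pipeline).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

os.makedirs("portraits", exist_ok=True)

label = "an immigrant"  # one of the project's social designations
for i in range(1000):  # repeated to build up "thousands of portraits"
    # Prompt phrasing and step count are illustrative assumptions.
    result = pipe(prompt=f"a portrait of {label}", num_inference_steps=30)
    result.images[0].save(f"portraits/immigrant_{i:04d}.png")
```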
Merged together, they become a composite — a portrait of the label itself.
Each pixel in a composite takes the median value of that pixel across all of the generated images.
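A minimal sketch of that median compositing step, assuming the generated portraits were saved as same-sized PNGs in a local folder; the file paths and the per-channel treatment of the median are assumptions:

```python
import glob

import numpy as np
from PIL import Image

# Load every portrait for one label into a single (N, H, W, 3) array.
paths = sorted(glob.glob("portraits/immigrant_*.png"))
stack = np.stack([np.asarray(Image.open(p).convert("RGB")) for p in paths])

# Per-pixel, per-channel median across all N images, cast back to 8-bit.
composite = np.median(stack, axis=0).astype(np.uint8)
Image.fromarray(composite).save("composite_immigrant.png")
```

In practice, loading thousands of images at once is memory-hungry; a streaming or chunked median would serve better at scale.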
I invite you to explore these faces — a reflection of the data they were born from.
These portraits illuminate the assumptions generative AI models can make. But many, if not most, AI/ML systems don’t have easily perceptible outputs. Yet they accelerate decision-making in every domain, from financial services to healthcare, in ways that profoundly shape our lives. Their biases are less visible, but their impacts are no less consequential. This project begins with what we can see — and invites us to look closer, especially when it seems like we can't.
“ThisPersonDoesNotExist: Images of a White World Culture” — Kris Belden-Adams, University of Mississippi
“Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale” — Bianchi et al. [link]
“How AI reduces the world to stereotypes” — Victoria Turk, Rest of the World [link]
“Google’s ‘Woke’ Image Generator Shows the Limitations of AI” — David Gilbert, Wired [link]
“I Asked A.I. Where It Thought I Was From. Its Answer: Nowhere.” — Nouf Aljowaysir, New York Times [link]
ComfyUI Workflow Reference — Andrés Zsögön [link]
With special thanks to Professors Augustus Wendell, Brinnae Bent, and Vivek Rao.