Distinguishing Non-White Faces

Author/s: Marianne Cadiz, Sama Adhami
Posted: 17 June 2020

This project is based on the racist adage, “All Asians look the same.” Or, to extend this even further, “You have small eyes and therefore you must be Asian somehow, and are therefore [insert stereotype here].” It’s this so-called visual interchangeability that we wanted to play with when teaching our AI model to distinguish between our non-white faces.

This teachable machine was trained to distinguish three faces: Sama, Marianne, and Janina (Marianne's friend). While the three of us don't necessarily have experiences of being mistaken for one another, we do have experiences of being mixed up with other people who look nothing like us.

We are not the same, even if the stereotypes can just portray us as some faceless mass of “other.” We are not the same, and each of us is our own person. While this might sound too obvious to state, we’re saying it anyway—because what you’re really saying when you say we all look the same is that we are all the same.

Data for the Teachable Machine

To create the training corpus, we recorded videos of ourselves against a white background and took screenshots from them, ending up with 120+ photos of each of us. The differences in our hairstyles made this a bit tricky. For example, Sama currently has a pixie cut, so the AI fixated on it. Thankfully, Marianne and her friend Janina had some old pictures of themselves with pixie cuts, so we were able to adjust for that. The AI also struggled with different facial expressions: its accuracy dropped when we frowned or smiled, so we had to account for that too and add photos of ourselves making a range of expressions.

When training this AI we had to somehow make sure it focused on the differences in our faces and not the wall behind us, not the jewellery we wore, not the colour of our clothing, not our hairstyles, et cetera. We had to make it look at us, so to speak.
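This "shortcut learning" problem can be shown with a toy sketch that has nothing to do with our actual model. Here each "image" is reduced to just two hypothetical numbers, a genuine facial feature and a background brightness, and a simple nearest-centroid classifier stands in for the real network. When the background happens to correlate with the label in the training set, the classifier learns the background instead of the face:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

def face_feature(label, n):
    # The genuine facial trait: only weakly separated between the two people.
    return rng.normal(loc=0.5 if label else -0.5, scale=1.0, size=n)

# Biased training set: background brightness perfectly tracks the label.
X_train = np.column_stack([
    np.concatenate([face_feature(0, n), face_feature(1, n)]),
    np.concatenate([np.zeros(n), np.full(n, 5.0)]),  # background
])
y_train = np.concatenate([np.zeros(n), np.ones(n)])

# Nearest-centroid classifier: predict whichever class mean is closest.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

train_acc = (predict(X_train) == y_train).mean()

# Test set: same people, but photographed against each other's background.
X_test = np.column_stack([
    np.concatenate([face_feature(0, n), face_feature(1, n)]),
    np.concatenate([np.full(n, 5.0), np.zeros(n)]),  # backgrounds swapped
])
test_acc = (predict(X_test) == y_train).mean()
# train_acc is near perfect; test_acc collapses, because the model
# latched onto the background, not the face.
```

The same logic applies to jewellery, clothing colour, and hairstyles: anything that happens to correlate with the label in the photos can become the thing the model actually learns.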

Drag a photo into the black box to determine whether you're Marianne, Janina, or Sama!

Another funny thing about training this machine to only see three faces is that it will classify any picture whatsoever as either Marianne, Janina, or Sama. It's a nice reminder that AI feeds upon the data that you train it with, and therefore any failure to take people who don't look like you into account will translate into your technology.
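As a toy illustration (this is a generic sketch, not code from the project): a classifier whose last layer is a softmax over three labels hands out 100% of its probability mass across those three names, no matter what is in the picture. The logits below are made-up numbers standing in for a photo of something that is none of us:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

labels = ["Marianne", "Janina", "Sama"]

# Made-up logits for an unrelated image, e.g. a photo of a cat.
# None of the labels fit, but the probabilities still sum to 1.
logits = np.array([0.2, 0.1, 0.15])
p = softmax(logits)
print(dict(zip(labels, p.round(3))))
```

A closed-set model has no way to say "none of the above"; that answer has to be designed in.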

Lastly, while this project is a question of visuals, we need to remember that it's not just optics that should be worked on. Visibility is important, yes, but we must always ensure that it is not just a veneer over a degenerate, racist system: a system in which people of color, especially Black and Indigenous peoples, suffer from negative stereotypes. One could think that we now live in a society more open-minded and unprejudiced than a few decades ago; sadly, almost every day, events around the world confirm that every individual and every institution still needs to work on this issue. Negative stereotypes of all kinds impact lives, and moreover the decisions of whole communities and of politics worldwide. If we do not start working on ourselves, informing one another on the matter, and collectively working against these violent, unjust systems, massive race and gender gaps will persist for a long time.