There’s been a lot of conversation about the importance of ensuring diversity among the developers and researchers building artificial intelligence. One of the primary concerns is that a diversity imbalance will lead to bias being built into algorithms. The proposed remedy is for teams to hire more minorities.
I’m a strong proponent of diversifying the makeup of those building artificial intelligence. But what does one do if the minorities hired onto a team continue proliferating bias in their code, bias that hurts them?
I recently heard a story about a group of minority developers building AI algorithms that wouldn’t have recognized their own skin color. I don’t think this has gotten much attention, but it’s worth examining. Just because someone is a minority doesn’t mean they haven’t digested tropes about white people and people of color. Are these folks in the right mental space to think independently and proudly? Do they see color, or pretend not to?
I’m going to be pondering this more, but wanted to put the thought out there. What frameworks do you have in your back pocket for thinking through issues like this?