Engineered Inequity

This week’s readings offered insights into the biases that find their way into all kinds of technologies, from hand soap dispensers to judicial decisions. Below, I take a closer look at a quote from each of the chapters we read that resonated with me and encapsulated some of the major points I gleaned from the reading.

Chapter One: Engineered Inequity

“Robots are not sentient beings, sure, but racism flourishes well beyond hate-filled hearts. An indifferent insurance adjuster who uses the even more disinterested metric of a credit score to make a seemingly detached calculation may perpetuate historical forms of racism by plugging numbers in, recording risk scores, and ‘just doing her job.’” (Benjamin, 2019, p. 60)

This quote made me understand the massive potential for hidden biases to influence decisions made by even the most well-meaning person or organization. I think I have been mesmerized by the allure of decisions based on ones and zeroes, because that is what we attribute to computer “thought,” and I have naively assumed that a person opposed to unethical biases would be able to detect and reject those unethical determinations. But, as in the example from the quote, if you rely blindly on the technology (as so many of us would, and as many professional and legal scenarios require), you are committed to a set of logic with which you may not actually agree. This chapter pushes the reader to question how biases make their way into technology and how that technology can reinforce systemic societal inequalities.

Chapter Two: Default Discrimination

“The danger with New Jim Code predictions is the way in which self-fulfilling prophecies enact what they predict, giving the allure of accuracy.” (Benjamin, 2019, p. 83)

After establishing why we should be cognizant of bias in technology, the author starts to make sense of how this happens. The quote I highlighted above makes a lot of sense: if we use existing reasoning to predict what is to come, we are likely to find what we set out for, and even more likely to miss data outside the feedback loop we have built for ourselves. An example from the chapter is predictive policing software that forecasts criminal behavior from preexisting arrest data and then sends a police presence to those areas and not to others. This over-patrolling of specific areas inevitably produces increasingly biased outcomes, because the inequitable data it generates lend the predictions a false sense of accuracy. A small simulation of that loop follows below.
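To make the feedback loop concrete, here is a minimal sketch in Python. This is my own illustration, not code or data from the book: the two neighborhoods, the patrol budget, the arrest counts, and the “hot spot” allocation rule are all invented assumptions. It shows how a policy that sends patrols wherever the most arrests have been recorded manufactures the very evidence that confirms its own prediction.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.10          # identical in both neighborhoods (assumed)
arrests = {"A": 60, "B": 40}    # the historical record starts out skewed
N_PATROLS = 100                 # yearly patrol budget (invented)

for year in range(1, 6):
    # "Hot spot" policy: send every patrol to the neighborhood with
    # the most recorded arrests so far.
    hot_spot = max(arrests, key=arrests.get)

    # Patrols can only record crime where they are sent, so the
    # prediction generates its own confirming data.
    new_arrests = sum(random.random() < TRUE_CRIME_RATE
                      for _ in range(N_PATROLS))
    arrests[hot_spot] += new_arrests

    print(f"year {year}: patrolled {hot_spot}, recorded arrests = {arrests}")
```

Even though both neighborhoods have an identical underlying crime rate, neighborhood A’s recorded arrest count grows every year while B’s never changes, so the gap between them only widens. The model is crude, but it captures the “allure of accuracy” Benjamin describes: the data really do match the prediction, because the prediction produced the data.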

How does this inform my current or future work, and how can I avoid default discrimination in technology?

I have a “policy,” or “belief,” against blocking or unfriending people on social media who vocalize even very stark contradictions to my own ethics and moral beliefs. While I do not engage with social media much these days, I have had many inquiries from friends and family as to why I do not block these people, especially when I can be very upset by them. The reason is this: I need to know what I am up against. I need to not walk through the world with rose-colored glasses, believing everyone believes as I do. I need to be challenged by their beliefs, I need to question my own, and I need to always be in a state of growth, both in change and affirmation.

But I had yet to apply this logic, which I believe in so strongly, to artificial intelligence, which can likewise exist in a vacuum, constantly reproducing and perpetuating societal biases. By nature, we believe what we see, but what we see is often up to us. These two chapters have helped me understand that technology is not inherently neutral, and that those who control what it “sees” need to critically examine their data in order to mitigate biased outcomes.

References

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. John Wiley & Sons.
