Uncovering the role of technology in greater diversity and inclusion
Most of us don’t spend much time thinking about how ingrained technology is in our decision-making. But the truth is, it helps us in almost every facet of life. Technology helps humans make decisions in the medical field, in self-driving vehicles, facial recognition software, chatbots, and recruitment processes, and in courts to determine the probability of a person recommitting a crime, to name just a few. Sometimes, we let technology make the decision for us entirely, dangerously assuming that it’s unbiased and more objective than human judgment.
Unfortunately, technology is only as inclusive as the people who create it. History in itself is biased because only some people’s perspectives were recorded or included. And history has greatly influenced modern algorithms, because we’ve generated massive data sets that consist of decades of information built on exclusion and discrimination. When modern algorithms rely on these databases to make automated decisions, they engage in algorithmic redlining — reproducing, reinforcing, and perpetuating pre-existing segregation (Allen, 2019).
But does it have to be that way? Is it possible to use technology to advance our diversity and inclusion efforts? Or is it simply impossible to create fair and inclusive algorithms?
An introduction to algorithmic bias
It makes sense that AI can be either as discriminatory or as inclusive as the people designing and developing it. But unless you’re someone who is directly affected by non-inclusive technology, it might not seem obvious to you. For someone like Joy Buolamwini, it was impossible to ignore.
MIT grad student Joy Buolamwini uses art and research to illuminate the social implications of artificial intelligence. Her work has led her to speak about the need for algorithmic justice at the World Economic Forum and the United Nations. She now serves on the Global Tech Panel to advise world leaders and technology executives on ways to reduce the harms of AI. Her journey started when she was working with facial analysis software that couldn’t detect her face unless she put on a white mask. Why? Because the algorithm was biased. The people who built the application didn’t train it on a diverse data set, so it never learned to recognize a diverse range of skin tones and facial structures.
How do algorithms become unfair?
Let’s make one thing clear: algorithms themselves aren’t inherently biased. They’re just mathematical functions, lists of instructions designed to accomplish a task or solve a problem. It’s the data used to train a machine learning model, and the people who train it, that introduce bias.
Data can be biased for different reasons. It could be due to:
- Lack of minority representation.
- Model features that are associated with race/gender. For example, due to the history of racial segregation in South Africa, where you live is very predictive of your race.
- The data itself reflects historical injustice, which, through training, can be captured in models (O’Sullivan, 2021).
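To make the second bullet concrete, here’s a minimal, purely illustrative Python sketch. All zip codes, groups, and records are invented. The point: a model that never sees the protected attribute can still reproduce a historical disparity through a proxy feature like location.

```python
from collections import defaultdict

# Hypothetical historical lending records: (zip_code, group, was_approved).
# As with residential segregation, zip code correlates with group membership.
history = [
    ("10001", "A", True),  ("10001", "A", True),  ("10001", "A", True),
    ("10001", "A", False), ("20002", "B", False), ("20002", "B", False),
    ("20002", "B", False), ("20002", "B", True),
]

# "Train" the simplest possible model: approve if the historical approval
# rate for that zip code exceeds 50%. Note the group column is never used.
approved = defaultdict(int)
total = defaultdict(int)
for zip_code, _group, ok in history:
    total[zip_code] += 1
    approved[zip_code] += ok

def model(zip_code):
    return approved[zip_code] / total[zip_code] > 0.5

# Audit: identical applications, different zip codes.
print(model("10001"))  # True  — group A's predominant zip code
print(model("20002"))  # False — group B's: the proxy reproduced the disparity
```

Dropping the sensitive column didn’t help: the zip code encodes group membership, so the learned rule discriminates anyway.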
Algorithmic bias is a systematic error (a mistake that’s not caused by chance) that causes unfair, inaccurate, or unethical outcomes for certain individuals or groups of people. Left unchecked, it can repeat and amplify inequality and discrimination and undermine our diversity and inclusion efforts. Thankfully, ethical practices are slowly emerging for developing more diverse and inclusive technology.
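One way to surface that kind of systematic error is to audit a model’s outcomes by group. The sketch below isn’t any particular library’s API; it simply compares positive-outcome rates across groups and applies the “four-fifths rule” from US employment guidelines as a rough, admittedly crude, yardstick. The data is made up.

```python
def selection_rates(predictions):
    """predictions: list of (group, got_positive_outcome) pairs."""
    totals, positives = {}, {}
    for group, positive in predictions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + positive
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """True if every group's rate is at least 80% of the highest rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Toy predictions: group A gets the positive outcome twice as often.
preds = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(preds)    # {'A': 0.667, 'B': 0.333}
print(passes_four_fifths(rates))  # False: group B falls below the threshold
```

A failed check like this doesn’t prove discrimination on its own, but it’s exactly the kind of repeatable signal that keeps a biased model from going unchecked.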
Want to know more about algorithmic bias? There’s a new documentary out called “Coded Bias” that features the work of Joy Buolamwini, and it’s a must-watch for anyone who uses AI technology in their daily lives.
Is inclusive technology possible?
Put simply, technology must play a role in our diversity and inclusion efforts. Why? Because the average person spends 3 hours and 15 minutes a day looking at a screen. Everything we read, watch, listen to, and interact with in the digital world influences the way we perceive, think, work, dress, parent, talk, and move in the physical world. There’s an incredible incentive there to make digital spaces inclusive and accessible for everyone. A good starting point might be to stem the spread of biased or inaccurate information (and data).
Inclusive technology can only come from companies that cultivate an inclusive workplace culture. The good news is that “the innovative nature of tech companies allows them to push the limits of what organizations can do in terms of diversity and inclusion.” (Frost, Alidina, 2019).
Developing inclusive technology requires us to:
- Determine a clear definition of what fair and inclusive technology is.
- Seek diverse data sets. If the available data sets you’re using to develop your product aren’t diverse enough, consider involving external UX research in the development process.
- Create a bias impact statement. A bias impact statement is a self-regulatory practice that can help prevent potential biases. It’s a “brainstorm of a core set of initial assumptions about the algorithm’s purpose prior to its development and execution.” (Lee, Resnick, Barton, 2019)
- Make sure you can accurately interpret and explain your machine learning model. Not only does this help you translate the value and accuracy of your findings to executives, it also helps you circumvent biased models. Interpretable Machine Learning (Molnar, 2021) is a great place to learn how interpretability works as a debugging tool for detecting bias in machine learning models.
- Reference a data science ethics checklist or framework.
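The interpretability point in the list above can be turned into a concrete debugging step. Below is a deterministic toy version of permutation importance: perturb one feature at a time and measure how much the model’s accuracy drops. In a real audit you’d use random shuffles and a real model; here the perturbation is a simple reversal and every name and value is hypothetical. The red flag to look for: a feature known to proxy for race or gender carrying most of the predictive weight.

```python
def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx):
    """Accuracy drop when one feature's values are permuted across rows
    (reversed here, so the sketch stays deterministic)."""
    col = [r[feature_idx] for r in rows][::-1]
    perturbed = [
        r[:feature_idx] + (v,) + r[feature_idx + 1:]
        for r, v in zip(rows, col)
    ]
    return accuracy(model, rows, labels) - accuracy(model, perturbed, labels)

# Hypothetical model that (problematically) leans entirely on feature 0,
# a zip code, and ignores feature 1, a legitimate credit signal.
model = lambda row: row[0] == "10001"
rows = [("10001", 5), ("10001", 3), ("20002", 4), ("20002", 2)]
labels = [True, True, False, False]

drop0 = permutation_importance(model, rows, labels, 0)  # 1.0
drop1 = permutation_importance(model, rows, labels, 1)  # 0.0
# drop0 >> drop1: the zip-code proxy carries all the predictive weight,
# which is exactly the kind of finding a bias review should investigate.
```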
Ultimately, we need to view technology as an enabler, not a savior. The onus is on people and companies to practice and implement the principles of diversity and inclusion. Technology is a useful support system, but not something we should roll out as a “quick fix” for broader problems. It can, however, educate us to make better choices, help us be consistent in our efforts, track our progress, and hold ourselves accountable for our mistakes.
Acrolinx and inclusive language
Inclusive language demonstrates awareness of the vast diversity of people in the world. Using inclusive language offers respect, safety, and belonging to all people, regardless of their personal characteristics.
Acrolinx supports your company at every stage of your diversity and inclusion journey — with inclusive language checking across all your company content. The Acrolinx inclusive language feature reviews your marketing, technical, support, and legal content for various aspects of inclusive language. It provides suggestions that make your content historically and culturally aware, gender neutral, and free from ableist language. Our inclusive language technology is also backed by an explainable, hand-crafted system that elevates transparency and reduces potential bias.
Want to learn more about how Acrolinx can help you roll out inclusive language across your organization? Make sure to download our latest eBook, “Can Technology Support Diversity, Equity, and Inclusion? Choosing the right solution for your enterprise D&I initiative.”
Allen, James A. (2019) The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining, Fordham Urban Law Journal, Vol. 46. Retrieved from: https://ir.lawnet.fordham.edu/ulj/vol46/iss2/1
Frost, Stephen & Alidina, Raafi-Karim (2019) Building an Inclusive Organization: Leveraging the Power of a Diverse Workforce
Lee, Resnick, Barton (2019) Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Retrieved from: https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
Molnar, Christoph (2021) Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, Retrieved from: https://christophm.github.io/interpretable-ml-book/