Why training machine learning algorithms for fairness is difficult

Decisions made outside of our control can significantly affect our ability to pursue our dreams and achieve our goals. Whether it is an employment opportunity, a college application, or a loan for a home or business, machine learning algorithms now drive many of these screening decisions. Arbitrary, faulty, or inconsistent decision making has raised serious concerns about how fairness is understood and incorporated into the AI that pervades our everyday lives. Machine learning analyzes and filters the resumes of job applicants. In the criminal justice system, a statistical score assigned to defendants predicts their propensity to commit future crimes; that score factors into decisions about bail, sentencing, and parole. Statistics has traditionally promised a more reliable basis for decision-making because it can assign weights to the attributes that are relevant to the outcome of interest. Machine learning has helped uncover factors in the data, both current and historical, that humans may not see. Many people, after all, have experienced arbitrary human decision making based on intuition rather than data; at least the algorithms find relevance and connections in the data without any intuition at all.

I’ve written about the advances in image recognition, where the machines learn by example. With regard to fairness, however, there is one thing missing: induction. Learning is more than memorizing what you see. Induction means drawing general rules from specific examples, rules that account not only for past scenarios but also for unseen future ones. Even when future cases are not identical to past ones, we can still recognize that they are similar to past situations. Machines have no induction capabilities to date. Evidence-based decision making in machine learning is only as reliable as the evidence (data) used to teach it. The data needs to consist of well-annotated examples that contain the subtle patterns, with enough diversity of samples to show the many forms an observation can take. Calling a machine learning model “evidence-based” does not ensure that it will lead to fair and reliable decisions. Cultural stereotypes, social group divisions, and demographic inequalities have driven many of those historical examples, and their outcomes almost always reflect past prejudices.

Even well-intentioned applications of machine learning might give rise to objectionable results.

Amazon uses data analysis to determine which neighborhoods qualify for same-day delivery. A 2016 study found that in many U.S. cities, white residents were more than twice as likely as black residents to live in one of those qualifying neighborhoods. Amazon argued that race wasn’t a factor; nonetheless, access to the service exhibited racially disparate rates. Could this contribute to further inequality? What if the algorithm had made the determination based on odd-numbered zip codes instead? What would the outcome have been? The same holds for the recent Apple Card fiasco at Goldman Sachs, which discriminated against women applicants. Why even ask for gender on a credit application in the first place? What would the algorithm produce if the data excluded gender? Would the same people obtain the same credit line as before?
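Amazon’s “race wasn’t a factor” defense is easy to probe with a toy simulation. The sketch below, in which the group split, the zip-code feature, and the approval threshold are all invented for illustration, shows how a screening rule that never sees a protected attribute can still produce disparate approval rates when a correlated proxy such as location remains in the data.

```python
# Toy illustration (all numbers invented): a screening rule that never reads the
# protected attribute can still approve the two groups at very different rates
# when a correlated proxy (here, a made-up zip-code score) stays in the data.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical protected attribute; the "model" below never looks at it.
group_b = rng.random(n) < 0.3

# Residential segregation: group membership shifts the zip-code feature.
zip_score = rng.normal(loc=np.where(group_b, -0.8, 0.3), scale=1.0)

# A "blind" screening rule that only looks at the zip-code feature.
approved = zip_score > 0.0

print(f"approval rate, group A: {approved[~group_b].mean():.1%}")
print(f"approval rate, group B: {approved[group_b].mean():.1%}")
```

The same logic applies to the Apple Card question: dropping gender from the application does not drop the features that correlate with it.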

I believe data scientists should not just faithfully reflect the data they have but also question it when there are signs of discriminatory behavior in their models. Fairness in applications of machine learning involves people, and people collect the training data we use; that data may already reflect demographic disparities, directly or indirectly. A counter-intuitive case of data collection involved a smartphone application called “Street Bump,” a rather innocuous project by the city of Boston to crowdsource data about potholes: the app detected them automatically and sent the data back to city services. You would think this would have nothing to do with ethical or social demographic issues. Yet the data patterns reflected smartphone ownership, which skewed toward wealthier neighborhoods and away from lower-income areas and areas with substantial elderly populations. While the goal was to improve infrastructure for all, the initial results directed repairs to the wealthier neighborhoods while neglecting the poorer communities, where there were more potholes!

We already know that machine learning is an excellent tool for classification. But large datasets of historical data collected over several decades carry embedded biases about race and gender when judged by today’s standards. Self-identification no longer fits neatly into checkboxes the way it did in the ’70s or ’80s. Gender is not a stable category; it has evolved dramatically since the 1990s. For example, many forms today offer at least six choices for gender, including “prefer not to answer.” If our training data only has two options for gender, what about the other four or more? I think preferred gender pronouns are useful today because they help avoid awkward communication. I wonder how many large datasets include not only She/her and He/him but also Ze/hir and They/them? These options are labels, and labels are what we use to generate our “ground truth” training dataset.

Is there a calculus for fairness?

There is hope, however! I have brought up many issues and hopefully have sparked your interest and awareness, so you stay woke with regard to the fairness of machine learning algorithms. Google AI recently released the TensorFlow Constrained Optimization library (TFCO), a framework for training supervised machine learning models under inequality constraints. Think about a bank making loans. Should a model optimize approvals based on consumers’ likelihood to repay, or should it also be required to limit how often it rejects applicants for faulty criteria? TFCO uses a technique called proxy-Lagrangian optimization, which builds on the Lagrange multipliers of multivariable calculus. Lagrange multipliers are the classic strategy for finding the local maxima and minima of a function subject to constraints: you convert the constrained problem into a form where the derivative tests of an unconstrained problem apply, and you get the best of both worlds. At any stationary point that satisfies the constraints, the gradient of the objective is a linear combination of the gradients of the constraints, and the Lagrange multipliers are the coefficients of that linear combination. In practice, this means you solve two problems at once: the decision boundary of the prediction model is adjusted to satisfy the fairness constraints while the impact on the model’s accuracy is kept to a minimum. I hope to see more applications of the TFCO library in algorithm development. It is new and still needs testing, but it is an excellent first step toward ensuring machines don’t inherit human flaws such as confirmation bias and the perpetuation of inequality.
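To make the Lagrangian idea concrete, here is a minimal NumPy sketch, not TFCO’s actual API: the model weights descend on the Lagrangian while the multiplier ascends on it, so training pushes a made-up demographic-parity gap under a chosen slack while still minimizing the log loss. The synthetic data, the 2% slack, the learning rates, and the numerical gradients are all assumptions chosen for brevity.

```python
# Minimal sketch (not the TFCO API): enforce a fairness constraint with a
# Lagrangian. Minimize loss f(w) subject to g(w) <= 0 by descending on w and
# ascending on the multiplier lam of L(w, lam) = f(w) + lam * g(w).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a binary group attribute a (all invented).
n = 2000
X = rng.normal(size=(n, 3))
a = (rng.random(n) < 0.4).astype(float)
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predictions(w):
    return sigmoid(X @ w)

def loss(w):
    # f(w): ordinary log loss of a logistic model.
    p = predictions(w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def constraint(w):
    # g(w): demographic-parity gap minus an allowed slack of 2 percentage points.
    p = predictions(w)
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    return gap - 0.02

def numerical_grad(f, w, eps=1e-5):
    # Central-difference gradient; fine for a 3-parameter toy model.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

w, lam = np.zeros(3), 0.0
lr_w, lr_lam = 0.5, 0.5
for _ in range(500):
    # Descend on the weights over L = f + lam * g ...
    w = w - lr_w * (numerical_grad(loss, w) + lam * numerical_grad(constraint, w))
    # ... and ascend on the multiplier, projected to stay non-negative.
    lam = max(0.0, lam + lr_lam * constraint(w))

print(f"log loss: {loss(w):.3f}  parity gap minus slack: {constraint(w):+.4f}  lambda: {lam:.2f}")
```

TFCO packages this give-and-take, with the proxy-Lagrangian refinements described in the Cotter et al. paper referenced below, so that constraints can be stated as rates, such as a maximum false-positive-rate gap between groups, instead of hand-coding the multiplier updates.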

References:

“Amazon Doesn’t Consider the Race of Its Customers. Should It?” by David Ingold and Spencer Soper
“The Hidden Biases in Big Data” by Kate Crawford, Harvard Business Review
“Two-Player Games for Efficient Non-Convex Constrained Optimization” by Andrew Cotter, Heinrich Jiang, and Karthik Sridharan, arXiv:1804.06500

Photo by Ricardo Gomez Angel on Unsplash