Manage AI Bias Instead of Trying to Eliminate It

To remediate the bias built into AI data, companies can take a three-step approach.


Businesses and governments must face an uncomfortable truth: Artificial intelligence is hopelessly and inherently biased.

    Asking how to prevent such bias is in many ways the wrong question, because AI is a means of learning and generalizing from a set of examples — and all too often, the examples are pulled straight from historical data. Because biases against various groups are embedded in history, those biases will be perpetuated to some degree through AI.

    Traditional and seemingly sensible safeguards do not fix the problem. A model designer could, for example, omit variables that indicate an individual’s gender or race, hoping that any bias that comes from knowing these attributes will be eliminated. But modern algorithms excel at discovering proxies for such information. Try though one might, no amount of data scrubbing can fix this problem entirely. Solving for fairness isn’t just difficult — it’s mathematically impossible.
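To see why scrubbing fails, consider the following minimal sketch. The data, feature names, and correlations are all invented for illustration: a simple "leak probe" model recovers an omitted gender attribute from ordinary-looking features that happen to correlate with it.

```python
# Minimal sketch: even after dropping a sensitive column, the remaining
# features can act as proxies for it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical sensitive attribute we intend to "scrub" from the model.
gender = rng.integers(0, 2, size=n)

# Ordinary-looking features that happen to correlate with the attribute
# (e.g., occupation code, shopping mix, part-time hours).
occupation = gender * 0.8 + rng.normal(0, 0.5, n)
basket_mix = gender * 0.6 + rng.normal(0, 0.7, n)
part_time = (1 - gender) * 0.7 + rng.normal(0, 0.6, n)

X = np.column_stack([occupation, basket_mix, part_time])  # no gender column
X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)

# An auditing model tries to reconstruct the scrubbed attribute from proxies.
leak_probe = LogisticRegression().fit(X_train, y_train)
print(f"Gender recovered from 'neutral' features: {leak_probe.score(X_test, y_test):.0%}")
```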

    Hardly a day goes by without news of yet another example of AI echoing historical prejudices or allowing bias to creep in. Even medical science isn’t immune: In a recent article in The Lancet, researchers showed that AI algorithms that were fed scrupulously anonymized medical imaging data were nevertheless able to identify the race of 93% of patients.

    Business leaders must stop pretending that they can eliminate AI bias — they can’t — and focus instead on remediating it. In our work advising corporate and government clients at Oliver Wyman, we have identified a three-step process that can yield positive results for leaders looking to reduce the chances of AI behaving badly.

    Step 1: Decide on Data and Design

    Since complete fairness is impossible, and many decision-making committees are not yet adequately diverse, choosing the acceptable threshold for fairness — and determining whom to prioritize — is challenging.

    There is no single standard or blueprint for ensuring fairness in AI that works for all companies or all situations. Teams can check whether their algorithms select for equal numbers of people from each protected class, select for the same proportion from each group, or use the same threshold for everyone. All of these approaches are defensible and in common use — but unless equal numbers of each class of people are originally included in the input data, these selection methods are mutually exclusive. The type of “fairness” chosen inevitably requires a trade-off, because the results can’t be fair for everyone.
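The incompatibility is easy to demonstrate. The sketch below uses synthetic scores, made-up group sizes, and arbitrary cutoffs; it applies the three common selection rules to the same candidate pool and shows that they select different numbers and proportions of people from each group.

```python
# Minimal sketch (synthetic scores): three common selection rules applied
# to the same candidate pool rarely agree unless the groups look identical.
import numpy as np

rng = np.random.default_rng(1)
scores_a = rng.normal(0.60, 0.15, 800)   # majority group (hypothetical scores)
scores_b = rng.normal(0.55, 0.15, 200)   # minority group (hypothetical scores)

def summarize(sel_a, sel_b):
    return (f"group A: {sel_a.sum():4d} selected ({sel_a.mean():.0%})   "
            f"group B: {sel_b.sum():4d} selected ({sel_b.mean():.0%})")

# Rule 1: one threshold for everyone.
t = 0.70
print("Same threshold :", summarize(scores_a >= t, scores_b >= t))

# Rule 2: the same proportion selected from each group.
k = 0.20
print("Same proportion:", summarize(scores_a >= np.quantile(scores_a, 1 - k),
                                    scores_b >= np.quantile(scores_b, 1 - k)))

# Rule 3: equal numbers selected from each group.
m = 100
print("Equal numbers  :", summarize(scores_a >= np.sort(scores_a)[-m],
                                    scores_b >= np.sort(scores_b)[-m]))
```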

    The choice of approach, then, is critical. Along with choosing the groups to protect, a company must determine what the most important issue is to mitigate. Is the issue differences in the sizes of groups, or different accuracy rates between the groups? For group sizes, does fairness demand an equal number from each group type or a proportional percentage? For differing accuracy rates, is the data accurately labeled, and, if so, which group needs the most predictive equity? These various choices result in a decision tree, where many aspects — such as ensuring that certain groups are protected — must be aligned and standardized into company policy.

    Missteps remain common. One European software company, for example, recently created voice-processing AI software to reroute sales calls and enjoyed great success — except in situations where callers had regional accents. In this case, fairness could have been checked by creating a more diverse test group and ensuring that the risk of misclassification was the same for different regional groups.

    To ensure that the development and test data sets that shape the algorithms are sufficiently diverse, companies need to check for coverage of different sensitive attributes and also manage the ways in which data has been biased by the selection process. The final algorithm and its fairness tests need to consider the whole population, not just those who made it past the early hurdles. Doing this requires model designers to accept that the data is imperfect.
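One practical starting point is a simple coverage report comparing group shares across the population, development, and test sets. The sketch below uses hypothetical group labels and an invented collection bias; the 1-percentage-point tolerance is likewise only illustrative.

```python
# Minimal sketch (hypothetical group labels): compare each protected group's
# share of the population with its share of the development and test sets.
import pandas as pd

population = pd.Series(["A"] * 7000 + ["B"] * 2500 + ["C"] * 500, name="group")

# Simulated biased collection: group C is half as likely to reach the sample.
weights = population.map({"A": 1.0, "B": 1.0, "C": 0.5})
train = population.sample(frac=0.6, weights=weights, random_state=0)
test = population.drop(train.index)

coverage = pd.DataFrame({
    "population": population.value_counts(normalize=True),
    "train": train.value_counts(normalize=True),
    "test": test.value_counts(normalize=True),
})
coverage["train_gap"] = coverage["train"] - coverage["population"]
print(coverage.round(3))

# Flag groups whose share of the training data drifts from the population
# the model will actually be applied to (tolerance is illustrative).
print(coverage[coverage["train_gap"].abs() > 0.01].round(3))
```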

    Step 2: Check Outputs

    Once a company has a sound data and design approach, it needs to check fairness in output and impact, including intersections and overlaps between data types.

Even when companies have good intentions, there's a danger that an ill-considered approach can do more harm than good. Algorithms are extremely literal and don't deal well with intersectionality, so seemingly neutral algorithms can result in disparate impacts on different groups. If we say a credit product needs to be equally available to men and women, disabled or not, an algorithm's solution could be to select male wheelchair users and only nondisabled women. Equal numbers of men, women, disabled people, and nondisabled people are then selected, yet disabled women can never be selected.
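A quick cross-tabulation makes this failure visible. In the hypothetical selection data below, the marginal quotas for gender and disability are satisfied exactly, while the intersection of the two, disabled women, never appears.

```python
# Minimal sketch (hypothetical selections): marginal quotas are satisfied
# while one intersectional group is never selected at all.
import pandas as pd

selected = pd.DataFrame({
    "gender":   ["M"] * 50 + ["F"] * 50,
    "disabled": [True] * 50 + [False] * 50,  # every selected man is disabled,
})                                           # every selected woman is not

print(selected["gender"].value_counts())    # 50 M, 50 F        -> looks "fair"
print(selected["disabled"].value_counts())  # 50 True, 50 False -> looks "fair"

# The cross-tab exposes the problem: disabled women appear zero times.
print(pd.crosstab(selected["gender"], selected["disabled"]))
```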

One effective strategy is a two-model solution modeled on generative adversarial networks. The original model is paired with a second model that acts as its adversary or auditor, checking whether individuals are treated fairly; trained against each other, the two models converge on a more appropriate and fair solution.
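The sketch below is one loose, minimal interpretation of that two-model idea, not any particular company's implementation: a predictor learns a synthetic task while a second network tries to recover the protected attribute from its scores, and the predictor is rewarded for making that recovery fail. The architecture, penalty weight, and data are all assumptions made for illustration.

```python
# Minimal sketch of a two-model (adversarial) setup on synthetic data:
# a predictor learns the task while an adversary tries to recover the
# protected attribute from its scores; the predictor is rewarded for
# making that recovery fail. Weights and sizes are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4000
protected = torch.randint(0, 2, (n, 1)).float()
x = torch.randn(n, 4) + 0.8 * protected              # features leak the attribute
y = ((x[:, :2].sum(dim=1, keepdim=True) + 0.3 * protected) > 0.5).float()

predictor = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    # 1) Train the adversary to guess the protected attribute from the score.
    opt_a.zero_grad()
    adv_loss = bce(adversary(predictor(x).detach()), protected)
    adv_loss.backward()
    opt_a.step()

    # 2) Train the predictor on the task while pushing the adversary
    #    back toward chance-level guessing.
    opt_p.zero_grad()
    scores = predictor(x)
    loss = bce(scores, y) - 0.5 * bce(adversary(scores), protected)
    loss.backward()
    opt_p.step()

# Quick fairness read-out: positive-decision rate per protected group.
with torch.no_grad():
    approve = (torch.sigmoid(predictor(x)) > 0.5).float()
    for g in (0, 1):
        rate = approve[protected.squeeze() == g].mean().item()
        print(f"approval rate, group {g}: {rate:.0%}")
```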

    This approach has been particularly successful in insurance pricing, where risk pooling has traditionally been used. Today, more advanced techniques, such as adversarial modeling, have moved toward pricing that is more reflective of the individual. One British insurance company, for example, was able to improve its intersectional fairness and reduce the risk of unintentional bias so effectively that it reduced its premiums for 4 out of 5 applicants.

    Step 3: Monitor for Problems

It is important to examine outputs frequently and look for suspicious patterns on an ongoing basis. A model that passes every test can still produce unwanted results once it is deployed on real-world inputs, particularly inputs that drift over time.
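In practice, this monitoring can be as simple as recomputing group-level outcome rates on every new window of decisions and flagging widening gaps. The sketch below uses synthetic decision logs, an invented drift in one group, and an arbitrary 10-percentage-point tolerance.

```python
# Minimal sketch (synthetic decision logs): track per-group approval rates
# month by month and flag periods where the gap exceeds a tolerance.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
logs = pd.DataFrame({
    "month": pd.period_range("2023-01", periods=12, freq="M").repeat(500),
    "group": rng.choice(["A", "B"], size=12 * 500, p=[0.7, 0.3]),
})

# Hypothetical drift: group B's approval rate slips in the final months.
drifting = (logs["month"].dt.month > 8) & (logs["group"] == "B")
logs["approved"] = rng.random(len(logs)) < np.where(drifting, 0.35, 0.50)

rates = logs.pivot_table(index="month", columns="group",
                         values="approved", aggfunc="mean")
rates["gap"] = (rates["A"] - rates["B"]).abs()
print(rates.round(2))
print("Months needing review:", list(rates.index[rates["gap"] > 0.10]))
```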

People have become used to bias, so they seldom spot it. A fully diverse outcome can look surprising, so people may inadvertently reinforce bias when developing AI. In a truly equal world, for example, some committee selections would be all female, just as some are all male.

Similarly, most people expect that rare events simply will not happen. Rare events are indeed improbable and uncommon, but they are not impossible, even though a brain built to simplify and generalize patterns tends to treat them that way. Too often, people object when something rare does happen but fail to notice when it never happens at all, and companies are unlikely even to spot that absence, let alone defend against it. People also intuitively expect rare events to be evenly spaced rather than clustered, so low frequencies escape notice while genuine randomness arouses suspicion.

Predictive factors are derived from this status quo, so they can mislead. Men are more likely to repay a loan if they have a high salary, work in a particular set of professions, and have added a mobile phone number to their registered details. None of these factors has the same predictive relationship for women, yet if men outnumber women in the data set, it is the male predictive factors that will drive a model that is also used to assess women.
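A small simulation illustrates the point. The data below is synthetic and the feature is hypothetical, but fitting the same model on each group separately shows how a pooled model inherits the majority group's relationship.

```python
# Minimal sketch (synthetic applicants): a feature that predicts repayment
# for one group but not the other, and a pooled model that inherits the
# majority group's relationship because that group dominates the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_men, n_women = 9000, 1000
group = np.array([1] * n_men + [0] * n_women)      # 1 = men, 0 = women
phone = rng.integers(0, 2, n_men + n_women)        # "added a mobile number"

# Hypothetical ground truth: the phone signal matters for men only.
p_repay = np.where(group == 1, 0.4 + 0.3 * phone, 0.55)
repaid = rng.random(n_men + n_women) < p_repay

X = phone.reshape(-1, 1)
pooled = LogisticRegression().fit(X, repaid)
men_only = LogisticRegression().fit(X[group == 1], repaid[group == 1])
women_only = LogisticRegression().fit(X[group == 0], repaid[group == 0])

print("pooled coefficient:    ", round(float(pooled.coef_[0][0]), 2))  # ~ male effect
print("men-only coefficient:  ", round(float(men_only.coef_[0][0]), 2))
print("women-only coefficient:", round(float(women_only.coef_[0][0]), 2))  # ~ 0
```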

Similarly, misclassification and error rates will always be highest for the minority groups for which less data is available. Research shows that few clinical trials include enough minority group members to predict their treatment outcomes as accurately as for the college-age White men who typically volunteer to participate. The same bias is found in marketing algorithms, pricing, credit decisions, text readers, and fraud detection systems. Many companies have found that even when people are not directly disadvantaged, underestimating a subgroup carries a commercial cost.
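A per-group error report is the corresponding check. The sketch below simulates predictions with invented accuracy levels simply to show the shape of the report: error rate alongside sample size for each group, so small, poorly served groups stand out.

```python
# Minimal sketch (synthetic labels and predictions): error rate and sample
# size per group, so poorly served minority groups are visible at a glance.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
results = pd.DataFrame({
    "group": ["majority"] * 9000 + ["minority"] * 1000,
    "y_true": rng.integers(0, 2, 10_000),
})

# Hypothetical model quality: 90% accurate on the majority, 78% on the minority.
acc = np.where(results["group"] == "majority", 0.90, 0.78)
correct = rng.random(10_000) < acc
results["y_pred"] = np.where(correct, results["y_true"], 1 - results["y_true"])

report = (results.assign(error=results["y_true"] != results["y_pred"])
                 .groupby("group")["error"]
                 .agg(error_rate="mean", n="count"))
print(report.round(3))
```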

Ongoing monitoring can pay dividends. One global retailer, for example, was able to improve its demand forecasting by adapting to changes in data and correcting the historical bias that affected predictions, even as seasonal interest varied. This increased accuracy allowed the company to improve its supply chains and shorten time to market by roughly 10% once it stopped defining accuracy as the ability to fit historical data.

    If regulation and media scrutiny require companies to show that their AI is fair, they can produce an outcome measure that says it is. But if companies truly want their algorithms to work equitably with a diverse population, they must deliberately compensate for unfairness — or rewrite the laws of mathematics.

    Companies will never root out bias completely. But they can enhance, expand, check, and correct their practices for results that are more fair — and more diverse and equitable.
