As more people adopt consumer AI tools like ChatGPT, there’s growing concern about the potential for bias in machine learning (ML). That bias can be more than a source of embarrassment and anger for users; it can also result in legal action. Here’s how to think about the legal implications of bias in ML.


What is machine learning bias?

Machine learning bias describes situations where ML-based data analytics systems show bias against certain groups of people. These biases often reflect common societal prejudices about race, gender, biological sex, age, and culture.

For example, AI art tools like Stable Diffusion or Midjourney may generate images of Black people that draw on offensive imagery related to slavery or racist stereotypes. These incidents are often widely shared on Twitter and other social media, generating controversy for the owners of these tools.

People of color using the PortraitAI art generator complained when the tool returned AI-generated images that looked nothing like them. This happened because the system was based on well-known Renaissance-era paintings, most of which depict white Europeans.


Types of bias in ML

There are three main types of AI bias:

  • Algorithmic bias: The design of AI models contains a bias.
  • Data bias: The algorithms are trained using biased data.
  • Societal bias: The designers of AI systems may have blind spots in their thinking due to societal conditioning, which, in turn, makes it hard for them to recognize algorithmic or data bias.

It’s important to remember that AI is not inherently impartial. AI is built by humans, and its algorithms reflect the way humans think, which may include human biases.


What are the legal implications of bias?

The EU’s General Data Protection Regulation (GDPR) already provides some scope for regulating how AI systems handle data. In the US, regulation is looser, but that may be changing soon. The American Bar Association issued a resolution last year urging the legal profession to address the emerging ethical and legal issues related to machine learning. As more people use AI tools and become aware of potential bias in ML, we are likely to see more legal actions related to their use.

Anti-discrimination law is one area of possible legal action related to bias in ML. Google recently got in trouble for an advertising system that allowed advertisers – including landlords and employers – to discriminate against nonbinary or transgender people. Those running ads across Google or Google-owned YouTube were given the option to exclude people of “unknown gender,” i.e., those who hadn’t identified themselves as male or female. This allowed advertisers to discriminate against people who identify as a gender other than male or female, in breach of federal anti-discrimination laws.

Government regulators are still figuring out the right way to address these types of cases. The AI Now Institute has argued for the regulation of AI in sensitive areas like criminal justice and healthcare. A 2022 paper in the journal “AI and Ethics” urged governments to adopt a legally binding framework to address bias in ML, with mechanisms for flexibility and fast response to unexpected outcomes, whether harmful or beneficial:

“The independent body needs to be able to prosecute a breach once those recommendations become law and impose fines based on a percentage of the company’s turnover. The fine needs to be high enough to act as a deterrent.”

Microsoft president Brad Smith recently said: “We need legislation that will put impartial testing groups like Consumer Reports and their counterparts in a position where they can test facial recognition services for accuracy and unfair bias.”

[Figure: Bias in machine learning. Source: AI and Ethics]

How can AI developers minimize the legal risk of machine learning bias?

As the legal framework around bias in ML develops, there are things AI developers and business owners can do now to reduce their legal exposure.

  1. Be aware of protected classes: AI and ML developers should understand the legally “protected classes” in the United States, which generally include age, race, gender, religion, color, national origin, disability, and ethnicity. Algorithms that may interact with protected classes should be designed with those classes in mind and audited regularly for fairness (see the sketch after this list).
  2. Avoid underrepresentation: A common cause of bias in machine learning algorithms is training data that is missing examples of underrepresented groups. Voice assistants like Siri can have a hard time understanding people with accents for this reason. In one widely reported incident, Google’s photo-tagging algorithms labeled Black people as gorillas. The solution to this type of bias is straightforward: algorithm developers must ensure that the training data includes sufficient examples from underrepresented groups.
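
As a concrete illustration of the kind of fairness audit mentioned in item 1, here is a minimal sketch in Python using pandas. The column names (gender, approved), the 5% representation cutoff, and the 80% disparate-impact rule of thumb are illustrative assumptions, not legal standards; a real audit should be designed with legal and compliance input.

```python
import pandas as pd

# Illustrative thresholds only; real cutoffs should come from legal/compliance review.
MIN_GROUP_SHARE = 0.05        # flag groups that make up less than 5% of the data
DISPARATE_IMPACT_FLOOR = 0.8  # the commonly cited "80% rule" of thumb

def audit_fairness(df: pd.DataFrame, protected_col: str, outcome_col: str) -> pd.DataFrame:
    """Summarize representation and selection rates for each protected group."""
    summary = df.groupby(protected_col).agg(
        n=(outcome_col, "size"),
        selection_rate=(outcome_col, "mean"),  # assumes a 0/1 favorable-outcome column
    )
    summary["share_of_data"] = summary["n"] / len(df)
    # Disparate impact: each group's selection rate relative to the most favored group.
    summary["disparate_impact"] = summary["selection_rate"] / summary["selection_rate"].max()
    summary["underrepresented"] = summary["share_of_data"] < MIN_GROUP_SHARE
    summary["potential_disparity"] = summary["disparate_impact"] < DISPARATE_IMPACT_FLOOR
    return summary

if __name__ == "__main__":
    # Hypothetical data for illustration only.
    df = pd.DataFrame({
        "gender": ["female", "male", "male", "nonbinary", "female", "male"],
        "approved": [1, 1, 0, 0, 1, 1],
    })
    print(audit_fairness(df, protected_col="gender", outcome_col="approved"))
```

A report like this does not prove or rule out discrimination, but running it routinely against training data and model outputs makes gaps in representation and skewed outcomes visible early, before they become a legal problem.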

Conclusions

It is important not only from a legal perspective but also from a business perspective that corporations ensure their machine learning algorithms treat protected classes fairly. Sidespin Group can help ensure that the proper processes are in place for machine learning success.
