What is AI Bias? Interesting Examples

The digital landscape is continuously evolving and reshaping every domain of society. Nobody questions the digitalization of human activities anymore, or the need to adapt to new and progressive technologies. This already complex environment has become even more challenging with the arrival of advanced Artificial Intelligence (AI) solutions.

Generally speaking, AI is a catalyst of technological revolution, conceptually designed to improve modern life. Its benefits to the world market are clear and proven: it is cost-efficient, it increases productivity, and it accelerates innovation, with almost unlimited applicability across industries, from human resources, real estate, marketing, security, and finance to medicine.

Different algorithms sit at the core of AI. Based on our interests and habits, they make decisions on our behalf and even tell us what we want to see or buy: the social network feeds, adverts, and pop-ups that appear on our screens whenever we are online.

Today, companies and governments implementing AI solutions face another important challenge: AI bias. Professional debates no longer focus on the fear of AI taking human jobs, but on the decisions and choices made by AI platforms.

So what is AI Bias?

AI bias is an anomaly in a model's output caused by prejudiced assumptions. It typically results in some form of discrimination, with real consequences for people's lives. Discrimination is a phenomenon that prevents people from being treated equally based on some of their personal characteristics [TS], and it leads to inequality in the chances of exercising the freedoms and rights guaranteed by constitutions and laws.

It is important to notice that AI does not generate bias by itself, through malicious imperfections or bad intentions. When thinking about how AI bias happens, we should be aware that it stems from different sources, some of which act even before the relevant data is collected. Several reference points in the generation of AI bias can be identified unambiguously.

The real world is objectively imperfect, so trade-offs are always needed. The first thing engineers do when they start modeling a deep-learning platform is decide what they want to gain and what the goal of their AI model is. During this process, market needs or investors motivated by business interests determine the desired output, often without considering fairness or thinking about discrimination. This is human bias, and it significantly influences the generation of AI bias, because humans always determine what the AI model will look like.

A deep-learning algorithm processes input that is contaminated, in most cases entirely unconsciously, by human bias, and produces the expected output. There is nothing wrong with the AI itself, but with the unbalanced input data it was given. Psychologists have identified and classified more than 180 human biases, and each of them can result in AI bias [AM].

Last but not least, AI engineers and scientists are primarily people without disabilities drawn from a fairly homogeneous population: at this moment dominantly men, belonging to a particular race, educated and raised within a specific social environment. That is why it is so challenging for AI researchers to think broadly and take into account the issues and needs of significantly different people [NT].

Another important AI bias generator is the process of data collection, which feeds the deep-learning algorithm. In some cases, the gathered data is incomplete and does not represent reality. If the algorithm is trained with only one, or just a few, of the many possible types of data, it cannot process new data correctly and its output cannot be reliable. There is no optimal representation of the world, so some data types will always be favoured over others. In other cases, the collected data can even reflect existing preconceptions or prejudices [TR]. This usually happens when a deep-learning algorithm is fed inherited or historical data, which opens the door to discrimination, as the sketch below illustrates.
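To make this concrete, here is a minimal sketch, using entirely synthetic data and hypothetical feature names rather than any real system, of how biased historical labels propagate into a model's predictions:

```python
# Minimal sketch (synthetic data, hypothetical features): a classifier
# trained on historically biased hiring labels reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(0, 1, n)          # the attribute we actually care about
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
# Historical labels: hiring favored group A regardless of skill.
hired = ((skill > 0) & (group == 0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership:
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The group-B candidate scores far lower despite equal skill, because the
# model faithfully learned the bias baked into its training data.
```

Nothing in the algorithm is "wrong" here; it optimizes exactly what it was given, which is the point.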
Another very important AI bias generator, positioned at the heart of the AI platform, is the selection of data attributes. Attribute selection directs the algorithm to consider or discard specific attributes of the collected data during processing. Selected attributes or activated criteria could include age, gender, race, physical disability, education level, years of professional experience, creditworthiness, yearly income, or anything else. While the influence of attribute selection on a platform's accuracy is measurable, its impact on AI bias is concealed and complex, as the sketch at the end of this section shows.

AI bias generators are mostly well disguised, which makes the phenomenon quite challenging. The harmful impact of the input data and of choices made during the modeling phase is hidden. It usually becomes apparent much later, when it is almost impossible to detect where the AI bias came from or to fix the problem through reverse engineering. At the same time, many best-practice processes in AI development do not consider AI bias and its influence on anomalies in the output data. Testing of AI platforms is generally carried out by the personnel who developed them, and later by the customer they were developed for. In both cases the same outputs are expected and the same approach to data collection is shared, so the AI bias goes undetected.

AI bias is also introduced by a lack of social context, because different communities have different value systems and perceptions of fairness. One AI model, with its specific data collection procedures and attribute selections, cannot be a generalized, acceptable solution in all scenarios: there is no one-stop shop in AI. We can agree that the essence of the AI bias problem is not technological but social. In the end, deep-learning algorithms are capable of amplifying the bias in their data, so AI models have to be built carefully, taking into account all possible pitfalls [NT].
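Returning to attribute selection: a common misconception is that dropping a sensitive attribute removes the bias. The sketch below, under the same synthetic assumptions as before (the feature names are hypothetical), shows why it does not: a retained proxy attribute that correlates with the dropped one carries the same signal.

```python
# Minimal sketch (synthetic data, hypothetical features): discarding the
# sensitive attribute does not help if a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                  # sensitive attribute, e.g. race
proxy = rng.normal(loc=group * 2.0, scale=1.0) # proxy, e.g. zip-code income
skill = rng.normal(0, 1, n)
# Biased historical outcome, influenced by group membership:
label = ((skill + group * 1.5 + rng.normal(0, 0.5, n)) > 1.0).astype(int)

# Attribute selection: the engineer drops `group` but keeps the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)

# Average predicted scores per group remain far apart, even though
# `group` was never an input feature -- the proxy leaks it.
scores = model.predict_proba(X)[:, 1]
print("group 0:", scores[group == 0].mean())
print("group 1:", scores[group == 1].mean())
```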

Examples of AI Bias

There are many examples of AI bias in the real world that ordinary people face every day. We already know that AI has many benefits and improves our lives daily, but AI bias also exposes us to different kinds of discrimination. Several explicit examples are discussed below.

AI bias and gender discrimination

What if AI results, influenced by bias, suggest that a black person is less capable than a white one, or that men are better programmers than women? Such decisions are suggested by various AI platforms every day, and they originate entirely in human biases [HB] or inherited, bias-contaminated inputs.
Amazon started an AI project in 2014 to automate its recruiting process by reviewing applicants' resumes and rating them against predetermined criteria. By 2015, the company had concluded that the new AI-based recruiting system was not rating candidates fairly: it discriminated against women. This was a consequence of AI bias, because Amazon fed the recruiting model with data from the previous 10 years, and those inputs favored men, who dominated the technology industry. In the analyzed period, men represented more than 60% of Amazon's employees [AM], so the AI recruiting model improperly learned that women candidates were less acceptable than men, and Amazon stopped using the model for recruiting.

Moreover, millions of people use voice assistants like Alexa, Siri, or Cortana every day. It is noticeable that they are all female and designed to serve. There are also male voice assistants, but they are dedicated to more consequential decisions, like IBM Watson, the AI platform for business. Furthermore, a Google image search for "CEO" returns mostly men, while personal assistants are presented mostly as female. These obvious examples of gender inequality shape young generations and their minds [HB].
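One common way to surface this kind of unfairness after the fact is to compare selection rates between groups. Here is a minimal sketch, using hypothetical screening outcomes rather than Amazon's actual data, of the disparate-impact ratio; in US employment practice, a ratio below 0.8 (the "four-fifths rule") is a common red flag.

```python
# Minimal sketch (hypothetical outcomes): the disparate-impact ratio is the
# selection rate of the disadvantaged group divided by that of the
# advantaged group. Below 0.8 is a conventional warning threshold.
import numpy as np

def disparate_impact(selected: np.ndarray, group: np.ndarray) -> float:
    """Selection rate of group 1 divided by selection rate of group 0."""
    rate0 = selected[group == 0].mean()
    rate1 = selected[group == 1].mean()
    return rate1 / rate0

# Hypothetical screening outcome: 1 = resume passed the automated filter.
selected = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 0])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = men, 1 = women
print(disparate_impact(selected, group))  # 0.4 / 0.8 = 0.5, below 0.8
```

Metrics like this only detect a symptom; they say nothing about which of the bias generators described above caused it.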

AI bias and racial discrimination

AI bias is also a significant cause of racial discrimination. In 2019, it was revealed that US hospitals used an AI algorithm that favored white over black patients. The solution was designed to predict the need for additional medical care for more than 200 million people. Interestingly, race was not a variable considered by the algorithm, but healthcare cost history was [TS], and black patients had lower healthcare costs than white patients for a variety of reasons. It is easy to conclude that income and race are highly correlated metrics.
There are other important examples of AI bias driving racial discrimination, such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS is an AI platform used in US court systems to predict whether a defendant will reoffend. The results offered by the algorithm put white offenders in a significantly more favorable position than black offenders. Another telling example is the "happy family": if you search Google Images for "happy family", you will discover that the majority of the families shown are white.

AI Bias applicability to healthcare – MRI, MSCT, X-ray images

The applicability of AI is very important to healthcare. Researchers and scientists are working hard on algorithms capable of assisting doctors and providing the necessary support and help to patients. The role of AI is especially interesting in medical use cases such as the interpretation of Magnetic Resonance Imaging (MRI), Multi-slice Computed Tomography (MSCT), and X-ray images.
Different medical machines and instruments have different characteristics and produce images with specific tags. Consequently, if you feed a deep-learning algorithm with only one or a few types of data, AI bias will inevitably appear in the output. In practice, the algorithm could latch onto something meaningless while failing to react to significant changes, which could produce very poor results and endanger patients' lives and health. In these scenarios, it is very important to test the algorithm in real-life conditions in order to minimize all types of bias, whether they originate from humans, machines and medical equipment, or the AI itself, and to save people's lives [NT].
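As an illustration, here is a toy sketch, using synthetic "images" rather than real MRI or MSCT data, of this shortcut-learning failure mode: a model trained on scans where a scanner-specific tag correlates perfectly with the diagnosis learns the tag, then degrades on scans from a machine without it.

```python
# Toy sketch (synthetic "images"): a model can learn a scanner tag instead
# of the pathology when the tag correlates with the label in training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, pixels = 2000, 64
X = rng.normal(0, 1, (n, pixels))
y = rng.integers(0, 2, n)
X[:, 10] += y * 0.3   # weak, genuine pathology signal in pixel 10
X[:, 0] = y * 5.0     # scanner tag pixel: perfectly correlated with y

model = LogisticRegression().fit(X, y)

# Deployment: a different scanner with no tag pixel. Accuracy drops
# sharply toward chance because the model leaned on the tag.
X_new = rng.normal(0, 1, (n, pixels))
y_new = rng.integers(0, 2, n)
X_new[:, 10] += y_new * 0.3   # same weak pathology signal, no tag
print("deployment accuracy:", model.score(X_new, y_new))
```

This is exactly why testing only on data from the machines used during development can hide the problem until patients are affected.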

Conclusion

In conclusion, we should agree that AI bias is entirely our responsibility, and that it undermines AI's potential by spreading distrust and producing unreliable results. It is very important to ensure that AI models improve decision making with minimal risk of generating discrimination.

To minimize AI bias, it is necessary to involve heterogeneous communities and diverse professions in order to prevent biases from arising in the first place. Fixing AI bias and removing discrimination from output data is an ongoing and challenging process. Moreover, the establishment of internationally accepted regulatory bodies, capable of introducing standards for verification and testing, is crucial for enabling AI to reach its full potential.

Reference list:

[AM] – Bias in AI: What it is, Types & Examples, How & Tools to fix it (aimultiple.com)

[TS] – Real-life Examples of Discriminating Artificial Intelligence
https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070

[TR] – This is how AI bias really happens—and why it’s so hard to fix
https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

[NT] – AI Bias
https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html

[HB] – How to keep human bias out of AI (Kriti Sharma, YouTube)
