If AI is here, it should do the right thing

Artificial Intelligence (AI) has long been one of the most fascinating elements of sci-fi movies, where it is often portrayed as a normal part of society in the form of physical robots, smart computers or pure software.

We are now witnessing sci-fi technology moving from the big screen into our reality. AI applications reminiscent of Minority Report are becoming part of our day-to-day lives.

In the last few years, AI systems have taken on important decision-making roles. For example, surveillance systems use facial recognition to identify criminals; criminal courts use AI to predict the likelihood of future crimes during sentencing; and recruiting agencies use AI to find the best candidates for their advertised positions. It is remarkable how AI systems have become part of modern life and how much of it they now influence. With AI shaping how the world operates, it is important that the decisions these systems make are fair and ethical.

Sources of bias in AI

Research has shown that the same AI systems influencing our lives carry social biases inherited from their developers' own experiences and beliefs. AI technology is moving so fast that these biases are hard to fix. AI systems were not intentionally built with bias, but, as with any new technology, they reflect the bias of their creators. To address bias in AI, we first need to understand the bias of its creators, or, in other words, our own natural bias. We are all shaped by cultural, familial and personal beliefs. When we transfer our knowledge to someone, we transfer our bias as well, and the other person can absorb our thoughts and beliefs. Societal bias is a persistent problem that has hampered humanity since the dawn of civilisation, leading to racism, sexism and other divisions in society.

Machine learning, the approach behind most modern AI systems, works the same way. These systems are built from data and generally require huge training datasets to achieve acceptable performance. A specialist collects and labels the training data, so the data ends up carrying that specialist's bias, and the AI system learns the same bias.
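As a hypothetical illustration of this mechanism, the short Python sketch below trains a simple classifier on labels produced by a biased annotator. Everything here is invented for illustration: the "behaviour" and "group" features and the annotator's labelling rule are assumptions, not data from any real system. The point is only that the model picks up a non-zero weight for a feature that should be irrelevant.

```python
# Minimal sketch: a classifier trained on biased labels reproduces the bias.
# All features, numbers and the labelling rule are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features: one genuinely relevant ("behaviour score") and one that
# should be irrelevant to the decision ("group membership").
behaviour = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # e.g. a demographic attribute

# A biased annotator labels the examples: the true signal is behaviour,
# but the annotator also penalises group == 1.
labels = (behaviour + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Train naively on both features, as an unexamined pipeline would.
X = np.column_stack([behaviour, group])
model = LogisticRegression().fit(X, labels)

# The learned weight on "group" is far from zero: the model has absorbed
# the annotator's bias along with the genuine signal.
print("weight on behaviour:", model.coef_[0][0])
print("weight on group:    ", model.coef_[0][1])
```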

Bias examples

This bias can be observed in several recent AI systems, and it has negatively affected society. In fact, systems from major technology companies have demonstrated both skin-colour and gender biases that pose a grave threat to civil rights. Infamous examples include a system that labelled African-Americans as gorillas, and sentencing tools that reached biased conclusions against African-Americans whilst underestimating future crimes by Caucasians. There is a growing list of such examples, and collectively they raise a more fundamental question: how can we build AI systems that make fair decisions and help society by removing the barriers of exclusion?

Let’s look closer at AI systems based on visual information (facial recognition, AI-based surveillance systems, etc.). Visual appearance, such as clothes, beards, skin colour and gender, is a common trigger for bias in these systems, and in surveillance that bias can lead to incorrect decisions which affect civil rights. For such systems, what information should matter: an African-American man walking through an airport to catch a flight, or a Caucasian man loitering with suspicious behaviour? Which is important from a security perspective? Sadly, most current AI-based surveillance systems will factor skin colour into the decision and are prone to classify the African-American man as suspicious, even though, to a human observer, it is the Caucasian man whose behaviour is suspicious.

An unbiased approach

For the sake of fair judgment, the iCetana AI algorithm disregards appearance and considers only suspicious or abnormal events triggered by actions. This is how AI-based systems should work, and this is what iCetana believes: action matters more than appearance, and an AI system can learn by itself what is normal and flag what is abnormal. Developing an unbiased system is possible, and fair decision-making is feasible. This is where iCetana’s system shines: it understands the environment and highlights only the abnormal events.

iCetana built its AI algorithm in a way that does not require training datasets provided by users. Instead, the system looks for unusual changes in the data pattern. Because it does not consider appearance, it avoids this source of bias and the threat it poses to civil rights.
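As a rough illustration of this general idea, the Python sketch below flags points in an invented stream of "motion" values that deviate strongly from a learned baseline, without using any appearance information. This is a toy example of appearance-free change detection under assumed data; it is not iCetana's actual algorithm, and all values and thresholds are made up.

```python
# Toy sketch of appearance-free anomaly detection: learn what "normal"
# activity looks like from the data itself, then flag strong deviations.
# Illustrative only; all values and thresholds are invented.
import numpy as np

rng = np.random.default_rng(1)

# Pretend each value is an aggregate "amount of motion" per video frame;
# no appearance information (skin colour, clothing, gender) is used.
normal_activity = rng.normal(loc=10.0, scale=1.0, size=500)
unusual_event = np.array([25.0, 27.0, 26.0])  # a burst of abnormal motion
stream = np.concatenate([normal_activity, unusual_event])

# Learn "normal" from an initial window, then watch for deviations.
baseline = stream[:300]
mean, std = baseline.mean(), baseline.std()

for t, value in enumerate(stream[300:], start=300):
    if abs(value - mean) > 4 * std:  # unusually far from the learned normal
        print(f"frame {t}: abnormal activity detected (value={value:.1f})")
```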

Given its powerful AI core, iCetana has been applied to several use cases beyond surveillance. For example, manufacturers use iCetana to monitor operational processes. Our system has helped organisations improve the security of their environments, reduce risk and, at the same time, increase return on investment.

We are living in exciting times, with AI giving rise to many new capabilities. It is vital that we do the right thing and make these AI systems fair for all.

To find out more about iCetana contact us on info@icetana.com or call our head office on +61 8 6282 2811.

References:

https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/

https://www.fastcompany.com/40536485/now-is-the-time-to-act-to-stop-bias-in-ai

https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/

https://www.mckinsey.com/business-functions/risk/our-insights/controlling-machine-learning-algorithms-and-their-biases