Ethical AI

AI capabilities have advanced rapidly in recent years, and AI is now common in consumer products and industry alike. Banks use AI to assess loan applications, and we take everyday uses such as predictive text on a smartphone for granted.

As more of our world uses AI, what responsibilities do we have to make sure that it is used ethically? Read on to find out more.

We are a long way from self-aware AI. Although AI systems can learn from data, they have no moral awareness: an application does not know whether it is operating ethically. AI produces results that reflect the biases of its designers and of the data it is given. Like other software, AI suffers from “garbage in, garbage out”.
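
To see how this happens in practice, here is a minimal sketch with entirely synthetic data and hypothetical feature names. A model trained on biased historical loan decisions simply learns to reproduce the bias:

    # Synthetic demonstration: a model trained on biased decisions learns the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    income = rng.normal(50, 15, n)    # same income distribution for both groups
    group = rng.integers(0, 2, n)     # 0 or 1: a protected attribute

    # Biased history: group 1 was approved less often at the same income.
    approved = (income + rng.normal(0, 5, n) - 10 * group) > 45

    model = LogisticRegression().fit(np.column_stack([income, group]), approved)

    # Identical income, different group: the model now penalises group membership.
    test = np.array([[50, 0], [50, 1]])
    print(model.predict_proba(test)[:, 1])   # group 1 gets a lower approval score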

This becomes more complicated when the designers themselves don’t understand exactly how the AI reaches its results, which means the AI may behave in unexpected, effectively unpredictable ways.

Beyond unexpected results, AI can have unintended consequences. While we can anticipate many possible outcomes, we cannot foresee every interaction between a technology and the rest of the world, and despite a developer’s best intentions, almost any technology can be repurposed by a malicious actor. Even so, we should not abandon a technology with clear benefits simply because mitigating its risks takes effort.

Attacking the problem

How can we tackle the problems of unexpected and unwanted outcomes? Some people propose regulation as the solution. However, laws take a long time to enact while technology changes quickly, and legislation is better suited to regulating outcomes than to regulating technical approaches.

It is more practical for computer scientists to develop their own ethical guidelines for each application area and technology. Ethics, the branch of philosophy concerned with right and wrong, is not a new field of study. Can we learn from it?

Medical doctors take the Hippocratic Oath, whose central idea is to do no harm. A doctor must weigh the consequences of each action and cannot be indifferent to a patient’s life when selecting a treatment. Judging actions by their outcomes in this way is called consequentialism.

Most moral and religious traditions instead provide predefined rules to guide decisions, rules that proscribe certain actions outright. This is called deontological ethics.

Developing ethical AI draws on both deontological and consequentialist ethics: the developer must consider both the approach taken and its potential outcomes.

Questions to ask

In the same way, a computer scientist must consider the impact of data science work. Who does the software affect? The scientist should ask questions such as:

  • What is the software for? How does it affect people? Is this a moral thing to do?
  • How could someone misuse the software?
  • What is the worst possible outcome from using the software?
  • Does the software have an unfair bias? (One simple check is sketched after this list.)
  • Does it treat people differently based on race or gender?
  • Has the scientist been objective and selected fair and balanced data?
  • What assumptions did the scientist make in selecting data to learn from? Has she tested these assumptions?
  • Would an outside person think that this software is ethical?
  • How would the public judge these decisions if they appeared on the front page of a newspaper? Would the design decisions be defensible by community standards, or to a reasonable person?
  • What is the role of human decision making in the operation of the AI?

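As a concrete illustration of the bias questions above, here is a minimal sketch of a disparate-impact check. The data and column names (“group”, “approved”) are entirely hypothetical, and the 0.8 threshold is the informal “four-fifths rule” used in US employment law:

    # A minimal disparate-impact check on model decisions.
    # The DataFrame and column names are hypothetical.
    import pandas as pd

    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Approval rate per group.
    rates = decisions.groupby("group")["approved"].mean()

    # Disparate impact ratio: worst-off group's rate versus the best-off group's.
    ratio = rates.min() / rates.max()
    print(rates)
    print(f"disparate impact ratio: {ratio:.2f}")

    # The "four-fifths rule" flags ratios below 0.8 for closer human review.
    if ratio < 0.8:
        print("Warning: decisions may have a disparate impact on one group")
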
Outcomes for AI

In general, society expects the actions of each of its members to have neutral or beneficial outcomes, and it takes corrective action against any person or organisation that causes harm. This expectation logically extends to an organisation’s software, including software built on AI technologies.

This means software should not disadvantage a group based on attributes its members did not choose. An AI should not reject a person of colour for a loan on the basis of skin colour, target civilians in war, secretly profile individuals, or gather private information for sale.

We believe human judgement should be able to override a machine’s decision, so that a person has the final say in any critical matter. A human has learned context, broad experience, and the ability to understand another person’s perspective. Since general AI does not yet exist, it is unrealistic to let a narrow AI algorithm, whether based on clustering, pattern recognition, or some other approach, make critical decisions on its own.
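
One common way to build this principle into software is a human-in-the-loop pipeline, where the AI only flags events and a person decides what to do. The sketch below is a hypothetical outline with made-up names (Alert, is_anomalous), not any particular product’s design:

    # Human-in-the-loop sketch: the model flags, a human operator decides.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        event_id: int
        score: float      # model confidence that something needs attention

    def is_anomalous(event_id: int) -> float:
        """Stand-in for a real model; returns an anomaly score in [0, 1]."""
        return 0.9 if event_id % 7 == 0 else 0.1

    review_queue: list[Alert] = []
    for event_id in range(20):
        score = is_anomalous(event_id)
        if score > 0.5:
            # The AI never acts on its own; it only queues an alert.
            review_queue.append(Alert(event_id, score))

    for alert in review_queue:
        # A human makes the final call on every flagged event.
        answer = input(f"Event {alert.event_id} (score {alert.score:.2f}): respond? [y/n] ")
        if answer.strip().lower() == "y":
            print(f"Operator dispatched a response to event {alert.event_id}")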

How does this affect us at iCetana?

iCetana is AI-powered video analytics software for large camera networks, used to improve security, health, and safety. Video data is sensitive because misusing it can cause significant harm, and if an operator misses an incident because they are overwhelmed by hundreds of video feeds, someone may be hurt.

iCetana analyses movement vectors and does not look at a person’s race, colour, clothing, or gender. It also learns from what it actually sees, which avoids the problem of biased training data sets.
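
To illustrate the general idea (this is not iCetana’s actual implementation), here is a minimal sketch of unsupervised anomaly detection on motion features alone. The input is a hypothetical vector of motion magnitudes per frame region, so nothing about a person’s appearance is ever encoded:

    # Illustrative only: anomaly detection on motion, not appearance.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # "Normal" scenes: low, steady motion across 8 regions of the frame.
    normal = rng.normal(0.2, 0.05, size=(500, 8))

    # Fit an unsupervised model on what the cameras usually see.
    model = IsolationForest(random_state=1).fit(normal)

    # A new frame with a sudden burst of motion in one region.
    frame = np.array([[0.2, 0.2, 0.2, 0.9, 0.2, 0.2, 0.2, 0.2]])
    print(model.predict(frame))   # -1 flags an anomaly for a human to review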

iCetana has helped identify hazardous activities in the workplace; when hazards go unseen, people can be hurt. By reducing the amount of video by 99%, iCetana removes clutter so that operators can focus on what matters.

We believe iCetana should work alongside a security operator, who makes the final judgement on whether to respond to an incident. iCetana is not designed to replace human security operators; it simply lets them focus on incidents and their other work.

We are continually advancing our technology at iCetana, and as it moves ahead we will face new decisions about using AI ethically. We will remain mindful of the impact of our work.