Can we trust AI?

AI is becoming increasingly powerful and capable. It is now being used to perform tasks that were once entrusted only to people, such as driving cars or diagnosing breast cancer. Can we trust AI in critical decision making?

To answer this question, it is important to understand what AI can and can't do. In general, AI is a system that learns to make decisions from previous examples rather than having the rules programmed in. For example, if you want an AI to recognise cats, you give it a large number of labelled pictures of cats and it gradually learns from these.
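
To make "learning from labelled examples" concrete, here is a minimal sketch using scikit-learn. The numeric feature vectors are made-up stand-ins for real cat pictures; a real system would use far more examples and learned image features.

```python
# A minimal sketch of "learning from labelled examples" using scikit-learn.
# The feature vectors below are stand-ins for real image data.
from sklearn.linear_model import LogisticRegression

# Each row represents one (made-up) picture; 1 = cat, 0 = not a cat.
X_train = [
    [0.9, 0.2, 0.7],  # cat
    [0.8, 0.3, 0.6],  # cat
    [0.1, 0.9, 0.2],  # not a cat
    [0.2, 0.8, 0.1],  # not a cat
]
y_train = [1, 1, 0, 0]

# The model is never told "what a cat is"; it only fits patterns in the examples.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new, unseen picture is classified using those learned patterns.
print(model.predict([[0.85, 0.25, 0.65]]))  # -> [1], i.e. "cat"
```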

How AI works

AI doesn’t have human-level intelligence. AI finds mathematical patterns in historical data. It learns completely differently from people, without the benefit of broad context or any other knowledge. Results depend on having a broad range of training data. There’s one famous (possibly anecdotal) example where scientists trained an AI to recognise types of tanks. It learned quickly and achieved high accuracy on the training and validation data sets it was given. But once the scientists tested the AI in real-world situations, it failed completely. It turned out that all the photos of one type of tank were taken on a sunny day, while the photos of the other type were taken on an overcast day. Instead of identifying tanks, the AI had simply learned the difference in lighting levels.
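
The tank story is easy to reproduce with synthetic data. In the sketch below (an illustration, not the original experiment), the label is perfectly correlated with image brightness in the training set, so the model learns brightness rather than anything about tanks.

```python
# A toy illustration of the tank story: the label is perfectly correlated with
# brightness in the training data, so the model learns brightness, not tanks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Tank type A" photos are sunny (bright pixels); "tank type B" photos are overcast (dark pixels).
sunny_a = rng.normal(0.8, 0.05, size=(50, 16))     # bright images, label 0
overcast_b = rng.normal(0.3, 0.05, size=(50, 16))  # dark images, label 1
X = np.vstack([sunny_a, overcast_b])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)
print("Training accuracy:", model.score(X, y))  # looks perfect

# In the field the lighting no longer matches the labels: a type-A tank on an
# overcast day is classified as type B, because the model only learned brightness.
type_a_on_overcast_day = rng.normal(0.3, 0.05, size=(1, 16))
print("Prediction:", model.predict(type_a_on_overcast_day))  # -> [1], wrong
```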

AI in the real world

Many AI systems have a very narrow focus. For example, face recognition is an AI feature that is often offered in commercial software, because there are some fairly mature and well-tested algorithms available. This technology works best in controlled situations like passport control at an airport, where cameras are at face level and the system matches against pre-registered faces. In most other security applications, cameras are mounted high and have neither a clear view of the face nor access to registered faces. And if two strangers are fighting outside, face recognition won’t identify them or even detect that they are fighting. AI is useful in a narrow context rather than a broad one.
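
As a rough illustration of that controlled "passport control" scenario, here is a sketch using the open-source face_recognition library. The file names are hypothetical, and this is not a description of any particular commercial product.

```python
# An illustrative sketch of the controlled, passport-control scenario using the
# open-source face_recognition library. File names are hypothetical.
import face_recognition

# A pre-registered face, captured face-on under good lighting.
registered_image = face_recognition.load_image_file("registered_passport_photo.jpg")
registered_encoding = face_recognition.face_encodings(registered_image)[0]

# The live camera capture at the gate, also at face level.
live_image = face_recognition.load_image_file("gate_camera_capture.jpg")
live_encodings = face_recognition.face_encodings(live_image)

if live_encodings:
    match = face_recognition.compare_faces([registered_encoding], live_encodings[0])[0]
    print("Match" if match else "No match")
else:
    # With a high-mounted camera and no clear frontal view, this branch is common:
    # the system fails to find a usable face, let alone detect a fight.
    print("No face found")
```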

Some autonomous vehicles have crashed, killing the occupants. There have been two cases where the driver had engaged Tesla’s Autopilot and the vehicle crashed because the driver relied too heavily on the system. Autopilot is not meant to be a fully autonomous system, so it still needs oversight. Because it handles freeway driving very well, the driver can become complacent, disengage and put too much trust in the system.

AI for video

Some video surveillance solutions claim to automatically detect certain behaviours, such as someone carrying a weapon or fighting. These are usually based on deep learning approaches that learn from labelled video. If the system can match the current scene with one it has learned, it may identify a fight. But if the fight looks different from what it has learned, the system may detect nothing at all. It is largely limited to the patterns present in its training data.
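
The limitation is easy to show with a toy classifier. In the sketch below, the feature vectors are made-up stand-ins for learned video features: the model confidently labels a genuinely violent scene as "normal" simply because it doesn't resemble the fights it was trained on.

```python
# A toy sketch of the closed-set limitation: a classifier trained on labelled
# clips only recognises behaviour that resembles its training examples.
from sklearn.neighbors import KNeighborsClassifier

X_train = [
    [0.9, 0.1],   # "fight" clips of one particular kind
    [0.85, 0.15],
    [0.1, 0.1],   # "normal" clips
    [0.15, 0.05],
]
y_train = ["fight", "fight", "normal", "normal"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A genuinely violent scene that looks nothing like the training fights
# lands closer to the "normal" examples and is missed entirely.
novel_fight = [[0.2, 0.9]]
print(model.predict(novel_fight))  # -> ['normal']
```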

iCetana takes a different approach by learning the difference between normal and abnormal movement. It then relies on human judgement to determine whether the abnormal movement is important or not. At the current level of AI technology, no practical solution can provide the broad context and life experience that an operator brings. iCetana focuses on filtering out irrelevant camera feeds so that the volume of video does not overwhelm the operator. This is an appropriate and safe use of AI: it does what AI is good at (processing a lot of data) while leaving critical decisions to a person.
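
To show the general idea of normal-versus-abnormal detection (this is an illustrative sketch, not iCetana's actual algorithm), one can fit a model on routine motion only and surface anything unusual for an operator to judge:

```python
# An illustrative anomaly-detection sketch (not iCetana's actual algorithm):
# learn what "normal" motion looks like, then flag anything unusual for a
# human operator to judge. Uses scikit-learn's IsolationForest on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Motion features extracted from hours of routine footage (synthetic here).
normal_motion = rng.normal(loc=0.5, scale=0.1, size=(500, 4))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_motion)

# New footage: mostly routine, plus one very unusual burst of movement.
new_frames = np.vstack([rng.normal(0.5, 0.1, size=(5, 4)), [[3.0, 2.5, 2.8, 3.1]]])
flags = detector.predict(new_frames)  # +1 = looks normal, -1 = abnormal

for i, flag in enumerate(flags):
    if flag == -1:
        # The system only flags the clip; deciding whether it matters is
        # left to the operator's judgement.
        print(f"Frame {i}: abnormal movement, show to operator")
```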

Conclusion

In conclusion, AI is not at the point where we can trust it to operate without human oversight. The “thought” processes used by AI are very different from how a person assesses a situation. A person draws on broad context and wide experience; an AI does not. We can’t overlook the core differences between an AI and a human.

It is best to use AI alongside someone who can verify the final action rather than letting it act autonomously. This is similar to an aircraft auto-landing system: it can help the pilot, but she must still stay alert and be ready to step in if anything unusual happens.

iCetana believes that this is the right approach to AI. In our view, it is best to use AI to enhance human judgement, not replace it. We don’t want a situation where a cancer specialist is overruled and a robot automatically irradiates a patient based on its own judgement of a tumour. In a security setting, we don’t want an AI ignoring someone being stabbed because the scene doesn’t fit its profile of a fight, when a security operator would recognise it and act immediately. At iCetana, our focus is on using AI for what computers are good at – quickly processing large amounts of data – while helping people do what they are best at – using judgement to interpret a situation.