AI systems learn by having humans teach them what is correct. That is how facial recognition software works: humans need to teach it how to recognize faces, and also teach it not to be biased. However, this process can be slow, and humans make errors of their own, which means that problems can sometimes slip past.
MIT’s CSAIL thinks it might have an answer in the form of automatically de-biasing face detection AI. Its researchers have created an algorithm that can scan a data set, understand what the set’s biases are, and then resample it to ensure better representation regardless of skin color. Such bias is a real problem: a recent report described how Amazon’s face recognition struggles with identifying darker skin.
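To illustrate the general idea of resampling a skewed data set (this is only a minimal sketch of category-balanced downsampling, not MIT’s actual algorithm, which learns the biases automatically rather than relying on labels; the function and parameter names here are hypothetical):

```python
import random
from collections import defaultdict

def resample_balanced(samples, label_fn, seed=0):
    """Resample a dataset so every category appears equally often.

    `samples` is a list of examples; `label_fn` maps an example to its
    category (e.g. an annotated demographic group). Over-represented
    categories are randomly downsampled to the size of the rarest one.
    """
    rng = random.Random(seed)

    # Group examples by category.
    groups = defaultdict(list)
    for s in samples:
        groups[label_fn(s)].append(s)

    # Every category is cut down to the size of the smallest group.
    target = min(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, target))

    rng.shuffle(balanced)
    return balanced
```

A training pipeline would call this once per epoch (or once up front) so the model sees each group at the same rate, instead of learning mostly from whichever group dominates the raw data.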
MIT is not alone either. Several years ago, some accused Apple’s Face ID of being “racist” after a Chinese woman’s phone was unlocked by her son. It is possible that her son closely resembled her, which Apple admits is one of the situations in which Face ID can fail, but it does highlight how the system is still far from perfect.
So far, MIT’s system is said to be capable of reducing what the researchers call “categorical bias” by 60% without affecting precision. That leaves some bias in place, but it is still a significant improvement, and it should help speed up the collection of larger amounts of data.