In the past few years, there has been an explosion of interest in artificial intelligence (AI). But along with all the excitement comes a healthy dose of skepticism and wariness. After all, AI is still in its infancy, and we don’t yet know all the ways it will impact our lives.
One of the major challenges facing AI is what’s known as the “algorithmic bias” or “machine learning bias” problem. This happens when algorithms are trained on data that is biased in some way. For example, if you train a facial recognition algorithm on a dataset of mostly white faces, it will be more likely to misidentify people of color.
This problem is compounded by the fact that most datasets are not representative of the entire population. They tend to be skewed toward certain groups (such as white males) because those groups are more likely to generate data (for example, through online activity). As a result, machine learning can inadvertently amplify existing biases.
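The mechanism behind this is easy to demonstrate with a toy simulation. The sketch below (all names and numbers are hypothetical, invented for illustration) trains a simple one-threshold classifier on data where group B is underrepresented and its feature scores are shifted relative to group A. The threshold that minimizes overall training error ends up tuned to group A, so accuracy on group B suffers, even though the classifier itself has no notion of groups at all:

```python
import random

random.seed(0)

def sample(group, label):
    # Toy feature: both groups share the same labels, but group B's
    # scores are shifted, so a single global threshold can't fit both.
    center = (0.0 if label == 0 else 1.0) + (0.5 if group == "B" else 0.0)
    return center + random.gauss(0, 0.3)

def make_data(n_a, n_b):
    data = []
    for group, n in (("A", n_a), ("B", n_b)):
        for _ in range(n):
            label = random.randint(0, 1)
            data.append((group, sample(group, label), label))
    return data

def best_threshold(data):
    # Pick the threshold that minimizes *overall* training error --
    # the majority group dominates this choice.
    def err(t):
        return sum((x > t) != bool(y) for _, x, y in data)
    return min((x for _, x, _ in data), key=err)

train = make_data(900, 100)   # group B is only 10% of the training data
t = best_threshold(train)

test = make_data(1000, 1000)  # but the test set is balanced
for g in ("A", "B"):
    pts = [(x, y) for grp, x, y in test if grp == g]
    acc = sum((x > t) == bool(y) for x, y in pts) / len(pts)
    print(f"group {g}: accuracy {acc:.2f}")
```

Running this shows noticeably lower accuracy for group B: the classifier simply learned the statistics of the group it saw most.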
Another challenge for AI is what’s known as “overfitting.” This happens when an algorithm has been trained on too few examples (or given too much capacity relative to the data) and consequently doesn’t generalize well to new data. In effect, the model memorizes its training examples, noise included, instead of learning the underlying pattern.
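Overfitting is easy to see in a small experiment. The sketch below (a hypothetical example, assuming NumPy is available) fits both a straight line and a degree-7 polynomial to eight noisy points drawn from a linear signal. The high-degree polynomial can pass through every training point, so its training error is near zero, but it chases the noise and does far worse on fresh data:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_points(n):
    # True signal is a straight line (y = 2x) plus Gaussian noise.
    x = rng.uniform(-1, 1, n)
    y = 2 * x + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = make_points(8)    # very few training examples
x_test, y_test = make_points(200)    # held-out data from the same signal

def mse(coeffs, x, y):
    # Mean squared error of a fitted polynomial on (x, y).
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree}: train MSE {mse(coeffs, x_train, y_train):.4f}, "
          f"test MSE {mse(coeffs, x_test, y_test):.4f}")
```

The degree-7 fit interpolates the training set almost exactly yet has the larger test error: the textbook signature of overfitting. With more training examples, or a model matched to the complexity of the signal, the gap closes.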