Ready or not, Artificial Intelligence (AI) is increasingly being integrated into our lives.
For readers who are new to AI, it is “intelligence exhibited by machines.” One branch of AI, called machine learning, takes the approach of letting computers, or machines, learn for themselves by giving them access to data: lots of it.
So, as we go about our day surfing the Internet, searching and browsing content and products, the machines quietly learn in the background: how we search, and what we like and dislike. They predict new or different products and experiences we may enjoy, and make recommendations. The more data and interactions we feed them about ourselves as individuals, the better they get at predicting our individual preferences. This is personalization. Once in a while, machine learning systems might even surprise us with selections that we actually enjoy, but would never have thought to choose in the first place.
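To make this concrete, here is a minimal sketch of that learning loop, using entirely hypothetical toy data (the item names, tags, and click history are invented for illustration): the system tallies the topics a user interacts with, then scores unseen items against that profile.

```python
from collections import Counter

# Hypothetical toy data: items a user has clicked, each labelled with topic tags.
clicks = ["running shoes", "trail map", "water bottle", "running shorts"]
tags = {
    "running shoes": {"fitness", "outdoor"},
    "trail map": {"outdoor", "travel"},
    "water bottle": {"fitness"},
    "running shorts": {"fitness"},
    "office chair": {"home"},
    "hiking boots": {"outdoor", "travel"},
}

# "Learn" a simple preference profile: how often each tag appears in the history.
profile = Counter(tag for item in clicks for tag in tags[item])

def score(item):
    """Score an unseen item by how strongly its tags overlap the learned profile."""
    return sum(profile[tag] for tag in tags[item])

# Recommend the best-scoring item the user has not seen yet.
unseen = [item for item in tags if item not in clicks]
recommendation = max(unseen, key=score)
print(recommendation)  # "hiking boots": its outdoor/travel tags match the profile
```

Real recommender systems are far more sophisticated, but the principle is the same: every interaction updates the profile, so the more we click, the sharper the predictions become.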
Beyond personalization, we see a wide range of machine learning use cases: fraud detection in banking systems, risk assessment in self-driving vehicles, disease prediction for early medical intervention, and even policy making, where more accurate predictions can help governments allocate scarce public funds and save money.
Machine learning could impact our lives in many profound ways, but it is not without controversy. No, I’m not talking about scenarios of intelligent robots enslaving humans. I’m referring to potential flaws in how we train machines to “think,” which could embed bias in an algorithm.
The Economist’s article “Machine Learning: Of Prediction and Policy” offers excellent cases in point, raising controversies around applying machine learning to public policy:
“Many American judges are given ‘risk assessments’, generated by software, which predict the likelihood of a person committing another crime. These are used in bail, parole and (most controversially) sentencing decisions. But this year ProPublica, an investigative-journalism group, concluded that in Broward County, Florida, an algorithm wrongly labelled black people as future criminals nearly twice as often as whites.”
In a separate medical application of machine learning, ethics have been called into question:
“Some applications may be thought unethical. Mr Mullainathan and his colleagues show that machine learning can help predict the risk of death. That could, say, help focus hip replacements on those likely to live longest. Some may think that a step too far.”
What’s valuable about discussions like these, in my view, is that they encourage openness as we work to extend the boundaries of machine learning. We need to understand which use cases are suitable for machine learning and which are not, check and challenge the training data to avoid encoding prejudice, and be critical of the learning goals we set for machines. Also important is continuous public dialog, so we can help one another eliminate our individual blind spots, limit potential bias in algorithms, and reap the benefits of machine learning.
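One concrete way to “check and challenge” a deployed model is to audit its error rates across groups, which is essentially what ProPublica did. Here is a minimal sketch with made-up records (the groups, labels, and outcomes are hypothetical, not real data): for each group, we ask how often people who did not reoffend were nonetheless flagged as high risk.

```python
# Hypothetical audit data: (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", True, False),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the share labelled high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
# In this toy data, group A is wrongly flagged at twice the rate of group B.
```

A gap like this does not by itself prove the algorithm is unfair (fairness criteria can conflict with one another), but it is exactly the kind of simple, public check that keeps the dialog grounded in evidence.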
This Saturday, March 25, we will have an opportunity to learn about our future with AI from the world’s smartest minds. At the MITCNC Tech Conference: The Future of AI, technology entrepreneurs and experts across different industries will share their insights into the latest AI developments and applications and engage in a dialog with the audience.
If you are attending this conference, be sure to bring your questions. If you have not yet registered, there is still time. Join us!