Dave Gershgorn reports: In less than five years, a 2012 academic breakthrough in artificial intelligence evolved into the technology responsible for making healthcare decisions, deciding whether prisoners should go free, and determining what we see on the internet.
Machine learning is beginning to invisibly touch nearly every aspect of our lives; its ability to automate decision-making challenges the future roles of experts and unskilled laborers alike. Hospitals might need fewer doctors thanks to automated treatment planning, and truck drivers might no longer be needed by 2030.
But it’s not just about jobs. Serious questions are being raised about whether the decisions made by AI can be trusted. Research suggests that these algorithms are easily biased by the data from which they learn, meaning societal biases are reinforced and magnified in the code. That could mean certain job applicants get excluded from consideration when AI hiring software is used to scan resumes. What’s more, the decision-making process of these algorithms is so complex that AI researchers can’t definitively say why one decision was made over another. And while that may be disconcerting to laypeople, there’s an industry debate over how valuable knowing those internal mechanisms really is, meaning research may well forge ahead on the understanding that we simply don’t need to understand AI.
Until this year, these questions typically came from academics and researchers skeptical of the breakneck pace at which Silicon Valley was implementing AI. But 2017 brought new organizations, spanning big tech companies, academia, and governments, dedicated to understanding the societal impacts of artificial intelligence.
“The reason is simple—AI has moved from research to reality, from the realm of science fiction to the reality of everyday use,” Oren Etzioni, executive director of the Allen Institute for AI, tells Quartz. [Continue reading…]