Artificial intelligence is only as good as the data it is based on. Unless it takes all factors and all population groups into account, faulty and biased decisions may result. But what about the ethics and principles of Artificial Intelligence in recent applications?
“The field of Artificial Intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” says Kate Crawford, Co-Founder of the AI Now Institute. “But we urgently need more research into the real-world implications of the adoption of AI in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts.” It is for this very reason that the AI Now Institute was launched in late 2017 at New York University. It is the first university research institute dedicated to the social impact of Artificial Intelligence. To this end, it wants to expand the scope of AI research to include experts from fields such as law, healthcare, and the occupational and social sciences. According to Meredith Whittaker, another Co-Founder of AI Now, “AI will require a much wider range of expertise than simply technical training. Just as you wouldn’t trust a judge to build a deep neural network, we should stop assuming that an engineering degree is sufficient to make complex decisions in domains like criminal justice. We need experts at the table from fields like law, healthcare, education, economics, and beyond.”
Safe and just AI requires a much broader spectrum of expertise than mere technical know-how.
AI Systems with Prejudices Are a Reality
“We’re at a major inflection point in the development and implementation of AI systems,” Kate Crawford states. “If not managed properly, these systems could also have far-reaching social consequences that may be hard to foresee and difficult to reverse. We simply can’t afford to wait and see how AI will affect different populations.” With this in mind, the AI Now Institute is looking to develop methods to measure and understand the impacts of AI on society.
It is already apparent today that unsophisticated or biased AI systems are very real and have consequences – as shown, in one instance, by a team of journalists and technicians at ProPublica, a non-profit newsroom for investigative journalism. They tested an algorithm used by courts and law enforcement agencies in the United States to predict reoffending among criminals, and found that it was measurably biased against African Americans. Such prejudice-laden decisions come about when the data that the AI is based on and works with is not neutral. If it reflects social disparities, for instance, the evaluation is also skewed. If, for example, only data on men is used as the basis for an analysis, women may be put at a disadvantage.
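To make the idea of “measurably biased” concrete, here is a minimal sketch in Python. It uses made-up records and hypothetical group labels (not ProPublica's actual data or methodology) and simply compares, per group, how often people who did not reoffend were nevertheless flagged as high risk:

```python
# Illustrative sketch only: comparing false positive rates of a risk score
# across two hypothetical groups, using invented data.
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
    ("group_b", False, False), ("group_b", True,  False),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high_risk, reoffended in records:
    if not reoffended:                  # people who did not reoffend...
        counts[group]["negatives"] += 1
        if predicted_high_risk:         # ...but were still flagged as high risk
            counts[group]["fp"] += 1

for group, c in counts.items():
    print(f"{group}: false positive rate = {c['fp'] / c['negatives']:.2f}")
```

If one group is flagged incorrectly far more often than another, as in this toy example, the scoring system treats the groups unequally even though no one ever told it to.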
It is also dangerous if AI systems have not been taught all the relevant criteria. The University of Pittsburgh Medical Center, for instance, noted that a major risk factor for severe complications was missing from an AI system for the initial assessment of pneumonia patients. And there are many other sensitive areas in which AI systems are currently in use without having been tested and evaluated for bias and inaccuracy.
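The effect of a missing criterion can be illustrated with a small, purely hypothetical sketch: two logistic regression models are fitted on simulated patient data, one with and one without an assumed key risk factor (called chronic_condition here). None of the variable names or numbers relate to the actual Pittsburgh system.

```python
# Illustrative sketch with simulated data: a model cannot account for a risk
# factor it was never given, so it underestimates risk for affected patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 90, n)
chronic_condition = rng.integers(0, 2, n)          # hypothetical key risk factor
# Simulated ground truth: complications driven mainly by the chronic condition.
p = 1 / (1 + np.exp(-(-6 + 0.04 * age + 2.5 * chronic_condition)))
complication = rng.binomial(1, p)

full = LogisticRegression().fit(np.column_stack([age, chronic_condition]), complication)
partial = LogisticRegression().fit(age.reshape(-1, 1), complication)  # factor omitted

# A 45-year-old patient who has the chronic condition:
print("risk seen by full model:   ", full.predict_proba([[45, 1]])[0, 1])
print("risk seen by partial model:", partial.predict_proba([[45]])[0, 1])
```

The model trained without the risk factor assigns this patient roughly the average risk for their age and misses what actually puts them in danger, which is the kind of blind spot the pneumonia case exposed.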
Checks Needed
In its 2017 research report, the AI Now Institute therefore called on all key public institutions to immediately stop using “black-box” AI. “When we talk about the risks involved with AI, there is a tendency to focus on the distant future,” says Meredith Whittaker. “But these systems are already being rolled out in critical institutions. We’re truly worried that the examples uncovered so far are just the tip of the iceberg. It’s imperative that we stop using black-box algorithms in core institutions until we have methods for ensuring basic safety and fairness.”