Q&A: Should artificial intelligence be legally required to explain itself?

By Matthew Hutson, May 31, 2017, 2:00 PM

Algorithms that detect the threat level of airline passengers might operate without accountability.

As artificial intelligence (AI) becomes more sophisticated, it also becomes more opaque. Machine-learning algorithms can grind through massive amounts of data, generating predictions and making decisions without being able to explain to humans what they are doing. In matters of consequence—from hiring decisions to criminal sentencing—should we require justifications? A commentary published today in Science Robotics discusses regulatory efforts to make AI more transparent, explainable, and accountable.
Science spoke with the article’s primary author, Sandra Wachter, a researcher in data ethics at the University of Oxford in the United Kingdom and the Alan Turing Institute. This interview has been edited for brevity and clarity.

Q: In what areas is transparency needed?

A: An algorithm can do very boring work for you; it’s efficient, it doesn’t get tired, and it can often make better decisions than a human can. But transparency is needed where technologies affect us in significant ways. Algorithms decide whether individuals are legitimate candidates for mortgages, loans, or insurance; they also determine interest rates and premiums. Algorithms make hiring decisions and decide whether applicants can attend universities. St. George’s Hospital Medical School in London developed software for initial screening of applicants back in the 1970s; it was later revealed to show racial and gender discrimination. Judges and the police use algorithms for sentencing, granting parole, and predictive policing. Last year, ProPublica reported that a popular program called COMPAS overestimated the risk that black defendants would reoffend. Robotics and autonomous systems can be used for surgery, care, transport, and criminal justice. We should be entitled to assess the accuracy and the thinking behind these decisions.

Q: How have regulators responded to the need?

A: Regulators around the world are discussing and addressing these issues, but they sometimes must satisfy competing interests. On the one hand, the public sector must ensure that algorithms, AI, and robotics are deployed in safe ways and must guarantee that these systems do not discriminate or otherwise harm individuals. On the other hand, regulation requiring transparency could hamper innovation and research, and it could have an adverse effect on business interests, such as trade secrets. Regulation can cause problems if requirements are not well defined from the outset. It can also be problematic if it calls for something that is technically impossible to implement. Some people in the AI community feel that you can’t always give explanations because not even the developers of the systems actually understand how they work. With AlphaGo, the programmers didn’t know how the algorithm came up with its moves.

Q: Are there differences between how U.S. and European regulators have acted?

A: The U.S. believes in a more soft-touch, self-regulatory approach. Its current policies focus more on educating researchers and on voluntary codes of practice for the private sector. This might be the result of a belief that too much regulation can have a negative effect on research, innovation, and economic growth. The EU is more inclined to create hard laws that are enforceable. The EU General Data Protection Regulation, or GDPR, which will come into force in May 2018, is an excellent example. This framework creates certain transparency rights and safeguards against automated decision-making.
Article 22, for example, grants individuals the right to contest a completely automated decision if it has legal or other significant effects on them. Other articles require data collectors, such as advertisers, to provide people with access to the collectors’ data on them and to inform people about the general functionality of the automated system when decisions are made using that data.

Q: Has enough been made of the fact that human decision-makers are also “black boxes”?

A: Yes, humans often have prejudices that lead to discriminatory decisions, and we often have no way of knowing when and why people are biased. With machine learning we have the potential to make less biased decisions. But algorithms trained with biased data pick up and replicate those biases, and they develop new ones.

Q: Can you give an example?

A: If you’re hiring someone for a management position and you feed your algorithm data from the last 30 years, the data will be skewed, and the projected ideal candidate will be someone male, white, and in his 40s or 50s. I am a woman in my early 30s, so I would be filtered out immediately, even if I’m suitable for that position. And it gets even worse, because sometimes algorithms are used to display job ads, so I wouldn’t even see that the position is available. Other times we have more latent biases. There’s a textbook hypothetical example: people with red cars might receive higher insurance premiums, which is not discriminatory against a protected group but could have unintended consequences. Sports cars are often red, and people who buy sports cars are often macho people who drive more dangerously and have more accidents, so if they have higher insurance premiums, that’s fair. But if red cars are more likely to be damaged in accidents and sold secondhand, then people with less disposable income might be more likely to drive them, too, and they will also receive higher insurance premiums. So we can’t tell just from the data we’re using whether it could have discriminatory effects. But we can develop better tools to flag biases and act against them.
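The kind of flagging tool Wachter alludes to can be quite simple in its basic form. The sketch below is an illustration rather than anything described in the commentary: it compares a model’s positive-decision rates across groups and flags disparate impact using the common four-fifths rule of thumb. The group labels, decisions, and threshold here are hypothetical.

# Minimal sketch of a bias-flagging check (illustrative only).
# It compares decision rates across groups and flags disparate impact
# using the common "four-fifths" rule of thumb; the data are made up.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the
    highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical shortlisting decisions: (applicant group, was shortlisted)
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)
print(selection_rates(decisions))        # {'A': 0.6, 'B': 0.35}
print(flag_disparate_impact(decisions))  # {'B': 0.583...} -> group B is flagged

A check like this only surfaces unequal outcomes; deciding whether a flagged disparity is unjustified, and what to do about it, is exactly the kind of judgment the regulatory debate above is concerned with.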